DeepGHS

AI & ML interests

Computer Vision Technology and Data Collection for Anime Waifu

Recent Activity

ImranzamanML 
posted an update 7 days ago

Llama 4 is here and it's making serious waves!

After diving into the latest benchmark results, it’s clear that Meta’s new Llama 4 lineup (Maverick, Scout, and Behemoth) is no joke.

Here are a few standout highlights🔍:

Llama 4 Maverick hits the sweet spot between cost and performance
- Outperforms GPT-4o in image tasks like ChartQA (90.0 vs 85.7) and DocVQA (94.4 vs 92.8)
- Also beats competitors in MathVista and MMLU Pro, at a fraction of the cost ($0.19–$0.49 vs $4.38 🤯)

Llama 4 Scout is lean, cost-efficient, and surprisingly capable
- Strong performance across image and language tasks (e.g. ChartQA: 88.8, DocVQA: 94.4)
- More affordable than most competitors and still beats out larger models like Gemini 2.0 Flash-Lite

Llama 4 Behemoth is the heavy hitter.
- Tops the charts in LiveCodeBench (49.4), MATH-500 (95.0), and MMLU Pro (82.2)
- Even edges out Claude 3 Sonnet and Gemini 2 Pro in multiple areas

Meta didn’t just show up, they delivered across multimodal, coding, reasoning, and multilingual benchmarks.

And honestly? Seeing this level of performance, especially at lower inference costs, is a big deal for anyone building on LLMs.

Curious to see how these models do in real-world apps next.

#AI #Meta #Llama4 #LLMs #Benchmarking #MachineLearning #OpenSourceAI #GenerativeAI
MrDragonFox 
posted an update 18 days ago
Did a small emotion-classified test dataset for all the TTS tuners out there

MrDragonFox/Elise

~3h total, MIT licensed, single-speaker voice

The dataset is a copy of an existing one; I just added emotional tags across 1200+ samples. Should be good enough to test whether emotional tags stick in your finetune.
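
If you want to poke at it, here's a minimal loading sketch with 🤗 Datasets (the splits and column names are whatever the dataset actually ships with, so trust the printout rather than this snippet):

```python
from datasets import load_dataset

# Single-speaker, ~3h, MIT licensed; emotion tags were added over the original ~1200 samples
ds = load_dataset("MrDragonFox/Elise")
print(ds)  # shows the available splits and their columns

first_split = list(ds.keys())[0]
print(ds[first_split][0])  # inspect one row; look for the inline emotion tags in the transcript
```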
not-lain 
posted an update about 1 month ago
Tonic 
posted an update about 1 month ago
🙋🏻‍♂️Hey there folks,

Did you know that you can use ModernBERT to detect model hallucinations?

Check out the demo: Tonic/hallucination-test

See here for a medical-context demo: MultiTransformer/tonic-discharge-guard

Check out the model from KRLabs: KRLabsOrg/lettucedect-large-modernbert-en-v1

And the library they kindly open-sourced for it: https://github.com/KRLabsOrg/LettuceDetect

👆🏻 If you like this topic, please contribute code upstream 🚀

Tonic 
posted an update about 1 month ago
Powered by KRLabsOrg/lettucedect-large-modernbert-en-v1 from KRLabsOrg.

Detect hallucinations in answers based on context and questions using ModernBERT with 8192-token context support!

### Model Details
- **Model Name**: [lettucedect-large-modernbert-en-v1](KRLabsOrg/lettucedect-large-modernbert-en-v1)
- **Organization**: [KRLabsOrg](KRLabsOrg)
- **Github**: [https://github.com/KRLabsOrg/LettuceDetect](https://github.com/KRLabsOrg/LettuceDetect)
- **Architecture**: ModernBERT (Large) with extended context support up to 8192 tokens
- **Task**: Token Classification / Hallucination Detection
- **Training Dataset**: [RAGTruth](wandb/RAGTruth-processed)
- **Language**: English
- **Capabilities**: Detects hallucinated spans in answers, provides confidence scores, and calculates average confidence across detected spans.

LettuceDetect excels at processing long documents to determine if an answer aligns with the provided context, making it a powerful tool for ensuring factual accuracy.
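
A minimal usage sketch, assuming the `HallucinationDetector` interface documented in the LettuceDetect README (double-check the repo for the exact API and argument names):

```python
# pip install lettucedetect  (see the GitHub repo for exact install instructions)
from lettucedetect.models.inference import HallucinationDetector

# Load the ModernBERT-based detector from the Hub
detector = HallucinationDetector(
    method="transformer",
    model_path="KRLabsOrg/lettucedect-large-modernbert-en-v1",
)

context = ["France is a country in Europe. The capital of France is Paris. Its population is 67 million."]
question = "What is the capital of France, and what is its population?"
answer = "The capital of France is Paris, and its population is 69 million."

# Returns the spans of the answer flagged as hallucinated, each with a confidence score
spans = detector.predict(
    context=context, question=question, answer=answer, output_format="spans"
)
print(spans)
```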
ImranzamanML 
posted an update 2 months ago
Hugging Face just launched the AI Agents Course – a free journey from beginner to expert in AI agents!

- Learn AI Agent fundamentals, use cases and frameworks
- Use top libraries like LangChain & LlamaIndex
- Compete in challenges & earn a certificate
- Hands-on projects & real-world applications

https://huggingface.co/learn/agents-course/unit0/introduction

You can join a live Q&A on Feb 12 at 5 PM CET to learn more about the course here:

https://www.youtube.com/live/PopqUt3MGyQ
Tonic 
posted an update 2 months ago
🙋🏻‍♂️ Hey there folks,

Goedel's theorem prover is now being demoed on Hugging Face: Tonic/Math

Give it a try!
not-lain 
posted an update 2 months ago
Tonic 
posted an update 3 months ago
🙋🏻‍♂️ Hey there folks,

Our team made a game during the @mistral-game-jam and we're trying to win the community award!

Try our game out and drop us a ❤️ like to vote for us!

Mistral-AI-Game-Jam/TextToSurvive

Hope you like it!
not-lain 
posted an update 3 months ago
We now have more than 2000 public AI models using ModelHubMixin 🤗
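
For anyone who hasn't tried it yet, a minimal sketch of what the PyTorch flavor of the mixin buys you (the model class here is made up for illustration):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Any nn.Module that also inherits the mixin gets Hub (de)serialization for free
class TinyClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16, num_classes: int = 2):
        super().__init__()
        self.linear = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.linear(x)

model = TinyClassifier()
model.save_pretrained("tiny-classifier")               # writes config + weights locally
# model.push_to_hub("your-username/tiny-classifier")   # or push straight to the Hub
reloaded = TinyClassifier.from_pretrained("tiny-classifier")
```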
Tonic 
posted an update 3 months ago
🙋🏻‍♂️ Hey there folks,

Facebook AI just released JASCO models that generate music stems.

You can try it out here: Tonic/audiocraft

Hope you like it!
Tonic 
posted an update 3 months ago
🙋🏻‍♂️ Hey there folks, Open LLM Europe just released the Lucie 7B-Instruct model, a bilingual instruct model trained on open data! You can check out my unofficial demo here while we wait for the official inference API from the group: Tonic/Lucie-7B. Hope you like it 🚀
not-lain 
posted an update 3 months ago
Tonic 
posted an update 3 months ago
Microsoft just released Phi-4, check it out here: Tonic/Phi-4

Hope you like it :-)
DamarJati 
posted an update 3 months ago
Happy New Year 2025 🤗 to the Hugging Face community.
ImranzamanML 
posted an update 4 months ago
Deep understanding of the Concordance Index (C-index) evaluation measure for better models
Let's start with three patient groups:

- Group A
- Group B
- Group C

For each patient, we will predict a risk score (a higher score means a higher risk of an early event).

Step 1: Understanding the Concordance Index
The Concordance Index (C-index) evaluates how well the model ranks survival times.

Let's understand it with sample data.
Group A has 3 patients with actual survival times and predicted risk scores:

| Patient | Actual Survival Time | Predicted Risk Score |
|---------|----------------------|----------------------|
| P1      | 5 months             | 0.8                  |
| P2      | 3 months             | 0.9                  |
| P3      | 10 months            | 0.2                  |

Comparable pairs:

- (P1, P2): P2 has a shorter survival time and a higher risk score → Concordant ✅
- (P1, P3): P3 has a longer survival time and a lower risk score → Concordant ✅
- (P2, P3): P3 has a longer survival time and a lower risk score → Concordant ✅

Total pairs = 3
Total concordant pairs = 3

C-index for Group A = Concordant pairs / Total pairs = 3/3 = 1.0
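
The same pair counting in a few lines of plain Python (ignoring censoring, which a full C-index implementation has to handle):

```python
from itertools import combinations

# Group A: (actual survival time in months, predicted risk score)
patients = {"P1": (5, 0.8), "P2": (3, 0.9), "P3": (10, 0.2)}

concordant, comparable = 0, 0
for (_, (t_a, r_a)), (_, (t_b, r_b)) in combinations(patients.items(), 2):
    if t_a == t_b:
        continue  # tied survival times are not comparable in this simplified version
    comparable += 1
    # Concordant when the patient who survived less time got the higher risk score
    concordant += (t_a < t_b) == (r_a > r_b)

print(f"C-index = {concordant}/{comparable} = {concordant / comparable:.2f}")  # 3/3 = 1.00
```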

Step 2: Calculate the C-index for All Groups
Repeat the process for all groups. For now we can assume:

- Group A: C-index = 1.0
- Group B: C-index = 0.8
- Group C: C-index = 0.6

Step 3: Stratified Concordance Index
The Stratified Concordance Index combines the C-index scores of all groups, focusing on:

- Average performance across groups (mean of C-indices).
- Consistency across groups (low standard deviation of C-indices).
Formula:
Stratified C-index = Mean(C-index scores) - Standard Deviation(C-index scores)

Calculate the mean:
Mean = (1.0 + 0.8 + 0.6) / 3 = 0.8

Calculate the standard deviation:
Standard Deviation = sqrt(((1.0 - 0.8)^2 + (0.8 - 0.8)^2 + (0.6 - 0.8)^2) / 3) ≈ 0.16

Stratified C-index:
Stratified C-index = 0.8 - 0.16 = 0.64
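
The same arithmetic in Python, using the population standard deviation to match the hand calculation:

```python
import statistics

c_indices = {"Group A": 1.0, "Group B": 0.8, "Group C": 0.6}

mean_c = statistics.mean(c_indices.values())
std_c = statistics.pstdev(c_indices.values())  # population std dev, as in the formula above
stratified_c = mean_c - std_c

print(f"mean = {mean_c:.2f}, std = {std_c:.2f}, stratified C-index = {stratified_c:.2f}")
# mean = 0.80, std = 0.16, stratified C-index = 0.64
```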

Step 4: Interpret the Results
A high Stratified C-index means:

- The model predicts well overall (high mean C-index).
- Performance is consistent across groups (low standard deviation of C-indices).
lunarflu 
posted an update 4 months ago