
Aritra Roy Gosthipaty PRO

ariG23498

AI & ML interests

Deep Representation Learning

Organizations

Hugging Face, Google, Notebooks-explorers, 🧨Diffusers, PyTorch Image Models, Keras, Cohere Labs, Hugging Test Lab, Hugging Face Fellows, Probing ViTs, TrystAI, PyImageSearch, Keras Dreambooth Event, Hugging Face OSS Metrics, Blog-explorers, ZeroGPU Explorers, kotol, gg-hf, MLX Community, IBM Granite, Open Generative Fill, Social Post Explorers, Hugging Face Discord Community, nltpt, nltpt-q, qrias, Hugging Face Science, open/ acc, wut?, LLM from Scratch, s0225, gg-hf-g, llrehf, University of Science and Technology of China, Model Metadata, all things vision LMs

ariG23498's activity

posted an update 1 day ago
🚨 Implement KV Cache from scratch in pure PyTorch. 🚨

We documented everything we learned while adding a KV Cache to nanoVLM. Joint work with @kashif @lusxvr @andito @pcuenq

Blog: hf.co/blog/kv-cache
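The idea behind a KV cache can be sketched without any framework at all (the blog does it in PyTorch; this is a toy, framework-free illustration with made-up scalar keys/values, not the nanoVLM code): at each decoding step, append the new key/value pair to a cache and attend over everything cached so far, instead of recomputing keys and values for the whole prefix.

```python
import math

# Toy KV-cache sketch (illustrative only, not the nanoVLM implementation).
# Each decoding step does O(1) cache work: append the new key/value,
# then attend over all cached entries.

class KVCache:
    def __init__(self):
        self.keys = []    # one entry per generated token
        self.values = []

    def update(self, k, v):
        self.keys.append(k)
        self.values.append(v)
        return self.keys, self.values

def attend(q, keys, values):
    # toy scaled dot-product attention over scalar keys/values
    scores = [q * k for k in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * v for w, v in zip(weights, values))

cache = KVCache()
for k, v in [(0.1, 1.0), (0.5, 2.0), (0.9, 3.0)]:
    keys, values = cache.update(k, v)   # reuse, don't recompute
    out = attend(1.0, keys, values)     # attends over all cached tokens
print(len(cache.keys))  # 3
```

The output of `attend` is always a convex combination of the cached values, which is what makes the incremental update equivalent to recomputing attention from scratch.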
reacted to danielhanchen's post with 🔥 2 days ago
reacted to hesamation's post with 🔥 9 days ago
I really like how this seven-stage pipeline was laid out in the Ultimate Guide to Fine-Tuning book.

It gives an overview, then goes into detail for each stage, even providing best practices.

It's 115 pages on arXiv, definitely worth a read.

Check it out: https://arxiv.org/abs/2408.13296
reacted to merve's post with 🔥 24 days ago
VLMs 2025 UPDATE 🔥

We just shipped a blog on all the latest in vision language models, including
🤖 GUI agents, agentic VLMs, omni models
📑 multimodal RAG
⏯️ video LMs
🤏🏻 smol models
...and more! https://huggingface.co/blog/vlms-2025
reacted to merve's post with 👍🚀 about 1 month ago
A real-time object detector much faster and more accurate than YOLO, with an Apache 2.0 license, just landed in Hugging Face transformers 🔥

D-FINE is the SOTA real-time object detector, and it runs on a T4 (free Colab) 🤩

> Collection with all checkpoints and demo ustc-community/d-fine-68109b427cbe6ee36b4e7352

Notebooks:
> Tracking https://github.com/qubvel/transformers-notebooks/blob/main/notebooks/DFine_tracking.ipynb
> Inference https://github.com/qubvel/transformers-notebooks/blob/main/notebooks/DFine_inference.ipynb
> Fine-tuning https://github.com/qubvel/transformers-notebooks/blob/main/notebooks/DFine_finetune_on_a_custom_dataset.ipynb
h/t @vladislavbro @qubvel-hf @ariG23498 and the authors of the paper 🎩

Regular object detectors predict bounding boxes as exact (x, y, w, h) pixel coordinates, a rigid formulation that is hard to optimize 🥲☹️

D-FINE instead formulates object detection as a distribution over bounding box coordinates, refines it iteratively, and is more accurate 🤩

Another core idea behind this model is Global Optimal Localization Self-Distillation ⬇️

The model uses the final layer's distribution output (sort of like a teacher) and distills it into earlier layers to make them more performant.
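The distribution-over-coordinates idea can be sketched in a few lines (illustrative only, not D-FINE's actual code; bin values and logits are made up): instead of regressing a raw number, the model outputs logits over discrete bins, and the coordinate is the expectation under the softmax distribution. A distribution like this is also what the final layer can distill into earlier layers.

```python
import math

# Illustrative sketch of distribution-based box regression (not D-FINE's
# real implementation): a coordinate is the expectation of a softmax
# distribution over discrete bin values.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def expected_coordinate(logits, bin_values):
    probs = softmax(logits)
    return sum(p * b for p, b in zip(probs, bin_values))

bins = [0.0, 0.25, 0.5, 0.75, 1.0]     # hypothetical normalized offsets
logits = [-2.0, 0.0, 3.0, 0.5, -1.0]   # peaked near the 0.5 bin
coord = expected_coordinate(logits, bins)
print(round(coord, 3))
```

Because the prediction is a full distribution rather than a point, it carries uncertainty that a point estimate cannot, which is what makes iterative refinement and layer-to-layer distillation natural.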

reacted to burtenshaw's post with 🚀 4 months ago
🚧 Work in Progress! 🚧

👷‍♀️ We're working hard on getting the official agents course ready for the 50,000 students who have signed up.

If you want to contribute to the discussion, I started these community posts. Looking forward to hearing from you:

- smolagents unit in the agents course - agents-course/README#7
- LlamaIndex Unit in the agents course - agents-course/README#6
- LangChain and LangGraph unit in the agents course - agents-course/README#5
- Real world use cases in the agents course - agents-course/README#8


posted an update 5 months ago
posted an update 5 months ago
reacted to burtenshaw's post with 🚀🔥 5 months ago
We're launching a FREE and CERTIFIED course on Agents!

We're thrilled to announce the launch of the Hugging Face Agents course on Learn! This interactive, certified course will guide you through building and deploying your own AI agents.

Here's what you'll learn:

- Understanding Agents: We'll break down the fundamentals of AI agents, showing you how they use LLMs to perceive their environment (observations), reason about it (thoughts), and take actions. Think of a smart assistant that can book appointments, answer emails, or even write code based on your instructions.
- Building with Frameworks: You'll dive into popular agent frameworks like LangChain, LlamaIndex and smolagents. These tools provide the building blocks for creating complex agent behaviors.
- Real-World Applications: See how agents are used in practice, from automating SQL queries to generating code and summarizing complex documents.
- Certification: Earn a certification by completing the course modules, implementing a use case, and passing a benchmark assessment. This proves your skills in building and deploying AI agents.
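The observe → think → act loop described above can be sketched in a few lines of plain Python (a toy, not course material; the "LLM" is a stubbed policy, and the tool names are made up — frameworks like smolagents, LangChain, or LlamaIndex replace the stub with a real model call):

```python
# Toy agent loop: observation -> thought -> action -> result.
# llm_stub stands in for an LLM call; TOOLS stands in for real tools.

def llm_stub(observation):
    # hypothetical policy mapping an observation to a thought and action
    if "email" in observation:
        return ("User needs a reply drafted", "draft_reply")
    return ("Nothing to do", "wait")

TOOLS = {
    "draft_reply": lambda: "Drafted a reply.",
    "wait": lambda: "Waiting.",
}

def agent_step(observation):
    thought, action = llm_stub(observation)   # reason about the observation
    result = TOOLS[action]()                  # take the chosen action
    return thought, action, result

thought, action, result = agent_step("new email from a customer")
print(action, "->", result)  # draft_reply -> Drafted a reply.
```

Everything interesting in a real agent lives in the two stubbed pieces: how the model turns observations into thoughts and how actions are dispatched to tools.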
Audience

This course is designed for anyone interested in the future of AI. Whether you're a developer, data scientist, or simply curious about AI, this course will equip you with the knowledge and skills to build your own intelligent agents.

Enroll today and start building the next generation of AI agent applications!

https://bit.ly/hf-learn-agents
reacted to burtenshaw's post with 🔥 6 months ago
Quick update from week 1 of smol course. The community is taking the driver's seat and using the material for their own projects. If you want to do the same, join in!

- we have ongoing translation projects in Korean, Vietnamese, Portuguese, and Spanish
- 3 chapters are ready for students, on topics like instruction tuning, preference alignment, and parameter-efficient fine-tuning
- 3 chapters are in progress on evaluation, vision language models, and synthetic data.
- around 780 people have forked the repo to use it for learning, teaching, and sharing.

⏭️ Next step is to support people who want to use the course for teaching, content creation, internal knowledge sharing, or anything else. If you're into this, drop an issue or PR.

REPO: https://buff.ly/3ZCMKX2
discord channel: https://buff.ly/4f9F8jA
posted an update 6 months ago
reacted to rwightman's post with 🚀 6 months ago
There's a new timm release, v1.0.12, with a focus on optimizers. The optimizer factory has been refactored; there's now a timm.optim.list_optimizers() and a new way to register optimizers and their attributes. As always, you can use a timm optimizer like a torch one: just replace torch.optim with timm.optim.

New optimizers include:
* AdafactorBigVision - adafactorbv
* ADOPT - adopt / adoptw (decoupled decay)
* MARS - mars
* LaProp - laprop
* Cautious Optimizers - a modification to all of the above; prefix with c, as in cadamw, cnadamw, csgdw, clamb, crmsproptf

I shared some caution comparisons in this model repo: rwightman/timm-optim-caution

For details, references, see the code: https://github.com/huggingface/pytorch-image-models/tree/main/timm/optim

reacted to davidberenstein1957's post with 🚀🧠👍 6 months ago
The Data Is Better Together community is set to release the first Apache 2 licensed image preference dataset!

Great work and let's give this a final push :)

@aashish1904 congrats on your month of HF pro. There is more to win during this sprint!

@aashish1904 @AnyaDesdein @davidberenstein1957 @Malalatiana @beta3 @fffiloni @munish0838 @Reza2kn @bbunzeck @Creazycreator @andrei-saceleanu @jafhaponiuk @rca-etl @kf120 @burtenshaw @mmhamdy @grib0ed0v @Doopus @AnyaDes @ttkap @Xceron @Lewox @davanstrien @Azazelle @adirik @Ashish08 @AntonVic @kenantang @sdiazlor @g-ronimo @dennis-rall @prithivMLmods @girtss3 @flozi00 @WaveCut @Taylor658 @Wildminder @Sara9999 @phaelishall @sararob @dvilasuero @pgabrys @plaguss @CDS899 @timajwilliams @rudzinskimaciej @pavel-ai @aggr8 @ignacioct @MouseAI @Leeps @MaksKul @NicolasDmln @Muinez @kusht55 @caiolang @Jakub-Brand24 @loamy @Demijan @eliab96 @Viewegger @JosephCatrambone @p1atdev @mrshu @o639 @Targezed @Aviv-anthonnyolime @thliang01 @Ahmed-Amine @glards @pranaykoppula @nataliaElv @MaPirlet @alvarobartt @gabrielmbmb @zlicastro @Jaydip @Chouettecheveche @lilcheaty @ruyrdiaz @robintema @fdaudens @ggcristian @a-r-r-o-w @pates @joheras @stopsatgreen @bezo97 @chachi902 @iamyann @liamcripwell @dmb23 @korbih @anonymous7743 @akbdx18 @OVAWARE @severo @akontra @lichorosario @lhoestq @SebastianBodza @Vishnou @ameerazam08 @appoose @Mukei @mearco @joaquincabezas @Fizzarolli @thomastraum @igortopolski @OxxoCodes @patrickfleith @asoria @bn22 @sitammeur @Krodolf @bergr7f @Sbxxn @wietsevenema @sugatoray @Iamladi @MikeTrizna @feveromo @mokady @Bolero @prath @Dowwie @kfahn @decodingchris @alili2050 @RahulRaman @yzimmermann @Ameeeee @ecyht2 @MattMC001 @hemanthkumarak @Thegorgibus @akos2 @LawRun @ramithuh @SuperMuel @sjans @peterizsak @mosama @Eyel @mtr3 @cfahlgren1 @legentil @clem @Citaman @Aurelien-Morgan @AntoineBourgois @TotoB12 @Stanmey @osanseviero @multimodalart @maxiw @ariG23498 @ngk89 @femboysLover @dvs @tacohiddink @blanchon @DavidJimenez
reacted to clem's post with 🚀 6 months ago
Six predictions for AI in 2025 (and a review of how my 2024 predictions turned out):

- There will be the first major public protest related to AI
- A big company will see its market cap divided by two or more because of AI
- At least 100,000 personal AI robots will be pre-ordered
- China will start to lead the AI race (as a consequence of leading the open-source AI race).
- There will be big breakthroughs in AI for biology and chemistry.
- We will begin to see the economic and employment growth potential of AI, with 15M AI builders on Hugging Face.

How my predictions for 2024 turned out:

- A hyped AI company will go bankrupt or get acquired for a ridiculously low price
✅ (Inflection, AdeptAI, ...)

- Open-source LLMs will reach the level of the best closed-source LLMs
✅ with QwQ and dozens of others

- Big breakthroughs in AI for video, time-series, biology and chemistry
✅ for video 🔴 for time-series, biology, and chemistry

- We will talk much more about the cost (monetary and environmental) of AI
✅ Monetary 🔴 Environmental (😢)

- A popular media will be mostly AI-generated
βœ… with NotebookLM by Google

- 10 million AI builders on Hugging Face, leading to no increase in unemployment
🔜 currently 7M AI builders on Hugging Face
reacted to merve's post with 🚀🔥 6 months ago
small but mighty 🔥
you can fine-tune SmolVLM on an L4 with a batch size of 4 and it will only take 16.4 GB VRAM 🫰🏻 with gradient accumulation, the simulated batch size is 16 ✨
I made a notebook that includes all the goodies: QLoRA, gradient accumulation, and gradient checkpointing, with explanations of how they work 💎 https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
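The gradient-accumulation trick mentioned in the post can be sketched without any framework (a toy, framework-free illustration with a one-parameter least-squares model; the data and learning rate are made up, and real training would do this with a PyTorch optimizer): gradients from several micro-batches are summed, and one optimizer step is taken per accumulation window, so 4 micro-batches of size 4 behave like one batch of 16.

```python
# Toy gradient accumulation: sum scaled micro-batch gradients, then
# apply one optimizer step per accum_steps micro-batches.

def grad(w, batch):
    # gradient of the mean of 0.5 * (w*x - y)**2 with respect to w
    return sum((w * x - y) * x for x, y in batch) / len(batch)

w, lr, accum_steps = 0.0, 0.1, 4
micro_batches = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)], [(4.0, 8.0)]]

accumulated = 0.0
for i, batch in enumerate(micro_batches, start=1):
    accumulated += grad(w, batch) / accum_steps  # scale like one big batch
    if i % accum_steps == 0:
        w -= lr * accumulated   # single optimizer step per window
        accumulated = 0.0
print(round(w, 3))  # 1.5
```

Dividing each micro-batch gradient by `accum_steps` is what makes the accumulated update match the gradient of one large batch, so the learning rate doesn't need rescaling.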