CodeParrot

non-profit

AI & ML interests

Language models for code.

Recent Activity

codeparrot's activity

thomwolf
posted an update about 1 month ago
We are proud to announce HuggingFaceFW/fineweb-2: A sparkling update to HuggingFaceFW/fineweb with 1000s of 🗣️ languages.

We applied the same data-driven approach that led to SOTA English performance in 🍷 FineWeb to thousands of languages.

🥂 FineWeb2 has 8TB of compressed text data and outperforms other multilingual datasets in our experiments.

The dataset is released under the permissive 📜 ODC-By 1.0 license, and the 💻 code to reproduce it and our evaluations is public.
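
As a quick illustration, here's a minimal sketch of streaming one language subset with the `datasets` library; the `fra_Latn` config name is an assumed example, so check the dataset card for the actual list of language configs:

```python
# Minimal sketch, assuming per-language configs; "fra_Latn" is an example name.
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb-2", name="fra_Latn", split="train", streaming=True)
for doc in ds:
    print(doc["text"][:200])  # each record carries the raw web text plus metadata
    break
```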

We will very soon announce a big community project, and are working on a 📝 blogpost walking you through the entire dataset creation process. Stay tuned!

In the meantime, come ask us questions in our chat space: HuggingFaceFW/discussion

H/t @guipenedo @hynky @lvwerra as well as @vsabolcec, Bettina Messmer, @negar-foroutan, and @mjaggi
loubnabnl
posted an update about 2 months ago
Making SmolLM2 reproducible: open-sourcing our training & evaluation toolkit 🛠️ https://github.com/huggingface/smollm/

- Pre-training code with nanotron
- Evaluation suite with lighteval
- Synthetic data generation using distilabel (powers our new SFT dataset HuggingFaceTB/smoltalk)
- Post-training scripts with TRL & the alignment handbook
- On-device tools with llama.cpp for summarization, rewriting & agents
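
As a rough illustration of the post-training piece, here's a minimal SFT sketch with TRL's SFTTrainer on smoltalk; the "all" config and the SmolLM2 base checkpoint name are my assumptions, so check the dataset and model cards:

```python
# Minimal sketch, not the official post-training script.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")  # config name assumed

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B",  # base checkpoint name assumed
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft", max_seq_length=2048),
)
trainer.train()
```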

Apache 2.0 licensed. V2 pre-training data mix coming soon!

Which other tools should we add next?
thomwolf
posted an update 3 months ago
Parents in the 1990s: Teach the kids to code
Parents now: Teach the kids to fix the code when it starts walking around 🤖✨
loubnabnl
posted an update 8 months ago
๐Ÿท FineWeb technical report is out and so is ๐Ÿ“š FineWeb-Edu, a 1.3 trillion tokens dataset that outperforms all other open web datasets, with remarkable improvements on educational benchmarksย such as MMLU, ARC, and OpenBookQA.

Technical report: HuggingFaceFW/blogpost-fineweb-v1
Dataset: HuggingFaceFW/fineweb-edu

We used Llama 3 generations to train an educational quality classifier, filtering the 15 trillion tokens of FineWeb to select only those with high educational value (an approach also used in Llama 3 and Phi-3 training datasets). We're releasing both FineWeb-Edu and the classifier, along with a larger, less heavily filtered version containing 5.4 trillion tokens.
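
To give a feel for how the released classifier is used, here's a hedged sketch of scoring a passage with it; the repo id below is my assumption, so double-check it on the dataset card:

```python
# Hedged sketch: score a passage's educational quality (regression head, roughly 0 to 5).
# The repo id is assumed; verify it on the FineWeb-Edu dataset card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Photosynthesis converts light energy into chemical energy.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # FineWeb-Edu keeps documents above a score threshold
```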

You can find more details about the dataset and the experiments we ran in the FineWeb technical report. It's a 45-minute read, but it contains all the secret sauce for building high-quality web datasets.

Enjoy!
thomwolf
posted an update 8 months ago
[New crazy blog post alert] We are releasing an extensive blog post on the science of creating high-quality web-scale datasets, detailing all the steps and learnings from our recent 15-trillion-token 🍷 FineWeb release

Inspired by the distill.pub interactive-graphics papers, we set out to write the most extensive, enjoyable, and in-depth tech report we could draft, so prepare for a 45-min read with interactive graphics and all.

And that's not all: in this article we also introduce 📚 FineWeb-Edu, a filtered subset of Common Crawl with 1.3T tokens containing only web pages with very high educational content. To our knowledge, FineWeb-Edu outperforms all openly released web-scale datasets by a significant margin on knowledge- and reasoning-intensive benchmarks like MMLU, ARC, and OpenBookQA

We also make a number of surprising observations on the "quality" of the internet itself, which may challenge some of the common assumptions about web data (not saying more; I'll let you draw your own conclusions ;)

HuggingFaceFW/blogpost-fineweb-v1
thomwolf
posted an update 9 months ago
Is it time for the open-source AI robot revolution 🚀?

With @haixuantao and @Leyo we've been playing with a low-cost DJI robot controlled by three local open-source AI models (Whisper, Idefics2, Parler-TTS, all Apache 2.0) and orchestrated by dora-rs.
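
For intuition, here's a much-simplified stand-in for that pipeline in plain Python with transformers (the real demo wires these up as dora-rs nodes; the audio/image file names are placeholders):

```python
# Simplified sketch of the speech -> vision-language chain; not the dora-rs dataflow itself.
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor, pipeline

# 1) Transcribe the spoken command with Whisper.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")
command = asr("mic_capture.wav")["text"]  # placeholder audio file

# 2) Answer it against the robot's camera frame with Idefics2.
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b")
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": command}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[Image.open("camera_frame.jpg")], return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
# 3) In the demo, the reply is then spoken aloud with Parler-TTS mini (omitted here).
```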

Links to find all the hardware/software we used in the demo:
- robot control framework – dora-rs: https://github.com/dora-rs/dora
- speech-to-text model – Whisper: openai/whisper-base
- vision-text model – Idefics2: HuggingFaceM4/idefics2-8b-AWQ
- text-to-speech model – ParlerTTS mini: parler-tts/parler_tts_mini_v0.1
- robot: https://dji.com/robomaster-s1
- code gist: https://gist.github.com/haixuanTao/860e1740245dc2c8dd85b496150a9320
- Larger codebase: dora-rs/dora-idefics2
- laptop/PC: any with a recent GPU card (ours has an RTX 4090)

Enjoy!
thomwolf
posted an update 10 months ago
Very interesting model just released by MyShell: jetmoe/jetmoe-8b. It's an 8B-parameter MoE LLM with 2.2B active parameters, so really efficient.

Main characteristics:
- impressive performance for its size (beating meta-llama/Llama-2-7b and huggyllama/llama-13b)
- combines Mixture of Attention Heads (MoA) and Mixture of MLP Experts (MoE) – 8 experts, with 2 active for each token (see the sketch after this list)
- trained on a rather limited 1.25T tokens from publicly available datasets – the training recipe follows MiniCPM's two-phase training method => first time I've seen this for a 2B+ model
- $100k to train
- open weights - open sharing of recipes - open dataset - open code => ♡
- still interesting room to improve performance (if only by training longer)
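
To make the "2 of 8 experts" idea concrete, here's a toy top-2 routing sketch in PyTorch; this is my illustration of the general technique, not JetMoE's actual code:

```python
# Toy top-2 expert routing: each token's router picks 2 of 8 expert MLPs,
# so only a fraction of the MLP parameters run per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, dim=512, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, dim)
        weights, idx = F.softmax(self.router(x), dim=-1).topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e          # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out

moe = Top2MoE()
print(moe(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```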

Links:
- report: https://research.myshell.ai/jetmoe
- model: jetmoe/jetmoe-8b
- code: https://github.com/myshell-ai/JetMoE

Note: I actually detailed the whole MiniCPM schedule, Mixture-of-Experts (MoE), and many of the datasets used in this work in my recent little guide to building LLMs in 2024, so feel free to check it out if you want to learn more about these topics: https://www.youtube.com/watch?v=2-SPH9hIKT8