Good question!
Before, you had to download the .wav file from Colab; I've now added an Audio display from IPython and will be cleaning things up for a future post. Sorry for the rough first release; the next will be much better. I usually work in a copy so as not to disturb the original.
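For anyone curious, the inline playback comes down to a couple of lines. A minimal sketch, assuming the notebook has already written its output to a file (the name "output.wav" is a placeholder):

```python
# Minimal sketch: play a generated .wav inline instead of downloading it.
# "output.wav" is a placeholder for whatever file the notebook writes.
from IPython.display import Audio, display

display(Audio("output.wav"))  # renders an inline audio player in Colab
```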
Samuel L Meyers PRO
MrOvkill
AI & ML interests
Dialogue Generation, Text Generation, etc.
MrOvkill's activity

replied to their post 14 days ago

posted an update 16 days ago
Hello!
I was just playing around with Python's MIDI library and Colab's code generation, and accidentally cooked up a quick n' dirty audio synthesis template.
Have fun!
https://colab.research.google.com/drive/1d-AF6jygCwmoJvAa9nnEMe5ROidnMJNY?usp=sharing
-<3
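If you just want the flavor without opening the notebook, here's a minimal sketch of the idea, using numpy sine waves keyed off MIDI note numbers (the actual notebook may do this differently):

```python
# Quick n' dirty synthesis sketch: MIDI note numbers -> sine waves -> inline audio.
import numpy as np
from IPython.display import Audio

SAMPLE_RATE = 44100

def midi_to_hz(note: int) -> float:
    # A4 (MIDI note 69) = 440 Hz; each semitone is a factor of 2**(1/12)
    return 440.0 * 2 ** ((note - 69) / 12)

def tone(note: int, duration: float = 0.4) -> np.ndarray:
    # One sine-wave note with a squared fade-out to avoid clicks
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    fade = np.linspace(1.0, 0.0, t.size) ** 2
    return np.sin(2 * np.pi * midi_to_hz(note) * t) * fade

# C major arpeggio: C4, E4, G4, C5
melody = np.concatenate([tone(n) for n in (60, 64, 67, 72)])
Audio(melody, rate=SAMPLE_RATE)  # inline player, no download needed
```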

reacted to luigi12345's post with 🔥 about 1 month ago
🚀 OpenAI o3-mini Just Dropped – Here's What You Need to Know!
OpenAI just launched o3-mini, a faster, smarter upgrade over o1-mini. It's better at math, coding, and logic, making it more reliable for structured tasks. Now available in ChatGPT & API, with function calling, structured outputs, and system messages.
🔥 Why does this matter?
✅ Stronger in logic, coding, and structured reasoning
✅ Function calling now works reliably for API responses
✅ More stable & efficient for production tasks
✅ Faster responses with better accuracy
⚠️ Who should use it?
✔️ Great for coding, API calls, and structured Q&A
❌ Not meant for long conversations or complex reasoning (GPT-4 is better)
💡 Free users: Try it under "Reason" mode in ChatGPT
💡 Plus/Team users: Daily message limit tripled to 150/day!
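For reference, a hedged sketch of what the API side looks like with the OpenAI Python SDK, combining a system message with function calling (the get_weather tool is hypothetical; check the current docs for exact parameter support):

```python
# Sketch: calling o3-mini with a system message and a function-calling tool.
# The get_weather tool is hypothetical, purely for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "What's the weather in Paris right now?"},
    ],
    tools=tools,
)
print(response.choices[0].message)  # either text or a tool call to execute
```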

reacted to MoritzLaurer's post with ❤️ 2 months ago
The TRL v0.13 release is 🔥! My highlights are the new process reward trainer to train models similar to o1, and tool call support:
🧠 Process reward trainer: Enables training of Process-supervised Reward Models (PRMs), which reward the quality of intermediate steps, promoting structured reasoning. Perfect for tasks like stepwise reasoning.
🔀 Model merging: A new callback leverages mergekit to merge models during training, improving performance by blending reference and policy models, optionally pushing merged models to the Hugging Face Hub.
🛠️ Tool call support: TRL preprocessing now supports tool integration, laying the groundwork for agent fine-tuning with examples like dynamic temperature fetching in prompts.
⚖️ Mixture of judges: The new AllTrueJudge combines decisions from multiple binary judges for more nuanced evaluation.
Read the release notes and other resources here 👇
Release: https://github.com/huggingface/trl/releases/tag/v0.13.0
Mergekit: https://github.com/arcee-ai/mergekit
Mixture of judges paper: The Perfect Blend: Redefining RLHF with Mixture of Judges (2409.20370)
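For a feel of the PRM trainer, usage looks roughly like this (lightly adapted from the TRL docs example; the base model and dataset choices are illustrative, not prescriptive):

```python
# Sketch of training a Process-supervised Reward Model with TRL v0.13.
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer
from trl import PRMConfig, PRMTrainer

model_id = "Qwen/Qwen2-0.5B"  # any small base model works for a demo
model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Stepwise-labeled math reasoning data: each intermediate step has a label
train_dataset = load_dataset("trl-lib/math_shepherd", split="train[:10%]")

training_args = PRMConfig(output_dir="prm-demo", logging_steps=25)
trainer = PRMTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```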