Doron Adler (Norod78)

AI & ML interests

Fooling around with generative machine learning models.

Recent Activity

liked a Space 1 day ago
dicta-il/joint-demo
liked a dataset 1 day ago
Lightricks/Cakeify-Dataset

Organizations

Spaces-explorers, Gradio-Blocks-Party, Yam Peleg, ZeroGPU Explorers, Social Post Explorers, Hugging Face Discord Community, Endless Technologies Ltd.

Norod78's activity

reacted to daavoo's post with 😎 5 days ago
🤖 🗺️ Mapped all(?) the swimming pools 🏊 around another town with https://github.com/mozilla-ai/osm-ai-helper.

This time I mapped, and contributed to https://www.openstreetmap.org, more than 100 swimming pools around my wife's hometown. Finding them all took only about 20 min (plus ~3 min of verification) on a free Colab GPU 🚀

Try it yourself around a single point: mozilla-ai/osm-ai-helper
reacted to etemiz's post with 😎 5 days ago
Started fine-tuning Gemma 3 using an evolutionary approach. It is not the worst model according to the AHA leaderboard, and it is one of the smartest according to lmarena.ai. My objective is to make it based, anti-woke, wise, beneficial, and then some.

Several GPUs are fine-tuning it at the same time, each on a different dataset and using QLoRA, and the successful runs are merged later. Compared to plain LoRA, this allows faster training and also reduces overfitting, because the merge operation heals overfitting. The downside is that the 4-bit quantization may make the models dumber. But I am not looking for sheer IQ. Too much mind is a problem anyway :)
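A minimal sketch of what that merge step could look like with the PEFT library; the model id, adapter paths, and the equal 50/50 weights are illustrative assumptions, not the author's actual setup:

```python
# Sketch: merge several independently trained QLoRA adapters into one.
# Assumptions: the adapters were trained with PEFT on the same base model;
# the checkpoint id, paths, and weights below are placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")  # assumed checkpoint

# Load each parallel run's adapter under its own name.
model = PeftModel.from_pretrained(base, "runs/adapter-dataset-a", adapter_name="dataset_a")
model.load_adapter("runs/adapter-dataset-b", adapter_name="dataset_b")

# Linearly combine the adapters; averaging tends to smooth out
# run-specific overfitting (the "merge heals overfitting" idea above).
model.add_weighted_adapter(
    adapters=["dataset_a", "dataset_b"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="linear",
)
model.set_adapter("merged")

# Optionally bake the merged adapter into the base weights for inference.
model = model.merge_and_unload()
```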

Has anyone tried parallel QLoRA and merging before?

I also automated dataset selection, benchmarking, and convergence toward the objectives (the fitness function, the reward). It is basically trying to get a higher score on the AHA leaderboard as fast as possible with a diverse set of organisms that "evolve by training".
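One way to picture that loop, as a hedged sketch: train_round, merge, and aha_score below are hypothetical stand-ins for the real fine-tuning step, the adapter merge, and the AHA-leaderboard benchmark.

```python
import random

# Hypothetical placeholders for the real pipeline pieces:
#   train_round(org) -> fine-tunes one QLoRA "organism" on its dataset
#   merge(a, b)      -> merges two adapters (crossover)
#   aha_score(org)   -> the fitness: benchmark score on AHA-style questions
def evolve(population, train_round, merge, aha_score, generations=5, survivors=2):
    for _ in range(generations):
        population = [train_round(org) for org in population]     # run in parallel in practice
        ranked = sorted(population, key=aha_score, reverse=True)  # fitness = AHA score
        parents = ranked[:survivors]                              # selection
        children = [merge(*random.sample(parents, 2))             # crossover by merging
                    for _ in range(len(population) - survivors)]
        population = parents + children
    return max(population, key=aha_score)

# Toy demo with numbers standing in for models, just to show the loop runs:
best = evolve(
    population=[0.1, 0.4, 0.2, 0.3],
    train_round=lambda x: x + random.uniform(0, 0.1),  # "training" nudges fitness up
    merge=lambda a, b: (a + b) / 2,                    # "merging" averages fitness
    aha_score=lambda x: x,
)
```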

I want to release some cool stuff when I have the time:
- how an answer to a single question changes over time, with each training round or day
- a chart to show AHA alignment over training rounds