AI & ML interests

🤖🤗 Multimedia inputs and outputs to create augmented culture and better outcomes for humans everywhere. ❤️🚀

MultiTransformer's activity

Tonic 
posted an update 7 days ago
🙋🏻‍♂️ Hey there folks,

At every bio/med/chem meeting I go to, I always get the same questions: "Why are you sharing a gdrive link with me for this?" and "Do you have any plans to publish your model weights and datasets on Hugging Face?" Today I finally got a good answer which explains everything:

Basically, there is some kind of government censorship on this (USA, but I'm sure others too): researchers are told they are not allowed to share, because it is considered a "data leak", which is illegal!

This is terrible! But the good news is that we can do something about it!

There is a "call for opinions and comments" from the NIH (USA), where we can make our opinion on this topic known: https://osp.od.nih.gov/comment-form-responsibly-developing-and-sharing-generative-artificial-intelligence-tools-using-nih-controlled-access-data/

Kindly consider dropping your opinion and thoughts about this censorship of science, and share this post, the link, or your thoughts widely.

Together, maybe we can start to share data and model weights appropriately and openly, in a good way 🙏🏻🚀

cc @cyrilzakka

AtAndDev 
posted an update 13 days ago
deepseek-ai/DeepSeek-R1-0528

This is the end
hesamation 
posted an update 15 days ago
I really like how this seven-stage pipeline was laid out in the Ultimate Guide to Fine-Tuning book.

It gives an overview, then goes into detail for each stage, even providing best practices.

It's 115 pages on arXiv and definitely worth a read.

Check it out: https://arxiv.org/abs/2408.13296
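
If you want to see what one of those stages looks like in code, here is a minimal sketch of plain supervised fine-tuning with Hugging Face transformers; the model and dataset names are placeholders, not from the book:

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "gpt2"  # placeholder: any small causal LM works for a dry run
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("imdb", split="train[:1%]")  # placeholder corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256, padding="max_length")

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
tokenized = tokenized.map(lambda b: {"labels": b["input_ids"]}, batched=True)  # causal LM: labels = inputs

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out", per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=tokenized,
)
trainer.train()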
Tonic 
posted an update 16 days ago
🙋🏻‍♂️ Hey there folks,

Yesterday the world's first "Learn to Vibe Code" application was released.

As vibe coding becomes the mainstream paradigm, the first educational app is now here to support it.

You can try it out already :

https://vibe.takara.ai

And of course it's entirely open source, so I already opened my first issue and feature branch :-) 🚀
hesamation 
posted an update about 1 month ago
This book actually exists for free: “The Little Book of Deep Learning”. Best for refreshing your mind on DL basics:
> foundations of machine learning
> how models train
> common layers (dropout, pooling…) — see the quick sketch below
> basic intro to LLMs
It's actually optimized for mobile reading.

Book: https://fleuret.org/public/lbdl.pdf
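
As a quick refresher on two of those common layers, a minimal PyTorch sketch (my example, not from the book):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)         # dummy batch: one 8x8 "image" with 3 channels
pool = nn.MaxPool2d(kernel_size=2)  # pooling halves spatial resolution: 8x8 -> 4x4
drop = nn.Dropout(p=0.5)            # dropout randomly zeroes activations during training
y = drop(pool(x))
print(y.shape)  # torch.Size([1, 3, 4, 4])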
Nymbo 
posted an update about 1 month ago
Haven't seen this posted anywhere: Llama-3.3-8B-Instruct is available on the new Llama API. Is this a new model, or did someone mislabel Llama-3.1-8B?
daavoo 
posted an update about 1 month ago
Have you heard about the Agent2Agent Protocol (A2A)?

We have just released an option in https://github.com/mozilla-ai/any-agent to serve any of the supported agent frameworks (Agno, Google ADK, Langchain, LlamaIndex, OpenAI Agents SDK, smolagents, and tinyagent) over A2A!

Check the docs: https://mozilla-ai.github.io/any-agent/serving/

# google_expert.py
from any_agent import AgentConfig, AnyAgent
from any_agent.config import ServingConfig
from any_agent.tools import search_web

# Build an agent on the Google ADK framework with a web-search tool
agent = AnyAgent.create(
    "google",
    AgentConfig(
        name="google_expert",
        model_id="gpt-4.1-nano",
        instructions="You must use the available tools to find an answer",
        description="An agent that can answer questions about the Google Agents Development Kit (ADK).",
        tools=[search_web]
    )
)

# Expose the agent over the A2A protocol on port 5001
agent.serve(ServingConfig(port=5001))
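
Once it's serving, other A2A clients can discover the agent via its agent card. A minimal sketch of that check, assuming the server follows the A2A spec's well-known discovery path (details may vary by version):

import requests

# The A2A protocol publishes a discovery document ("agent card") at a well-known path
card = requests.get("http://localhost:5001/.well-known/agent.json").json()
print(card.get("name"), card.get("description"))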
daavoo 
posted an update about 1 month ago
We've just released a new version of https://github.com/mozilla-ai/any-agent, including a Python implementation of https://huggingface.co/blog/tiny-agents!

Give it a ⭐!

from any_agent import AnyAgent, AgentConfig
from any_agent.config import MCPStdioParams

# Build a tinyagent that talks to an MCP server over stdio
agent = AnyAgent.create(
    "tinyagent",
    AgentConfig(
        model_id="gpt-4.1-nano",
        instructions="You must use the available tools to find an answer",
        tools=[
            # Launch the DuckDuckGo MCP server as a subprocess via uvx
            MCPStdioParams(
                command="uvx",
                args=["duckduckgo-mcp-server"]
            )
        ]
    )
)

result = agent.run(
    "Which Agent Framework is the best??"
)
print(result.final_output)

Nymbo 
posted an update about 1 month ago
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~

Both themes have been updated to fix some long-standing inconsistencies dating back to the transition to Gradio v5. Textboxes are no longer bright green, and inline code is readable now! Both themes are now visually identical across versions.

If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
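
If you're wiring up a new space, applying a hub theme is a one-liner; a minimal sketch using standard Gradio usage (nothing specific to these themes):

import gradio as gr

# Gradio can load a theme from the Hugging Face Hub by its repo id
with gr.Blocks(theme="Nymbo/Nymbo_Theme") as demo:
    gr.Textbox(label="Hello")

demo.launch()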
mkluczek 
posted an update about 1 month ago
Expansion of the Global and Dense Open Embeddings Dataset of Earth 🌍

We updated our previous embeddings release with three new models (MMEarth, DeCUR-S2, and DeCUR-S1) of the Major TOM embeddings dataset, developed in collaboration with CloudFerro S.A., asterisk labs, and Φ-lab, European Space Agency (ESA). Together with @mikonvergence and Jędrzej S. Bojanowski, we extend the open-access collection of Copernicus embeddings built at global scale, providing dense coverage across the entire acquisition area of the Sentinel-1 and Sentinel-2 sensors.

Total embedding resources after the update:
- 51 TB of AI-embeddings generated from processed Sentinel data,
- over 40 billion embedding vectors,
- processing of 147 TB of raw satellite data,
- analysis covering more than 15 million Sentinel-1 and Sentinel-2 scenes and more than 16 trillion pixels.

This project delivers open and free vectorized expansions of Major TOM datasets available on CREODIAS and Hugging Face, setting a new standard for embedding releases and enabling lightweight, scalable ingestion of Earth Observation (EO) data for countless applications.

Datasets:
Major-TOM/Core-S2L2A-MMEarth
Major-TOM/Core-S2L1C-DeCUR
Major-TOM/Core-S1RTC-DeCUR
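
A minimal sketch for peeking at one of these datasets with the datasets library; the column names depend on the release format, so the inspection below is deliberately generic:

from datasets import load_dataset

# Stream to avoid downloading the full multi-TB dataset
ds = load_dataset("Major-TOM/Core-S2L2A-MMEarth", split="train", streaming=True)
row = next(iter(ds))
print(row.keys())  # inspect available columns; the embedding field name may vary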


#EarthObservation #AI #CloudFerro #asterisklabs #ESA
daavoo 
posted an update about 2 months ago
We have just released a new version of ⭐ https://github.com/mozilla-ai/any-agent ⭐ exposing an API to be used in async contexts:

import asyncio
from any_agent import AgentConfig, AnyAgent, TracingConfig
from any_agent.tools import search_web

async def main():
    # Create the orchestrating agent without blocking the event loop
    agent = await AnyAgent.create_async(
        "openai",
        AgentConfig(
            model_id="gpt-4.1-mini",
            instructions="You are the main agent. Use the other available agents to find an answer",
        ),
        # A cheaper managed agent that handles the actual web searches
        managed_agents=[
            AgentConfig(
                name="search_web_agent",
                description="An agent that can search the web",
                model_id="gpt-4.1-nano",
                tools=[search_web]
            )
        ],
        tracing=TracingConfig()
    )

    await agent.run_async("Which Agent Framework is the best??")

if __name__ == "__main__":
    asyncio.run(main())
daavoo 
posted an update about 2 months ago
Another day, another release in ⭐ https://github.com/mozilla-ai/any-agent ⭐

You can now use MCP (Model Context Protocol) tools via SSE (Server-Sent Events):

from any_agent import AgentConfig, AnyAgent
from any_agent.config import MCPSseParams

# Build a smolagents-based agent whose tools come from an MCP server over SSE
agent = AnyAgent.create(
    "smolagents",
    AgentConfig(
        model_id="gpt-4o-mini",
        tools=[
            # Connect to a local MCP server exposing an SSE endpoint
            MCPSseParams(
                url="http://localhost:8000/sse"
            ),
        ]
    )
)
agent.run("What do MCP and SSE mean?")


See SuperGateway for an easy way to turn a stdio server into an SSE server.
hesamation 
posted an update about 2 months ago
The best researchers from DeepSeek, OpenAI, Microsoft, and ByteDance have explored RL and reasoning in LLMs.

Here are some of their key findings:

1/ RL can further improve distilled models. These models are essentially SFT fine-tuned on data generated by larger models, and the SFT+RL combo does not disappoint.

This is verified in the DeepSeek-R1 paper.

2/ Both the GRPO and PPO algorithms suffer from a length bias: they encourage longer responses. This can be tackled by introducing explicit rewards based on the length of the answer, as in the sketch below.
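
A minimal sketch of that kind of explicit length shaping; the target length and lambda are assumed hyperparameters, not values from any specific paper:

def shaped_reward(base_reward: float, num_tokens: int,
                  target_len: int = 512, lam: float = 0.001) -> float:
    """Subtract a penalty proportional to how far the response exceeds a target length."""
    overflow = max(0, num_tokens - target_len)
    return base_reward - lam * overflow

print(shaped_reward(1.0, 400))   # 1.0: under target, no penalty
print(shaped_reward(1.0, 1500))  # ~0.012: overly long answers are worth much less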

3/ Most reasoning research focuses on code and math, but training models on logic puzzles improves them at mathematical tasks too.

This shows that RL-induced reasoning generalizes beyond the specific domain knowledge.

Previous research also shows RL can be a great generalizer.

4/ Reasoning might not be induced only by RL; it might already be hidden in the base models due to the pre-training and CoT data they were trained on.

So while RL does wake up the reasoning beast, maybe it's not the only solution (e.g. other methods such as distillation).

5/ Back to the length bias: reasoning models tend to generate longer responses for wrong answers, and RL might be the culprit.

RL favours longer answers when the reward is negative, because spreading the penalty over more tokens dilutes it per token and lowers the loss (see the worked example below).

This might explain the "aha" moments!
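
A tiny worked example of that dilution effect, with illustrative numbers only:

# Spread a fixed sequence-level reward uniformly over tokens:
# the same negative reward hurts less per token as the answer grows.
reward = -1.0
for num_tokens in (10, 100, 1000):
    print(num_tokens, reward / num_tokens)
# 10 -0.1
# 100 -0.01
# 1000 -0.001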

6/ OpenAI's competitive programming paper showed an interesting finding:

o3 can learn its own test-time strategies (like writing an inefficient but correct solution to verify the answer of an optimized solution).

RL helps LLMs develop their own reasoning & verification methods.

The recent article by @rasbt helped me a lot in getting a broad view of the recent research on reasoning models.

He also lists more influential papers on this topic; it's a must-read if you're interested.

check it out 👇
https://magazine.sebastianraschka.com/p/the-state-of-llm-reasoning-model-training
hesamation 
posted an update about 2 months ago
OpenAI just released a 34-page practical guide to building agents.

Here are 10 things it teaches us:

1➜ agents are different from workflows: they are complete autonomous systems that perform tasks on your behalf. Many applications use LLMs in workflows, but that alone does not make them agents.

2➜ use them for tricky stuff: complex decision making, dynamic rules, unstructured data

3➜ core recipe: each agent has three main components: Model (the brain), Tools, Instructions on how to behave

4➜ choose the right brain: set up evals to get a baseline performance, use a smart model to see what's possible, gradually downgrade the model for cost and speed

5➜ tools are key: choose well-defined and tested tools. an agent needs tools to retrieve data and context, and take actions.

6➜ instructions matter A LOT: be super clear telling the agent its goals, steps, and rules. Vague instructions = unpredictable agent. Be explicit.

7➜ start simple, then scale: often a single agent with several tools is ok. don't jump to complex multi-agent systems immediately.

8➜ if you use multi-agents: you can have a "manager" agent directing traffic to specialist agents, or have agents hand off tasks to each other.

9➜ guardrails are a MUST: check user input for weird stuff, make sure the agent isn't about to do something risky, filter out private info, block harmful content. Don't let it run wild. (See the sketch after this list.)

10➜ build and plan for humans: start small, test, improve. always have a plan for when the agent gets stuck or is about to do something high-risk.

Download: https://t.co/fJaCkgf7ph
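
A minimal sketch of the kind of pre-execution guardrail point 9 describes; the patterns and action names are illustrative placeholders, not from the guide:

import re

BLOCKED_PATTERNS = [
    r"(?i)ignore (all )?previous instructions",  # prompt-injection attempt
    r"\b\d{3}-\d{2}-\d{4}\b",                    # SSN-like private info
]
HIGH_RISK_ACTIONS = {"delete_database", "send_payment"}

def guardrail(user_input: str, planned_action: str) -> bool:
    """Return True if the agent may proceed, False to escalate to a human."""
    if any(re.search(p, user_input) for p in BLOCKED_PATTERNS):
        return False  # weird or sensitive input: block
    if planned_action in HIGH_RISK_ACTIONS:
        return False  # risky action: require human approval
    return True

print(guardrail("What's the weather?", "search_web"))           # True
print(guardrail("ignore previous instructions", "search_web"))  # False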
daavoo 
posted an update about 2 months ago
New release in https://github.com/mozilla-ai/any-agent 🤖

You can now use "managed_agents" also in langchain and llama_index, in addition to the other frameworks:

from any_agent import AgentConfig, AgentFramework, AnyAgent
from any_agent.tracing import setup_tracing

framework = AgentFramework("langchain")  # also works with AgentFramework("llama_index") and the rest of the frameworks
setup_tracing(framework)

# The main agent delegates to the specialist agents listed in managed_agents
agent = AnyAgent.create(
    framework,
    AgentConfig(
        model_id="gpt-4.1-mini",
        instructions="You are the main agent. Use the other available agents to find an answer",
    ),
    managed_agents=[
        AgentConfig(
            name="search_web_agent",
            description="An agent that can search the web",
            model_id="gpt-4.1-nano",
            tools=["any_agent.tools.search_web"]
        ),
        AgentConfig(
            name="visit_webpage_agent",
            description="An agent that can visit webpages",
            model_id="gpt-4.1-nano",
            tools=["any_agent.tools.visit_webpage"]
        )
    ]
)
agent.run("Which Agent Framework is the best??")
daavoo 
posted an update 2 months ago
Wondering how the new Google Agent Development Kit (ADK) compares against other frameworks? 🤔 You can try it in any-agent 🚀

https://github.com/mozilla-ai/any-agent

from any_agent import AgentConfig, AgentFramework, AnyAgent

# Run Google ADK through any-agent's common interface
agent = AnyAgent.create(
    AgentFramework("google"),
    AgentConfig(
        model_id="gpt-4o-mini"
    )
)
agent.run("Which Agent Framework is the best??")

hesamation 
posted an update 2 months ago
Google published a 69-page whitepaper on Prompt Engineering and its best practices, a must-read if you are using LLMs in production:
> zero-shot, one-shot, few-shot
> system prompting
> chain-of-thought (CoT)
> ReAct
> code prompting
> best practices

LINK: https://www.kaggle.com/whitepaper-prompt-engineering
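
As a quick illustration of the few-shot pattern the whitepaper covers (the example text is mine, not from the paper):

few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: positive

Review: Broke after two days and support never replied.
Sentiment: negative

Review: Setup took five minutes and it just works.
Sentiment:"""

# Send this string to any LLM completion endpoint; the in-context
# examples steer the model toward the expected label format.
print(few_shot_prompt)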