Etherll (PRO)

AI & ML interests

None yet

Organizations

Replete-AI · Artificial Consciousness Organization · Hugging Face Discord Community · Skye Team · AI Starter Pack

Etherll's activity

reacted to MrDragonFox's post with 👍🔥 2 days ago
yet another audio dataset, pre-classified for events + audio aesthetics

this time for German - 680h sampled from Emilia-YODAS

timestamps for ASR training or other fancier things are available as NC in the raw repo

MrDragonFox/DE_Emilia_Yodas_680h

CC BY 4.0, as per Emilia-YODAS

raw events / transcriptions are CC BY-NC 4.0

MrDragonFox/DE_Emilia_Yodas_680h_raw_timestamps

in the coming days I should push about 600h of English + some Japanese too, in the same format
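
(A sketch, not from the post: if these are standard Hugging Face dataset repos, something like the following should pull them with the datasets library; the "train" split name is an assumption about the repo layout.)

from datasets import load_dataset

# 680h of German audio, pre-classified for events + aesthetics (CC BY 4.0).
audio = load_dataset("MrDragonFox/DE_Emilia_Yodas_680h", split="train")

# Raw events / transcriptions with timestamps (CC BY-NC 4.0).
raw = load_dataset("MrDragonFox/DE_Emilia_Yodas_680h_raw_timestamps", split="train")

print(audio)
print(raw)
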
New activity in Etherll/CodeFIM-Data 3 days ago
reacted to merterbak's post with 🔥 7 days ago
reacted to danielhanchen's post with 🚀❤️🔥🤗 8 days ago
reacted to hesamation's post with ❤️ 10 days ago
The best researchers from Yale, Stanford, Google DeepMind, and Microsoft laid out all we know about agents in a 264-page paper [book].

Here are some of their key findings:

They map different agent components, such as perception, memory, and world modelling, to different regions of the human brain and compare them:

- the brain is much more energy-efficient
- agents have no genuine experience
- the brain learns continuously, while agents are static

An agent is broken down into:
- Perception: the agent's input mechanism; can be improved with multi-modality, feedback mechanisms (e.g., human corrections), etc.
- Cognition: learning, reasoning, planning, memory. LLMs are key in this part.
- Action: the agent's output and tool use (a toy loop sketch follows below).
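
A toy sketch of that perception -> cognition -> action loop (all names below are illustrative assumptions, not from the paper; the cognition step is where an LLM call would sit in practice):

from dataclasses import dataclass, field

@dataclass
class Agent:
    # Short-term memory; stands in for the LLM context window.
    memory: list = field(default_factory=list)

    def perceive(self, observation: str) -> str:
        # Perception: turn raw input (text, audio, human corrections, ...) into a usable percept.
        return observation.strip()

    def think(self, percept: str) -> str:
        # Cognition: reasoning / planning over memory; in practice an LLM call goes here.
        self.memory.append(percept)
        return f"plan for: {percept}"

    def act(self, plan: str) -> str:
        # Action: produce output or call a tool.
        return f"executing {plan}"

agent = Agent()
print(agent.act(agent.think(agent.perceive("  user asks for tomorrow's weather  "))))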

Agentic memory is represented as:
- Sensory memory: short-term holding of inputs, which is not emphasized much in agents.
- Short-term memory: the LLM context window.
- Long-term memory: external storage such as RAG or knowledge graphs.

The memory in agents can be improved and researched in terms of:
- increasing the amount of stored information
- how to retrieve the most relevant info
- combining context-window memory with external memory (see the sketch after this list)
- deciding what to forget or update in memory
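
A toy sketch of combining context-window (short-term) memory with an external (long-term) store; the class name and the keyword-overlap retrieval are assumptions for illustration, a real system would use embeddings or a knowledge graph:

from collections import deque

class AgentMemory:
    def __init__(self, context_size: int = 4):
        self.short_term = deque(maxlen=context_size)  # bounded, like an LLM context window
        self.long_term: list[str] = []                # external storage (RAG index, knowledge graph, ...)

    def remember(self, item: str) -> None:
        # New items enter short-term memory; whatever falls out of the window is archived, not forgotten.
        if len(self.short_term) == self.short_term.maxlen:
            self.long_term.append(self.short_term[0])
        self.short_term.append(item)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Naive relevance scoring by keyword overlap, just to show the retrieval step.
        scored = sorted(self.long_term, key=lambda doc: -len(set(query.split()) & set(doc.split())))
        return list(self.short_term) + scored[:k]

mem = AgentMemory()
for fact in ["user likes tea", "meeting at 5pm", "project uses Rust", "user dislikes coffee", "user asked about trains"]:
    mem.remember(fact)
print(mem.retrieve("what does the user like to drink"))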

The agent must simulate or predict the future states of the environment for planning and decision-making.

AI world models are much simpler than humans', which rely on causal reasoning (cause and effect) and physical intuition.

LLM world models are mostly implicit and embedded.

EMOTIONS are a deep aspect of humans, helping them with social interactions, decision-making, or learning.

Agents must understand emotions to better interact with us.

But rather than encoding the feeling of emotions, agents only model them at a surface level.

Perception is the process by which an agent receives and interprets raw data from its surroundings.

READ PAPER: Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems (2504.01990)
reacted to MohamedRashad's post with 👀❤️ 17 days ago
I collected recitations of the Holy Quran from 20 different reciters and uploaded the full dataset here:
MohamedRashad/Quran-Recitations

Check it out 🥷
liked a Space 19 days ago