Bruna Trevelin

brunatrevelin

AI & ML interests

None yet

Organizations

Hugging Face, Society & Ethics, HF Legal, Blog-explorers, huggingPartyParis, Women on Hugging Face, Journalists on Hugging Face, Hugging Face for Legal, Nerdy Face, LeRobot Worldwide Hackathon

brunatrevelin's activity

reacted to evijit's post with πŸ€— 3 days ago
upvoted an article 3 days ago

AI Policy @πŸ€—: Response to the 2025 National AI R&D Strategic Plan

By evijit and 2 others β€’ 12
upvoted an article 21 days ago

Reduce, Reuse, Recycle: Why Open Source is a Win for Sustainability

By sasha and 1 other β€’ 14
upvoted an article about 1 month ago

Consent by Design: Approaches to User Data in Open AI Ecosystems

By giadap and 1 other β€’ 13
upvoted an article about 2 months ago

Hugging Face to sell open-source robots thanks to Pollen Robotics acquisition πŸ€–

By thomwolf and 2 others β€’ 46
reacted to yjernite's post with πŸ”₯ about 2 months ago
Today in Privacy & AI Tooling - introducing a nifty new tool to examine where data goes in open-source apps on πŸ€—

HF Spaces have tons (100Ks!) of cool demos leveraging or examining AI systems - and because most of them are OSS we can see exactly how they handle user data πŸ“šπŸ”

That requires actually reading the code though, which isn't always easy or quick! Good news: code LMs have gotten pretty good at automatic review, so we can offload some of the work - here I'm using Qwen/Qwen2.5-Coder-32B-Instruct to generate reports and it works pretty OK πŸ™Œ

The app works in four stages:
1. Download all code files
2. Use the Code LM to generate a detailed report pointing to code where data is transferred/(AI-)processed (screen 1)
3. Summarize the app's main functionality and data journeys (screen 2)
4. Build a Privacy TLDR with those inputs

It comes with a bunch of pre-reviewed apps/Spaces, great to see how many process data locally or through (private) HF endpoints πŸ€—

Note that this is a POC, lots of exciting work to do to make it more robust, so:
- try it: yjernite/space-privacy
- reach out to collab: yjernite/space-privacy
upvoted an article 2 months ago

I Clicked β€œI Agree”, But What Am I Really Consenting To?

By giadap β€’ 24
reacted to giadap's post with πŸ”₯ 2 months ago
We've all become experts at clicking "I agree" without a second thought. In my latest blog post, I explore why these traditional consent models are increasingly problematic in the age of generative AI.

I found three fundamental challenges:
- Scope problem: how can you know what you're agreeing to when AI could use your data in different ways?
- Temporality problem: once an AI system learns from your data, good luck trying to make it "unlearn" it.
- Autonomy trap: the data you share today could create systems that pigeonhole you tomorrow.

Individual users shouldn't bear all the responsibility, while big tech holds all the cards. We need better approaches to level the playing field, from collective advocacy and stronger technological safeguards to establishing "data fiduciaries" with a legal duty to protect our digital interests.

Available here: https://huggingface.co/blog/giadap/beyond-consent
upvoted an article 3 months ago

AI Policy: πŸ€— Response to the White House AI Action Plan RFI

By yjernite and 2 others β€’ 26
reacted to AdinaY's post with πŸ”₯ 3 months ago
Two AI startups, DeepSeek and Moonshot AI, keep moving in perfect sync πŸ‘‡

✨ Last December: DeepSeek & Moonshot AI released their reasoning models on the SAME DAY.
DeepSeek: deepseek-ai/DeepSeek-R1
MoonShot: https://github.com/MoonshotAI/Kimi-k1.5

✨ Last week: Both teams published papers on modifying attention mechanisms on the SAME DAY AGAIN.
DeepSeek: Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention (2502.11089)
Moonshot: MoBA: Mixture of Block Attention for Long-Context LLMs (2502.13189)

✨ TODAY:
DeepSeek unveiled FlashMLA: an efficient MLA decoding kernel for NVIDIA Hopper GPUs, optimized for variable-length sequences.
https://github.com/deepseek-ai/FlashMLA

Moonshot AI introduces Moonlight: a 3B/16B MoE trained on 5.7T tokens using Muon, pushing the Pareto frontier with fewer FLOPs.
moonshotai/Moonlight-16B-A3B

What's next? πŸ‘€
upvoted 2 articles 4 months ago

Announcing AI Energy Score Ratings

By sasha β€’ 27

ROOST: Safety Tooling needs Open TechπŸ“πŸ€—

By yjernite β€’ 5
upvoted an article 5 months ago

Democratization of AI, Open Source, and AI Auditing: Thoughts from the DisinfoCon Panel in Berlin

By frimelle β€’ 6
New activity in deepghs/sankaku_full 5 months ago
upvoted an article 6 months ago

πŸ‡ͺπŸ‡ΊβœοΈ EU AI Act: Systemic Risks in the First CoP Draft Comments ✍️πŸ‡ͺπŸ‡Ί

By yjernite and 1 other β€’ 14
reacted to yjernite's post with ❀️ 6 months ago
πŸ‡ͺπŸ‡Ί Policy Thoughts in the EU AI Act Implementation πŸ‡ͺπŸ‡Ί

There is a lot to like in the first draft of the EU GPAI Code of Practice, especially as regards transparency requirements. The Systemic Risks part, on the other hand, is concerning for both smaller developers and for external stakeholders.

I wrote more on this topic ahead of the next draft. TLDR: more attention to immediate large-scale risks and to collaborative solutions supported by evidence can help everyone - as long as developers disclose sufficient information about their design choices and deployment contexts.

Full blog here, based on our submitted response with @frimelle and @brunatrevelin :

https://huggingface.co/blog/yjernite/eu-draft-cop-risks#on-the-proposed-taxonomy-of-systemic-risks
β€’ 2 replies