Before starting the second night of the Agents & MCP Hackathon, I wanted to briefly share my progress from last night. Not much sleep, but lots of progress!
Also, I managed to get a rough draft of the Gradio app + MCP server done, but it is not yet in a state where I can share the space with you. You will be able to define the question the AI participants should discuss, decide on the protocol, assign roles (for example, a devil's advocate at the table), and define the communication pattern. Lastly, you can choose which AI should act as moderator and how many rounds of discussion there should be. You can see my progress in the attached image.
Most of the options are just placeholders right now, and I will work on their implementation tonight. Hopefully, I can add an MVP tomorrow evening to the following space: Agents-MCP-Hackathon/consilium_mcp.
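To make the planned options more concrete, here is a minimal sketch of how the discussion loop could work: participants with assigned roles (including a devil's advocate), a designated moderator, and a fixed number of rounds. All names, the round-robin protocol, and the function structure are my assumptions for illustration, not the actual implementation in the space.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    role: str  # e.g. "panelist", "devil's advocate"

def run_discussion(question, participants, moderator, rounds=3):
    """Hypothetical round-robin protocol: each round, every participant
    speaks once, then the moderator summarizes. The placeholder strings
    would be replaced by real model calls in the actual app."""
    transcript = [f"Question: {question}"]
    for r in range(1, rounds + 1):
        for p in participants:
            transcript.append(f"[round {r}] {p.name} ({p.role}): ...")
        transcript.append(f"[round {r}] {moderator.name} (moderator): summary ...")
    return transcript

panel = [
    Participant("Model-A", "panelist"),
    Participant("Model-B", "devil's advocate"),
]
log = run_discussion("Should we adopt MCP?", panel,
                     Participant("Model-C", "moderator"), rounds=2)
```

Swapping the communication pattern (e.g. moderator-routed instead of round-robin) would only change the inner loop, which keeps the protocol choice a simple configuration option.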
I am also very interested in the cool stuff you all are building; please let me know in the comments. :)
✨ 3 models: 7B / 32B / Mix-3-32B (MIT license)
✨ Dataset: 35 verifiable logic tasks (Sudoku, Cipher, Arrow Maze, etc.)
✨ RL training with auto-verifiable rewards
✨ Generalizes to math without explicit math training
✨ +6 pts on BBEH, +9.5 on KOR-Bench vs. baselines
✨ Apache 2.0
✨ Handles 10,000+ frames on a single GPU
✨ 2,048-frame encoding in just 12 s
✨ Efficient chunk-based prefilling & bi-granularity KV decoding
🔥 New benchmark & dataset for Subject-to-Video generation
OPENS2V-NEXUS by Peking University
✨ Fine-grained evaluation for subject consistency: BestWishYsh/OpenS2V-Eval
✨ 5M-scale dataset: BestWishYsh/OpenS2V-5M
✨ New metrics: automatic scores for identity, realism, and text match
✨ Emotion-controlled, high-dynamic avatar videos
✨ Multi-character support with separate audio control
✨ Works with any style (cartoon, 3D, real face) while keeping identity consistent