
Awario


AI & ML interests

None yet

Recent Activity

reacted to alibabasglab's post with 👍 about 11 hours ago
We are thrilled to present the improved "ClearerVoice-Studio", an open-source platform designed to make speech processing easy to use for everyone! Whether you're working on speech enhancement, speech separation, speech super-resolution, or target speaker extraction, this unified platform has you covered.

**Why Choose ClearerVoice-Studio?**
- Pre-Trained Models: Includes cutting-edge pre-trained models, fine-tuned on extensive, high-quality datasets. No need to start from scratch!
- Ease of Use: Designed for seamless integration with your projects, offering a simple yet flexible interface for inference and training.

**Where to Find Us?**
- GitHub Repository: ClearerVoice-Studio (https://github.com/modelscope/ClearerVoice-Studio)
- Try Our Demo: Hugging Face Space (https://huggingface.co/spaces/alibabasglab/ClearVoice)

**What Can You Do with ClearerVoice-Studio?**
- Enhance noisy speech recordings to achieve crystal-clear quality.
- Separate speech from complex audio mixtures with ease.
- Transform low-resolution audio into high-resolution audio. A fully upscaled LJSpeech-1.1-48kHz dataset can be downloaded from https://huggingface.co/datasets/alibabasglab/LJSpeech-1.1-48kHz.
- Extract target speaker voices with precision using audio-visual models.

**Join Us in Growing ClearerVoice-Studio!**
We believe in the power of open-source collaboration. By starring our GitHub repository and sharing ClearerVoice-Studio with your network, you can help us grow this community-driven platform.

**Support us by:**
- Starring it on GitHub.
- Exploring and contributing to our codebase.
- Sharing your feedback and use cases to make the platform even better.
- Joining our community discussions to exchange ideas and innovations.

Together, let's push the boundaries of speech processing! Thank you for your support! 💖
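The enhancement task mentioned above can be illustrated with a toy spectral-gating denoiser. This is a minimal NumPy sketch of the general idea (suppress frequency bins that stay near the noise floor), not ClearerVoice-Studio's actual pre-trained models; the function name and parameters are illustrative only.

```python
import numpy as np

def spectral_gate(signal, frame=256, threshold=2.0):
    """Toy speech enhancement: zero out STFT bins whose magnitude
    stays below a multiple of the estimated noise floor."""
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame)
    spec = np.fft.rfft(frames, axis=1)        # per-frame spectrum
    mag = np.abs(spec)
    # Most bins contain only noise, so the global median magnitude
    # is a crude but serviceable noise-floor estimate.
    noise_floor = np.median(mag)
    mask = mag > threshold * noise_floor      # keep only strong bins
    cleaned = np.fft.irfft(spec * mask, n=frame, axis=1)
    return cleaned.reshape(-1)

# Noisy test signal: a 440 Hz tone plus white noise at 8 kHz.
rng = np.random.default_rng(0)
t = np.arange(8192) / 8000.0
noisy = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(8192)
clean = spectral_gate(noisy)
```

Real systems replace the hand-set threshold with learned models, but the input/output contract (noisy waveform in, enhanced waveform out) is the same shape as the platform's enhancement task.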
reacted to sanaka87's post with 🔥 about 11 hours ago
🚀 Excited to Share Our Latest Work: 3DIS & 3DIS-FLUX for Multi-Instance Layout-to-Image Generation! ❤️❤️❤️

- 🎨 Daily Paper: https://huggingface.co/papers/2501.05131#community
- 🔓 Code is now open source!
- 🌐 Project Website: https://limuloo.github.io/3DIS/
- 🏠 GitHub Repository: https://github.com/limuloo/3DIS
- 📄 3DIS Paper: https://arxiv.org/abs/2410.12669
- 📄 3DIS-FLUX Tech Report: https://arxiv.org/abs/2501.05131

**🔥 Why 3DIS & 3DIS-FLUX?**
Current SOTA multi-instance generation methods are typically adapter-based, requiring additional control modules trained on pre-trained models for layout and instance-attribute control. However, with the emergence of more powerful models like FLUX and SD3.5, these methods demand constant retraining and extensive resources.

**✨ Our Solution: 3DIS**
We introduce a decoupled approach that only requires training a low-resolution Layout-to-Depth model to convert layouts into coarse-grained scene depth maps. Leveraging community and company pre-trained models like ControlNet + SAM2, we enable training-free controllable image generation on high-resolution models such as SDXL and FLUX.

**🌟 Benefits of Our Decoupled Multi-Instance Generation:**
1. Enhanced Control: By constructing scenes using depth maps in the first stage, the model focuses on coarse-grained scene layout, improving control over instance placement.
2. Flexibility & Preservation: The second stage employs training-free rendering methods, allowing seamless integration with various models (e.g., fine-tuned weights, LoRA) while maintaining the generative capabilities of pre-trained models.

Join us in advancing Layout-to-Image Generation! Follow and star our repository to stay updated! ⭐
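The decoupled two-stage flow described in the post can be sketched as data plumbing: stage 1 turns an instance layout into a coarse depth map, and stage 2 hands that map (upsampled) to a pre-trained depth-conditioned generator. This toy NumPy sketch only mimics the shapes and occlusion logic; the real stage 1 is a trained Layout-to-Depth model and the real stage 2 uses models like a depth ControlNet on SDXL/FLUX, and all function names and values here are illustrative assumptions.

```python
import numpy as np

def layout_to_depth(boxes, size=64):
    """Stage 1 (toy): rasterize instance boxes into a coarse scene
    depth map. boxes: (x0, y0, x1, y1, depth) in [0, 1] coordinates;
    nearer instances (smaller depth) overwrite farther ones."""
    depth = np.ones((size, size))  # background = farthest plane
    for x0, y0, x1, y1, d in sorted(boxes, key=lambda b: -b[4]):
        r0, r1 = int(y0 * size), int(y1 * size)
        c0, c1 = int(x0 * size), int(x1 * size)
        depth[r0:r1, c0:c1] = d
    return depth

def depth_conditioning(depth, factor=16):
    """Stage 2 input (toy): nearest-neighbour upsample of the coarse
    map to the generator's working resolution. In 3DIS this signal
    conditions a pre-trained high-resolution model without retraining."""
    return np.kron(depth, np.ones((factor, factor)))

boxes = [
    (0.1, 0.2, 0.5, 0.9, 0.3),  # near instance
    (0.4, 0.1, 0.9, 0.8, 0.7),  # farther instance, partly occluded
]
coarse = layout_to_depth(boxes)          # 64x64 coarse scene depth
cond = depth_conditioning(coarse)        # 1024x1024 conditioning map
```

The point of the decoupling is visible even in the toy version: the layout/occlusion reasoning lives entirely in stage 1, so stage 2 can be swapped for any newer depth-conditioned generator without retraining anything.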

Organizations

None yet

models

None public yet

datasets

None public yet