OneEncoder: A Unified Text & Image Model
OneEncoder is a lightweight framework for cross-modal alignment, focused on efficiently integrating text and images (with future extensions to other modalities). Unlike traditional methods that rely on massive modality-specific encoders, OneEncoder aligns data types progressively, making it cost-effective and performant even on small paired datasets.
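As a rough illustration of this progressive-alignment idea, the PyTorch sketch below trains a small shared projection on top of features from frozen modality-specific encoders using a CLIP-style contrastive loss. The class name, dimensions, and loss are illustrative assumptions, not the exact OneEncoder implementation; see the paper and repository for the real architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UniversalProjection(nn.Module):
    """Lightweight shared layer aligning frozen text and image encoder outputs.

    Names and dimensions here are illustrative assumptions, not the actual
    OneEncoder modules.
    """

    def __init__(self, text_dim=768, image_dim=768, shared_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.image_proj = nn.Linear(image_dim, shared_dim)
        # Learnable temperature, initialised like CLIP's log(1/0.07).
        self.logit_scale = nn.Parameter(torch.tensor(2.6592))

    def forward(self, text_features, image_features):
        t = F.normalize(self.text_proj(text_features), dim=-1)
        v = F.normalize(self.image_proj(image_features), dim=-1)
        return t, v


def contrastive_loss(text_emb, image_emb, logit_scale):
    """Symmetric InfoNCE loss over a batch of paired text/image embeddings."""
    logits = logit_scale.exp() * text_emb @ image_emb.t()
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


# Toy step: random tensors stand in for pooled outputs of frozen encoders
# (e.g. BERT for text, a ViT for images); only the projection is trained.
model = UniversalProjection()
text_feats, image_feats = torch.randn(8, 768), torch.randn(8, 768)
t, v = model(text_feats, image_feats)
loss = contrastive_loss(t, v, model.logit_scale)
loss.backward()
print(f"alignment loss: {loss.item():.4f}")
```

Keeping the underlying encoders frozen is what makes adding a new modality cheap: only the small projection for that modality needs training.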
Key Features
- Multimodal Alignment: initially supports text and images, with extensions to other modalities.
- Lightweight & Efficient: avoids full retraining when adding new modalities.
- Superior Performance: outperforms models that require large specialized datasets.
Applications
- Visual Question Answering (VQA)
- Image-Text Retrieval (see the retrieval sketch after this list)
- Multimodal Content Understanding
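Once text and images live in the same aligned space, image-text retrieval reduces to nearest-neighbour search over embeddings. The snippet below is a minimal sketch assuming you already have OneEncoder-style aligned embeddings; the helper name and dimensions are hypothetical, not part of this model's API.

```python
import torch
import torch.nn.functional as F


def retrieve_images(text_embedding, image_embeddings, top_k=3):
    """Rank candidate images by cosine similarity to a query text embedding."""
    text_embedding = F.normalize(text_embedding, dim=-1)
    image_embeddings = F.normalize(image_embeddings, dim=-1)
    scores = image_embeddings @ text_embedding           # one score per image
    top = scores.topk(min(top_k, image_embeddings.size(0)))
    return top.indices.tolist(), top.values.tolist()


# Placeholder embeddings; in practice these come from the aligned shared space.
query = torch.randn(256)
gallery = torch.randn(100, 256)
indices, scores = retrieve_images(query, gallery)
print("top matches:", indices, scores)
```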
Authors
Bilal FAYE, Hanane AZZAG, Mustapha LEBBAH, Djamel BOUCHAFFRA
Research Paper
arXiv: OneEncoder: Progressive Cross-Modal Alignment
Resources
- GitHub Repo: OneEncoder
- Hugging Face Demo: OneEncoder Retriever
- Demo Notebook: OneEncoder Demos
- OneEncoder for Text, Image & Audio: HF Model
Base model: google-bert/bert-base-uncased