PE-Lang-L14-448
License: apache-2.0

Model Details

Perception Encoder (PE) is a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. It was introduced in "Perception Encoder: The best visual embeddings are not at the output of the network".

Model Developer: Meta

Model Overview: Perception Encoder (PE) is a family of large-scale vision encoder models with state-of-the-art performance on a wide variety of vision tasks. By using a robust contrastive pretraining recipe and finetuning on synthetically aligned videos, PE not only outperforms all existing models on classification and retrieval, but it also internally produces strong, general features that scale to downstream tasks. With alignment tuning, PE allows large-scale contrastive pretraining to transfer to those downstream tasks, capitalizing on its general features.

| Scale | Tower  | Params | Width | Depth | MLP  | Heads | CLIP Dim | Resolution | Patch Size | Text Context Length |
|-------|--------|--------|-------|-------|------|-------|----------|------------|------------|---------------------|
| B     | Vision | 0.09B  | 768   | 12    | 3072 | 12    | 1024     | 384        | 16         | 32                  |
| B     | Text   | 0.31B  | 1024  | 24    | 4096 | 16    | 1024     | 384        | 16         | 32                  |
| L     | Vision | 0.32B  | 1024  | 24    | 4096 | 16    | 1024     | 336        | 14         | 32                  |
| L     | Text   | 0.31B  | 1024  | 24    | 4096 | 16    | 1024     | 336        | 14         | 32                  |
| G     | Vision | 1.88B  | 1536  | 50    | 8960 | 16    | 1280     | 392        | 14         | 72                  |
| G     | Text   | 0.47B  | 1280  | 24    | 5120 | 20    | 1280     | 392        | 14         | 72                  |
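
For quick reference in code, the table above can be captured as a plain dictionary. This is a hypothetical convenience mapping for illustration only, not part of any released API; all names below are made up:

```python
# Hypothetical encoding of the architecture table above (illustrative only).
PE_CONFIGS = {
    "B": {
        "vision": dict(params="0.09B", width=768, depth=12, mlp=3072, heads=12),
        "text":   dict(params="0.31B", width=1024, depth=24, mlp=4096, heads=16),
        "clip_dim": 1024, "resolution": 384, "patch_size": 16, "context_length": 32,
    },
    "L": {
        "vision": dict(params="0.32B", width=1024, depth=24, mlp=4096, heads=16),
        "text":   dict(params="0.31B", width=1024, depth=24, mlp=4096, heads=16),
        "clip_dim": 1024, "resolution": 336, "patch_size": 14, "context_length": 32,
    },
    "G": {
        "vision": dict(params="1.88B", width=1536, depth=50, mlp=8960, heads=16),
        "text":   dict(params="0.47B", width=1280, depth=24, mlp=5120, heads=20),
        "clip_dim": 1280, "resolution": 392, "patch_size": 14, "context_length": 72,
    },
}

# Example: number of image tokens for the L-scale vision tower.
cfg = PE_CONFIGS["L"]
num_patches = (cfg["resolution"] // cfg["patch_size"]) ** 2  # (336 / 14)^2 = 576
print(num_patches)
```

For instance, the L-scale tower at 336 px resolution with 14 px patches yields 576 image tokens per input.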

How to use

PE codebase

We provide the pretraining code at https://github.com/facebookresearch/perception_models

You can find more details in the GitHub repo.
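
As a minimal usage sketch, the snippet below follows the loading pattern from the perception_models README. The module paths, the `PE-Lang-L14-448` config name, and the helper functions (`pe.VisionTransformer.from_config`, `transforms.get_image_transform`) are assumptions based on that repo at the time of writing and may drift; treat this as a sketch, not a definitive recipe:

```python
import torch
from PIL import Image

# These imports assume the perception_models repo layout; check its README
# if the module paths have moved.
import core.vision_encoder.pe as pe
import core.vision_encoder.transforms as transforms

# Load the PE-Lang vision tower (assumed pattern; weights are downloaded
# from Hugging Face when pretrained=True).
model = pe.VisionTransformer.from_config("PE-Lang-L14-448", pretrained=True)
model = model.cuda().eval()

# 448 px input resolution for this checkpoint (per the model name).
preprocess = transforms.get_image_transform(448)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).cuda()

with torch.no_grad(), torch.autocast("cuda"):
    # Expected output: per-token visual features suitable as input
    # to a downstream language model.
    features = model(image)

print(features.shape)
```

Since PE-Lang is the language-aligned vision tower, its output features are intended to be consumed by a multimodal language model rather than compared directly against text embeddings.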

Evaluation

We evaluate the pretrained PE models on zero-shot image and video benchmarks.


Zero-Shot Image Results

Zero-Shot Video Results

The full result tables for both settings are reported in the paper.
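
As background on the protocol behind these numbers: CLIP-style zero-shot evaluation scores an input against a set of text prompts by embedding similarity, and for video a common approach is to average per-frame embeddings before comparison. The sketch below illustrates that generic protocol; the OpenCLIP-style `encode_image` / `encode_text` methods are assumptions, not a documented PE API, and this is not the exact evaluation harness used for PE:

```python
import torch

def zero_shot_video_probs(model, tokenizer, frames, class_prompts):
    """CLIP-style zero-shot scoring of one video against text prompts.

    frames: (T, 3, H, W) tensor of preprocessed frames.
    Assumes OpenCLIP-like encode_image/encode_text methods (an assumption,
    not a documented PE API).
    """
    with torch.no_grad():
        frame_feats = model.encode_image(frames)                    # (T, D)
        frame_feats = frame_feats / frame_feats.norm(dim=-1, keepdim=True)
        video_feat = frame_feats.mean(dim=0, keepdim=True)          # (1, D)
        video_feat = video_feat / video_feat.norm(dim=-1, keepdim=True)

        text_feats = model.encode_text(tokenizer(class_prompts))    # (C, D)
        text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

    # Scaled cosine similarities -> class probabilities, shape (1, C).
    return (100.0 * video_feat @ text_feats.T).softmax(dim=-1)
```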

Citation

If you find our code useful for your research, please consider citing:

@article{PE,
    title={Perception Encoder},
    author={},
    journal={arXiv:xxx.xxxxx},
    year={2025}
}