Cosmos-Embed1: A joint video-text embedder for physical AI

Website | Hugging Face | Demo app

Model Overview

Description

Cosmos-Embed1 is a joint video-text embedder tailored for physical AI. It can be used for text-to-video retrieval, inverse video search, semantic deduplication, zero-shot and k-nearest-neighbors (kNN) classification, and as a base model for video curation tasks. It has state-of-the-art (SOTA) performance on autonomous vehicle (AV) and robotics datasets, while maintaining competitive performance in general domains. This model is ready for commercial use.

Model Developer: NVIDIA

Model Versions

The Cosmos-Embed1 release includes the following embedders:

  • Cosmos-Embed1
    • Cosmos-Embed1-224p (optimized with 8 frames and 224x224 input resolution, 256-dim output text and video embeddings)
    • Cosmos-Embed1-336p (optimized with 8 frames and 336x336 input resolution, 768-dim output text and video embeddings)
    • Cosmos-Embed1-448p (optimized with 8 frames and 448x448 input resolution, 768-dim output text and video embeddings)

Note that while each checkpoint was optimized at a specific fixed resolution (and defaults to it), all checkpoints support arbitrary non-square resolutions.

License

This model is released under the NVIDIA Open Model License. Additional Information: Apache License 2.0; MIT.

For a custom license, please contact [email protected].

Under the NVIDIA Open Model License, NVIDIA confirms:

  • Models are commercially usable.
  • You are free to create and distribute Derivative Models.
  • NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.

Important Note: If you bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism contained in the Model, your rights under NVIDIA Open Model License Agreement will automatically terminate.

Deployment Geography

Global

Use Case

Physical AI, encompassing robotics, autonomous vehicles (AVs), etc.

Release Date

Model Architecture

The architecture is based on QFormer, with modifications for processing video inputs.

The video embedder processes frames individually with a ViT backbone. The per-frame ViT features are concatenated along the temporal dimension and augmented with temporal embeddings. These are then passed to the QFormer, which uses cross-attention to summarize the frames into a compact set of visual query tokens. The visual query tokens are then pooled into a single video embedding. The text embedder processes tokenized text through the self-attention branch of the QFormer to produce a text embedding.

The normalized text and video embeddings are aligned via a contrastive video-text loss, as well as auxiliary losses such as video-text matching and video captioning. For the 336p and 448p variants, we additionally use summary and dense distillation losses.

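To make the data flow concrete, here is a minimal PyTorch-style sketch of the video branch described above. All module names, dimensions, and the mean-pooling choice are illustrative assumptions rather than the released implementation (among other simplifications, the actual QFormer also applies self-attention over the query tokens).

import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoEmbedderSketch(nn.Module):
    """Illustrative sketch of the QFormer-style video branch (not the released implementation)."""

    def __init__(self, vit, vit_dim=768, num_queries=32, num_frames=8, embed_dim=768):
        super().__init__()
        self.vit = vit                                              # per-frame ViT backbone (assumed to return patch tokens)
        self.temporal_embed = nn.Parameter(torch.zeros(num_frames, 1, vit_dim))
        self.query_tokens = nn.Parameter(torch.randn(num_queries, vit_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(vit_dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(vit_dim, embed_dim)

    def forward(self, frames):                                      # frames: (B, T, C, H, W), values in [0, 1]
        b, t = frames.shape[:2]
        feats = self.vit(frames.flatten(0, 1))                      # (B*T, P, D): per-frame patch features
        feats = feats.unflatten(0, (b, t))                          # (B, T, P, D)
        feats = feats + self.temporal_embed[:t].unsqueeze(0)        # add temporal embeddings
        feats = feats.flatten(1, 2)                                 # concatenate along time: (B, T*P, D)
        queries = self.query_tokens.unsqueeze(0).expand(b, -1, -1)  # learned visual query tokens
        summary, _ = self.cross_attn(queries, feats, feats)         # cross-attend queries to frame features
        video_emb = self.proj(summary.mean(dim=1))                  # pool query tokens into one vector
        return F.normalize(video_emb, dim=-1)                       # L2-normalized video embedding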

Input/Output Specifications

  • Input

    • Input Type(s): Text+Video
    • Input Format(s):
      • Text: UTF-8 string
      • Video: sequence of RGB frames as a tensor with values scaled to the range 0 to 1.
    • Input Parameters:
      • Text: One-dimensional (1D)
      • Video: Three-dimensional (3D)
    • Other Properties Related to Input:
      • The input string will be truncated or padded to 128 text tokens. When used for text-to-video (T2V) retrieval, it should contain a short description of the object, scene or action of interest.
      • Arbitrary, non-square resolutions are supported in the inference code. This can be configured at model loading time.
      • The model architecture supports input videos of varying lengths, but it has been optimized for 8 frames, sampled at 1-2 frames per second (FPS).
  • Output

    • Output Type(s): Text+Video
    • Output Format(s):
      • Text: floating-point normalized vector of size 256 (for 224p variant) or 768 (for 336p and 448p variants).
      • Video: floating-point normalized vector of size 256 (for 224p variant) or 768 (for 336p and 448p variants).
    • Output Parameters:
      • Text: One-dimensional (1D)
      • Video: One-dimensional (1D)
    • Other Properties Related to Output: Continuous-valued L2-normalized feature vectors with a dimensionality of 256 or 768. A distance can be calculated between embeddings using cosine distance. Dense intermediate feature maps are also provided for convenience.
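
Because the output vectors are L2-normalized, cosine similarity reduces to a dot product. Below is a minimal illustration, with random placeholder embeddings standing in for real model outputs:

import torch
import torch.nn.functional as F

# placeholder embeddings standing in for real model outputs (768-dim for the 336p/448p variants)
text_emb = F.normalize(torch.randn(4, 768), dim=-1)
video_emb = F.normalize(torch.randn(10, 768), dim=-1)

similarity = text_emb @ video_emb.T           # cosine similarity matrix, shape (4, 10)
cosine_distance = 1.0 - similarity            # cosine distance
best_video_per_text = similarity.argmax(-1)   # nearest video for each text query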

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration

Runtime Engine(s):

  • PyTorch
  • Transformer Engine (optional)

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Hopper
  • NVIDIA Blackwell

Note: We have only tested Cosmos-Embed1 with BF16 precision on Ampere and Hopper GPUs. If you are using older versions of NVIDIA GPUs (e.g., NVIDIA Volta GPUs), you may need to switch to FP32 precision.
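
For example, on a pre-Ampere GPU you might load the model in FP32 instead of BF16. This mirrors the loading call shown in the Usage section below; only the dtype changes:

import torch
from transformers import AutoModel

# FP32 fallback for GPUs without reliable BF16 support (e.g., Volta)
model = AutoModel.from_pretrained("nvidia/Cosmos-Embed1-448p", trust_remote_code=True).to("cuda", dtype=torch.float32)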

Operating System(s)

  • Linux (We have not tested on other operating systems.)

Usage

Installation

The main model and processor dependencies can be fetched with:

pip install transformers einops torch torchvision

One can optionally install Transformer Engine for faster inference:

pip install --no-build-isolation transformer_engine[pytorch]

Example inference

A code snippet for video and text inference is shown below. For a step-by-step guide, please refer to the Jupyter notebook here.

import subprocess

import decord
import numpy as np
import torch
from transformers import AutoModel, AutoProcessor

# load model and pre-processor
model = AutoModel.from_pretrained("nvidia/Cosmos-Embed1-448p", trust_remote_code=True).to("cuda", dtype=torch.bfloat16)
preprocess = AutoProcessor.from_pretrained("nvidia/Cosmos-Embed1-448p", trust_remote_code=True)

# load mock data
video_url = "https://upload.wikimedia.org/wikipedia/commons/3/3d/Branko_Paukovic%2C_javelin_throw.webm"
subprocess.check_call(["wget", "-O", "/tmp/javelin_throw.webm", video_url])
reader = decord.VideoReader("/tmp/javelin_throw.webm")
frame_ids = np.linspace(0, len(reader)-1, 8, dtype=int).tolist()
frames = reader.get_batch(frame_ids).asnumpy()
batch = np.transpose(np.expand_dims(frames, 0), (0, 1, 4, 2, 3))  # BTCHW
captions = [
    "a person riding a motorcycle in the night",
    "a car overtaking a white truck",
    "a video of a knight fighting with a sword",
    "a man wearing red spandex throwing a javelin",
    "a young man javelin throwing during the evening", # distractor
    "a man throwing a javelin with both hands", # distractor
]

# video and text processing
video_inputs = preprocess(videos=batch).to("cuda", dtype=torch.bfloat16)
video_out = model.get_video_embeddings(**video_inputs)
text_inputs = preprocess(text=captions).to("cuda", dtype=torch.bfloat16)
text_out = model.get_text_embeddings(**text_inputs)

# ranking and argmax
probs = (torch.softmax(model.logit_scale.exp() * video_out.visual_proj @ text_out.text_proj.T, dim=-1))[0]
print(captions[probs.argmax()])
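
The same embeddings also support the other workflows listed in the description, such as semantic deduplication. The sketch below reuses video_out.visual_proj from the snippet above; in practice you would stack embeddings from many clips, and the 0.95 threshold is an illustrative assumption to tune per dataset.

# semantic deduplication sketch: keep a clip only if it is not a near-duplicate of an already-kept clip
video_embs = video_out.visual_proj.float()            # (N, D) L2-normalized video embeddings (N == 1 here)
sim = video_embs @ video_embs.T                       # pairwise cosine similarity
dup_threshold = 0.95                                  # illustrative threshold, tune per dataset
keep = []
for i in range(sim.shape[0]):
    if all(sim[i, j] < dup_threshold for j in keep):
        keep.append(i)
print(f"kept {len(keep)} of {sim.shape[0]} clips")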

Training and Evaluation

We train and evaluate the Cosmos-Embed1 models on a variety of video datasets covering zero-shot classification and retrieval. We use public reference training/test splits when available (e.g., Kinetics); otherwise we use a 90%/10% train/test split. The training pool consists of approximately 8 million unique videos with multiple curated captions, sampled from robotics, autonomous vehicle, activity recognition, and general domains.

Data Collection Method:

  • AgiBot: Automatic/Sensors
  • BridgeV2: Automatic/Sensors
  • Robonet: Automatic/Sensors
  • DROID: Automatic/Sensors
  • 1X: Automatic/Sensors
  • Kinetics-400/600/700: Human
  • OpenDV: Automatic/Sensors
  • AV action recognition (internal): Automatic/Sensors
  • Curated captioned video dataset (internal): Human

Labeling Method:

Metrics

  • We compare with state-of-the-art text/video embedders: InternVideo2-1B and Perception Encoder. As input, we use 8 linearly spaced frames per clip, resized to square inputs.
  • Evaluation metrics:
    • (Class-Weighted) F1-score for text-video zero-shot classification
    • Text-to-video (T2V) and video-to-text (V2T) recall at k=1 for multi-modal retrieval, including DSL reranking (a minimal recall@1 sketch follows this list).
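
For reference, here is a minimal sketch of how T2V recall@1 can be computed from index-aligned text/video embedding pairs. It is a generic illustration without the DSL reranking step, not the exact evaluation code.

import torch
import torch.nn.functional as F

def t2v_recall_at_1(text_embs, video_embs):
    """Fraction of text queries whose top-ranked video is the paired one (pairs assumed index-aligned)."""
    sim = text_embs @ video_embs.T                # (N, N) cosine similarities of normalized embeddings
    top1 = sim.argmax(dim=-1)                     # best video for each text query
    targets = torch.arange(sim.shape[0])          # ground truth: i-th text matches i-th video
    return (top1 == targets).float().mean().item()

# illustrative usage with random normalized embeddings
print(t2v_recall_at_1(F.normalize(torch.randn(100, 768), dim=-1),
                      F.normalize(torch.randn(100, 768), dim=-1)))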

Robotics

                     AgiBot                Bridge
Model Architecture   T2V-R@1   V2T-R@1    T2V-R@1   V2T-R@1
InternVideo2-1B-S2   1.23      0.91       8.51      8.11
PE-Core-G14-448      1.16      0.83       7.19      5.24
Cosmos-Embed1-224    4.26      4.10       23.99     23.99
Cosmos-Embed1-336    7.04      6.33       24.51     22.90
Cosmos-Embed1-448    7.18      6.39       24.28     23.76

AV

                     OpenDV
Model Architecture   T2V-R@1   V2T-R@1
InternVideo2-1B-S2   7.40      8.06
PE-Core-G14-448      9.58      9.30
Cosmos-Embed1-224    30.11     30.99
Cosmos-Embed1-336    34.42     34.67
Cosmos-Embed1-448    34.66     34.87

General and Action Recognition

                     Kinetics-400 (val)   Kinetics-600 (val)   Kinetics-700 (val)
Model Architecture   F1                   F1                   F1
InternVideo2-1B-S2   62.80                60.20                52.18
PE-Core-G14-448      76.00                75.14                68.28
Cosmos-Embed1-224    83.06                82.22                70.96
Cosmos-Embed1-336    87.66                88.06                74.57
Cosmos-Embed1-448    88.21                88.60                75.27

Inference

Acceleration Engine: PyTorch, Transformer Engine (optional)

Test Hardware: H100, A100

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.

For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Plus Plus (++) Promise

We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:

  • Verified to comply with current applicable disclosure laws, regulations, and industry standards.
  • Verified to comply with applicable privacy labeling requirements.
  • Annotated to describe the collector/source (NVIDIA or a third-party).
  • Characterized for technical limitations.
  • Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
  • Reviewed before release.
  • Tagged for known restrictions and potential safety implications.

Bias

Field Response
Participation considerations from adversely impacted groups protected classes in model design and testing: None
Measures taken to mitigate against unwanted bias: None

Explainability

Field Response
Intended Application & Domain: Embedding of text and videos for physical AI
Model Type: ViT, QFormer
Intended Users: Physical AI developers
Output: Text/Video embedding vectors
Describe how the model works: Projects inputs into aligned embedding space (text/video).
Technical Limitations: Due to the training datasets being predominantly composed of short action-focused English captions, different text prompts may not be properly aligned to video data, producing suboptimal matching.
Verified to have met prescribed NVIDIA quality standards: Yes
Performance Metrics: Classification metrics (F1-score, accuracy), retrieval metrics (T2V recall@1, V2T recall@1)
Potential Known Risks: Embedder's output can include all forms of input, including what may be considered toxic, offensive, or indecent.
Licensing: NVIDIA Open Model License. Additional Information: Apache License 2.0; MIT.

Privacy

Field Response
Generatable or reverse engineerable personal information? No
How often is dataset reviewed? Before Release
Is there provenance for all datasets used in training? Yes
Does data labeling (annotation, metadata) comply with privacy laws? Yes
Applicable Privacy Policy: https://www.nvidia.com/en-us/about-nvidia/privacy-policy

Safety

Field Response
Model Application(s): Embedding of text and videos for physical AI applications (robotics, autonomous vehicles).
Describe the life critical impact (if present). None Known
Use Case Restrictions: NVIDIA Open Model License. Additional Information: Apache License 2.0; MIT.
Model and dataset restrictions: The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions are enforced on dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs.