Hugging Face
Hyunkyu KIM (george31)
0 followers · 3 following
minus31
AI & ML interests: CV, Recommendation Systems
Recent Activity
liked a model 10 days ago: deepseek-ai/DeepSeek-R1-0528
reacted to Kseniase's post 13 days ago:

12 Types of JEPA

JEPA, or Joint Embedding Predictive Architecture, is an approach to building AI models introduced by Yann LeCun. Unlike transformers, which predict the next token or pixel, JEPA predicts the representation of a missing or future part of the input. This encourages conceptual understanding rather than low-level pattern matching, so JEPA-style training pushes AI toward more abstract reasoning. Here are 12 types of JEPA you should know about:

1. I-JEPA -> https://huggingface.co/papers/2301.08243
A non-generative, self-supervised learning framework for images: it masks parts of an image and predicts the representations of the masked regions.

2. MC-JEPA -> https://huggingface.co/papers/2307.12698
Simultaneously interprets the dynamic elements (motion) and static details (content) of video data using a shared encoder.

3. V-JEPA -> https://huggingface.co/papers/2404.08471
Presents vision models trained by predicting future video features, without pretrained image encoders, text, negative sampling, or reconstruction.

4. UI-JEPA -> https://huggingface.co/papers/2409.04081
Masks unlabeled UI sequences to learn abstract embeddings, then adds a fine-tuned LLM decoder for intent prediction.

5. A-JEPA (audio-based JEPA) -> https://huggingface.co/papers/2311.15830
Masks spectrogram patches with a curriculum, encodes them, and predicts their hidden representations.

6. S-JEPA -> https://huggingface.co/papers/2403.11772
Signal-JEPA, used for EEG analysis. It adds a spatial block-masking scheme and three lightweight downstream classifiers.

7. TI-JEPA -> https://huggingface.co/papers/2503.06380
Text-Image JEPA uses self-supervised, energy-based pre-training to map text and images into a shared embedding space, improving cross-modal transfer to downstream tasks.

Find more types below. Also, explore the basics of JEPA in our article: https://www.turingpost.com/p/jepa
If you liked it, subscribe to the Turing Post: https://www.turingpost.com/subscribe
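To make the core idea concrete, here is a minimal, self-contained numpy sketch of the JEPA training signal: mask some patches, encode the visible ones with a context encoder, encode the masked ones with a (frozen, EMA-style) target encoder, and score a prediction in representation space rather than pixel space. Toy linear maps stand in for the ViT encoders and transformer predictor of the real papers; all names and shapes here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 16 patches, each an 8-dim feature vector.
n_patches, patch_dim, embed_dim = 16, 8, 4
patches = rng.normal(size=(n_patches, patch_dim))

# Context and target encoders (toy linear maps). In real JEPA the target
# encoder is an exponential-moving-average copy of the context encoder
# and receives no gradients.
W_context = rng.normal(size=(patch_dim, embed_dim))
W_target = W_context.copy()      # EMA copy at initialization
W_predictor = np.eye(embed_dim)  # toy predictor, identity at init

# Mask a contiguous block of patches; the context encoder only sees the rest.
masked = np.array([3, 4, 5, 6])
visible = np.setdiff1d(np.arange(n_patches), masked)

context_embed = patches[visible] @ W_context  # encode visible patches
target_embed = patches[masked] @ W_target     # encode masked patches

# Predict each masked patch's embedding from the pooled context
# (the real predictor is a transformer conditioned on patch positions).
pred = np.tile(context_embed.mean(axis=0) @ W_predictor, (len(masked), 1))

# JEPA loss: distance between predicted and target *representations* --
# no pixel reconstruction anywhere in the objective.
loss = np.mean((pred - target_embed) ** 2)
print(float(loss))
```

The key contrast with a masked autoencoder is the last two lines: the loss compares embeddings, never raw pixels, which is what lets the model ignore unpredictable low-level detail.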
Organizations: none yet
Spaces (1): Meta Llama Llama 2 7b Chat Hf (sleeping)
Models (1): george31/Stable_diffusion_LoRA_with_Certain_text (updated Feb 27, 2023)
Datasets (0): none public yet