dame rajee
damerajee
15 followers · 31 following
AI & ML interests
None yet
Recent Activity
reacted to Kseniase's post with ❤️ · 1 day ago
8 types of RoPE
Since we use Transformers all the time, it's helpful to understand RoPE (Rotary Position Embedding). Token order matters, so RoPE encodes it by rotating token embeddings based on their position, letting the model know which token comes first, second, and so on.
Here are 8 types of RoPE that can be implemented in different cases (a minimal code sketch of the original rotation follows the list):
1. Original RoPE -> https://huggingface.co/papers/2104.09864
Encodes token positions by rotating token embeddings in the complex plane via a position-based rotation matrix, thereby providing the self-attention mechanism with relative positional info.
2. LongRoPE -> https://huggingface.co/papers/2402.13753
Extends the context window of pre-trained LLMs to 2048k tokens, leveraging non-uniformities in positional interpolation with an efficient search.
3. LongRoPE2 -> https://huggingface.co/papers/2502.20082
Extends the effective context window of pre-trained LLMs to the target length, rescaling RoPE guided by "needle-driven" perplexity.
4. Multimodal RoPE (MRoPE) -> https://huggingface.co/papers/2502.13923
Decomposes positional embedding into 3 components: temporal, height and width, so that positional features are aligned across modalities: text, images and videos.
5. Directional RoPE (DRoPE) -> https://huggingface.co/papers/2503.15029
Adds an identity scalar, improving how angles are handled without extra complexity. It helps balance accuracy, speed, and memory usage.
6. VideoRoPE -> https://huggingface.co/papers/2502.05173
Adapts RoPE for video, featuring 3D structure, low-frequency temporal allocation, diagonal layout, and adjustable spacing.
7. VRoPE -> https://huggingface.co/papers/2502.11664
Another RoPE designed for video; it restructures positional indices and balances encoding for uniform spatial focus.
8. XPos (Extrapolatable Position Embedding) -> https://huggingface.co/papers/2212.10
Introduces an exponential decay factor into the rotation matrix, improving stability on long sequences.
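For intuition, here is a minimal sketch of the rotation used by the original RoPE, written in PyTorch. The function name, tensor shapes, and the frequency base of 10000 are illustrative assumptions for this sketch, not taken from any particular library's implementation.

```python
# A minimal sketch of the original (RoFormer-style) RoPE rotation.
# Shapes and the base frequency are illustrative assumptions.
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate token embeddings by position. x: (seq_len, dim), dim even."""
    seq_len, dim = x.shape
    # One frequency per pair of channels: theta_i = base^(-2i / dim)
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    pos = torch.arange(seq_len, dtype=torch.float32)
    angles = torch.outer(pos, inv_freq)        # (seq_len, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]        # split channels into pairs
    # 2D rotation of each (x1, x2) pair by its position-dependent angle
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Applying rope() to queries and keys before attention makes their dot
# products depend on relative positions (m - n) rather than absolute ones.
q = rope(torch.randn(8, 64))
```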
reacted to Kseniase's post with 👀 · 1 day ago (same "8 types of RoPE" post as above)
reacted to Kseniase's post with 👀 · 1 day ago (same post as above)
Organizations
damerajee's activity
reacted to Kseniase's post with ❤️ 👀 👀 · 1 day ago
8 types of RoPE (full post quoted above under Recent Activity)
reacted to onekq's post with 🚀 🤯 🔥 · 3 days ago
Folks, let's get ready. 🥳 We will be busy soon. 😅🤗 https://github.com/huggingface/transformers/pull/36878
reacted to ginipick's post with 😎 🤗 👀 🚀 🔥 · about 1 month ago
Gini's AI Spaces: Everything You Need for Visual Content Creation!
Hello! ✨ Let me introduce Gini's 5 AI Spaces that effortlessly generate various styles of visual content. Each Space leverages Diffusers and Gradio, so you can create stunning images in just a few clicks!
1) Flowchart (ginigen/Flowchart)
Features: hand-drawn style flowcharts for workflows or business processes
Use cases: software release pipelines, data pipelines, corporate workflows
Benefits: clear stage-by-stage structure, simple icon usage
2) Infographic (ginigen/Infographic)
Features: visually appealing infographics that communicate data or statistics
Use cases: global energy charts, startup growth metrics, health tips and more
Benefits: eye-catching icons and layouts, perfect for storytelling at a glance
3) Mockup (ginigen/Mockup)
Features: sketch-style wireframes or UX mockups for apps/websites
Use cases: mobile login flows, dashboards, e-commerce site layouts
Benefits: rapid prototyping of early design ideas, perfect for storyboarding
4) Diagram (ginigen/Diagram)
Features: educational diagrams (science, biology, geography, etc.)
Use cases: water cycle, photosynthesis, chemical reactions, human anatomy
Benefits: vibrant, friendly illustrations, ideal for student-friendly materials
5) Design (ginigen/Design)
Features: product/industrial design concepts (coffee machines, smartphones, etc.)
Use cases: prototyping, concept car interiors, high-tech product sketches
Benefits: from 3D render-like visuals to simple sketches, unleash your creativity!
Click any link above and let AI spark your imagination. Enjoy a fun and productive creative process! 🚀✨
reacted to Tonic's post with 🔥 · about 2 months ago
🙋🏻♂️ Hey there folks, Goedel's Theorem Prover is now being demoed on Hugging Face: Tonic/Math. Give it a try!
reacted to lewtun's post with 🔥 🤗 🚀 · about 2 months ago
We are reproducing the full DeepSeek R1 data and training pipeline so everybody can use their recipe. Instead of doing it in secret we can do it together in the open!
🧪 Step 1: replicate the R1-Distill models by distilling a high-quality reasoning corpus from DeepSeek-R1.
🧠 Step 2: replicate the pure RL pipeline that DeepSeek used to create R1-Zero. This will involve curating new, large-scale datasets for math, reasoning, and code.
🔥 Step 3: show we can go from base model -> SFT -> RL via multi-stage training.
Follow along: https://github.com/huggingface/open-r1
posted an update
On the 2nd of October a really cool paper was released, "Were RNNs All We Needed?" (https://arxiv.org/abs/2410.01201). It introduces MinGRU, a simplified version of the traditional Gated Recurrent Unit (GRU) that improves efficiency by removing the hidden-state dependency from its gates, which enables parallel training and makes it significantly faster than conventional GRUs. MinGRU also eliminates non-linear activations like tanh, streamlining computation. I read the paper and tried training this model, and it seems to be doing quite well; you can check out the pre-trained model on Hugging Face Spaces (a minimal sketch of the recurrence follows):
damerajee/mingru-stories
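As a rough illustration of the recurrence described above, here is a minimal MinGRU sketch in PyTorch. The layer names and sizes are assumptions for this sketch, and the loop computes the recurrence serially; the paper's point is that, because the gate and candidate depend only on the input (not the previous hidden state), the same recurrence can be computed with a parallel scan during training.

```python
# A minimal sketch of MinGRU as described in the post; names and sizes
# are illustrative assumptions, not the paper's reference code.
import torch
import torch.nn as nn

class MinGRU(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.to_gate = nn.Linear(input_dim, hidden_dim)   # update gate z_t
        self.to_cand = nn.Linear(input_dim, hidden_dim)   # candidate state

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, seq_len, input_dim) -> (batch, seq_len, hidden_dim)."""
        z = torch.sigmoid(self.to_gate(x))  # gate uses only x_t, not h_{t-1}
        h_cand = self.to_cand(x)            # no tanh, per the paper
        h = torch.zeros_like(h_cand[:, 0])
        outs = []
        for t in range(x.shape[1]):         # serial here; parallelizable as a scan
            h = (1 - z[:, t]) * h + z[:, t] * h_cand[:, t]
            outs.append(h)
        return torch.stack(outs, dim=1)

model = MinGRU(input_dim=32, hidden_dim=64)
y = model(torch.randn(4, 16, 32))           # -> (4, 16, 64)
```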
reacted to onekq's post with 🧠 · 6 months ago