Nayana - Vision AI for all

Enabling vision-language capabilities for low-resource languages
An initiative by Cognitivelab
Problem Statement
Despite advancements in vision-language AI, a significant number of the world's languages remain underserved, leaving millions without tools to process documents in their native scripts.
Challenges Addressed by Nayana:
- Wide Language Gap: Lack of robust OCR solutions for a large spectrum of languages, particularly low-resource and rare languages.
- Script Complexity: Supporting diverse writing systems, including those with intricate scripts, cursive styles, or mixed-language content.
- Scalability: Need for adaptable models that can handle real-world multilingual document processing at scale.
Nayana is designed to tackle these challenges by fine-tuning cutting-edge OCR models for diverse languages across multiple regions, empowering users to extract actionable insights from their documents regardless of the language or script.
Vision
To democratize access to Vision-Language AI for all communities by empowering a wide range of languages, including low-resource and underrepresented ones, with cutting-edge OCR and document understanding capabilities.
Mission
- Enhance Accessibility: Build tools that enable equitable AI solutions for diverse linguistic groups worldwide.
- Expand Language Coverage: Support a vast range of languages and scripts, breaking barriers for multilingual document processing.
- Foster Collaboration: Provide an open-source platform where developers and researchers can enhance and expand multilingual OCR capabilities.
Nayana OCRBench datasets
- Nayana-cognitivelab/Nayana-OCRBench-in-0.6k-v1-arxiv
- Nayana-cognitivelab/Nayana-OCRBench-in-0.3k-v1-pubmed
- Nayana-cognitivelab/Nayana-OCRBench-in-0.1k-v1-adimagenet
- Nayana-cognitivelab/Nayana-OCRBench-in-0.1k-v1-docmatix
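These benchmark sets can be pulled straight from the Hub with the datasets library. The sketch below is a minimal example only; the split name ("train") and the exact column layout are assumptions, so check the dataset viewer or ds.features for the actual schema.

```python
# Minimal sketch: load one of the Nayana OCRBench datasets from the Hub.
# The split name and the inspected fields are assumptions; the real schema
# is visible in the Hub dataset viewer.
from datasets import load_dataset

ds = load_dataset("Nayana-cognitivelab/Nayana-OCRBench-in-0.6k-v1-arxiv", split="train")

print(ds)            # row count and column names
print(ds.features)   # field types (e.g. image and text columns)
sample = ds[0]       # first record as a plain dict
print(sample.keys())
```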
Models (13)
- Nayana-cognitivelab/exp-colpali-merged-en-20k (3B)
- Nayana-cognitivelab/exp-colpali-trained-en-20k-lora
- Nayana-cognitivelab/exp-colpali-merged-hi-en-20k (3B)
- Nayana-cognitivelab/exp-colpali-trained-hi-en-20k-lora
- Nayana-cognitivelab/exp-colpali-merged-hi-en-10k (3B)
- Nayana-cognitivelab/exp-colpali-trained-hi-en-10k-lora
- Nayana-cognitivelab/exp-colpali-engine-merged-model (3B)
- Nayana-cognitivelab/colpali-trained-model-hi-10k
- Nayana-cognitivelab/exp-colpali-merged-model-hi-10k (3B)
- Nayana-cognitivelab/exp-lora-colpali-trained-model-test2
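The names point to ColPali-style document retrievers: ~3B merged checkpoints fine-tuned on English (en) and Hindi-English (hi-en) sets of 10k/20k pages, with the -lora repos holding the corresponding adapters. A minimal retrieval sketch with the colpali-engine library follows; treating these experimental checkpoints as directly loadable via ColPali.from_pretrained is an assumption, as is the particular repo and device chosen.

```python
# Minimal sketch, assuming the merged checkpoints are ColPali-compatible and
# load with colpali-engine (pip install colpali-engine). Repo choice, device
# settings, and toy inputs are illustrative, not confirmed by the model cards.
import torch
from PIL import Image
from colpali_engine.models import ColPali, ColPaliProcessor

model_name = "Nayana-cognitivelab/exp-colpali-merged-hi-en-20k"  # assumption: ColPali-format weights
model = ColPali.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "cpu"
).eval()
processor = ColPaliProcessor.from_pretrained(model_name)

# Toy inputs: blank pages stand in for real document scans.
images = [Image.new("RGB", (448, 448), "white") for _ in range(2)]
queries = ["What is the reported accuracy?", "Which languages are covered?"]

batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)   # multi-vector page embeddings
    query_embeddings = model(**batch_queries)  # multi-vector query embeddings

# Late-interaction (MaxSim) scores: one row per query, one column per page.
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
print(scores)
```

ColPali keeps one embedding per image patch and per query token and scores them with late interaction, which is what lets these retrievers work directly on visually rich document pages without a separate OCR step.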
Datasets (88)
- Nayana-cognitivelab/NayanaDocs-Indic-45k
- Nayana-cognitivelab/Nayana-Indic-45k-test
- Nayana-cognitivelab/Nayana-DocOCR-Indic-45k-test
- Nayana-cognitivelab/eval-results-Nayana-cognitivelab_exp-colpali-merged-en-20k-vidore_arxivqa_test_subsampled
- Nayana-cognitivelab/eval-results-Nayana-cognitivelab_exp-colpali-merged-en-20k-vidore_infovqa_test_subsampled
- Nayana-cognitivelab/eval-results-Nayana-cognitivelab_exp-colpali-merged-en-20k-vidore_docvqa_test_subsampled
- Nayana-cognitivelab/colpali_train_set_sampled_20k
- Nayana-cognitivelab/eval-results-Nayana-cognitivelab_exp-colpali-merged-hi-en-20k-vidore_arxivqa_test_subsampled
- Nayana-cognitivelab/eval-results-Nayana-cognitivelab_exp-colpali-merged-hi-en-20k-vidore_infovqa_test_subsampled
- Nayana-cognitivelab/eval-results-Nayana-cognitivelab_exp-colpali-merged-hi-en-20k-vidore_docvqa_test_subsampled
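The list above is only a slice of the organization's 13 model and 88 dataset repositories; most of the dataset entries shown here are per-benchmark eval-results dumps for the ColPali fine-tunes on the ViDoRe arxivqa/infovqa/docvqa subsets. For a full, current inventory, a small huggingface_hub sketch like the one below can enumerate everything that is public (counts will reflect whatever is live at the time it runs).

```python
# Minimal sketch: enumerate all public Nayana-cognitivelab repos on the Hub.
from huggingface_hub import HfApi

api = HfApi()
org = "Nayana-cognitivelab"

models = list(api.list_models(author=org))
datasets = list(api.list_datasets(author=org))

print(f"{len(models)} models, {len(datasets)} datasets")
for d in datasets:
    print(d.id)
```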