nielsr HF Staff committed
Commit 950a398 · verified · 1 Parent(s): fb9f787

Add task category and paper link to dataset card


This PR adds the `robotics` task category to the dataset card and links the dataset to its paper page at https://huggingface.co/papers/2503.01378.

Files changed (1)
  1. README.md +37 -31
README.md CHANGED
@@ -1,31 +1,37 @@
- # CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs
-
- ![Teaser](teaser.png)
-
- ## Abstract
-
- This paper introduces *CognitiveDrone*, a novel Vision-Language-Action (VLA) model tailored for complex Unmanned Aerial Vehicles (UAVs) tasks that demand advanced cognitive abilities. Trained on a dataset comprising over 8,000 simulated flight trajectories across three key categories—Human Recognition, Symbol Understanding, and Reasoning—the model generates real-time 4D action commands based on first-person visual inputs and textual instructions. To further enhance performance in intricate scenarios, we propose *CognitiveDrone-R1*, which integrates an additional Vision-Language Model (VLM) reasoning module to simplify task directives prior to high-frequency control. Experimental evaluations using our open-source benchmark, *CognitiveDroneBench*, reveal that while a racing-oriented model (RaceVLA) achieves an overall success rate of 31.3%, the base CognitiveDrone model reaches 59.6%, and CognitiveDrone-R1 attains a success rate of 77.2%. These results demonstrate improvements of up to 30% in critical cognitive tasks, underscoring the effectiveness of incorporating advanced reasoning capabilities into UAV control systems. Our contributions include the development of a state-of-the-art VLA model for UAV control and the introduction of the first dedicated benchmark for assessing cognitive tasks in drone operations. The complete repository is available at [CognitiveDrone](https://cognitivedrone.github.io/).
-
- ## Dataset Structure
-
- - **data/rlds/** – Data for training and validation in RLDS format:
-   - `train/` Training data.
-
- - **data/benchmark/** – Data for the simulation benchmark:
-   - `validation` – JSON files for model evaluation.
-
- ## Instructions for Use
-
- 1. **Training:** Use the data from `data/rlds/train/` for model training.
- 2. **Evaluation:** Run the simulation benchmark using the files from `data/benchmark/validation/`.
-
- ## Links and Bibliography
-
- - **Project Repository:** [CognitiveDrone](https://cognitivedrone.github.io/)
- - **Paper:** BibTeX reference will be available soon (coming soon).
-
- ---
-
- This provides you with a complete package of files that can be easily uploaded to Hugging Face. Once the repository is created on the platform, you can use the `git lfs` command for large files if necessary.
-
- If you have any further questions regarding repository organization or the dataset card, feel free to ask!
+ ---
+ task_categories:
+ - robotics
+ ---
+
+ # CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs
+
+ [![Teaser](teaser.png)](https://huggingface.co/papers/2503.01378)
+
+ ## Abstract
+
+ This paper introduces *CognitiveDrone*, a novel Vision-Language-Action (VLA) model tailored for complex Unmanned Aerial Vehicle (UAV) tasks that demand advanced cognitive abilities. Trained on a dataset comprising over 8,000 simulated flight trajectories across three key categories—Human Recognition, Symbol Understanding, and Reasoning—the model generates real-time 4D action commands based on first-person visual inputs and textual instructions. To further enhance performance in intricate scenarios, we propose *CognitiveDrone-R1*, which integrates an additional Vision-Language Model (VLM) reasoning module to simplify task directives prior to high-frequency control. Experimental evaluations using our open-source benchmark, *CognitiveDroneBench*, reveal that while a racing-oriented model (RaceVLA) achieves an overall success rate of 31.3%, the base CognitiveDrone model reaches 59.6%, and CognitiveDrone-R1 attains a success rate of 77.2%. These results demonstrate improvements of up to 30% in critical cognitive tasks, underscoring the effectiveness of incorporating advanced reasoning capabilities into UAV control systems. Our contributions include the development of a state-of-the-art VLA model for UAV control and the introduction of the first dedicated benchmark for assessing cognitive tasks in drone operations. The complete repository is available at [CognitiveDrone](https://cognitivedrone.github.io/).
+
+ ## Dataset Structure
+
+ - **data/rlds/** – Data for training and validation in RLDS format:
+   - `train/` – Training data.
+
+ - **data/benchmark/** – Data for the simulation benchmark:
+   - `validation` – JSON files for model evaluation.
+
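+ The snippet below is a minimal, unofficial loading sketch. It assumes that `data/rlds/train/` contains a standard TFDS/RLDS builder directory and that the split is named `train`; the path, split, and field names are assumptions and may need to be adjusted to the actual layout of your local copy.
+
+ ```python
+ # Unofficial sketch: load the RLDS training episodes with TensorFlow Datasets.
+ # Assumes data/rlds/train/ is a TFDS builder directory (dataset_info.json + TFRecord shards).
+ import tensorflow_datasets as tfds
+
+ builder = tfds.builder_from_directory("data/rlds/train")  # path is an assumption
+ ds = builder.as_dataset(split="train")                    # split name is an assumption
+
+ for episode in ds.take(1):
+     # RLDS stores each episode as a dict whose "steps" entry is a nested dataset.
+     print(list(episode.keys()))
+     for step in episode["steps"].take(3):
+         # Field names (observation, action, instruction, ...) depend on the dataset; inspect them here.
+         print(list(step.keys()))
+ ```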
+ ## Instructions for Use
+
+ 1. **Training:** Use the data from `data/rlds/train/` for model training.
+ 2. **Evaluation:** Run the simulation benchmark using the files from `data/benchmark/validation/`.
+
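+ For the evaluation side, a small, unofficial inspection sketch is shown below. It assumes the benchmark tasks in `data/benchmark/validation/` are plain JSON files; the schema is not documented here, so the code only lists each file's top-level keys.
+
+ ```python
+ # Unofficial sketch: inspect the benchmark task definitions used by CognitiveDroneBench.
+ # Assumes data/benchmark/validation/ holds one JSON file per task (path is an assumption).
+ import json
+ from pathlib import Path
+
+ benchmark_dir = Path("data/benchmark/validation")
+ for task_file in sorted(benchmark_dir.glob("*.json")):
+     with task_file.open() as f:
+         task = json.load(f)
+     # Print the file name and its top-level keys to get a feel for the task schema.
+     keys = list(task) if isinstance(task, dict) else type(task).__name__
+     print(task_file.name, "->", keys)
+ ```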
+ ## Links and Bibliography
+
+ - **Project Repository:** [CognitiveDrone](https://cognitivedrone.github.io/)
+ - **Paper:** [CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs](https://huggingface.co/papers/2503.01378)
+ - **BibTeX:** Reference will be available soon.
+
+ ---
+
+ This repository provides a complete package of files that can be uploaded to Hugging Face. Once the repository has been created on the platform, you can use the `git lfs` command to track large files if necessary.
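+ As an alternative to manual `git lfs` pushes, the prepared folder can also be uploaded with the `huggingface_hub` Python client, which handles large-file uploads for you. The sketch below is illustrative only; the `repo_id` is a placeholder.
+
+ ```python
+ # Unofficial sketch: upload the local data/ folder to a dataset repository on the Hub.
+ from huggingface_hub import HfApi
+
+ api = HfApi()
+ api.upload_folder(
+     folder_path="data",                  # local folder prepared as described above
+     path_in_repo="data",                 # keep the same layout inside the repo
+     repo_id="your-org/CognitiveDrone",   # placeholder; replace with the real dataset repo id
+     repo_type="dataset",
+ )
+ ```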
+
+ If you have any further questions regarding repository organization or the dataset card, feel free to ask!