Add task category and paper link to dataset card
This PR adds the `robotics` task category to the dataset card and links it to the paper page at https://huggingface.co/papers/2503.01378.
README.md
CHANGED
(The previous card, 31 mostly blank lines, is removed; the new content follows in full.)
---
task_categories:
- robotics
---

# CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs

[Paper](https://huggingface.co/papers/2503.01378)

## Abstract

This paper introduces *CognitiveDrone*, a novel Vision-Language-Action (VLA) model tailored for complex Unmanned Aerial Vehicle (UAV) tasks that demand advanced cognitive abilities. Trained on a dataset comprising over 8,000 simulated flight trajectories across three key categories—Human Recognition, Symbol Understanding, and Reasoning—the model generates real-time 4D action commands based on first-person visual inputs and textual instructions. To further enhance performance in intricate scenarios, we propose *CognitiveDrone-R1*, which integrates an additional Vision-Language Model (VLM) reasoning module to simplify task directives prior to high-frequency control. Experimental evaluations using our open-source benchmark, *CognitiveDroneBench*, reveal that while a racing-oriented model (RaceVLA) achieves an overall success rate of 31.3%, the base CognitiveDrone model reaches 59.6%, and CognitiveDrone-R1 attains a success rate of 77.2%. These results demonstrate improvements of up to 30% in critical cognitive tasks, underscoring the effectiveness of incorporating advanced reasoning capabilities into UAV control systems. Our contributions include the development of a state-of-the-art VLA model for UAV control and the introduction of the first dedicated benchmark for assessing cognitive tasks in drone operations. The complete repository is available at [CognitiveDrone](https://cognitivedrone.github.io/).
## Dataset Structure

- **data/rlds/** – Data for training and validation in RLDS format (a loading sketch follows this list):
  - `train/` – Training data.

- **data/benchmark/** – Data for the simulation benchmark:
  - `validation/` – JSON files for model evaluation.
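Although the card ships no loader code, RLDS-formatted data is conventionally read with `tensorflow_datasets`. The snippet below is a minimal, unofficial sketch: the exact builder directory, split name, and step keys (`observation`, `action`) are assumptions based on standard RLDS conventions, not fields confirmed by this card.

```python
# Minimal RLDS loading sketch. Assumptions: the shards under data/rlds/train
# follow the standard TFDS on-disk layout, and each step exposes
# "observation" and "action" keys, as is conventional in RLDS.
import tensorflow_datasets as tfds

# builder_from_directory expects the directory holding the dataset metadata.
builder = tfds.builder_from_directory("data/rlds/train")
ds = builder.as_dataset(split="train")

for episode in ds.take(1):
    # In RLDS, an episode contains a nested dataset of per-timestep dicts.
    for step in episode["steps"].take(3):
        observation = step["observation"]  # e.g. first-person camera frame
        action = step["action"]            # e.g. a 4D action command
```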
## Instructions for Use

1. **Training:** Use the data from `data/rlds/train/` for model training.
2. **Evaluation:** Run the simulation benchmark using the files from `data/benchmark/validation/` (a sketch for iterating these files follows).
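Since the benchmark tasks are plain JSON files, a simple directory walk is enough to feed them to an evaluator. The sketch below assumes one task definition per file; the `instruction` key is hypothetical, as the schema is not documented on this card.

```python
# Hypothetical benchmark iteration sketch; the JSON schema is not documented
# here, so the "instruction" key is an illustrative assumption.
import json
from pathlib import Path

benchmark_dir = Path("data/benchmark/validation")

for task_file in sorted(benchmark_dir.glob("*.json")):
    task = json.loads(task_file.read_text())
    # Hand each task definition to your evaluation loop / simulator here.
    print(task_file.name, task.get("instruction", "<schema not documented>"))
```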
## Links and Bibliography

- **Project Repository:** [CognitiveDrone](https://cognitivedrone.github.io/)
- **Paper:** [CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs](https://huggingface.co/papers/2503.01378)
- **BibTeX:** Reference will be available soon.