RaceVLA: VLA-based Racing Drone Navigation with Human-like Behaviour Paper • 2503.02572 • Published Mar 4, 2025
Evolution 6.0: Evolving Robotic Capabilities Through Generative Design Paper • 2502.17034 • Published Feb 24, 2025
CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs Paper • 2503.01378 • Published Mar 3, 2025
Robots Can Feel: LLM-based Framework for Robot Ethical Reasoning Paper • 2405.05824 • Published May 9, 2024
Co-driver: VLM-based Autonomous Driving Assistant with Human-like Behavior and Understanding for Complex Road Scenes Paper • 2405.05885 • Published May 9, 2024
Bi-VLA: Vision-Language-Action Model-Based System for Bimanual Robotic Dexterous Manipulations Paper • 2405.06039 • Published May 9, 2024
VR-GPT: Visual Language Model for Intelligent Virtual Reality Applications Paper • 2405.11537 • Published May 19, 2024
FlockGPT: Guiding UAV Flocking with Linguistic Orchestration Paper • 2405.05872 • Published May 9, 2024
DogSurf: Quadruped Robot Capable of GRU-based Surface Recognition for Blind Person Navigation Paper • 2402.03156 • Published Feb 5, 2024
CognitiveOS: Large Multimodal Model based System to Endow Any Type of Robot with Generative AI Paper • 2401.16205 • Published Jan 29, 2024
LLM-BRAIn: AI-driven Fast Generation of Robot Behaviour Tree based on Large Language Model Paper • 2305.19352 • Published May 30, 2023
DeltaFinger: a 3-DoF Wearable Haptic Display Enabling High-Fidelity Force Vector Presentation at a User Finger Paper • 2211.00752 • Published Nov 1, 2022
LLM-MARS: Large Language Model for Behavior Tree Generation and NLP-enhanced Dialogue in Multi-Agent Robot Systems Paper • 2312.09348 • Published Dec 14, 2023
CognitiveDog: Large Multimodal Model Based System to Translate Vision and Language into Action of Quadruped Robot Paper • 2401.09388 • Published Jan 17, 2024