Update README.md
README.md
@@ -23,11 +23,9 @@ It contains model checkpoints, Hugging Face–compatible Qwen-0.5B LLM, and ONNX
## 🔎 Introduction
-
-To achieve localized **low-latency** and **high-security** desktop robot tasks, this project takes **OpenVLA-Mini** as an example and focuses on addressing the deployment and performance challenges of lightweight multimodal models on edge hardware.
-We
-By exporting the **vision encoder** into ONNX and TensorRT engines, we significantly reduced end-to-end latency and GPU memory usage. While a moderate drop in task success rate (around **5–10%** in LIBERO desktop operation tasks) was observed, the results still demonstrate the feasibility of achieving **efficient and real-time VLA inference on the edge side**.
+To enable low-latency, high-security desktop robot tasks on local devices, this project focuses on addressing the deployment and performance challenges of lightweight multimodal models on edge hardware. Using OpenVLA-Mini as a case study, we propose a hybrid acceleration pipeline designed to alleviate deployment bottlenecks on resource-constrained platforms.
+We reproduced a lightweight VLA model and then significantly reduced its end-to-end latency and GPU memory usage by exporting the vision encoder into ONNX and TensorRT engines. While we observed a moderate drop in the task success rate (around 5–10% in LIBERO desktop operation tasks), our results still demonstrate the feasibility of achieving efficient, real-time VLA inference on the edge side.
---