xintaozhen committed (verified)
Commit 055b5dd · Parent(s): 350d200

Update README.md

Files changed (1):
1. README.md (+2 −4)
README.md CHANGED

@@ -23,11 +23,9 @@ It contains model checkpoints, Hugging Face–compatible Qwen-0.5B LLM, and ONNX
 
 ## 🔎 Introduction
 
-Although the Visual-Language-Action (VLA) model has great potential in desktop robot tasks, its reliance on cloud computing brings inherent network latency, data privacy risks, and reliability challenges.
-To achieve localized **low-latency** and **high-security** desktop robot tasks, this project takes **OpenVLA-Mini** as an example and focuses on addressing the deployment and performance challenges of lightweight multimodal models on edge hardware.
+To enable low-latency, high-security desktop robot tasks on local devices, this project focuses on addressing the deployment and performance challenges of lightweight multimodal models on edge hardware. Using OpenVLA-Mini as a case study, we propose a hybrid acceleration pipeline designed to alleviate deployment bottlenecks on resource-constrained platforms.
 
-We reproduce a lightweight VLA and propose a **hybrid acceleration pipeline**, which effectively alleviates the deployment bottleneck on resource-constrained platforms.
-By exporting the **vision encoder** into ONNX and TensorRT engines, we significantly reduced end-to-end latency and GPU memory usage. While a moderate drop in task success rate (around **5–10%** in LIBERO desktop operation tasks) was observed, the results still demonstrate the feasibility of achieving **efficient and real-time VLA inference on the edge side**.
+We reproduced a lightweight VLA model and then significantly reduced its end-to-end latency and GPU memory usage by exporting the vision encoder into ONNX and TensorRT engines. While we observed a moderate drop in the task success rate (around 5-10% in LIBERO desktop operation tasks), our results still demonstrate the feasibility of achieving efficient, real-time VLA inference on the edge side.
 
 ---
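The updated introduction describes exporting the vision encoder to ONNX and TensorRT but does not show the export step itself. Below is a minimal sketch of that kind of workflow, assuming a ViT-style backbone from timm; the model name, input resolution, file paths, and the trtexec invocation are illustrative assumptions, not the repository's actual export script.

```python
# Minimal sketch (assumptions, not this repo's script): export a vision
# encoder to ONNX, then compile it into a TensorRT engine offline.
import torch
import timm  # assumption: the vision backbone is a timm ViT-style encoder

# Load a placeholder vision encoder in eval mode on the GPU.
encoder = timm.create_model("vit_base_patch16_224", pretrained=True).eval().cuda()

# Dummy input matching the encoder's expected image shape (batch, C, H, W).
dummy = torch.randn(1, 3, 224, 224, device="cuda")

# Export to ONNX with a dynamic batch dimension so the engine can serve
# different batch sizes at inference time.
torch.onnx.export(
    encoder,
    dummy,
    "vision_encoder.onnx",
    input_names=["pixel_values"],
    output_names=["features"],
    dynamic_axes={"pixel_values": {0: "batch"}, "features": {0: "batch"}},
    opset_version=17,
)

# The ONNX graph can then be compiled into a TensorRT engine, for example:
#   trtexec --onnx=vision_encoder.onnx --saveEngine=vision_encoder.plan --fp16
```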