Practicalities of fast inference (NOT OSWorld!)

#2
by sujitvasanth - opened

Thanks, there's lots of publicity on channels like YouTube and Twitter, but no examples of how to actually use the model!
Rather than use OSWorld (I tried it and found it very slow and cumbersome), I made my own pipeline on Ubuntu 20.04 running a second screen over VNC.
I wrote a quick VNC client using asyncvnc. It is much more efficient (no full VM overhead and no multiple servers on the VM!) and captures screenshots faster than real time on my RTX 3090 setup. I customised the VNC client to pull the screenshots and run inference on them. I think this is a better long-term solution than pyautogui.
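For anyone wanting to try the same route, here is a minimal sketch of the screenshot loop (not my exact pipeline): it connects to a VNC server with asyncvnc, grabs the framebuffer, and hands each frame to a model. The host, port, password and the `predict_action()` hook are placeholders you would swap for your own OpenCUA inference wrapper.

```python
import asyncio

import asyncvnc
from PIL import Image


def predict_action(image: Image.Image) -> str:
    """Hypothetical hook: run OpenCUA on the screenshot and return an action string."""
    raise NotImplementedError


async def main():
    # Connect to the VNC server that drives the second screen (placeholder credentials).
    async with asyncvnc.connect("127.0.0.1", 5900, password="secret") as client:
        for _ in range(10):
            # asyncvnc returns the framebuffer as a numpy RGBA array.
            pixels = await client.screenshot()
            image = Image.fromarray(pixels)
            action = predict_action(image)
            print(action)
            # The same client can then execute simple actions, e.g.:
            # client.mouse.move(x, y); client.mouse.click()
            await asyncio.sleep(1.0)


asyncio.run(main())
```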
[Screenshot attached: Screenshot from 2025-09-07 11-49-02.png]

Firstly, the requirements file you refer to in the model card is not available (unless you are referring to https://github.com/xlang-ai/OpenCUA/blob/main/model/requirement.txt).
I was able to run the model with Python 3.12, PyTorch 2.4 (CUDA) and transformers 4.49, so those requirements are not actually the minimum.
I found that in the grounding examples, whatever I prompted, I was still getting click actions (never text).

I'm just in the process of implementing CoT.
For a speed increase, has anyone tried BnB quantisation or half-precision inference? (I haven't looked at the detailed architecture yet to see whether these are even possible.)
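If either turns out to work, the standard transformers route would look roughly like the sketch below. The checkpoint name is an assumption, OpenCUA ships custom model code so `trust_remote_code=True` is needed, and whether 4-bit quantisation preserves grounding accuracy is untested here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "xlang-ai/OpenCUA-7B"  # assumed checkpoint name

# Option 1: half-precision inference
model_fp16 = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Option 2: 4-bit NF4 quantisation via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```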

XLang NLP Lab org

Wow, this is very cool!

If you want to build an executable agent, you may follow OpenCUA's agent design in OSWorld (https://github.com/xlang-ai/OSWorld/blob/main/mm_agents/opencua_agent.py).
The system prompts in that repo follow our training data; with other prompts, the model may output other kinds of actions.

Best,
Xinyuan
