Layout Analysis Inference

#46 opened by saikanov

Hi, I’m currently exploring this model and using vLLM as the inference engine for PaddleOCR-VL 0.9B.

I noticed that the layout analysis model seems to run on the client side, which could be problematic in production since every client would then need its own copy of the model and its dependencies.
Is there a native way to run the layout analysis inside the same Docker container as the inference engine?

Or should I manually host it by creating a small API for the layout model, adding it to the Docker setup, and connecting it to the vLLM server through Docker’s internal network?
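For reference, here is a minimal sketch of that second option: wrapping the layout model in a small HTTP service that the vLLM container can reach over Docker's internal network. FastAPI/uvicorn, the `/layout` route, and the `run_layout_model` helper are all my own choices for illustration, not anything official; the helper is a placeholder for whatever layout-analysis entry point your PaddleOCR install actually exposes.

```python
# layout_server.py -- a minimal sketch, not an official solution.
# Run with: uvicorn layout_server:app --host 0.0.0.0 --port 8001
import io

import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()


def run_layout_model(image: np.ndarray) -> list[dict]:
    # Placeholder: call the real PaddleOCR layout model here and return
    # regions, e.g. [{"bbox": [x1, y1, x2, y2], "label": "text"}, ...].
    raise NotImplementedError


@app.post("/layout")
async def layout(file: UploadFile = File(...)):
    # Decode the uploaded page image into an RGB array for the model.
    image = np.array(Image.open(io.BytesIO(await file.read())).convert("RGB"))
    return {"regions": run_layout_model(image)}
```

If this runs as its own service (say, named `layout`) on the same Docker network as the vLLM container, the orchestration code can POST each page image to `http://layout:8001/layout`, crop the returned regions, and send the crops to vLLM's OpenAI-compatible endpoint.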

Thanks, best regards!

PaddlePaddle org

We will provide an official Docker Compose based solution soon.

Thanks @gggdddfff, I will wait for it. Will you make an announcement about that?
