Deployment Scripts for Medguide (Built with Gradio)
This document provides instructions for deploying the Medguide model for inference using Gradio.
1. Set up the Conda environment: Follow the instructions in the PKU-Alignment/align-anything repository to configure your Conda environment.
2. Configure the model path: After setting up the environment, update the `MODEL_PATH` variable in `deploy_medguide_v.sh` so that it points to your local Medguide model directory.
3. Verify the inference script parameters: Check the following parameters in `multimodal_inference.py` (see the sketch after this list):

   ```python
   # NOTE: Replace with your own model path if not loaded via the API base
   model = ''
   ```
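For orientation, here is a minimal sketch of what that configuration block might look like. Only the `model` line above comes from this guide; the `api_base` and `api_key` names, the port value, and the comments are illustrative assumptions based on the OpenAI-compatible setup described below, not the script's actual contents.

```python
# Hypothetical configuration block illustrating the parameters to verify.
# Adjust names and values to match the actual multimodal_inference.py script.

# NOTE: Replace with your own model path if not loaded via the API base
model = ""  # e.g. a local Medguide checkpoint directory (path is your own)

# Base URL of the locally deployed OpenAI-compatible server
# (deploy_medguide_v.sh exposes the model on port 8231, per this guide).
api_base = "http://localhost:8231/v1"  # assumed variable name

# Placeholder key; local OpenAI-compatible servers typically accept any string.
api_key = "EMPTY"  # assumed variable name
```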
These scripts use an OpenAI-compatible server approach. The `deploy_medguide_v.sh` script launches the Medguide model locally and exposes it on port 8231 for external access via the specified API base URL.

Running Inference:
- Streamed Output:

  ```bash
  bash deploy_medguide_v.sh
  python multimodal_inference.py
  ```
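If you prefer to query the locally exposed endpoint directly rather than through `multimodal_inference.py`, the sketch below shows one way to do so with the `openai` Python client, assuming the server implements the standard OpenAI chat-completions API at `http://localhost:8231/v1`. The model name, prompt, and `api_key` value are placeholders, not taken from the Medguide scripts.

```python
# Minimal streaming client for the locally deployed Medguide server.
# Assumes an OpenAI-compatible endpoint on port 8231 (see deploy_medguide_v.sh).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8231/v1",  # API base exposed by the deploy script
    api_key="EMPTY",                      # placeholder; local servers usually ignore it
)

# Stream the response token by token, mirroring the "Streamed Output" mode above.
stream = client.chat.completions.create(
    model="medguide",  # placeholder model name; use the name the server registers
    messages=[{"role": "user", "content": "What are common symptoms of anemia?"}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

Because the deployment only exposes a port and an API base URL, any OpenAI-compatible client could be substituted for this sketch.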