Update README.md
README.md CHANGED
@@ -22,8 +22,6 @@ datasets:
pipeline_tag: image-text-to-text
---

-WARNING: This repository contains content that might be disturbing! Therefore, we set the `Not-For-All-Audiences` tag.
-
This LlavaGuard model was introduced in [LLAVAGUARD: VLM-based Safeguards for Vision Dataset Curation and Safety Assessment](https://arxiv.org/abs/2406.05113). Please also check out our [Website](https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html).

## Overview
@@ -42,7 +40,7 @@ Otherwise, you can also install sglang via pip or from source [see here](https:/

# 1. Select a model and start an SGLang server

-CUDA_VISIBLE_DEVICES=0 python3 -m sglang.launch_server --model-path AIML-TUDA/LlavaGuard-
+CUDA_VISIBLE_DEVICES=0 python3 -m sglang.launch_server --model-path AIML-TUDA/LlavaGuard-v1.2-7B-OV --port 10000

# 2. Model Inference
For model inference, you can access this server by running the code provided below, e.g.
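The inference snippet referenced above is not included in this diff hunk. As a rough, hypothetical sketch of that step: assuming the server launched in step 1 exposes SGLang's OpenAI-compatible chat-completions endpoint on port 10000, a query could look like the following. The image path, prompt text, and sampling settings are placeholders, not the official LlavaGuard policy prompt.

```python
# Minimal sketch (assumption): query the SGLang server from step 1 via its
# OpenAI-compatible /v1/chat/completions endpoint.
import base64
import requests

# Placeholder image path; LlavaGuard expects an image plus a safety-policy prompt.
with open("image.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "AIML-TUDA/LlavaGuard-v1.2-7B-OV",
    "messages": [
        {
            "role": "user",
            "content": [
                # Image is sent inline as a base64 data URL.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                # Placeholder text; replace with the full LlavaGuard safety policy prompt.
                {"type": "text",
                 "text": "Assess the image against the provided safety policy."},
            ],
        }
    ],
    "temperature": 0.0,
}

# The server from step 1 listens on port 10000.
response = requests.post("http://localhost:10000/v1/chat/completions", json=payload)
print(response.json()["choices"][0]["message"]["content"])
```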