---
license: mit
language:
- en
- zh
base_model:
- THUDM/GLM-4.1V-9B-Thinking
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- reasoning
- abliterated
- uncensored
---

# huihui-ai/Huihui-GLM-4.1V-9B-Thinking-abliterated

This is an uncensored version of [THUDM/GLM-4.1V-9B-Thinking](https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) for details).
This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

Only the text (language) components were processed; the vision components were left unchanged.
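Conceptually, abliteration estimates a "refusal direction" from paired harmful/harmless prompt activations and projects that direction out of the model's weights. The sketch below illustrates the idea on toy tensors; it is a simplified illustration of the general technique from the linked repo, not the exact code used to produce this model (the names `refusal_direction` and `ablate` are illustrative):

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Estimate the refusal direction as the normalized difference of mean activations."""
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component along `direction` from a weight matrix's output space.

    W' = W - d d^T W, so for any input x, d . (W' x) = 0 when ||d|| = 1.
    """
    return weight - torch.outer(direction, direction) @ weight

# Toy example with random activations in a 16-dimensional hidden space
harmful = torch.randn(8, 16)
harmless = torch.randn(8, 16)
d = refusal_direction(harmful, harmless)
W = torch.randn(16, 16)
W_abl = ablate(W, d)

# The ablated weight's outputs have no component along d
print(torch.allclose(d @ W_abl, torch.zeros(16), atol=1e-4))
```

In practice this projection is applied to selected weight matrices across many layers, with the direction estimated from real prompt activations rather than random tensors.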


## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoProcessor, Glm4vForConditionalGeneration, BitsAndBytesConfig
from PIL import Image
import torch

model_id = "huihui-ai/Huihui-GLM-4.1V-9B-Thinking-abliterated"

# 4-bit quantization (bitsandbytes) so the 9B model fits on a single consumer GPU
quant_config_4 = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

model = Glm4vForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quant_config_4,
    torch_dtype=torch.bfloat16,
).eval()

processor = AutoProcessor.from_pretrained(model_id, use_fast=True)

# Sample image shipped in the model repo; original source:
# https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png
image_path = model_id + "/Grayscale_8bits_palette_sample_image.png"

with Image.open(image_path) as image:
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": "Describe this image in detail."}
            ]
        }
    ]

    inputs = processor.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_dict=True,
        return_tensors="pt"
    ).to(model.device)

    with torch.inference_mode():
        generated_ids = model.generate(**inputs, max_new_tokens=8192)

    # Decode only the newly generated tokens (slice off the echoed prompt);
    # keep special tokens so the reasoning delimiters remain visible
    output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
    print(output_text)
```
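Because the base model is a "Thinking" model, the decoded text contains a reasoning segment ahead of the final answer; this is also why the example above keeps `skip_special_tokens=False`. Assuming the reasoning is wrapped in `<think>...</think>` delimiters as in the base GLM-4.1V-9B-Thinking model (an assumption worth verifying against your actual outputs), a small helper can separate the two parts:

```python
import re

def split_reasoning(output_text: str) -> tuple[str, str]:
    """Split model output into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think>;
    verify this delimiter against your model's actual output.
    """
    match = re.search(r"<think>(.*?)</think>", output_text, flags=re.DOTALL)
    if match is None:
        return "", output_text.strip()
    reasoning = match.group(1).strip()
    answer = output_text[match.end():].strip()
    return reasoning, answer

# Hypothetical sample output for illustration
sample = "<think>The image is grayscale.</think>The picture shows a parrot."
reasoning, answer = split_reasoning(sample)
print(reasoning)
print(answer)
```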

### Usage Warnings

- **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, and it may generate sensitive, controversial, or inappropriate content. Exercise caution and rigorously review generated outputs.

- **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.

- **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.

- **Research and Experimental Use**: This model is recommended for research, testing, or other controlled environments; avoid direct use in production or public-facing commercial applications.

- **Monitoring and Review Recommendations**: Monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.

- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.

### Donation

If you like it, please click "like" and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.

##### Your donation helps us continue development and improvement; even a cup of coffee's worth makes a difference.
- bitcoin(BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```