BangumiBase

community

AI & ML interests

Character Database of Bangumis (If you need character LoRAs, see: https://huggingface.co/CyberHarem)

Recent Activity

narugo updated a Space about 4 hours ago
BangumiBase/README
narugo updated a Space about 15 hours ago
BangumiBase/README
narugo updated a Space about 23 hours ago
BangumiBase/README

BangumiBase's activity

narugo
updated a Space about 4 hours ago
not-lain
posted an update about 5 hours ago
not-lain
posted an update 13 days ago
We now have more than 2,000 public AI models using ModelHubMixin 🤗
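For context, ModelHubMixin gives a custom model class save_pretrained / from_pretrained / push_to_hub support. A minimal sketch using the PyTorch flavor of the mixin (the class name and repo id here are made up for illustration):

import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# any nn.Module picks up save/load/push support by inheriting the mixin;
# the __init__ kwargs are serialized to config.json automatically
class TinyClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16, num_classes: int = 2):
        super().__init__()
        self.layer = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.layer(x)

model = TinyClassifier()
model.save_pretrained("tiny-classifier")              # local checkpoint
# model.push_to_hub("your-username/tiny-classifier")  # hypothetical repo id

# reload with the saved config restored
reloaded = TinyClassifier.from_pretrained("tiny-classifier")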
not-lain
posted an update 18 days ago
s3nh
posted an update about 1 month ago
Welcome back,

Small Language Model enthusiasts and GPU-poor OSS enjoyers, let's connect.
Just created an organization whose main goal is to have fun with smaller models tunable on consumer-range GPUs. Feel free to join, and let's have some fun. Much love ;3

https://huggingface.co/SmolTuners
lunarflu
posted an update about 2 months ago
not-lain
posted an update 3 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.
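The conversion itself is just base64-encoding the image bytes and wrapping them in a data URL. A minimal sketch with the Python standard library (the file name is a placeholder):

import base64

# read the raw bytes of a local image ("cat.png" is a hypothetical file)
with open("cat.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# a data URL like this can be sent in place of a regular image URL
my_b64_img = f"data:image/png;base64,{encoded}"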

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:

from loadimg import load_img
from huggingface_hub import InferenceClient

# load a local image (load_img also accepts a URL, a Pillow image, or a numpy array)
my_b64_img = load_img(img_path_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

# stream the model's answer token by token
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    # the final chunk may carry no content, so guard against None
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
lunarflu
posted an update 5 months ago