AI & ML interests

Stable Diffusion, Computer Vision, NLP

Recent Activity

Nymbo posted an update about 2 months ago
Haven't seen this posted anywhere - Llama-3.3-8B-Instruct is available on the new Llama API. Is this a new model or did someone mislabel Llama-3.1-8B?
Nymbo posted an update about 2 months ago
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~

Both themes have been updated to fix some of the long-standing inconsistencies introduced by the transition to Gradio v5. Textboxes are no longer bright green, and inline code is readable now! Both themes are now visually identical across versions.

If your space is already using one of these themes, just restart it to get the latest version. No code changes needed.
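
For context, a theme published on the Hub is applied by passing its id to Gradio, and the space picks up the newest published version when it restarts and rebuilds. Below is a minimal sketch, assuming a current Gradio install; the greet demo itself is illustrative, not from the post.

import gradio as gr

# Tiny demo function, just to have something to render with the theme
def greet(name):
    return f"Hello, {name}!"

# Passing the Hub id (here, Nymbo/Nymbo_Theme) pulls the latest published
# version of the theme; restarting the space re-fetches it after an update.
demo = gr.Interface(fn=greet, inputs="text", outputs="text", theme="Nymbo/Nymbo_Theme")

if __name__ == "__main__":
    demo.launch()
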
not-lain posted an update 4 months ago
DamarJati updated a Space 4 months ago
not-lain posted an update 5 months ago
not-lain posted an update 5 months ago
We now have more than 2000 public AI models using ModelHubMixin 🤗
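
For context, ModelHubMixin (and the ready-made PyTorchModelHubMixin) from huggingface_hub is what adds push_to_hub and from_pretrained to a plain model class. A minimal sketch, assuming PyTorch is installed; the class name and repo id below are illustrative placeholders, not real repos.

import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Mixing in PyTorchModelHubMixin gives the class save/push/load helpers;
# the __init__ kwargs are recorded so from_pretrained can rebuild the model.
class TinyClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.fc(x)

model = TinyClassifier()
model.push_to_hub("your-username/tiny-classifier")  # requires `huggingface-cli login`
reloaded = TinyClassifier.from_pretrained("your-username/tiny-classifier")
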
not-lain posted an update 6 months ago
DamarJati posted an update 6 months ago
Happy New Year 2025 🤗
To the Hugging Face community.
1aurent posted an update 6 months ago
not-lain posted an update 8 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load_img accepts a local path, URL, PIL image, or numpy array
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 lets you use images without uploading them to the web
                }
            }
        ]
    }
]

# stream the model's reply token by token
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")