AI & ML interests

Anime Bishojo. This organization is only for waifu datasets and LoRAs.

Nymbo
posted an update 1 day ago
Anyone know how to reset Claude web's MCP config? I connected mine when the HF MCP server first released, with just the default example spaces added. I've since added lots of other MCP spaces, but Claude.ai doesn't update the available tools... "Disconnecting" the HF integration does nothing, and deleting it and adding it again does nothing either.

Refreshing tools works fine in VS Code because I can manually restart the server from mcp.json, but claude.ai has no such option. Anyone got any ideas?
Nymbo
posted an update about 2 months ago
Haven't seen this posted anywhere - Llama-3.3-8B-Instruct is available on the new Llama API. Is this a new model or did someone mislabel Llama-3.1-8B?
Nymbo
posted an update 2 months ago
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~

Both of these themes have been updated to fix some of the long-standing inconsistencies introduced by the transition to Gradio v5. Textboxes are no longer bright green, and inline code is readable now! Both themes are now visually identical across versions.

If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
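
For anyone curious what that looks like in practice, here's a minimal sketch of a space picking the theme up from the Hub (the Blocks contents below are placeholder components; only the theme repo id comes from this post):

import gradio as gr

# Passing the Hub repo id as the theme makes Gradio fetch the published theme,
# so restarting the space is enough to pull in the latest version.
with gr.Blocks(theme="Nymbo/Nymbo_Theme") as demo:
    gr.Textbox(label="Prompt")
    gr.Button("Run")

demo.launch()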

Model Running Help
#1 opened 3 months ago by Amir1387aht
not-lain
posted an update 4 months ago
ameerazam08
posted an update 5 months ago
not-lain
posted an update 5 months ago
not-lain
posted an update 6 months ago
we now have more than 2000 public AI models using ModelHubMixin 🤗
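
for anyone who hasn't tried it yet, here's a minimal sketch of the mixin pattern with huggingface_hub's PyTorchModelHubMixin (the class, sizes, and repo id below are made-up placeholders, not from the post):

import torch
from huggingface_hub import PyTorchModelHubMixin

# subclassing the mixin adds save_pretrained / from_pretrained / push_to_hub
# to a plain PyTorch module; the __init__ kwargs are stored as its config
class TinyModel(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.linear = torch.nn.Linear(hidden_size, 1)

    def forward(self, x):
        return self.linear(x)

model = TinyModel(hidden_size=16)
model.save_pretrained("tiny-model")              # writes config + weights locally
# model.push_to_hub("your-username/tiny-model")  # hypothetical repo id, needs a write token
reloaded = TinyModel.from_pretrained("tiny-model")
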
not-lain
posted an update 6 months ago
Lewdiculous
posted an update 6 months ago
Hello fellow LLMers, just a quick notice that some of my activity will be moved into the AetherArchitectural Community and split with @Aetherarchio.

AetherArchitectural

All activity should be visible on the left side of my profile.
s3nh
posted an update 6 months ago
Welcome back,

Small Language Model enthusiasts and GPU-poor OSS enjoyers, let's connect.
Just created an organization whose main goal is to have fun with smaller models tunable on consumer-range GPUs. Feel free to join and let's have some fun, much love ;3

SmolTuners
lunarflu
posted an update 7 months ago
not-lain
posted an update 8 months ago
ever wondered how you can make an API call to a visual-question-answering model without sending an image URL 👀

you can do that by converting your local image to base64 and sending it to the API.

recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# convert a local path, URL, PIL image, or numpy array to a base64 string
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    # base64 allows using images without uploading them to the web
                    "url": my_b64_img
                }
            }
        ]
    }
]

# stream the chat completion from the vision model
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
anoha
updated a Space 9 months ago