Vishwajeet Ranjan
vkrnitd
1 follower · 1 following
AI & ML interests
None yet
Recent Activity
Replied to hexgrad's post 2 days ago:
IMHO, being able & willing to defeat CAPTCHA, hCaptcha, or any other reasoning puzzle is a must-have for any Web-Browsing / Computer-Using Agent (WB/CUA). I realize it subverts the purpose of CAPTCHA, but I do not think you can claim to be building AGI/agents without smoothly passing humanity checks. It would be like getting in a self-driving car that requires human intervention over speed bumps. Claiming AGI or even "somewhat powerful AI" seems hollow if you are halted by a mere CAPTCHA. I imagine OpenAI's Operator is *able* but *not willing* to defeat CAPTCHA. Like their non-profit status, I expect that policy to evolve over time—and if not, rival agent-builders will attack that opening to offer a better product.
Replied to crodri's post 2 days ago:
At the Language Technologies Unit of the Barcelona Supercomputing Center, we are developing state-of-the-art large language and voice models through various national and international projects. It is an exciting time to be working in generative AI! We are looking for bright and motivated individuals to help us achieve ambitious goals. Our latest opening in the Innovation group, which develops powerful and socially useful applications of AI technology, might be for you. Check it out here: https://www.bsc.es/join-us/job-opportunities/3025lsltre2
Replied to mitkox's post 2 days ago:
llama.cpp is 26.8% faster than ollama. I have upgraded both, and using the same settings, I am running the same DeepSeek R1 Distill 1.5B on the same hardware. It's an apples-to-apples comparison.

Total duration:
- llama.cpp: 6.85 s (26.8% faster)
- ollama: 8.69 s

Breakdown by phase:
- Model loading: llama.cpp 241 ms (2x faster) vs. ollama 553 ms
- Prompt processing: llama.cpp 416.04 tokens/s with an eval time of 45.67 ms (10x faster) vs. ollama 42.17 tokens/s with an eval time of 498 ms
- Token generation: llama.cpp 137.79 tokens/s with an eval time of 6.62 s (13% faster) vs. ollama 122.07 tokens/s with an eval time of 7.64 s

llama.cpp is LLM inference in C/C++; ollama adds abstraction layers and marketing. Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
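The headline figure can be sanity-checked from the reported total durations. A minimal sketch, assuming "faster" is measured relative to llama.cpp's own runtime (i.e. how much longer ollama takes as a fraction of llama.cpp's time):

```python
# Reported total durations from the benchmark above.
llama_cpp_total_s = 6.85  # llama.cpp total duration in seconds
ollama_total_s = 8.69     # ollama total duration in seconds

# Percent by which ollama's runtime exceeds llama.cpp's.
speedup_pct = (ollama_total_s - llama_cpp_total_s) / llama_cpp_total_s * 100

print(f"llama.cpp is {speedup_pct:.1f}% faster")  # ~26.9%, consistent with the quoted 26.8%
```

The ~0.1-point difference from the quoted 26.8% is presumably rounding in the underlying measurements.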
Organizations
None yet
models
None public yet
datasets
None public yet