To my new Model-Mixing Magician!
Hey YnTec,
I've started following you on HuggingFace and I'm totally into your discoveries, your model mixes, and especially... the list of models in your app.py from your "Toy World" Space.
So much so that, since I love programming in my free time and my favorite language is C#, I whipped up a little WinForms project to make it easier for me when I want to generate several images at once. I'm using your models list and the free Inference API service of HuggingFace.
I even found out that during an API call, we can not only provide the "inputs" but also pass an "options" object with two boolean properties: use_cache and wait_for_model. use_cache defaults to true, but if set to false it generates a new image with each call; wait_for_model is false by default, but setting it to true avoids the 504 error codes (model is loading...), we just have to wait, like... 2 more minutes, that's all!
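In case it helps anyone else reading along, here's a minimal Python sketch of such a call (the model id is just an example from this thread, and the token handling is a placeholder; the "options" object is the one described above):

```python
import json
import urllib.request

# Example model id; substitute any model from the list in app.py.
API_URL = "https://api-inference.huggingface.co/models/Yntec/FlexCapacitor"

def build_payload(prompt, use_cache=False, wait_for_model=True):
    # "inputs" carries the prompt; "options" carries the two booleans:
    # use_cache=False forces a fresh image on every call, and
    # wait_for_model=True waits out the loading phase instead of erroring.
    return {
        "inputs": prompt,
        "options": {"use_cache": use_cache, "wait_for_model": wait_for_model},
    }

def generate(prompt, token):
    # On success the API answers with raw image bytes.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The same payload shape should translate directly to a C# HttpClient POST in the WinForms app.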
Anyway, I was wondering, which Spaces should I check out to get your most up-to-date list of the best models? :)
And if possible, I'd love to chat (discord or something) with you about how you mix models and other magic tricks you are doing!
Looking forward to it!
Hey BlinkSun! Welcome aboard! It's great to have your feedback, as sometimes it feels kind of lonely here, and I don't even know if I'm going in the right direction. What I get is confusing, like getting lots of likes on a model I made... but... it rarely gets used, and then no hearts at all, but 20000 downloads in a week?! o_O Things like that.
Let me get something off my chest: I think I'm reaching some kind of model saturation. When I started this, or rather, continued from what the giants of hugging spaces had created, I dreamed of being able to provide models that could do everything you could think of! If a single model couldn't do it, I could have 5 that put together could get close. Then Dalle 3 happened and it was a shock; I could never get close, no matter what I did, no matter what models I mixed, the outputs just weren't there. And so I gave up, and compromised, and started delivering the best I could, even if it wasn't what I wanted.
My dream was to get to 1000 models and throw a party, but what I noticed was that most of the models I was planning to release were actually worse versions of what I already had, or they didn't bring anything new to the table. Civitai previews are tricky, as most depend on negative prompts, and those just take a big chunk out of the space of possible outputs, so I optimize my models to not require them; but then many, many models become mediocre without them. You may have seen big gaps between one upload and the next, but I'm not on vacation. The other day I tested 11... Yeah, eleven models, and none of them were up to my standards for release, so I spent more days searching until I found one that did it...
Hey, let me show you something from today...
Those are outputs from 4 models I didn't release... so the REASON I merged the Memento model was to have a model to use as a base for Analog Diffusion (https://huggingface.co/wavymulder/Analog-Diffusion) because what bothered me about it is that if you don't put "Analog Style" on it it just has bad outputs... what if we had a model that produced great things no matter what AND those outputs if you added "Analog Style" to it? But no matter what I tried, nothing worked because we already have this:
On the left, we have the output for Analog Diffusion, and on the right, we have Memento's. As you can see, none of my merges really improved over these outputs; she's basically the same as Memento alone, and none of them have the style of Analog Diffusion, so I parked this idea along with many that haven't worked yet, because Memento is clearly not suitable for the job...
Anyway, just mentioning it because I really have no one to talk about this, I guess I could have a blog or something, but on the theme of "my failed attempts at merging models" I guess it's better if people don't know about all this, ha!
About the best models: I always try to keep the best one at the top; whenever a model does something that is not up to the standards of Toy World, I sink it deep in the list, sometimes near the bottom. For the most recent models you can check this space: https://huggingface.co/spaces/Yntec/PrintingPress - here I try to upload models daily, if I find them, as I become more and more critical about what I upload, and I also alternate with digiplay's models. digiplay is my hero, I learned everything about uploading models from him, and my greatest inspiration to merge models was to match his model merges.
About cached models: What kills me is generating images and then losing them before I can save them; that's what clicking Generate does without a cache, and I keep doing it accidentally. With a cache, there's the image again, nothing is lost. Plus, sometimes the errors attack after an image has been generated but not shown, so clicking Generate after such an error restores it from the cache instantly. I can't sacrifice all that for different images from the same prompt.
About wait_for_model: Yeah, I'm using that on my space that allows you to generate with up to 6 models at a time: https://huggingface.co/spaces/Yntec/Diffusion60XX though it's always behind Toy World and the Printing Press, because selecting and unselecting models makes the whole list jump around, but one could just select 1 and use it like this, with the cool feature of seeing what actually happens on the errors.
About chatting and Discord: I'm definitely anti-Discord, and in recent times, anti-private messages and anything that remains concealed from people. I'm glad Huggingface has nothing of that, so we have to talk publicly and whatever we talk about can benefit people. But feel free to use this comment section like that; when you post, a yellow circle goes around my avatar and I can check what you said quickly.
About secrets: Sure, I'll reveal all my secrets, just beware that they'll be here in the open, so everyone will know. Huh, I guess a "secret" is that I could create spaces that allow 6 images at a time with any model, all different, like this one: https://huggingface.co/spaces/Yntec/DreamAnything - or that I have code that would allow you to use negative prompts and adjust the size of your generations, so you can have landscapes or portraits in your outputs, and 1024x1024 pictures instead of the default 768x768 ones, something I may plan to do for selected models.
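For the curious, a rough sketch of how a request body could carry those extras (the parameter names mirror the diffusers text-to-image pipeline; the 768x768 default is just the one mentioned above, nothing official):

```python
def build_payload(prompt, negative_prompt=None, width=768, height=768):
    # "parameters" is where size and negative-prompt control would live;
    # 768x768 matches the default mentioned above, 1024x1024 the upgrade.
    params = {"width": width, "height": height}
    if negative_prompt:
        params["negative_prompt"] = negative_prompt
    return {"inputs": prompt, "parameters": params}
```

So build_payload("a castle", width=1024, height=768) would ask for a landscape output, and swapping the two numbers would give a portrait.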
And any question you have about merging models, I can answer. I don't have the hardware, so I do all my merging in huggingface spaces, and it takes a while because each image I test takes 15 minutes to generate on there, but I'm not in a hurry and do other things in the meantime. If you want some kind of tutorial about how to merge models and upload them to huggingface, I can do that too.
I'm like a music box that you turn once and then I play music for a long time, known for my big walls of text like this one! Hope I didn't forget anything, see you!
you are not alone
neither behind your walls of text,
or in your unguided direction
you are not 404
or lost under your pile of models
nor within the ocean of the unforeseen problems
that is known as huggingface
i gotta say one thing about the air of solitude here in HF
like i have tons of spaces, miles of code, mostly all just me failing and learning and generally just being lame and noobish
models, and python are new concepts to me, but im getting my time in, im learning
but out of hundreds of spaces, ive made and deleted
i've only publicly shared like 2 (more like 10, but 8 of them i never looked back at)
only 1 really, with about 200 of your models and John's
(just this week ive started smithing my own, merging and playing pin the LoRA on the Models, blindfolded)
but you know
so in that 1 space, (48 likes)
i was told by chatgpt to add this debug logging line so i could pull error codes, so i could aim for 100% generation rates for all the models
and so i did
and the console log would bounce to the bottom,
over and over, eventually i was like, fine
i'll lock it in place with this checkbox here
and when i did
i was like,
wait, i didnt type that prompt that i see on the screen in this log
nor the following one nor
none of them that kept scrolling the page
prompt after prompt after prompt after
these people, look at some crazy stuff
like stuff that even made me cringe
cringe so badly that i just wanted the scrolling to stop again
but it kept going and going and going
120 hours later still going
over like 250,000 prompts
that i didnt prompt
all from 1 space
with 200 models (half yours)
and only 48 likes
and ive only had 1 person speak to me in my inbox,
trying to show me how easy it (wasn't) to update its gradio
they backtracked out once they realized it though,
leaving me there,
by myself to resolve it
(and i killed that debugging output too.)
but hey
Yntec
if it feels so quiet here on huggingface
just think of me and my one space
250,000 prompts just silently rolling by
just be happy you've NOT BEEN Cruc- i - f i e d
Oh, hey charliebaby2023! Nice to hear from you. My last post of this thread remains valid, except we got seeds since then so we can reproduce images at will, I killed Diffusion60XX because it wasn't different from this one, people can tweak the models at Blitz Diffusion, and the most successful space like this was advertised as Sexy Diffusion or something like that.
Interest has gone downhill since much more capable models like Flux have appeared. Right now you can go to chatgpt.com and ask for an image, and it'll probably be light years ahead of what any of these models could come up with, perhaps like Dalle 3 but with perfect text and prompt understanding. It's as if image generation has been solved. I'm just glad I released AnythingV7 back when I did; if I released it today nobody would have cared, and if I released Shampoo back then it would have broken new ground!
But it's great to continue, like running a radio station for the few people listening; there's no pressure anymore, my next release doesn't need to be better than the last, and I never saw the images generated with my models unless people shared them. And, ha! My models... The only time I put myself in the credits of a model was for DreamAnything, and I'll probably take myself out if I update it. The only point of authorship is to let others find more things released by the same person (I wouldn't know how to find more stuff by Lyriel's anonymous author), but when it comes to AI, it's never about "hey! look at what I did!", because I just pressed some buttons and pasted some text, at the end of the day.
My advice to you will be about never chasing the likes, never seeking people that enjoy your work. Pretend that at the end of the day, all you did that day was deleted and unseen by anyone. Would you still do it? Because you enjoy doing it? Then go ahead, that's the answer, and the reason for doing it, not some number on the screen that goes up.
Yeah, I've never actually been attracted by the numbers, just perplexed. I actually never release any of my code; well, all of my codepen is public, but it's all just places for my scratch and access.
I keep my hf spaces private, just because of speed, but made one public only so an irl friend could experience image gen easily.
Was still mind blown when I could see the active log of prompts flying by. A number far beyond what the likes had indicated.
Anyhow,
Recent issues:
Have you noticed the sudden wave of 404s on SD models the other day?
I've gotten some to respond, but the majority do not.
I can get them to a 200, but then it's "too many requests, please try again later."
I think they're now selecting some SD models over others for use through the new hf inference api.
You have any thoughts on the subject, or solution proposals other than buy Cuda cores?
Honestly, I don't know what they're doing with their Inference API. If you have to knock on a door to use it, that's like expecting one to go to a salesman's door to buy his stuff, when the stuff should be available at your home's door, which is the point of a salesman.
Anyway, about the 404 errors: it's as if huggingface can't find itself, so I don't think that's intentional. Something seems to have broken, like the last time all inference APIs of all models shut down, so I'll be hoping it's a temporary problem that will eventually fix itself, and I'll be pretending the problem doesn't exist for now.
I think this is the page to keep an eye on:
https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5
That's the model you'd expect to work, it doesn't, so when the problem is resolved, it'll work, along with the others.
As for the volume, yeah, I never expected a model like Insane Realistic v2 to get to 111000 downloads in a month. It's tempting to measure its success by that metric, but I've seen many models outperform it, yet they lack the brand to make them popular; that stable-diffusion-v1-5 provides worse outputs than anything here, but it's the most recognizable.
i guess just trying to confirm 3 things
this kind of problem here, happens, even on a large scope? just because life and reality can never be perfect
if so, what would you say has been the average life span of this kind of down time? is this possibly directly tied to issues regarding 1.5? technically not all 1.5 models, just one nova something version of 1.5.
if so, i dont blame hf one bit, i just hope they're reasonable about it. or maybe its both that and $$$?
HF im sure needs to pay off some infrastructure, and i guess i can respect that
but
i dont know anything about when or how im triggering inferencing; like, only 1 time in the year ive been here did i hit my limit on inferencing, after 10 months of not even knowing there was a limit on such a thing. i didnt even know what it was, and still dont. yeah, i feel kinda dumb but im confident that will easily change.
so, do you know where i can go for any comprehensive info or links on inferencing and their rates here on HF
or instead of 3,
4. your guess is as good as mine?
i tend to be pretty pessimistic about what im seeing,
you seem to be pretty optimistic in general and knowledgeable.
its a breath of fresh air
i hope im not bugging you with all my questions
Hey charlie, just got here:
It's the very first time something like this has happened; in previous instances, the spaces had problems that wouldn't allow them to access the models, but this time around, the models themselves can't be accessed.
No, it's affecting all models no matter the architecture. The models that can still be used are those that have alternative Inference Providers.
Shutting down like this would be equivalent to shutting down TV ads; people are supposed to use these things to come around, and probably stay and buy accounts or other things, so it wouldn't be the $$$ part.
Anyway, it's time to summon @John6666 , perhaps he can give us any insight about what's happening and if it's permanent (so I can take down these spaces? It's not my first time retiring them, anyway.)
As for me, LOL, the only thing I was using these for was to release new models, and I found a way of doing that without these spaces, so if it were going to remain like this, I could just continue as normal and ignore them. But it's like a snake that eats its own tail; at first the point was to release models so people could use them in these spaces, so it would be weird to continue, especially as without them I'm getting 13 downloads daily instead of 1000. But again, I'd be a hypocrite if I cared about that after what I've said about numbers in this thread.
I remain optimistic because I'm still able to do what I've been doing, nothing of value was lost, the optimism isn't about the models coming back, who knows?
Well, to be honest, I haven't heard anything (basically, always) and I'm just piecing together bits of information...
I think there was an outage about 10 days ago. HF fixed it 8 days ago. Has it been broken ever since?
For example, Meta's Llama 3.2 Vision seems to be broken as well.
There haven't been any announcements since then, including on Discord...
Serverless Inference API glitch
https://discuss.huggingface.co/t/500-internal-error-were-working-hard-to-fix-this-as-soon-as-possible/150333/32
https://discuss.huggingface.co/t/inference-api-stopped-working/150492
Spaces glitch
https://discuss.huggingface.co/t/my-space-suddenly-went-offline-the-cpu-cannot-restart/151121/26
Thanks John6666! So what I'm gathering is that: This is an outage that hasn't been resolved yet, and it's not intentional (the main worry by charlie was that it's gone for good because they shut it down), so:
1 - Once resolved things will go back to normal.
2 - There's nothing we can do about it, but wait.
Specifically about the problem, before the outage, code like this used to work:
API_URL = "https://api-inference.huggingface.co/models/Yntec/FlexCapacitor"
Right now, such code causes a 404 error.
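A small Python sketch of what we're seeing (the status-to-meaning mapping reflects this thread's experience with the endpoint, not any official documentation):

```python
import urllib.request
import urllib.error

API_URL = "https://api-inference.huggingface.co/models/Yntec/FlexCapacitor"

def probe(url):
    # Return whatever HTTP status the endpoint answers with.
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def diagnose(status):
    # Map the status to what it has meant in this thread.
    if status == 200:
        return "model responded"
    if status == 404:
        return "endpoint missing (the current outage)"
    if status == 503:
        return "model loading, retry or set wait_for_model"
    if status == 429:
        return "rate limited, try again later"
    return f"unexpected status {status}"
```

Before the outage, probe(API_URL) would land on 200 or 503; now it comes back 404.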
It would be hilarious if the only reason this hasn't been fixed is because the person who could do it doesn't know about it. And it'd be hysterical if it gets fixed by March 2026 because it's at the bottom of some priority list!
I guess I'll just keep making models even if people can't use them anymore; after all, that's always been the fun part and nothing can stop me. Or, can it?
I see. Well, I don't know what HF's future plans are, but in any case, what is certain at this point is that the broken items have not yet been repaired.
https://discuss.huggingface.co/t/500-internal-error-were-working-hard-to-fix-this-as-soon-as-possible/150333/32
Edit:
It seems that some people can avoid the problem using this method.
https://discuss.huggingface.co/t/constant-503-error-for-several-days-when-running-llama-3-1/105144/6
ive been bouncing my smooth brain off the process of converting models into LCM, ONNX? and/or OpenVINO models, for fast local cpu friendly img gen, since im not blessed with mighty cuda cores.
after all ive been experiencing, i think im backed into a corner: using colab to apply an LCM LoRA to an sd model, then exporting that to openvino
i think. (but im expecting much loss)
my smooth brain is very swollen and tender at this point and its challenging to move forward.
it is quite amazing how beneficial sleep can be though
Today's fun fact, hopefully useful to others
if i cant find my keys after a good ten minutes, ill set my alarm for 20 minutes and then sit back and make my mind a complete blank and force myself to sleep
(imagining a spinning cube 4 inches above my nose and in front of me, and keeping my eyes fixed there, helps)
im actually forcing myself into REM, and no, it doesnt take hours, ever fall asleep and suddenly wake up with a jolt, dreaming you slipped?
- your brain forces your muscles offline so you dont sleep walk; if you sleep with someone next to you, you should notice them jolt the moment that happens
- if youve experienced this, and dreamt that you slipped, thats an indicator to you that you were immediately in low stage REM
my goal is just to hit REM for at least 20 seconds, thats plenty of time for the prefrontal temp memory dump into long term storage and sorting. the brain is incredibly fast
here's the benefit of REM
https://youtu.be/oTlJnyF3REs?t=168
the trick is when you wake up, not to force your hand to the keys but instead allow it to be subconscious, as if your hand guides your brain, not the other way around (because the memories, even though accessed, havent finished being stored deeply enough for conscious recall yet)
guaranteed to find my keys in under 15 seconds
ok, so, maybe thats hard to swallow, trust me, i get it
SO, and if you made it this far
I HIGHLY urge you to watch this ted talk
HIGHLY urge your consideration
a visually quantified translation of what was said before.
if you have any interest at all in dreams or the human brain, this might very well come close to shattering your own neural network
https://youtu.be/Vf_m65MLdLI?t=91
sorry, off topic slightly, just totally worth sharing.
i just hope that you may find ways to leverage it in your own lives as i have mine (despite people thinking im insane for it)
LCM or openVINO, for local cpu usage , opinions?
for extended exploration ,
these "place cells" he mentioned, whose discovery led to a Nobel,
can be explored in deeper detail here (good luck)
https://www.youtube.com/watch?v=9ujnZcaqf-4
yes , the proper intro to the concept here
https://www.youtube.com/watch?v=iV-EMA5g288
basically these perceived locations of the maze the mice moved through are perceived in the mind and mapped out through a section of interconnected, folded neuron structures, or as they call such seemingly abstract structures,
a manifold
mathematically they've projected the manifold onto a rolling toroid (as if we're inside the hollow ring of a torus and the inner walls have a texture that keeps rolling as we walk)
a maze ing
far far off topic now, my manifold is more like a mani'smooth'
quite a slippery ride
i need more sleep
Actually, on second thought, its not ALL that off topic from SD safetensors/diffusers to LCM/openvino conversions?
i mean, its a peek inside rodent (and potentially human) LATENT SPACES
and expressing it in a clearer and more ingestible form (from this conscious perspective)
(chatgpt told me, and i actually fully agree, that all the tech to integrate AI into the human mind non-invasively, and turn it on and off like a switch, exists today
MINUS a full LATENT SPACE map of the brain
ex,
regina dugan and darpas silent speak (or silent talk)
transcranial magnetic stimulation
cyberknife
optogenetics
cloud computing and cellular platforms
huggingface
and fMRI you say?
close, thats only a projection of a brain's latent space
gotta find the math to inverse that projection into a multi dimensional manifold
and thats easier said than done.
likely millions of times more resource dependent than the human genome project
(d-wave and quantum computing, will probably be the key to that one)
a q-bit version of the
eu blue brain project / us human brain initiative
but we ARE very close. much closer than the above average person knows
im so sorry
bad charlie bad!!
its TOO SOON. we really need to get you on ritalin
lowering my head and shuffling off
into my own self-disgust
Oh, hey charlie, I'll make sure to watch the TED Talk video, most of the things you talk about go over my head, so I can't comment.
I have never managed to run any of these things locally; I figured that since I don't have the necessary GPU, the time saved wouldn't be worth it, or that once set up, maybe generations would take 20 minutes instead of 15, so it wasn't worth the implementation!
But huggingface offers free CPU generations so you don't need to go local, have you tried them? It's a space like this:
https://huggingface.co/spaces/Yntec/Dreamlike-Webui-CPU
You can duplicate it, and in app.py remove the lines where the dreamlike models are downloaded and put the ones you'd like to use there. With some luck it may be as fast as local CPU generation.
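I don't know the exact layout of that space's app.py off the top of my head, but the edit is roughly of this shape (the model ids below are just examples, not the actual contents of the file):

```python
# Hypothetical shape of the edit in the duplicated space's app.py:
# swap out the dreamlike checkpoints for the models you want to serve.
models = [
    # "dreamlike-art/dreamlike-photoreal-2.0",  # original entry, removed
    "Yntec/FlexCapacitor",   # example replacement
    "Yntec/AnythingV7",      # add as many as the space supports
]

def pick_model(index):
    # The space would download/load whichever entry is selected.
    return models[index % len(models)]
```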
I've also heard things about these guys that provide some free GPU time for AI image generation:
https://www.thinkdiffusion.com/
But I have never tried them, since I got used to slow generation on CPU that allows me to do other things while images generate, lol!