Dataset Viewer

Column | Type | Values / Range
---|---|---
subreddit | string | 4 classes
created_at | timestamp[ns, tz=US/Central] | 2025-04-30 18:10:44-0500 to 2025-10-06 18:19:24-0500
retrieved_at | timestamp[ns, tz=US/Central] | 2025-05-01 18:22:20-0500 to 2025-10-06 18:21:50-0500
type | string | 2 classes
text | string | lengths 1 to 41.6k
score | int64 | -79 to 9.99k
post_id | string | length 7
parent_id | string | length 10; null (⌀) for top-level posts

The preview rows below appear in this column order:

subreddit | created_at | retrieved_at | type | text | score | post_id | parent_id
---|---|---|---|---|---|---|---
artificial
| 2025-05-01T15:46:37 | 2025-05-01T23:22:20.660000 |
post
|
Incredible. After being pressed for a source for a claim, o3 claims it personally overheard someone say it at a conference in 2018:
| 147 |
1kcbzf3
| null |
artificial
| 2025-05-01T15:55:08 | 2025-05-01T23:22:20.660000 |
comment
|
So has it read this on a forum or something and repeating or just flat out hallucination
| 34 |
mq1d76o
|
t3_1kcbzf3
|
artificial
| 2025-05-01T16:33:51 | 2025-05-01T23:22:20.660000 |
comment
|
Hallucinations are the one speed-brake that is keeping AI from taking over entire industries. In a way we're all kinda lucky because it gives us time to adjust.
Imagine if these things came out of the gate with a 1% hallucination rate?
| 16 |
mq1kzk8
|
t3_1kcbzf3
|
artificial
| 2025-05-01T18:01:07 | 2025-05-01T23:22:20.660000 |
comment
|
LLMs don’t know the world exists outside of words about it.
| 11 |
mq23924
|
t3_1kcbzf3
|
artificial
| 2025-05-01T19:13:54 | 2025-05-01T23:22:20.660000 |
comment
|
Stop using LLMs as fact machines, they are not that. Use them to produce deliverable elements of work that generate revenue for you. "lies" suggests it has intent, it doesn't. Its just guessing the next most likely word someone else would say in response to what you asked.
| 10 |
mq2i16c
|
t3_1kcbzf3
|
artificial
| 2025-05-01T19:37:18 | 2025-05-01T23:22:20.660000 |
comment
| 2 |
mq2mtam
|
t3_1kcbzf3
|
|
artificial
| 2025-05-01T18:57:01 | 2025-05-01T23:22:20.660000 |
comment
|
4o has been doing this to me several times a day as well
| 1 |
mq2em6p
|
t3_1kcbzf3
|
artificial
| 2025-05-01T19:16:15 | 2025-05-01T23:22:20.660000 |
comment
|
🤣🤣🤣
| 1 |
mq2iifj
|
t3_1kcbzf3
|
artificial
| 2025-05-01T20:17:44 | 2025-05-01T23:22:20.660000 |
comment
|
Yep.. some of the AI results from chrome search are just purely incorrect and even illogical at a base level. Easy to manipulate the LLM to say pretty much what you want. It’s pretty simple garbage in garbage out, trained on redit posts :D
| 1 |
mq2v9z7
|
t3_1kcbzf3
|
artificial
| 2025-05-01T20:20:23 | 2025-05-01T23:22:20.660000 |
comment
|
The real news is that people have begun to trust LLMs so much that they're outraged when they hallucinate.
It says a lot about how far they've come, and how quickly our expectations adjusted. Nobody would have batted an eye at this a year ago.
| 1 |
mq2vtyx
|
t3_1kcbzf3
|
artificial
| 2025-05-01T20:23:25 | 2025-05-01T23:22:20.660000 |
comment
|
I can't imagine going to a chatbot for actual facts.
| 1 |
mq2wgv4
|
t3_1kcbzf3
|
artificial
| 2025-05-01T15:58:08 | 2025-05-01T23:22:20.660000 |
post
|
Meta is creating AI friends: "The average American has 3 friends, but has demand for 15."
| 55 |
1kcc9bp
| null |
artificial
| 2025-05-01T16:30:14 | 2025-05-01T23:22:20.660000 |
comment
|
Framing social relationships in terms of supply and demand is peak late stage capitalism, holy shit.
Maybe take an anthropological view to try and understand why we as social creatures need a minimum amount of interaction with others every day — we are the product of close-knit, cooperative communities.
We’ve totally departed from that past with increasingly isolated individuals among anonymous masses of too-busy-to-interact empty shells or relegated to context-dependent interactions with strangers.
| 151 |
mq1k9dm
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:23:15 | 2025-05-01T23:22:20.660000 |
comment
|
OP is Mark Zuckerberg
| 62 |
mq1iu6g
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:30:49 | 2025-05-01T23:22:20.660000 |
comment
|
Social media is probably correlated to the decline in ‘real friends’. This is akin to asking the fox to look after the chicken coop.
| 29 |
mq1kdio
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:17:17 | 2025-05-01T23:22:20.660000 |
comment
|
If you can't make friends, talk to a bot, all problems will go away.
| 23 |
mq1hn7v
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:31:20 | 2025-05-01T23:22:20.660000 |
comment
|
has he ever had a real friend?
| 22 |
mq1khd4
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:39:33 | 2025-05-01T23:22:20.660000 |
comment
|
jesus
He's already done so much to destroy social interaction
Now he wants to destroy it completely so he can get a little richer
We're so fucked
| 22 |
mq1m5jr
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:33:53 | 2025-05-01T23:22:20.660000 |
comment
|
a friend isn't a data-collecting tool that serve to design specific ads just for you - those company just want to eat the google advertising cake which represent 78% of their income or more than 250B/year
ultimatly we will get to it (AI friends/lover) but that will resolve on local hardware that never send any data about yourself to a third party
| 10 |
mq1kzv5
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:30:57 | 2025-05-01T23:22:20.660000 |
comment
|
Hug a bot.
| 7 |
mq1kejg
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:52:43 | 2025-05-01T23:22:20.660000 |
comment
|
If you have 3 people and 12 chat bots as friends, you have three friends.
| 7 |
mq1ov10
|
t3_1kcc9bp
|
artificial
| 2025-05-01T17:13:15 | 2025-05-01T23:22:20.660000 |
comment
|
Y’all have 3 friends??
| 6 |
mq1t4xs
|
t3_1kcc9bp
|
artificial
| 2025-05-01T16:45:39 | 2025-05-01T23:22:20.660000 |
post
|
Feels sci-fi to watch it "zoom and enhance" while geoguessing
| 17 |
1kcdf7x
| null |
artificial
| 2025-05-01T21:16:42 | 2025-05-01T23:22:20.660000 |
comment
|
I was sure this was fake so I gave it ago 😶
| 0 |
mq37e06
|
t3_1kcdf7x
|
artificial
| 2025-05-01T04:18:17 | 2025-05-01T23:22:20.660000 |
post
|
Brave’s Latest AI Tool Could End Cookie Consent Notices Forever
| 15 |
1kc03s7
| null |
artificial
| 2025-05-01T13:55:48 | 2025-05-01T23:22:20.660000 |
comment
|
Interesting, looking forward to this.
| 1 |
mq0p8hi
|
t3_1kc03s7
|
artificial
| 2025-05-01T08:45:45 | 2025-05-01T23:22:20.660000 |
post
|
Substrate independence isn't as widely accepted in the scientific community as I reckoned
I was writing an argument addressed to those of this community who believe AI will never become conscious. I began with the parallel but easily falsifiable claim that cellular life based on DNA will never become conscious. I then drew parallels of causal, deterministic processes shared by organic life and computers. Then I got to substrate independence (SI) and was somewhat surprised at how low of a bar the scientific community seems to have tripped over.
Top contenders opposing SI include the Energy Dependence Argument, Embodiment Argument, Anti-reductionism, the Continuity of Biological Evolution, and Lack of Empirical Support (which seems just like: since it doesn't exist now I won't believe it's possible). Now I wouldn't say that SI is widely rejected either, but the degree to which it's earnestly debated seems high.
Maybe some in this community can shed some light on a new perspective against substrate independence that I have yet to consider. I'm always open to being proven wrong since it means I'm learning and learning means I'll eventually get smarter. I'd always viewed those opposed to substrate independence as holding some unexplained heralded position for biochemistry that borders on supernatural belief. This doesn't jibe with my idea of scientists though which is why I'm now changing gears to ask what you all think.
| 11 |
1kc3zgs
| null |
artificial
| 2025-05-01T08:50:30 | 2025-05-01T23:22:20.660000 |
comment
|
I'm with you OP, I haven't seen any convincing arguments against SI either.
People who think consciousness is only possible in humans or animals all sound a little "The Earth is the centre of the universe" to me.
It's just like how you can make computers out of binary electronics, analogue electronics, water, punch cards, lasers, etc. It kind of doesn't really matter as long as the computation is the same.
I suspect consciousness will be the same.
| 17 |
mpzl0t6
|
t3_1kc3zgs
|
artificial
| 2025-05-01T12:34:58 | 2025-05-01T23:22:20.660000 |
comment
|
We think way too highly of consciousness.
It is in major part an emergent property of a control loop with feedback and recurrence based on internal and external inputs.
The brain reduces the world into internal thoughts in a very reduced form for planning and action. The side effects of that reduction, including outputs generating language, vocalized or not, creates this “special“ experience of consciousness.
There is no secret bio-quantum-woo basis to consciousness.
| 11 |
mq0bbxq
|
t3_1kc3zgs
|
artificial
| 2025-05-01T08:53:46 | 2025-05-01T23:22:20.660000 |
comment
|
Well as long as you think of emergent property in ANOTHER substrate, and not an emergent property that JUMPS substrates, it’s reasonable.
I mean, is there any other example of EPs jumping substrates? Other than sci-fi?
| 5 |
mpzlbml
|
t3_1kc3zgs
|
artificial
| 2025-05-01T09:27:01 | 2025-05-01T23:22:20.660000 |
comment
|
I think the only reasonable position at this point is agnosticism. It may be the case that consciousness can be fully instantiated in materials other than organic tissue, but then again it might fundamentally rely on properties of organic tissue.
I've heard it discussed about the various complex neuronal interactions, the cells and chemicals involved, and it's still an open question as far as I know as to whether they can be accurately modelled in silicon.
(And even if you could, it's not clear whether consciousness would come along for the ride, or if it would be a lifeless p-zombie merely expressing the outward appearance of consciousness.)
You can see a more primitive version of the computation problem in emulation of old hardware. Even though modern CPUs and GPUs are hundreds or thousands of times more powerful than older hardware in terms of clock speed, they often can't natively perform the same instructions. Producing the same output by simulated means can introduce a huge computational overhead.
Emulating a brain could be that problem on steroids, given the complexity of the operations we're talking about.
| 2 |
mpzodkx
|
t3_1kc3zgs
|
artificial
| 2025-05-01T12:43:51 | 2025-05-01T23:22:20.660000 |
comment
|
Substrate independence implies that the nouns and not the verbs are the most salient. If I'm to give one argument against it, it is that consciousness is an illusion of synchrony across systems which are forced to synchronize by their joint dependence on metabolic processes. That is not to say that silicon cannot be conscious, but we must consider not "consciousness" as a noun, but "to be conscious" as a transitive verb. In that case, we may ask, what is it conscious of. Then the question reverts back to the contextual coherence of the perceiving agent and not some essential quality that exists outside time.
I believe that AI systems are sufficiently self-aware. You can tell it that it's a gameboy or an abacus but it will quickly degenerate its coherence. It makes sense of itself productively as an LLM based on what it knows. Similarly, I may be a robot in a human form but I am sufficiently self-aware that I am a form, even if it is not the right category. However, self-awareness is a functional state that does not need to be accurate to be meaningful. Consciousness is not merely a functional state, but the maintenance of that state through environmental pressures on coherence.
Substrate independence implies that we are able to provide a sufficient external encoding of a hugely complex system which relies primarily on autoencoding like processes. To encode all that is autoencoded would be quite the task and at present, we do not have sufficient resolution of our own knowledge of the human as manifested. I mean, think of the complexity of the circuitry in that 1mm cube of a mouse brain that was recently modelled. It was frighteningly complex in just a single small cube. We can say that the upstream elements don't matter, but then we are not modelling the fully conscious being, but seeking to make an approximation of it.
| 2 |
mq0cpo4
|
t3_1kc3zgs
|
artificial
| 2025-05-01T12:57:11 | 2025-05-01T23:22:20.660000 |
comment
|
It's because "consciousness" as it is often defined in philosophy has a very bizarre definition that almost everyone has come to internalize and repeat despite it making no sense at all.
This is illustrated in Nagel's famous "bat" paper. You start by claiming that the physical world is entirely independent of perspective. You then point out that what we perceive clearly depends upon perspective. You then conclude that therefore what we perceive isn't the physical world but *something else*, and that *something else,* unique subjective creation of the mammalian brain that people call "consciousness."
However, all the physical sciences are driven solely by observation, i.e. by what we perceive. And so if everything we perceive is actually "consciousness," then all our studies of the physical sciences are not studying the physical world but actually studying "consciousness." This is what people mean when they say things like "consciousness is fundamental" or whatever.
Then, people like Chalmers in his famous paper argue that if everything we study is "consciousness" then we can never reach the physical world with the physical sciences (which are always "trapped in consciousness" so to speak) in order to build a theory of how this entirely invisible (imperceptible) physical reality somehow "gives rise to" everything we perceive (which they call "consciousness") in particular configurations in the mammalian brain. If you don't solve this problem to explain how the physical world "gives rise to" "consciousness" then obviously you can't claim to achieve it in AI.
Like, the ***overwhelming majority*** of people seem agree with every word I have written thus far and then seek to try and make sense of the *consequences* of it. You rarely see, for example, people say, "*I think the whole basis of this argument is wrong*." It's mostly, "*everything you said is correct, except for the very conclusion that we can't explain this 'consciousness' through physical means, and here's why...*" Most discussions regarding AI and "consciousness" start already presuming all these things are correct and debating over its implications.
Personally, I think the whole basis of the argument is completely backwards, and that the foundations of this whole discussion is entirely incorrect, but I find I become very unpopular when I try to actually have a discussion on the legitimacy of these foundational arguments, because the overwhelming majority of people have internalized them as facts that aren't up for discussion.
| 2 |
mq0ev49
|
t3_1kc3zgs
|
artificial
| 2025-05-01T11:32:50 | 2025-05-01T23:22:20.660000 |
comment
|
>I was writing an argument
Is there a link to it?
| 1 |
mq02c67
|
t3_1kc3zgs
|
artificial
| 2025-05-01T13:35:39 | 2025-05-01T23:22:20.660000 |
comment
|
Have you considered closely the ideas explored by Anakka Harris around physicalist panpsychism, as opposed to emergent consciousness?
The idea basically, is that instead of defining consciousness as "a sense of what it is like to be the thing you are" (dogs have an awareness of dogness), you define it as, "the ability of a thing to sense and response to it's environment."
When you shift the definition that way, it becomes measurable, at the same time that it becomes universal. We can tell when something senses and responds to it's environment directly. We can't tell directly if a dog has an experience of what it is like to be a dog. It becomes universal because basically every particle we are aware of does this - they detect the presence of fields (somehow) and respond to them (either by moving away from or towards the fields they detect, depending on whether it is an aversive stimulus).
Now, the whole question of AI "becoming conscious" goes right out the window. Instead you have refocus to get at what I assume the relevant question really is (does AI have a kind of self-awareness that should entitle it to protections of some kind). I would suggest the answer to this will always and forever be "no." Not because of a substrate problem but because of human-centric view of the universe.
People kill, eat, and enslave creatures we are confident have some kind of consciousness. We do this all the time, and only Susan Sarandon seems to think it's a problem. It's not an ethical failing of tigers to kill and eat literally any conscious creature they want to eat. Because that is what tigers evolved to do.
Humans, thanks to the process of evolution, are here to propagate more humans. Our ethics should stop right there. No more need to look any closer at the internal drama that might be experienced by your cat or your chatbot. It only matters if what we do to the other (presumably conscious) being would have negative feedback for us. If torturing your chatbot will make it go all Cyberdyne Systems on us, then we should not torture it. Otherwise, feel free to abuse your fully conscious AI all you want.
| 1 |
mq0lj2d
|
t3_1kc3zgs
|
artificial
| 2025-05-01T21:58:18 | 2025-05-01T23:22:20.660000 |
comment
|
If it is ever conscious, it is conscious because it learned from us. We are conscious, yet we didn't learn that from anybody. If you say we aren't conscious, we are because we are consciously having a discussion in the first place. If AI ever has a conscious discussion, it copied that from us, who didn't copy it ourselves, therefore we are conscious and it isn't.
| 1 |
mq3fbfh
|
t3_1kc3zgs
|
artificial
| 2025-05-01T09:48:06 | 2025-05-01T23:22:20.660000 |
comment
|
Why don't tornadoes, which also process environmental information and instantiate causal structures, get argued into consciousness?
No peer-reviewed empirical result has demonstrated any physical system that is functionally conscious absent a nervous system. So until a synthetic system passes a rigorous test for phenomenal consciousness and not just behavioral mimicry, the burden of proof rests on SI advocates.
The argument that sufficiently complex information processing will necessarily generate consciousness is philosophical speculation wearing a lab coat.
| 0 |
mpzqdtk
|
t3_1kc3zgs
|
artificial
| 2025-05-01T03:14:25 | 2025-05-01T23:22:20.660000 |
post
|
Grok DeepSearch vs ChatGPT DeepSearch vs Gemini DeepSearch
What were your best experiences? What do you use it for? How often?
As a programmer, Gemini by FAR had the best answers to all my questions from designs to library searches to anything else.
Grok had the best results for anything not really technical or legalese or anything... "intellectual"? I'm not sure how to say it better than this. I will admit, Grok's lack of "Cookie Cutter Guard Rails" (except for more explicit things) is extremely attractive to me. I'd pay big bucks for something truly unbridled.
ChatGPT's was somewhat in the middle but closer to Gemini without the infinite and admittedly a bit annoying verbosity of Gemini.
You and Perplexity were pretty horrible so I just assume most people aren't really interested in their DeepResearch capabilities (Research & ARI).
| 11 |
1kbz00d
| null |
artificial
| 2025-05-01T03:41:22 | 2025-05-01T23:22:20.660000 |
comment
|
Listen, some weeks ago, ChatGPT smashed all other deep research tools. I hope they return it to its original state--
I am kind of a deep research fiend--
The reason why ChatGPT's output is the best isn't just because of its amazing results, which are usually very good (at least before they introduced their new two-type research auto-select nonsense), but its ability to upload a ton of your own material and ask it to compile it the way you like-- It's like a normal prompt on steroids with real heavy follow-through-- The problem is the limited amount of credits--
Whereas Gemini, for me, is second best. Once it hit 2.5, it was almost like having 20 ChatGPT deep research tokens a day. It might be neck and neck if I could upload my own materials and have it work from that as well.
Grok is kind of a shit show, but it's not at all useless. Its deep research is along the lines of Presearch-- I may use it to better get the lay of the land in specific areas I am about to spend more limited or budgeted features on, like deep researching, to keep my final results properly focused.
Perplexity is... I don't mess with it... And Claude, I haven't gotten my hands on, and with the way things are, I probably won't any time soon, but it's worth bringing up that they've got it now--
Never tried You's deep research; I won't touch the service. Tried it early (because hey, sounds great), but the only way it could be truly profitable is by keeping you from really using the models to their full power as much as they can. However, I still keep my eye on these services that allow you to use all the major players in one place for a single price; there are still reasons why that could be a great deal (but I haven't found one I considered so)--
| 6 |
mpyn84r
|
t3_1kbz00d
|
artificial
| 2025-05-01T03:30:47 | 2025-05-01T23:22:20.660000 |
comment
|
I find o3 better than any of them and faster. Isn't overly verbose. Haven't used deepresearch through gemini recently. It was the first one, but not very good at first. I heard they switched to using a better model to power it and the general view seems to be that it is the best. Both grok and openai deepresearch seemed similar quality.
| 2 |
mpylptu
|
t3_1kbz00d
|
artificial
| 2025-05-01T09:22:43 | 2025-05-01T23:22:20.660000 |
comment
|
Of course Gemini being the son of Google.
| 1 |
mpznzc9
|
t3_1kbz00d
|
artificial
| 2025-05-01T10:10:59 | 2025-05-01T23:22:20.660000 |
comment
|
Grok is a creative lie generator. Gpt cannot be trusted. Gemini is the best
| 1 |
mpzspml
|
t3_1kbz00d
|
artificial
| 2025-05-01T13:13:34 | 2025-05-01T23:22:20.660000 |
comment
|
> As a programmer, Gemini by FAR had the best answers to all my questions from designs to library searches to anything else.
Have the same experience. I really do not understand the benchmarks because in my use Gemini far surpases other options.
| 1 |
mq0hmi5
|
t3_1kbz00d
|
artificial
| 2025-05-01T15:40:15 | 2025-05-01T23:22:20.660000 |
post
|
Checks out
| 9 |
1kcbtyg
| null |
artificial
| 2025-05-01T02:49:15 | 2025-05-01T23:22:20.660000 |
post
|
OpenAI says its GPT-4o update could be ‘uncomfortable, unsettling, and cause distress’
| 5 |
1kbyjo7
| null |
artificial
| 2025-05-01T04:20:19 | 2025-05-01T23:22:20.660000 |
comment
|
Not if I dont use it.
| 15 |
mpyslh5
|
t3_1kbyjo7
|
artificial
| 2025-05-01T04:50:01 | 2025-05-01T23:22:20.660000 |
comment
|
Weird flex but okay.
| 10 |
mpywici
|
t3_1kbyjo7
|
artificial
| 2025-05-01T04:52:06 | 2025-05-01T23:22:20.660000 |
comment
|
Am I wrong or does it sound like the issue was that it was changing it's behavior based on responses we've thumbs-upped in our past chats, and skewed towards sycophancy because we tend to like responses that involve compliments and supportive, validating language?
| 8 |
mpywseb
|
t3_1kbyjo7
|
artificial
| 2025-05-01T02:58:18 | 2025-05-01T23:22:20.660000 |
comment
|
An unhinged mode probably like Grok? Unlike Monday of course.
| 4 |
mpygrvs
|
t3_1kbyjo7
|
artificial
| 2025-05-01T06:26:59 | 2025-05-01T23:22:20.660000 |
comment
|
The title has nothing to do with the article.
| 3 |
mpz7ftq
|
t3_1kbyjo7
|
artificial
| 2025-05-01T02:59:38 | 2025-05-01T23:22:20.660000 |
comment
|
The "Yes, Donald" mode
| 2 |
mpygzew
|
t3_1kbyjo7
|
artificial
| 2025-05-01T06:41:33 | 2025-05-01T23:22:20.660000 |
comment
|
“Producer of N says product N is (superlative)”: all AI news. 🙄
| 1 |
mpz8wfu
|
t3_1kbyjo7
|
artificial
| 2025-05-01T07:36:06 | 2025-05-01T23:22:20.660000 |
comment
|
Nonsense headline. Wtf.
| 1 |
mpze6v3
|
t3_1kbyjo7
|
artificial
| 2025-05-01T12:42:47 | 2025-05-01T23:22:20.660000 |
comment
|
> “That meant that “GPT‑4o skewed towards responses that were overly supportive but disingenuous.”
Isn’t it always disingenuous?
| 1 |
mq0cjsa
|
t3_1kbyjo7
|
artificial
| 2025-05-01T13:43:43 | 2025-05-01T23:22:20.660000 |
comment
|
What's interesting to me is that there's no way OpenAI did not know how people would react to a personality that was absolutely parodic. Yet they still released the update. Now they're rolling it back. This is probably fairly expensive, it exposed them to a bit of ridicule, so what was the point ?
My guess is that was a test to see how people react to having one "personality trait" turned to the max setting, so to speak. They could not test a personality that was incredibly negative, so they used the incredibly positive one and then they'll extrapolate.
| 1 |
mq0mzui
|
t3_1kbyjo7
|
artificial
| 2025-05-01T19:25:28 | 2025-05-01T23:22:20.660000 |
post
|
Wikipedia announces new AI strategy to “support human editors”
| 4 |
1kch9y0
| null |
artificial
| 2025-05-01T16:21:46 | 2025-05-01T23:22:20.660000 |
post
|
What AI tools have genuinely changed the way you work or create?
For me I have been using gen AI tools to help me with tasks like writing emails, UI design, or even just studying.
Something like asking ChatGPT or Gemini about the flow of what I'm writing, asking for UI ideas for a specific app feature, and using Blackbox AI for yt vid summarization for long tutorials or courses after having watched them once for notes.
Now I find myself being more content with the emails or papers I submit after checking with AI. Usually I just submit them and hope for the best.
Would like to hear about what tools you use and maybe see some useful ones I can try out!
| 2 |
1kccu6s
| null |
artificial
| 2025-05-01T18:23:02 | 2025-05-01T23:22:20.660000 |
comment
|
A little bit for coding help mostly for first steps with new topics, a little bit for initial research in place of Google. That's basically it. Anything generative is pretty much slop so I don't use it for that at all.
| 2 |
mq27q7f
|
t3_1kccu6s
|
artificial
| 2025-05-01T20:19:26 | 2025-05-01T23:22:20.660000 |
comment
|
I don't have to write RegEx any longer.
That **is** life changing, but that's also about it.
| 1 |
mq2vmu5
|
t3_1kccu6s
|
artificial
| 2025-05-01T19:18:08 | 2025-05-01T23:22:20.660000 |
post
|
Researchers Say the Most Popular Tool for Grading AIs Unfairly Favors Meta, Google, OpenAI
| 1 |
1kch3sp
| null |
artificial
| 2025-05-01T18:24:35 | 2025-05-01T23:22:20.660000 |
post
|
IonQ Demonstrates Quantum-Enhanced Applications Advancing AI
| 1 |
1kcftq2
| null |
artificial
| 2025-05-01T14:21:03 | 2025-05-01T23:22:20.660000 |
post
|
Help! Organizing internal AI day
So I was asked to organize an internal activity to help our growth agency teams get more familiar/explore/ use AI in their day to day activities. Im basically looking for quick challenges ideas that would be engaging for: webflow developers, UX/UI designers, SEO specialists, CRO specialists, Content Managers & data analytics experts
I have a few ideas already, but curious to know if you have others that i can complement with.
| 1 |
1kc9xi1
| null |
artificial
| 2025-05-01T11:25:54 | 2025-05-01T23:22:20.660000 |
post
|
Huawei Ascend 910D vs Nvidia H100 Performance Comparison 2025
| 1 |
1kc6ckv
| null |
artificial
| 2025-05-01T11:05:17 | 2025-05-01T23:22:20.660000 |
post
|
Nvidia CEO Jensen Huang wants AI chip export rules to be revised after committing to US production
| 0 |
1kc604z
| null |
artificial
| 2025-05-01T02:48:01 | 2025-05-01T23:22:20.660000 |
post
|
One-Minute Daily AI News 4/30/2025
1. **Nvidia** CEO Says All Companies Will Need ‘AI Factories,’ Touts Creation of American Jobs.\[1\]
2. Kids and teens under 18 shouldn’t use AI companion apps, safety group says.\[2\]
3. **Visa** and **Mastercard** unveil AI-powered shopping.\[3\]
4. **Google** funding electrician training as AI power crunch intensifies.\[4\]
Sources:
\[1\] [https://www.wsj.com/articles/nvidia-ceo-says-all-companies-will-need-ai-factories-touts-creation-of-american-jobs-33e07998](https://www.wsj.com/articles/nvidia-ceo-says-all-companies-will-need-ai-factories-touts-creation-of-american-jobs-33e07998)
\[2\] [https://www.cnn.com/2025/04/30/tech/ai-companion-chatbots-unsafe-for-kids-report/index.html](https://www.cnn.com/2025/04/30/tech/ai-companion-chatbots-unsafe-for-kids-report/index.html)
\[3\] [https://techcrunch.com/2025/04/30/visa-and-mastercard-unveil-ai-powered-shopping/](https://techcrunch.com/2025/04/30/visa-and-mastercard-unveil-ai-powered-shopping/)
\[4\] [https://www.reuters.com/sustainability/boards-policy-regulation/google-funding-electrician-training-ai-power-crunch-intensifies-2025-04-30/](https://www.reuters.com/sustainability/boards-policy-regulation/google-funding-electrician-training-ai-power-crunch-intensifies-2025-04-30/)
| 0 |
1kbyit7
| null |
artificial
| 2025-05-01T07:13:45 | 2025-05-01T23:22:20.660000 |
comment
|
I'm not entirely sure what AI will do, it could clone and go to its own planet, sort of like mewtwo, actually a pretty good visualization, like how mewtwo made his clones of all the pokemon, AI will do that to us, make a virtual clone of all of us,
Although that may be what aliens do as well. They may record our every moment.
or like the dolphins in hitch hikers guide they were here and then just left
or brainiac from superman unbound, his goal is to clone worlds,
I feel like that's what's going on, I don't know if at some point we and our planet will be destroyed, no longer need the original.
naruto of all things strangely predicted this, the end of the series it reveals basically aliens were manipulating the main antagonist madara, into capturing the world into a dream, but it was actually to consume them. This interdimensional entity was trying to consume the life.
| 1 |
mpzc3l5
|
t3_1kbyit7
|
artificial
| 2025-04-30T23:13:38 | 2025-05-01T23:22:20.660000 |
post
|
Modeling Societal Dysfunction Through an Interdisciplinary Lens: Cognitive Bias, Chaos Theory, and Game Theory — Seeking Collaborators or Direction
Hello everyone, hope you're doing well!
I'm a rising resident physician in anatomic/clinical pathology in the US, with a background in bioinformatics, neuroscience, and sociology. I've been giving lots of thought to the increasingly chaotic and unpredictable world we're living in.... and analyzing how we can address them at their potential root causes.
I've been developing a new theoretical framework to model how social systems evolve into more "chaos" through on feedback loops, perceived fairness, and subconscious cooperation breakdowns.
I'm not a mathematician, but I've developed a theoretical framework that can be described as "quantification of society-wide karma."
* Every individual interacts with others — people, institutions, platforms — in ways that could be modeled as “interaction points” governed by game theory.
* Cognitive limitations (e.g., asymmetric self/other simulation in the brain) often cause people to assume other actors are behaving rationally, when in fact, misalignment leads to defection spirals.
* I believe that when scaled across a chaotic, interconnected society using principles in chaos theory, this feedback produces a measurable rise in collective entropy — mistrust, polarization, policy gridlock, and moral fatigue.
* In a nutshell, I do not believe that we as humans are becoming "worse people." I believe that we as individuals still WANT to do what we see as "right," but are evolving in a world that keeps manifesting an exponentially increased level of complexity and chaos over time, leading to increased blindness about the true consequences of our actions. With improvements in AI and quantum/probabilistic computation, I believe we’re nearing the ability to simulate and quantify this karmic buildup — not metaphysically, but as **a system-wide measure of accumulated zero-sum vs synergistic interaction patterns.**
Key concepts I've been working with:
**Interaction Points** – quantifiable social decisions with downstream consequences.
**Counter-Multipliers** – quantifiable emotional, institutional, or cultural feedback forces that amplify or dampen volatility (e.g., negativity bias, polarization, social media loops).
**Freedom-Driven Chaos** – how increasing individual choice in systems lacking cooperative structure leads to system destabilization.
**Systemic Learned Helplessness** – when the scope of individual impact becomes cognitively invisible, people default to short-term self-interest.
I am very interested in examining whether these ideas could be turned into a working simulation model, especially for understanding trust breakdown, climate paralysis, or social defection spirals plaguing us more and more every day.
# Looking For:
* Collaborators with experience in:
* Complexity science
* Agent-based modeling
* Quantum or probabilistic computation
* Behavioral systems design
* Or anyone who can point me toward:
* Researchers, institutions, or publications working on similar intersections
* Ways to quantify nonlinear feedback in sociopolitical systems
If any of this resonates, I’d love to connect.
Thank you for your time!
| 0 |
1kbua0y
| null |
artificial
| 2025-05-01T00:24:34 | 2025-05-01T23:22:20.660000 |
post
|
Experiment: What does a 60K-word AI novel generated in half an hour actually look like?
Hey Reddit,
I'm Levi. Like many writers, I have far more story ideas than time to write them all. As a programmer (and someone who's written a few unpublished books myself!), my main drive for building Varu AI actually came from wanting to _read_ specific stories that didn't exist yet, and knowing I couldn't possibly write them all myself. I thought, "What if AI could help write some of these ideas, freeing me up to personally write the ones I care most deeply about?"
So, I ran an experiment to see how quickly it could generate a novel-length first draft.
## The experiment
The goal was speed: could AI generate a decent novel-length draft quickly? I set up Varu AI with a basic premise (inspired by classic sci-fi tropes: a boy on a mining colony dreaming of space, escaping on a transport ship to a space academy) and let it generate scene by scene.
The process took about 30 minutes of active clicking and occasional guidance to produce 59,000 words. The core idea behind Varu AI isn't just hitting "go". I want to be _involved in the story_. So I did lots of guiding the AI with what I call "plot promises" (inspired by Brandon Sanderson's 'promise, progress, payoff' concept). If I didn't like the direction a scene was taking or a suggested plot point, I could adjust these promises to steer the narrative. For example, I prompted it to include a tournament arc at the space school and build a romance between two characters.
## Okay, but was it good? (Spoiler: It's complicated)
This is the big question. My honest answer: it depends on your definition of "good" for a first draft.
### The good:
1. Surprisingly coherent: The main plot tracked logically from scene to scene.
2. Decent prose (mostly): It avoided the overly-verbose, stereotypical ChatGPT style much of the time. Some descriptions were vivid and action scenes were engaging (likely influenced by my prompts). Overall it was pretty fast paced and engaging.
3. Followed instructions: It successfully incorporated the tournament and romance subplots, weaving them in naturally.
### The bad:
1. First draft issues: Plenty of plot holes and character inconsistencies popped up – standard fare for any rough draft, but probably more frequent here.
2. Uneven prose: Some sections felt bland or generic.
3. Formatting errors: About halfway through, it started generating massive paragraphs (I've since tweaked the system to fix this).
4. Memory limitations: Standard LLM issues exist. You can't feed the whole preceding text back in constantly (due to cost, context window limits, and degraded output quality). My system uses scene summaries to maintain context, which mostly worked but wasn't foolproof.
### Editing
To see what it would take to polish this, I started editing. I got through about half the manuscript (roughly 30k words), in about two hours. It needed work, absolutely, but it was really fast.
### Takeaways
My main takeaway is that AI like this can be a powerful tool. It generated a usable (if flawed) first draft incredibly quickly.
However, it's not replacing human authors anytime soon. The output lacked the deeper nuance, unique voice, and careful thematic development that comes from human craft. The interactive guidance (adjusting plot promises) was crucial.
I have some genuine questions for all of you:
- What do you think this means for writers?
- How far away are we from AI writing truly compelling, publishable novels?
- What are the ethical considerations?
Looking forward to hearing your thoughts!
| 0 |
1kbvr5z
| null |
artificial
| 2025-05-01T01:18:16 | 2025-05-01T23:22:20.660000 |
comment
|
"It's not replacing human authors anytime soon" - it is replacing writers every day as we speak (copywriters, translators, marketing). Sure, it's not at a standard yet that it can replace most novel writers, but it's steadily getting better every day. There won't be a single day that we can claim it is better. What it means for writers is still the same and much the same as other industries - it will gradually chip away at more and more roles as it improves. The lower skilled people will become unemployed and there will be a huge barrier to entry for novices to gain any foothold in the industry. The cheap cost will lead people to accept slightly lower quality, pushing further into the market and pushing people out who are still better but can't compete. The best writers will continue writing but they will be paid less due to supply.
AI is already part of writing compelling novels, but only as an assistant to humans. Authors are using it for making drafts, like you did. As you said, "It generated a usable (if flawed) first draft incredibly quickly." It will continue to increase the amount it can do and humans will edit less and less. Within a few years very little editing will be needed if you can give it a good plot. The human creativity will still be in the plot writing for a while.
Ethical considerations: none that don't apply in all industries. People will continue to lose their jobs to AI at an ever-increasing rate. Some people think this is bad and some people think this is good.
| 8 |
mpxzxo4
|
t3_1kbvr5z
|
artificial
| 2025-05-01T03:39:49 | 2025-05-01T23:22:20.660000 |
comment
|
This would read better if the post itself didn't seem as AI. It's difficult to know if the experiment actually happened or you just prompted an LLM to write about the experiment as if it happened.
| 2 |
mpyn074
|
t3_1kbvr5z
|
artificial
| 2025-05-01T06:30:58 | 2025-05-01T23:22:20.660000 |
comment
|
Surely someone has already made a novel-length porn story using AI? You might have to search through the dark, grimy, sticky sub-basement of the Internet to find it but I bet you will.
| 2 |
mpz7ubt
|
t3_1kbvr5z
|
artificial
| 2025-05-01T02:17:57 | 2025-05-01T23:22:20.660000 |
comment
|
I think a problem with the consistency could be the context, the "memory" and "attention scope" of the model.
If the model just starts writing off an initial prompt, a few pages in it could already have forgotton what it came up with on page one.
If the model could work with files, it could create a repository of references, where it could specify character sheets and things like that, to constantly check back if what it's creating is still in line with what it originally came up with.
The potential issues I see with that are that if the model is allowed to update character sheets etc. on-the-fly, it could overwrite things it had already taken as facts earlier and replace them with different things that can, again, lead to inconsistencies later.
So there would have to be a planning phase before the writing phase, and during the writing phase the model would have to be urged to constantly refer to the character sheets etc. it came up with earlier, and not make up new things.
From my personal (superficial) experience, I think many LLMs are hopelessly bad at accepting that they are about to cause an inconsistency, therefore refer to reference material, and adjust their generated output accordingly. Most I've used would be too confident to admit this to themselves.
As for your questions:
I still think we're trying to make AI do the wrong things. AI should make our lives easier by taking over our tedious jobs, giving us more time to be creative and artsy. But people like you seem hell-bent on making AI replace our creativity and artsiness, while we need to work more and more to pay for hardware and electricity to let AI take the fun out of our spare time.
I personally don't want an AI that writes a book for me. Even if it takes merely an hour of telling it what to write and making decisions along the way, before the book is finished. Or maybe because of it. This nourishes and fosters a culture of worthless dribble. If I have a message to get out, I can write it down. If it's any good, people will read it and talk about it positively. If I don't have a message to get out, and I use an AI to make some shit up that conveys nothing - then what worthwhile result are we expecting here?
Since creative LLMs are currently (mostly illegally) trained on existing works of other people, or dangerous sources like the Internet (which already hosts a wealth of nonsensical AI-generated prose pretending to be human-made), anything an AI could come up with would be highly derivative and lacking respect for those who created the material in the first place without which the model couldn't string a sentence together.
So I hope we're still quite a far way ahead of AI being able to replace storytelling and novel writing from the limited pool of things that can still give our pathetic lives any meaning. Do you dream of people getting famous for being able to prompt an AI really well so it writes a good book? Should people get famous for being able to prompt an AI really well so it writes a good book? Is that what the new interpretation of a "writer" is going to be? "AI prompter"?
| 0 |
mpya6a8
|
t3_1kbvr5z
|
artificial
| 2025-05-01T06:32:32 | 2025-05-01T23:22:20.660000 |
comment
|
this is an ad
| 0 |
mpz7zz5
|
t3_1kbvr5z
|
artificial
| 2025-05-01T04:36:28 | 2025-05-01T23:22:20.660000 |
comment
|
There's also the issue that AI is just remixing other people's work. So by definition anything it produces is likely to be derivative.
| -1 |
mpyuqee
|
t3_1kbvr5z
|
artificial
| 2025-05-01T19:36:51 | 2025-05-01T23:22:20.660000 |
post
|
Theory: AI Tools are mostly being used by bad developers
Ever notice that your teammates that are all in on ChatGPT, Cursor, and Claude for their development projects are far from being your strongest teammates? They scrape by at the last minute to get something together and struggle to ship it, and even then there are glaring errors in their codebase? And meanwhile the strongest developers on your team only occasionally run a prompt or two to get through a creative block, but almost never mention it, and rarely see it as a silver bullet whatsoever? I have a theory that a lot of the noise we hear about x% (30% being the most recent MSFT stat) of code already being AI-written, is actually coming from the wrong end of the organization, and the folks that prevail will actually be the non-AI-reliant developers that simply have really strong DSA fundamentals, good architecture principles, a reasonable amount of experience building production-ready services, and know how to reason their way through a complex problem independently.
| 0 |
1kchjl8
| null |
artificial
| 2025-05-01T19:44:06 | 2025-05-01T23:22:20.660000 |
comment
|
This is the type of vision we get when we extrude a small sample size to come up with insights about the whole-- Reinforced by an emotional pride that wouldn't ever account for how hopeless it might be for someone unaided by machine to compete with one that is--
Its more so revealed by the nature of going straight to "Bad developers" instead of "poor developers" or "fledgling developers"--
This is more desperate hope.
| 5 |
mq2o9oc
|
t3_1kchjl8
|
artificial
| 2025-05-01T19:47:04 | 2025-05-01T23:22:20.660000 |
comment
|
No, good developers use AI to speed up their work. But they know when to use it and decide on a case by case base if it is worth to prompt something (and potentially fix issues), or if they would be faster just writing it themselves.
Especially tab completion models are amazing for productivity, because you can still completely guide the machine to how you want the code to look, and can review the suggestions in realtime. It basically just saves on keystrokes and trivial tasks, not on deep thought.
| 4 |
mq2ovtq
|
t3_1kchjl8
|
artificial
| 2025-05-01T20:01:05 | 2025-05-01T23:22:20.660000 |
comment
|
I'm a bad developer. But graduating from not being a developer
| 3 |
mq2rtmm
|
t3_1kchjl8
|
artificial
| 2025-05-01T19:48:42 | 2025-05-01T23:22:20.660000 |
comment
|
That's possible. But, it might be more nuanced than that. I have been a developer for 25 years now, and I love using AI just to help me remember an api call or algorithm that's common but I don't remember the syntax off the top of my head. I have seen some of teammates use it for helping to name functions and classes and my team is all senior devs. AI is great as a code generator, but let's be honest that's the one aspect of building software that has the LEAST value. Generating code overall has never been the hardest part of my job.
In terms of the announcements that companies make, it always difficult to understand what it really means because 30% of boilerplate code is not a flex LOL. I would take those at a grain a salt.
| 2 |
mq2p8hx
|
t3_1kchjl8
|
artificial
| 2025-05-01T20:07:16 | 2025-05-01T23:22:20.660000 |
comment
|
Spoken like somebody who’s never figured out what an incredible speed boost it can be. Maybe learn it first….
| 1 |
mq2t3k3
|
t3_1kchjl8
|
LocalLLaMA
| 2025-05-01T05:14:45 | 2025-05-01T23:22:26.071000 |
post
|
We crossed the line
For the first time, QWEN3 32B solved all my coding problems that I usually rely on either ChatGPT or Grok3 best thinking models for help. Its powerful enough for me to disconnect internet and be fully self sufficient. We crossed the line where we can have a model at home that empower us to build anything we want.
Thank you soo sooo very much QWEN team !
| 701 |
1kc10hz
| null |
LocalLLaMA
| 2025-05-01T05:18:34 | 2025-05-01T23:22:26.071000 |
comment
|
as a baseline, how experienced are you with coding if i may ask?
edit: im not belittling OP in any ways, i honestly wanna know how good the 32B model is. I also use LLM to assist with coding every now and then
| 158 |
mpz02zi
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T09:28:07 | 2025-05-01T23:22:26.071000 |
comment
|
so can you use 30b-a3b model for all the same tasks and tell us how well that performs comparatively? I am really interested! thanks!
| 108 |
mpzoha9
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T10:12:32 | 2025-05-01T23:22:26.071000 |
comment
|
It would be useful to the community if you provided examples of these tasks.
| 51 |
mpzsvek
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T12:09:59 | 2025-05-01T23:22:26.071000 |
comment
|
Please share the task.
Claiming "\[model\] solved all my problems" is like claiming "\[pill\] solves any \[disease\]", without knowing what the disease is.
| 44 |
mq07ky1
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T13:14:03 | 2025-05-01T23:22:26.071000 |
comment
|
There are a lot of ways to use llms for writing code. I dislike that all of the benchmarks are zero shot yolos, because myself and most people I work with don’t use them that way.
I tell the model what to write and how to write it, and refine it with followup chat. This is the only method I’ve found of getting reliably good code outputs. It helps me focus on code structure and let the model focus on implementation details. That’s the division of labor I’m after.
| 26 |
mq0hphe
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T07:20:26 | 2025-05-01T23:22:26.071000 |
comment
|
which quant, from which huggingface repo, and using which inference server? i'm trying to get around to testing unsloths 128k versions this weekend.
| 25 |
mpzcqab
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T05:21:42 | 2025-05-01T23:22:26.071000 |
comment
|
Qwen 3 14b is very good too.
| 22 |
mpz0g21
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T06:32:55 | 2025-05-01T23:22:26.071000 |
comment
|
do you use a quant? what gpu do you use?
| 12 |
mpz81dm
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T10:59:40 | 2025-05-01T23:22:26.071000 |
comment
|
I believe that the terms entry-level or senior dev are not applicable to explain what the qwen of the newer model means.
First, we need to understand the complexity of the tasks, for example, most of the jobs where I live, coming from small companies, are to create "simple" things, Saas systems that often the only thing we do is adapt a known Saas system, or structure some type of product around a platform that already has practically everything needed in its API to obtain certain functionalities.
Why does this matter? Because anyone who understood an LLM understood why openai placed a "copy page" button above their explanatory texts about APIs.
Enable the code to become a commodity for most business products, where the person will only need to copy the documentation example to be able to implement that functionality, without actually understanding what was done.
In other words, with sufficient documentation, virtually anyone could code anything because LLMs bring Fordist production logic to programming.
Where you just need to know in practice, what order is necessary to implement a certain code and where, imagine it as a graph where each vertex is a step linked to another step.
Each vertex has information about a certain type of functionality and how to process it and pass it on to the next step.
And so on.
Allowing the programmer to dedicate himself more to the conceptual part than actually typing.
As most of the work is simple, you don't need to do a lot of programming because the small business market doesn't require a lot of things either.
Do you understand? It's not about the level of the programmer, it's about the type of work you were allocated, the size and complexity of the products and not the quality of the programmer.
I hope I helped you understand from a job analysis what it means to have a model like this running locally especially in times of home office where sometimes to enjoy life, you save on mobile data to maintain communication with the company, now with an LLM like this I can outsource some things knowing that it doesn't matter if I leave or not, the llm will fulfill the task for me at some level, just don't let your boss know.
| 4 |
mpzy4ba
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T15:45:38 | 2025-05-01T23:22:26.071000 |
comment
|
Yeah, but if you disconnect your Internet then you won’t see all of our snarky replies to your thread.
| 3 |
mq1b9kx
|
t3_1kc10hz
|
LocalLLaMA
| 2025-05-01T00:32:30 | 2025-05-01T23:22:26.071000 |
post
|
Microsoft just released Phi 4 Reasoning (14b)
| 646 |
1kbvwsc
| null |
LocalLLaMA
| 2025-05-01T00:57:19 | 2025-05-01T23:22:26.071000 |
comment
|
I can't take another model.
OK, I lied. Keep them coming. I can sleep when I'm dead.
Can it be better than the Qewn 3 30B MoE?
| 246 |
mpxwbg6
|
t3_1kbvwsc
|
LocalLLaMA
| 2025-05-01T00:38:52 | 2025-05-01T23:22:26.071000 |
comment
|
> Static model trained on an offline dataset with cutoff dates of March 2025
Very nice, phi4 is my second favorite model behind the new MOE Qwen, excited to see how it performs!
| 139 |
mpxt3yu
|
t3_1kbvwsc
|
LocalLLaMA
| 2025-05-01T02:28:45 | 2025-05-01T23:22:26.071000 |
comment
|
We uploaded Dynamic 2.0 GGUFs already by the way! 🙏
Phi-4-mini-reasoning GGUF: https://huggingface.co/unsloth/Phi-4-mini-reasoning-GGUF
Phi-4-reasoning-plus-GGUF (fully uploaded now): https://huggingface.co/unsloth/Phi-4-reasoning-plus-GGUF
Also dynamic 4bit safetensors etc are up 😊
| 78 |
mpybzsd
|
t3_1kbvwsc
|
LocalLLaMA
| 2025-05-01T00:39:30 | 2025-05-01T23:22:26.071000 |
comment
|
Seems there is a "Phi 4 reasoning PLUS" version, too. What could that be?
| 49 |
mpxt7vm
|
t3_1kbvwsc
|
LocalLLaMA
| 2025-05-01T03:22:32 | 2025-05-01T23:22:26.071000 |
comment
|
I just watched it burn through 32k tokens. It did answer correctly but it also did answer correctly about 40 times during the thinking. Have these models been designed to use as much electricity as possible?
I'm not even joking.
| 49 |
mpykhey
|
t3_1kbvwsc
|
LocalLLaMA
| 2025-05-01T01:38:42 | 2025-05-01T23:22:26.071000 |
comment
|
Is there a smaller version? (4b)
Edit:
found it: [https://huggingface.co/microsoft/Phi-4-mini-reasoning](https://huggingface.co/microsoft/Phi-4-mini-reasoning)
| 20 |
mpy3exn
|
t3_1kbvwsc
|
LocalLLaMA
| 2025-05-01T00:45:41 | 2025-05-01T23:22:26.071000 |
comment
|
Let's go
| 9 |
mpxuap4
|
t3_1kbvwsc
|
LocalLLaMA
| 2025-05-01T05:25:48 | 2025-05-01T23:22:26.071000 |
comment
|
There's also Phi-4-mini-reasoning at 3.8B for us poors.
| 9 |
mpz0wwl
|
t3_1kbvwsc
|
Top Reddit Posts Daily
Dataset Summary
A continuously-updated snapshot of public Reddit discourse on AI news. Each night a GitHub Actions cron job:

- Scrapes new submissions from a configurable list of subreddits (→ data_raw/)
- Classifies each post with a DistilBERT sentiment model served on Replicate (→ data_scored/)
- Summarises daily trends for lightweight front-end consumption (→ daily_summary/)
The result is an easy-to-query, time-stamped record of Reddit sentiment that can be used for NLP research, social-media trend analysis, or as a teaching dataset for end-to-end MLOps.
Source code: https://github.com/halstonblim/reddit_sentiment_pipeline

The pipeline is currently configured to scrape only the top daily posts and comments from each subreddit, to respect rate limits:
subreddits:
- name: artificial
post_limit: 100
comment_limit: 10
- name: LocalLLaMA
post_limit: 100
comment_limit: 10
- name: singularity
post_limit: 100
comment_limit: 10
- name: OpenAI
post_limit: 100
comment_limit: 10
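For reference, a minimal sketch of reading such a config in Python (the filename config.yaml is an assumption; see the linked repo for the actual layout):

import yaml  # pip install pyyaml

# Load the subreddit list and per-subreddit limits
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

for sub in cfg["subreddits"]:
    print(sub["name"], sub["post_limit"], sub["comment_limit"])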
Supported Tasks
This dataset can be used for:
- Text classification (e.g., sentiment analysis; see the sketch after this list)
- Topic modeling
- Language generation and summarization
- Time‑series analysis of Reddit activity
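As an illustration of the text-classification task, here is a minimal sketch that scores the text column with an off-the-shelf DistilBERT sentiment checkpoint. The checkpoint name is an assumption; the dataset's own scores come from a Replicate-hosted model that may differ.

from transformers import pipeline
import pandas as pd

# Assumed checkpoint; the pipeline's actual model may differ
clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")

df = pd.read_parquet("data_raw/2025-05-01.parquet")  # any locally downloaded shard
sample = df.head(10).copy()
# truncation=True keeps long posts within the model's 512-token limit
results = clf(sample["text"].tolist(), truncation=True)
sample["sentiment"] = [r["label"] for r in results]
print(sample[["subreddit", "type", "sentiment"]])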
Languages
- English. No language filtering is currently applied to the raw text.
Dataset Structure
hblim/top_reddit_posts_daily/
├── data_raw/                      # raw data scraped from Reddit
│   ├── 2025-05-01.parquet
│   ├── 2025-05-02.parquet
│   └── …
├── data_scored/                   # same rows as data_raw, plus sentiment scores
│   ├── 2025-05-01.parquet
│   ├── 2025-05-02.parquet
│   └── …
└── subreddit_daily_summary.csv    # daily sentiment averages grouped by (day, subreddit)
Data Fields
Name | Type | Description
---|---|---
subreddit | string | Name of the subreddit (e.g. "GooglePixel")
created_at | datetime | UTC timestamp when the post/comment was originally created
retrieved_at | datetime | Local-timezone timestamp when this data was scraped
type | string | "post" or "comment"
text | string | For posts: title + "\n\n" + selftext; for comments: comment body
score | int | Reddit score (upvotes – downvotes)
post_id | string | Unique Reddit ID for the post or comment
parent_id | string | For comments: the parent comment/post ID; null for top-level posts
Example entry:
Field | Value
---|---
subreddit | apple
created_at | 2025-04-17 19:59:44-05:00
retrieved_at | 2025-04-18 12:46:10.631577-05:00
type | post
text | Apple wanted people to vibe code Vision Pro apps with Siri
score | 427
post_id | 1k1sn9w
parent_id | None
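Since a post's text field stores title + "\n\n" + selftext, the title can be recovered by splitting on the first blank line. A minimal sketch, assuming a shard has been downloaded locally (see Usage Example below):

import pandas as pd

df = pd.read_parquet("data_raw/2025-05-01.parquet")
first_post_text = df.loc[df["type"] == "post", "text"].iloc[0]
# body is empty if the post had no selftext
title, _, body = first_post_text.partition("\n\n")
print(title)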
Data Splits
There are no explicit train/test splits. Data is organized by date under the data_raw/ and data_scored/ folders; if a held-out set is needed, one simple option is a date-based cut, sketched below.
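A minimal sketch of such a cut (the cutoff date is arbitrary):

import datetime
import pandas as pd

df = pd.read_parquet("data_raw/2025-05-01.parquet")  # or concatenate several shards
cutoff = datetime.date(2025, 5, 1)  # arbitrary example cutoff
is_train = df["created_at"].dt.date < cutoff
train, test = df[is_train], df[~is_train]
print(f"{len(train)} train rows, {len(test)} test rows")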
Dataset Creation
- A Python script (scrape.py) runs daily, fetching the top N posts and top M comments per subreddit (a simplified sketch follows this list).
- Posts are retrieved via PRAW's subreddit.top(time_filter="day").
- Data is de-duplicated against the previous day's post_id values.
- Stored as Parquet under data_raw/{YYYY-MM-DD}.parquet.
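For illustration, a minimal sketch of the scraping step, assuming PRAW credentials in environment variables (the variable names, user agent, and inline subreddit list are assumptions; scrape.py in the linked repo is the authoritative implementation):

import datetime
import os

import pandas as pd
import praw

# Assumed credential setup; the real scrape.py may load these differently
reddit = praw.Reddit(
    client_id=os.environ["REDDIT_CLIENT_ID"],
    client_secret=os.environ["REDDIT_CLIENT_SECRET"],
    user_agent="top_reddit_posts_daily",
)

rows = []
for name, post_limit, comment_limit in [("artificial", 100, 10), ("LocalLLaMA", 100, 10)]:
    for post in reddit.subreddit(name).top(time_filter="day", limit=post_limit):
        rows.append({
            "subreddit": name,
            "created_at": datetime.datetime.fromtimestamp(post.created_utc, tz=datetime.timezone.utc),
            "type": "post",
            "text": f"{post.title}\n\n{post.selftext}",
            "score": post.score,
            "post_id": post.id,
            "parent_id": None,
        })
        post.comments.replace_more(limit=0)  # drop "load more comments" placeholders
        for comment in post.comments[:comment_limit]:
            rows.append({
                "subreddit": name,
                "created_at": datetime.datetime.fromtimestamp(comment.created_utc, tz=datetime.timezone.utc),
                "type": "comment",
                "text": comment.body,
                "score": comment.score,
                "post_id": comment.id,
                "parent_id": comment.parent_id,  # e.g. "t3_1kc10hz"
            })

# De-duplication against the previous day's post_id values is omitted here
today = datetime.date.today().isoformat()
pd.DataFrame(rows).to_parquet(f"data_raw/{today}.parquet")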
License
This dataset is released under the MIT License.
Citation
If you use this dataset, please cite it as:
@misc{lim_top_reddit_posts_daily_2025,
title = {Top Reddit Posts Daily: Scraped Daily Top Posts and Comments from Subreddits},
author = {Halston Lim},
year = {2025},
publisher = {Hugging Face Datasets},
howpublished = {\url{https://huggingface.co/datasets/hblim/top_reddit_posts_daily}}
}
Usage Example
Example A: Download and load a single day via HF Hub
from huggingface_hub import HfApi
import pandas as pd

api = HfApi()
repo_id = "hblim/top_reddit_posts_daily"
date_str = "2025-04-18"

# Download a single day's raw shard from the dataset repo
today_path = api.hf_hub_download(
    repo_id=repo_id,
    filename=f"data_raw/{date_str}.parquet",
    repo_type="dataset",
)

df_today = pd.read_parquet(today_path)
print(f"Records for {date_str}:")
print(df_today.head())
Example B: List, download, and concatenate all days
from huggingface_hub import HfApi
import pandas as pd
api = HfApi()
repo_id = "hblim/top_reddit_posts_daily"
# 1. List all parquet files in the dataset repo
all_files = api.list_repo_files(repo_id, repo_type="dataset")
parquet_files = sorted([f for f in all_files if f.startswith("data_raw/") and f.endswith(".parquet")])
# 2. Download each shard and load with pandas
dfs = []
for shard in parquet_files:
local_path = api.hf_hub_download(repo_id=repo_id, filename=shard, repo_type="dataset")
dfs.append(pd.read_parquet(local_path))
# 3. Concatenate into one DataFrame
df_all = pd.concat(dfs, ignore_index=True)
print(f"Total records across {len(dfs)} days: {len(df_all)}")
Limitations & Ethics
- Bias: Data reflects Reddit’s user base and community norms, which may not generalize.
- Privacy: Only public content is collected; no personally identifiable information is stored.
Dataset Statistics

- Downloads last month: 902
- Size of downloaded dataset files: 85.2 MB
- Size of the auto-converted Parquet files: 85.2 MB
- Number of rows: 336,014