Dataset Viewer

Column summary (from the Hugging Face viewer):

| Column | Type | Summary |
|---|---|---|
| audio | audio | duration 1.21–21.5 s |
| text | string | length 15–393 characters |
| speaker_id | string | 4 classes |
| emotion | string | 3 classes |
| language | string | 1 value |

Example rows from the preview:

| text | speaker_id | emotion | language |
|---|---|---|---|
| Today on the AI Daily Brief, the problem of AI that seems alive. | speaker_0 | neutral | en |
| First of all, thank you to today's sponsors, KPMG, Blitsy, and Superintelligent. | speaker_0 | neutral | en |
| And to get an ad-free version of the show, go to patreon.com/aidailybrief. | speaker_0 | neutral | en |
| Now, today is a weekend episode, and that means it's a long read/big think episode. | speaker_0 | neutral | en |
| And we kinda have both of that all in one. | speaker_0 | angry | en |
goodforft
This is a merged speech dataset containing 863 audio segments from 4 source datasets.
Dataset Information
- Total Segments: 863
- Speakers: 4
- Languages: en
- Emotions: angry, happy, neutral
- Original Datasets: 4
Dataset Structure
Each example contains:
- `audio`: Audio file (WAV format, original sampling rate preserved)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `emotion`: Detected emotion label (angry, happy, or neutral in this dataset)
- `language`: Language code (this dataset contains only `en`)
Usage

Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Codyfederer/goodforft")

# Access the training split
train_data = dataset["train"]

# Example: get the first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")

# Play audio (requires audio libraries)
# sample['audio']['array'] contains the decoded audio data
# sample['audio']['sampling_rate'] contains the sampling rate
```
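To listen to a sample outside a notebook, the decoded array can be written back to a WAV file. This is a minimal sketch assuming the third-party `soundfile` package is installed and that `sample` comes from the snippet above:

```python
import soundfile as sf

# Write the decoded waveform to disk so it can be played in any audio player.
# The sampling rate is preserved from the source recording.
sf.write("sample.wav", sample["audio"]["array"], sample["audio"]["sampling_rate"])
```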
Alternative: Load from JSONL

```python
from datasets import Dataset, Audio, Features, Value
import json

# Load the JSONL file
rows = []
with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

features = Features({
    "audio": Audio(sampling_rate=None),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "emotion": Value("string"),
    "language": Value("string"),
})
dataset = Dataset.from_list(rows, features=features)
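```

Note that `data.jsonl` stores relative audio paths, so the snippet above assumes it is run from the dataset root. If you load from another working directory, one option (a sketch, with the hypothetical `DATASET_ROOT` standing in for wherever you placed the files, and reusing the `features` definition from the previous snippet) is to resolve the paths first:

```python
import json
import os

DATASET_ROOT = "/path/to/goodforft"  # hypothetical local copy of this dataset

rows = []
with open(os.path.join(DATASET_ROOT, "data.jsonl"), "r", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        # Make the relative audio path absolute so the Audio feature can locate the file.
        row["audio"] = os.path.join(DATASET_ROOT, row["audio"])
        rows.append(row)

dataset = Dataset.from_list(rows, features=features)
```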
Files

The dataset includes:

- `data.jsonl` - Main dataset file with all columns (JSON Lines)
- `*.wav` - Audio files under `audio_XXX/` subdirectories
- `load_dataset.txt` - Python script for loading the dataset (rename to .py to use)

JSONL keys:

- `audio`: Relative audio path (e.g., `audio_000/segment_000000_speaker_0.wav`)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `emotion`: Detected emotion
- `language`: Language code
Speaker ID Mapping
Speaker IDs have been made unique across all merged datasets to avoid conflicts. For example:
- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`
Original dataset information is preserved in the metadata for reference.
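Because speaker and emotion labels are plain string columns, standard `datasets` operations can be used to inspect or subset them. A small sketch, assuming `train_data` was loaded as in the Usage section above:

```python
from collections import Counter

# List the unique speaker IDs and count segments per emotion in the training split.
print(train_data.unique("speaker_id"))
print(Counter(train_data["emotion"]))

# Keep only the segments from a single speaker with a neutral emotion label.
subset = train_data.filter(
    lambda x: x["speaker_id"] == "speaker_0" and x["emotion"] == "neutral"
)
print(len(subset))
```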
Data Quality
This dataset was created using the Vyvo Dataset Builder with:
- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Citation
```bibtex
@dataset{vyvo_merged_dataset,
  title={goodforft},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/goodforft}
}
```
This dataset was created using the Vyvo Dataset Builder tool.