Dataset columns: audio (duration 1.21–21.5 s), text (string, length 15–393 characters), speaker_id (4 values), emotion (3 values), language (1 value).
Unfortunately though, when it comes to enterprise data, it's not just a readiness issue, it's also an access issue.
I was talking with a hedge fund earlier this year, and they were in the midst of getting all of their data and signal into formats that were accessible by AI systems.
The problem was that across the organization, there was a huge variety of different levels and types of data permissions.
One person might have access to datasets X, Y, and Z, but not A, B, and C.
While another person might have access to A, Y, and D, but not B, Q, and F, and so on and so forth.
So even once you've got your data in a format that is usable by AI, you then have to design systems for permissions and provisioning that reflect the real world of who can interact with what information.
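The permissions problem described above can be sketched in a few lines. This is a minimal, hypothetical illustration of a per-user dataset ACL sitting in front of an AI retrieval layer, so the model only ever sees data the requester is allowed to touch; the user names, dataset labels, and function are illustrative assumptions, not anything from the episode.

```python
# Hypothetical sketch: per-user dataset ACLs gating what an AI system
# may retrieve. Names and datasets are illustrative only.

# Map each user to the datasets they are permitted to read.
ACL = {
    "alice": {"X", "Y", "Z"},   # has X, Y, Z but not A, B, C
    "bob":   {"A", "Y", "D"},   # has A, Y, D but not B, Q, F
}

def allowed_datasets(user, requested):
    """Return only the requested datasets this user may access."""
    return set(requested) & ACL.get(user, set())

# An AI query gets scoped to the permitted subset before retrieval:
print(allowed_datasets("alice", ["X", "A", "Z"]))  # {'X', 'Z'}
print(allowed_datasets("bob", ["B", "Y"]))         # {'Y'}
```

The point of the sketch is that the intersection has to happen before the AI sees any data; bolting permissions on after retrieval is exactly the kind of provisioning gap the episode is warning about.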
At this point, if your head is spinning around how much has to go into making these systems work, you're not alone, man.
Another big challenge is poorly documented workflows.
Now, as I talk about all the time on this show, AI is not RPA 2.0.
Organizations that think about it simply as a way to one-to-one automate the existing work that people do are wildly under-maximizing the potential of what AI can do for their organizations.
At the same time, a lot of the natural starting points are some amount of automation of existing monotonous workflows, which can only happen if those workflows are actually documented in a way that the AI can understand.
There's a reason that you see a million startups right now that are basically recording the screens of people who are doing work to understand exactly what they do so that they can then go imitate and hopefully improve upon those workflows.
But right now, on average, the quote unquote "documentation of workflows" exists solely in the heads of the employees who are actually doing that work.
And then of course, there's skills enablement and support.
Wait, so you're giving all of your employees access to this incredibly powerful and complex new technology, and you think that just because the technology itself is expensive, you shouldn't also have to pay for skills enablement and upskilling?
That's not how it works.
Even people who are quote unquote "AI experts" are only AI experts because of the sheer amount of time they've spent figuring out how to actually interact with these systems.
In many cases, the patterns that we have from using and interacting with previous software do not apply to gen AI.
And guess what?
The state-of-the-art opportunities that AI really represents are gonna take way more than a Coursera prompt engineering course.
But part of the reason that it's a market problem is that entrepreneurs know that enterprises are trying to get out of this without having to pay for that, and so they don't wanna be the one who's desperately clinging to the coattails asking for some scraps from the table.
If organizations and enterprises are serious about AI transformation up and down the organization, both in terms of agents doing big buckets of new work, but also their existing employees being more productive, they're gonna have to pony up for skills training, enablement, and broader change management.
Then, of course, you have some very obvious things, like overzealous risk departments that don't allow people to actually use these tools in the ways that could create the most opportunity.
For example, we have a partner right now that is reselling our voice agent interview assessments to their clients, but whose risk department will not let their teams be interviewed by voice agents.
If you wanna try to take the time to go make sense of that, by all means.
I'm just gonna keep cashing the checks.
There are broader management issues, like organizational fragmentation, where different people in different parts of the organization may be piloting different systems that don't necessarily work with one another, or that are even in competition.
Or the reverse, which is existing vendor lock-in, and I think this one is worth taking a moment for in the context of this specific study.
It's pretty clear, if you read between the lines of this thing, that the employees at the organizations this MIT group looked at are fundamentally uninterested in using the crappy versions of tools their organizations are giving them access to, and instead just wanna use the general consumer tools that are way more advanced.
Call this the Copilot versus ChatGPT problem.
Anyone who has touched AI at all in the enterprise has seen examples of this: logging in with your Gmail at home and using the most advanced reasoning models, then having to come back and use the neutered versions that your enterprise gives you access to, is just completely unbearable.
Especially because in a lot of cases, every new update of the state-of-the-art unlocks meaningful amounts of new use cases.
It's not like we're so far into the future right now that even older, crappier models are super useful.
For some use cases they are, but for many use cases, you really need something that's close to the state-of-the-art.
And if you don't have access to it, you're simply not gonna be able to do that work.
"Employees know what good AI feels like, making them less tolerant of static enterprise tools."
The last couple reasons I'll mention that pilots fail have to do with leadership again, but leadership in the context of the pilots.
It can so often happen that pilot ownership or leadership is like a hot potato.
Some executive sponsor says that they want it, they hand it off to someone who was never exactly bought in, and then they're just going through the motions of aiding the pilot when they're not even particularly convinced that it's actually gonna be all that useful.
This happens all the time, and it's why I separated leadership buy-in and team buy-in, and said that they're both incredibly important in context with one another.
And then, of course, there's the situation where, even if ownership of the pilot is clear, the pilot is a one-off with no strategic plan or next steps articulated, and no ultimate direction.
At this stage, this is the default and the norm.
"Let's try a pilot to see what we can do," without any larger consideration of the big goals that you're trying to achieve as an organization.
Pilots that are conducted like this in a strategic vacuum are simply much less likely to succeed and be a part of actual organizational change.
I've been discussing this study throughout the week with our head of research, and when I asked her to estimate the actual distribution of failure rates between organizational issues and technology, her thesis was that it was about 80% organizational, 20% technology, so four to one organizational versus tech-related issues.
Now, there's one more funny thing underneath all of this, which is the idea of using pilot failure as a cudgel in the first place.
Like the idea that pilots failing isn't simply a part of the expected distribution of pilot results.
If you are running an organization and trying a novel technology like AI, especially one that's as fast-moving and dynamic as AI, and all of your pilots are working, it is almost assuredly the case that you're not being experimental enough.
You're not trying enough things.
You're not thinking far enough about what AI could be doing for you.
Some percentage, in other words, of your pilots should be failing.
Certainly not the 95% that MIT claims, but some meaningful amount.
AI, again, is not a technology that's exclusively meant to be a one-to-one replacement for existing workflows.
It represents an opportunity to do things that were not possible before, and you're not going to discover those things if you have no tolerance for pilot failure.
VentureBeat writes, "MIT Report Misunderstood: Shadow AI Economy Booms While Headlines Cry Failure." Fortune's AI editor felt the need to go write a follow-up.
"An MIT report that 95% of AI pilots fail spooked investors. But it's the reason why those pilots failed that should make the C-suite anxious."
Like I said at the beginning, I think a lot of the resonance of this report has to do with larger market forces right now, and in a different context, we wouldn't be giving it all this attention that we've been giving it.
However, to the extent that it becomes used as an excuse for why your organization can slow walk this change, I think that you're doing yourself a disservice.
Hopefully you have a better sense now of not only why you should perhaps take this particular set of results with a grain of salt, but also a better roadmap of the type of reasons that pilots actually fail in practice.
Appreciate you listening or watching, as always.