
Signal to Noise: Episode 11
The Year In AI: What We Got Right and What Went Wrong in 2025
Transcript
[00:00:01] Intro: Welcome to Signal to Noise by Riviera Partners, the podcast where leading executives share how they cut through the noise and act on what matters most. We go beyond the headlines to explore the pivotal decisions, opportunities, and inflection points that define their careers and shape the future of the companies they lead. It’s time to cut through the noise and get to the signal.
[00:00:25] Kyle Langworthy: Welcome to Signal to Noise by Riviera Partners, the podcast where we cut through hype, headlines, and half-truths to focus on what actually matters for leaders building the future. Across this episode, you’ll hear from operators, technologists, and executives who are not just talking about AI. They’re responsible for making it work inside real organizations with real people and real consequences. From leadership accountability and infrastructure readiness to trust, talent, and where AI is genuinely delivering value versus where the noise is loudest, these conversations reflect the signals leaders should actually be paying attention to right now. Let’s dive in.
[00:01:10] Kyle Langworthy: In this first clip, Bill Murphy explains why meaningful AI progress doesn’t start with tools or pilots. It starts with executive will, sustained investment, and treating AI as a core business capability, not a side project.
[00:01:26] Bill Murphy: Yeah. I think it comes down to will, and that usually comes down to leadership. Having the will at the top level to take this seriously, to treat it as a business requirement, and to have that position cascade all the way down your organization takes a ton of energy and leadership. Once you do that, you also get the foundational things right, you run some experiments, and you figure out what works. So organizations progress down that path, but it always starts with the leadership saying, “This is super important for the company, we need to invest significant resources to do it, and we are going to treat it like a first-class citizen. This is not a little experiment off to the side that doesn’t matter and maybe it’ll hit. This is fundamental to who we are as a company, and we’re gonna do it excellently.” There’s a quote from one of my companies. The CEO said, “If a CEO is not the chief AI officer, then they should be fired.” Perhaps that’s a little aggressive, but I think that’s why his organization is completely on the front foot relative to every other competitor in their industry: because he, from the top, is saying this is the most important thing, and we will not be left behind. They’ve become leaders here, and I think that’s served them well in the market.
[00:02:47] Kyle Langworthy: In this section, Jon Krohn breaks down what the acceleration curve really looks like, why it’s been surprisingly consistent for years, and what leaders should be doing right now to prepare their organizations for what’s coming.
[00:03:03] Jon Krohn: About every seven months, the length of a human task that can be accurately handled by an AI model doubles. So, if today we can get 90% accuracy replacing a roughly two-hour human task with a machine, you can expect that in seven months, that will be four hours. Seven months after that, it’ll be eight hours. And this has been happening for years. It’s a trajectory that you can map very reliably. GPT-5 fell perfectly onto that curve when it came out in August. That means there’s an unprecedented opportunity today, and it’s doubling: every seven months, the scope of what you can be automating in your organization vastly increases. So what can you be doing today to set up your infrastructure and your governance, for both data and for the humans in the organization, to take advantage of this? I think there is an unprecedented opportunity. Anyone listening out there who has experience building and deploying AI systems, I assume you’re having a huge amount of success. If you’re not, figure out how to make some tweaks, because every conversation that I have leads to next steps.
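The trend Krohn describes is simple compound growth. A quick sketch, taking his figures at face value (a two-hour starting point and a seven-month doubling period; the projected values are just that arithmetic, not a forecast of any specific model):

```python
# Sketch of the "task length doubles every seven months" trend described
# above. The 2-hour starting point and 7-month doubling period are the
# speaker's figures; everything else is illustrative arithmetic.

def task_hours(months_from_now, start_hours=2.0, doubling_months=7.0):
    """Projected length (in hours) of a human task an AI model could
    handle, a given number of months in the future."""
    return start_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 7, 14, 21, 28):
    print(f"+{months:2d} months: ~{task_hours(months):.0f}-hour tasks")
# → 2, 4, 8, 16, 32 hours: a 16x increase in automatable task length
#   in just over two years, if the trend holds.
```

The point leaders should take from the curve is less the exact numbers than the compounding: whatever is just out of reach of automation today is plausibly in reach within a year or two.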
[00:04:15] Kyle Langworthy: Ameya Kanitkar offers a grounded reality check on autonomous AI systems, separating future potential from what’s actually operating reliably in production today.
[00:04:27] Ameya Kanitkar: My controversial take on this is that agentic AI doesn’t exist yet. Let me explain what I mean by that. Yes, you can see it as the future, just like LLMs were two years ago. When ChatGPT first came out, it was all about LLMs and AI, right? But we didn’t actually see that being implemented in real life in a meaningful way. Now, two and a half years later, we are seeing it in the number of tools that are actually generating value for customers and for business enterprises. Agentic AI is where ChatGPT’s launch was: these thinking modes, this acting on your own. Yes, there are some areas where it shines. Claude Code, whatever it does, is pretty impressive, especially in coding. But we are not seeing something like, “I’m an AI auditor, just hand all the documents over to me, here’s your final report, you never have to talk to me again.” We have not seen that level of purely agentic, completely hands-off operation in practice, in production.
[00:05:32] Kyle Langworthy: Emilio Escobar shifts the focus toward defenders, how organizations are investing, where AI is actually helping security teams, and where the real gaps still exist beneath the noise.
[00:05:44] Emilio Escobar: In security, there’s a lot of noise when it comes to AI. Bad actors are always gonna use everything available to them, but there’s been so much focus on bad actors using AI, and not a lot of focus on defenders using AI, or on AI security implementations that actually address what makes people tick and where their needs are. So it’s a little bit of both: how much investment is being made in AI, what is growing, what is being built, what people are using, what people are trying to do, and what gaps exist internally, for my team or any other CISO I talk to, as we’re also building products.
[00:06:21] Kyle Langworthy: Megan Rothney explains how data readiness and algorithmic advances are finally converging, unlocking faster drug discovery, better diagnostics, and physician support tools that augment care rather than replace it.
[00:06:36] Megan Rothney: It’s a really, really exciting time to be working in AI. We’ve been talking about it for a long time, but health care data has been so fragmented that we couldn’t really act on all of the great ideas people had. What’s really cool right now is that the data assets are getting large and a little more controlled at the same time that the algorithm technology is catching up. So we’re really starting to be able to put those two things together and get more products out on the market. As for where this is gonna have a huge impact, I think drug discovery is gonna be revolutionized, getting drugs to market much faster. In diagnostics, what I’m seeing is that it’s easier right now to get to a prototype that we can get out into the world for testing than it’s ever been before. It’s just incredibly exciting. And over the next 5 to 10 years, we’re gonna see a huge shift in medical practice toward physician support tools. So, not replacing physicians, which is one view of the world that many people had, but really thinking about how we can help them do their jobs better.
[00:07:41] Kyle Langworthy: Next, Mahi Sethuraman dives into how seamless experiences, personalization, and clear consumer benefit are shaping trust and why that trust must be earned, not assumed.
[00:07:56] Mahi Sethuraman: I am now focused on how consumers come to fully trust AI capabilities. They will adopt if they can be assured that there is trust, and that companies are focused on their best interests in opening up these opportunities. It’s been interesting to see the impact that machine learning has had on consumer experiences, which have become so seamless to the consumer when they actually interact. This was very evident in my tenure at Affirm. The product has been adopted primarily by Gen Z and millennial consumers, consumers across the credit spectrum, and we personalized all of their product experiences, including decisions on credit offers and underwriting decisions, so seamlessly packaged in the consumer experience, even on the merchant side in terms of the pricing and deals we put out for different segments of merchants. That has been fantastic. I always want to say: the consumer knows to trust when they can be convinced that there is a benefit for them in adopting.
[00:09:18] Kyle Langworthy: Mike Abbott reflects on how technology is lowering barriers to learning, execution, and company building, reminding leaders that discomfort is often just unfamiliarity, not inability.
[00:09:31] Mike Abbott: You can probably do more than you realize in areas you’re not comfortable with. What I mean by that is, when I first started Composite, I was the CEO, and I was very open about it: I don’t know anything about finance, I don’t know anything about sales. It turns out you can learn those things. It’s not rocket science. So it’s a reminder that humans in general, not just me because I’m special, can do more than they think they can. Sometimes, as a society, we talk more about these constraints. And I think these AI tools are gonna make this even more profound, because they’re democratizing so many aspects of company building.
[00:10:13] Kyle Langworthy: Toufic Boubez explains why model context protocol isn’t just another technical standard, but a foundational shift in how AI systems collaborate, share state, and evolve toward truly agentic behavior.
[00:10:26] Toufic Boubez: Model Context Protocol: how familiar are you with that? People think of it as another spec, but it’s not just another spec. For me, it is a fundamental shift in how we are going to be working with AI systems moving forward, especially agentic AI systems. This is how we’re extending their capabilities. It enables different tools, agents, LLMs, what have you, to share state, learn from each other, and act and collaborate. To me, that’s what will take AI from where it is right now to the next level. So that’s the biggest signal for me, as a biased person immersed in the AI field.
[00:11:09] Kyle Langworthy: Patrick Spence with a reminder that creativity, judgment, and taste remain irreplaceable, and that the companies that thrive will be those that pair AI capability with exceptional human talent.
[00:11:22] Patrick Spence: In this AI-dominated world, it’s easy to fall into the trap of all efficiency, all automation, when the real winners will be the companies that figure out who the right people are, the people who bring creativity and taste into the equation to help solve new problems and build new businesses. Of course, they will wield AI in doing so, but the real differentiator will be what it’s always been: the people inside the company, figuring out how to solve the hard problems and what to go work on next.
[00:11:58] Kyle Langworthy: That’s it for this episode of Signal to Noise. If these conversations resonated, it’s because they reflect a shared reality. The leaders who win in moments of transformation are the ones who know what to listen to, what to focus on, and what to ignore. Signal to Noise is brought to you by Riviera Partners, leaders in executive search and the premier choice for technology talent. To learn more about Riviera and how we help people and companies reach their full potential, visit rivierapartners.com. And don’t forget to search for Signal to Noise by Riviera Partners on Apple Podcasts, Spotify, or anywhere you listen to podcasts. Thank you for listening.
[00:12:36] Outro: Signal to Noise is brought to you by Riviera Partners, leaders in executive search and the premier choice for tech talent. To learn more about how Riviera helps people and companies reach their full potential, visit rivierapartners.com. And don’t forget to search for Signal to Noise by Riviera Partners on Apple Podcasts, Spotify, or anywhere you listen to podcasts.
About the host

Kyle Langworthy
Partner, Riviera Partners
Kyle Langworthy is Partner and AI / ML / Data Practice Co-Lead at Riviera Partners. With over 15 years of recruiting experience, the former founder is a driving force in innovation and executive talent placement at Riviera Partners. Leading the firm’s global AI, ML & Data efforts, he shapes the modern era of executive recruiting.
Passionate about entrepreneurship, Kyle aligns seamlessly with Riviera’s mission to provide top-tier talent for innovative companies. Through the strategic acquisition of WorthyWorks, he integrates cutting-edge practices, ensuring a perfect match between exceptional talent and transformative companies.
Specializing in AI/ML technology placements, Kyle advises clients, from private equity to Fortune 100 technology leaders, on recruiting executives for lasting impact.


