
Signal to Noise: Episode 10
2025 Highlights: The Real Playbook for AI Adoption in the Enterprise
Transcript
[00:00:01] Intro: Welcome to Signal to Noise by Riviera Partners, the podcast where leading executives share how they cut through the noise and act on what matters most. We go beyond the headlines to explore the pivotal decisions, opportunities, and inflection points that define their careers and shape the future of the companies they lead. It’s time to cut through the noise and get to the signal.
[00:00:25] Host’s intro: Welcome back to Signal to Noise, and thank you for being with us through an incredible year. I’m your host, Eoin O’Toole, and this is our special best-of-the-year episode. 2025 has been a bit of a whirlwind of transformation, especially in the world of technology and, commensurately, in leadership. We’ve seen AI move from novelty to absolute necessity, and the role of the CTO has never been more critical. So we’ve been honored to host some truly visionary leaders. I’ve personally really enjoyed my conversations with the likes of Mike Abbott, Emilio Escobar, Toufic Boubez, Megan Rothney, and many others. And they’ve shared incredible and invaluable wisdom with us on everything from the importance of culture fit over skills metrics in hiring to the leadership challenge of being the battery that energizes an entire organization. So as we look ahead to 2026, I know Riviera Partners is excited to continue guiding the world’s best companies through this era of rapid change. We’re gonna be focused on placing the next generation of creative, high-impact leaders. But let’s take a moment to dive into the best moments from 2025.
[00:01:39] Eoin: How do you personally see AI revolutionizing the health care landscape?
[00:01:43] Megan: It’s a really, really exciting time to be working in AI. I think we’ve been talking about it for a long time, but health care data has been so fragmented that we really couldn’t necessarily action on all of the great ideas that people had. And I think what’s really cool that’s happening right now is the data assets are getting large and a little bit more controlled, at the same time that the algorithm technology is kinda catching up, and so we’re really starting to be able to put those two things together and put more products out on the market. Some of the places that I think this is gonna have a huge impact, I think drug discovery is gonna be kind of revolutionized by this, getting drugs to market much faster. I think in terms of diagnostics, really, what I’m seeing is it’s just easier right now for us to get to a prototype that we can then get out in the world testing than it’s ever been before. It’s just incredibly exciting. And I think over the next five to ten years, we’re just gonna see a huge shift in medical practice.
[00:02:37] Sean: How do you navigate the stress and the responsibility, honestly, that comes with these jobs?
[00:02:44] Emilio: So, I think the one thing that the government actually gave me exposure to was that I got to deploy to some non-friendly areas, so my self-assessment of threat is different than most people’s. So, if I can go home and sleep in my bed at night, then I think I’m doing well. But, you know, that was a long time ago, and it’s a different kind of mindset. But I think ultimately, you know, I can only control what I can control. So, what the Fed is doing is, like, okay, well, we learn from all of that. The second thing is, and I don’t wanna speak to, like, specific cases or, you know, the SolarWinds CISO getting sued and things like that, but I think, in my opinion, security decisions, or the lack of them, are business decisions that have led to those outcomes. But going back to what I mentioned, if you are part of the C-suite, then you’re a member of the business. So if it’s gonna rain in the company, then you’re gonna get wet. And I think, you know, that’s something that everyone has sort of realized. So it’s working with your internal teams and your partners on, hey, how do we make sure we codify decisions that we’ve made? What’s the process for deciding on who makes a risk decision? At what point? When do we get escalated? For example, I’m lucky because I happen to be so aligned with Olivia and Alexi that if I don’t think something is good enough, most likely, they’re not gonna think it’s good enough. In fact, the only time I’ve gotten trumped at Datadog was when I thought something was already good enough, and Olivia was like, that’s not enough, which, as a CISO, it’s like, okay, fine, then we’ll do what you say. I’m not gonna push back on it. But it ultimately ended up being like, hey, we have a good relationship. And I think this is what I advise every CISO to do: don’t think of the specifics, because then you’re thinking about “me.”
Only think about how this actually applies to you as a company, as a business, because, again, if it’s gonna rain, then you’re all gonna get wet. So work with your colleagues, who should be your coworkers and your leaders, on, as a company, how do we actually make risk decisions? How do we codify them? So that if anything goes wrong, then, yeah, you all take the blame. It’s fine. But then there’s, like, the legal support, the insurance support, all that stuff. Yes, absolutely. And in reality, when you stop thinking about yourself and you’re thinking about the business, and you still feel like the support isn’t there, then that’s a good signal to say, “Oh, hey, Riviera Partners, what do you have going on? Because I need something else.”
[00:05:04] Adam: What advice would you give to CIOs as they think about framing this discussion with the board? They’re gonna need investment. They’re gonna be sitting in front of you. They’re gonna try to lay out a strategy to ask you for a giant bucket of capital. What would you counsel them to do, and how to position that?
[00:05:21] Bill: And sometimes it’s not a giant bucket of capital. Sometimes it’s a giant bucket of buy-in, and a giant bucket of the ability to focus on this, not other things. So it’s like I’m asking for prioritization and focus of the whole organization. So, when they go to the CFO, they say, I need you to change your accounting process slightly in order to enable us to get, you know, these downstream benefits, or I need to be able to go to the head of operations and do the same thing. And that air cover is incredibly important, and I don’t think that people talk about it enough. Every CTO, and I think I’ve made this mistake many times in my own career, sort of believes, like, if I build a solution perfect enough, then everyone will love it, and everybody will love me, and everybody will say I did a great job, right? And we’re all, like, striving for that validation in our role, and we don’t spend enough time bringing people along. People listening who have worked with me are probably laughing and saying, “Oh, Bill finally realized it.” But try to bring people along even that much more than I did. I felt like I did, but, like, I obviously didn’t do it well enough all the time, right? And it’s, like, constantly generating that buy-in from the top and below to try to get them to lean forward into the change that’s required so that we can have all the benefits of these futuristic technologies that probably will, in the fullness of time, make things pretty amazing, but the path to that is a little bit rocky.
[00:06:49] Eoin: What are some of those signals you’re looking for as to whether you need to evolve your team or somebody is starting to break on that continuum about their ability to continue to scale with the company? If you have signals or signs, I’d love to understand those.
[00:07:02] Toufic: There are quite a few signs that you can think of. The first sign is when the leadership team starts getting overwhelmed, right? That’s when you need to start layering in specialized leadership, proactively, hopefully, but also intentionally. Another flag for me is when I start spending more time on process than actually moving the business forward. That’s a strong cue for me that the organization has outgrown its current structure, and you need to start looking. I hear stories about how Jensen Huang of Nvidia has, like, dozens of direct reports. I personally don’t buy into that, for example. If you have too many people reporting to you, I think they’re being underserved. I don’t mean to tell Jensen what to do, but for me, that’s the time to bring in the next layer. So that’s from the structural perspective. The other thing is when the company shifts phases, from building to scale, to product-market fit, to go-to-market, and so on. These are specific transitions that I’ve lived through, that you live through, and then you need to bring in leaders who specialize. We talked earlier about zero to one, one to two to three; you do need leaders to come in and specialize at that stage. You can’t brute-force your way through it. For example, at Cat.io, we just recently hit that moment on the go-to-market side, right? From the beginning, it was founder-led sales, which is what you’d expect at an earlier-stage company; we’ve been doing all the selling, all that kind of stuff, but we are now clearly past our PMF. And we needed to bring in somebody who’s just going to focus on the go-to-market part. So we just brought in a seasoned go-to-market executive who started just a few days ago, as a matter of fact, because now we’re in that phase.
So, you have to look for these kinds of cues and signals, whether it’s organizational or process or phase of the company, and bring in the right leadership at the right time.
[00:09:04] Michael: How do you find great talent in your career? And so, like, I’m interested both in, like, a hiring perspective, but also what you did with Nebula, how you’d approach that question, like, what you’ve learned about that over the last handful of years.
[00:09:16] Jon: Finding great talent for companies is the signal-to-noise question, I guess, that you and I are dealing with a lot in our regular lives, Michael. Certainly, when I was working at Nebula, and Untapt before that, with Ed Donner, like you mentioned. If people are interested in agentic AI, either as a hands-on developer or data scientist or as a more general businessperson, Ed has been, in the past year, creating some of the most popular agentic AI content on the Internet. He has hundreds of thousands of students on Udemy, for example. So go check out his Udemy courses; he does an amazing job. I had been working with Ed for ten years on this problem of how we can use data and machine learning to improve the experience of finding the right talent, finding the signal amongst all of the talent noise out there for employers. And so in 2015, I think, ten years ago, I joined Ed at a company that he founded and was CEO of, Untapt, u-n-t-a-p-t. And I was the chief data scientist there up until the pandemic, when we were acquired, and then that led to Atomy forming a new company that you mentioned there, Nebula. So we cofounded that with a third individual named Steven Talbot. In either case, with Nebula or Untapt, the fundamental thesis was that there is a big opportunity here to build a SaaS platform that leverages data and machine learning to find more relevant talent for people. It’s a huge pain point for so many hiring managers, so many organizations. How do I create a pipeline of the right talent? How do I interview them effectively? And in recent years, a really big thing for us at Nebula was going beyond keyword-based search to using large language model encodings in order to be able to, quote-unquote, understand the meaning of language. So, this is kind of a silly example, but one of the most popular programming languages in the world right now is Python.
This is a toy example, but it illustrates the idea. You wouldn’t wanna be searching for a software developer and have a snake handler show up in your results just because they have Python and boa constrictor written on their resume. That’s the kind of example, and it’s amazing to me that all of the big incumbent hiring platforms out there use keyword-based searching that is that simple, and, at best, still have these curated ontologies of relationships between different kinds of terms, but it’s still very black and white. Whereas with Nebula, and probably other companies out there that have had this idea, we could be using modern AI technologies to really understand all the language and be able to pull up relevant information in context and get way more relevant results than any of the big existing incumbent hiring platforms can do.
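[Editor’s aside: for technically minded readers, the “Python developer versus snake handler” problem Jon describes can be sketched in a few lines. The tiny hand-made vectors below are purely illustrative stand-ins for the embeddings a real encoder model would learn; the dimension labels and profiles are invented for this example, and none of this is Nebula’s actual code.]

```python
# A toy sketch of semantic matching versus keyword matching.
# A keyword search scores both profiles identically: each contains "python".
# An embedding search compares meaning, so the developer ranks first.
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Hypothetical 3-d "embeddings"; dimensions loosely mean
# (software-ness, animal-handling-ness, contains-the-word-python-ness).
profiles = {
    "software developer, Python and Django": (0.9, 0.0, 0.8),
    "snake handler, python and boa constrictor": (0.0, 0.9, 0.8),
}

# Query embedding for something like "hire a Python software engineer".
query = (1.0, 0.0, 0.7)

ranked = sorted(profiles, key=lambda p: cosine(query, profiles[p]), reverse=True)
print(ranked[0])  # the developer outranks the snake handler
```

In a real system the vectors would come from a learned encoder over full resumes and job descriptions, but the ranking step is essentially this: embed the query, embed the candidates, sort by similarity.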
[00:12:17] Josh: How do you think about identifying the talent that you need in any context, not just Sonos? And what do you think of when you try to determine good from great?
[00:12:29] Patrick: I think I’ve learned over time that we all get fooled, right? We have to be very careful, because a lot of our, you know, our initial impressions, we think we can, you know, determine who somebody is in, you know, even an hour interview, right, or something like that. And so I’ve begun to rely a lot more on references. One of the other things I would highlight, Josh, is that the importance of references at Sonos was higher than I’d seen anywhere. And, usually, you know, yourself and myself, we divide and conquer on that. But the leader would be heavily involved in talking to both listed references and then other references we would find, to understand truly who this person is and how we would help make them successful if they were the right candidate. And so for me now, so much of it has been understanding somebody at a deeper level: spending a fair bit of time with them, understanding what motivates them, what they wanna get out of it, and really developing that kind of relationship, like you said, a really candid relationship over a period of time, but as well talking to as many people as possible that have worked with that individual to understand what they’re really about, and is this going to be a good fit? And I still think that we, in tech in general, way underestimate the importance of references, maybe the ones that aren’t provided but that you seek out and find, that ultimately tell you the strengths and weaknesses of particular individuals. Because again, in the interview process and the experience, no matter how many hours you put in, you could still potentially fool yourself, right? And so I would say that’s the important thing to me. And then I’ve found that humility and curiosity are the things that really set apart the leaders that are great versus those that are good.
There are some people that are fantastic experts in their area and in their discipline, but they don’t really have an interest necessarily in learning the next thing or how AI impacts their area. And so they’re good, but the great ones are figuring out, hey, what’s my job gonna look like in five years? What’s the next step for this? Where’s this all going? And they’re humble enough to know that they don’t know, but they’re going to work towards figuring it out, right? And they’re curious about what those answers are. And so I think that’s a really, really important dynamic.
[00:14:39] Eoin: Do you think that’s predominantly done through agentic AI or, you know, ultimately to deliver those personalized actual insights for organizations at an employee level, at an organizational level? Is agentic the way, or how do you see that playing out?
[00:14:53] Ameya: My controversial take on this is that agentic AI doesn’t exist yet. So, let me explain what I mean by that. Yes, you can see it as the future, just like how LLMs were, by the way, two years ago. Like, when ChatGPT first came out, it was all about LLMs and AI, right? But we didn’t actually see that being implemented in real life in a meaningful way. But now we are seeing it two and a half years later, right, like, the number of tools that are now actually generating value for your customers and for business enterprises. Agentic AI is where ChatGPT’s launch was, right? These thinking modes and this kind of doing things on your own. There are some areas, you know, Claude Code does whatever it does, and it’s pretty impressive, especially in coding and some of these things, but we are not seeing somebody like, I’m an AI auditor, you just hand over all the documents to me, and here’s your final thing, you know, you don’t have to talk to me ever again. Like, we have not seen this level of purely agentic, completely hands-off kind of operations in practice, in production. That doesn’t mean it won’t happen, but it’s not there today. And it’s gonna take some time to build out those things. This is where we think, and Gladwell is thinking about this as a company, because one of the estimates they have is that today, the total IT spend, the technology spend, is about $1 trillion. They’re expecting this to go to $10 trillion, I don’t know the exact number, in, you know, ten years or something. If we have $10 trillion of IT spend, you’re essentially spending the same amount of capital on your technology as on your humans. And at that point, it becomes really important: how do you measure the output of it? How do you measure the productivity of your workforce, which is now both humans and AI at the same time? Because you’re not spending $100 a month on an AI seat, you’re gonna spend $10,000 a month on an AI expert.
That’s a very different value proposition, and you have to make sure that you actually can measure that and measure the productivity of it.
[00:16:44] Sean: Do you see any difference in the kind of trust you’re gauging and winning, from the early days when everyone’s grandmother said, “Oh, never put your credit card on a website,” right, to all the different types of trust we’ve had since then with mobile and all the complex platforms, to now, where it feels very Gen AI, synthetic versus authentic? Like, has trust always been a similar issue, or are there any differences in the kind of trust problems we’re tackling today?
[00:17:17] Mahi: I believe we are at the cusp of testing what trust means to the consumer, yet another time. When Internet security was still a big, heavy factor, people didn’t feel comfortable putting in their bank details or credit card details. You had to work on identity fraud mechanisms, convince them that it was safe, and give them fraud alerting when something required multifactor authentication. We have come a long way since the advent of the World Wide Web, getting to a point where people want to do digital payments, right, and the share of wallet has increased for wallet transactions across both ecommerce and in-store. So, it has definitely been a work in progress to get to where we are now, a place where consumers trust us with ecommerce and in-store transactions. The reason why I say we are at an interesting juncture is because the typical forms of biometrics, like the voice we used to be able to rely on to authenticate who the consumer is on the voice channel, for instance, can now be easily spoofed, if you will, with the advent of generative AI. So you have to be extra careful in how we actually add security to some of these experiences that we wanna experiment with. I’m not saying we are already there. But we’re certainly thinking about what that introduces in terms of trust. What does it mean to secure our systems that are interacting with consumers in those ways? So I think it’s going to evolve, because we have to rethink what trust is in a generative AI world, in the new age of AI.
[00:19:07] Ali: How do you see it changing development organizations going forward? Do you need less developers, or do you still have the same amount of developers? They’re working on more stuff. Is it a different level of developers? How are guys like you thinking about that?
[00:19:20] Mike: Well, you know, as you can imagine, a lot of these students are asking questions too, like, you know, what does the number mean? Because I do think these entry-level software jobs are going to wane.
[00:19:30] Ali: So when you say entry level, Mike, what do you mean by entry level? First job out of college program?
[00:19:36] Mike: First job out of college. Yeah. But I think what’s also happening, though, which is countering some of that, is that, like that student I mentioned, a freshman who, if you looked at his LinkedIn, looks like he’s two years past school, kids are doing more interesting things much earlier in life. When I was 17, 18, I sure wasn’t thinking about submitting an academic paper and going to present all these different things, and it’s like, wow. So, like, I think that gives me a lot of hope. Now, that’s not every student, though. And so it remains to be seen. I think with these tools, at General Motors, they had about a thousand people trying out Copilot, and, like, 31% of the code was being generated, etcetera. We found that those tools actually tend to help more senior people than junior people, which is a little bit counterintuitive, but in a way, it kinda makes sense. But, I was talking to someone the other day, there’s also a category of people, and you mentioned age, that, you know, get so locked into using Emacs or, you know, vi that they don’t look at what Cursor enables, and they’re almost resistant to that change. Which I think is the other risk: like, yeah, okay, you got a bunch of senior people, but are they gonna actually use the tools in the way that you want them to? And so, you know, there’s this real balance of getting these kind of AI-native employees, which I think are really important, these folks coming right out of school, but ensuring that your more senior talent has that intellectual curiosity to want to go learn, and unlearn maybe what worked before that may not work today. I think the net of it all is that, for tech companies, I think the impact’s gonna be smaller than for non-tech companies.
[00:21:07] Host’s Outro: And that’s a wrap on the best of the year episode. Hearing all of those insights back to back really shows what a transformative and really insightful year it’s been for technology leadership. I really wanna thank again all our incredible guests for joining us, and a massive thank you to you, our listeners, for tuning in, sharing, and making Signal to Noise a part of your journey. So as the year draws to a close, we hope you take the time to celebrate your achievements, recharge that big battery, and gain clarity on your vision for the year ahead. We are so excited about what we’re planning for 2026, and I hope and trust that you are too. Don’t forget to review, subscribe, and share the episode with anyone who needs a little leadership motivation. Follow us on all the platforms, and we’ll see you in the new year for more signal to noise. Until then, keep driving collaboration, embracing change, and finding those key things that will have the biggest impact for you next year.
[00:22:02] Outro: Signal to Noise is brought to you by Riviera Partners, leaders in executive search and the premier choice for tech talent. To learn more about how Riviera helps people and companies reach their full potential, visit rivierapartners.com. And don’t forget to search for Signal to Noise by Riviera Partners on Apple Podcasts, Spotify, or anywhere you listen to podcasts.
About the host

Eoin O’Toole
Managing Partner – Venture, Riviera Partners
With over 16 years of experience in the executive search industry, Eoin is a Managing Partner and leads our Venture-backed practice, which focuses on the unique needs of early stage companies and startups in rapid growth and expansion phases. From concept through product-market fit, he has a deep understanding of the DNA of leadership talent that can propel an early-stage company, and the ability to anticipate how an emerging executive and an accelerating company will grow together.
Being a former CEO and founder himself, Eoin is passionate about helping clients build their dream teams and enabling talent to reach their full potential.


