Venture Vanguard:
Troy Bannister

Hospitals know how to triage patients. AI tools? Not so much. While they’re racing to adopt AI to stay competitive, few have the infrastructure, standards, or processes in place to evaluate AI tools effectively or monitor their ongoing risk.

Troy Bannister, founder of the healthcare data platform Particle Health, is now focused on helping hospitals safely and effectively bring AI into the clinical setting. He founded Onboard AI to provide healthcare organizations with the information and tools they need to adequately vet AI vendors and assess their unique risk.

Bannister spoke with Riviera Partners to share his insights into the fast-evolving healthcare AI landscape, what makes vendor evaluation so complex, and the changing roles of healthcare providers in an AI world. This conversation has been condensed and edited for clarity.

The big problem in healthcare right now is that there’s this push for organizations to adopt AI to stay competitive and reduce costs. But at the same time, the users of AI are very nervous, as they should be, about deploying these tools. They’re new, and the technology isn’t well understood.

This leaves hospital executives stuck between a rock and a hard place. Their AI budget has tripled, they need to spend millions of dollars and compete with the hospital across the street, but they don’t know how to appropriately find, evaluate, deploy, and manage these tools.

What we’re doing at Onboard AI is building a platform that allows them to find the right vendors, evaluate those vendors, understand the risks, mitigate those risks, and safely deploy these tools in a clinical setting.

Healthcare organizations are already leveraging so much healthtech. What makes evaluating AI vendors different?

First, it’s a bit of a black box. You put data in and it gives you a result, but it’s hard to understand exactly how it got there. Another big difference is that AI is always changing. It adapts and learns over time, which can then introduce new risks.

Doctors have a really interesting problem: how do I trust the tool to make a decision with me or for me? It’s still my license, and malpractice is still a major concern. 

If I’m a radiologist, I’m now using AI as a throughput enhancer to do 150 scans a day instead of 50. But I don’t have time to evaluate every AI decision in detail. I have to just click accept, accept, accept at some point. That means providers could be blindly accepting outputs.

Given this risk, how are healthcare organizations responding?

Almost 100% of hospitals we talk to respond to these risks the same way: they create committees, which is very hospital thinking, right? 

They all recognize this is a new surface area of risks. However, they haven’t been able to increase their capacity to evaluate these tools in an efficient and consistent way. As a result, most of these AI committees are taking 12 to 18 months to evaluate a single tool. There’s no consensus framework yet like we have in information security, no equivalent of SOC 2 or HITRUST for AI, so every hospital is building its own process from scratch. The industry is moving towards a consensus, but we’re still a long way off.

Are there any misconceptions about healthcare AI that you keep seeing?

After doing as many assessments on AI tools as we have, we are starting to see some patterns. The first is that these tools don’t perform the same across hospitals. Foundational models might be trained on the same data, but hospitals have different schemas, different populations, different inputs. If I take the same model from Hospital A and then bring it to Hospital B, it will perform differently.

For example, we evaluated a vendor that didn’t use race in their model. That was a risk we flagged because the hospital was looking at a use case that includes COPD patients, and a lot of those patients get prescribed albuterol. However, Puerto Ricans are six times less responsive to albuterol, and there is a density of that population in this hospital’s geography. Without looking at the race of the individual and only using zip codes as a proxy, the AI would have learned over time to avoid prescribing albuterol in those areas, which would then create potential care gaps for the other populations who live there.

Another misconception is around how involved the AI vendors will be. All of the risk is on the hospital. They give you a tool and say, this is as good as we can get it. But they have no way of knowing if the tool is being deployed safely and accurately, or if it will perform well in your patient population. It falls to the hospitals, not the vendors, to ensure that AI is being used safely.

So how do hospitals manage that kind of risk?

When I talk to hospitals, my biggest piece of advice is to stop doing 100 pilots a year and instead select the 10 pilots where you can focus your time and resources. That enables them to have a much higher success rate in converting these pilots to production. 

The really big challenge for hospitals right now is that they don’t know what questions to ask, and they don’t know how to select the right vendor for their unique, nuanced problem. There are vendors that fit Hospital A really well and vendors that fit Hospital B really well, so picking the right one is a hard problem.

The push to adopt AI is coming from both directions. Executives are pushing from the top down. They see it as an existential requirement because they believe AI will provide a better patient experience, potentially lower costs, and better margins for the hospital. And doctors, especially after seeing cool new tools at conferences, are pushing from the bottom up, because they want to stay on the cutting edge.

But there are two key obstacles. The first is that there’s no clear process. A doctor might find something they want to use, but hit a wall trying to get it approved. That’s what we provide: a structured way to submit a request, go through evaluation, and get to deployment.

The second is pushback over malpractice exposure. Most vendors position themselves as a clinical support tool, which provides a recommendation that you can take or leave, as opposed to a diagnostic tool as defined by the FDA. That means it’s always the doctor’s decision to accept or reject the AI’s output, which puts the doctor in a tough position.

What’s the solution?

I think we’ll need to shift from medical malpractice toward product liability, where vendors assume more responsibility. If the future really is an AI tool replacing a doctor in some capacity, someone has to insure that tool as if it were a doctor making an error.

If we can figure out the liability issue, what might healthcare look like in five years?

We’re heading toward a world where the cost of diagnosis drops to near zero. And I use the word diagnosis very explicitly. It will become very accessible to receive an accurate diagnosis through technology without having to see a doctor.

There’s still, throughout the care continuum, a lot that happens before diagnosis and a whole lot that happens after. I think the downstream of diagnosis is going to be everything for doctors. It’s going to be the actual care administration in the form of surgeries, prescription management, intensive care. There is a very human element to healthcare that will never disappear. We use the word care for a reason, and I think that will always be a part of this. Humans will be augmented by AI in that world, but never replaced.

About Riviera Partners 
Riviera Partners is a global executive search firm specializing in technology, product, and design leadership. With over two decades of experience and a proprietary platform that combines deep recruiting expertise with data-driven insights, Riviera is the go-to talent partner for venture capital, private equity, and public companies. Learn more at www.rivierapartners.com.
