
Episode 15: Data Science and Medicine: AI in Healthcare

AI is the new frontier in medicine. Learn how to ensure that sensitive information is secure while also reaping its benefits for the sake of the patient and the advancement of public health.



Tina Tang

VP of Product Marketing at Privitar

Aaron Sullivan

Head of Commercial Strategy, Radiology Digital Solutions at Bayer Pharmaceuticals



Tina: I’m here today with Aaron Sullivan, who is head of commercial strategy for radiology digital solutions at Bayer Pharmaceuticals. He is responsible for creating and executing the commercial strategy for Bayer’s solutions in this space in the Americas region, including artificial intelligence and enterprise software, the commercialization of an AI marketplace, and the platform for image intelligence applications, which are SaaS-based. I’m really excited to welcome Aaron to the show.

Aaron: Thanks, Tina. 

Tina: First warm up question is, what are you reading?


Aaron: You caught me in a good spurt. I don’t religiously, or even routinely, read. But I do love everything that David Sedaris serves up, so I just read a couple of his latest books. More recently, I ordered a bunch of books from Randall Munroe. If you’ve heard of this fella, the one that I’m looking at now is called What If?: Serious Scientific Answers to Absurd Hypothetical Questions. It’s just a lot of fun. He’ll solicit questions from the internet or from readers, sometimes even children, like what would happen if you filled the universe with soup all the way from the sun to Venus, and he’ll do his best to come up with a real scientific answer. He’s an ex-NASA engineer, so he’s got the background. But obviously, they’re absurd questions, and the answers are a bit absurd, and he explains them in a fun way. I thought that was really great.

Tina: That’s a great recommendation. I just wrote that down, because I love that kind of stuff.

Aaron: It’s really cool. He has another book called How To, same thing, really absurd scientific explanations of things, like how the human body works or how an internal combustion engine works. The way he describes them, he gets away from all the jargon and just calls things, more descriptively, what they might be.

Tina: Okay, I need to read that one, because my partner tries to explain things to me that have to do with internal combustion engines, and I just can’t get it.

Aaron: He does these really basic illustrations and these kind of childlike descriptions. But it’s all scientifically based. It’s not imaginative, it’s not make-believe.

Tina: Isn’t it funny how some people’s brains are wired certain ways? He can explain the inner workings of an internal combustion engine, and I could explain to him how a database works, but it’s not the same part of the brain, I guess.


Aaron: Well, he’s marrying both. In the foreword of the book, he talks about how, as a five-year-old, he just randomly said one day, do you think there are more hard things or more soft things in the world? It struck his mother as compelling, so she wrote it down and shared it with him years later. She said, I don’t know, and he said, well, let’s just think about it in the house. And he picked out some hard things and soft things and somehow extrapolated them to global levels, and made his decision. I think it’s not just a kitschy style. This is the way this guy thinks.


Tina: Yeah, it’s like a childlike curiosity, and he stayed with it. I love that. I have a childlike curiosity about how healthcare systems and providers are using, or hope to use, AI to improve their systems and care. And I know that you are somewhat of an expert at this because of what you’re doing at Bayer, so I was wondering if you could share some of your experiences and insights about this topic. We’re hearing a lot in the media about how healthcare systems, even before the pandemic, were already under-resourced, already on the edge. Then, during the pandemic, they were completely overwhelmed, and a lot of them continue to be overwhelmed. So what is the promise of AI for health?


Aaron: Well, I’ll start by saying, because you referred to me as some sort of expert, that we remind ourselves continuously that if anyone in our industry says they’re an expert in AI, you should be very wary of what they say next. So I will absolutely not classify myself as an expert, although we are passionately interested in and dedicated to trying to bring some semblance of usable commercial instances to market in a responsible fashion. With that out of the way, though, AI is all around us. We’re using it every day, whether we think we are or not. It’s often interesting when I talk with healthcare providers and try to gauge their level of comfort with this concept of AI in medicine. For the folks that really dig in and say, I would never, I think with just a few conversations about the way they live their personal lives as consumers, they’d realize just how much artificial intelligence surrounds the products they use, the decisions they make, and the things they value and lean on to manage their own lives in real ways. It’s different, though, when you think of using it as a consumer versus using it in your profession, and the stakes are so high in medicine. So we do need to be careful that we don’t conflate AI across different industry segments, because in medicine the stakes are just too high. We’re not just going to get a bunch of misguided ads in our social media feed if it goes wrong. We’re going to put patients at risk and cost hospitals and healthcare systems tremendous amounts of money and resources through misdiagnosis.

Tina: That’s such a great point; the way AI is being used in different industries differs in its potential impact. So I’m wondering if you could share with us some examples of the patient outcomes that might be realized if AI were used. Let’s just take radiology, which is your field.


Aaron: Well, maybe I’ll touch on both, because bigger than radiology, although intimately intertwined with the radiology world, is this idea of the digitization of health, which has grown steadily. What maybe hasn’t kept pace is interoperability of data, and that’s a bit of an unrealized promise, in my opinion, in the healthcare industry. What I mean is that as more and more data is collected through the care that’s provided, increasingly it’s digitized and tied to machines and software. When that’s done, you get a treasure trove of data coming back, and pretty quickly it turns into a bit of a tsunami, because it has to be presented and organized in meaningful ways. If it’s not, this is where you still see errors and delays in care, inefficiency, and a patient experience that suffers. And in the bigger picture, there’s the potential for population health applications: the promise of shifting the priority away from reactive care, where I get sick, I turn up at the hospital, and I get treated, and AI could be involved in that treatment. But bigger and longer, longitudinally, than that: if you cut AI loose on the massive trove of data in healthcare, which right now is kind of poorly organized and coordinated, AI would be very well suited to find patterns that humans either can’t readily see, or patterns they might be able to see but don’t have the time and energy to analyze. It could quantify those patterns in a way that we can’t, and it could identify at-risk patients before acute events occur. Think of the ability for a primary care physician to know that a patient of theirs is at risk for a significant cardiac event in the next month. How would you know that? Only through the analysis of massive amounts of data in the population. Only with that level of analysis and breadth of data can you really start to see a trend, and then get ahead of those trends before they become acute issues.

Tina: So there’s the vector of increasing digitization of health factors, of conditions, of diagnoses.

Aaron: Yeah, I think that’s already happening.

Tina: And the signals, right? We’re generating data through new kinds of technology, new kinds of diagnostic tools, even.


Aaron: Even old tools that we’ve just finally taken the trouble to make digital: not putting the lab report out on paper, but actually collecting it as discrete data elements. With a blood sample, those reports come back on paper. Initially, digital healthcare was like scanning that paper into a system and creating an electronic file. Not super valuable for data mining, but maybe a little better for health portability for a patient. So if you take the simple analogy of turning that blood analysis, data points on paper, into discrete data elements that are attached to that patient and loaded into a database, and then you do that at scale, you can really start to see some things differently.
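The shift Aaron describes, from a scanned page to discrete data elements, can be sketched in a few lines. This is a minimal illustration only: the record shape and field names here are hypothetical, not any real EHR or FHIR schema.

```python
from dataclasses import dataclass

# Hypothetical shape for one discrete lab result. The fields are
# illustrative; a real system would follow a standard such as FHIR.
@dataclass
class LabResult:
    patient_id: str
    test_name: str
    value: float
    unit: str
    collected_on: str  # ISO date

def parse_lab_line(patient_id: str, line: str) -> LabResult:
    """Turn one line of a paper-style report, e.g.
    'Hemoglobin: 13.5 g/dL (2021-04-02)', into a discrete record
    that can be attached to the patient and loaded into a database."""
    name, rest = line.split(":", 1)
    value_str, unit, date = rest.split()
    return LabResult(
        patient_id=patient_id,
        test_name=name.strip(),
        value=float(value_str),
        unit=unit,
        collected_on=date.strip("()"),
    )

record = parse_lab_line("patient-001", "Hemoglobin: 13.5 g/dL (2021-04-02)")
```

Unlike a scanned image of the same report, `record.value` can now be queried, trended, and mined at population scale.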

Tina: Right. So you talked about this data that’s being captured, generated, mined, imaged. What are some of the ways this data is being used after the point of creation? You touched on the preventative.


Aaron: Yeah, to take a turn toward the radiologist here: in radiology practice, you could argue a radiologist is kind of on the preventative side, or at least on the diagnostic side. Typically, diagnostic imaging comes into play early and often in patient care. On the early side, it’s the most magical, fastest, easiest, cheapest, least invasive, most highly tolerated way of figuring out what exactly is wrong so that we can guide therapy or treatment accordingly. So the role of data is absolutely critical in radiology. It’s not enough just to read an X-ray or a CT scan as an image to make a diagnosis. If you imagine a radiologist doing only that, you really aren’t appreciating what’s necessary to interpret those images. Radiologists also need and want more context to make the most probable diagnosis and guide the most confident treatment strategies. Here I’m talking about, in addition to the X-ray: patient demographics, prior patient history, other diagnostic test results, maybe not even imaging (I mentioned a blood test before, a lab result), and of course the intake form: what symptoms are they communicating and presenting, and what has been noted during a physical exam, which is still very much an important part of diagnosing a patient. Having all that information available to the radiologist at the point of read can dramatically improve the confidence of the diagnosis. That data does exist, but oftentimes the challenge is that it’s not meaningfully organized and presented in a timely fashion for the radiologist. That is where artificial intelligence can also play a role. There’s the ability for AI to see and do things that humans are not well suited for, especially in image-based or pixel-based analysis. But there’s also the ability for it to do, faster and more efficiently, things that humans could do but that are tedious, mundane, and rote, and not really worthy of a trained professional’s time and energy.


Tina: Talking about patient health data, which is maybe one of the most sensitive types of data we can think about: how does a company like Bayer, which provides AI-enhanced solutions for the industry, protect this kind of data?

Aaron: That’s a great question. Healthcare data is, as I mentioned earlier, kind of the most personal, most sensitive category of data that we generate as humans.

Tina: It’s very valuable, right, and also very vulnerable, not just for individuals, but for the human race.

Aaron: Absolutely, you’ve really nailed the scale balance there. It’s not just that it’s super private. It also happens to be that its best use, even above and beyond your individual diagnosis and health, is leveraging it for this population approach that I described. If we all opt in on that, think of the amazing things we’ll be able to do with predictive care. But you need the data to do that, and because it’s so sensitive, it’s always a battle to get hold of it. That actually makes it really expensive in healthcare, because of all the controls, the responsibility, and the security necessary, to leverage data to develop AI or other software. It’s an unintended consequence: no one’s really monetizing it as much as they’re protecting it, and that comes at a cost. It does have an effect on slowing some products to market. There’s a host of legal and regulatory standards already designed around safeguarding patient data. Some of them relate to that interoperability promise I described; some relate to security, and cybersecurity specifically. Some of these you may have heard of: for example, HIPAA, or HITRUST, or, not necessarily a healthcare rule, a SOC 2 certification, which ties to generally accepted cybersecurity best practices for software. Viable vendors in healthcare obviously build their products with these in mind. Another aspect that complicates it is the absolute need for growth and a shift into cloud-hosted applications. Healthcare lags a bit here, and again, you could argue responsibly so, given the stakes. But as healthcare gets more comfortable with that idea, it will enable some things to happen at scale, and hopefully better interoperability. There’s a little bit of trust-building that has to occur, which is perhaps ironic. It’s worth noting that data stored and managed locally, on premises, is not immune to cybersecurity risks. You see the headlines every day about ransomware and hacks, but in my industry, the healthcare industry, as of a year ago, about one in three healthcare organizations had reported being hit by ransomware attacks. One in three. It’s absolutely ubiquitous: roughly 600 US healthcare organizations, 18 million records hijacked, estimated annual costs in the 20-plus-billion-dollar range. So the idea that data stewarded at a hospital or by a healthcare provider is safe is, I think, worth questioning. I’m not arguing that others can do it better, only that it’s not the Fort Knox of data security.


Tina: I read a book by a New York Times journalist who covers cybersecurity, or InfoSec, as she calls it, and she said that as far as cybersecurity goes, I’m paraphrasing, everyone is in everybody else’s business already. It’s just like that, and you should just get used to it, because it’s a fact. But there are things that can be architected so that even that kind of risk can be kept minimal, boxed in, like with tokenization and encryption, things like that.

Aaron: Yeah, tokenization and pseudonymization and de-identification, those are all techniques that I know we’re using, and others are using as well, especially when we try to take data off premises, from a hospital into the cloud. And by the way, the reason we’re doing that, in many cases, is not just cloud storage, which is prevalent with all the big data players. AI applications in medicine tend to be heavily dependent on GPU compute, and so hosting and scaling AI on premises is going to be a losing battle for a hospital, given the prevalence, type, and number of AI applications coming to market and the reams of data that have to run through them. Asking a local hospital to stand up server space and maintain it securely is going to be a never-ending battle; IT departments in hospitals are already struggling to keep up with this wave. So the move to cloud is not just about getting data out of the hospital. In fact, that’s probably the least interesting component of it. It’s more about managed services and hosting applications in the cloud to reduce the demands on the local IT infrastructure.
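The tokenization idea mentioned here can be illustrated with a small sketch. This is not Bayer’s or any vendor’s actual implementation, just the general pattern: direct identifiers are swapped for random tokens before a record leaves the hospital, while the token-to-identity map stays on premises so returning AI results can be re-linked locally. Field names are made up for illustration.

```python
import secrets

def tokenize(record: dict, id_fields: tuple, vault: dict) -> dict:
    """Replace direct identifiers with random tokens.
    The 'vault' mapping token -> original value never leaves
    the hospital; only the tokenized copy goes to the cloud."""
    out = dict(record)
    for field in id_fields:
        token = "tok_" + secrets.token_hex(8)
        vault[token] = out[field]  # kept on premises
        out[field] = token
    return out

# Hypothetical imaging-study record with direct identifiers.
vault: dict = {}
safe = tokenize(
    {"name": "Jane Doe", "mrn": "12345", "study": "chest-ct"},
    ("name", "mrn"),
    vault,
)
# 'safe' can now be sent off premises; 'vault' stays local,
# so AI results coming back can be re-linked to the patient.
```

The design choice worth noting is that the tokens are random rather than derived from the identifiers, so the cloud copy alone cannot be reversed.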

Tina: Right. Some of the customers that we’ve been working with in the healthcare space use cloud, but they have to do so in a very intentional way, so that they meet all of the different jurisdiction-related reporting, privacy, or data protection requirements. They need that data from multiple sites, and sometimes multiple sites within the bounds of the same country are still not allowed to share data with each other until certain levels of protection are applied to that raw data. Only then are they allowed to bring it up into the cloud, run analytics on it, and use it for improving their operational efficiency, improving care, and research. So cloud seems like it’s risky, but there are actually ways to architect and implement those systems and processes that make it quite safe.


Aaron: Absolutely. And given what I just recounted, the costs, the number of records compromised, and the frequency and prevalence of ransomware attacks, you have to ask: cloud is dangerous or safe compared to what? We know the alternative already, and it is not without risk. It’s worth noting, too, that in most cases, maybe all, but certainly for most vendors like Bayer that are leveraging cloud to do the work I described earlier, we never actually see the patient data, and we certainly don’t store it in a persistent fashion. In fact, quite the opposite: we’re precluded from doing that by most of the license agreements. Hospitals are very protective in stewarding that data. We’re truly doing it to execute a service, so that we can give back to that same user real-time information to help with the treatment and diagnosis of that patient. The data comes to us tokenized, it goes back enriched with information from the AI, and then it’s cleaned from our cloud instance. So it’s not the case, and probably never will be the case, that industry simply profits from holding this data. And this might be a point worth delineating when you think about the way big tech in the consumer world gets people to sign up, where essentially the payoff is email addresses, physical addresses, phone numbers. It’s pretty clear that, as Facebook’s CEO famously said in Congress, “Senator, we sell ads.” That’s the business model, and it is not necessarily the business model in healthcare, and probably won’t be.


Tina: It’s a great point that you make, and I’m glad that you said that, because I know it’s becoming more and more important to individual citizens that the companies they trust their information to have their privacy in mind, like what you just described. It’s not about selling ads; it’s something much bigger than that. So, you’ve been in this industry for a while, you’ve worked in imaging in particular, and I think you’ve seen how this space has grown and evolved over time, working both on the technology product side and on the business side, meeting with hospitals and care providers. Is there a commonly held belief in the industry that you either are a huge champion of or violently disagree with, because you can see that it’s holding the industry back?


Aaron: Well, it’s unfortunate. The first thing that pops to mind is an unfortunate way that AI in radiology specifically started to become prevalent. There were some early thought leaders in the space, or actually maybe adjacent to the space, so they weren’t radiologists, but they were advocates for artificial intelligence and big tech, and they famously and publicly predicted that we wouldn’t need radiologists in five years. The premise, and this much is true, was that it’s a data-rich profession. Tremendous reams of data come through radiology; by some studies, radiologists interpret an image every three to four seconds during an eight-hour work shift. That’s how much, and how frequent. As I’m describing it, and this is disingenuous, you can almost picture a manufacturing line of images. Anything that’s done that routinely, at that high volume, chest exam after chest exam after chest exam, kind of lends itself to machine learning. And so there were some famous overestimations about the demise of radiology as a profession. Fortunately, that’s not true. We’re five years on from that statement, and here we are needing radiologists more than ever. But the unintended consequence was that some radiologists dug in and really steeled themselves against this technology, and maybe rightfully so: if the premise is that by using it, you eliminate yourself, there aren’t a lot of folks who will sign up for that. So it was a bad way to start, maybe a textbook lesson in how not to gain rapid adoption of a technology with its users. So yes, people dug in, for a variety of reasons, and many of them are still good reasons. With AI in medicine, again, the bar is highest: we need to have supreme trust in these algorithms before we cut them loose. To do that, we need transparency. We can’t have black-box algorithms in medicine that can’t be explained; AI has to be accountable in the end. We can’t afford to have an AI tell a patient that a lesion is benign, and therefore no treatment is recommended, and then a year later it turns out it was wrong, and the patient either has late-stage cancer or is possibly even dead. We can’t afford to turn to the AI and say, what happened, and have it just say, I don’t know, I’m not sure. We have to have explainability and accountability, and until we have them, for sure we want it to augment medical professionals like radiologists, not try to replace them.


Tina: It’s such an important point that there was overhype, like you said, about five or six years ago: overconfidence, overstating capabilities. And the point is that it’s meant to augment human decision-making, not to replace it. We’re so far away from that.

Aaron: Yeah, one way we try to frame it, just to make sure that our customers know where we stand and that we’re on the same page, is that we think it can do a lot of things. It can improve their workflow. It can reduce tedious and repetitive tasks. It can occasionally handle find-the-needle-in-the-haystack scenarios that a radiologist in theory could do, but in context and in the moment is very unlikely to do. For example, when a radiologist reads an exam, they see the context of the rest of the patient history and the other information, and they’re thinking in probabilities, like any medical professional. Is it likely to be some rare genetic disorder that’s never been seen before? Or is it more likely, based on all the information here about this obese person who’s been smoking for 40 years, to be one of the things that are much more probable? So AI, running in the background, can let the radiologist do their probabilistic diagnosis, but then, either at the same time or even later, because it’s not an emergent case, it can scrub through that data at its leisure and say, hey, there could be something here. Tomorrow morning, when that person comes back to work, there could be a triage inbox that says, you might want to take a second look at this, and here’s why. It’s still up to the radiologist to decide. And there’s no reason we would want a radiologist focused on that search; it would come at tremendous cost to the rest of the primary work that they do. So those are applications for AI. Ultimately, we want the radiologists freed up to practice at the top of their license, and we want the AI to handle the workload necessary to free them up to do that.
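The triage-inbox pattern Aaron describes can be sketched as a simple scoring-and-ranking loop. Everything here is hypothetical: `ai_score` stands in for a real model, and the study records and threshold are invented for illustration.

```python
def ai_score(study: dict) -> tuple[float, str]:
    """Placeholder for a real model: a production system would
    analyze pixels and clinical context. Here we just read a
    made-up pre-computed score."""
    return study.get("suspicious_findings", 0.0), "possible incidental finding"

def build_worklist(studies: list[dict], threshold: float = 0.7) -> list[dict]:
    """Score non-emergent studies in the background and surface
    anything above the review threshold, with a reason attached."""
    flagged = []
    for study in studies:
        score, reason = ai_score(study)
        if score >= threshold:
            flagged.append({"study_id": study["id"], "score": score, "reason": reason})
    # Highest-confidence flags first; the radiologist still decides.
    return sorted(flagged, key=lambda s: s["score"], reverse=True)

worklist = build_worklist([
    {"id": "CT-101", "suspicious_findings": 0.92},
    {"id": "CT-102", "suspicious_findings": 0.15},
    {"id": "CT-103", "suspicious_findings": 0.78},
])
```

The key design point matches the conversation: the AI only populates a prioritized review queue; it never closes a case on its own.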


Tina: I think, speaking from a patient’s perspective, I would want that: just focus on the important part, right?

Aaron: You know, if you were on an airplane and the pilot came back and served you a drink, you might be wondering, is this really where you ought to be right now? Don’t you have something more important to do?

Tina: I recently got an electric car, and I’ve been testing the autopilot stuff, which I think goes to that overstated-function point. It’s actually the full self-driving version, not just autopilot, which helps you with parking and things like that. It’s supposed to be full self-driving, but it’s not, and it’s not going to be either, not in the lifetime of that car.

Aaron: Someday, someday, very slowly and incrementally.

Tina: Someday. But right now, it’s helping me drive better. Anyway, that’s my layperson’s analogy: augmenting my driving skills. So Aaron, how would you measure the success of AI in radiology?


Aaron: Yeah, it’s important to define, because we’re not just proposing technology for the sake of technology. There are real challenges that need to be addressed in radiology, and AI is one potential solution. What we’re trying to do is improve access to diagnosis, manage the complexity of care, and reduce burnout and diagnostic error. Procedure growth has been on a rocket-ship trajectory for years in radiology, and there are good reasons for it: it’s widely available, it’s affordable, it’s highly tolerated and non-invasive. It’s kind of magic in terms of healthcare. But more recently, the technology in the modalities we use to acquire those images has evolved in a dramatically scaled way, so there’s a fivefold increase in the amount of data that radiologists see in an exam today. This is not just more procedures; on top of more procedures, there’s more data in each exam that has to be interpreted by a radiologist. We’re at a point now where the technology is growing faster than the number of radiologists can grow. And what are your choices? You can’t slow down diagnostic radiology; it’s just too valuable. It takes more than a decade to recruit, educate, train, and deploy radiologists, and that’s just too slow. So you have to leverage technology as at least one option, and that’s where machine learning can be deployed. It can be deployed right now, it can be deployed at scale, and it can help with radiologist burnout. Forty-nine percent of radiologists report signs of burnout. That’s significant. These are the folks responsible for diagnosing, and it’s a clear sign that they just can’t keep up with the workload. Some studies estimate over 40 million diagnostic errors annually across the globe. That may seem alarming, and obviously not all of those are acute, but they are mistakes, missed reads, and delayed reads. So how do you alleviate this burnout and this diagnostic error problem? If you can’t grow more radiologists, you have to leverage some technology.

Tina: Right, so AI is important in that it helps radiologists be better at what they do, prevents burnout, and improves patient outcomes.

Aaron: Absolutely. That’s what it’s about. 

Tina: Aaron, what’s a takeaway that you want our listeners to walk away with from this conversation?


Aaron: Well, a couple of times we’ve touched on the ways that AI in healthcare and radiology might differ from the way we experience AI in the business-to-business world, or even in the business-to-consumer applications we’re all surrounded by. That’s really important to know, because in healthcare the stakes are supremely high for patients, and for the healthcare systems and professionals responsible for them. Because of that, AI in healthcare will be more heavily governed, more methodically deployed, and used in a very conservative fashion. That said, it is coming, and we need it, for the reasons I just described. The applications are absolutely necessary and valuable. And finally, they rely heavily on patient data and access to that data. So as consumers and medical professionals try to embrace this technology, patient data is central to it. That gives some people concern, but it’s going to be vital for the success of healthcare overall.

Tina: So I want to thank you for joining us on InConfidence. Aaron, it’s been a pleasure having you.

Aaron: Absolutely. I enjoyed it immensely.
