Dr. Dexter Hadley is the founding Chief of the Division of AI at the University of Central Florida’s College of Medicine. Astronomically talented and a lover of rum, he’s also a pioneering entrepreneur. As the founder of Hadley Lab, he strives to “translate big data into precision medicine and digital health.”
Listen to “Bits To Bedside-Using Big Data in Precision Medicine” on Spreaker.
Featuring Dexter Hadley, MD, PhD, Founding Chief of the Division of AI at University of Central Florida College of Medicine and Founder of Hadley Lab
VP of Advisory Services at Privitar
Founding Chief of the Division of AI, University of Central Florida College of Medicine; Founder of Hadley Lab
Intro: Welcome to In:Confidence, the podcast for data ops leaders. In each episode, we ask thought leaders and futurists to break down the topics and trends concerning IT and data professionals today, and to give us their take on what the data landscape will look like tomorrow. Let’s join the data conversation.
Nick: I’m Nick Curucu, and this is In:Confidence, a podcast sponsored by Privitar. In:Confidence is a community of data practitioners, and it encourages conversations that will enlighten, educate, and inform the data leaders of today and tomorrow. Thank you for taking time to let In:Confidence be a part of your day. Joining our community today is Dr. Dexter Hadley, Dexter for short, who has the distinction of being both an MD and a data scientist at the University of Central Florida. So he’s one of those people who is actually able to influence our data leaders of today and tomorrow. Dexter, it’s a pleasure to have you in the In:Confidence community today.
Dexter: Thank you so much, it’s an honor to be invited to share my thoughts.
Nick: Before we start our conversation, I want to give the audience a little background on where you’ve come from and what you’re doing. You’re both an entrepreneur and a cutting-edge researcher, and you’re pushing the limits of artificial intelligence in medicine. You’re doing research on how to apply AI to breast cancer, you’ve started looking at ways to apply it to COVID-19 and radiology, and you continue your work in genomics. Those are some incredible feats you’re pulling off. A lot of your current work is with AI and X-rays, creating algorithms to help interpret them for the people who are practicing medicine. And folks, you can catch some of the lectures and videos Dexter has put out, such as “Bits to Bedside,” on precision medicine and artificial intelligence, and on developing AI for clinical applications, two of my favorites. You can find them by going to our website and looking at the podcast description; there will be links in there, along with several other links to the work Dexter has done and is continuing to do. Please take advantage of that, because this is some really interesting stuff we’re about to discuss today, and I know there’s more interesting stuff Dexter has out there in the data community. Dexter, you bring an incredibly unique perspective to data science, and to medicine for that matter. You straddle two very complex worlds at the University of Central Florida; you’re part of both the engineering school and the medical school as a result. Can you give our audience a glimpse of the career trajectory that got you to this point at the University of Central Florida?
Dexter: Oh, yes, absolutely. As you probably know, I always wanted to be a doctor, but my life changed significantly at the age of 10, when I taught myself to program and thought: this is what I’ll be doing for the rest of my life, figuring out how to apply computation in medicine. I’m from Trinidad and went to high school in Trinidad. I moved for college and went to New College in Florida, which is the honors college of the Florida State system, a pretty liberal education where I was able to pursue a very personalized trajectory combining computers and medicine. There wasn’t a lot of mentorship in Florida, so I went to Yale and studied with an endocrinologist. I built an expert system for my thesis, and that got me into med school at the University of Pennsylvania, an Ivy League medical school. It was this passion for computers and medicine that got me in there. I like to say I spent 10 years in med school trying to figure out how not to practice medicine. The genome was sequenced while I was in med school; I got a PhD in genomics, and I have a master’s in electrical engineering from Penn as well. All of it was trying to fulfill this drive, who knows where I got the idea, that computers and medicine would be important together. I went on to work at the Children’s Hospital of Philadelphia, where we did some large-scale genomics experiments and found drugs for ADHD and autism. I ended up matching at Stanford for residency, got to Stanford, and realized, again, that clinical practice is probably not where I need to be; I really love the research. 2012, when I was at Stanford, was when this deep learning buzzword came around. It became quite clear to me that genomics, interesting as it was, was really just the first wave of big data in medicine, and the future was in all the data that we generate as physicians and underutilize as data scientists.
I was at Stanford doing my residency, and I ended up getting a faculty position at the University of California, San Francisco, which is like the NBA of data science and machine learning research in medicine. And after all that, I came back to Florida to take care of my mom, who has Alzheimer’s; my dad recently passed away. And that’s it.
Nick: You did a lot of good work in genomics, and you’ve gone from Trinidad, to the West Coast in California, to Philadelphia, and now down to Central Florida at the University of Central Florida. What are some of the things you’ve seen along the way about how to take data and move it into the world of medicine, that AI-to-medicine connection that you’ve helped bring about over the course of your career?
Dexter: Yeah, I like to say I spent 10 years in medical school trying to figure out how not to practice medicine. For somebody like me, who likes to program and likes things organized, with a very engineering type of mind, medicine is the wrong career; medicine is hardly data driven. What I realized was that if I wanted to fulfill both of these passions, programming and medicine, genomics seemed like the obvious endeavor. So when I got into med school, I got a PhD in genomics, trying to figure out how to model data science and medicine together, not because anybody was advising it, but just because I love to program, and I saw a huge need for programming in medicine a long time ago. And I think you got the order slightly wrong earlier: for undergrad I went to New College, a small liberal arts school here in Florida, where I was able to pursue exactly what I thought I wanted to pursue, how to bring programming, medicine, and computers together. There weren’t too many mentors out there, so I went to Yale, where I shadowed an endocrinologist and we built an expert system for diabetics, and that was the personal statement that got me into med school. Suffice it to say, diabetes is a pretty straightforward algorithm and you don’t really need an expert system, but it was enough to get me into med school. Once I landed in med school, I realized medicine was just not that technologically innovative at the time. I mean, I went to Penn, an Ivy League school that was involved in sequencing the genome, and I was there for 10 years, from the beginning of that through the end, in med school.
And I don’t think the word genome was mentioned once in the clinical program, just because it’s a very researchy thing, and it stayed like that for a long time. And that’s how it goes: it takes 10 years from discovery to development of a drug, a billion dollars, and 99% of them fail. So I realized medicine could be a little different if we used computers to help optimize that process. Genomics was the optimization of drug discovery: we were able to develop a drug for ADHD at Penn and CHOP, the Children’s Hospital of Philadelphia, in two or three years, for much less than a billion dollars, a few million dollars. And that really opened my eyes: wow, this was a good idea all along. Genomics was the first of the big data, so to speak. But having spent this long in it, I don’t think any doctor looks at a whole genome as much as they look at X-rays, pathology, and the other kinds of big data types that a lot of doctors look at. So it became clear to me as I got older: we don’t have any apps where we look at our medical data, the data that doctors look at. There’s 23andMe, there’s all this genomics stuff, but there’s just not that much medical relevance yet for that kind of thing; there are obviously caveats, cancer being one. But every doctor looks at X-rays: broken bones, hips, X-rays for hip dysplasia, incoming and outgoing. And I just find it amazing that there is still no AI to really do this at the clinical level. So that’s the problem I’m trying to solve: basically, bring a level of systematic machine learning to medicine. There’s so much rich data that is relatively underused, and continues to be underused, for a number of reasons we can discuss.
Nick: I’d like to ask about that; that’s a big component. Especially in our last discussion, you talked about the work you’re doing in breast cancer alone, being able to take all the data you’re seeing in those X-rays and interpret it. How can you make it not just academic, but actually get it into the hands of the clinical world?
Dexter: Yeah, so, the baseline: these computer algorithms, the so-called deep learning algorithms, are pretty dull. They’re not that smart, right? You need to show about 1,000 images per label for these computers to learn that label with high accuracy, whether that label is dog, cat, bridge, ductal carcinoma in situ, metastatic carcinoma, whatever the label is, you need 1,000 pictures. Now, if you go to Google Images and type in dogs, you get millions of dogs; cats and bridges, you’re going to get millions. Type in ductal carcinoma in situ, and you just don’t get anything of quality.
Nick: I can’t even spell it, but I can imagine.
Dexter: Try typing anything a radiologist or a pathologist looks at into Google. And if you can’t get images that are well labeled to learn from, then there’s no way; it’s over. The reason you can identify dogs, cats, bridges, people, is because you can go on Google and find these things labeled; anybody can label them. But I feel, as a clinician, or a failed clinician, that we have failed our digital population, right? There are no apps for breast cancer, no apps for prostate cancer, hardly any apps for diabetes. And these are all relatively lifestyle kinds of diseases, you follow? We can get into why that is. These are currently unsolvable problems, and these are the problems we’re trying to solve. So we’re trying to get community-level interest in these algorithms so that people share the data with us, so we can find ductal carcinoma in situ on X-rays, find lobular carcinoma in situ, find metastatic carcinoma. And that’s just for breast cancer; there are all these other types of diseases we also want to study.
Nick: When you’re trying to find those, well, we’ve talked about the ability to share data, and you’ve said there have been some roadblocks in just trying to get those 1,000 pictures of whatever that carcinoma is. Let’s dive into that a little, because everyone talks about how we want to share data, but there are some pitfalls and roadblocks along the way that hold up a bit of what you’re describing. Can you talk about how you’re trying to break down those barriers to get data shared, to be able to do the work you’re doing within AI and ML?
Dexter: Sure, Nick. So you’ve probably heard of HIPAA, right? It’s the Health Insurance Portability and Accountability Act, or whatever. The first P in HIPAA is for portability: HIPAA is for the portability of health information. It’s meant for researchers like myself to share data amongst themselves securely, and with the patient securely. So this framework has existed since the ’80s; legally, we can share data. There’s also another P everybody thinks they know about, called privacy. And the HIPAA privacy law is pretty rigorous: if you don’t share the data as designated by HIPAA, you get fines, you get liabilities, there are obvious risks to the patients, and so forth. So if you’re a for-profit corporation and you want to share the data, you need something called informed consent, a signature saying that the patient has been informed of the risks associated with the study, and you need a release of medical information. Given those two approvals by the patient, legally you’re not getting any data in an unethical manner. But if you’re a for-profit corporation, one, it costs money to go get a HIPAA release and informed consent from patients; two, if you improperly share certain data, you’re in a whole lot of trouble. So what happens is everybody hides behind the privacy law. They hide behind the privacy law until they’ve amassed so much data that they no longer really care whether the patient gives them informed consent or not; that data is valuable on its own, in aggregate. HIPAA says you can share this data opaquely, the patient doesn’t even have to know; we can call it quality assurance, and it’ll really improve how you run your hospital. And that’s what’s happening: these hospitals are amassing so much data that the patient is almost not in the loop anymore, because it becomes expensive to put the patient in the loop and to be able to identify that patient from the data.
So all of this data gets traded in bulk, I’m sure at massive expense either way, and it has become an ecosystem that basically keeps the not-for-profits, like the University of Central Florida, out of the equation, because we don’t have the money to play in that space. Right? Does that make sense?
Nick: It does, because you bring up two things that probably not just the medical field but everyone faces. When you’re for-profit, you want to reduce your risk, and by reducing your risk you basically say: we don’t have to give you access to the data, or we lock it down so tight that there’s no utility to it, no use for it, no ability for someone to actually share it, because we’ve reduced the risk. And the other thing you just brought up is that in some cases, if we do want to open it up, we’ve aggregated it so much that it has a lot less utility; to an extent it could become, I won’t say useless, but watered down. That’s probably the better term, watered down, not useless, so everyone out there, I’m going to go with watered down. What I like about what you’re doing is trying to get things back into balance, where we’re willing to share information, because there are ways to do that today and reduce those risks, without watering the data down, so that it can be used to treat people as well as to discover things. I think a good example of this is in breast cancer. In our last conversation, you said there are almost 15 to 20 years of data missing from the datasets, because people just got rid of it.
Dexter: Well, again, it’s a sort of risk mitigation strategy, right, the statute of limitations. By law, you keep data for seven or eight years, something like that; by law, if you get sued, you’ve got to have the data to produce in court. If that data hanging around puts your organization at risk after seven or eight years, it’s your fiduciary duty to mitigate that risk. So all that film, for instance, from before we had digital radiology, probably no longer exists. One, it’s expensive to store that data for research. Two, legally you don’t have to keep it anymore, so you’re not required to store it. Three, the most secure thing you can do is erase that data and not be liable for it. So that’s what happens. And again, I’m not blaming anyone; I don’t think any for-profit hospital is to blame. I really feel like physicians are to blame, because we haven’t given patients any reason to share their data with us. You go on Apple’s App Store: Apple has given you a ton of reasons to share your Apple Watch data, and Apple used those reasons to build FDA-approved sensors for arrhythmias, for asystole, for AFib. FDA approved, right, with the EKG on your wrist. That’s where 100,000 people gave up all their data. And 100,000 people is nothing; we know there are 40 million mammograms a year, so 100,000 mammograms should be nothing to get. But it’s impossible to get, because of the way the data is siloed today, because we have no app on Google Play or the Apple App Store to really read that breast cancer data and show you the image. My medical data is among the most expensive, rich, quality data there is, yet there’s no app you can go on to download your imaging. You can get your charts from a lot of hospitals, all these long, wordy, very technical charts, but you don’t see the pictures they refer to.
So how on earth, as physicians, can we educate our patients: this is what a breast cancer mammogram looks like, this is what’s suspicious, this is what the lumpectomy took out, unless we can show them the image, right? I teach in school, and I see that, at least in American society, there is a massive push to go talk to your doctor, yet we’ve made it impossible to talk to a doctor about everything, because we don’t let the patient see it. I’d just like to change that at the outset. And then there’s something interesting to say about social determinants.
Nick: And it kind of goes back to the beginning: there aren’t a lot of these images to begin with, as you said previously. So if you’re getting rid of these images, there’s not a lot of ability for you to actually apply data science, with machine learning and artificial intelligence and the algorithms, to say, hey, there’s something here, or there’s something unusual, take a look, right?
Dexter: Again, look at Google, right, the accuracy of computers. There was a competition that went on for a long time; they’ve since stopped it. There were, I think, a million images organized into 1,000 categories, 1,000 images per category: dogs, cats, bridges, common things. And the competition was to predict the labels on some holdout set. They stopped the competition because computers have been able to outperform humans since 2015 or 2016, at 98, 99% accuracy. Which means if you submit a low-resolution picture of a cat, Google has a higher chance of recognizing that low-resolution cat than you do. We know this already; they stopped the competition. So there’s no doubt. People in this field know it doesn’t matter whether the label is cat or carcinoma; the lack of data is what’s stopping these relatively dull algorithms from performing at relatively high accuracy.
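Dexter’s point, that these “dull” algorithms are gated by the volume of labeled examples rather than by what the label means, can be sketched with a toy classifier. This is an illustrative simulation only; the one-dimensional “images,” the class means, and the sample sizes are all invented for the example and are nothing like the ImageNet setup he describes:

```python
import random

def learn_threshold(n_per_class, seed=0):
    """Fit a 1-D 'dull' classifier: the midpoint of the two class means.

    Class 0 samples cluster around 0.0, class 1 around 1.0; all the
    model can do is average what it has seen, so its decision boundary
    is only as good as the number of labeled examples behind it.
    """
    rng = random.Random(seed)
    class0 = [rng.gauss(0.0, 0.6) for _ in range(n_per_class)]
    class1 = [rng.gauss(1.0, 0.6) for _ in range(n_per_class)]
    mean0 = sum(class0) / len(class0)
    mean1 = sum(class1) / len(class1)
    return (mean0 + mean1) / 2  # the ideal boundary here is 0.5

# With a handful of examples the boundary wobbles; with ~1,000 per
# label it settles near the true midpoint, whatever the label names.
few = learn_threshold(5)
many = learn_threshold(1000)
print(f"threshold from 5 examples:    {few:.3f}")
print(f"threshold from 1000 examples: {many:.3f}")
```

The label could be renamed from "cat" to "carcinoma" without changing a line; only the count of labeled examples moves the boundary toward the truth.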
Nick: If you look at the clinical side and the academic side, are there differences in the level of detail that the data carries? And I won’t talk a lot about personal privacy and things like that, but are there ways you work to say: I can ensure privacy if I can have this as part of my study, or I can ensure that whatever the study helps determine, whether it’s a new drug or something else, can actually be reapplied back into the clinic? Can you give us an idea of the potential differences in the data, and the level of detail each side is looking for?
Dexter: Yeah. I find it much less intrusive to share a picture of a breast mammogram than a picture of my kids online. First of all, we share pictures of our children online all the time, for free. Those pictures have metadata about where you live and where the picture was taken, and if you have enough of those pictures, you can triangulate the faces. This is crazy, because Facebook or Instagram or whatever gives us an app that’s interactive, that we can use to share pictures with grandma and grandpa or whoever, right? But this idea that Facebook can’t be hacked? Of course Facebook gets hacked frequently. You think hospitals can’t be hacked? Of course hospitals get hacked frequently. So I’m not promising any privacy. Your Equifax report has been hacked, T-Mobile has been hacked; who am I to stop any of this hacking? But my point is, you share pictures of your kids, personal stuff, online, without any informed consent or knowing the risks or dangers of doing so. With my study, at least, if you get hacked, okay, they take the mammogram, but they don’t know who the one or the zero belongs to, because that’s kept as a separate key. So in ways like that you can maintain privacy. And with research, there are laws that protect this, right? If you violate my study and hack me, as opposed to hacking Facebook, there is a consequence if you get caught. So the whole atmosphere is just upside down; upside down and unfixable in the current state we’re going. And that’s just the pragmatic aspect of getting the data, the business of getting the data. There’s a whole biological side too, right? We live in America. You used the word useless previously; I like that word. Because if you look at the outcomes of COVID, the EHR data is relatively useless, the data from the medical record is relatively useless. Why is that?
Because people who get COVID take the bus to work, because they have to go to work. I don’t have to go to work; they do. And now the bus is at half capacity because of COVID. Where in the medical record does it say this? How are you supposed to learn anything about what really happens in America from the medical record? It’s very limited what you can learn, because you can’t ask the patient: do you take the bus to work, do you interact with people due to this or that, is your house crowded? None of this is in the medical record. These are all social determinants of health, and very little of it, if any, is in the EHR: that med-student, interactive kind of information, that anecdotal information that is critical to diseases like COVID and breast cancer. Does that make sense?
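The “separate key” Dexter mentions a moment earlier, where a breach yields the mammogram but not who it belongs to, is the classic pseudonymization-by-key-separation pattern. A minimal sketch, with the record fields, class name, and store names all invented for illustration:

```python
import secrets

class Study:
    """Hold research data and identities in two separate stores.

    A breach of `records` alone yields images and labels ("the one or
    the zero") but no names; re-identification requires the separately
    guarded `key_table`.
    """
    def __init__(self):
        self.records = {}    # pseudonym -> research data (shareable)
        self.key_table = {}  # pseudonym -> identity (locked down)

    def enroll(self, name, image, label):
        pid = secrets.token_hex(8)  # random pseudonym, no link to the name
        self.records[pid] = {"image": image, "label": label}
        self.key_table[pid] = name
        return pid

study = Study()
pid = study.enroll("Jane Doe", image="mammogram_bytes...", label=1)

# The shareable record carries the label but not the identity.
print(study.records[pid])
# Only the key table, held and protected separately, links back to a person.
print(study.key_table[pid])
```

In practice the key table would live in a separately secured system with its own access controls, so stealing the research store alone reveals nothing about who the patients are.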
Nick: Yeah, it does. I was going to say, it’s kind of like this: a lot of the work I’ve done is on the commercial side, and it’s like when people say, I want to know more about my customer, I want to know more about my consumer, so I want to take in all these other pieces of data and bring them into my data set, right? And what you’re describing is: as a physician, I won’t get all that other information that can help me potentially treat.
Dexter: Let me challenge you a little bit. A physician gets that information: you can ask the patient whatever you want. As data scientists, we get a one and a zero; we don’t even know who that one is. Which is why I really think, people spend 18 hours a day, at least, on the phone, as do adults, yet there’s very little medical interaction on that phone outside of the odd app, which is fantastic, don’t get me wrong. But that integrated experience is exactly what we would like with medical records, which have become standardized and which are now available: everybody has a portal. How do you get your COVID results? You don’t go anywhere; you log on somewhere and you get them, you know what I mean? So I think now is the time. COVID has forced the virtualization of everything, including healthcare, and people are interested: they don’t want to go to the doctor, they want televisits. And outside of the Zoom interaction, the teleconferencing interaction, there is no innovation in this space. And there must be, there has to be, to push the field forward.
Nick: Interesting. So let’s talk a little about that. As we share all that data and everything comes in, one of the things that comes up is the ethical use of data, the ethical ability to use that data for the right reasons. In many cases we talk so much about the wrong reasons that it inhibits us from actually doing what’s right. What are some of the things that come to mind when you’re doing your research and you see this data coming in, the ethical part of the research and the ethical part of using that data? You’ve said in the past that you’re having a lot more discussions on this now. Your thoughts?
Dexter: I think there are two angles or perspectives on ethics. One is the biology of ethics: outcomes for breast cancer are 40% worse in non-European-derived people, right? Meaning, probably 80% of the sequences we analyze for breast cancer are from European-derived people, yet the outcomes are 40% worse for non-European-derived populations. There’s a divergence in what we’re studying and calling precision medicine. Precision medicine is great; it’s just imprecise, or inaccurate, for most people, because we don’t even study their data, for various reasons. So the first problem is what everybody’s doing: if you’re at Harvard, or at Penn, or at Stanford, or wherever, you have so much data, because you’ve aggregated it, that there is no need to go get any community-level data. You can go build models and overfit them to your own patient population, except you just propagate that 40% disparity, or the 30% in diabetes, or 20% in whatever; there are tons of disparities left and right. And if you keep training on the same biased data sets, ethically, you’re propagating that exponentially with machines. Does that make sense? So that’s the first problem.
Nick: It does, because in my world we call those biases, right? If you’re always training on the same data, and for us the population is actual real people, you have a tendency to build a machine learning algorithm that has inherent biases based on the data set itself, potentially unknown to you.
Dexter: Well, very known. We know there’s a 40% disparity in the outcomes, so what do you expect if you train on 40% disparate data? You’re going to get a 40% disparity, at minimum, at scale. It costs money to go into disparate communities and get community-level data; I can tell you exactly how much it costs for COVID. This costs money, and even if you’re one of these big institutions, you don’t have the money; we’re academia. So it’s almost like, what do you expect us to do? We have all this data, it’s underutilized; we can train models all day, we can swap them amongst ourselves, but we have no real way to measure the community bias. I did genomics, we talked about this before: I was at Penn doing genomics in the late 2000s and studied autism for my PhD, say 3,000 kids with and without ADHD and autism, right? 99.9%, whatever percent, of them were European-derived. Not because minority kids don’t get autism; they just don’t go to the doctor for autism, or asthma, or any longitudinal disease. What you see for African American kids in Philly, which is, you know, not a European-derived kind of city, is emergency visits, falls, sickle cell crises, but not autism, not asthma, not these longitudinal diseases. The same goes for diabetes in older folks, the same for breast cancer. So the data doesn’t even exist, and it costs a lot of money to go get the data and measure the bias before you can even begin to properly clean the bias. That’s why I feel you have to be in a place like where I am now. I was at the biggest places with the most data; now I’m in one of the newest medical schools, our hospital just opened, we have zero data, and we’re all about community. This kind of place is where you can do this kind of research: build national studies that sample everywhere, based on criteria we ethically set.
So you can imagine: we’re going to study breast cancer, but we’re going to study it in these disparate populations, go find people, recruit people, and share data from those people, so that at least others can start to measure the bias and start to correct the bias in their own populations. I think this is critical if you expect AI to be mission critical, like medicine is mission critical.
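The mechanics Dexter describes, a model trained mostly on one group inheriting that group’s decision boundary and shipping the disparity at scale, can be shown with a deterministic toy example. The 90/10 cohort mix, the 0–100 feature scale, and the shifted boundary for group B are all invented for illustration:

```python
def best_threshold(samples):
    """Pick the cutoff minimizing training error for 'positive if x >= t'."""
    def errors(t):
        return sum((x >= t) != label for x, label in samples)
    return min(range(0, 101), key=errors)

# Group A: disease appears at feature >= 50. Group B: at feature >= 70
# (a different relationship between the marker and the disease).
group_a = [(x, x >= 50) for x in range(100)]
group_b = [(x, x >= 70) for x in range(100)]

# Training data mirrors a skewed cohort: 90% group A, 10% group B.
train = group_a * 9 + group_b * 1
t = best_threshold(train)  # the fit lands on group A's boundary

def accuracy(samples, t):
    return sum((x >= t) == label for x, label in samples) / len(samples)

print(f"learned threshold: {t}")
print(f"accuracy on A: {accuracy(group_a, t):.0%}")  # perfect for the majority
print(f"accuracy on B: {accuracy(group_b, t):.0%}")  # the disparity, baked in
```

The model is "accurate" on the pooled training data while systematically wrong for the minority group, which is exactly why measuring bias per group, not overall accuracy, is the prerequisite he insists on.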
Nick: We’ve got to begin to wind down our conversation, but that’s actually really interesting when you sit back: at the University of Central Florida, as you bring community, clinical, and academic data together, you can take care of the whole community, not just use data from a portion of the community to try to serve the whole. That’s very interesting, and it’s exciting to hear. It gives me confidence that there are at least people out there like yourself, and institutions like UCF, saying: hey, we serve everyone, and we want to serve everyone by taking their data so that we can serve them, as we say in the commercial space, uniquely. It can become more personalized, because we now have the data and information to do that. And that’s what you’re working on. That’s just amazing to me.
Dexter: I mean, technically, there's something in machine learning called active learning, or human-in-the-loop, where you try to minimize the human input needed to correct these algorithms that are mission critical. And that's the role a med school, to me, should play. A med school should be in the loop: training, learning, and teaching not just its students but algorithms. Because we do a lot of teaching in med school, and that teaching is just lost. It's never captured to train these stupid algorithms that need a one and a zero. You know, it's easy to translate whatever into ones and zeros and code it wherever it's needed. Nobody does it. You know what they do it for? Billing. They do it for billing, but they don't do it for AI. And there's an entire department dedicated to billing. So what you find is, instead of teaching directly, we go query ICD codes as if that is diagnosis. That's not medicine, that's data science. And I think the entire paradigm really has to change. You know, there are 180-something medical schools, and none of them have integrated AI. They have entire institutes, separate and independent from the med school, filled with PhDs doing all kinds of interesting things. But there are no med students in those programs as part of their requirements for graduation. And that, to me, is just crazy. There are a thousand different startups doing this; none of them have any data to do it on. And if they have the data, it's biased. So this is just never going to converge into an acceptable outcome, one that reduces costs and improves outcomes, under the current setup.
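[Editor's note: the active learning idea Dexter describes, minimizing the human labeling needed to correct a model, can be sketched in a few lines. This is an illustrative uncertainty-sampling loop on synthetic one-dimensional data with a deliberately trivial nearest-centroid classifier; every name, number, and the stand-in "human expert" function here are invented for the example, and nothing below reflects UCF's actual pipeline.]

```python
import random

# Toy unlabeled pool: two overlapping 1-D clusters (class 0 near 0.0,
# class 1 near 1.0). In practice these would be patient records or images.
random.seed(42)
pool = ([random.gauss(0.0, 0.3) for _ in range(50)] +
        [random.gauss(1.0, 0.3) for _ in range(50)])

def true_label(x):
    """Stand-in for the costly human expert 'in the loop'."""
    return 1 if x > 0.5 else 0

def prob_class1(x, c0, c1):
    """Probability of class 1 from a trivial nearest-centroid model."""
    d0, d1 = abs(x - c0), abs(x - c1)
    return d0 / (d0 + d1 + 1e-9)  # closer to c1 -> higher probability

def active_learning(pool, budget=10):
    # Seed with one labeled example per class (the extremes).
    labeled = {min(pool): 0, max(pool): 1}
    for _ in range(budget):
        zeros = [x for x, y in labeled.items() if y == 0]
        ones = [x for x, y in labeled.items() if y == 1]
        c0, c1 = sum(zeros) / len(zeros), sum(ones) / len(ones)
        # Uncertainty sampling: ask the human about the single point
        # the model is least sure of (probability closest to 0.5).
        candidates = [x for x in pool if x not in labeled]
        query = min(candidates,
                    key=lambda x: abs(prob_class1(x, c0, c1) - 0.5))
        labeled[query] = true_label(query)  # one human label per round
    return labeled

labels = active_learning(pool)
print(len(labels))  # 12 human labels spent, instead of labeling all 100
```

The point of the pattern is the budget: the expert is consulted only on the examples where the model is most confused, which is why Dexter frames the med school's teaching effort as something that should be captured rather than lost.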
Nick: Hey, well, I like that. To me, as you're describing it, that's the next big thing you're working on: that emergence of using medical schools and making sure they are truly learning and teaching, the ability to bring community and academia together, along with the research, to hopefully help someone work through a diagnosis so they get healthier, or get better, or you can cure a disease, or at least identify it so you can treat it.
Nick: So with that, as we wind down, one of my last questions is this: if the audience could walk away with one or two things, what would you want them to walk away with after the time we've spent together? What do you want to make sure they put in the back of their brains from this conversation we just had?
Dexter: I think medicine, and data-driven medicine, in America is quite unique. America is one of the only countries in the world that advertises drugs directly to patients, telling them to go ask their doctor. Right? So obviously independence is critical in health care in this country, and it's sold to the population: independence to pick your doctor, independence to pick your drug, independence to pick your health plan, independence to do it everywhere. Wear a mask, don't wear a mask; vaccinated or unvaccinated. So why do we not have the independence to own our own data, to use our own data, to sell it, trade it, whatever you want? It's the most valuable data in the world, and it's being sequestered by large data aggregators and not being leveraged by patients. And the only people to blame for this are physicians. There are so many doctors, and few if any of them are really focused on how to leverage this new technology. Outside of period tracking, there's really no state of the art for health on the iPhone.
Nick: You know, you bring that up, and I like that as a last thought: that people should own their own data to help each other. I mean, like you, your father died from cancer. My wife is a cancer survivor. And I swear to you, more than once after I've talked with her, she's said, I've got a banker's box with the records upstairs, and I'm more than willing to share this if it can help someone else get through it.
Dexter: Well, that's the first problem. You've got them in boxes. And that's more of the same, that's more of the same game here: let's give you a CD to make it difficult for you to share it. Well, just send me that CD, we'll figure it out. You've got to start digitizing.
Nick: As we wind down, though, just to clear our minds, because this is heavy stuff, I'd like to see if we can lighten the mood a little bit. You're doing so many things, Dexter. What do you do to unwind?
Dexter: Well, what do I do? I just got back from Lowe's. We're trying to build a Christmas tree box for this 15-foot Christmas tree we have. I would like to say I build things, but I don't really. Christmas is coming, though, and I have to build this thing. I must say I'm glad I moved from California to Orlando, to this Lake Nona area, because we live in a pretty new and emerging community. We spend a lot of time at home, and a lot of time driving around here. We have a lot of friends here. Work is here. Everything is here. I don't think I drive five miles outside this community, to be honest. So I stay home.
Nick: So let me ask you this: staying home, or staying within the Lake Nona area, is there a favorite restaurant you'd choose to go to?
Dexter: So there are only like three or four restaurants here, period, and we go to all of them. I like to go to Chroma because it's golf cart distance, at the town center up there, and the brussels sprouts are the best thing in the world. Yes. So we do that frequently; the selection there is excellent.
Nick: Well, I've never heard anyone say you've got to try the brussels sprouts. Now I've got something to think about next time I visit Orlando. So let's do a speed round: while you're enjoying those brussels sprouts, what would you enjoy with them? Wine, beer, a cocktail? Or maybe just a soda?
Dexter: I think the requisite is alcohol. I really don't discriminate on which type. But if I had to pick, the cocktail we drink would be rum.
Nick: Well, you're from Trinidad, so the rum is probably a given. All right, Dexter, this has been wonderful. As we wind down, everyone, this is Dexter Hadley. I forgot to even mention, he's also the founder of Hadley Lab. You know, Dexter, next time I visit Orlando, I'm going to be pestering you to have some of those brussels sprouts, and I'm going to have you pick my rum and the kind of cocktail that rum should go in. And that would be just to have a conversation about not only the incredible work that you do, but also to see pictures of what you've been building from your time at Lowe's. So thank you for being here, Dexter.
Dexter: Yeah, thank you so much.
Nick: That's really what you're looking at. You said "wave," and I keep picturing you like that surfer who never comes ashore, who keeps going back out, rides the next wave, then goes back out and finds the next one. It's continuous learning. It's been an unbelievable journey to get to where you are today. Thank you for listening to In:Confidence with our guest, Dexter Hadley. Remember, In:Confidence is a podcast with a community of data leaders. We hope you found your time with us well spent.
Outro: Want to learn more? Our team of experts is ready to answer your questions and discuss how data privacy can fuel your business. Visit privitar.com. Thanks for listening to In:Confidence, brought to you by Privitar. To hear more insights and advice on how to effectively use, manage, and protect your data, subscribe to the show in your favorite podcast player. If you liked the show, leave us a rating. Join us for the next data conversation.