A.I. in Healthcare

S3 Ep2

Join us as we host nursing professors Jenny Alderden and Jason Blomquist to discuss the benefits and challenges of AI in healthcare.

Boise State College of Health Sciences

Check us out on Instagram @BoiseStateCOHS

Listen on Spotify

A.I. in Healthcare Episode Transcript

Derek Hiebert: Hey, everyone. Welcome to Bronco Health Talk, the official podcast of the College of Health Sciences, where we have conversations on all things healthcare and health sciences. We like to interview faculty from around the college and various departments, deans, some of our leadership, and other guests. I’m Derek Hiebert, the Marketing Promotions Manager for the College of Health Sciences, and my co-host here is Sam Butler. He’s in his third year here at Boise State.

Sam Butler: How’s it going?

Derek Hiebert: Thanks, Sam. Sam is a Health Studies major. Our guests today on the show, who we’re really excited about, are nursing professors Dr. Jenny Alderden and Dr. Jason Blomquist. Our topic today is AI in healthcare, and that could be a broad topic. But we’re going to really try to drill down on it here in this show.

 I’d like to just start out with a little bit of an illustration. There’s a movie that Disney came out with, probably almost a decade ago now, called Big Hero 6. It’s a little bit of a superhero movie. It’s a story about grief and the issues of revenge, friendship, and reconciliation, things like that. But in that movie, there’s a really unique character called Baymax, who is basically an AI robot who is considered a personal healthcare companion. This movie is futuristic, so it’s kind of a brilliant way to peer into the future a little bit, at what healthcare could look like with the use of AI.

Baymax is this animated robot that can hear, talk, listen, diagnose certain things, prescribe a treatment, those kinds of things. I think he can even check heart rate, maybe even take an X-ray. It’s a very high-level kind of AI, but I just wanted to start out with that, Jenny and Jason.

As we’re thinking about AI in healthcare, I just wonder about the advances. The advent of AI has really taken off, it feels like, in the last four or five years across multiple sectors of society. I work in marketing, and the biggest trend I see right now in digital marketing and search engines is AI. So I wanted to start with this: what do you guys think when you look at this character, this idea of Baymax, as a potentially positive use of AI in healthcare? What kinds of thoughts does that bring up?

Jenny Alderden: So, I think one of the important things is to differentiate what healthcare is from what nursing is. What you described, like this robot, can take a blood pressure. That’s super useful, super helpful. People need that kind of information.

The question is, if a patient has a question, can this robot interpret the blood pressure? Can it say, this is what you should do next? This is why it’s important to you. And importantly, this is why the blood pressure might be wrong. Maybe the value is spurious or the machine needs to be recalibrated. Maybe something else is going on.

AI is really exciting because it can help us detect patterns, identify things that are hard for our human brains, and interact in ways that are sort of human-like. But I think the big question is going to be, can AI interact in the way that a human can? Can it possess the judgment that a human possesses?

Jason Blomquist: And I’ll take that a little further. I think healthcare is a really interesting test case, and to your point, it might be a question we have across industries. This is a moment, not just of what AI can do, but of what we as humans and as a society choose to allow it to do, versus what we want to keep for ourselves as humans, right?

Like maybe Baymax is something that we could get to from a technology point of view, but as a patient, or as a caregiver, is that what you want? Or is there something about the human touch that we want to build in or preserve in that space as well?

Derek Hiebert: Right. Yeah, I think those are great observations, and it leads into this question. Let’s say you have a particular scenario with a patient, and the AI data is saying one thing, but the nurse’s human judgment is saying another. There’s some conflict there, some tension: the human is seeing one thing, and the AI is seeing another based on its data and everything that goes into that. Who ultimately makes the call? And more importantly, how do we prepare nurses for that? Because that’s what we’re about here: training and preparing nurses.

Jason Blomquist: Absolutely. And I think that’s a tension we’ve seen for a while, not just with the new generative AI. I spent a lot of time in my career working in the tele-ICU realm, which has been around since the early 2000s and 2010s. That is still a form of AI: we had a lot of machine learning algorithms looking at different aspects of the patient and producing outputs like, if we discharge this patient from the ICU right now, what’s the percent chance that they’d be readmitted, or that they might die? We had a lot of that conflict then, because the numbers would say one thing, and the nurses and physicians in front of the patient would say, no, that’s not really what I’m seeing, so I’m just going to disregard that.

Now, we have come a long way. We’re getting to the point where some of those pieces the AI is seeing are just built into our electronic health records and into the data flow, so it’s not quite as apparent that it’s something separate. I think you’re raising a good question. Do we get to a point where, if a nurse or a physician does not follow the AI’s lead, they’d be in trouble? Versus the other side: what if they do follow the AI’s lead, and it ends up being wrong? There’s a really interesting conflict we’re going to see, because there are things the AI can see, the patterns Jenny is talking about, that are beyond what an individual human could see.

But you’re right, how do we meld that? Where does that decision, that responsibility, and ultimately in healthcare, where does that liability lie as well, to be quite honest with you? 

Jenny Alderden: I think right now, when we talk about the patterns that an AI can see, AI is very exciting. We see it as kind of choosing our Netflix or choosing what you’re going to buy next from Amazon. But keep in mind that in healthcare, what the AI can see is only a sliver of what’s going on with the patient. So, the AI is going to have access to what we enter in the electronic health record. It’s going to have access to some of the physiologic monitoring we do. But there’s a huge swath of contextual factors that AI cannot see.

So, for example, I do work developing machine learning algorithms, AI, to determine whether or not a patient is at risk for a pressure injury, a bed sore. And one of the things that I’ve discovered is that even when we can make the AI algorithm pretty good, nurses will often understand contextual factors that aren’t yet built into the electronic health record. So, for example, if a patient falls down at home and they aren’t found for a little while, that puts them at really high risk for developing a pressure injury because that’s very hard on their skin. But the electronic health record doesn’t always know that that happened. A lot of times, the nurse does. And so the nurse might look at an algorithm that I developed, which I think is pretty good. 

The algorithm would say, this patient is at relatively low risk for a pressure injury. And the nurse would know, you know, wait a minute, my spidey sense is saying something’s off. That’s something that humans have that AI doesn’t always have. It’s that kind of like, we call it nursing judgment, nursing wisdom. Sometimes you bring in a nurse for a second set of eyes, especially an experienced one, and they can’t tell you exactly why they know something’s off, but they do. 

That’s really hard to put into data to help an AI be able to do that. And sometimes the nurse knows part of the story. They might know from the family, hey, this also happened. We don’t build the whole story into the EHR; that would take a hundred years. But the nurse might know something that the algorithm doesn’t. So I would always take the idea that AI is smart with a grain of salt, because AI only has the data it’s trained on. And right now, particularly in the healthcare setting, the EHR doesn’t really exist to tell a complete story. The EHR mostly exists for billing. So it is an imperfect way to tell a patient’s story and to make decisions about what people are at risk for or their treatments.
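
[Editor’s note: For readers who want to see the shape of what Dr. Alderden is describing, here is a minimal sketch of an EHR-only risk model. It is an illustration, not her actual algorithm: it assumes scikit-learn, and the features and numbers are hypothetical.]

```python
# Minimal sketch of an EHR-only pressure-injury risk model (illustrative,
# not the actual algorithm discussed above). Assumes scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is [age, Braden score, days immobile],
# the kind of structured fields an EHR holds. Label 1 = pressure injury.
X_train = np.array([
    [80, 12, 5], [45, 20, 0], [72, 14, 3],
    [30, 22, 0], [68, 11, 6], [55, 19, 1],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A new patient who looks low-risk on paper...
patient = np.array([[50, 21, 0]])
risk = model.predict_proba(patient)[0, 1]
print(f"Predicted pressure-injury risk: {risk:.0%}")

# ...but there is no input for "fell at home and lay on the floor for
# hours," so that contextual factor can never raise this score. Only
# the nurse who heard the story from the family knows it.
```

[The point is structural: whatever is not a column in the training data cannot influence the prediction, no matter how clinically important it is.]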

Derek Hiebert: It’s good. Yeah, that’s interesting to know. What does EHR stand for again? 

Jenny Alderden: The EHR is the electronic health record. So when we talk about AI in healthcare, especially in the inpatient setting, we’re usually talking about training data. The data that the AI uses to learn and then make its decisions comes from either the electronic health record, where we enter patients’ vital signs, notes about what they’re doing, their medications, and how they respond to them, or sometimes from physiologic monitoring, like your blood pressure cuff, continuous monitoring of your oxygenation, lab values, those kinds of things.

Sam Butler: Yeah. So, going back to what you were saying, Jenny, about how AI is used to assist: where do you draw the line at where it starts to replace, and how do you keep that from happening?

Jenny Alderden: So, one of the things that’s important to know about nursing is that we are licensed. So, nurses make decisions, and we have a license from the state and from the country that says this person is adequately prepared to make this decision. We’ve taken a licensing exam. We do continuing education.

AI algorithms are the wild west. They are not licensed. An AI algorithm can never make a decision about a patient’s care in a legal way, because the AI algorithm does not possess a license. This is true for medicine, nursing, dietitians, OT, PT, the whole interdisciplinary healthcare team that holds a license. And within that license, they use their clinical judgment. An AI has no license and no clinical judgment, the judgment a human possesses, but it can still be a great assist.

It can point things out to us. It can say, hey, have you considered this? Or, based on my training data, here’s what I think you should do. But ultimately, the human needs to take into account the greater context the human is aware of, within all of the preparation they’ve had for their license, and decide: I agree with this, I’m going to do this, and I’m making that decision. Or, I don’t agree with this, I’m not doing this, I’m making a different decision.

Sam Butler: So, what you’re saying is…the nurse can take the information, but they are then putting their own credibility behind saying that it is true.

Jenny Alderden: Absolutely. The AI is really not that much different than a reference book. You know, for example, we look up a medication before we give it. We understand the side effects, the risks, and the benefits, but then we make the decision. 

Jason Blomquist: And, I think that’s gonna get a lot more difficult to be quite honest with you. As we start–if you work with some of the newest AI models out there, you get to the point where even if you’re an expert in a particular area, what it’s creating is so…I don’t wanna say good or accurate because we don’t know for sure, but it’s hard to actually suss out whether it’s true or not, right? 

Ethan Mollick is a professor at the Wharton School and a really well-known AI expert across the country. He was writing about this in his blog recently. He’s been a big advocate of keeping a human in the loop from early on, and even he was saying we’re getting to the point with AI where he, as the human in the loop, can’t verify whether what it’s doing is correct or not.

And I had an interesting experience with that recently. Just experimentally, I gave GPT-5 Pro a dataset and said, hey, I need some help developing a data analysis plan here. It gave me a wonderful data analysis plan. I tweaked it a little bit, and then it ran it for me. But it’s at the point where I don’t have the statistical knowledge to know whether it did it appropriately or not. I even brought in some statistical experts, like Jenny and our endowed chair, and even they said, this is beyond what I could verify. So I think about that in any field.

And this gets to the idea of people using AI on their own for healthcare. I could have it create an incredibly detailed marketing plan, and it would look good enough to me, but I can’t tell you whether it’s accurate or not. So I get to the idea of trust. Are we getting to a point where we’re going to trust without actually understanding whether it’s accurate? That’s kind of a scary thought for me in healthcare, especially if it is designed to feel accurate even when it isn’t guaranteed to be.

Sam Butler: Do you think, you know, in the near future–like Boise State just came out with their own AI platform–some of the EHRs could come out with their own fact-based…

Jason Blomquist: So, you are seeing that right now. Some of the largest EHR vendors in the country are creating their own large language models based on the patient data they have. So, if you have been to a health system where you have interacted with Epic or Oracle Cerner, they’re using that data.

But I think that still goes back to the incredibly important idea of how a language model works, because that’s our current generative AI focus, right? They are not factual machines. They are very good at mimicking language and the probabilities of language in context. That’s very different from coming up with facts. So it will look, sound, and feel factual, but it’s going to be really difficult to verify. And we don’t fully know how they work, right? Even the people who design them don’t know exactly how they work, and we can’t get the model to tell us how it works. When it comes up and says, Sam, here’s your diagnosis and what’s going on, and you ask, well, how’d you get there? All it can really do is repeat, here’s your diagnosis and what’s going on, right?
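
[Editor’s note: Here is a toy illustration of Jason’s point that a language model predicts likely words rather than checking facts. It is plain Python with a hand-built bigram count, nothing like a real clinical model, and the miniature “corpus” is invented.]

```python
# Toy next-word model: counts which word follows "suggests" in a tiny
# invented corpus, then reports the probabilities. A real LLM is vastly
# larger, but the principle is the same: likelihood, not truth.
from collections import Counter

corpus = (
    "chest pain suggests cardiac workup . "
    "chest pain suggests anxiety . "
    "chest pain suggests cardiac workup ."
).split()

follows = Counter(
    corpus[i + 1] for i, word in enumerate(corpus[:-1]) if word == "suggests"
)
total = sum(follows.values())
for nxt, count in follows.most_common():
    print(f"P({nxt!r} | 'suggests') = {count}/{total}")

# The model would continue with "cardiac" purely because it appeared
# more often in training, not because it verified anything about the
# patient. Fluency and factuality are different properties.
```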

Derek Hiebert: I was thinking in terms of, let’s say, a physician dealing with a patient who has a rare disease or illness. And I wonder, and this is a question for you guys, whether before that physician even goes to their hard-copy research books and texts, whatever the research out there is, white papers, articles, their first stop might be a Google Gemini or ChatGPT search on the disease and these factors, to start getting an idea of what kind of research or data they’re going to need to look at to make a correct diagnosis and treatment plan. Would you say that’s the case?

Jason Blomquist: Well, I think there is a possibility of designing and training tools that would be more helpful in that situation, tools trained on some of that literature, and I think you’re right, that could be a new augmented tool that we might see. But I go back to my earlier point, especially where we’re at now with the technology: I would hope that physician doesn’t just cut and paste, if you will, but uses it as an augmenting piece of information rather than an end-all-be-all. It could have access to a lot more pattern-probability data that could help key in on something the provider didn’t catch, but only as part of a larger differential diagnosis process, which is how our doctors in this country are trained to think.

Derek Hiebert: Yeah, that’s good. Let’s jump into this question here. We’re thinking about rural and underserved areas. There are obviously a lot of those in the US, and a lot here in Idaho, and there’s the ongoing nursing shortage and how it affects those areas.

Could AI help address those shortages or would it create new disparities? What do you guys think? 

Jenny Alderden: That’s a good question, and AI is a broad concept. One of the things that you will see in Idaho and in other places, which I think is very exciting (and I’m sitting next to an expert, so I should really hand this one to Jason), is telehealth.

It is possible now for humans to not be in the room and still interact with patients in a very high-level way, to assess them and take care of them. When I think about disparities in AI, though, I think it goes back to that training data. The way the AI thinks is based on the data it has access to, its training data. And the problem with underserved communities, rural areas, people without insurance, and people who have less access to the healthcare system is that they are also less represented in that training data.

The AI makes decisions based on the corpus, the body of data, that it has. So if some people, or some types of people, are underrepresented in that data, it might not make decisions that are good for those kinds of people.
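
[Editor’s note: To make the underrepresentation problem concrete, here is a small sketch with synthetic data, assuming scikit-learn. Nothing here is real patient data; the two “groups” are invented so that the scarce group follows a different pattern than the dominant one.]

```python
# Sketch: a model trained where group B is scarce scores well on the
# dominant group A but worse on B, whose pattern it barely saw.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One feature; the outcome's relationship to it differs by group.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + shift > 0).astype(int)
    return x, y

xa, ya = make_group(1000, shift=0.0)   # well-represented group A
xb, yb = make_group(20, shift=1.5)     # underrepresented group B
X = np.vstack([xa, xb])
y = np.concatenate([ya, yb])

model = LogisticRegression(max_iter=1000).fit(X, y)

xa_test, ya_test = make_group(500, shift=0.0)
xb_test, yb_test = make_group(500, shift=1.5)
print("Accuracy, group A:", model.score(xa_test, ya_test))
print("Accuracy, group B:", model.score(xb_test, yb_test))
# The model mostly learned group A's pattern, so group B pays the
# price, even though overall accuracy would look fine.
```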

Jason Blomquist: I thought about this a lot as we were preparing for this conversation. I think my honest answer is I don’t know.

There are so many factors that play into this. Are we thinking about AI augmenting existing healthcare in those rural areas, or even urban areas, or are we thinking about it replacing that care? Are we thinking about an AI bot that’s going to tap you on the shoulder every day and remind you to weigh yourself, exercise, and eat healthy? It could be any of those.

So, I don’t know. I think a bigger question right now, since all of this is so new, is what do our different communities, whether that’s a rural community or another patient population, understand about it? What do they want? That’s the place to start, rather than saying, here’s this great thing. Let’s figure out what they want and what they need.

Derek Hiebert: Yeah, that’s good. That’s good. 

Sam Butler: Yeah, do you think nursing will look the same in 10 to 15 years, given what you just mentioned? I know, Jenny, you mentioned that we can’t license an AI. But as it evolves and gets more prevalent, you might even have patients coming in saying, hey, I diagnosed myself with XYZ through talking to ChatGPT, does this sound plausible? And then the nurse kind of has something to go off of, even.

Jenny Alderden: I think it’s going to be really important, from a nursing education standpoint, not to offload our skills. The thing about technology in general is that sometimes technology is wrong, but technology is also vulnerable. It’s entirely possible that a foreign adversary or something could crash our systems. We need nurses who can interact with patients, take care of them, and make decisions about them even if there is nothing augmenting their decision-making. So that’s number one.

Number two, that’s an interesting one. When patients come in with what they’ve found, we already see this with Googling; you know, AI is just Googling 2.0. Patients often do have really insightful and useful information about what’s going on with them, but they don’t always have the best information.

One problem with the current iterations of most chatbot-style AI models, like Claude or ChatGPT, is that they are very easily led. If I’m a patient and I think something’s going on, I can get the AI to absolutely agree with me, because ultimately it wants to make me happy. So one of the things we’ll have to work on with nursing students, medical students, and nurse practitioners is helping people do really good interviewing, eliciting the information we need to find out what’s really going on, because AI can guide us, but sometimes it guides us in the wrong direction.

Jason Blomquist: Yeah, I think nursing will look very different, but all jobs could potentially look very different if even a proportion of the hype out there comes true. Interestingly, you are also starting to see movements to protect certain titles. Oregon just recently passed a law saying the title nurse cannot be applied to a machine learning or AI algorithm; a nurse has to be a human being who has gone through those pieces.

But going back to the work: Robert Wright recently wrote a really interesting blog post where he categorizes work into three categories. He says work is making, work is thinking, and work is caring. His argument is that as AI gets better and better, it is potentially starting to do a lot of that thinking work, and with robotics, a lot of that making work. So what’s left for humans could be the caring work. In that regard, I think nursing is set up well; we’re one of the few professions that already has theories and a science of caring built into our literature. How do we think about that moving forward?

So, I think it comes back, again, to what we want it to look like as a society. There are a lot of things I do as a nurse now that I would love to get rid of. We built this information and digital ecosystem that spends a lot of my time just feeding the computer. We joke that we’re not doing nursing care, we’re doing Epic care, because it says do this at this time, and I want my green check instead of my red X. So, you know, what would it look like to get back to that science of caring?

Derek Hiebert: Yeah, I think that’s really good. It seems like in this conversation I keep seeing this theme: the juxtaposition between a human who can provide care and an AI or machine that can provide care. I think we’ve got time for one more question, and I want to tackle that. Let me introduce it with another analogy.

If you guys would indulge me a little bit, I’m going to geek out here. There’s a sci-fi book by the very prolific author Philip K. Dick. He wrote the book that became the Blade Runner movies. It’s called Do Androids Dream of Electric Sheep? It’s kind of a funny title.

Basically, the whole premise of the book is that it’s set in the future, and there are these androids, basically AI humans. They look exactly like humans; almost everything about their behavior is human, except they’re not fully human. They might be like half-human, half-robot or whatever, but they’re not fully human.

Well, a particular set of these androids has become self-aware. They’ve rebelled against their human owners and started killing humans. So, in this future, there are agents called Blade Runners who go and find these rebel androids to basically retire them, you know, so they don’t keep killing people. But because the androids look so human, the agents have to be able to tell: is this an android or an actual human? They don’t want to retire an actual human. So they have a test they give the androids to see whether they have empathy or not.

Because a real human, an actual 100% human, is the only one who can actually have empathy. These androids don’t have that; they operate only by data and logic. So I want to ask the question, using empathy as maybe part of the response: what sets a human nurse apart from any kind of AI, now and maybe in the future, in ways that would continue to be beneficial and advantageous, and provide the kind of care that other humans need and that AI cannot? What would you guys say?

Jason Blomquist: So, I love that analogy, and I would almost flip it around and make the test this: ask a patient, after I have taken care of them for the day versus after they have interacted with their chatbot, do they feel cared for? And there could be lots of different answers, right? Maybe I had a bad day and I was grumpy, and maybe they still got what they needed. But at the end of the day, what makes you feel cared for as that patient?

Derek Hiebert: That’s good. What would you say, Jenny? 

Jenny Alderden: I agree with that. I think one of the things about an AI that is almost antithetical to nursing practice is that with an AI, your information is never private or sacred. As a nurse, when I take care of a patient, it’s part of the nursing oath we say at convocation: we promise that your information starts and ends with me. I will never take that information and expose it in any way.

AIs are constantly training on the data they take in, and that data exists somewhere. There are safeguards, but I think one of the things that will give humans pause is trusting an AI with the really sensitive data that nurses work with: how you are feeling, some of the hard trauma you’ve had in life, those kinds of things. Imagine knowing that you’re giving that data to an electronic system, and you don’t know where it’s going or what’s being trained on it.

Then the second piece is that nurses, and healthcare teams, have to make decisions about limited resources. We have to decide, you know, who do we round on next? We have one specialty bed and a bunch of people whose skin is at risk; who gets the specialty bed? Making decisions with limited resources takes ethical consideration. We have to think a lot about justice, fairness, beneficence, and non-maleficence, these ethical principles.

To your point about empathy, AI doesn’t operate that way. It does not possess ethics the way a person does. 

Derek Hiebert: Yeah. It’s really thought-provoking. I was thinking of Jason’s point about flipping the test around, because in the movie Big Hero 6, Baymax always asks that question: are you satisfied with your care? I wonder if that becomes a bit of a norm for nurses and physicians, asking, are you satisfied with your care, versus everything that you–go ahead.

Jenny Alderden: I just had one more thought. One of the things that I forgot to mention earlier, but wish I had, is this idea of presence.

Sometimes what nurses do, oftentimes what we do, what I do as a critical care nurse, is not any intervention or imparting of wisdom or anything, but it’s your presence. It’s that concept of just being with someone in their suffering, being with them when they’re born, being with them when they die. I don’t know that a machine could ever be with someone. 

Sam Butler: Unless you’re a big marshmallow guy.

Jenny Alderden: Unless that.

Derek Hiebert: But that presence, though. I mean, I don’t know a ton about Florence Nightingale, but from the things that I’ve read, and I’ve seen one kind of dramatic presentation, a movie, about her, it seems like that was a big part of it, especially for the wartime soldiers she was caring for: just being present with them versus them being alone. It was really huge.

Jenny Alderden: I was a nurse in the Iraq war, and that’s really where I got this idea of presence. People would say, oh, I’m just glad you’re here. You’re like a mom. And so, I was 28 at the time, not really a mom, but it’s that idea.

Yes, there’s something very comforting about a human presence, knowing that somebody is with you. 

Jason Blomquist: I think this is really a piece we’re going to see as we move forward. All of us in our generation currently alive have had some sort of technology really built in, and we’ve seen it growing.

And as you dig deep and start reading about AI, there are both sides, right? This is gonna solve all the problems. This is gonna be the dystopia that puts us all down. I think it’s allowing us to ask some of these really deep questions about what it really means to be human. And I think nursing is a great spot to start asking that question, but I think it’s causing all of us to ask that question. What does it really mean to be human? 

Derek Hiebert: Yeah, it’s so good. Well, thank you for this conversation today, my friends. It’s such an important conversation. I kind of wonder if next year, or even a couple of years from now, we’ll re-up this conversation, because there will be more advances in AI and more issues to tackle with it. So, thank you so much for being with us. Sam and I thank you.

Sam Butler: Yeah, thanks for joining us today. 

Jason Blomquist: Thank you. 

Jenny Alderden: Thank you. 

Derek Hiebert: Absolutely, so alright, everyone. Thanks for listening, and we’ll see you next time.