
AI in Healthcare: Opportunities, Challenges, and the Basics


It’s no surprise: Artificial Intelligence is changing the way society goes to work. But what does this mean for health sciences? Join School of Nursing associate professor Jenny Alderden as she covers the basics of AI in healthcare (Is it used like ChatGPT?) and how nurses can engage with this technological “co-pilot”. With examples from her own research, Alderden highlights some exciting possibilities – as well as areas of caution – for using AI. Finally, learn how Alderden first discovered the power of healthcare data while deployed with the U.S. Navy.

Listen on Spotify

AI in Healthcare Episode Transcript

James Sherpa: Coming up on BroncoTales. 

Jenny Alderden: AI is being used at all of the health centers in the Valley, in the electronic health record. And so I appreciate the extra set of eyes aspect of AI. I think it’s very useful and helpful. 

Katherine Sheets: Alright, hi everybody, I’m Katherine Sheets with the School of Nursing, and this is BroncoTales with Boise State College of Health Sciences. Today, I’m talking with Dr. Jenny Alderden, an associate professor in the School of Nursing. 

Jenny Alderden: Hi Katherine. Thanks for having me. 

Katherine Sheets: Thanks for joining me. So can you tell us a little bit about your background with AI and nursing? 

Jenny Alderden: Absolutely. So I studied pressure injuries, also known as bed sores. And one of the big questions that I had about pressure injuries is, how can we tell which patient is going to develop a pressure injury? The reason that’s important is because we can put in, hopefully, preventive interventions before the patient gets a pressure injury. And, in trying to answer that clinical question, I discovered one of the best ways to predict a clinical event is using artificial intelligence. 

Katherine Sheets: So how did you end up taking artificial intelligence and figuring that out? 

Jenny Alderden: So artificial intelligence is really where a computer learns from data. And critical care patients, who are the patients that I study, usually have large amounts of data associated with them because we have the electronic health records.

So in the electronic health record, a patient will have hundreds of thousands or even millions of data points that show the clinical course: what happened to them, hopefully from even before they were admitted to the hospital, while they're in the hospital, and then when they develop the pressure injury. And so what artificial intelligence can do quite well is take those data points and learn from them.

We'll show it a data set that includes a bunch of patients. Some of those patients will have pressure injuries, some won't have pressure injuries, and we'll feed the data set full of EHR data to a machine learning algorithm, which is our so-called artificial intelligence.

The algorithm will teach itself by kind of iteratively guessing and learning and guessing and learning. It will teach itself which patients are going to develop a pressure injury. That’s called supervised machine learning or supervised artificial intelligence because what it does is it looks at that outcome of developing a pressure injury, yes or no, and it creates a model based on that outcome. 

So what I do is use EHR data for predictive modeling to develop models to predict whether or not a patient will get a pressure injury. But artificial intelligence, broadly, where a computer learns from data, is used in all kinds of different ways in healthcare now. 
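For readers who want a concrete picture, the supervised setup described above, where an algorithm iteratively guesses against known outcomes and corrects itself, can be sketched in a few lines of Python. The patients, features, and tiny logistic-regression model below are invented for illustration and are far simpler than the EHR-based models Alderden actually builds.

```python
import math

# Each patient is a feature vector plus a known label
# (1 = developed a pressure injury, 0 = did not).
# Features: [mobility score (low = immobile), days in ICU]. All invented.
patients = [
    ([1.0, 9.0], 1), ([2.0, 7.0], 1), ([1.0, 6.0], 1),
    ([8.0, 2.0], 0), ([9.0, 1.0], 0), ([7.0, 3.0], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a tiny logistic-regression model by gradient descent:
# the algorithm "iteratively guesses and learns" against the labels.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(2000):
    for x, y in patients:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y                      # guess vs. known outcome
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def predict_risk(x):
    """Estimated probability of a pressure injury for a new patient."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(predict_risk([1.5, 8.0]))  # immobile, long stay: high probability
print(predict_risk([8.5, 1.5]))  # mobile, short stay: low probability
```

"Supervised" here just means the known outcome labels steer the training; the model's only job is to reproduce that outcome on patients it hasn't seen.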

Katherine Sheets: So how is AI in healthcare different than AI, say, like in ChatGPT? 

Jenny Alderden: So AI in healthcare is actually not a lot different than those large language models like ChatGPT in the sense that in all AI, the computer is learning from the data. However, in healthcare, we usually have a very specific reason for the AI. So we do predictive modeling. For example, we might use a patient's EHR data and a predictive model to try to determine their risk for a heart attack, or their risk for a pressure injury.

Image recognition is also commonly used in hospitals. If a patient has a chest X-ray or an EKG, or some kind of visual scan results, a deep neural net, a type of artificial intelligence, can be used to assist with diagnosis. To say, oh, I think this EKG, this patient has X problem, or I see a tumor on this chest X-ray. Artificial intelligence does that really well for those very kind of circumscribed tasks. And it’s already being used in healthcare for those tasks.

Large language models like ChatGPT work a little bit differently. What they do is they use billions upon billions of data points to try to predict the next word. They're a type of natural language processing. So instead of being unleashed on a very specific task, like finding a tumor or identifying which patient is at risk for a bad event, what those large language models do is try to find the right next word in context, and by doing that, they can create human-like text.

It's entirely possible that those kinds of large language models will be used in healthcare. Places they could be used, and in some contexts may already be being used, include things like patient education or patient portals. Sometimes they're being used already to communicate with patients, when a patient reaches out with a question. And obviously, we have to be very cautious about these things. We'll talk more about that later. But ChatGPT is another type of AI, and it's absolutely either being used now or will be in the future.

Katherine Sheets: That’s really interesting. I feel like there are a lot of concerns that people have about AI, like ChatGPT.

In school settings, there’s kind of a concern about plagiarism and original work, that sort of thing. So are there concerns or drawbacks that in healthcare you kind of have to be aware of? 

Jenny Alderden: Of course there are. So the thing about large language models like ChatGPT is that, unlike humans, unlike you and me, they do not possess any kind of clinical judgment. So you cannot rely on them to be a decision maker.

They can be an adjunct. They can be a helper. They can generate something useful that a nurse or another healthcare worker could use as a sort of template, as a jumping-off point. But it’s dangerous, in my opinion, to rely solely on artificial intelligence because artificial intelligence does not possess clinical judgment the way a human would. 

Katherine Sheets: So when you’re saying clinical judgment, you’re meaning like the decision-making part of what you do. 

Jenny Alderden: Precisely and specifically, clinical judgment includes a component of being able to explain your decision-making. So if I said, hey, I think this patient needs XYZ test or XYZ treatment, I would have a rationale for that.

Whereas artificial intelligence does not come with a rationale. They're black boxes, almost all of them. And so the idea is that it'll give you an answer. And the answer is often a good answer, but if it can't show how it got to its answer, it's not safe to use, because AI can and does make mistakes. Sometimes, AI will be unleashed on a patient population that it hasn't seen very many times.

So, for example, the way these AI models are developed in healthcare is by using patients’ data. Now, imagine there are some patients who are not well represented in those data sets. People from very rural areas, sometimes minority patients, or people with very unusual health conditions. There aren’t very many of them in those training data sets that are used to create the AI. Therefore, it could be dangerous to use that model on that patient. It might not be a good fit for them. It might give them a bad answer, an answer that doesn’t work for a specific patient.

And that’s why the human is so important because humans understand those contextual factors. Humans understand what might be missing in the training data, and computers and algorithms can’t think; they don’t know. 

Katherine Sheets: That sounds really interesting in the sense that the computer doesn't have the why behind the, quote-unquote, decisions that it's making. So, how can nurse scientists, or any healthcare scientists, combat that possible training data bias? Or is it just going to take more sampling to try to get more data?

Jenny Alderden: I’m really glad you asked that. So that’s a piece of it. We want to make sure that everyone is represented.

There are some ways if you have a type of patient who is underrepresented, for example, the oldest old. 100-year-olds are underrepresented in data just because there aren’t that many of them. Most people die before they turn 100. So there are statistical techniques that can be done. You can do what’s called synthetic minority oversampling, which is where you make some synthetic cases of 100-year-olds that kind of look like the other 100-year-olds, so that they’re better represented in the data. 

There are other techniques as well, but none of those things are perfect. So it really comes down to very judicious use. You should never rely on an AI algorithm solely to tell you this is the right decision or this is the wrong decision, but it can be more of a useful adjunct. It can be, oh, hey, the algorithm thinks we should do this and it’s something to consider. 
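The synthetic minority oversampling Alderden mentions (often called SMOTE) can be sketched like this. Real SMOTE interpolates toward each case's nearest minority-class neighbors; this simplified version just interpolates between random pairs, and all values are invented for illustration.

```python
import random

random.seed(0)  # deterministic for the example

# Feature vectors for the underrepresented group, e.g. centenarians:
# [age, systolic blood pressure]. Invented values.
minority = [[100, 128], [101, 122], [103, 131], [100, 119]]

def make_synthetic(samples, n_new):
    """Create synthetic cases by interpolating between two real cases."""
    synthetic = []
    for _ in range(n_new):
        a, b = random.sample(samples, 2)
        frac = random.random()  # how far along the line a -> b
        synthetic.append([
            a[i] + frac * (b[i] - a[i]) for i in range(len(a))
        ])
    return synthetic

new_cases = make_synthetic(minority, 4)
# Every synthetic case lies between real ones, so it "looks like" the
# other 100-year-olds without exactly duplicating any of them.
for case in new_cases:
    print([round(v, 1) for v in case])
```

The synthetic cases are then added to the training set so the minority group carries more weight, which is exactly the balancing act described above, and exactly why Alderden stresses that none of these fixes are perfect.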

Katherine Sheets: You used the phrase supervised AI earlier. So is that kind of like having the humans involved to check its work, if you will?

Jenny Alderden: In a sense. So, supervised, the way I used it (thank you for helping me clarify), means that we're giving it a certain task. So we're supervising it instead of just having it look for patterns. Generally we're saying, we want you to look for who gets a pressure injury and who doesn't. That's your job.

We're supervising you for that job. But more broadly, we should absolutely supervise AI. So one way that we can do that, and it's not perfect, is that to an extent we can make algorithms explainable. We can look at the algorithm, and we can see how it's making some of its decisions. And I'll give you an example of explainability from my own work where I made an error creating an algorithm.

So I was trying to create an algorithm to predict pressure injuries. And when I made it explainable and started looking at individual cases, I noticed something odd about some very high-risk patients: people who were extremely sick in the ICU, people who, for example, had very low blood pressure, very high heart rates, very low oxygenation. These are the kind of patients who are at extremely high risk for pressure injuries and just about everything else that's bad. My algorithm thought those patients were low risk.

And when I was trying to understand why, I realized it's because a lot of those patients died. So the algorithm isn't smart enough to know, oh, they died. The algorithm only sees that they didn't get a pressure injury. So it's essential for humans to look at those contextual factors and then incorporate them into the algorithm. Because without humans overseeing that, all the algorithm knows is that's a low-risk patient who's probably not going to get a pressure injury, without considering that it's just because of the competing risk of death. There are all kinds of examples like that where AI can come to very wrong conclusions because of contextual factors. That's where nurses and other people working with AI have to be really cautious.
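The pitfall in this anecdote, the competing risk of death, can be made concrete with a toy calculation. The records below are invented; they only illustrate how a naive label ("no documented injury" counted as "no injury") can make the sickest patients look safest.

```python
# Each record: (severity score, died early, developed pressure injury).
# Patients who die early never get the chance to develop an injury,
# so a naive label marks them "no injury". All records are invented.
patients = [
    (9, True,  False), (10, True, False), (9, True, False),
    (8, False, True),  (7, False, True),
    (2, False, False), (3, False, False), (1, False, False),
]

def injury_rate(records):
    return sum(1 for *_, injured in records if injured) / len(records)

sick = [p for p in patients if p[0] >= 7]   # the highest-risk group
naive_rate = injury_rate(sick)              # deaths counted as "no injury"
survivors = [p for p in sick if not p[1]]   # account for competing risk
adjusted_rate = injury_rate(survivors)

print(f"naive injury rate among sickest: {naive_rate:.0%}")
print(f"rate among sickest who survived: {adjusted_rate:.0%}")
```

In this toy data the naive rate is 40% while every sick patient who actually survived developed an injury, which is the inversion Alderden's explainability check uncovered.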

Katherine Sheets: That is fascinating. So, yeah, thinking about the other side of AI then, the positive things, what aspects should we be engaging with and embracing more?

Jenny Alderden: So, I am a big proponent of AI because I think that more information is better and because AI thinks differently than humans do. As humans, we obviously have a limited cognitive capacity. And the thing that I find most difficult as a nurse is the idea that we have so many different things we have to be paying attention to all at once.

So when you are taking care of a patient, you are monitoring their vital signs. You are comforting them. You are interacting with their family. And you are probably fielding texts and phone calls and getting medications ready. You are doing all of these things one after the other after the other. And so nurses do a lot of task switching.

One of the things that AI can do is be an extra set of eyes and ears on the patients. So already in hospitals today, there are AI algorithms that look for clinical deterioration. These AI algorithms can find extremely sensitive early signs that something is wrong, maybe before the nurse notices, because we are doing so many other things. So, for example, some of these algorithms will notice slight changes where the heart rate becomes a little faster, the urine output drops a little bit, and it will send a cue to the nurse and say, hey, check on Mrs. Allen. Something might be wrong. Those algorithms are extremely helpful because they are sort of an extra set of eyes and ears for the nurse. They are an assistant. 

That’s really how I think of artificial intelligence. I think of it as a useful assistant in clinical care. 
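A deliberately oversimplified sketch of that "extra set of eyes" idea: a rule that watches recent vitals and cues the nurse on early drift. The thresholds, readings, and patient name are invented, and real deterioration algorithms learn far subtler patterns from data rather than using fixed rules like these.

```python
# Toy early-warning check: cue the nurse when vitals drift.
# Thresholds and readings are invented for illustration only.
def check_patient(name, heart_rates, urine_ml_per_hr):
    """Return an alert string if recent trends look concerning, else None."""
    alerts = []
    # Heart rate creeping upward across the recent readings
    if heart_rates[-1] - heart_rates[0] >= 15:
        alerts.append("heart rate trending up")
    # Urine output dropping below a low-output cutoff
    if urine_ml_per_hr[-1] < 30:
        alerts.append("urine output low")
    if alerts:
        return f"Check on {name}: " + ", ".join(alerts)
    return None

msg = check_patient("Mrs. Allen", [78, 84, 95], [55, 40, 25])
print(msg)  # both trends trip the rule, so the nurse gets a cue
```

The point of the sketch is the division of labor: the algorithm never stops watching the numbers, and the nurse, who is task switching, decides what the cue actually means.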

Katherine Sheets: That sounds very helpful. 

Jenny Alderden: Yeah, another example is that AI is being used at all of the health centers in the Valley in the electronic health record. Algorithms will notice and alert a provider or the nurse if two medications are being given that are incompatible, or if a medication is being given that a patient previously had an allergic reaction to, or that might be contraindicated in that patient because of some condition that they have. Those kinds of tasks by AI can keep patients safer. They also help because our human brains just can't do all of these things at once.

Katherine Sheets: Yeah, we're going to get fatigued and overwhelmed, and the machine will not, if it has that one task to do.

Jenny Alderden: Exactly!

Katherine Sheets: That’s fascinating. So, can you talk a little bit about the nurse’s role in the AI development and implementation? Because you kind of talked about how it’s been used already, but I mean, it had to get here somehow, and it’s definitely growing in the future. So how can nurses be a part of that? 

Jenny Alderden: Absolutely. So this is something I’m passionate about. We need nurses to be at the table when algorithms are being developed because nurses understand the contextual factors. They’re the ones that produce the data, that put the data into the computer, so they understand what the data means and also what the data doesn’t mean.

So I’ll give you an example. When teams are producing these algorithms, there will often be a lot of missing data, a lot of values that just aren’t there. They have to decide what to do about it. So, for example, in the ICU, if we have a really sick patient, we draw a blood gas. A blood gas is a pretty invasive procedure because we take blood right out of your radial artery, so it’s uncomfortable, it’s painful. It takes a little bit of specialized training to do, but it gives us this great data about how your breathing is doing. So blood gas values are a part of many different machine learning algorithms about mortality, early deterioration, pressure injuries, and all kinds of things. The problem is, not every ICU patient has a blood gas value drawn because they don’t all need it. 

So some machine learning algorithms will treat blood gas values that are missing as missing data. They’ll impute the missing data, they’ll make a guess about the missing data. But the reality is the fact that the blood gas wasn’t there, a nurse would tell you, well, that is data. Knowing that the patient didn’t need that drawn is a very useful data point because it tells you they probably weren’t having major respiratory problems. 

Nurses are the ones, as the producers of data, who are going to understand all these kinds of contextual factors that are really necessary to build a good artificial intelligence algorithm. So I believe that a practicing nurse should be on the team, on the study team, with every AI algorithm that’s being developed. Because we have a unique perspective, and we usually understand contextual factors that other people might not know. 
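The blood-gas point, that missingness is itself information, corresponds to a standard modeling trick: impute the missing value but also add an indicator feature recording that the value was never measured. The values below are invented for illustration.

```python
# Toy EHR extract: PaO2 from a blood gas, or None if never drawn.
# A missing blood gas often means the patient didn't need one,
# which is clinically meaningful in itself. Invented values.
raw = [
    {"pao2": 62},    # drawn: patient had breathing trouble
    {"pao2": None},  # never drawn: likely no major respiratory problem
    {"pao2": 95},
    {"pao2": None},
]

measured = [r["pao2"] for r in raw if r["pao2"] is not None]
mean_pao2 = sum(measured) / len(measured)

features = []
for r in raw:
    missing = r["pao2"] is None
    features.append({
        # Simple mean imputation fills the numeric gap...
        "pao2": mean_pao2 if missing else r["pao2"],
        # ...while the indicator preserves the fact it was never drawn.
        "pao2_missing": int(missing),
    })

print(features)
```

With the indicator in place, a model can learn that "never drawn" patients behave differently, which is exactly the contextual knowledge a practicing nurse brings to the study team.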

Katherine Sheets: So how would nurses be on these teams? 

Jenny Alderden: So one of the things that I think we're getting better at, because there's money involved in this and people want their algorithms to work well, is recognizing that the algorithms that work best really do usually have expert clinicians on the team. So just take people up on it. Almost every hospital now has a data science team. And if you have the time, the inclination, and you want to do something kind of interesting, visit them. Mention that you're available and that this is something that you're interested in. Oftentimes, it can be worked into a nurse's clinical role that they'll have X amount of time with the data science team.

Katherine Sheets: Oh, that’s super interesting. So how can students now in their education kind of get prepped? Like, what are things they can be doing now to be either engaging with AI or learning more tools before they get employed? 

Jenny Alderden: So students now are, wow, they are entering the workforce at a fascinating time, right after COVID. Lots and lots of things have changed in healthcare, and the advent of large language models like ChatGPT is going to be a massive change. A massive change in healthcare, a massive change in the way that we work. Students are really emerging sort of just at the precipice of AI. And so I think one of the things that I would encourage students to do is really work to understand both the advantages and the limitations of AI.

Often, these AI algorithms are developed by companies that have a lot of money, and so they’re packaged in a very slick way. It’s essential for nurses to always remember that no algorithm can replace them and to really fight for that. The truth is, until an algorithm can explain with the clinical judgment of a nurse exactly why it made a decision, that’s really not good enough. We can’t make clinical care decisions based on something that we don’t understand the decision-making of. 

So one of the things that I think nursing students can do is really educate themselves on what a black box algorithm means and stand their ground. Make sure that they do not allow themselves to be sort of encroached on by these algorithms. 

And then on the flip side, learning how to use them as tools. As an educator, I'm actually a proponent of ChatGPT. I think it's a useful tool, and so is learning how to write prompts for ChatGPT, learning how to interact with these large language models, learning their plugins, and learning some of their strengths and limitations.

I think that that's good training and good practice, because the truth is we are in an AI-assisted future. So we might as well learn how to work with these tools, and often they can lighten your workload. One small caveat about ChatGPT in particular, but probably most large language models, is that we should never put patient data in, because it's not secure. When you enter data, it actually enters the training data of ChatGPT. It still exists. It's not a safe place to put anything private. So always keep that in mind.

Katherine Sheets: That’s important. Yeah. Can you talk a little bit about how you even got interested in studying patient data and AI in general? How did you go from critical care into this kind of work? 

Jenny Alderden: Yeah, absolutely. So, I actually had an experience, oh gosh, about 15 years ago, where I saw how studying patient data, not in an AI context, but just generally, can save lives. 

So I was working as a helicopter nurse in Al Anbar province in Iraq in 2006. We had a number of young Marines who were killed by a gunshot wound to the femoral artery. And we would fill out a little report and send the data somewhere: this was the patient, this is what happened.

And then one day I got a call from somebody, and she said, you know, we're seeing that injury in your region. There are a lot of gunshot wounds to the femoral artery. I didn't know what to make of it, but I went to a contextual expert. I went to our gunnery sergeants, and I said, why would this be, what do you think? Because it looks like a pattern. And he suggested that when Marines are with their squad leader, they'll often take a knee. So they'll sit on one knee, and they'll gather around their squad leader, and the problem with that position is it opens that femoral area right up. An order went out, don't assume that position, and we didn't see that particular wound anymore.

So I really learned, and that's an extreme example, that patterns in data can be lifesaving. You know, individual data points are just a story, they're just an anecdote. However, when we start to see a pattern, it often reveals something actionable that nurses can use to significantly improve the care of our patients.

Katherine Sheets: Wow. That is so cool. I love the connection that data can save lives. I was thinking about that when you were talking about how the AI didn't recognize certain patients as at risk for the pressure injuries, just because they had a different risk going on at the same time. It's really interesting that that's driven your path with data, and now your embrace of AI in education.

Jenny Alderden: Yeah, and I think, too, one of the things I like about AI is I'm aware of my own human failings. I've made medication errors, I've missed things with patients in my 20-year career as a critical care nurse, and so I appreciate the extra set of eyes aspect of AI. I think it's very useful and helpful to have an algorithm in the background also keeping an eye on things. There's really no downside to that.

Katherine Sheets: Yeah, yeah, that's great to have that other set of eyes. Like I mentioned, we're kind of checking AI's work, but it's almost like AI also has your back.

Jenny Alderden: Yep, it’s a partnership. And the way I consider it is, I think of AI as a copilot for nurses. And so we are the pilot, we’re in charge, we decide where the plane goes, but AI can be a very useful copilot. 

Katherine Sheets: That’s fascinating. Well, thank you so much for talking about nursing and AI with me today. 

Jenny Alderden: It was my pleasure. Thank you for having me.

James Sherpa: Thank you for listening to the BroncoTales podcast. In next month's episode, our new hosts will explore the dynamic interplay between neurology and kinesiology. Enjoy this preview, and we'll see you there.

Bob Wood: We’re able to understand human physiology and the impact of treatment strategies. Also, you know, unfortunately, the impact of not being physically active. I think at the undergraduate level, you want to try to, you know, just create a story that is both unique, meaningful to you, and will resonate with others whom you wish to help.