Melanie Cole (Host): Artificial intelligence is reshaping the future of eye care, from early disease detection to personalized treatment plans. Welcome to Better Edge, a Northwestern Medicine podcast for physicians. I'm Melanie Cole, and we have a panel for you today with three leading experts from Northwestern Medicine to explore the integration of AI into ophthalmology practice and research. Joining me are Dr. Angelo Tanna, Vice Chair of Ophthalmology, Director of the Glaucoma Service and Professor of Ophthalmology at Northwestern Medicine, who will be moderating today's discussion; Dr. Paul Bryar, Vice Chair of Clinical Operations and Professor of Ophthalmology and of Pathology at Northwestern Medicine; and Dr. Rukhsana Mirza, Vice Chair of Faculty Affairs, the Ryan Terri Professor of Ophthalmology, and Professor of Medical Education at Northwestern Medicine. Dr. Tanna, I turn it over to you.

Guest 1: Thank you, Melanie. Let's start with you, Paul. In our field, we have a limited number of practical, FDA-approved applications of artificial intelligence that we can use in patient care right now. Tell us what we're doing at Northwestern in that area.

Guest 2: You're right that, in most circumstances, anything we use clinically has to be an FDA-approved device, so to speak, if we're going to apply artificial intelligence to it. The one application we have available at Northwestern for clinical use right now is screening for diabetic retinopathy. The way that's done is by taking a photograph of the retina. Typically it would be interpreted by one of us to see whether diabetic retinopathy is present or not, and then we would return a grading result.
Now we have a device where the AI interprets the images at what we call the point of care and gives an immediate result to the patient right there. If that's done in the primary care office and there is diabetic retinopathy or diabetic eye disease, the patient can make an ophthalmology appointment before they leave, rather than waiting two days for a result and then going through scheduling after the fact. So it gets instant results to the ordering providers and gets the patients who need to be scheduled, scheduled.

Where do we have those devices right now? We have several diabetic screening cameras. One of them is in our own clinic: patients visit their primary care doctor, the order comes in, and they come to our department to have it done. We're also deploying a new camera to the labs, because these patients often come in for blood draws to check their glucose, and the staff who draw the blood can take the pictures right there. Those are the main areas right now, and we're looking to expand out to all of Northwestern and its various regions. Some of those cameras might be in endocrinology offices, some in central labs, but the goal is to make this widespread and to match the patient flow in each region so we can maximize the number of people screened.

Guest 1: That's great. The idea of having the camera in the area where blood work is drawn is fantastic, especially if the technicians there are able to take the pictures. So what has the uptake been?

Guest 2: Adoption of this, like anything, requires getting the word out.
We piloted it with several practices. We don't want to give primary care providers a new workflow; we want to find out what they do in their ordinary practice and make this accessible to them and their patients with minimal change to their workflow, because, like us, they're very busy in clinic, and if it's a multi-step process, it won't get done. But once providers order this and get a result in a timely manner, they find it's a great tool. Rather than repeatedly telling a patient, "You're overdue for your eye exam; go schedule with ophthalmology," they can get a picture right there. So adoption is definitely growing and becoming more widespread among the clinicians who use it.

Guest 1: That's great. And how about some of the federally funded health clinics that we support? Do we have these cameras at those locations?

Guest 2: We do. We partnered with community health centers, federally qualified health centers here in Chicago, and we used some of our big data at Northwestern to determine exactly where in the city to deploy the cameras, looking at where the incidence of diabetes is highest and where the incidence of potentially blinding eye disease is highest. We could go down to the ZIP code level and say: if we're going to deploy this camera, we should deploy it in this four-square-block area. So we partnered with several clinics on the South, West and Near West Sides of the city. We have three cameras actively screening patients with diabetes right now, and we've photographed over a thousand patients already.
Given that a little less than half of that population will require referral for some form of potential eye disease, that's a substantial number of patients in whom we've been able to detect disease. And as we all know at this table, 90% of vision loss from diabetes is preventable if we can detect it early enough.

Guest 1: Rukhsana, as somebody who provides medical retina care, are you finding that these patients are coming your way? Is this increasing your volume to unmanageable levels? There are all these people with undiagnosed disease whom we need to take care of, and this can open the window and allow us to detect them at a much earlier stage. How does it affect your clinical practice?

Guest 3: Absolutely. Technology takes a while to kick in, and one of the things we do very well at Northwestern is stay on the leading edge, on the ground floor. We've been doing telemedicine screening for quite some time, grading the images manually at first, and then employing some internal methods to boost the gradability of the images and eliminate some of the pitfalls of these processes and technologies, because sometimes they don't work. So we've been working through those early processes for some time, and we've long been available to the community. I haven't seen a marked change, because these patients come in and it's not always flagged how they got in, but volumes have never been higher, so I would not doubt that this is in part due to this new technology we're working through.

Guest 1: Thank you. All of us do research using artificial intelligence, and we have a big interest in using both image analysis and generative
artificial intelligence to help facilitate glaucoma clinical care. Rukhsana, why don't you tell us about the research you're doing and how it's going to change the field?

Guest 3: It's incredibly exciting, and ophthalmology is ripe for this, in part because we have multimodal imaging. What is multimodal imaging? We have many different noninvasive ways of looking at different parts of the eye. The eye is fascinating in that it's the only place in the body where you can directly visualize blood vessels. And embryologically, we know the retina is part of the brain, so in essence we're able to look into the brain and its circulation. This has opened up a huge body of research called oculomics, where we look at retinal manifestations of systemic disease: not only how disease might impact our vision, but how the retinal microcirculation might give us an idea about the cardiovascular or neurovascular state. There has been quite prominent research on Alzheimer's disease showing that cognitive state can correspond to decreased flow in the deep capillary plexus. In cardiovascular disease, we've seen that a color photo can estimate ejection fraction. Some of what we're doing at Northwestern involves collaborating with other departments: we have collaborations with vascular surgery, neurology, even heart transplant, looking at ways that noninvasive imaging can inform us about the state of other organ systems. Harnessing AI is essential in that, because looking across all of these imaging modalities and finding patterns within them is exactly where AI can help us harness our systems' information and find biomarkers. Now, what are biomarkers?
Biomarkers are features within a tissue that might inform us about systemic disease. One of the things we've looked at very carefully is retinal ischemic perivascular lesions, also known as RIPLs. These little areas of lost oxygenation, which are essentially micro-infarcts, are undetectable by patients; they don't come in with any visual complaints, but we find the lesions in their imaging. At first we didn't know their significance, but substantial research from elsewhere has since shown that these micro-infarcts are related to systemic and cardiovascular disease. So we've worked with our computer science colleagues on the Evanston campus to automate finding them, because going through and counting them manually is very labor intensive, and we've had some really interesting early success. We've also had early success with our computer science colleagues bringing algorithms from other domains into ophthalmology and applying them to our data. One is called machine teaching, in which we interact directly with the artificial intelligence, pointing out the areas we're most interested in, so it doesn't require as much data as other modes of learning. It's been fascinating from so many angles: not only harnessing new technology to find what is almost a needle in a haystack, but also working in teams and developing new technology.

Guest 1: You were talking earlier, when we were preparing for today, about education and the use of AI in education. Tell us about that. You oversee medical student education in our department, and I know it's a passion of yours.
Guest 3: It is. One thing I've learned from being in this field for years now is that we not only learn from our mentors; sometimes our students become our mentors. The new generation of ophthalmologists, medical students and doctors have skill sets very different from ours. We bring our wisdom about systemic problems and retinal pathology, and they come with a different perspective, and that mentorship runs in both directions. And it's not only working with medical students but with computer science students; this is a bridge and a shared language, and we can't do this in isolation. It is really team science. So I bring my passion for ophthalmology to the students, they bring their different ways of thinking, and together we come up with new ways to do things. It's really exciting.

Guest 1: Fantastic. In glaucoma, my field, we've done some research on the use of vision transformers with a technique called DINO, which stands for self-distillation with no labels. It's fascinating: we trained a vision transformer with this approach on macular OCT scans, then used the resulting AI model to evaluate our patients with glaucoma, and we were able to predict which individuals were likely to have rapid visual field progression with a reasonable AUC of around 0.85. It's interesting that there is so much information in the OCT that we don't look at, and using a vision transformer we can harness that information in a different way. It may not always be explainable, which is an important feature of AI in terms of clinical acceptance, but in terms of the discovery process, I think we may be able to use AI in meaningful ways, as we did in that study.
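For listeners less familiar with the metric mentioned above, AUC can be read as the probability that the model ranks a randomly chosen rapid progressor above a randomly chosen stable eye. A toy sketch in Python (the scores and labels below are made up; the study's DINO model and OCT data are not shown here):

```python
# Rank-based AUC: the fraction of (positive, negative) pairs in which the
# "rapid progressor" receives the higher model score. Ties count as half.

def auc(scores, labels):
    """Compute AUC from raw scores and binary labels (1 = rapid progression)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # illustrative model outputs for six eyes
labels = [1,   1,   0,   1,   0,   0]     # 1 = rapid visual field progression
print(round(auc(scores, labels), 3))      # -> 0.889 on this toy data
```

An AUC of 0.5 would mean the scores carry no ranking information; 1.0 would mean perfect separation of progressors from stable eyes.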
Paul, you're doing... go ahead, you were going to say something?

Guest 2: Yeah. I look at it in a slightly different way, and that's what's good; we all have our own ways of approaching this. Using AI to analyze an image, or thousands or hundreds of thousands of images, we can figure out what's really there, what we can predict from it, and how to get instant results and predictions. But I think a lot of AI's potential lies in coupling that with all sorts of other clinical data. What medications is the patient on? What have their calcium and magnesium levels been like? We really don't look at all of that for each individual patient at each visit. With AI, you can look at many factors, their ethnicity, even their ZIP code, and all of those things change each person's unique individual risk of progression of a certain disease. Is this patient likely to progress quickly, or likely to be stable? So, coupling images with all of this data that we have today in our electronic health record...

Guest 3: ...it's the ultimate personalized medicine.

Guest 2: Exactly. And no one clinician can look at all of that in a 15-minute encounter with a patient. So having it done almost instantaneously and presented to the physician while the patient is there, I think that's another great potential AI application.

Guest 1: Yes. And for clarity, when we talk about ZIP code, we're really trying to capture social determinants of health: for example, exposure to pollution such as 2.5-micron particulate matter. It also captures information about average income
in a particular region. And that, of course, can influence things like nutrition, which may influence disease processes in a very important way. So I think that's really powerful. I've looked at some studies that have incorporated EMR data and arrived at very, very high sensitivity and specificity for the detection of glaucoma, or for predicting some future glaucoma-related event; unbelievably high, in fact.

Guest 2: Yeah.

Guest 1: And I think one of the dangers of using the EMR is that information can bleed into the data set that you don't really want in the model. I suppose this is more of a research warning: if you include intraocular pressure data, for example, when you're using EMR data in a multimodal way to enhance the detection of glaucoma, and a patient had a pressure of 24 and then suddenly has a pressure of 12 on a different day, the machine may simply have captured the fact that the patient was started on treatment, and that could lead to a falsely high assessment of sensitivity and specificity. I've seen papers where I think that is a problem.

Guest 3: You bring up critical issues about having a clean data set and a very curated, mindful process for looking at all of this. And I think this is the importance of the clinician in all of this, because we are constantly mining data with our own thoughts: is this relevant, is this not relevant? That critical thinking and critical questioning are vital. You also brought up the idea of screening patients and the burden on patients coming into clinic.
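The intraocular-pressure warning above can be made concrete with a toy example of label leakage (all numbers here are invented). A "classifier" that only looks at the drop in pressure between visits scores perfectly, not because it detects glaucoma, but because treatment, which follows the diagnosis it is supposed to predict, is what causes the drop:

```python
# Toy illustration of label leakage from EMR data: a large IOP drop between
# visits often just encodes "treatment was started", and treatment is only
# started after diagnosis, so the feature predicts the label for the wrong reason.

patients = [
    # (iop_visit1, iop_visit2, has_glaucoma_dx)
    (24, 12, 1),  # diagnosed, then treated -> big pressure drop
    (26, 14, 1),
    (22, 11, 1),
    (16, 15, 0),  # no diagnosis, no treatment -> pressure stays stable
    (14, 15, 0),
    (17, 16, 0),
]

def leaky_rule(v1, v2):
    """'Classifier' that only looks at the IOP drop between visits."""
    return 1 if (v1 - v2) >= 5 else 0

accuracy = sum(leaky_rule(v1, v2) == y for v1, v2, y in patients) / len(patients)
print(accuracy)  # -> 1.0: perfect on this toy set, yet it learned the treatment, not the disease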
I'm really excited about another research study we're part of at Northwestern, through the DRCR Retina Network, the Diabetic Retinopathy Clinical Research Network. It's a national study of retina specialists looking at home OCT monitoring. For those who don't know, OCT is probably the most common imaging study we do in ophthalmology. It gives us a cross-sectional image of the retina, showing its layers, and we use it for macular degeneration at almost every visit to see how our treatment is working and how we might treat an individual. There is now an FDA-approved device that monitors patients at home with OCT and uses an AI algorithm to detect levels of fluid fluctuation. We're going to learn so much from this study about what is actually happening with these patients on a daily basis, and about when it is critical to treat: maybe some fluid is tolerable, or maybe it isn't. We treat with the best knowledge we have, but our ability to access new information and data about our patients is just exploding. So it's really exciting.

Guest 1: You're talking about Protocol AO, in which patients will be randomized either to home monitoring of their macular degeneration, looking for subretinal fluid with an AI-driven OCT machine they keep at home, or to standard of care.

Guest 3: Right. This is for exudative, or wet, macular degeneration.
Patients will be randomized either to standard of care, which is our treat-and-extend approach, our own form of personalized medicine, where we see the patient in the office and do this imaging, looking at subretinal and intraretinal fluid and hemorrhages, or to also being monitored at home while they guide their own imaging every single day. We get an alert, and we have human oversight of that data as well, but it gives us information suggesting that perhaps some patients can go much longer between injections.

Guest 2: And you could find clinical events that patients don't notice but that appear on OCT, and perhaps intervene.

Guest 3: Correct. And perhaps we end up with fewer injections and exceptional vision, or maybe we'll learn otherwise. It's really exciting; that's a very clinical application of AI in a device.

Guest 1: All of us have done research on generative AI for managing patient communication and the queries that come in from patients. The two of you worked on a very interesting project on the use of ChatGPT for triage.

Guest 2: Yes. As a practice director, I think about incoming patient calls every day, calls with questions; that's a burden. We have to get the right person to answer the right question: is this an acute issue that needs to be seen today or tomorrow, or can it wait? We get those calls every day, dozens, many calls. So I'll describe what we did, and then Dr. Mirza can tell you what our findings were. I came up with scenarios for each of the specialties, glaucoma, retina, oculoplastics, reflecting the typical calls that would come in.
For example, a patient scenario: "I have a red eye and my right pupil is bigger than my left," or headaches and eye pain; those are patients we would want to see the same day. Whereas somebody with itchy, burning, scratchy eyes we would tell to use artificial tears and let us know in a week or two how it's going. We posed those questions to ChatGPT, and to various iterations of ChatGPT, to find out how good it is compared with the people who would answer them in person. So we went to attending physicians, to some residents, and to triage staff, to compare ChatGPT against the people who do the triage.

Guest 1: The triage staff are trained technicians, right?

Guest 2: Exactly, trained to triage; we compared them to see how the model did.

Guest 3: It was really interesting because, as ophthalmologists, we know the words people say that require urgent evaluation. We wanted to see, as has been done in other fields, how a large language model would respond to these questions. We actually posed each question three different times to the model, and interestingly, people may not be aware, but you might get a different answer each time. So as we get these new technologies, ChatGPT, Bard, all of these tools, they seem amazingly helpful, but we have to keep looking at them critically. The good news is that, overall, we did find the technology very helpful: it was able to screen, provide answers and triage, meaning advise a patient to come in within a day, and suggest a diagnosis. It was better at triage, at saying come in at this point, than at actually getting the diagnosis right.
So the diagnosis might not be correct, but the triage, meaning come in, don't come in, come in in a month, or this is routine, was better.

Guest 1: What proportion of the queries you submitted resulted in what the graders considered an accurate, appropriate response?

Guest 3: The majority were appropriate, but there was a very small minority of unacceptable answers, which we always have to screen for, because an unacceptable answer by artificial intelligence matters; all artificial intelligence needs human oversight at this time.

Guest 2: And many calls are currently handled by a technician or front office staff; you won't always have an ophthalmologist answering the phone and giving an answer with that level of experience, so that's the relevant comparison. We found that over 85% or so of the answers were appropriate. The model did come up with a list of possible diagnoses. And we put in questions like: a patient calls with a shade over their vision, flashes and floaters; that's a retinal detachment until proven otherwise, and they need to come in today. We wanted to find out whether that would trigger an immediate referral, and that's where it was good. But there were some hallucinations, so to speak. If you ask ChatGPT, "Does United Airlines fly nonstop to Tacoma?" it says yes, there's a nonstop flight; then you ask, "Are you really sure?" and it says, oh, actually they don't, you have to connect. We can't have that clinically.
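The repeat-and-grade protocol the panel describes can be sketched in a few lines. Here `ask_model` is a hypothetical stand-in that combines a real chat-completion call with a human grader's verdict, and the canned verdicts are fabricated purely for illustration, not study results:

```python
# Sketch of the triage-grading setup: each scenario is posed to the model
# several times (answers can differ between runs), each response is graded
# appropriate/unacceptable, and we report the proportion graded appropriate.

from collections import Counter

def grade_scenarios(scenarios, ask_model, n_repeats=3):
    """Pose each scenario n_repeats times; return fraction graded 'appropriate'."""
    grades = []
    for scenario in scenarios:
        for _ in range(n_repeats):  # same prompt repeated: outputs may vary
            grades.append(ask_model(scenario))
    counts = Counter(grades)
    return counts["appropriate"] / len(grades)

# Canned grader verdicts standing in for (model answer -> human grade):
canned = iter(["appropriate"] * 17 + ["unacceptable"])
rate = grade_scenarios(["flashes and floaters", "itchy eyes"] * 3,
                       lambda scenario: next(canned))
print(round(rate, 2))  # -> 0.94 on these 18 fabricated verdicts
```

The point of repeating each prompt is exactly the nondeterminism mentioned above: a single run can over- or under-state how reliably the model triages a given scenario.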
But these are evolving models. We even ran it on version 3 and version 4, and the improvement of those models is really exponential.

Guest 3: Absolutely. And interacting with it and training it shows a lot of potential, even in areas we've discussed like education and training of staff, for example generating a script for an appropriate response to a certain query. So I think the technology can really be harnessed, and it can be a positive force with the right oversight.

Guest 1: We did another study in which we took about 300 real questions that came in through MyChart. All ophthalmologists know that with the volume of questions coming in, it's very difficult to process them. Technicians typically answer these questions, and they have to do it in the midst of a busy day of supporting us, so there can be delays in responses. It would be very nice to have another approach, such as an AI-driven one. So we looked at 300 real questions that came in to retina, glaucoma and cornea specialists, used GPT-4o to generate responses, and had three graders review the responses for accuracy and completeness. We found that about three-quarters were complete and accurate, which is not bad. But similar to your study, while results varied somewhat among specialties, somewhere between 5 and 10% of the responses were judged unacceptable. That's the problem: we're not quite ready for prime time. At the same time, as you mentioned, Rukhsana, and you too, Paul, there are these exponential gains in the quality of these models.
With GPT-5, it may be a completely different story; we may scale that barrier and get this to a stage where it's implementable.

Guest 3: What's interesting is that the newer models can also take images as input. If you combine an image with a symptom or a question, you add to the model's inputs and you get even more information out.

Guest 1: Absolutely. And whether we use it or not, patients are definitely using it.

Guest 3: Correct.

Guest 1: And for the most part, I actually think that's a good thing. I think it often gets the patient closer to a correct answer.

Guest 3: But I would put that caveat out there: big mistakes certainly can happen, so we still need really good oversight. I do think that increasing trust in the process, and engagement with the process, is part of the development and part of our advancement.

Guest 1: There was an article in The New York Times just within the past week assessing the responses of ChatGPT, and of generative artificial intelligence in broad terms, to patient questions, and how patients feel about them. What was really interesting to me is that, in some instances, patients find communication with these large language models more empathetic than what physicians are able to provide. It's very interesting.

Guest 2: The model isn't busy and distracted.

Guest 3: I think it's the directness: the physician's short response versus the model's opportunity to provide a longer, complete response. And studies like that have been done in psychiatry as well, where empathy can be integrated into the model. Really exciting.
Guest 1: In the UK there was a very interesting study that used an artificial intelligence tool called Dora, designed to communicate with patients about four weeks after cataract surgery to determine whether they needed to come in for further evaluation. It's interesting: we see patients on postoperative day one, postoperative week one, and so on, but that's not the National Health Service approach. There, if you're healthy and have just had cataract surgery, meaning no other eye problems, no glaucoma, for example, you're not seen again until about four to six weeks, by an optometrist, and mainly for glasses at that point.

Guest 2: I think we're basically describing the fact that we don't really know where we're going to use AI the most or where we'll find it most useful. It's going to be part of our lives and our practice very soon, but we don't actually know what that looks like yet. We have some ideas about where we think it should go and where it can go, but I think we'll be surprised: there will be everyday applications that we still haven't really grasped, and I think that's a great thing.

Guest 3: One of our hopes is that we can spend some of that time reconnecting with our patients and really focusing on their issues and our connection with them, by offloading some of these other tasks. AI has certainly started doing things like scribing office visits, among many others; small uses of AI are everywhere. Even when we write an email, it sometimes suggests the next word.
But I think our hope is that we use this as a tool to see the patient as a whole, access all the data in front of us, and really be able to reconnect and re-engage with what are sometimes really tough problems.

Guest 2: Where I also like to look at where this is going is at the level not just of the patient but of populations. In our system alone we have a tremendous number of patients. We could use AI to ask: of all the patients with diabetes who haven't had an eye exam, who is at the highest risk? How can we proactively contact them at their next visit with anybody and encourage them to come in for screening, or for treatment of a disease? These are patients who haven't shown up yet. How can we reach them, the patients at high risk for vision loss? Having that running in the background, proactively searching for people to get them into our office, that's where I'd like to see it go.

Guest 3: And if you really took that a step further, ultimately we will be able to take undilated photographs with iPhones, really reaching people where they are. I think that's not too far in the future: obtaining an image of the eye and...

Guest 1: It's happening now. It can be done in Africa using an iPhone to look for glaucoma.

Guest 3: Right. So then, coming back to research: what is the research for? To create data sets, to create the knowledge that is infused into what that picture means. How can we harness all the data and say, this is what we think is going on with this patient? That's really exciting.
And also to create models where all people are represented, so that the model itself is accurate. I think there's a lot of work to be done, but the possibilities are just amazing. Guest 2: It's not too far in the future when part of your yearly checkup with your primary care doctor will include a photo of the eye. Right now we think of it as a fancy tabletop nonmydriatic camera, but it's going to be a handheld device, or an iPhone. I think that's going to quickly become part of the standard for looking for these things. Guest 1: Yeah, that's a good point. The problem is that the sensitivity and specificity have to be sufficiently good in order to accomplish all of our goals. And, getting back to the gist of the research: there's a lot of evidence that AI works best when it's a collaborative approach with a human clinician. I think that will be another strong area, especially in glaucoma, where there's a lot of disagreement about the diagnosis of glaucoma early on, and there are certain cases of anomalous optic discs in patients with high myopia where experts often don't agree. So there's not a real consensus on the definition of glaucoma in all cases. There are black-and-white cases, of course, and AI is great at differentiating those, but so are the normative databases we already use with our OCT, which don't rely on artificial intelligence. Paul, you're doing research on the use of artificial intelligence to generate risk scores. Tell us more about that. Guest 2: Yeah, so we use a big data repository called SOURCE.
It's about 25, soon to be 50, centers. Guest 1: That use Epic. Guest 2: Academic medical centers that use our same electronic health record, so the data is a lot cleaner. Before we send it to the big data repository, we clear it of all patient-identifying information. Because it's the same electronic record, the data format is pretty similar, so it's a lot cleaner than other big data sources. And you can couple that with outside things, such as income, like we talked about. You have all the lab values, all the medications, all the diagnoses of diseases, and then you can have home zip codes and things like that. So what we're doing, because glaucoma, as we learned in residency, comes earlier, is more severe, and is more prevalent in certain populations, is asking: is that just all genetics, or what else is going on there, if anything? You alluded to it earlier. An African American population may have a definitely higher prevalence of glaucoma, but when we look at them, they tend to live in areas of higher air pollution, like you said, PM 2.5. So we're looking at all these various factors to figure out: can we bring in these things? Because when I see a patient, I'm not really looking at their Guest 3: Environment. Guest 2: Their zip code, right. I'm just looking at them as a patient: their vision, their pressure, their eye exam. I'm not thinking of those things. So how can we get that data to me, or to you, when you're seeing that patient? That's the end goal for this.
With all of these things that we're not looking at in the clinical visit, but that are sitting somewhere in the EHR, can we give the provider at the point of care a risk estimate, or a red flag, so to speak? Say you were thinking about adding a second medication in a borderline case, and you have a low-risk patient and a high-risk patient, everything else the same except for that risk score. You might add the second medicine for the high-risk patient and maybe not for the low-risk patient. That definitely has to be validated and proven, but that's how I think it will augment our ability at the point of care. Guest 3: One thing we haven't discussed that's possible in the age of AI is wearable devices, or devices in the home. We talked about OCT, but there are other devices a patient might wear, and harnessing that data tells us what's going on outside of the clinic visit, right? Guest 2: Yeah, cardiology uses that all the time: home heart monitors, pulse rates. Guest 3: And glycemic variability. Amazing. Guest 2: So plugging into that is definitely coming. I also can't let this go by without thinking about how we get through our days, too. I'd like to ask: how can we deploy resources in the hospital, or in our clinic, during a busy day? We have 500 patients coming through our own clinic, right? How can we say: this area is getting overwhelmed, we need to shunt more resources there? Or: your pictures are running behind, let's get more retinal imagers on your side of the clinic? How can we make the patient experience better?
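The resource-reallocation idea above can be sketched as a simple check over per-stage wait times, flagging any stage of the visit that is backing up. The stage names and thresholds here are assumptions for illustration, not the clinic's actual workflow.

```python
# Minimal sketch of timestamp-based clinic monitoring: flag visit stages
# whose average current wait exceeds an assumed threshold, so staff can
# be redeployed there. Stage names and limits are hypothetical.
from datetime import timedelta

# Assumed per-stage wait limits before extra resources are sent over.
THRESHOLDS = {
    "checked_in": timedelta(minutes=15),
    "dilating": timedelta(minutes=30),
    "awaiting_imaging": timedelta(minutes=20),
    "ready_for_physician": timedelta(minutes=25),
}

def flag_bottlenecks(waits: dict[str, list[timedelta]]) -> list[str]:
    """Return the stages whose average wait exceeds their threshold."""
    flagged = []
    for stage, durations in waits.items():
        if not durations:
            continue
        avg = sum(durations, timedelta()) / len(durations)
        if avg > THRESHOLDS[stage]:
            flagged.append(stage)
    return flagged

# Current waits, derived from check-in/dilation/imaging timestamps.
waits = {
    "checked_in": [timedelta(minutes=5), timedelta(minutes=10)],
    "awaiting_imaging": [timedelta(minutes=35), timedelta(minutes=25)],
}
print(flag_bottlenecks(waits))
```

A real deployment would stream these timestamps continuously from the EHR rather than batch them, but the decision logic, compare observed waits to targets and alert, is the core of it.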
We could be using AI to monitor all these things, because we know exactly when somebody checks in, when their eyes are dilated, when they're waiting for their pictures, when they're ready to see me or to see you. We could have something monitoring all these timestamps constantly and then redeploying resources to make everyone's lives easier and get people through the clinic quicker. Guest 1: Yeah. As the population ages, we all get busier and busier. We're already there, but it's going to continue, and we have to understand the need for efficient delivery of healthcare. I think AI will be a major driver in that direction. We can hope for AI driving discovery in medicine; AI will drive efficiency, and I think it will drive accuracy and quality, too. Guest 2: We'll get there eventually. Guest 1: It's been a great discussion. Guest 2: Yeah, it was fun. Melanie Cole (Host): Thank you. Guest 3: The future is exciting. The future is now. Guest 1: Thank you. Melanie Cole (Host): Thank you all so much for joining us today for such a lively discussion on such an exciting topic. Thank you again. To refer your patient, or for more information, please visit our website at breakthroughsforphysicians.nm.org/ophthalmology to get connected with one of our providers. That concludes this episode of Better Edge, a Northwestern Medicine Podcast for physicians. I'm Melanie Cole.