In this compelling episode of Health Biz Talk, host Tony Trenkle speaks with Julia Komissarchik — AI innovator, patent holder, and founder of Glendor — about the evolving intersection of artificial intelligence and healthcare data privacy. With over 20 years of experience in machine learning, NLP, and image processing, Julia shares her insights on the challenges of de-identifying multimodal medical data, the biases in AI training datasets, and the ethical responsibilities of patients, providers, and tech developers.
From discussing the limitations of HIPAA in a rapidly advancing digital age to exploring the need for responsible AI and diverse data sharing, this conversation dives deep into what it takes to ensure AI can scale responsibly and equitably across the healthcare ecosystem.
Transcript of the Podcast
00:00:00 Intro
Welcome to Health Biz Talk, the industry’s leading podcast that brings you today’s top innovators and leading voices in healthcare, technology, business and policy. And here’s your host, Tony Trenkle, former CMS CIO and Health IT industry leader.
00:00:16 Tony
Hi, this is Tony Trenkle and welcome to another podcast of Health Biz Talk. I’m pleased today to introduce Julia Komissarchik. Julia has over 20 years of experience in artificial intelligence, including machine learning, deep learning, OCR, image processing, speech recognition and analysis, and natural language processing. Her accomplishments include building the “did you mean” capability for Wikipedia, which is available in over 300 languages, and building a patented pronunciation training system that was incorporated into Rosetta Stone language learning. Julia holds degrees from the University of California, Berkeley in mathematics, where she graduated magna cum laude, and from Cornell University in computer science. She has 10 US patents in AI and several more patents pending. So welcome, Julia. Thank you for taking the time to join our podcast. Thank you, Tony. So we’re going to start off, Julia, the way I always do, by asking how you got to where you are today. We’ve given a little bit of background, but what made you go in the direction you have gone in your career? And what has caused you to be so passionate about healthcare, data and privacy and some of the other issues we’re going to delve into in a few moments?
00:01:50 Julia
So I’ve been interested in mathematics since I was a little kid. I participated in competitions, won some, and was always interested in mathematics and analytical thinking. I’m also very passionate about the so-called human–machine interface, anything that has to do with the interaction between human and machine. It’s a domain, sometimes called cognitive studies, that blends computer science, psychology, linguistics and philosophy. And in most of my work, I’m always interested in working on things that are, as they say, out-of-the-box: problems that demand creative thinking and don’t have straightforward answers once you start working on them. So that’s my passion.
00:02:43 Tony
Great, great. Well, one of the things we want to delve into right now is the balance between getting access to medical data, loosely defined, since it obviously involves more than just regular clinical data, and the whole issue of protecting privacy, which you’ve advocated for through de-identification and through work you’ve done with your company and in other areas. So first, I think it would be helpful for the audience: I understand what multimodal data is, but maybe it’d be helpful for you to talk through that so people can understand what you mean by it. And then we can delve into some of the privacy challenges.
00:03:30 Julia
Sure. So usually when people think about medical data, they think about either reports or tabular data, something that is collected, for example, for SDOH or for analysis of healthcare trends. But medical data is so much more than that. It’s also medical images like X-rays, ultrasounds, CT scans, MRIs, and pathology imaging like slide imaging, so-called WSI, whole slide imaging. It could be videos: videos of a surgery, videos of patients, and so on. Or it could be photographs; for example, dermatology actively uses photographs to detect lesions and skin issues. Or it could be voice recordings, a doctor’s dictation of, let’s say, an X-ray report, or an interview with a patient to detect whether or not they have dementia or certain psychological issues. Almost anything you can think of becomes medical data. And that’s why one analysis shows that 30% of all data generated by humans is healthcare data.
00:04:41 Tony
That’s amazing. So we talked about this whole issue of de-identification. For years when I was working at CMS, that was obviously one of the areas we looked at very closely. In today’s world we have a proliferation of people’s public data, and even data that’s not necessarily public but that people can get access to. And of course with AI now, you can take data from multiple sources and bring it together with machine learning and other techniques to build a deeper, I don’t know about richer, but certainly a different and deeper understanding of the data and how it all ties together. So in this environment today, which of course is going to keep evolving and becoming more difficult as we get to quantum computing and other major advances, certainly in the next decade: do we truly have a way to de-identify people’s data today? And will the same things work 5-10 years from now, or are we on a glide path that’s going to lead us into a situation where nobody has truly de-identified data?
00:06:10 Julia
Well, the way to think of it is to think of a piece of paper. Let’s say I want to send you a piece of paper. One of the safest ways for me to protect any information on that paper is to black it out, so it’s just a black page. That will be absolutely safe, but not very useful. The other way is for me to send you the entire page. That will be useful, but not very safe, because it might contain information that can link back to the patient: patient name, date of birth, location, etcetera. So there’s always a balance: how much do I cleanse, or what we would call sanitise, so that it protects the patient while the data is still useful? That’s the biggest challenge. In my opinion, one of the biggest challenges for AI in general is whether or not it will have access to data to train on, because if it doesn’t, that leads to bias and to non-generalizability. One of the things which is striking right now is that of 1,000-plus FDA-approved AI models, the average number of installations is one, which means a model was trained on a certain data set and applied to that same data set. That means the models are not generalizable, and if they’re not trained on the right material, they won’t be applicable to you. The other stat I always like to quote is that if we look at where the data for AI training is coming from, it comes from the coasts, from large urban areas. It doesn’t come from small clinics, tribal clinics, rural clinics. As a result, if those AI models are applied to a population that lives in rural communities, they won’t be valid; they will be erroneous, and very dangerous at the same time. Going back to the paper example: if AI doesn’t get enough information on that paper, it will be useless. At the same time, if we give out too much on that piece of paper, then privacy will not be protected, and that’s very dangerous. So it’s always a question of balance, and that balance changes. Let me give you an example. One of the most challenging problems for medical imaging is so-called brain CT or MRI scans, because those scans are slices of a head, and even if the information, the so-called metadata and so-called burned-in text, is de-identified, one can actually reconstruct the face itself. When the original data sets were built a few decades ago, the thought was that if we make the slices thick enough, then facial recognition will not be able to recognise those faces. However, face recognition has improved dramatically, and as a result even those thick slices are enough to recognise the person, let alone thinner slices. So yes, things change all the time. At the same time, it’s a game of balance between how much we de-identify or sanitise and how much we release and give access to researchers. That’s a challenging problem. That’s why, for example, we haven’t talked about Glendor yet, but Glendor has been around for eight years, and five of those were spent developing the technology so that we can actually sanitise the data so that it can be safely shared, according to HIPAA and GDPR, with other researchers.
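To make the metadata side of that sanitisation concrete, here is a minimal, hypothetical sketch using the open-source pydicom library to scrub a few patient-identifying tags from a DICOM file. It is only an illustration of the idea, not Glendor's actual pipeline, and real de-identification also has to handle burned-in text in the pixels and, for head scans, the facial-reconstruction risk Julia describes.

```python
# Illustrative sketch only: scrub a few identifying DICOM tags with pydicom.
# Real de-identification covers far more tags, burned-in pixel text, and
# facial features in head CT/MRI, none of which this toy example addresses.
import pydicom

def scrub_basic_identifiers(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)

    # Blank out a handful of directly identifying attributes.
    for tag_name in ["PatientName", "PatientID", "PatientBirthDate",
                     "PatientAddress", "ReferringPhysicianName"]:
        if hasattr(ds, tag_name):
            setattr(ds, tag_name, "")

    # Drop vendor-specific private tags, which often hide identifiers.
    ds.remove_private_tags()

    ds.save_as(out_path)

# Example usage (paths are placeholders):
# scrub_basic_identifiers("raw_scan.dcm", "sanitised_scan.dcm")
```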
00:09:43 Tony
Right, right. Yeah, I agree. It’s going to continue to be this kind of balance, risk versus reward, and of course risk is sometimes in the eye of the beholder. Some people have more risk-averse thinking; others think of it more broadly, for the common good. This issue will continue to be debated, and as you said, as things continue to evolve, it’ll get more and more challenging for people trying to protect privacy, I believe. But moving on, you mentioned the whole issue of bias. I’ve been looking at this for a long time, and certainly one of the areas you bring up is geographical bias, rural versus urban. Of course, I live here in the Baltimore area where we have Johns Hopkins, a large university that’s certainly typical of some of these research organisations you’ve talked about in terms of bias in the data that’s collected. And of course we’ve seen it with gender, we’ve seen it racially, we’ve seen it in other places. There are multiple problems, but one of them is that as we move into the world of AI, we don’t know where these various companies are getting their data from. From a bias standpoint, at least to me, and certainly to a lot of users, it would be difficult to understand: if they get an answer from ChatGPT, how can they detect the bias, or can they? I’ve seen stuff with deepfakes and other things; it’s getting more and more difficult to make that determination, let alone detect the bias based on the data it was derived from.
00:11:59 Julia
There are actually systems that detect bias, and there are systems that detect deepfake videos and other fakes; there are companies that work on this. It’s sort of an ongoing battle: the deepfakes become more and more sophisticated, but the companies that detect deepfakes also become more and more sophisticated. It’s a technically challenging problem, but it’s something that is being solved. Once again, though, it’s a balance, and one of the things which does help is volume of data. Getting data from multiple sources reduces bias. For example, we’re discussing ChatGPT, right? One of the challenges there is that it was trained on publicly available data, and they’re starting to realise that they don’t have enough data, enough representative data. In healthcare it’s even worse, because most of the data is highly private and personal and needs to be protected. So healthcare is in even bigger trouble than ChatGPT when it comes to bias, but availability of diverse data creates a much more objective view. That’s why I personally am so keen on us as a society creating those data lakes, the concept of sharing data, because that will, if not guarantee, then at least increase the chance that we reduce the bias and that the models will be generalizable.
00:13:35 Tony
So I understand where you’re going from the standpoint of the greater good. But what about a lot of these private companies who can gain certain advantages from the types of data they can access, and of course from their models? Do you think, have you seen, that they have not only an interest in data sharing, but that it’s actually favourable to them? Because it seems like a lot of them would not like a level playing field, quote unquote.
00:14:08 Julia
Well, I’m a big believer in checks and balances, and one of the checks would be the patients themselves. We ourselves have to think about what is happening to our data, how our data is shared, how it is protected. We discussed earlier that some people are more averse to sharing and some people are more in favour of data sharing, but lots of people are on social media, and a lot of that data becomes public and is then used for other purposes, right? Not that I’m trying to advertise Netflix, but Netflix just came out with yet another documentary, and that one is about child YouTube stars, where, I don’t remember the exact numbers, but something like 50% of all subscribers to those channels for teenagers were not teenagers but adult males. They use the images that are out there for other purposes, let’s put it this way. So we have to think about it ourselves: how much information do we put on social media? How many photographs of our children do we put on social media, etcetera? It’s a bigger question. So my answer is that one of the things we need to do is, as individuals, as patients, think this through and be mindful of it. Actually, some of the people who have spent a lot of time thinking this through are people in the rare disease community, because quite often there is no way not to share data: the disease usually doesn’t have enough representation in one location, it could be spread across the world, right? But in order for researchers to do their research, they have to have access to this data. Quite often these are rare paediatric diseases, so the parents or patients decide how they want to share data. They realise that they cannot not share, they have to, but at the same time they have to think about privacy. And that’s what I encourage everybody else to do, in healthcare or outside of healthcare, because we’re all patients at the end of the day.
00:16:17 Tony
Well, since we’re talking about the patients, of course patients come in all different types. Some are much more knowledgeable about the privacy issues than others, and it’s not necessarily generational, but there’s certainly a greater willingness in people under a certain age to, I don’t want to use the word barter, but they’ve certainly given up their data for a number of reasons. Some of it’s monetized, some of it’s to allow access to certain products or information or things of that sort. And I guess the question is, in the medical space, as people use wearables and other types of devices that constantly monitor their health information, and that becomes more and more a part of how they share data, we’re getting into a situation where the public is in many ways not educated about this. And even when they are educated, many of them seem willing to make trade-offs because they see a short-term monetary or other type of gain. I don’t know what your thoughts are on that.
00:17:46 Julia
Well, my thoughts are, once again, that the most important thing is to educate patients so that they can make those conscious and educated decisions.
00:17:57 Tony
So who’s the educator?
00:18:00 Julia
Well, hopefully this podcast is one of those, right? All of us can educate each other. There are so many different opportunities: it could be at the federal government level, it could be companies, it could be individuals, it could be different nonprofits, etcetera. The important thing is to think through what one is doing, and I’m hoping that will change things a little bit. There are a lot of very interesting initiatives on responsible AI. For example, I personally know people involved with one at the University of Utah, which is here in Utah; there’s also one at Northeastern. They’re usually called RAI, responsible AI initiatives, and they’re trying to think through these things. The challenge we are primarily addressing is how to train systems so that they are not biased, by giving access to the data. But there are also questions like: if somebody is using ChatGPT and entering too much information, information they shouldn’t share, users need to be aware that ChatGPT keeps everything. So if they’re asking questions, they might want to ask them more generically without providing their personal information. It could be in terms of usage, right? There are so many different aspects; one podcast is not enough to discuss them all. But what I like is that there’s a conversation going on about it, because my belief is that it has to happen on every single level. We as a society should be aware of it. If we don’t think through what AI is doing to us, AI will do the thinking for us, and we won’t like the results.
00:19:44 Tony
I don’t disagree with you. But I will say that to me, one logical place to discuss this would be the providers, and possibly even the insurance companies. Yet I don’t recall ever getting any information from either of them other than the basic HIPAA guidelines, and that’s more of a paper to sign than an education. So how do we get some of these parties that are most responsible for the data, outside of course the individuals themselves, to take on some of this responsibility? Because it seems to me people trust their providers a lot more than they trust their health plans. We’ve seen a number of surveys saying people trust providers much more than they trust insurance companies or the government or any of the other entities involved. So I don’t know, you may know better than I do, but are providers being trained in medical school or elsewhere to learn more about healthcare data and AI and some of the privacy implications that may have a negative impact on their patients if they’re not aware of how to better safeguard it?
00:21:14 Julia
They should, but you know, with our healthcare system, everybody’s doing their own thing, so I can’t speak for everybody. But to answer your question: in my opinion, the way we encourage providers and insurance companies to be more mindful about this is through one of the very strong drivers, enlightened self-interest, in this case money. If there are fines, legal or fiscal implications like HIPAA provides, then they will think more about it. They will just have to. Let me give you an example; this is more about sensitive images than about HIPAA identifiers. There was a case in Pennsylvania where a hospital accidentally, not intentionally, released about 600 records with nude photographs of the genitals and breasts of cancer patients. They had to pay, I think, 70,000 per patient as a fine, and there was nothing they could do because they were at fault. Then they had to merge with another hospital, because that kind of fine times 600 is a pretty hefty number. So that’s one of those, the stick, right? But hopefully there is also a carrot, and the carrot could be monetization, proper monetization. You cleanse the data so it can be monetized, so that it can be used in research. So there is a positive: the providers, insurance companies, and patients themselves can be compensated, something like a NASDAQ for medical data. Then the players are entering it with their eyes open, and they will be properly compensated; micropayments and other things can take care of it. So that would be my approach to the problem that we have.
00:23:15 Tony
What about any of the large medical associations like the AMA or others? Have any of them done anything to tackle this problem? Have you had a chance to talk to any of those folks about that?
00:23:30 Julia
I have. One example I could give is ARPA-H, which is like DARPA for healthcare. This is a newish initiative, a couple of years old. It’s an organisation which has different projects, and one of them is the so-called Index, which is a database for medical images. It’s not a database located in one place; it could be federated. The important thing is how to give researchers access to de-identified medical images for their research. They just finished their competition and will be announcing winners shortly. That’s one of the initiatives I’m aware of that is doing this. There is another one, called Rapid, once again ARPA-H. This is for rare disease, same principle: how do we make sure that, to do research, especially on rare disease, the researchers and AI model developers have access to the data? I have to say de-identified data; that’s the critical part, and that’s the part we’re addressing. That’s why we have this company. But in general, I think this is very critical: the data we use for any kind of research has to be de-identified. There is no way around it. An interesting anecdote: everybody was very much hoping that federated learning would solve this problem. In federated learning, the data is not sent to the place where the model is trained; instead, the model is sent to the places where the data is, smaller pieces are trained on the local data, and then only the numbers come back. So you would think that would be enough, that you don’t have to de-identify the data in those locations because it’s just numbers. Unfortunately, those numbers can reconstruct the entire image together with the sensitive information. So even federated learning, while it solves the problem of not pushing around a lot of data, which for pathology imaging is critical because one study can be a terabyte, just too much to move around, it doesn’t solve the privacy problem. The data still needs to be de-identified. You can’t just use the data as is.
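As a rough illustration of the federated learning pattern Julia describes, here is a hypothetical sketch: the model parameters travel to each site, train on local data, and only numeric weight updates come back to be averaged. The names and the tiny linear model are invented for illustration; Julia's caveat is that even these returned numbers can leak enough to reconstruct images, so the local data still needs to be de-identified.

```python
# Toy sketch of federated averaging: only weight vectors leave each site.
# Hypothetical names and simulated data; real systems add secure aggregation.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.01, steps: int = 10) -> np.ndarray:
    """Train a tiny linear model on one site's local data; return new weights."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w  # only these numbers are sent back, never the raw data

def federated_round(global_w: np.ndarray, sites: list) -> np.ndarray:
    """One round: push weights to every site, average the returned updates."""
    updates = [local_update(global_w, X, y) for X, y in sites]
    return np.mean(updates, axis=0)

# Example with three simulated "hospitals", each holding its own data:
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(3)
for _ in range(5):
    w = federated_round(w, sites)
```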
00:25:48 Tony
Yeah. So one of the areas you alluded to a few moments ago was the role of government. You talked about ARPA-H and some of the other work that’s been going on in government over the past number of years. At CMS, of course, we often used the carrot-and-stick approach through the payment system. We would start promoting certain policies by giving incentives; that’s what we did with electronic health record systems when we started pushing meaningful use many years ago. And then the other side is the stick: you promote incentives for a certain period of time and then start creating disincentives, as we might say, to promote the behaviour change, which hopefully has already gone far down the road by then, and it then evolves into more of a business decision. For example, over the past few years CMS has promoted the use of FHIR as a standardised way of sending medical information. They started with carrots and moved somewhat into sticks, and now they’ve tied it to more mainstream business applications such as prior authorization, which they’ve promoted over the last couple of years. So as we look at this ecosystem you’ve been sketching out, we’ve talked about the role of patients and the role of the medical community; now we turn to the role of government, and when I say government, I mean at all levels, both state and federal. What are some of your thoughts on the best way to do this so that we continue to drive innovation and promote the use of AI and data, but at the same time continue to support privacy and security safeguards for patients and others in the system?
00:28:15 Julia
That’s the, what is it, million-dollar question, right?
00:28:17 Tony
That’s a lot. Yeah, it’s a big question.
00:28:19 Julia
Nobody knows the answer. I think working closely with both the patient community and the healthcare provider communities will help, because certain regulations actually spur growth, we could say safe growth, while others just stifle growth, and we need to be careful about both. So it has to be a balanced approach. But do I have a one-sentence answer? No way.
00:28:47 Tony
No, no, I’m not asking for a one-sentence answer. I’m just asking: is the amount of government intervention today, whether as a carrot or a stick, appropriate, or do you think there should be more of what we used to call a heavy hand in this area?
00:29:10 Julia
Good question. It depends on which part, right? Let me give you an example that I know of. HIPAA covers 18 particular pieces of information, right? But there is actually a very interesting initiative on sensitive images: there was a white paper, a SIIM-HIMSS joint white paper on sensitive images, and I was a co-author of it. Let me explain what I mean by sensitive images. It could be, for example, nude photographs of patients, especially paediatric cancer patients, where the question is not only that the data has to be protected, of course, but who even within the hospital should have access to it. The treating physician should have access to all the information, but a scheduling clerk or a janitor, should they have any access to those images at all, or should those files be marked as sensitive and not shown to them? That’s the big question that goes beyond HIPAA. Another example is autopsies: showing autopsy results with photographs to a relative, right? Or we’ve seen highly publicised photographs of certain crashes where the photographs are too gruesome to be shown, especially to family members. All of these are things that need to be thought through. So that’s an example of where more thought and more regulation would be appropriate: a granular approach to how much data a treating physician versus a student physician versus a scheduling clerk within the same hospital should see. That’s one of the conversations. But at the same time, the other conversation is about access. When you’re taking care of a patient, you’re exchanging information about one single patient. But what about when you have to do research, where you have to have access to a lot of data, and it has to be diverse, coming from multiple sources? That adds additional headache in terms of handling, and rightfully so. At the same time, how can we make sure that this data is accessible? Because if we don’t, AI is going to die on the vine. If it’s not working, people will say AI doesn’t work; AI has been up, down, up, down, and we’re almost at the precipice where people will say it doesn’t work, we’ve seen this before, right? So it’s always this balance.
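The "who inside the hospital should even see this" question Julia raises is, at its core, an access-control policy on a sensitivity flag. A minimal, purely hypothetical sketch of that idea follows; the role names and policy are invented, and a real system would tie into the hospital's identity management and audit logging.

```python
# Hypothetical sketch: gate access to images flagged as sensitive by role.
# Roles, policy, and image records are invented for illustration only.
SENSITIVE_ALLOWED_ROLES = {"treating_physician"}          # narrow access
NON_SENSITIVE_ALLOWED_ROLES = {"treating_physician",
                               "student_physician",
                               "scheduling_clerk"}        # broader access

def can_view(image: dict, role: str) -> bool:
    """Return True if a user with this role may open the image."""
    if image.get("sensitive", False):
        return role in SENSITIVE_ALLOWED_ROLES
    return role in NON_SENSITIVE_ALLOWED_ROLES

record = {"study_id": "demo-001", "sensitive": True}
print(can_view(record, "treating_physician"))  # True
print(can_view(record, "scheduling_clerk"))    # False
```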
00:31:53 Tony
Yeah, I agree. So the bedrock of all healthcare privacy has been the HIPAA privacy rule, which of course is over 20 years old now. The original legislation came out in, I think, 1996 or so, somewhere in that time frame. And yes.
00:32:18 Julia
The last...
00:32:19 Tony
Last the last Millennium, but I guess the the challenge with that is that we have a have a Congress now that’s seems fairly deadlocked in it. I think it’s going to be hard to get certain legislation passed to update some of these, some of these regulate, not regulation, but some of these legislation that’s, you know, 2530 years old. So sometimes people say, well, maybe the states, but if you go to the States and you have, you know, 50 states plus territories and things of that sort, then you end up with a, a hodgepodge of regulations and rules and things of that. So, So what do you think given the current atmosphere in DC, it’s kind of hard to get that type of legislation passed to update a lot of this healthcare data privacy today. How can we in the in the meantime, given that fact, what can be done at the at the government level, do you think?
00:33:26 Julia
Well, I think the states are picking up a lot of the work on this. For example, Utah is very active in thinking through responsible AI and AI and data regulation. You know, I’ve spent almost all my adult life in California; I’ve been in Utah only for eight years, so I’m a new Utahn. But if you look at California, they do a lot of work as well. Different states approach it differently based on their philosophy, based on how they think it should be done. It’s going to be interesting to see what happens: every state is a Petri dish, right, where an experiment is run. It will be interesting to see what falls out of the 50 experiments, what will stick and what will not work. In that sense, unfortunately for us, it’s an experiment on us, but longer term, hopefully it will lead to some interesting results.
00:34:31 Tony
Yeah. I mean, traditionally states are called the laboratories of democracy, and certainly a lot of national movements grew out of state efforts. With the ACA, the Affordable Care Act, a lot of the work around the marketplace was done at the state level before it was done at the national level, and other national movements began as state movements. It’s just that the whole area of AI and the explosion of data is occurring so rapidly, it’s going to be hard for government at any level to keep up with what is needed as a safeguard.
00:35:11 Julia
Yes, but at the same time, that’s why I’m very bullish on individuals, because it’s their life, their data, and they need to think it through and seek it out. You mentioned that you’re concerned there might not be enough education; I think the desire to seek out that education would be very valuable, because at the end of the day, we as individuals will define what the states and the government are going to do.
00:35:43 Tony
Right, right, yeah. And I do think, as you said, one of the areas that offers promise is advocates in the rare disease area, especially people who have children with these diseases. I’ve seen a number of people forming user groups, and sharing information and data has been very valuable. So we’ve talked about a lot of issues over the last 40 minutes or so, Julia. As you said, we could talk a lot longer than one podcast, but I am going to move towards closure here, and maybe we’ll bring you back to do a deep dive on one or two of these issues in a future podcast, if you’re available.
00:36:29 Julia
Sounds great.
00:36:30 Tony
OK, great. So turning to the personal side: what keeps you busy when you aren’t thinking about healthcare data?
00:36:37 Julia
Lots of things: music, sport, reading. I like to do things which take me out of my comfort zone. Right now I’ve volunteered for a choir. It’s not exactly something I would naturally want to do, but something I have to do, and that’s good. You know how it is: it’s nice to step outside of your comfort zone and do some blundering, so to speak.
00:37:01 Tony
Yeah, it does. And stepping out of your comfort zone also helps you in your work, because it forces your mind to think in different ways; it’s not something you’re used to doing on a regular basis.
00:37:17 Julia
Yeah, absolutely. That’s what makes life interesting.
00:37:21 Tony
Well, that’s great. Another area I wanted to touch on at the end: you’ve hit a lot of different topics and you’ve done a lot of podcasts out there, but what are some places people can go? One of the things we talked about a lot today was getting the consumer, the patient, more informed. What are some public sites that you would recommend?
00:37:51 Julia
Actually, I would recommend reading as much as possible. If you’re looking for market research, I really like Signify Research; they’re a smaller player, but they’re very agile, so I like them. There are other sources too: for medical imaging there’s AuntMinnie, and for healthcare there are healthcare IT blogs. So there are quite a few different blogs and conversations. But my recommendation is to look at two opposite opinions and make up your own mind.
00:38:28 Tony
Right, I agree. I think there’s so much information out there now that a large part of it is just trying to filter what comes through. As we talked about before, there’s the whole issue of bias and of what the right information is. The more you can read and think about it, go out to some of these different sites, or even pose these questions to AI and see what kind of different answers you get from the different models, the better.
00:39:04 Julia
That too, but for AI, I recommend that when you’re using ChatGPT, you know the answer beforehand. ChatGPT is very good at presenting really well-rounded sentences, you know, but don’t let it do the talking, because unfortunately it likes to come up with non-existent information.
00:39:22 Tony
Right, right. So I’m going to close with a couple of quick questions. These are just one-minute-or-less snippets that we can use on LinkedIn or other platforms. So one is: is there another country that you feel better, or best, balances patient privacy safeguards and the use of health information to improve clinical outcomes? Is there a shining star out there, the European Union or others? What do you think?
00:39:58 Julia
I think we’re all searching, and there is no clear answer yet, but one thing I really like is the South Korean initiative. They created a medical data lake that they used for their own AI companies, and as a result some of the leaders in the space, like Lunit, came out of this initiative, because they could train their models on a very large volume of diverse data. Remember that example of one installation on average? Lunit is installed in something like 3,000 hospitals. That’s quite a difference, and it’s because they were trained on that large volume of data. So for initiatives, I would point to the Korean initiative. I’m also very hopeful about the ARPA-H projects, because it’s a very interesting approach and they’re looking into different challenging issues. But I can’t say that one country has made it already and we can just copy over what they did.
00:40:54 Tony
This is stretching the question a little longer, but do you think there’s a better opportunity in some of these countries that have more of a national healthcare system, or do you think it doesn’t really matter that much?
00:41:13 Julia
I think both systems have their problems, and that’s the challenge right now. The UK has big issues; they were creating research initiatives like NHSX, and it died, which is a very depressing story, though they’re doing some other work. The NHS is one example of something which is the opposite of what we’re doing, right? So that’s why I don’t think anybody has the answer. I don’t know the answer, and I don’t think anybody else has it, but we need to work towards finding it together.
00:41:50 Tony
Right, right. I agree. So what worries you the most about how healthcare data will be used, or abused, in the future? If you and I were going to have this conversation again, pick a point in the near future, what would worry you over the next five years that we’re going in the wrong direction?
00:42:16 Julia
As I mentioned before, what worries me is when we as a society don’t think through those issues. It’s one of those things we can’t just avoid and let go; we need to think them through, because otherwise the decision will be made not by us, but sort of by default, and that would not be in our interest. So I think responsible conversations about it, for us as a society, at different strata and different levels, would be a winning combination for arriving at an answer, because I don’t think anybody has the answer yet. What I’m hoping is that having the conversations and thinking it through, not just letting it go and not pretending it’s not there, will help us.
00:42:56 Tony
Right, right. I agree with you. I hope we can get to that point. I will be cautiously optimistic.
00:43:03 Julia
Yeah, no guarantees, right?
00:43:06 Tony
Right, no guarantees. Well, thank you, Julia. I appreciate the time, and I think we covered some really great ground. As I said, I’d like to bring you back at some point to dive deeper on one or two of these issues, because we could easily spend 45 minutes to an hour or even longer on any one of them.
00:43:26 Julia
Yeah, yeah, all of them are cans of worms or, you know, deep, deep holes that need to be explored, right.
00:43:36 Tony
Exactly.