Transforming Medicaid With AI: Insights From Optum’s State Leaders

In this Health Biz Talk episode, host Tony Trenkle, former CMS CIO, sits down with Karl Schelhammer, Senior Director for AI/ML Engineering at Optum State Government Solutions, and James Lukenbill, Strategic Product Manager for Analytics at Optum. Together, they explore how AI is reshaping Medicaid operations, fraud detection, eligibility processes, member experience, and the future of state health and human service systems.

Transcript of the Podcast

00:00:00 Intro
Welcome to Health Biz Talk, the industry’s leading podcast that brings you today’s top innovators and leading voices in healthcare technology, business and policy. And here’s your host, Tony Trenkle, former CMS CIO and health IT industry leader.

00:00:12 Tony
Hello, I’m pleased to introduce our two speakers today. We have James Lukenbill. James is the Strategic Product Manager for Analytics at Optum State Government Solutions. With over 20 years in healthcare analytics and a doctorate in quantitative methods, James leads efforts to deliver advanced AI and machine learning solutions for state government clients, driving innovation and actionable insights in Medicaid and public sector health care. Karl Schelhammer is a Senior Director for AI/ML Engineering for State Government Solutions, also at Optum. Karl is an AI and data science leader with over 15 years of experience in turning machine learning innovations into customer-focused products. With a PhD in physics and a track record of guiding cross-functional teams, Karl blends technical expertise with strategic vision to deliver measurable business impact. Known for championing customer-centric innovation, Karl helps organizations transform AI capabilities into real world value. So welcome, gentlemen.

00:01:33 Karl
Thanks, Tony. Pleasure.

00:01:36 Tony
So we’re going to kind of divide this conversation into three sections. We’re going to first ask you both for a broader discussion on AI. Then we’ll get into some real world use cases that both of you have looked at and have implemented. And then finally we’ll get into key considerations and looking ahead. So Karl, I’m going to start with you, and then James, you’re certainly welcome to chime in. One thing that we often discuss is that AI is looked at in very broad terms, but you both work with state health and human services agencies, so what does it actually mean for them and why is it becoming so important?

00:02:26 Karl
Okay, thanks again, Tony. Pleasure to be here. I’ll take a crack at that one. So one of the reasons why it’s so important is because Medicaid applications are becoming a lot more complex than they were in the past. I think it’s due to several factors. For one, there’s been an explosion of policies and rules, and it complicates the logic that the systems need to administer. And also, because of the nature of the way the systems are administered, the state and federal governments share responsibilities. It leads to lots of custom implementations in order to maintain compliance. Right. So it’s hard to find a one-size-fits-all solution, and it creates a lot of standardization challenges. So for development teams like ours, there’s a lot to keep up with. Another thing is that the systems have very high enrollment. So during the COVID pandemic, due to things like job losses, enrollment peaked at around 93 million people.

And currently it sits at around 77 million. So it’s difficult to scale up to a population this size, especially when we talk about some of the use cases that we’ll get into in this discussion. So AI can help out a lot there. On the plus side, I think it’s a massive opportunity to consolidate and simplify systems like this. AI basically helps at every single level of Medicaid implementation to improve access to information as well as improve user experience and generally speed time to value. James, is there anything you want to add to that?

00:04:04 James
Sure. So I think there are a couple of fronts. In Medicaid, a lot of these systems are 40 years old, so they’re running COBOL on mainframes. And as these systems near retirement, the people that know how to code on them are nearing retirement too. So there are very few people left who really know how to code in COBOL or maintain these systems, and the people that grew up around these programs are starting to retire. So in the last couple of years there’s been a really great opportunity with AI and large language models, specifically in accelerating the retirement of these systems, recoding them from COBOL to more modern languages using LLM methodologies, but also helping these health and human services agencies with agentic AI, things like a virtual investigator or virtual analyst that can supplement the state staff in order to help them do more with less.

00:05:22 Tony
I think that’s a good point. As a number of people retire from these state health agencies, and obviously we’re in a period that’s financially difficult for many states, what are your thoughts in terms of how to best get people to understand AI in a broader sense, given the fact that many of them have probably had very little experience with it?

00:05:54 Karl
I’ll take a crack at it. I mean, I think it’s important to understand that the technology is a benefit. It’s fundamentally disruptive, right. But I think we’re just moving in a direction where it’s going to be part of the way we use technology. It’s going to be part of the way we do our jobs, right? And so it’s going to be important for people to really understand what that means for them in their job role, so that they’re building the skills that they need to thrive. And that’s going to be important as it relates to the way we develop our features for our beneficiaries as well.

00:06:31 James
Right.

00:06:31 Karl
There’s a whole new world of technology that can now come to bear to make our citizens lives easier and more efficient. And so I think that’s going to be one of the big themes that we see as we modernize and update these systems.

00:06:48 James
And as states adopt these systems, they’re very concerned with governance, of course. There are many scenarios that people worry about, you know, like the HAL 9000 taking over your life. But what they’re seemingly missing is the upside of this. And I think just a few months ago states thought that they could forestall massive adoption of AI in their agencies.

But more and more of their employees use AI on the side of their desks, and even when you do a Google query now, you are using AI. So as people become more conversant with it, more familiar with it, more comfortable with it, I think state employees are going to demand that their supervisors get on board and adopt safe ways to use AI. So enterprise technologies like enterprise ChatGPT or Copilot that wall off their data and allow them to experiment and have some intellectual freedom, without the threat of doing something off the side of their desk like dumping some PHI into their own ChatGPT, for instance.

00:08:16 Tony
Right. And I guess another point to make about this is, with these state health agencies, obviously each state is different, and each state is looking at how to regulate or not regulate AI in different ways. So some of the stuff that you’re going to have to deal with with the states, both the leadership and the staff, is the hype around AI, which is both positive and negative. I mean, people hear about hallucinating, they hear about biases, they hear about concerns that AI is going to replace positions. So there’s a lot of things that are being talked about. Some are true, some are partly true, some are total misconceptions. But you’ve both been involved with AI and with the states for a while. So I was wondering, maybe Karl, you could start with talking about some of the misconceptions you’ve seen and some of your thoughts about it.

00:09:16 Karl
Yeah, let me think about it. I think there are two key themes. The first one kind of goes back to what James was saying, that AI is some kind of boogeyman that’s going to take over the world. The second theme is that it’s like an all-powerful butler that’s going to automatically be at your beck and call. And I don’t think either one of those ideas is really quite right. The truth is that it’s fundamentally disruptive, but at the same time, it holds enormous potential to empower people. And that’s a really important point to make. It’s not categorically going to take over jobs, but it’s definitely going to get integrated into the work that you’re doing right now. Right. So the people that get out ahead and learn what it’s good for and what it’s not good for in their respective roles, the ones that learn how to leverage it well, they’re the ones that are going to be well positioned to thrive as more of this work becomes augmented by AI. So it’s not something to be scared of. I think it’s something to be embraced.

00:10:20 James
And I think that the AI models are coming along very quickly in terms of the ability to give answers without hallucinations. The technology is improving, but our ability to prompt the models has improved too. People are becoming much more sophisticated in their understanding of how to prompt a model so that it gives good results as they get comfortable with it. As you get comfortable with any technology, you learn how to use it better.

So I think the models are getting better, they’re hallucinating less, and people are also understanding how to use them, understanding parameters such as temperature that allow you to tune a model so that it’s more creative or more of a just-the-facts-ma’am type of model. So people are learning, and the technology companies, OpenAI, the Gemini models, all of those folks, are learning too how to produce better models. So it’s moving very quickly, but I think everybody’s learning how to adopt these things together.
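To make the temperature point concrete, here is a minimal sketch of how that parameter is typically exposed, using the OpenAI Python SDK as one example; the model name and prompt are illustrative, not a description of Optum’s setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str, creative: bool = False) -> str:
    # Lower temperature -> more deterministic, "just the facts" answers;
    # higher temperature -> more varied, creative phrasing.
    response = client.chat.completions.create(
        model="gpt-4o-mini",               # illustrative model name
        temperature=1.0 if creative else 0.1,
        messages=[
            {"role": "system", "content": "Summarize the policy text factually."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```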

00:11:41 Tony
I think one of the ways to deal with some of these misconceptions is to work with a trusted partner. And we talk a lot about that. When I was in the government, we looked for certain players in the game, whether it was a vendor or a health system or others, who we could work with as a trusted partner. And not specifically with your company, but what do you both see as the characteristics of a trusted partner in helping states deal with some of the hype and misconceptions? We can start with you, Karl, I guess.

00:12:18 Karl
I think responsible implementation is going to be a key one. Right? Especially with how fast you’re able to move with the technology, anyone can throw together a feature or a demo, and they’re very convincing. But is it really what you need to implement? Has it been thought through? How does it impact disparate groups? Right.

So those sorts of things are where partners build trust. Because even with these pre-trained foundation models, you still need a lot of rigor around capability analysis, like understanding the kind of metrics that you’d implement to make sure that it’s performing well. Let’s say before you pass a POC through a process that’s going to put it in your customers’ hands or your citizens’ hands, you need to understand how to implement robust monitoring and fail-safes around those kinds of capabilities, so that if something goes wrong, you already have everything in front of you that you need to troubleshoot it and get your capabilities back up and running for the sake of your citizens. You also need to know best practices about how to implement with AI agents. Right, which is going to be a big theme this coming year.

And already it really is, right. You don’t want to implement something that seemed like a best practice a year ago, and then the next thing you know you’re pigeonholed into technology or architectures that will be hard to undo. So that’s where I think thought leadership comes into play a lot. And we put a lot of thought toward those sorts of considerations so that we’re setting ourselves and our customers up for success. But anyway, those are my thoughts. James, what do you think?

00:13:59 James
Yeah, and I think those thoughts are spot on. We’ve thought a lot about governance, just because the models that we use in our daily business could have differential impact on people. So we have a machine learning review board that reviews all the models before they go into production. And we have to have exhaustive evidence on the predictive power and the efficacy of the model, as well as, if it is a predictive model, how it’s not going to be biased, or, if there is some bias, making sure everybody understands that bias so that it’s deployed correctly. So that’s a key thing.

The second thing about our organization, which I think is very important, is we’re a health services company. So we use these models, we’re developing these models all day, every day for our own use, testing them, as Karl was saying, through the MLRB. So we’re really eating our own dog food, so to speak. Whereas some of the Silicon Valley startups, kids right out of Stanford thinking healthcare data looks cool, let me get some VC funding and see what I can do with this, they’re not going to have the DNA that we do.

00:15:32 Tony
Yeah, I think both your responses were great, and I think one of the things people who are listening to or watching this podcast are going to be asking is, okay, this sounds great, but where are we doing this today? What are some real world examples you can provide that say this is actually happening today? It’s not just a concept or a pilot, it’s something where organizations at the state level are actually doing this to improve their workflow, to improve how they’re working with the members, and a number of other things. So we’ll start with James. What are some of the areas where AI is actually being used today?

00:16:18 James
Some of the really exciting things that we’re doing, I’ll take one use case like fraud and abuse detection. We’ve written an AI capability that takes in the state policy and essentially writes SQL code to detect variances from that state policy. State policy changes all the time. So instead of going hunting through the code to determine, oh, does this conform with the current policy, we just flip a switch and ingest the new state policy, and then we have the new code that we can run. That’s one example. Another example is taking in data, so X12 data, HL7 data. We have an LLM capability that takes in specs. We have specs from our own trading partners for X12, for 837Ps for instance. So we have a capability that just reads those specifications, just reads the PDF, writes the configuration file, and then writes the Python or Java code to read that data in. We’re also doing some exciting things with a couple of features called virtual investigator and virtual analyst. One of the things that the states don’t have time to do is look at all the analytics that they already have. People say, hey, you want more fraud models, or you want more analytics, more pie charts or whatever, and they don’t have time to look at what they have now. So one of our capabilities goes through everything that they have and summarizes it, and then can set up alerts and update KPIs and things like that. So those are some of the things I’m excited about. Karl?
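As a rough illustration of the policy-to-SQL pattern James describes, a pipeline along these lines could prompt a model with a written policy and a claims schema and ask for a query that flags variances; the schema, prompt, and model name are hypothetical, not the actual Optum engine, and a human analyst would still review any generated SQL before it runs.

```python
from openai import OpenAI

client = OpenAI()

SCHEMA = """
claims(claim_id, provider_id, procedure_code, units, billed_amount, service_date)
"""  # hypothetical claims table, for illustration only

def policy_to_sql(policy_text: str) -> str:
    # Ask the model to express the written policy as a SQL check that
    # returns claims which appear to violate it.
    prompt = (
        "Given this table schema:\n" + SCHEMA +
        "\nWrite a SQL query that returns claims violating the policy below. "
        "Return only SQL.\n\nPolicy:\n" + policy_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```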

00:18:22 Karl
Yeah, spot on. I agree with all of that. I think I’ll just add maybe a note around insights. So one of the things that was tough, I think, in the years leading up to the present was that dashboards were typically very static, and they’re often served with static data. Right. So somebody had to go in there and look and see what’s interesting, and then that would get kind of pushed through a process that ultimately rendered it to users. But now with AI, you can look at a much wider swath of data and leverage AI language models to basically distill those insights in a way that’s aligned to different job roles within your application. Right.

And that’s where I think prompting actually comes back into play, because you can guide the models to look for things that are going to be interesting to different personas. And using automation now, you can feed those out much more dynamically into dashboards so that people have information that’s more timely, more current, and more relevant to them, because we’re personalizing it for those different personas. The other thing that’s happening in that same domain is dashboards are becoming a lot more interactive and controllable. It used to be that you’d set up your bar charts, your pie charts or whatever. Right. And somebody had to go in and.

00:19:33 James
Do a lot of work.

00:19:33 Karl
And if you wanted to change what that looked like. Now natural language interfaces are coming right to that plotting interface, so you can render those visualizations in ways that make sense to you. And all of those are going to allow people to get better information in a more timely manner. And ultimately that’s going to lead to better decisions that hopefully better affect patient outcomes.
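To sketch the persona-tailored insight idea, the same metrics can be distilled differently for different roles by changing the prompt; the personas, metric names, and model are hypothetical examples, not the product’s actual implementation.

```python
import json
from openai import OpenAI

client = OpenAI()

def persona_summary(metrics: dict, persona: str) -> str:
    # Same underlying numbers, different framing: a program director cares
    # about trends and budget impact, an investigator about outliers.
    prompt = (
        f"You are preparing a dashboard callout for a {persona}. "
        "In three bullet points, highlight what matters most to that role "
        "in these Medicaid metrics:\n" + json.dumps(metrics, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        temperature=0.3,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: the same weekly metrics rendered for two personas.
weekly = {"claims_paid": 182_000, "denial_rate": 0.071, "top_outlier_provider": "P-4821"}
print(persona_summary(weekly, "program integrity investigator"))
print(persona_summary(weekly, "Medicaid program director"))
```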

00:19:54 Tony
I totally agree. So I’m going to swing back a little bit to fraud, waste and abuse. One of the things I know from working with it for many years myself is there are a bunch of different places along the life cycle where there can be potential for abuse or fraud or even waste. And part of the challenge for an organization today is that not only are the states themselves starting to use AI, but a lot of the bad actors are also using AI. So you have people in all parts of claims processing or eligibility or other areas who are using it to counter anything that the states are doing. So what do you both see? I’ll start with you, James, in terms of your thoughts on that.

00:20:51 James
So again, we eat our own dog food. We have very sophisticated AI models that interrogate our claim stream to interdict claims when they look anomalous. Those prepay capabilities are benefiting a lot from AI techniques, instead of pay and chase. But the capabilities for both prepay as well as pay and chase are greatly enhanced by the use of AI. So, as I mentioned before, the policy-to-SQL engine that we developed, that same sort of thing could be deployed by any agency.

They could use it to keep abreast of their policy as it develops and make sure that the appropriate edits are in their MMIS or other claims processing system, and that they have the correct protections in place to make sure that providers are billing per policy and their MMIS is paying per policy. And then earlier I touched on the agentic AI angle. That’s going to be huge for states, particularly for fraud, waste and abuse investigation. They simply do not have the staff to look at all the leads they have. So it’s finding the needle in the haystack from all the information they have, prioritizing that for investigators, and opening cases for investigators. So not removing the human from the picture, but making humans much more efficient.
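One common pattern for that kind of lead prioritization, shown here only as a sketch with made-up features rather than the actual Optum models, is to score provider billing profiles with an anomaly detector and surface the most unusual ones to investigators first.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-provider billing features aggregated from the claim stream.
providers = pd.DataFrame({
    "provider_id": ["P-1001", "P-1002", "P-1003", "P-1004"],
    "claims_per_member": [3.1, 2.8, 14.9, 3.3],
    "avg_billed_amount": [120.0, 135.0, 610.0, 110.0],
    "pct_high_cost_codes": [0.05, 0.07, 0.42, 0.04],
})

features = providers.drop(columns=["provider_id"])
model = IsolationForest(random_state=0).fit(features)

# Lower score_samples => more anomalous, so negate it for ranking.
providers["anomaly_score"] = -model.score_samples(features)

# Higher anomaly_score => more unusual billing profile => earlier in the queue.
print(providers.sort_values("anomaly_score", ascending=False))
```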

00:22:51 Tony
Right. Karl, any thoughts?

00:22:53 Karl
Hard to add on to that actually, but I would just reiterate what James said about the agentic experience. Right. I think it’s going to allow people to just work at much larger scales because the tasks that they would do today become agentic tasks and then they’re more focused on what to do about what they’re finding rather than digging through all of that minutia.

00:23:15 Tony
Yeah, I think that’s certainly an important aspect of it, the fact that AI allows you to move on from some of that digging. It does a lot of that work for you and allows the different state agency officials and others who work in the fraud area to really focus on individual cases at a much more detailed level without having to do a lot of the initial digging themselves. One other area, if we want to look at more detailed use cases, is of course the members. There are a lot of ways that organizations deal with their members. We certainly have portals today that we didn’t have years ago, but we also have contact centers where members can reach out in many ways, whether it’s email or telephone or text or other types of communication. But I think it’s very frustrating to a lot of the members. And I was wondering if you had any thoughts, Karl, in terms of some of the things that Optum’s looking at for helping the members who have to deal with the challenges of the healthcare system.

00:24:44 Karl
Yeah, I think you said it right. It can be very frustrating. So let me give you a few examples of how we’re addressing member outcomes, and the first one goes back to the contact center. Think about it like this. Imagine you or a loved one having a serious health issue, and you can’t get care because there’s no one available to provide, let’s say, a translation service to a non-English-speaking person. Well, that’s a really bad experience for someone who needs to get help urgently. Right.

So now, using AI agents, one of the things you can do is automatically provide on-demand, voice-enabled translations for beneficiaries, and that means they’re going to be able to get the information they need to get care faster and more effectively. And that’s going to be really important, because presently over 20% of US residents over age 5 speak a language other than English at home, and that’s doubled from 40 years ago. Right. So that’s just going to continue to go up against a system that’s already under a lot of pressure. Right. And states are already offering program information in dozens of languages, sometimes up into the hundreds. So multi-language is just not optional in today’s world, and it doesn’t scale well using traditional means. So this is where we can really lean in using AI to fundamentally change the way that non-English speakers in particular are able to access care, by making it more equitable and more accessible. James, what do you think about that?

00:26:23 James
Yeah, I think that’s a great answer. One of the things that we’ve seen is also on eligibility applications, having the AI fill in a lot of the information that it knows about the member from the database. So instead of the very annoying thing of typing your address into the interface six different times when you’re filling out a form, it will be smart enough to derive that information and then have the member check it, essentially. The goal there is to reduce eligibility application time from, say, 30 or 45 minutes down to just a few minutes. So I think that’s going to really help serve our constituents a lot better.
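A toy sketch of that prefill idea: fields the system already knows are populated from the member record and flagged for confirmation rather than retyped. The field names and record shape are hypothetical, purely to show the flow.

```python
from dataclasses import dataclass

@dataclass
class FormField:
    name: str
    value: str | None = None
    needs_confirmation: bool = False   # prefilled values are confirmed, not retyped

ELIGIBILITY_FIELDS = ["full_name", "address", "household_size", "monthly_income"]

def prefill_application(member_record: dict) -> list[FormField]:
    fields = []
    for name in ELIGIBILITY_FIELDS:
        if name in member_record:
            # Known data is prefilled; the member only confirms it.
            fields.append(FormField(name, member_record[name], needs_confirmation=True))
        else:
            # Unknown data is left blank for the member to enter.
            fields.append(FormField(name))
    return fields

# The member confirms two known fields and only types the two unknown ones.
print(prefill_application({"full_name": "Jane Q. Member", "address": "123 Main St"}))
```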

00:27:24 Tony
Right, that’s certainly true. Let me circle back to both of you; first I want to circle back with Karl. One of the things you talked about is being able to use AI to work with the members in various languages. And I think another aspect of that is it would help the healthcare system build trust with the members as well, which is a big problem, particularly with the payers. They have a lot of issues around members and trust. So this sounds like another area where a trusted partner could help in making some of this better for the members and building their relationship with their health plan or health system.

00:28:08 Karl
Yeah. You know, I think anytime the patient has a good experience, it’s going to build trust. It’s really as simple as that, if you want to dial it down to the basics. And I think those are the sorts of things that we’re thinking about: how do you reimagine the experience of interacting with or dealing with Medicaid systems, which can be complex and frustrating for all the reasons.

00:28:31 James
Right.

00:28:32 Karl
You know, actually, it reminds me of another example. Another one of the things we’re working on is in the assessment space, and it also has to do with experience and trust. The way that this work happens today is, you know, imagine a nurse interacting with a patient. Right. The practitioner collects data, and the goal is to basically get to a script or a plan, based on that information, to decide how the patient is going to be treated. It’s cumbersome, though, because that provider has to drive the conversation and listen for clues about what the actual health issue is, all while filling out forms that are required to treat the patient and remain compliant. It’s just a lot of cognitive load for one person to carry when they’re having to do this kind of work.

So one of the ways we want to innovate here to build a better experience, which builds trust, I think, is to allow AI agents to basically listen in on a conversation like that, with the end goal of automatically filling out the forms as the two individuals are having the conversation. So it’s basically taking away some of that busy work, and what this does is allow those practitioners to put more of their attention on the patient, so they can better understand them and empathize with them. And I think that means they’re going to be able to perform better, more accurate assessments because they’re more focused on the patient. Ultimately, that’s the sort of thing that drives better outcomes, right?
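To illustrate the form-filling idea, here is a minimal sketch that extracts structured answers from a visit transcript so the practitioner only has to review and correct them; the assessment fields and model name are hypothetical, and this is not the actual Optum capability.

```python
import json
from openai import OpenAI

client = OpenAI()

ASSESSMENT_FIELDS = ["chief_complaint", "current_medications", "fall_risk", "lives_alone"]

def extract_assessment(transcript: str) -> dict:
    # The agent listens to the conversation (here, a text transcript) and
    # drafts the compliance form; the nurse reviews the draft before filing.
    prompt = (
        "From this patient conversation, fill in the fields "
        f"{ASSESSMENT_FIELDS} as JSON. Use null when the transcript "
        "does not say.\n\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        temperature=0,
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)
```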

00:29:53 Tony
Absolutely. And, James, getting back to what you talked about with eligibility a few moments ago, of course we know that people move from Medicaid to other types of programs. Sometimes they move to the Marketplace, sometimes they move to a private health plan. And eligibility is one of the major areas of fraud in the Medicaid program. So I just wanted to give you a little bit of an opportunity to talk a little more about that and where you see AI helping. You mentioned a few moments ago how it could help the health system, but how can it also help the individual who is not trying to commit fraud or anything, but is dealing with a situation where they may be moving between healthcare systems?

00:30:42 James
Right. Yeah, I think the ease of that application can help people anytime. As Karl was saying, you make the member more comfortable with the experience, and it’s easier for them to do the right thing, you know, enroll in the state that they’re in, disenroll in the state that they’re no longer in, clean up their own stuff. As long as the systems are really easy to deal with, have a good UI, and really help the user experience.

I mean, a lot of the systems right now are just incredibly clunky or really unfortunate to use and don’t support mobile very well, so somebody opens one up on their mobile phone and they have to scroll around. All of those interfaces can be more easily recoded with AI now to make them more user friendly. So we’re seeing an acceleration both in the ability to automatically enter data, which improves the user experience, and in the ability to recode these systems much more quickly than was possible just a couple of years ago. So we’re hoping those enhancements help the user do the right thing. But also on the back end, the ability to consolidate databases with CMS and other payers will help us understand where there are coordination of benefits issues, and we can interdict those things on the back end too.

00:32:35 Tony
Great. All right, well, we’ve talked about AI in general for the health and human services agencies, and we got into some specific use cases. So if I’m an agency official and I’m starting to think more about where and how I should implement AI to help improve the various parts of the Medicaid program, for example, what are some of the ethical, governance or operational considerations I should keep in mind, besides what we’ve already spoken about? I’m going to ask Karl if you could start off.

00:33:13 Karl
Okay, I may have mentioned this already once, but the NIST AI Risk Management Framework is really the big one. And the reason is because AI risks are just different from traditional IT risks. Right? You have bias, you have explainability needs that aren’t straightforward, and models like this need specialized governance that you just don’t have in traditional software engineering, since it’s not a fire-and-forget kind of technology. Right. So there are lots of proactive performance measurements and things like that that need to go into these capabilities. The NIST framework provides four pillars, Govern, Map, Measure and Manage, and basically gives you everything you need across all levels of the organization to build the kind of systems and processes you need to be able to implement this kind of technology. And it aligns with federal expectations.

00:34:11 James
Right.

00:34:11 Karl
So if you’re a state government looking at these programs and you’re wondering where to start, I would start there and just make sure that you have all of the processes in place that are going to allow you to do that. Another one, I think, comes out of the Affordable Care Act. There’s something called Section 1557, and this is another really important one. It addresses discrimination in health programs that receive federal funding. So as part of any new feature development, builders have to be really careful to test outcomes, especially as it relates to things like protected groups, just to make sure that these capabilities aren’t biased, so that when they’re deployed and actually working for our citizens and users, they don’t do something that would otherwise be unfair. So I think those are two of the big ones to start with.
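As a simple illustration of that kind of outcome testing, the sketch below compares a model’s approval rates across groups using the common four-fifths rule of thumb; the data, group labels, and threshold are illustrative only and not a compliance standard.

```python
import pandas as pd

# Hypothetical model decisions joined with demographic data held out for testing.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   1],
})

rates = results.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()   # disparate-impact ratio across groups

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb; a real review goes much deeper
    print("Flag for the review board before deployment.")
```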

00:35:01 Tony
James, thoughts?

00:35:03 James
And I would say Karl did a great job of handling the governance and the MLRB structures that would probably be necessary within a state. But I would say, if I were a CIO, I would understand the reality that my people are going to be experimenting with AI. So I would want to get an enterprise AI subscription to keep my data safe, so that they’re just not dumping it into their iPad on the side of their desk. People are going to experiment. They’re incredibly curious. They go out to lunch with people who are experimenting with AI all the time, so they’re going to do the same. It’s really kind of impossible to suppress human creativity with this technology. So just accept the reality that folks are going to use AI and give them an appropriately safe tool so they keep your data safe, aligned with the NIST guidelines that Karl was talking about.

00:36:22 Tony
Yeah, and that brings up another issue that you kind of touched on, James, but I also want to get Karl’s take as well. I’ve looked a lot at what federal agencies are doing with AI, and a lot of them have appointed a chief AI officer. Sometimes it sits in the CIO’s office, sometimes it sits in other parts of the agency. Are you seeing state agencies organized similarly, or is the AI work mainly coming from the actual operational units, or is it a little bit of both?

00:37:02 Karl
I think I’m actually going to defer.

00:37:06 James
I think we’re seeing a little bit of both. In the agencies that I’ve talked to, some of them do have a chief AI officer who is working with the agencies, and sometimes that role sits within the CIO’s office. I don’t know. What are your thoughts, Karl?

00:37:33 Karl
Yeah, I actually don’t know.

00:37:39 Tony
Well, I think it’s early, unfortunately. I think, just like with any new technology adoption, a lot of times it springs from the grassroots, agencies try to put a regulatory and operational framework around it, and it takes a while for it to sort itself out. But who do you mainly deal with when you go to the agencies? Do you deal with the CIO’s office, or do you deal more at the program level?

00:38:08 James
We deal with both. We had some really interesting conversations just recently with a couple of state agencies, one that had set up an AI officer and another one where it was within the CIO area. But both of them were exploring really interesting use cases for AI. So it’s just like anything in Medicaid. The whole adage is, you’ve seen one Medicaid system, you’ve seen one Medicaid system. So it seems like there’s a diversity of models that people are adopting here.

00:38:57 Tony
Well, I think we’ve had a great conversation. We’ve touched on a lot of topics, but now we’re looking ahead. I wanted to ask you, Karl, to start off: where do you see the next big opportunities or innovations for AI in state health and human services? I don’t want to go out 10 years, but let’s say the next one to three years. What do you think?

00:39:20 Karl
It’s a good question. I think insights are going to be an area where organizations can improve. Right. So making those dashboards more relevant, because they’re using AI agents to filter and sort data and provide it in ways that make sense to the people who need to consume it. That’s going to allow everyone in an organization to make better decisions. Like James has said a lot today, finding fraud, waste and abuse more efficiently. Right. It’s a cat-and-mouse game, and the fraudsters have AI too. Right. So we’ve got to arm up and make sure that we’re building the systems of tomorrow so that we can beat them. Right. And eliminate the waste that’s in the system, and hopefully be able to prevent it in the future. And I think serving an increasingly diverse population.

00:40:09 James
Right.

00:40:10 Karl
It’s kind of like the right tool for the right time.

00:40:12 James
Right.

00:40:12 Karl
America is a very diverse country, always has been, and it’s only going up. Right. And so now you have AI that can rewrite a form into Spanish or Hindi or whatever it needs to be. Right. Or you have an AI that can sit on the phone and convert languages so that people can have a conversation that would have been impossible 10 years ago. Right. So those are just some of the themes, but I think that’s possible and within the scope of the next three years.

00:40:39 James
Great.

00:40:40 Tony
James.

00:40:41 James
Yeah, I think the trend is toward all of us becoming kind of like meta-analysts, operating at a higher plane than we existed at before. It’s kind of like meta-analysis in science, where scientists will look at all the studies and then infer the insights from all that information. The same goes for analytics, and health analytics in particular. I think our systems are going to become more and more agentic and are going to take us out of the day-to-day of producing pie charts and graphs and all that kind of stuff, so that we’re really just interpreting what it all means after it’s served up by the agentic processes we have. That’s going to allow us to hopefully operate at a higher level of value and really deliver better value to policymakers.

It really cuts through all of the fog of excessive information and allows us to distill that information better. So that’s really what I’m hoping. And then from the IT end, I think you’ll have people supervising agents that go through and recode things for them, testing it to make sure that it’s okay, implementing systems that do automated testing. So just, in general, moving up the value chain to being a supervisor of technology.

00:42:30 Tony
Great.

00:42:30 James
Okay.

00:42:32 Tony
Okay. We’re going to now turn to a couple of rapid fire questions. So Karl, I’m going to start with you first. In one sentence, why does AI matter for Medicaid right now?

00:42:45 Karl
Because the work is exploding in complexity and volume, and AI is the only realistic way to reduce paperwork, deal with risk, and keep millions of people from falling through the cracks.

00:42:57 Tony
Great. Now I’m going to have a second one for you. James, what’s the biggest myth you hear about AI and state government?

00:43:04 James
Well, one thing people fear is that it’s going to replace a whole bunch of people. We don’t see that occurring, especially in state government. I think people are going to move up the value chain. And then the other myth is that it’s going to fix everything, that it’s going to be a panacea and we’ll be able to leave at noon every day. I don’t think either is true. I mean, it’s a promising technology, but it’s just another technology.

00:43:38 Tony
Right? Exactly. So, Karl, to finish off, if a state wants to start small with AI, where should they begin?

00:43:48 Karl
Pick a narrow pain point. Something simple, like summarizing case notes or putting a natural language question interface on top of a dashboard, and then wrap that in good governance so that people can see value pretty quickly without taking on some kind of huge risk.

00:44:05 Tony
Great. Well, thank you both for this very interesting conversation, and best of luck to you both.

00:44:13 James
Thank you, Tony.

00:44:14 Tony
Thank you.

