In this insightful episode of HealthBizTalk, host Tony Trenkle, former CMS CIO, sits down with Amber Nigam, Co-Founder and CEO of basys.ai, to unpack how agentic AI can reshape some of healthcare’s most persistent administrative challenges—including prior authorization, utilization management, medical review, risk adjustment, and patient experience.
Transcript of the Podcast
00:00:00 Intro
Welcome to Health Biz Talk, the industry’s leading podcast that brings you today’s top innovators and leading voices in healthcare technology, business and policy. And here’s your host, Tony Trenkle, former CMS CIO and health IT industry leader.
00:00:16 Tony
Hi, I’m pleased to introduce Amber Nigam. He’s a Boston-based entrepreneur and the Co-founder and CEO of the agentic AI company basys.ai. With over 15 years of experience across healthcare, AI and data science, he focuses on building practical, policy-aligned AI systems for payers and federal programmes to reduce administrative burden and promote transparency in health operations. He holds a graduate degree from Harvard, has taught at MIT and writes for the Harvard Business Review and Forbes. So welcome, Amber, to the podcast.
00:00:56 Amber
Thank you so much for having me, Tony.
00:00:59 Tony
Thanks. So the way we generally start this off, Amber, is we ask you a little bit more about your background, how you got started in this field. I know you told me you had an experience with your father. So maybe you might want to just speak a little bit about that at this point.
00:01:18 Amber
Yeah, absolutely. So growing up, my dad had diabetes, and over a period of 10 to 15 years he developed certain other comorbidities as well. And as I navigated the health system, or healthcare in general, because I was his caregiver, I got to understand there are so many gaps that exist in the system. Not because people are not interested in doing work, or because people are lazy; I think they have more work than they can practically do in a day. But we are not using technology in the right way. And when I say that, I’m not preaching AI. I’m not saying that we should be using AI without truly understanding how it can be used.
But if we triage well, we essentially can solve, let’s say, 70 to 80% of the problems that are administrative in nature. They are less clinical; they do not require a clinician. So that’s how I got started in healthcare. I did have, like you mentioned, a computer science background. I had my undergrad degree in computer science, I did my grad school degree at Harvard, and I also instructed a course at the intersection of healthcare and AI at MIT. So that kind of provided me further, I would say, insights into how different processes lead to the bottlenecks that we have in healthcare.
So that was another experience that I had. And then over a period of time I’ve also done research with different organisations like Mass General Brigham and Mayo Clinic. As you would know, Mayo Clinic is also one of our investors, along with Lilly. So that’s another thing I observed: when you are working in healthcare, you cannot be working in silos. You need to have a neutral, transparent perspective. If you want to make a big change, you need to be able to align incentives. So on that part, when I did start the company basys, we ended up raising money from the largest healthcare company in the world, which is Eli Lilly, and the largest health system in the US, which is Mayo Clinic. And now we also have investment from an insurance company, CareFirst. So that’s my journey in a nutshell.
00:03:35 Tony
Oh, great, great. And it’s good to. I know all those companies and they’re great companies to be partnered with. Of course, CareFirst is our local Blue Cross company here in Maryland, so I’m very familiar with them. So we’re going to turn it just. I’m sorry, go ahead.
00:03:54 Amber
I was just saying, absolutely. I feel all these organisations have given us different lenses to essentially understand healthcare and understand the policies, the medical charts. We have learned a ton from physicians at Mayo Clinic, learned a ton from Eli Lilly, and also learned a ton on the policy side from CareFirst as well. So happy to share more about my experience as we go further ahead. Great.
00:04:18 Tony
Well, we know AI is in your company’s name, and obviously there’s a lot of hype around AI, not only for healthcare but in every sector of the economy. But your focus has been agentic AI, and you’re using it to work on issues like prior authorization and utilisation management, which I know a number of companies are now looking to bring back inside the company, as opposed to having an outside vendor work on it. But one of the things I wanted to ask you about is: what is the difference you see with using agentic AI, as opposed to the more traditional AI that we’ve used over the past number of years?
00:05:07 Amber
Yeah. So the general AI that we used to apply to different problems was very specific to the problem that we were solving. So, for instance, you might be working on prior authorization for musculoskeletal, or on prior authorization for cardiology, versus working on risk adjustment. So typically you’ll have very specific AIs catered to each of these problem statements, and then you can, of course, slice and dice in a way that you are able to address a particular domain or subdomain using a particular version of AI.
Agentic AI is different in the sense that it has the capacity to answer and optimise certain operations, certain processes, without actually focusing on one single problem. So that’s the advantage of using large language models, where you understand the whole context and so on and so forth, and you’re able to solve a large, I would say, swath of problems. Having said that, and taking a step back: if there is an AI which is so generalised that it can start solving any and every problem, the problem becomes that you’re not specific enough. So in a way we have zoomed out as we start using agentic AI, and now we are zooming back in, because we want to make sure we are specialised enough.
And the answer probably will lie somewhere between what we used to do earlier and what we are building now. So for instance, agentic AI is very generalised. There are all these large language models, like OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini; they do a decent job of demonstrating the applications of AI in different domains. But when you’re actually talking about applying these technologies in real-world scenarios, and you’re actually trying to resolve prior authorizations for a cardiology procedure versus, let’s say, a drug like a GLP-1, you start noticing that there are gaps. And unless you fill these gaps, these technologies are not enough.
So even though we work on agentic AI, we have defined our own guardrails, and we have also built our own models to check agentic AI. So, in summary: agentic AI has increased the breadth of the problems that could be solved by AI, and now you can approach almost any problem using a large language model. But you also have to be conscious that you need to spend enough time defining the right guardrails, and you need to make sure you’re not putting patients at risk, because that’s what healthcare is all about. We are here to provide better accessibility and affordability for patients. And that’s when I will say agentic AI is good, but we need to take it with a grain of salt.
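The pattern Amber describes, an LLM proposes but deterministic guardrails dispose, can be sketched in a few lines. This is a hypothetical illustration, not basys.ai’s actual implementation: the field names (`evidence`, `confidence`), the required-evidence set, and the 0.9 threshold are all invented for the example.

```python
# Hypothetical guardrail: only a well-evidenced, high-confidence LLM
# approval is automated; everything else, including every proposed
# denial, is routed to a human reviewer rather than auto-denied.
REQUIRED_EVIDENCE = {"diagnosis_code", "clinical_notes", "policy_citation"}

def guarded_decision(llm_output):
    """Route an LLM's proposed decision through deterministic checks."""
    evidence = set(llm_output.get("evidence", []))
    confident = llm_output.get("confidence", 0.0) >= 0.9
    if (llm_output.get("decision") == "approve"
            and REQUIRED_EVIDENCE <= evidence   # all required evidence present
            and confident):
        return "auto-approve"
    return "human-review"

print(guarded_decision({"decision": "approve", "confidence": 0.95,
                        "evidence": ["diagnosis_code", "clinical_notes",
                                     "policy_citation"]}))  # auto-approve
print(guarded_decision({"decision": "deny", "confidence": 0.99,
                        "evidence": []}))                   # human-review
```

The key design choice, echoing the conversation, is that the model can only widen the fast path (auto-approval); anything uncertain or adverse defaults to a clinician.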
00:07:57 Tony
Yeah, I can understand that. And one of the challenges when technology companies get more engaged with the healthcare industry is not understanding it both from a macro level and from a micro level. I think you have to understand it at the macro level to understand the various stakeholders and how the processes flow, not only within an organisation but across the ecosystem. Yet at the micro level, you really do need to understand the role of risk adjustment and other types of operations that run not only the health plan but also the providers, and basically the patient is part of that as well.
00:08:43 Amber
100%. And I feel, like I previously talked about as well, when you’re working with, let’s say, health plans, even though you’re working with health plans, you have other stakeholders as well, like health systems and patients, members, like you talked about. And unless you have a full understanding of how your decisions will affect providers and even the members or patients, I don’t think you’ll be able to do a good job if you are working just in a silo. So for instance, for us, working with the health plans without understanding the context of electronic medical records on the provider side, or understanding how pharma companies think about drug distribution and so on and so forth, and how patients are affected by the ramifications of your algorithms, is not advisable.
00:09:31 Tony
Yeah. One of the things that I’ve noticed throughout my career is the interface between the technology folks and the business policy operations team. And if the former, the technology folks, don’t understand enough about the business processes and the various linkages, they end up developing a technology solution, but it doesn’t solve all the business problems. Same with the business side. If they don’t understand what we used to call at IBM the art of the possible from a technology standpoint, they tend to focus more narrowly, without understanding what technology can actually do to help them change or streamline their operations.
00:10:18 Amber
100%. And one thing I would also add to your point is: when we think about technology, when we think about the business or the operations aspects of healthcare, the other aspect is also clinical science. If we do not understand how physicians, how clinicians take decisions while they are working on a specific case, and if we do not understand the whole triaging framework that they follow, and there are different specialties, so it’s almost like defining a framework when you’re working on a problem like prior authorization or even risk adjustment, and starting to think about how different physicians and clinicians solve those problems from their point of view. Because unless you are doing that, you won’t be able to define the right guardrails for agentic AI. So I feel that is an extremely crucial part of solving any healthcare operations problem.
00:11:16 Tony
Absolutely, absolutely. So let’s drill down a little bit when you talk about some of the provider challenges. Of course providers strongly dislike prior authorization, and they’re not always crazy about utilisation management either. So when we look at applying your technology to those areas from a provider standpoint, what are you providing to give them, you know, a better experience? Not just making it more efficient and effective, but also, how can it work better for them, as opposed to being a bottleneck or even a blockage in a lot of ways?
00:12:00 Amber
Yeah, I would say there is a framework that has three pillars. One is, as far as providers are concerned, I don’t think they like to receive a rejection even when they have submitted all the right documentation and everything is checking out and so on and so forth. So one way we help them is by auto-approving requests which can be quickly auto-approved. So that’s one point of view. The second one is, given that we aim to process as much information as we get, we try to always improve our recognition, like character recognition technology, and try to connect different dots which, let’s say, sometimes humans might not. So in certain cases where humans would have said, essentially, “you know what, we require X, Y and Z information,” they probably did not check in the right places.
The AI has essentially learned from clinicians to understand how there might be variance, when one clinician thinks about certain things in a prior authorization process and looks for certain information versus another; we have tried to reduce that variance. The second part is clearly defining the rationale of the decision making, making sure the decision tree is very clearly communicated to the provider. So that’s the second part, where we help providers take a calculated approach. Let’s say if something is auto-approved, they do not need to worry about that. But if there is missing information, we quickly find that gap, so that providers do not have to wait one week before hearing, “you know what, we need you to also provide this.”
We are able to capture all that information and quickly tell them these are the gaps that we have observed, so how about you provide that information? And the third and final aspect is, given that we work with insurance companies but also work closely with health systems, and we have trained our algorithms using Mayo Clinic’s data. We received more than 10 million patient records and so on and so forth. We have also received strategic advice from pharma companies, and we also work with NCQA and CMS on the regulatory side. So given that we understand what we understand, and given the transparency that the platform brings, providers are able to see right away all the things that they might not otherwise have access to. So I would say that is the third most critical point: just being able to understand how this platform works. It’s a transparent platform. It will not just auto-deny you anything. Of course, no one can auto-deny in this age with what has happened. But being that transparent, neutral vendor is something that we take very seriously.
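The first two pillars, auto-approving complete requests and immediately itemizing gaps instead of letting providers wait a week, amount to a triage step that could be sketched as below. The checklist fields and the flow are hypothetical, for illustration only:

```python
# Hypothetical documentation checklist for one procedure's policy.
POLICY_CHECKLIST = ["member_id", "cpt_code", "diagnosis",
                    "failed_conservative_therapy"]

def triage(request):
    """Approve a complete request immediately; otherwise return an
    itemized list of gaps so the provider can respond right away."""
    missing = [field for field in POLICY_CHECKLIST if not request.get(field)]
    status = "auto-approved" if not missing else "needs-info"
    return {"status": status, "missing": missing}

print(triage({"member_id": "M123", "cpt_code": "29881",
              "diagnosis": "M23.2", "failed_conservative_therapy": True}))
# {'status': 'auto-approved', 'missing': []}

print(triage({"member_id": "M123", "cpt_code": "29881"}))
# {'status': 'needs-info', 'missing': ['diagnosis', 'failed_conservative_therapy']}
```

The point of the sketch is the shape of the response: the answer to an incomplete request is never a denial, it is a specific list of what is missing.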
00:14:44 Tony
Right, right. And what about the CMS prior auth rule? How do you see that changing the game?
00:14:53 Amber
I think that’s just amazing, because if you think about the gaps that exist in prior authorization, in access to different care and so on and so forth, I think one of the bottlenecks has been how different organisations, whether they are payers, whether they are providers, exchange data. But if you’re coming up with a platform, and if you’re coming up with a framework which every vendor has to follow, I think that’s a brilliant move that CMS has taken, and everyone has to get onto that bandwagon. But more importantly, I also feel this paves the way to a much better form of care. And we can practically start moving from a post-processing, post-payment way of finding certain plans and finding certain providers to saying, you know what, this can be approved at the point of care.
So I feel we are actually moving from that post-payment world to a prepayment world, where we will not just be auditing after something has happened, after three or four years; we will be able to take a call right now. So I feel this is a great initiative. But at the same time, I would also say that we still need to follow all the timelines. The vendors need to make sure that they are considering that post-payment to prepayment shift that is happening, and so on and so forth. But I’m overall quite optimistic about the initiative.
00:16:16 Tony
Right. I think some of this is as much cultural as it is, you know, process or technology. So I think one of the things that the CMS rule does is kind of break some of those cultural norms and, you know, for want of a better way of putting it, force some of these players to begin to change how they do business, particularly the payers.
00:16:42 Amber
I would say so. And some challenges are because of resistance from different organisations. But I also feel some of the changes are difficult to make happen because of how healthcare is structured, and in, let’s say, more remote rural areas where technology is not as widely adopted, it will still be a challenge. But having some sort of framework to follow and defining a timeline is still a decent, I would say, first step. Having said that, I do believe, and I would love to see, certain more specific scenarios in the framework that CMS has proposed, where we also talk about what happens to state-level Medicaid and what happens to rural areas that do not have as much technology adoption, and so on and so forth. How will we make sure that those places do not suffer from the lack of technology adoption?
00:17:39 Tony
Yeah, that’s certainly an issue. I think one of the other questions is whether it’s going to become a business driver for these organisations as opposed to a mandate. And if it happens strictly because of a mandate, then the question gets into how seriously CMS is going to enforce the mandate, and that can vary depending on who’s running CMS and some of the other priorities they have. So hopefully this gets to the point where businesses, both payers and providers, see it as a way to improve how they do business, as opposed to just saying, okay, we’ve got to meet this mandate by 2027, so we’ve got to do things a certain way. But that’s been a constant thing since I’ve been associated with CMS. It’s always this, you know, kind of carrot and stick, where you try to push the industry further until it gets to the point where it’s mainstream, and then the industry can really drive the change, because they see how it makes them able to better serve their customers and build their bottom line as well.
00:18:51 Amber
100% agree with that. And I’m also seeing CMS taking certain initiatives in bringing the broader industry together. I’m talking about the Chilli Cook-Off Challenge, for instance, that they are currently holding; we at basys.ai, by the way, are one of the ten finalist organisations, so excited to be leading that way. But I do feel such an initiative is very welcomed by the industry, because we get to share what we are doing on commercial insurance or even Medicare Advantage plans with pure Medicare, pure Medicaid. So it has been a learning curve, I would say, for us to understand how CMS thinks about things. And then CMS also came up with the WISeR approach for prior authorization. So I do see CMS taking certain initiatives, trying to revolutionise, in a way, how auditing is done, how medical bill review is done, and how we think about risk adjustment versus prior authorization and even utilisation management at a large level. So I feel quite positive about that. Of course, it remains to be seen how everything will play out, how we will go beyond a few headlines into operationalizing what we are doing right now as well. So I look forward to that.
00:20:06 Tony
Yeah, let’s take a step back and talk about medical reviews. As you mentioned, you applied to be part of the model, and I guess one of the questions I have for you is: obviously there’s been a lot of waste, fraud and abuse for years in the industry. Medicaid is probably the largest, but not solely; in Medicaid it’s more an eligibility issue, whereas in the other areas a lot of it is claims-focused. Where do you see AI making a difference? Because a lot of these organisations, CMS included, have invested a lot of money in both the front end and back end around claims, to be able to catch the bad actors, you know, before the process, and then also quickly catch the anomalies after the claim is processed. So what are your thoughts as to how you get them to the next level?
00:21:07 Amber
Yeah, I would first of all describe this as a three-way problem, or a three-pronged problem. One is on the eligibility side, like you mentioned, especially for Medicaid: capturing all the gaps and almost categorising them as waste, abuse or fraud and so on and so forth is one step. The second part is understanding how different claims happen, how different providers apply for the same, that’s a CPT code, and ask for pricing, and how there are different, I would say, qualifiers that sometimes they use which might be wasteful, abusive, et cetera. So thinking about that is one thing. And the third and final thing, which kind of connects all the pieces, is thinking about medical bills and medical records. So when you connect the dots and you’re saying, you know what, I’ll not just be looking at the claims, I’ll also connect them with the medical records, I’ll also connect them with the patient history, that’s when you get the richest data.
And what we see is, even though we can do a lot if we just have claim records, because you can still aggregate information up, bubble information up to, let’s say, a provider level, and you can try to figure out what waste patterns or abuse patterns you see emerging. But at the same time, unless you can connect it with the actual patient records, unless you can actually see what the patient has been through, it’s difficult to connect the dots with half-baked information. So I would say, as we think about finding waste, finding abuse, almost attributing some of it to be fraud, I think we also need to be mindful that we are not connecting the wrong dots; we are open to understanding why some of that waste is not intent-based. Because intent is a gating mechanism, I would say, between what is waste versus abuse versus fraud, and when there is repetitive abuse, it kind of becomes fraud.
And waste is something where, given everyone is under so much administrative burden, they could not lift their head up and understand there was something wrong with their documentation. Being very clear about that would be crucial, because we don’t want these experiments to bias providers in a bad way, such that they start turning patients away, turning members away. That, I feel, will be critical as we think about the whole fraud, waste and abuse space and medical records and medical bills.
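Amber’s point about bubbling claims up to the provider level and flagging emerging patterns could be sketched as a simple peer comparison. This is a toy illustration with made-up providers, codes, and amounts; a real system would use far richer statistics, and, as the conversation stresses, a flag here is a waste signal, not a finding of fraud:

```python
from statistics import median

# Toy claim lines: (provider, billing code, billed amount in dollars).
claims = [
    ("prov_A", "99213", 90), ("prov_A", "99213", 95),
    ("prov_B", "99213", 92), ("prov_B", "99213", 88),
    ("prov_C", "99213", 300), ("prov_C", "99213", 310),
]

def flag_outliers(claims, threshold=2.0):
    """Flag providers whose median billed amount for a code far
    exceeds the peer median; intent must be assessed separately."""
    by_provider = {}
    for prov, code, amt in claims:
        by_provider.setdefault(prov, []).append(amt)
    peer_median = median(amt for _, _, amt in claims)
    return sorted(prov for prov, amts in by_provider.items()
                  if median(amts) > threshold * peer_median)

print(flag_outliers(claims))  # ['prov_C']
```

Linking these flags back to medical records and patient history, the third prong Amber describes, is what would separate genuine waste from legitimate clinical complexity.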
00:23:45 Tony
Well, I know there’s administrative burden, but I will say that there are three players in this drama. One is the government side, the entity; one is the payers; and one is the providers. And I know from my experience that all of them have engaged technology companies to use AI to help themselves. So the government’s trying to make sure that there isn’t abuse occurring. The providers and payers are both looking to maximise revenue in a lot of cases. So yeah, there’s burden, but it’s also like three organisations using AI to compete against each other, and the patient, or the member, whichever role they play, is the one that gets left out in a lot of cases.
00:24:39 Amber
I would agree, and I would just share something with you: we are seeing a lot of appeals in the last couple of years compared to what it used to be, probably like three years ago. Because there are AI technologies that can pick up all the records and say to the providers, or even payers as well, you know what, we can challenge this on your behalf. We can just automate sending this email. You don’t have to worry about it. The regulators or the payers are the ones who have to respond to you. So that’s one thing that we have observed. And to your point, what we are also noticing is payers get worried about receiving these appeals, and then they start making all the rules that they have even more restrictive, so that they are safe from certain scrutiny and so on and so forth. And the patients, or the members essentially, are the ones who suffer in that process.
What we are also seeing is regulators doing the same. So as you think about CMS and so on and so forth, they’re using AI as well. I think the solution will essentially lie in a strategy where all these stakeholders come together. And I would take the analogy of how we think about the pharma industry, and I’m not saying that’s a solved problem, but sometimes you define price caps for drugs. So no matter what strategy, let’s say, a pharma company has for talking about how innovative the drug is, if there is an organisation like CMS that says, you know what, this is the price cap that we will have for X drug or X category of drug, then whoever negotiates pricing will negotiate something similar. So I think something similar has to be done for all these prior authorization processes, risk adjustment processes and so on and so forth. And it’s easier said than done.
And a price cap is a very different concept compared to, let’s say, prior authorization. Prior authorization happens for so many different domains, so many different specialties, and then there are subspecialties and surgeries, procedures, drugs. So it won’t be as simple, but it does start with creating some sort of framework which caps certain pricing, so that we are not looking at an exorbitant bill for something that should not have an exorbitant bill.
00:26:54 Tony
Right. And as you know, CMS with value-based purchasing over the years has tried that type of method, where they would, you know, pay just a certain amount for an operation or procedure. And, you know, they had mixed success with it. But from a patient side, I think one of the issues is patients don’t understand the bills. They don’t understand how the charging occurs. And I’ll be honest with you, I get different bills and it’s hard for me to understand them in a lot of cases too. And even when you call the insurance company or the provider or the site where the procedure is getting done, they don’t always give you a really good answer. So it seems to me, on the patient side, if you could provide AI to help them get better literacy and understanding of their options, and, even if they appealed, having, you know, a powerful AI tool to help support them in their appeal would be great as well.
00:28:00 Amber
I would say so. And to your point, I was speaking with a physician a couple of weeks ago, and he mentioned that he typically gets different notifications from the payer and from the provider on what he needs to submit and what he needs to pay for. And only once he has received at least three-odd notifications from both of them does he submit, does he pay those bills, because otherwise he’ll be paying twice. And there is essentially no record that reconciles all of this. So to your point, imagine an AI which helps patients not just be a part of the process of, let’s say, prior authorization appeals, but also clearly lets them know: you know what, you went for this procedure, this was the cost that your insurance is bearing, this is your copay, you just need to pay this, and I’ve reconciled information from 10 different documents coming from the payers and 20 different documents coming from the providers.
You just need to go to this link and pay. And this is very much possible, I think. It’s just because of how we have defined certain constraints and firewalls in healthcare, and of course some of those firewalls are absolutely required from a privacy standpoint. But if somehow we can get over these barriers, it will significantly help the patients. And it’s not even like we do not have the technology.
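The reconciliation step Amber imagines, collapsing many overlapping payer and provider notices into one clear amount owed per claim, is mechanically simple. A toy sketch, with entirely made-up claim IDs and amounts:

```python
# Toy notices about the same care episode, from payer and provider,
# including an outright duplicate bill.
notices = [
    {"claim_id": "C1", "source": "payer",    "patient_owes": 40.0},
    {"claim_id": "C1", "source": "provider", "patient_owes": 40.0},
    {"claim_id": "C1", "source": "provider", "patient_owes": 40.0},  # duplicate
    {"claim_id": "C2", "source": "payer",    "patient_owes": 15.0},
]

def reconcile(notices):
    """Collapse duplicate notices so each claim is paid exactly once."""
    owed = {}
    for notice in notices:
        owed[notice["claim_id"]] = notice["patient_owes"]
    return owed

print(reconcile(notices))                 # {'C1': 40.0, 'C2': 15.0}
print(sum(reconcile(notices).values()))   # 55.0
```

As the conversation notes, the hard part is not this logic but getting the 30-odd documents into one place past the privacy and business firewalls.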
00:29:28 Tony
We do, yeah, we definitely have the technology. I think it’s regulations, policies and other things; some of it is business imperatives at certain organisations. Sharing information sometimes they don’t see as a benefit; they see it as something that could be detrimental to the business. Whether we believe that or not is another story.
00:29:54 Amber
100%. I’ve also heard certain organisations say that if we provide X information, we have seen how different stakeholders game that into the equation that we have. And I feel many people consider it a zero-sum game. I think you’ll only consider healthcare a zero-sum game if you’re not considering the patient to be at the centre of this. I don’t think it’s about the providers, I don’t think it’s about the payers, it’s not about the regulators. I think all are working together for patients, or members as you’d like to call them. And I think these unaligned incentives exist because some of us somehow feel our perspective is more important, X perspective is more important than Y, instead of truly understanding why healthcare exists: it’s because we are here for patients.
00:30:46 Tony
I think part of the problem with prior auth is, if we’re still looking at the patient, I mean, we certainly know why prior auth is done by the payers, because they need it to help control their costs on certain procedures and tests and things that, you know, may be out of network or whatever. And we understand from the provider standpoint what they don’t like about it. But it seems to me like, in a lot of cases, the patient does not even know there’s going to be a prior auth needed until they get, you know, fairly well into the process. So if there could be a way that AI could be educational to them up front, that lets them understand the consequences of picking a certain insurance company, its relationship or lack thereof with their provider, and then where prior auth might fit in. It just seems like technically you can do all these things, but it would take a number of changes and things that the patient would have to agree to. But it would save a lot of paperwork for everyone and save a lot of challenges for the patient as they try to navigate their way.
00:32:04 Amber
I would say so. I would also say, while I truly believe in utilisation management, if we bring enough organisations together, I think it is there to address the $1 trillion problem that healthcare has: we overspend almost $1 trillion, and it’s not even like we have the best patient outcomes in the world. So I would say utilisation management is there for a reason, if we are objective about what we want to achieve out of it, and if, let’s say, the objective is to reduce administrative burden, to expedite approvals, to make sure that if there are any gaps, patients know what those gaps are and providers know what those gaps are. And the final perspective is, if you approve anything and everything, maybe the person who should receive an urgent request, an acute surgery, might miss out on getting that surgery, because there’s so much pressure on healthcare and there are not enough physicians, not enough clinicians. So that’s one perspective.
But I truly agree with your point: patients suffer because of all these opacities that exist in healthcare. I was not the patient, but I was a caregiver, and I had almost an anxiety breakdown. And even though I understood how technology works and whatnot, there were so many hoops I had to jump through to get to a particular standpoint. And I know a few things about healthcare, and that was me. And it’s the same, by the way, with certain clinicians and physicians as well. Of course, they know a few friends and can maybe go somewhere and get better treatment. But for a regular person like me or anyone else out on the street, it’s just a difficult process to navigate.
And I feel if AI could be used in this process to help patients understand where they are with respect to their prior authorization, if it’s denied, why it’s denied, what alternative pathways could be taken now, so that they are not overly concerned. Imagine someone who has, like, stage three cancer and is denied a particular request because they are not satisfying a certain criterion. What will they feel, what will their families feel, and so on and so forth? So just having that empathy is super crucial, super critical, because at the end of the day, and I’m going back to my original point, you’re serving patients, you’re serving members, and we should not forget that.
00:34:48 Tony
Absolutely. And turning back a little bit, you mentioned being a caregiver for your father, and one of the things I know you've been concerned about is the whole issue of bias in AI data sets, and more specifically racial bias. Can you talk a little bit about that, what you found, and what you're seeing today? Do you think it's improved over time, or is it about the same as when you first started looking at it?
00:35:22 Amber
Absolutely. One thing about bias is that it can inflate your accuracy. You might have biases in your algorithms, and you might take a very aggregate perspective on whole populations, whether of a particular race or ethnicity or gender and so on, and we all have those biases as humans. So to your point, Tony, bias has a big role to play in healthcare, and it's not very different from other industries; it's just that the repercussions in healthcare are sometimes much greater than in most other industries. Let me describe what I saw as I worked on different healthcare problems maybe a decade ago, versus five years ago, versus two years ago, versus what I'm seeing now.
A decade ago we had a lot of misconceptions about what accuracy means, how we calculate bias, how we calculate model drift and such. Technology was not ready; even statistical analyses were, so to speak, biased. Five years ago we started thinking about how to reason about biases and define a framework. A bunch of organisations came up with their own frameworks, some of them public organisations that shared things openly, and we started talking about governance and how to address those biases. Then, maybe two years ago, with the adoption of agentic AI, large language models, and generative AI, we were almost in a euphoric state where we thought technology could solve anything and everything. We went back to ground zero: we started treating bias as a liability and talking about how accurate this or that large language model is, and so on.
So I think we have a tonne of work to do. Not because we went back to primitive ways, but because we are right now trying to understand how the current technology works, how large language models work. In our own specific case, we have seen 15% to 40% hallucination in prior authorization, and many of those hallucinations end up propagating biases. For instance, if there is a patient with no mention of their ethnicity or gender, the model will typically pick the one that is most common. This is not very different from what bias looked like 10 years ago. So in a way we have improved: we have defined certain frameworks and come up with new approaches for catching biases and drift. But we are also back to ground zero when we feel super confident about what we are doing, when we take these large language models at face value without questioning them enough, and when we do not define the right guardrails. So that's one side of things.
The second side is that there are a few responsible organisations that are taking the role of defining guardrails around agentic and generative AI very seriously, and they define those KPIs. So while you measure your accuracy, your F1 score, and all the fancy jargon you can call out from data science and AI, you are now also accounting for biases, and for model drift in terms of bias, not just accuracy. So I am positive. But I would also like us to remember that we have to learn our lessons from the past; otherwise we'll be making the same mistakes all over again as we move from basic AI to machine learning to more advanced transformer-based models, large language models, and agentic AI. The first principles remain the same, right?
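The idea of tracking bias and drift alongside accuracy can be sketched as a small evaluation harness. This is an illustrative sketch only, not basys.ai's actual KPI code: the record format, the choice of a demographic parity gap as the bias metric, and all names are assumptions.

```python
from collections import defaultdict

def evaluate_with_fairness(records):
    """Score binary predictions on accuracy/F1 plus a simple bias KPI.

    Each record is (group, y_true, y_pred) with 0/1 labels. The
    'parity_gap' here is an illustrative fairness metric: the spread
    in positive-prediction (approval) rates across groups.
    """
    tp = fp = fn = correct = 0
    approvals = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, y_true, y_pred in records:
        correct += (y_true == y_pred)
        tp += (y_true == 1 and y_pred == 1)
        fp += (y_true == 0 and y_pred == 1)
        fn += (y_true == 1 and y_pred == 0)
        approvals[group][0] += (y_pred == 1)
        approvals[group][1] += 1
    accuracy = correct / len(records)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    rates = [a / n for a, n in approvals.values()]
    parity_gap = max(rates) - min(rates)
    return {"accuracy": accuracy, "f1": f1, "parity_gap": parity_gap}
```

A dashboard built on such a harness would alert not only when accuracy drops but when the parity gap drifts between model releases, which is the kind of KPI Amber describes.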
00:39:29 Tony
Yeah. And in a lot of cases, people are concerned that it just magnifies existing biases that are already out there in the data and in how the data was originally collected. One concern I've heard is that a lot of the organisations that run or develop these LLMs don't publish or let you know where they got the data from. So you're basically taking their models, but you don't know how they developed them or on what types of data. And one person has said the data is biased because it draws from large medical systems, so somebody in a rural area may have different issues than somebody who lives in an area that, say, the Mayo Clinic or Johns Hopkins serves. I don't know what your thoughts are about that, but there is a lack of transparency to some extent that I think is an issue.
00:40:30 Amber
I would say so. And to your point, and as I spoke about earlier, certain rural and remote areas have a different population density and a different relative distribution of ethnicities and so on. Healthcare is not one size fits all; in fact, it's very specific. When you go to a physician, they're observing just you. Of course, they're using the knowledge they learned in medical school and from the treatment they have provided to other patients, but when you are in that room, they're focused on you. So if you use a model that is very generalised, very aggregated, you're missing out on the optimisation problem: you're trying to optimise for one specific person, so it becomes an N-of-one problem rather than an N-of-many problem. To that point, there are certain companies that not only share nothing about the data they are using, but also do not share how their models are trained, or even what the trained models' weights and biases are. So you're unable to understand: if this model says X, is it correlation or causation? If I run the same model on the same data, will it say the same thing? And even if it says the same thing, if I run it on a very similar person,
how different will that be? Given that even in the research community people do not share the weights and biases of these models, it's the same problem all over: sometimes you need to share the data, or at least give some hint. But given how competitive the large language model organisations and the industry have become, I don't see that changing, to be very truthful. And even though there are organisations trying to be more forthcoming about sharing information, there is also a fear: what if we share but others do not? Let's say we think about our own country, the US, and compare it with any other country, like China.
So even if we start sharing, do we really expect China, or the organisations in China, to share anything whatsoever? And I don't want to take a very political route here; I think there's just a lot of distrust, especially around how you train your algorithms and on what data. A lot of the current really big organisations have taken some very dubious stances on training their algorithms, so I don't think they can share their data as much as we would like them to.
00:43:35 Tony
Yeah, I think that’s true. I think the people who you mentioned the large percentage of hallucinations that occur and I think that the big challenge is not going to be with the health providers because they understand when they see certain types of responses. But I think once again, getting back to the patients or members or whatever you know, role we’re playing, we can be easily. We’ve seen that already with, you know, the news media and other things in the health area can be even more potential for a problem if the patient doesn’t get accurate information.
00:44:18 Amber
I would say so. And that's why human in the loop is super critical in healthcare, for anything we are about to deliver to a patient or member, like you said, or even to the providers, so that we are not going back and forth between different versions: one on the provider side, one on the payer side, and now one on the regulator side as well. So it's a three-way problem, like you mentioned, and the patient is at the centre of it.
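A human-in-the-loop gate of the kind Amber describes can be sketched in a few lines. The routing policy below is an illustrative assumption (the threshold, the "never auto-deny" rule, and the function name are not from the conversation): only high-confidence approvals are finalised automatically, and everything else, including every denial, goes to a clinician.

```python
def route_decision(recommendation, confidence, threshold=0.9):
    """Human-in-the-loop gate for a model's prior-auth recommendation.

    Illustrative policy: auto-finalise only high-confidence approvals;
    denials are never automated, keeping a human in the loop for any
    adverse decision that reaches a patient or member.
    """
    if recommendation == "approve" and confidence >= threshold:
        return "auto_approve"
    return "human_review"
```

The key design choice is the asymmetry: a false automated approval wastes money, but a false automated denial harms a patient, so denials always earn a human look.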
00:44:45 Tony
Exactly, exactly. Okay, we're going to get to the last major question and then wrap this up in the next few minutes. You recently wrote an article for Forbes describing the challenges of building AI into various healthcare workflows, and we touched on this a little bit a few moments ago. You expressed concern that these efforts may fall short because developers fail to heed the traditional engineering concept of separation of concerns, and you thought that paying attention to it could actually make development a lot better. I guess this gets back to the generic-versus-specific point we talked about a few moments ago. But I want to give you a little more time to talk about that: how should we change how development is done today to make it closer to what you see as ideal?
00:45:50 Amber
Yeah, absolutely. If I take a step back from healthcare for a bit, separation of concerns is the first-principles proposition that when you're solving a complex problem, you define the different components of that problem and assign ownership of each component, so that no one party owns everything. Let's say there are engineering teams involved, clinical teams, regulatory teams, and so on.
You define their work in such a way that each does their own job, but when you are defining the framework, all the relevant parties are involved, and you define that framework in a timestamped fashion so that you're not obfuscating whether one organisation, or one part of the organisation, was involved versus another; there should be some clarity from a timestamping perspective. That's what I've observed, and not just from a healthcare perspective. When we build our algorithms as data scientists, we sometimes tend to think, as we design the whole framework, that we do not require a clinician, because we are just trying to understand the rules and read natural language, which large language models can do. But what we miss is that when we go into the specifics, clinicians understand and can perceive certain nuances that large language models cannot.
So unless we involve clinicians in that framework-design process, we will always have biases. There's also something called epistemic humility: knowing when you do not know things, and involving people who might know them because they have that lived experience. Sometimes it's about saying, you know what, I do not understand this part of the problem, and I need to bring in people who can educate me, whether on clinical decision support or on what certain populations go through in their own healthcare journeys compared to a specific race or ethnicity, and so on. That's another part of it. And the third and final part, I would say, is that as you build this framework and get to the data science part, we should give data scientists the freedom to do their magic as well, with certain checkpoints so that experiments do not run amok and are checked at certain points.
And finally, when you're delivering the product, you have a product with very well-defined guardrails. It's not to say that mistakes will not happen; but when they do, you'll have a feedback loop on where things went wrong and how to improve in the next iteration. So it's not that companies should slow down, and it's not about defining a perfectly airtight, watertight process where everything runs perfectly. It's about understanding that there will be mistakes, owning those mistakes, and making sure there is a feedback loop, so that the next time around it's a better evolution of the original product. That's how we have evolved as humans, and I think it should be practically the same for the algorithms as well. To your second point, about generic and specialised models, I touched on some of this. Generic models are good because they help you give quick demonstrations.
But they're not good because they're not specific enough. You need to define those specialty-based guardrails; unless you have done that, you're not doing a good job. Like I mentioned, there is 15% to 40% hallucination in prior authorization, which goes unchecked and unnoticed when you're just demonstrating cool, nice, exciting software. But when you are in the trenches, when clinicians are observing your algorithms, everything is out in broad daylight.
00:50:09 Tony
Right, right, right.
00:50:10 Amber
That's the reason we need to be mindful when switching back and forth between generic and more specialised models. I mentioned this in my Harvard Business Review article as well: of course we should move fast, because there are a lot of patients and members who are not receiving the care they should in the right time, and we can define the right triaging model to improve those scenarios. But at the same time, as we triage, we should be mindful of certain critical cases where we cannot let a substandard algorithm give a suboptimal output that results in denial of a critical surgery, which could mean a person losing their life or a limb, and so on. So that's what I would say about walking that fine line: a generalised model gives you speed, but when you are talking about critical surgeries and patients who require urgent attention, you use the right models for them.
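The triage Amber describes, a fast generic model for routine requests and a specialty-tuned model plus mandatory human review for critical ones, could be sketched as a simple router. The criteria, field names, and model labels below are illustrative assumptions, not a real system.

```python
def pick_model(case):
    """Triage sketch: route a prior-auth case to a model tier.

    Routine requests go to a cheap generic model; urgent or
    high-risk procedures get a specialist model and a mandatory
    human review. The criteria here are illustrative only.
    """
    critical = bool(case.get("urgent")) or case.get("procedure_risk", "low") == "high"
    if critical:
        return {"model": "specialist", "human_review": True}
    return {"model": "generic", "human_review": False}
```

The point of the sketch is the asymmetry of cost: speed for the bulk of routine cases, and the strongest model plus human eyes wherever a wrong answer could cost a life or a limb.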
00:51:24 Tony
Right, right. Yeah. And people's lives depend on that in many cases. So it is critical that we have the right models for those types of procedures and services, because otherwise it can create a big problem for an individual who's already facing challenges because of their health condition.
00:51:45 Amber
100%.
00:51:48 Tony
So we’re just going to close with a couple questions around a few things. One is, what do you do when you’re not trying to save the health care system? Amber?
00:51:59 Amber
I love running. I also like dancing; I took ballroom dancing when I was in grad school. I haven't picked it back up since I graduated, and it's been a few years, I don't want to date myself, but I would love to pick it back up again. But I love running, and when I have to clear my head and cut through the noise, I just run; that's when I have the best peace of mind. Running next to the Charles River in Boston is the best way for me to cool off.
00:52:41 Tony
Yeah, you definitely have some great running trails in Boston along the Charles river and some other areas as well.
00:52:51 Amber
So I love them.
00:52:53 Tony
Yeah, I'm sure. So for people who are trying to get more information on some of the topics you've discussed, obviously they can go to your company's website, but what are some of the publications, podcasts, or other information sources you go to that might be helpful to people who want to dig deeper, or who want to dig wider?
00:53:18 Amber
Yeah, I would say if I have to go wider, I like reading Harvard Business Review. I also like The New Yorker, for instance; its healthcare section is just amazing. The New York Times when I want something more mainstream. But I think The New Yorker is my favourite. At the same time, I also write, so I have written for Harvard Business Review, like you mentioned, Forbes, and Stat News. I love Stat News; I know one of the co-founders, and he's amazing. Stat News has a lot of good, genuine articles on the use of data and AI, grounded in actual healthcare scenarios. It almost always has specific numbers, so you can know whether something is, say, 80% accurate or 90% accurate, and it also calls out why identifying biases is critical. In my own article in Stat News, I talked about how our accuracy went down as we improved our bias-based metrics.
And it's an interesting read, because on the surface you feel like, oh wow, the accuracy actually went down; then why would they use an algorithm which is not 99% but 92% accurate? But when you think about the scheme of things, you're also able to call out the grey areas, and while that does not go into your accuracy, identifying grey areas is also super critical in healthcare. So those are some of the publications I go for. As far as podcasts are concerned, there are a few podcasts by certain MDs, whom I also follow on LinkedIn, that I really like, but I'm not as into podcasts as I am into reading articles and understanding healthcare.
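The grey-area idea Amber mentions, choosing a model that abstains rather than forcing a call, can be sketched as a simple abstention rule. The thresholds and labels are illustrative assumptions, not the approach from his Stat News article.

```python
def decide_with_grey_area(score, lo=0.4, hi=0.6):
    """Abstention sketch: a model score inside (lo, hi) is flagged
    as a grey-area case for human review instead of a forced call.

    Headline accuracy measured only on auto-decided cases can look
    worse than an always-decide model, but the unsafe middle band
    is surfaced to a human rather than guessed at.
    """
    if score >= hi:
        return "approve"
    if score <= lo:
        return "deny"
    return "grey_area"
```

This is one way to read the 99%-versus-92% trade-off: the 92% system may be the one honest enough to say "I don't know" on the hard cases.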
00:55:29 Tony
Oh, great. So, speaking of healthcare, I'd like to close with a couple of questions for people who've been in the field for a number of years, like yourself. Over the last 10 years, what have you seen that's excited you the most? And the second question, tied to that: what has disappointed you the most? If you go back to 2015, what do you think has been most exciting that you maybe couldn't have anticipated 10 years ago, and what did you think was going to happen that hasn't happened, or hasn't happened to the extent you thought it would?
00:56:08 Amber
Would you also like to take a stab at this?
00:56:11 Tony
Yeah, I’m not going to take a stab on it because I asked the same question to everybody. So I’ll let you take the stab at it.
00:56:17 Amber
Got it. I would say I am most excited about the way the technology landscape has changed. We used to think of AI as being very specific, optimised for one particular problem, and I'm seeing that the realm of opportunities for using AI has significantly expanded, for instance with agentic AI, which of course you have to complement with the right guardrails. That's what I'm most excited about, and it's something my company is working on as well. The thing I'm most disappointed about is the application of first-principles approaches like the separation of concerns we talked about. I feel that as we advance in technology, we are probably stagnant, or in certain cases have even regressed, on those fronts, on those first-principles propositions.
00:57:26 Tony
Yeah. All right, well, great, Amber, thanks for a great discussion and best of luck to you in your future endeavours.
00:57:34 Amber
Thank you so much for having me, Tony. Have a good.

