Beth Griffin examines AI applications in healthcare fraud, waste and abuse (FWA) at AI World 2019
At AI World last week, Beth Griffin, Vice President, Healthcare, Product Development and Innovation, Mastercard, participated in a panel that examined the current role of AI in healthcare as well as potential future applications of the technology. Today we share some of Beth’s fraud, waste and abuse (FWA) insights.
Current and future AI applications
Panelist John Mattison, MD, CMIO, Emeritus, Kaiser Permanente, opened the discussion by sharing how AI has already taken over low-risk, menial tasks in the healthcare industry. AI is also creating new opportunities and, through its voice capabilities, ushering in a world of hyper-personalized medicine.
John also explained that just as there’s no one right workflow for getting something done, there’s no one right way to treat a disease, as every individual and their genetic makeup is different. There’s an incredible opportunity, then, to leverage AI to understand each person at the individual level and adapt their healthcare to their specific genome and unique motivational structure.
As Beth shared with the panel, AI needs to be applied across the healthcare sector to make significant changes in the way treatment is both provided and received. Mastercard has spent the past two decades improving and innovating fraud and security systems in the financial industry, and now is the time to apply those learnings to healthcare. Using Brighterion’s AI technology, which can proactively detect and mitigate anomalies, there’s a massive opportunity to revamp outdated FWA processes.
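To make the anomaly-detection idea concrete, here is a minimal sketch that flags claims whose billed amount is a statistical outlier. The claim data, the z-score rule, and the threshold are all illustrative assumptions — Brighterion’s production models are proprietary and far richer than this.

```python
# Minimal sketch: flag claims whose billed amount deviates strongly
# from the overall mean. Data and threshold are hypothetical.
from statistics import mean, stdev

claims = [
    {"provider": "A", "code": "99213", "amount": 95.0},
    {"provider": "A", "code": "99213", "amount": 102.0},
    {"provider": "B", "code": "99213", "amount": 98.0},
    {"provider": "B", "code": "99213", "amount": 97.0},
    {"provider": "C", "code": "99213", "amount": 101.0},
    {"provider": "C", "code": "99213", "amount": 99.0},
    {"provider": "D", "code": "99213", "amount": 103.0},
    {"provider": "E", "code": "99213", "amount": 900.0},  # suspicious
]

amounts = [c["amount"] for c in claims]
mu, sigma = mean(amounts), stdev(amounts)

def flag_anomalies(claims, z_threshold=2.0):
    """Return claims whose amount is more than z_threshold
    standard deviations away from the mean."""
    return [c for c in claims if abs(c["amount"] - mu) / sigma > z_threshold]

suspicious = flag_anomalies(claims)
```

A real FWA system would score far more dimensions (provider history, procedure mix, patient patterns) and learn continuously, but the core move — scoring each event against a learned baseline and acting on the outliers — is the same.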
Leveraging AI for personalized healthcare is key
Most importantly, the panelists all agreed that the primary role of AI in healthcare should be to manage the personal level of care that patients, physicians and providers require. Brighterion’s ability to derive insights at the individual level means the healthcare industry could realistically ensure delivery of highly personalized treatment, Beth added.
The challenge, however, will be collecting enough quality data to make this possible. Even more challenging are the privacy implications of collecting such detailed, sensitive data.
John also discussed additional hurdles to AI-based personalization in healthcare, including validating models against training data sets that can embed harmful biases. In fact, as the panelists pointed out, human cognitive biases need to be addressed before we can focus on any AI biases.
The goal: shift AI from informative to decisive
The good news is that AI and its continuous learning capabilities are already revolutionizing workflow optimization and end-to-end care throughout healthcare systems and revenue cycles. And as the panelists indicated, actionability will make the difference in ensuring AI is a truly productive technology for healthcare practitioners. For example, automatically creating lists of high-risk patients is helpful; however, using AI to attach a specific action to each risk on those lists is far more valuable. Put another way, the goal should be to move AI from being purely informative to being decisive.
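The informative-to-decisive shift described above can be sketched as a small triage step: rather than emitting bare risk scores, map each score to a recommended next action. The patient IDs, thresholds, and actions below are purely illustrative, not clinical guidance.

```python
# Sketch of "informative -> decisive": attach a concrete next action
# to each flagged risk score instead of just listing the scores.
# Thresholds and actions are hypothetical examples.

RISK_ACTIONS = [
    (0.8, "schedule immediate care-team review"),
    (0.5, "flag for follow-up call within a week"),
]

def triage(patients):
    """Map each (patient_id, risk_score) pair to the first action
    whose threshold the score meets; below all thresholds, no action."""
    plan = []
    for patient_id, score in patients:
        for threshold, action in RISK_ACTIONS:
            if score >= threshold:
                plan.append({"patient": patient_id, "score": score, "action": action})
                break
    return plan

patients = [("p-101", 0.91), ("p-102", 0.62), ("p-103", 0.20)]
plan = triage(patients)
```

The output is a work list a practitioner can act on directly, which is the difference the panelists drew between a merely informative model and a decisive one.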
Concluding the panel discussion, one attendee—a practicing physician—raised a vital question: Doctors have malpractice insurance to protect them in the event that something goes wrong, but how should we handle the liability of an autonomous system? Are we able to fully comprehend the risks and biases of AI in real time at the point of its decision-making? Will the reasoning behind AI and its algorithms ever be as easily comprehensible as that of a human doctor?
The answer, the panelists agreed, is that this is precisely what we’ll be working on for at least another decade. After all, while it’s undoubtedly promising, we’re still in the gestational stages of AI being applied to healthcare.