In recent years, artificial intelligence (AI) has woven its way into the very fabric of our daily lives, providing personalized product recommendations, shaping our social media feeds, and even calculating our quickest route to work.
Meanwhile, AI-based technology has also entered the realm of health care as executives seek fresh ways to improve quality and cut costs in a value-based care landscape and tech firms look to deliver the solutions. Market intelligence firm Tractica predicts the global market for AI solutions in the health care sector will grow from $1 billion in 2017 to more than $34 billion by 2025.
Despite the enthusiasm and anticipated expansion, however, plenty of skepticism remains. So how can managed care leaders harness the best of AI while avoiding its risks? First Report Managed Care turned to industry insiders for insight.
Emerging Options for Improving Care and Cost
Payers, providers, and pharmaceutical companies are all looking for ways to predict who will become more expensive over the long haul, with the goal of intervening before costly scenarios occur, explained Dana Levin-Robinson, MBA, chief of staff at VirtualHealth, a startup that helps insurance companies identify and manage high-risk patients.
“From a managed care perspective, our primary concern is population management,” explained Sam Leo, PharmD, director of specialty clinical programs at Magellan Rx Management, a full-service pharmacy benefit manager. Many current management strategies contain costs across wide swaths of patients using relatively impersonal tools such as formularies and prior authorization.
AI, on the other hand, can help “personalize and individualize pharmacy management by navigating the enormous amount of available data, pinpointing and stratifying risks in specific patients or situations, and allowing us to use resources and proactively intervene to prevent a negative, costly outcome,” Dr Leo added.
In the eyes of David S. Muntz, MBA, health care technology consultant and principal of Muntz and Company, AI will wind up radically altering the role of physicians. There is plenty of evidence of its positive impact in image-related activities. He also noted favorable reports of other AI-enabled diagnostic activities performing at or above the levels of their human counterparts.
“Just like with any technology that improves diagnostics, there will be fewer false results—positive or negative—and less chance of performing unnecessary procedures, better outcomes, and increased cost efficiencies,” Mr Muntz predicted. Lowering risk based on better diagnostics and a more precise, personalized result for patients could have favorable cost-saving implications for patients, families, and payers who can avoid funding unnecessary procedures.
Of course, given the differing opinions on and nature of certain aspects of health care, there are areas that AI cannot improve upon at this time. While Etienne Deffarges, MS, MBA, member of the executive council of the Harvard School of Public Health, does foresee a positive impact in areas like clinical diagnostics, radiology, and immunotherapy, he believes the prognosis in the realm of health care administrative costs is not promising.
Risks of Using AI-Based Tools
The concerns expressed by experts and stakeholders are wide-ranging. Some, for example, point out that the term AI is tossed around too loosely, as companies hype traditional analytical solutions that have been repackaged to ride the wave of AI interest.
“The primary risk I see is based on what evidence is used to teach AI products. The rigor with which the evidence is reviewed and vetted is critical. Any fault in that process will have widespread impacts,” Mr Muntz explained. “There is also the risk that we’ll become so dependent on AI that we’ll abandon critical thinking and judgment.”
Like Mr Muntz, who noted that his use of “AI” pertains to augmented (and not artificial) intelligence, Mohan Giridharadas, MS, MBA, founder and CEO of LeanTaaS, a predictive analytics and machine learning company that specializes in improving health care operations, believes AI is best used as a tool that complements human intelligence rather than replacing it. It is intended to provide users with “timely, data-driven recommendations that they can accept, reject, or modify” based on their expertise, their judgment of the situation, and the context of that moment.
Every encounter between a patient and a provider is a unique, one-off event, never identical to any other. Because of this, relying on machines to get it right every time is not realistic, added Mr Giridharadas.
The potential for AI to replace human decision-making with automation is a concern that has been voiced by physicians as well. In a January 2019 op-ed in The New York Times, Dhruv Khullar, MD, MPP, New York-Presbyterian Hospital, warned that AI could wind up worsening existing health disparities. Because the technology learns from large data sets, and those data sets often do not include enough women or minorities, the resulting algorithms could prove less effective for underrepresented groups, and these built-in biases can become invisible over time.
AI’s biggest benefit and drawback are closely intertwined, explained Adam C. Powell, PhD, president of Payer+Provider Syndicate, a management advisory and operational consulting firm focused on the managed care and health care delivery industries. It can enable organizations to understand the factors associated with previously observed outcomes, but the reliance on previous observations can also codify outdated or contextually specific practices.
One of the concerns about building a clinical AI system using data from Memorial Sloan Kettering is that doing so can result in a system that makes decisions in the same manner as Memorial Sloan Kettering, continued Dr Powell, alluding to the high-profile collaboration between the cancer treatment and research institution and IBM’s Watson supercomputer. When this type of system is moved from its original New York habitat to a context with fewer resources, he added, the recommendations may be less appropriate.
Machine learning has had plenty of success in handling a variety of interpretation problems, like reading x-rays or analyzing pathology slides, added Zack Nolan, head of applied AI at Beyond Limits, an AI and cognitive computing company that turns space and defense technology into solutions for health care and other markets. These high-tech solutions do not always allow users to understand how decisions are reached, however. “If AI is to help with higher level clinical decisions,” he added, “it will need to be capable of explaining its reasoning.”
Implementation Pitfalls to Avoid
We are in the early stages of the evolution of AI and its impact within managed care markets, said Grant Stephen, MBA, CEO of bPrescient, a life science and health care information management and analytics consulting firm. He believes organizations that are slow to adopt these emerging technologies will find themselves at a disadvantage downstream.
However, it is not as simple as diving in headfirst. “It is easy to implement an AI-based solution badly,” he said, and organizations tend to make several common mistakes. They do not always lock in on their business priorities, for example, and some stretch too far, taking on a major project when they should focus on key pilot projects early on.
“The other key thing that we see time and again is when an organization wants to implement AI but their data governance isn’t really in place,” Mr Stephen added. “You’ve got to make sure your data is of adequate quality before you can really let AI go to town on it and deliver the value.”