Challenges of AI adoption in Healthcare

With so many high-profile successes touting the promise of AI in healthcare, it’s easy for developers to underestimate the challenges that must be overcome, and the diversity of expertise required, to bring products to market.

By diving headlong into the market, developers risk becoming cautionary tales unless they start with a comprehensive grasp of the potential barriers, challenges, and pitfalls that stand in the way of deployment.

Although healthcare organizations have been notoriously slow to adopt AI, venture capital investment in healthcare AI has been steadily increasing. In January 2020, total investment in the top 50 developers of AI solutions in healthcare hit the $8.5 billion mark, according to McKinsey & Company.

That investment interest is driven by some spectacular successes of AI in multiple healthcare segments. From detecting anomalies in clinical images to enhancing diagnostic decision-making and augmenting robotic surgery, AI has frequently performed as well as, or better than, clinicians.

Using deep neural networks (DNNs) for image classification, AI has already proven its ability to quickly and accurately detect fractures from X-rays, tissue anomalies like lesions and tumors from CT and MRI scans, and, from lung imaging, the signatures of infectious diseases like TB and COVID-19.
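
To make that concrete, here is a minimal sketch of what the inference side of such a classifier can look like. The model file, input shape, and decision threshold are hypothetical placeholders, not any particular product:

```python
# A minimal sketch, assuming a Keras model trained elsewhere: load a saved
# DNN classifier and score a single preprocessed chest X-ray. The file
# names, input shape, and 0.5 threshold are hypothetical placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("xray_anomaly_classifier.keras")

image = np.load("chest_xray_preprocessed.npy")   # e.g. shape (128, 128, 1)
prob = float(model.predict(image[np.newaxis, ...])[0, 0])

label = "anomaly suspected" if prob >= 0.5 else "no anomaly detected"
print(f"P(anomaly) = {prob:.2f} -> {label}")
```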

Accuracies over 95% are not uncommon in controlled settings, and while AI is often more accurate than a clinician, it is almost invariably faster.

That all paints a pretty rosy picture for the value of AI in healthcare. And it’s all true, at least at the 50,000-foot level. But there are serious caveats, and as we dive into the details, we’ll discover that the real picture is far more complex. We’ll see that many of the most hyped successes come from controlled environments, and that the same solutions don’t always perform as well under the stress of a clinical workflow.

From the start, developers who plan to build AI solutions for healthcare will need a deep understanding of the state of the art, the intricacies of the market, the numerous regulatory hurdles to implementation, and the flat truth about real barriers to adoption.

Characterizing the AI Challenges in Healthcare

With the potential to drive more efficient resource usage, to improve patient comfort and safety, and to reduce the need for more invasive procedures, AI is becoming increasingly attractive on both the clinical and business sides of healthcare.

But if you’re inspired by the market potential, an unavoidable reality check is in order, and it may have exactly the opposite effect.

The data required to train underlying neural networks, the frailties of the training process itself, and issues surrounding certification all multiply the complexity of developing AI software for healthcare.

If that’s not daunting enough, successful deployment requires navigating both the myriad regulatory requirements and the often conflicting goals of numerous stakeholders.

Data for training, and lots of it

Training models to perform clinical tasks requires data consisting of health records or images that have already been examined and labeled by clinicians. To be effective, training can require tens or hundreds of thousands of data instances. What’s more, the data must be digitized in a uniform format, captured within exacting guidelines, and sometimes ‘normalized’ or otherwise preprocessed to smooth values prior to training.
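
As a minimal sketch of that preprocessing step, assuming grayscale images held as NumPy arrays (the target size and z-score scheme are illustrative choices, not a clinical standard):

```python
# A minimal preprocessing sketch: resize every image to a uniform shape and
# normalize pixel intensities before training. The target size and z-score
# scheme are illustrative assumptions, not a clinical standard.
import numpy as np
from skimage.transform import resize  # scikit-image

def preprocess(pixels: np.ndarray, size=(128, 128)) -> np.ndarray:
    img = resize(pixels.astype(np.float32), size)    # uniform spatial dimensions
    img = (img - img.mean()) / (img.std() + 1e-8)    # smooth values via z-score
    return img[..., np.newaxis]                      # channel axis for the DNN
```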

In many instances, the big problem is simply availability: large patient record or diagnostic image sets are often ‘owned’ or controlled by some of the bigger players in healthcare. So historically, gaining access has been a major hurdle for startups.

The good news? More source data is becoming accessible to developers through initiatives designed to address this very issue.

Medical images are increasingly available thanks to organizations like NIH, OASIS, and OpenfMRI. And more effort across the industry is going into formally collecting and labeling images while adhering to standards like DICOM, which defines both a uniform imaging format and the metadata needed for annotation and labeling.
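
For instance, the open-source pydicom library can read both the pixel data and the standardized metadata from a DICOM file. A small sketch, in which the file path is a placeholder:

```python
# A small sketch of reading a DICOM file: the pixel data for training and
# the standardized metadata that supports labeling and provenance.
# The file path is a hypothetical placeholder.
import pydicom

ds = pydicom.dcmread("study/series/image-0001.dcm")

print(ds.Modality, ds.StudyDate)    # standard DICOM metadata (when present)
print(ds.pixel_array.shape)         # image pixels as a NumPy array
```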

Training AI Models for the Real World

Of course, acquiring the right data is only the first step. The feature recognition abilities that neural networks ‘learn’ by processing data are encoded in models that must be trained under exacting conditions. Even if you have access to a large, properly annotated dataset, training the model is a process rife with potential pitfalls.

The risks are well known, but not always straightforward to manage. One risk inherent to the training process is overfitting: a model becomes so closely tuned to the sample image datasets used to build and test it that it fails to generalize, and underperforms against new images it sees in production.
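
A common first defense is to hold out a validation set and stop training once validation performance stops improving. A minimal Keras sketch, where the architecture, input shape, and the train_images/train_labels arrays are all hypothetical placeholders:

```python
# A minimal sketch of guarding against overfitting with a held-out
# validation split and early stopping. The architecture, input shape,
# and the train_images/train_labels arrays are hypothetical placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # anomaly vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop when validation loss stops improving, and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

model.fit(train_images, train_labels,
          validation_split=0.2,            # hold out 20% to watch generalization
          epochs=50, callbacks=[early_stop])
```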

Running your solution in a clinical setting presents new challenges to the integrity of lab-trained models. A model that performs successfully in a test environment is one thing; training it to withstand the demands of a clinical workflow requires another level of sophistication. Models will almost always perform better against training data than against new data in the field, and they can fail outright to generalize their tasks when put to the test.
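
One practical pre-deployment check is to score the model on data from a site or scanner it never saw during development. A hedged sketch, in which the model file and the four data arrays are placeholders:

```python
# A sketch of an external-validation check: compare performance on held-out
# internal data against data from a different site or scanner. The model
# file and the four data arrays are hypothetical placeholders, and the
# model is assumed to have been compiled with an accuracy metric.
import tensorflow as tf

model = tf.keras.models.load_model("xray_anomaly_classifier.keras")

_, internal_acc = model.evaluate(internal_images, internal_labels)
_, external_acc = model.evaluate(external_images, external_labels)

# A large gap between the two scores is a warning that the model has not
# generalized beyond its training distribution.
print(f"internal: {internal_acc:.3f}  external: {external_acc:.3f}")
```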

The good news? More pretrained models are becoming available in some domains from companies like NVIDIA, whose Clara platform offers developers numerous tools to accelerate development of clinical AI applications.
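
We won’t presume Clara’s specific APIs here, but the general pattern of building on a pretrained backbone is straightforward. A generic transfer-learning sketch in Keras, where the ImageNet-pretrained ResNet50 backbone and the binary head are illustrative choices:

```python
# A generic transfer-learning sketch (not Clara's API): reuse a pretrained
# backbone and train only a new classification head. The backbone choice,
# input shape, and binary head are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False   # freeze pretrained features for the first pass

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g. lesion vs. none
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```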

Nevertheless, the process of training models is a deep topic, and in practice, building successful DNN models typically requires the expertise of a senior-level, experienced data scientist, and eventually, fine-tuning in clinical trials.

Deployment and approval are far from routine

For all the glowing anecdotes about the performance of AI against clinicians, it’s important to understand the context, as well as the realities of premarket approval, certification, and deployment.

To date, many of the most dramatic successes have come in controlled environments. To one extent or another, many haven’t gotten beyond clinical trials or proofs of concept.

This will change, but many stakeholders remain rightly concerned about how well AI solutions will perform in demanding clinical environments, where applications may encounter images that confuse the models or exhibit variations not seen in testing.

In the meantime, developers can take cues from the FDA to understand how it views its role in the regulatory oversight of AI.

Premarket Pathways

If your experience includes developing non-AI healthcare applications, you may know that the FDA has historically treated software applications like medical devices for purposes of premarket approval. And perhaps you’re familiar with the FDA’s traditional premarket pathways: premarket clearance (510(k)), De Novo classification, or premarket approval.

But AI presents some non-traditional challenges, especially when adaptive learning algorithms are a component of an application in a clinical setting. After all, it’s a defining feature of AI systems that they continue to learn from new data processed in deployment, and the FDA has taken steps to recognize and embrace the dynamically evolving nature of machine learning.

In April 2019, the FDA began offering some new ideas to better support the unique qualities of AI in regulatory oversight, publishing a discussion paper titled “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback.” The paper articulates the FDA’s suggested new approach to premarket review for machine learning-driven software modifications.

Among other things, this new regulatory paradigm suggests the agency will adopt a full product life-cycle approach to the oversight of AI applications, and potentially require ongoing monitoring of deployed solutions by both the developer and the FDA.

Dealing with Regulations and Reality

If the data, training, and approval challenges aren’t onerous enough, consider that successful deployment also requires navigating the myriad regulatory requirements. In the real world, you’ll also have to contend with the often conflicting concerns of multiple stakeholders, and address them proactively.

Every layer of the solution must be designed with real-world considerations in mind:

  • Data privacy and security statutes (see the de-identification sketch below)
  • Regulations – HIPAA, ISO, and HITRUST’s CSF standards
  • Stakeholder concerns – clinicians, administrators, insurers, facilities, and IT

Some of these requirements are now addressed by AI toolkits and frameworks intended for use in healthcare, but none are optional.
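
As one concrete example of privacy by design, records are typically de-identified before they ever reach a training pipeline. A hedged sketch using pydicom, where the tag list is illustrative and not a complete HIPAA de-identification profile:

```python
# A sketch of one privacy-by-design step: blanking direct patient
# identifiers in a DICOM file before it enters a training pipeline.
# The tag list is illustrative, not a full HIPAA de-identification profile.
import pydicom

ds = pydicom.dcmread("scan.dcm")   # hypothetical input path

for keyword in ("PatientName", "PatientID", "PatientBirthDate", "PatientAddress"):
    if keyword in ds:
        setattr(ds, keyword, "")   # blank out direct identifiers

ds.remove_private_tags()           # drop vendor-specific private tags
ds.save_as("scan_deid.dcm")
```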

The Bottom Line

If healthcare solutions are part of your AI product roadmap, there’s no getting around the need to maintain a constantly updated, realistic picture of the things that matter most: the state of the market, the emerging and evolving regulatory challenges, and the expanding array of tools and data available to developers, both you and your competition.

Overcoming these challenges requires a well thought out, multi-pronged approach, along with the right strategic partners in both sales and healthcare software development.

Even though adoption of AI across the healthcare landscape is gradual and uneven by its very nature, there’s no doubt it’s increasing, and resistance in the marketplace is less and less about fears of AI replacing human expertise. In fact, clinicians are beginning to appreciate the ability of their AI counterparts to process images thousands of times faster, with comparable or better accuracy, and to handle some of the tedious and time-consuming tasks overwhelming current resources.

Ironically, by freeing clinicians to handle higher-level tasks, those same capabilities may be what brings patients and physicians back together, and what ultimately makes healthcare human again.
