
CIPI Connections: Pathology Meets AI: Translating AI Frameworks into Practice

In this episode of CIPI Connections, members of the CAP Artificial Intelligence Committee, Matthew Hanna, MD, FCAP, Nick Spies, MD, FCAP, and Larissa Furtado, MD, FCAP, discuss how pathology is integrating AI. They cover how existing CAP validation frameworks apply to AI, the unique challenges of data quality, and the emerging role of AI implementation specialists. Hear real-world insights from anatomic, clinical, and molecular pathology on what works, what doesn't, and how AI can enhance patient safety.

Subscribe to CIPI Connections on Apple Podcasts, Spotify, or wherever you listen to podcasts.

Details


Dr. M. E. de Baca:
Welcome to CIPI Connections, the podcast of the College of American Pathologists Council on Informatics and Pathology Innovation. Here we connect you with the leaders and committees shaping the future of pathology.

I'm Dr. M. E. de Baca, chair of the College of American Pathologists Council on Informatics and Pathology Innovation, also known as CIPI. In today's episode of CIPI Connections, we will be talking about how pathology meets artificial intelligence. We're going to dive into how the pathology community validates and implements new technologies and how AI applies to those frameworks.

Today you'll be part of a conversation with three of my good friends and members of the artificial intelligence committee. Dr. Matthew Hanna is a surgical pathologist and the chair of the CAP AI Committee. Dr. Nick Spies is a clinical pathologist and a pioneer for introducing AI implementation specialist roles, and Dr. Larissa Furtado, a molecular pathologist who is currently leading the development of the CAP AI Implementation Guide. Together, they'll explore what works, what doesn't work, and what we've learned from real world AI deployments in pathology. Take it away.

Dr. Matthew Hanna:
Thank you, Dr. de Baca. We really appreciate you and all your support, and we're really excited about the conversation today. Let's start with the good news. In anatomic pathology, we've long relied on structured workflows. We all know the traditional specimen handling, slide preparation, and diagnostic reporting, all governed by CLIA and CAP standards. These are frameworks that really emphasize accuracy, reproducibility, and above all patient safety. That mindset translates beautifully to AI, which we can think of as just another test. The concept we want to convey is that AI is just another data point, like any other test you would be reviewing and reporting. Validating a new immunohistochemical stain, for example, involves evaluating the performance criteria, the control tissues, and documenting all of that. That same rigor applies to AI model evaluation: the analytical validation, the precision testing, the clinical validation, and ongoing monitoring. Dr. Spies, what areas of laboratory medicine have shown success in translating well to supporting AI workflows?

Dr. Nick Spies:
Yeah, absolutely. In clinical pathology, we certainly have the advantage that a lot of what we do focuses on numerical input data, which makes it a lot easier to start building out proof-of-concept applications for areas like quality control, error detection, workflow optimization, and many others. We have plenty of examples of high-dimensionality data that would be really well suited to classical machine learning techniques: panels like your basic metabolic panels and your complete blood counts, data sets like flow cytometry where you're running a lot of immune markers, or things like mass spectrometry where you have full spectra of input data that you have to deconvolute and interpret.
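As an illustration of the kind of classical machine learning on numerical panel data Dr. Spies describes, here is a minimal sketch that trains a classifier to flag basic metabolic panel results that may reflect a specimen error. The data file, column names, and "confirmed_error" label are illustrative assumptions, not part of any workflow discussed in the episode.

```python
# Minimal, hypothetical sketch: classical machine learning on numerical panel
# data, flagging basic metabolic panel (BMP) results that may reflect an error.
# The CSV file, column names, and "confirmed_error" label are assumptions for
# illustration only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

ANALYTES = ["sodium", "potassium", "chloride", "co2", "bun",
            "creatinine", "glucose", "calcium"]

# Each row: one BMP result set plus a reviewed label (1 = confirmed error).
df = pd.read_csv("bmp_results_labeled.csv")   # hypothetical dataset
X, y = df[ANALYTES], df["confirmed_error"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Performance on held-out data; a clinical validation would go well beyond a
# single AUC on a random split.
scores = model.predict_proba(X_test)[:, 1]
print(f"Held-out ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```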

Dr. Matthew Hanna:
Wonderful. And what about in molecular pathology, Dr. Furtado?

Dr. Larissa Furtado:
In molecular pathology, we already use bioinformatics pipelines for NGS analysis, and validating an AI model is, to a certain extent, similar, because there is a need to train multiple models, fine-tune with separate data sets, lock configurations, and validate performance, and all of these steps have corresponding procedures when developing bioinformatics pipelines. So generally the CAP checklist requirements for molecular tests apply well to AI-based methods used in molecular oncology testing.

Dr. Matthew Hanna:
That's right, those checklist requirements for molecular tests really do seem to apply well to AI-based methods in molecular oncology testing. Actually, Dr. Furtado, I recently read an article that looked at mapping some of those CAP checklist requirements and translating them to molecular AI workflows. Would you be able to comment on that?

Dr. Larissa Furtado:
Yeah. We did a study in which we looked at how well the existing CAP checklists would translate to AI-based tests in molecular oncology. What we noticed was that several existing checklist requirements could already be used for implementation of a molecular test that uses AI. For example, we need to validate a test before we implement it clinically, and every time we make changes, we need to revalidate the test. There are other pre-analytic considerations, such as determining the minimum tumor input, which we do normally for sequencing and which most AI-based tests used in molecular testing need as well. And there are existing requirements to determine cutoffs for qualitative tests to differentiate positive from negative; for classifiers, those AI-based methods intended for classification, it's also important to define those thresholds so you can increase the reliability of the call, whether that is a positive or negative call. It gives you a baseline for that.

So there is a lot that translates, which is not surprising, because if you think about the AI models that we use, we are not implementing them as autonomous systems in the laboratory. They're always part of a test. They're either built in as part of a pipeline that analyzes the data, such as the methylation classifiers using DNA array data, or they can be an adjunct to the existing molecular pipeline; you can build an extra model, such as for microsatellite instability analysis, to provide that additional data. So there is a lot that translates already.
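To make the thresholding point concrete, here is a minimal sketch of choosing an operating cutoff for a classifier's score on an independent validation set. The scores and labels are simulated, and the microsatellite instability framing and the 95% sensitivity target are illustrative assumptions, not requirements stated in the episode.

```python
# Hypothetical sketch: pick an operating cutoff for an AI classifier's score
# (for example, a microsatellite-instability call) on an independent
# validation set, rather than accepting a default of 0.5. Scores and labels
# below are simulated placeholders.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Simulated classifier scores for validation cases with known truth labels.
y_true = rng.integers(0, 2, size=500)
scores = np.clip(rng.normal(loc=0.35 + 0.3 * y_true, scale=0.15), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, scores)

# Choose the highest cutoff that still achieves at least 95% sensitivity,
# then document the resulting specificity as part of the validation record.
target_sensitivity = 0.95
ok = tpr >= target_sensitivity
cutoff = thresholds[ok][0]
specificity = 1 - fpr[ok][0]
print(f"Chosen cutoff: {cutoff:.2f}, sensitivity: {tpr[ok][0]:.2f}, "
      f"specificity: {specificity:.2f}")
```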

Dr. Matthew Hanna:
That's great to hear. I mean, I wish everything would translate over as easily, or we would hopefully have solved the issue of AI validation already. Looking at AI validation results, some of them might actually look similar to an assay trending out of range on Levey-Jennings plots. Unfortunately, with AI, it may not be as easily detectable, and we've heard a lot about generalizability concerns: just because a model works well at one site, where it was trained on local data, doesn't mean that a different lab with different patient demographics or different specimen profiles will see the same level of performance. Do our existing validation frameworks in pathology, those conventional frameworks, still work for molecular testing?

Dr. Larissa Furtado:
In general, yes, but there are some gaps. For instance, the current frameworks don't address all the important aspects of data preparation, data management, or the use of independent data sets for training and validating AI models. There aren't clear standards for data quality either, but we know that AI models are only as good as the data they're trained on, and if the training data isn't diverse or representative of the real-world entities and conditions that the model will encounter, the test might not perform well, especially for underrepresented sample types. So it's really important for the training dataset to be aligned with the intended use of the test. We also face challenges with explainability, because unlike bioinformatics pipelines, which are rule-based systems, AI models, particularly neural networks, can be black boxes, which makes it hard to understand how they make decisions; yet the current regulations don't define minimal thresholds for explainability. And we also need strategies for updating models over time. For example, when is a model update just a patch, and when does it require full revalidation?

Dr. Nick Spies:
Yeah, I think that's a really interesting question to dive a little deeper on. At least in clinical pathology, we're used to deterministic systems, where we have hundreds if not thousands of discrete rules built into our middleware or our lab information systems that take direct actions based on whether a value is too high or too low, these really discrete, perfectly explainable sets of heuristics. For these AI models, the complexity is both a blessing and a curse: you can get a lot of increased performance at whatever you're trying to do with some of these more complex models, but it often comes at the expense of that explainability piece. And as you mentioned, the output of these models is only as good as the data we put in, and the data feeding these models is going to be changing all day, every day, as our systems change, our assays change, our analyzers change. So at what point do we really need to stop the works and retrain the models, or revalidate that the model still works? How do we build these real-time monitoring systems on top of the QC that we're already used to performing for our real-time assays? That's a really interesting question, but one that definitely needs to be fleshed out in more detail before we can really feel comfortable as laboratorians applying these models all across the laboratory.
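One way to picture the "monitoring on top of the model" idea Dr. Spies raises is a simple drift check that compares recent model output scores against the distribution observed during validation. This is a minimal sketch with simulated data; the window size, alert threshold, and use of a two-sample KS test are illustrative assumptions, not a monitoring scheme described in the episode.

```python
# Hypothetical sketch: flag drift in a model's output scores by comparing the
# recent production distribution against the validation baseline. Data, window
# size, and alert threshold are assumptions for illustration only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, size=2000)   # scores from the validation set
recent_scores = rng.beta(2, 4, size=250)      # last ~250 production cases

stat, p_value = ks_2samp(baseline_scores, recent_scores)

# A small p-value suggests the recent score distribution has shifted away from
# the validation baseline (e.g., after an analyzer, assay, or population
# change), which would trigger investigation and possibly revalidation.
ALERT_P = 0.01
if p_value < ALERT_P:
    print(f"Drift alert: KS statistic {stat:.3f}, p = {p_value:.4f}")
else:
    print(f"No drift flagged: KS statistic {stat:.3f}, p = {p_value:.4f}")
```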

Dr. Matthew Hanna:
Those are all really great examples. Thank you for sharing. I think this all circles back to patient safety. The messaging we can create around AI is that it's not really about the delta in performance; it's about the safety net that sits behind all the patients getting routine testing, catching issues and acting as a triaging workflow for those rendering interpretations and diagnoses. And we know that CLIA requires validation for any lab-developed test. Thankfully, we can still do those, and AI is no exception. I think most labs won't be building their own AI models. They are going to be licensing them or working with vendors to deploy them at their own hospitals, but vendor claims don't replace the local validation or local verification that's needed.

The labs and the medical directors really still have to confirm that the AI works in their own lab, on their own cases, with their own equipment. So validation isn't a checkbox; it's a continuous process. We talked about ongoing monitoring, error tracking, and quality management systems. These are all areas that continue to grow and are really essential in deploying any test that involves machine learning or AI. Dr. Spies, I've been hearing a lot about really exciting pioneering work on a new role that might help with our AI deployments and ongoing monitoring, called AI implementation specialists. I would love to hear more about that.

Dr. Nick Spies:
Yeah, I think it's obviously a really exciting time to be in pathology, with all of the new technologies and new tools that we'll certainly be able to start playing with and that a lot of laboratories will start implementing themselves. But with all of this new technology, there comes a need for medical directors, laboratory staff, really all of us, to get as familiar as possible not only with the nuts and bolts of how they work, but much more importantly with how we ensure that they are safely and effectively doing what we are asking them to do. And so this idea of the pathologist as the implementation specialist is, I think, a world that we certainly should be moving towards and getting more comfortable with as medical directors. Quality management is part and parcel of what we do all day, every day, and a lot of that relies on a deep understanding of the technologies we're using, the assays and how they work, and most importantly, their failure modes and how we address them. All of that will be equally, if not more, important as we move into the age of AI across our systems. And so having pathologists who understand enough about the technology, about these validation studies, the ongoing monitoring and deployment, and all of the added nuance that comes with these really powerful tools will just be another skillset that will be really important for us to develop organically and to explicitly start to teach in our residency programs, our fellowships, and beyond.

Dr. M. E. de Baca:
This has been a fascinating conversation. As you can see, pathology has strong frameworks that can guide AI integration, but we need to adapt them to meet AI's unique challenges. The emerging roles of AI implementation specialists and the implementation guide will be critical. If you're attending CAP25, please check out the presentations at our innovation hub. Let me be the first here to thank Drs. Furtado, Spies, and Hanna for joining us today. And thank you, our listeners, for tuning in to CIPI Connections. Thanks for joining us for insights, updates, and the people behind the innovation. This has been CIPI Connections, where ideas meet action in pathology.