Tackling healthcare AI's bias, regulatory and inventorship challenges
Photo: Dr. Terri Shieh-Newton
While AI adoption is increasing in healthcare, these technology advances bring privacy and content risks with them.
According to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, healthcare organizations must take an approach to AI that best positions them for growth, including managing:
- Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcomes for patients.
- Inventorship claims on intellectual property. Organizations must identify ownership of IP as AI begins to develop solutions faster and smarter than humans can.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.
Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on training from existing data. But what happens when that training set comes from data with inherent bias? Biases can appear in many forms within AI, starting with the training dataset.
Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used for discovering a new drug, then the outcome of the generative AI model can be a drug that works in only a subset of the population – or one that has only partial functionality.
Desirable traits of novel drugs include better binding to the target and lower toxicity. If the training set excludes patients of a certain gender or race (and the genetic differences inherent therein), then the proposed drug compounds are not as robust as when the training set includes a diversity of data.
This leads to questions of ethics and policy, where the most marginalized patients – those who need the most help – could be the group excluded from the solution because they were not represented in the underlying data the generative AI model used to discover that new drug.
One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?
By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
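As a concrete illustration, the sketch below audits the demographic makeup of a hypothetical training cohort before it is used for model training. The column names, categories and the 10% representation floor are assumptions made for illustration, not a real dataset schema or a clinical standard.

```python
import pandas as pd

# Hypothetical patient cohort; the columns ("sex", "race", "age") and
# values are invented for illustration, not a real dataset schema.
cohort = pd.DataFrame({
    "sex":  ["F", "M", "F", "M", "F", "F"],
    "race": ["Black", "White", "Asian", "White", "Hispanic", "White"],
    "age":  [34, 61, 47, 72, 29, 55],
})

def representation_report(df: pd.DataFrame, column: str) -> pd.Series:
    """Share of the cohort falling in each category of a demographic column."""
    return df[column].value_counts(normalize=True)

# Flag any category below a chosen representation floor (10% here is an
# arbitrary example threshold), a cue to collect more samples before training.
FLOOR = 0.10
for col in ("sex", "race"):
    shares = representation_report(cohort, col)
    print(f"--- {col} ---\n{shares}")
    under = shares[shares < FLOOR]
    if not under.empty:
        print(f"Underrepresented in {col}: {list(under.index)}")
```

An audit like this does not fix bias by itself, but it makes gaps visible early enough to recruit a more representative sample before the model is trained.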
Q. Regarding another generative AI challenge, what is the regulatory landscape's response to data, and how does that impact model development?
A. Generative AI can be used for several purposes in the regulatory context. One is imputing missing data from trials: a model trained on accurate data can produce synthetic data that fills in the gaps.
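To make the imputation idea concrete, here is a minimal sketch that uses scikit-learn's IterativeImputer as a simple stand-in for a full generative model: each missing trial measurement is predicted from the observed columns. The toy matrix and its values are invented for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy trial matrix (rows = patients, columns = lab measurements);
# np.nan marks values missing from the record. All numbers are made up.
trial = np.array([
    [5.1, 120.0, 0.9],
    [4.8, np.nan, 1.1],
    [np.nan, 135.0, 0.8],
    [5.5, 128.0, np.nan],
])

# Each missing value is predicted from the observed columns,
# iterating until the imputed values stabilize.
imputer = IterativeImputer(random_state=0)
completed = imputer.fit_transform(trial)
print(completed.round(2))
```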
Synthetic data of this kind can be helpful when HIPAA regulations prevent patient data from being released to third parties without the patient's consent. Another way generative AI can be used is to reduce the number of patients in a clinical trial (for example, the number of patients given a placebo).
A biological system can be modeled for each individual who would otherwise be given a placebo, and that model can stand in for the control arm when testing a drug, thereby reducing the number of patients needed for a clinical trial. This reduces the cost and time needed to run the trial.
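A minimal sketch of that idea, assuming a fitted disease-progression model is available: every enrolled patient receives the drug, and the model predicts each patient's counterfactual placebo outcome, forming a synthetic control arm. All numbers and the progression rule are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial: 20 enrolled patients, all of whom receive the drug.
# "baseline" is each patient's pre-treatment biomarker level (invented).
baseline = rng.normal(100.0, 10.0, size=20)

def predicted_placebo_outcome(b: np.ndarray) -> np.ndarray:
    # Stand-in for a fitted generative disease model: untreated patients
    # are assumed to worsen by ~5% plus noise. Purely illustrative.
    return b * 1.05 + rng.normal(0.0, 2.0, size=b.shape)

treated = baseline * 0.92 + rng.normal(0.0, 2.0, size=20)  # observed outcomes
synthetic_control = predicted_placebo_outcome(baseline)    # modeled "placebo arm"

print(f"estimated treatment effect: {treated.mean() - synthetic_control.mean():.1f}")
```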
However, the data produced by generative AI models needs to be viewed with care by regulatory agencies to ensure that it is an accurate representation of the data that would be produced if the drug were tested in actual humans.
To that end, the FDA is currently evaluating the use of data generated with AI/machine learning as part of drug discovery and patient trials.
Various branches of the FDA, such as the Center for Drug Evaluation and Research, the Center for Biologics Evaluation and Research and the Center for Devices and Radiological Health, have collaborated on an initial discussion paper meant to gather feedback from stakeholder groups and explore relevant considerations for the use of AI/machine learning in the development of drugs and biological products.
The FDA saw more than 100 submissions in 2021 that contained information generated using AI/machine learning, and that number has continued to increase. As of now, there is no immediate change in patient care, but there may be change soon, depending on how quickly the FDA revises its regulatory process to account for the fact that patient trials and treatments are now being designed with AI/machine learning.
Q. What impact can inventorship claims on intellectual property – another challenge – have on generative AI in healthcare?
A. This question of inventorship for inventions made by AI is not fully settled.
As of now, current U.S. case law says AI cannot be named as an inventor on a patent. In June 2022, the USPTO [U.S. Patent and Trademark Office] announced the formation of its AI/Emerging Technologies Partnership, which brings stakeholders together through a series of engagements to share ideas, feedback, experiences and insights on the intersection of intellectual property and AI/emerging technologies.
The USPTO held two listening sessions, one on the East Coast and one on the West Coast, to hear from various stakeholders about how to address inventorship for AI-assisted inventions. There was also a period for public comment, which ended on May 15, 2023. The USPTO should be making policy decisions about how to handle inventorship at some point in the future.
One aspect for consideration is the type of AI/machine learning tool used. A person using a straightforward, off-the-shelf machine learning model – one that has been in the public domain for a while – may have less (or no) inventive contribution than a person who has to adjust the datasets and/or the way the model works.
Generative AI relies on training data to generate new content. So, for example, if a person curates a database differently to reduce bias and produce a better output, then that person arguably has contributed to the invention. If a person adjusts the weighting of a neural network to achieve a more accurate output, that is a contribution to the invention as well.
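In practice, "adjusting the weighting of a neural network" often means fine-tuning an existing model on curated data, as in this hypothetical PyTorch sketch; the model, data and hyperparameters are all illustrative, not a description of any real system at issue.

```python
import torch
import torch.nn as nn

# Minimal sketch: fine-tuning (re-weighting) a small off-the-shelf model
# on a curated training batch. Everything here is invented for illustration.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.randn(16, 4)  # hypothetical curated batch: 16 samples, 4 features
y = torch.randn(16, 1)  # hypothetical targets

opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(100):  # each step nudges the network's weights toward the data
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```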
Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.