It seems like there is a new story every day about how artificial intelligence (AI) is going to disrupt something. That is true for many industries, but life sciences and pharma are among the areas where we are seeing especially high interest and potential.
The effectiveness of drug discovery, clinical trials and regulatory approval depends heavily on the variety and volume of relevant biomedical data and the ability to efficiently integrate data from multiple sources, both proprietary and public.
Here, AI is already proving itself a critical partner in helping companies scale key business processes, overcome operational challenges and realize new opportunities. Without a doubt, large language models (LLMs) like BioGPT offer a lot of promise, but, as things stand today, they come with real risks around security, intellectual property and factual accuracy. In scientific research, the stakes are simply too high for such risks.
Companies that want to take advantage of the benefits promised by AI-based technologies need to know how to cut through the noise, understand what is achievable, and deliver value safely and securely today.
We talked about these issues in our recent webinar, “Applying AI to Life Sciences in the Age of ChatGPT,” using the expert.ai Platform for Life Sciences to demonstrate how to effectively utilize the data that drives drug discovery, clinical trials and regulatory review processes. This blog post summarizes five of the questions answered in the webinar. Check out the full event recording below to see what you missed.
Question 1. What are some of the outcomes that we can expect from a successful AI implementation?
Generally speaking, it's essential to define realistic expectations at each phase of an AI implementation. This is critical for teams to be able to validate progress and set the expected outcomes for a specific use case. Defining your "success criteria" will depend largely on your business goals and will involve technology and data, as well as input from your domain experts. Here are just a few outcomes to consider:
- Significant efficiency gains in accessing information across content silos
- Increased accuracy of the auto-classification model per disease area
- The ability to scale the model to classify a high volume of publications in a short time frame
- Greater confidence in the accuracy and security of the results
One point to underscore here is the importance of teams having confidence in the system they are using. People need to understand how the system works and what data is used to improve results. A black-box system whose inner workings no one can explain is not a system users will feel comfortable relying on. Explainability, knowing how a system operates and how it arrives at its results, is what creates trust.
Question 2. Can we still have a high-performing AI model even if we don’t have huge amounts of data?
Data scarcity is a reality for many processes. However, not all models need to have extensive training. In certain situations, pre-trained models can work well. Many of these models can be optimized for size and hosting inside the firewall to reduce concerns about proprietary data being shared outside the organization.
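To make the "inside the firewall" point concrete, here is a minimal sketch, not the expert.ai platform, of running a pre-trained biomedical language model entirely on local infrastructure with the Hugging Face transformers library. The model name is only an illustrative public checkpoint; any locally mirrored model directory works the same way.

```python
# Minimal sketch: run a pre-trained biomedical language model locally,
# so no proprietary text has to leave the organization.
# "microsoft/biogpt" is an illustrative public checkpoint, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/biogpt",  # or a path to a model cached inside the firewall
)

prompt = "The most common adverse events reported for metformin include"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```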
Data standardization also plays an important role in supporting AI model performance. For instance, language translation and medical nomenclatures can help convert the language in medical records to a reference language and to normalized terms. With a standard nomenclature and natural language understanding, people can make inferences from the semantics, such as whether something is a drug or a candidate drug. Together, these steps reduce the amount of data needed to train the models.
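As a purely illustrative sketch of the normalization idea, the dictionary and terms below are invented; a production system would rely on a full nomenclature such as SNOMED CT or MedDRA plus NLP, but the principle of mapping many surface forms to one preferred term is the same.

```python
# Hypothetical sketch of nomenclature-based normalization: free-text mentions
# from medical records are mapped to a single preferred term, so a model sees
# one feature ("myocardial infarction") instead of many surface forms.
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "acute myocardial infarction": "myocardial infarction",
    "high blood pressure": "hypertension",
    "htn": "hypertension",
}

def normalize(term: str) -> str:
    """Return the preferred nomenclature term for a raw mention."""
    return SYNONYMS.get(term.strip().lower(), term.strip().lower())

mentions = ["Heart attack", "acute myocardial infarction", "HTN"]
print([normalize(m) for m in mentions])
# ['myocardial infarction', 'myocardial infarction', 'hypertension']
```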
Question 3. What is the role of subject matter experts in the AI process and how much subject matter expertise is needed to validate or interact with the machine?
Annotation and validation activities are time-consuming and critical to the success of any AI implementation. This is especially true for natural language solutions, where data scarcity requires extremely accurate annotation and validation.
In the context of AI for life sciences and healthcare, the lack of available subject matter experts (SMEs) can dramatically slow down the development of an AI model. Anticipating the level of expertise needed and the tools that can be used to speed up annotations or validate results (explainability) will increase the chances of a successful AI implementation.
Subject matter experts must be involved early in the process to build alignment on AI objectives, provide the expertise needed to guide the implementation, and validate the AI approach and its results.
Question 4. We have a considerable wealth of domain knowledge—how can this be embedded in the model?
As a solution provider, this is another area where collaboration with SMEs is critical. Whether we are building a new language model or using an existing one, technology experts and SMEs need to work closely together to make sure that you are able to capture and embed your existing domain expertise rather than letting a model find patterns on its own based on a training set.
This allows you to define the essential implementation steps that the model needs to address: identifying the targeted patient population, selecting the appropriate data sources (scientific publications, medical notes, radiology reports, etc.) and medical nomenclatures, ensuring the proper description of relevant clinical features, and sharing knowledge of any known biomarkers and key clinical mechanisms.
For instance, measuring the toxicity of a treatment over time requires capturing side effects, preconditions and other risk factors already known to clinicians from prior diagnoses. Another scenario: nomenclatures are a good example of structured knowledge we can use in our models. Knowing which biomarkers to target (whether they are genes/proteins or clinical characteristics known to be important in a disease mechanism) is key to making sure these features are included in the model.
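For illustration only, here is a hypothetical sketch of how known biomarkers might be encoded as explicit model features rather than left for the model to rediscover from a small training set; the biomarker list and record fields are invented examples, not a prescribed schema.

```python
# Hypothetical sketch: encode domain knowledge (biomarkers clinicians already
# know matter for a disease) as explicit features in a patient representation.
KNOWN_BIOMARKERS = ["HER2", "ER", "PR"]  # illustrative breast cancer biomarkers

def patient_features(record: dict) -> list[float]:
    """Build a feature vector: known biomarker status first, then other covariates."""
    features = [1.0 if record.get(b) == "positive" else 0.0 for b in KNOWN_BIOMARKERS]
    features.append(float(record.get("age", 0)) / 100.0)  # simple scaled covariate
    return features

print(patient_features({"HER2": "positive", "ER": "negative", "age": 62}))
# [1.0, 0.0, 0.0, 0.62]
```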
Question 5. Can AI models provide new biomedical insights?
In a word, yes. While it's essential to embed existing domain knowledge in an AI model to avoid reinventing the wheel, standardize features and identify potential bias, models can also be built with capabilities that help fill in what is missing.
For example, an NLP model can identify patterns in gene/protein interactions that could lead to the identification of biomarkers that are correlated with the risk of progression for a certain disease area. Similarly, a prediction model could identify risk factors in patient medical records that increase the toxicity of a treatment.
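As a toy illustration of this kind of pattern finding (the abstracts, gene list and scoring below are invented, and a real pipeline would use trained entity recognition and relation extraction rather than string matching), simple co-occurrence counting shows the basic idea of surfacing candidate biomarkers linked to disease progression.

```python
# Toy sketch: rank genes by how often they co-occur with progression-related
# language across a corpus of abstracts. The data here is invented for
# illustration; real systems use trained NER and relation-extraction models.
from collections import Counter
import re

abstracts = [
    "Overexpression of TP53 was associated with rapid disease progression.",
    "BRCA1 mutations correlated with progression-free survival.",
    "No link between EGFR expression and tumor size was observed.",
    "TP53 status predicted progression in the validation cohort.",
]
genes = {"TP53", "BRCA1", "EGFR"}
progression_terms = re.compile(r"\bprogression\b", re.IGNORECASE)

counts = Counter()
for text in abstracts:
    if progression_terms.search(text):
        for gene in genes:
            if gene in text:
                counts[gene] += 1

print(counts.most_common())  # [('TP53', 2), ('BRCA1', 1)]
```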
Catch up on the full discussion and see the expert.ai Platform for Life Sciences at work in the webinar, “Applying AI to Life Sciences in the Age of ChatGPT.”
Applying AI to Life Sciences in the Age of ChatGPT
The new expert.ai Platform for Life Sciences combines industry language models and AI-based natural language capabilities to transform health and scientific data into insights.
Watch the on-demand webinar to see the platform at work and discover the expert.ai advantage for effectively utilizing the data that drives drug discovery, clinical trials and regulatory approval processes.