
Black Americans are twice as likely as their White counterparts to develop multiple myeloma, but their participation rate in clinical trials of treatments for the bone marrow cancer is a dismal 4.8%. Now drug giant Johnson & Johnson says it’s had success increasing that share by using an untraditional tool: artificial intelligence.
Algorithms helped J&J pinpoint community centers where Black patients with this cancer might seek treatment. That information helped lift the Black enrollment rate in five ongoing studies to about 10%, the company says. Prominent academic centers or clinics that have traditionally done trials are often not easily accessible to minority or low-income patients because of distance or cost.
J&J is now using AI to increase diversity in 50 trials and plans to take that number to 100 next year, says Najat Khan, chief data science officer of its pharmaceutical unit. One skin disease study that used cellphone snapshots and e-consent forms to enable patients to participate in the trial remotely managed to raise enrollment of people of color to about 50%, she says.

“You have claims data, connected to electronic health records data, connected to lab tests, and all of that de-identified and anonymized,” Khan says. “The machine-learning algorithm computes and creates a heat map for you as to where the patients eligible for that trial are.”
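The kind of geographic tally Khan describes can be sketched in a few lines of Python. This is purely illustrative, not J&J's system: the record fields, the region codes, and the eligibility rule are hypothetical stand-ins for what a real pipeline would derive from de-identified claims and health-record data.

```python
# Illustrative sketch: aggregate de-identified patient records into a
# per-region "heat map" of trial-eligible patients. Field names and the
# eligibility rule are invented for this example.
from collections import Counter

def eligibility_heat_map(records, is_eligible):
    """Count eligible patients by coarse region (e.g., a ZIP code prefix)."""
    counts = Counter()
    for rec in records:
        if is_eligible(rec):
            counts[rec["region"]] += 1
    return counts

# Toy de-identified records: diagnosis code, age, and coarse region only.
records = [
    {"region": "303", "diagnosis": "C90.0", "age": 67},  # multiple myeloma
    {"region": "303", "diagnosis": "C90.0", "age": 72},
    {"region": "606", "diagnosis": "C90.0", "age": 58},
    {"region": "606", "diagnosis": "I10",   "age": 61},  # hypertension, not eligible
]

heat = eligibility_heat_map(
    records, lambda r: r["diagnosis"] == "C90.0" and 18 <= r["age"] <= 80
)
print(heat.most_common())  # regions ranked by eligible-patient count
```

In a production system the ranked regions would then be matched against nearby community clinics, which is the step that surfaced the new trial sites.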
In recent decades, evidence has been growing that medicines don’t affect all people the same way. And the Covid-19 pandemic highlighted deep ethnic disparities in access to health care. In response, regulators and advocacy groups have been pressuring drugmakers around the world to include underrepresented racial and ethnic groups in new treatment trials, not only to improve biomedical knowledge but also to build trust in medical systems among minority groups. Many companies are turning to AI for help.
The industry could certainly use some. A recent analysis in the journal Health Affairs found that fewer than 20% of drugs approved in 2020 had data on treatment benefits or side effects for Black patients. Financially and socially, the lack of diversity in trials will cost the US “billions of dollars” over the next three decades, according to a 2022 report by the National Academies of Sciences, Engineering and Medicine, which pointed to factors such as premature deaths and a lack of effective medical intervention. The report also said trials that aren’t inclusive hinder innovation, fail because of low enrollment rates, undermine trust and worsen health disparities.
Regulators increasingly want drugmakers to consider such disparities when vetting new treatments. In 2014 the European Medicines Agency introduced guidance requiring drugmakers to justify nonrepresentative clinical trials. Australia’s Therapeutic Goods Administration’s 2022 guidelines say drug study populations should represent the makeup of the broader population. And the US will soon require diversity action plans for clinical trials submitted to the Food and Drug Administration, a provision that was included in the 2023 government spending bill enacted in December.
Clinical trials are hard to run because they involve coordinating with multiple parties: patients, hospitals and contract research companies. So pharma companies have often simply relied on well-established academic medical centers, where populations may not be as diverse. But computer algorithms can help researchers quickly review vast troves of data on past medical studies, search through zillions of patient medical records from around the world and quickly assess the distribution of disease in a population. That data can help drugmakers find new networks of doctors and clinics with access to more diverse patients who fit into their clinical trials more easily—sometimes months faster and much more cheaply than if humans were reviewing the data.
“They [pharmaceutical companies] have to ask physicians to think about a patient when they see them and then think about ethnicity and race—it’s just making a difficult task even more difficult,” says Wout Brusselaers, founder of Deep 6 AI, a startup that sells AI-powered software that matches patients and trials.
AI poses new challenges for drugmakers, though: the technology risks making things worse than they already are by introducing what’s known as algorithmic bias.
In 2019, for instance, academics said they uncovered unintentional racial bias in a software product sold by Optum Inc., a major health services company, which health centers across the country used to predict which patients needed high-risk care. The algorithm based its predictions on patients’ health-care spending rather than on the severity of their illness. As a result, only 18% of the patients flagged for additional help were Black, a share that would have approached 47% had the algorithm ranked patients by health needs instead, according to a study of the algorithm’s effects at one institution that was published in the journal Science. Its authors say that skew is typical of the risk-prediction tools that medical centers and government agencies use to manage care for some 200 million people nationwide, and that such bias likely operates in other software as well.
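The failure mode the researchers describe is proxy bias: optimizing for a measurable stand-in (spending) instead of the quantity that matters (illness severity). A minimal sketch with invented numbers shows how the two targets can disagree when one group generates lower costs at the same level of sickness:

```python
# Minimal sketch of proxy bias: ranking patients by past spending instead
# of illness severity. Numbers are invented; the point is that a cost-based
# ranking under-selects a group that spends less while being equally sick.

patients = [
    # (group, severity on a 0-10 scale, annual spending in dollars)
    ("A", 8, 9000),
    ("A", 5, 6000),
    ("B", 9, 5500),  # the sickest patient, but a low spender
    ("B", 6, 4000),
]

top_by_cost = max(patients, key=lambda p: p[2])      # the proxy target
top_by_severity = max(patients, key=lambda p: p[1])  # the true need

print(top_by_cost[0])      # group "A": the cost proxy picks the high spender
print(top_by_severity[0])  # group "B": severity picks the sickest patient
```

Because the training target itself encodes unequal access to care, no amount of model tuning fixes the skew; the remedy the study's authors proposed was changing the label being predicted.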
Optum, a unit of UnitedHealth Group Inc., says the rules-based algorithm is not racially biased. “The study in question mischaracterized a cost prediction algorithm used in a clinical analytics tool based on one health system’s incorrect use of it, which was inconsistent with any recommended use of the tool,” the company said in a response to questions.
The FDA is considering drafting recommendations for companies that are submitting AI applications for drug development to ensure their models don’t inadvertently discriminate against underserved patients.
“Additional regulatory clarity may be needed in the future, especially as we see the emergence of new AI technologies,” says Tala Fakhouri, associate director for policy analysis at the FDA. Such regulatory clarity would “take into consideration algorithmic bias.”
Some critics point out other problems. Running trials remotely or providing transportation or parking vouchers for participants would draw more minorities into trials than using AI, says Otis Brawley, a professor of oncology at Johns Hopkins University. Black populations in the US are disproportionately poor, and the hospitals looking after them often don’t have the bandwidth for extra projects such as clinical trials, he says.
“AI could do that, but I could do it, too, as long as I were allowed to pay for people’s parking—as long as I did it in resourced wealthy places,” says Brawley, who previously worked at Grady Memorial Hospital in Atlanta, where, he says, he didn’t have resources to run clinical trials. Even at Johns Hopkins, he says, he loses minority participants at higher rates than White ones because of parking costs.
Walgreens Boots Alliance Inc.—which began running clinical trials for drugmakers in 2022—has a different approach to encouraging equity in studies. It uses AI tools to locate eligible patients from diverse groups quickly, but it relies on local pharmacists at its almost 9,000 stores across the US to recruit individuals from underrepresented groups, says Ramita Tandon, who heads the clinical trial business at the pharmacy chain. “We have posters, flyers,” with information about trials, she says, or simply “pharmacists that are having the dialogue with the patients when they pick up their scripts.”

This method helped improve participation of Black patients in one cardiovascular study to 15%, Tandon says, a number that exceeds the percentage of Black people in the general population. The new FDA diversity requirements have generated lots of interest from large pharmaceutical companies in Walgreens’ clinical trials business, she says.
Elsewhere, the use of AI is going well beyond race and ethnicity. Japan’s Takeda Pharmaceutical Co. uses AI to help attract and retain diverse populations in its clinical trials, says Andrew Plump, the drugmaker’s head of research and development. AI has helped the company personalize complicated letters of consent to patients in minority groups such as the LGBTQ community. Technology can adjust wording to correspond to how people identify themselves by gender and sexual orientation, which engenders greater trust in the process, he says.
New York-based H1, which uses generative AI to help match drugmakers with trial sites, says it’s working to remove bias from the data it collects. For example, its data on race and ethnicity can be derived from credit card and bank statements, which means it might not be capturing people who are less well off financially, says Ariel Katz, H1’s chief executive officer.
“We are doing a lot of work to make sure that we feel like our data sets are comprehensive, not biased, but there’s more work to do there,” he says.
J&J now has an AI ethics council, which includes input from academics, and it’s monitoring trials to remove data bias while increasing representation, Khan says. “We always have a human in the loop,” she says. “My team probably spends 60% or 70% of the time on this aspect versus anything else, which is making sure data is fit for purpose and appropriate and representative, and if not, procuring other data sets to make it more representative.”
Source: Bloomberg