AI Drug Discovery Systems Might Be Repurposed to Make Chemical Weapons, Researchers Warn
In 2020 Collaborations Pharmaceuticals, a company that specializes in looking for new drug candidates for rare and communicable diseases, received an unusual request. The private Raleigh, N.C., firm was asked to make a presentation at an international conference on chemical and biological weapons. The talk dealt with how artificial intelligence software, typically used to develop drugs for treating, say, Pitt-Hopkins syndrome or Chagas disease, might be diverted to more nefarious purposes.
In responding to the invitation, Sean Ekins, Collaborations’ chief executive, began to brainstorm with Fabio Urbina, a senior scientist at the company. It did not take long for them to come up with an idea: What if, instead of using animal toxicology data to screen drug candidates for dangerous side effects, Collaborations put its AI-based MegaSyn software to work generating a compendium of toxic molecules that were similar to VX, a notorious nerve agent?
The team ran MegaSyn overnight and came up with 40,000 substances, including not only VX but also other known chemical weapons, as well as many completely new, potentially toxic substances. All it took was a bit of programming, open-source data, a 2015 Mac computer and less than six hours of machine time. “It just felt a little surreal,” Urbina says, remarking on how the software’s output was similar to the company’s commercial drug-development process. “It wasn’t any different from something we had done before—use these generative models to generate hopeful new drugs.”
Collaborations presented the work at Spiez CONVERGENCE, a conference in Switzerland that is held every two years to assess new trends in biological and chemical research that might pose threats to national security. Urbina, Ekins and their colleagues even published a peer-reviewed commentary on the company’s research in the journal Nature Machine Intelligence, and they went on to brief the White House Office of Science and Technology Policy on the findings. “Our sense is that [the research] could form a useful springboard for policy development in this area,” says Filippa Lentzos, co-director of the Centre for Science and Security Studies at King’s College London and a co-author of the paper.
The resemblance to the company’s routine day-to-day work was eerie. The researchers had previously used MegaSyn to generate molecules with therapeutic potential that have the same molecular target as VX, Urbina says. These drugs, called acetylcholinesterase inhibitors, can help treat neurodegenerative conditions such as Alzheimer’s. For their study, the researchers had merely asked the software to generate substances similar to VX without inputting the exact structure of the molecule.
Many drug discovery AIs, including MegaSyn, use artificial neural networks. “Basically, the neural net is telling us which roads to take to lead to a specific destination, which is the biological activity,” says Alex MacKerell, director of the Computer-Aided Drug Design Center at the University of Maryland School of Pharmacy, who was not involved in the research. The AI systems “score” a molecule based on certain criteria, such as how well it either inhibits or activates a specific protein. A higher score tells researchers that the substance might be more likely to have the desired effect.
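To make the scoring idea concrete, here is a minimal, purely illustrative Python sketch. It assumes a toy setup of my own: molecules encoded as short binary fingerprints and a tiny untrained feedforward network standing in for a real activity model. None of this reflects MegaSyn’s actual architecture.

import math
import random

random.seed(0)

# Illustrative only: each molecule is represented as a short binary
# "fingerprint" (real drug-discovery fingerprints run to 1,024+ bits),
# and a tiny feedforward network maps it to a single activity score.
# The weights are random placeholders; a real model would be trained on
# experimental assay data for a specific protein target.
FP_BITS = 16
HIDDEN = 8
W1 = [[random.gauss(0, 0.5) for _ in range(FP_BITS)] for _ in range(HIDDEN)]
W2 = [random.gauss(0, 0.5) for _ in range(HIDDEN)]

def score(fingerprint):
    """Map one fingerprint to a score in (0, 1); higher = stronger predicted effect."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, fingerprint))) for row in W1]  # ReLU layer
    logit = sum(w * h for w, h in zip(W2, hidden))
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid squashes the output to a 0-1 score

# Rank a small batch of random candidate "molecules" by predicted activity.
candidates = [[random.randint(0, 1) for _ in range(FP_BITS)] for _ in range(5)]
for rank, fp in enumerate(sorted(candidates, key=score, reverse=True), start=1):
    print(f"candidate {rank}: predicted activity = {score(fp):.3f}")

In score-guided pipelines of this kind, the ranking typically feeds back into the generative model, steering it toward higher-scoring structures, which is the "roads to a destination" dynamic MacKerell describes.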
In its study, the company’s scoring method revealed that many of the novel molecules MegaSyn generated were predicted to be more toxic than VX, a realization that made both Urbina and Ekins uncomfortable. They wondered if they had already crossed an ethical boundary by even running the program and decided not to do anything further to computationally narrow down the results, much less test the substances in any way.
“I think their ethical intuition was exactly right,” says Paul Root Wolpe, a bioethicist and director of the Center for Ethics at Emory University, who was not involved in the research. Wolpe frequently writes and thinks about issues related to emerging technologies such as artificial intelligence. Once the authors felt they could demonstrate that this was a potential threat, he says, “their obligation was not to push it any further.”
But some experts say that the research did not suffice to answer important questions about whether using AI software to find toxins could, in practice, lead to the development of an actual biological weapon.
“The development of actual weapons in past weapons programs [has] shown, time and again, that what seems possible theoretically may not be possible in practice,” comments Sonia Ben Ouagrham-Gormley, an associate professor in the biodefense program at George Mason University’s Schar School of Policy and Government, who was not involved with the research.
Despite that challenge, the ease with which an AI can rapidly generate a vast quantity of potentially hazardous substances could still speed up the process of creating lethal bioweapons, says Elana Fertig, associate director of quantitative sciences at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University, who was also not involved in the research.
To make it harder for people to misuse these technologies, the authors of the paper propose several ways to monitor and control who can use them and how, including waiting lists that would require users to undergo a prescreening process to verify their credentials before they could access models, data or code that could be readily misused.
They also suggest presenting drug discovery AIs to the public through an application programming interface (API), an intermediary that lets two pieces of software talk to each other. A user would have to specifically request molecule data from the API. In an e-mail to Scientific American, Ekins wrote that an API could be structured to generate only molecules that would minimize potential toxicity and to “demand the users [apply] the tools/models in a specific way.” Access to the API could also be restricted to vetted users, and a cap could be placed on the number of molecules a user could generate at once. Still, Ben Ouagrham-Gormley contends that without a demonstration that the technology could readily foster bioweapon development, such regulation could be premature.
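As a rough illustration of how such an API gate might work, consider the following Python sketch. It is a minimal mock-up, not a description of any real service: the key list, the toxicity predictor, the generation stub and the specific limits are all hypothetical stand-ins for the prescreening, filtering and rate caps described above.

import time

MAX_PER_HOUR = 100          # hypothetical cap on molecules per user per hour
TOXICITY_THRESHOLD = 0.5    # hypothetical cutoff above which results are withheld

verified_users = {"api-key-123"}   # keys issued only after credential prescreening
usage = {}                         # api_key -> timestamps of past requests

def predict_toxicity(molecule: str) -> float:
    """Placeholder for a trained toxicity predictor (hypothetical)."""
    return 0.1

def generate_candidates(query: str, n: int) -> list:
    """Placeholder for the underlying generative model (hypothetical)."""
    return [f"{query}-candidate-{i}" for i in range(n)]

def handle_request(api_key: str, query: str, n: int) -> list:
    # Credential check: only prescreened users hold a valid key.
    if api_key not in verified_users:
        raise PermissionError("unverified user: access denied")
    # Rate limit: cap how many molecules one user can generate per hour.
    now = time.time()
    recent = [t for t in usage.get(api_key, []) if now - t < 3600]
    if len(recent) + n > MAX_PER_HOUR:
        raise RuntimeError("hourly generation limit exceeded")
    usage[api_key] = recent + [now] * n
    # Toxicity filter: return only molecules the predictor scores as low-risk.
    return [m for m in generate_candidates(query, n)
            if predict_toxicity(m) < TOXICITY_THRESHOLD]

print(handle_request("api-key-123", "acetylcholinesterase-inhibitor", 3))

The appeal of the API approach, as the sketch suggests, is that every safeguard the authors mention can sit in one thin layer between the user and the model rather than being bolted onto the model itself.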
For their part, Urbina and Ekins view their work as a first step in drawing attention to the issue of misuse of this technology. “We don’t want to portray these things as being bad because they actually do have a lot of value,” Ekins says. “But there is that dark side to it. There is that note of caution, and I think it is important to consider that.”