
AI Can Design Novel, Dangerous DNA That Evades Biosecurity Screening, Study Reveals


A groundbreaking study published on October 2, 2025, has revealed a significant vulnerability in the global biosecurity infrastructure, demonstrating that artificial intelligence can design potentially dangerous biological materials that slip past the screening systems used by DNA synthesis companies. Researchers led by Microsoft, in collaboration with industry partners like Twist Bioscience and the International Gene Synthesis Consortium, used open-source AI protein design tools to generate tens of thousands of synthetic versions of known toxins and viral components. The study, which was conducted entirely via computer simulation, found that while current screening software is effective at flagging known harmful DNA sequences, it can be bypassed by AI-generated designs that are functionally similar but structurally different. The findings were detailed in the journal Science.

The research team, led by Microsoft's Chief Scientific Officer Eric Horvitz, set out to "red team" existing biosecurity protocols—a method borrowed from cybersecurity to identify weaknesses by simulating an attack. They used AI to "paraphrase" the genetic code of 72 harmful proteins, including potent toxins like ricin and botulinum neurotoxin, creating over 76,000 new digital blueprints. When these novel sequences were tested against the screening software used by major DNA manufacturers, a large number of them evaded detection. In one test case involving ricin, up to 100% of the AI-generated variants were missed by the screening systems because their altered sequences no longer matched the known signatures of concern. This highlights a critical gap: current defenses are built to recognize known threats, but AI can create novel ones that are functionally equivalent. "AI advances are fueling breakthroughs in biology and medicine," Horvitz stated, but "with new power comes responsibility for vigilance and thoughtful risk management."

Upon discovering the vulnerability in late 2023, the researchers worked discreetly for 10 months with DNA synthesis companies and biosecurity experts to develop and distribute a "patch" to improve detection capabilities before publishing their findings. This collaborative effort has already strengthened global screening protocols. However, the study serves as a stark warning about the dual-use nature of advanced AI and the accelerating pace of biotechnology. Experts note that while this specific loophole has been addressed, the underlying challenge remains, as AI will continue to evolve. The findings are expected to intensify policy discussions around the need for more dynamic, AI-powered screening tools and stricter international governance to ensure that the rapid innovation in synthetic biology does not outpace the safeguards meant to prevent its misuse.

  • Background on DNA Synthesis and Screening: Synthetic DNA is a cornerstone of modern life sciences, enabling rapid advancements in medicine, agriculture, and materials science. To prevent misuse, such as the creation of bioweapons, commercial providers of mail-order DNA have voluntarily adopted screening protocols. Guided by organizations like the International Gene Synthesis Consortium (IGSC), these companies screen DNA orders and customers against a database of known dangerous pathogen and toxin sequences. This system has been the primary safeguard against the malicious creation of biological threats.
  • Study Methodology and "AI Paraphrasing": The researchers used AI protein design tools to alter the amino acid sequences of known toxins while preserving their fundamental structure and, potentially, their harmful function. This process, which they termed "paraphrasing," creates novel genetic blueprints that are functionally analogous to the original toxin but different enough at the sequence level to go unrecognized by existing screening software that looks for exact or very similar matches. The entire experiment was a digital simulation, meaning no physical DNA was created.
  • Key Stakeholders and Collaboration: The research was a major cross-sector effort led by Microsoft and involving key players like Twist Bioscience, a leading DNA synthesis company, and the IGSC. This collaboration was critical for both identifying the vulnerability and rapidly developing a solution. According to Emily M. Leproust, CEO of Twist Bioscience, "as AI capabilities evolve, screening practices must evolve just as quickly." The project underscores the need for ongoing partnership between AI developers, the biotech industry, and government regulators.
  • The "Patch" and Responsible Disclosure: Drawing inspiration from cybersecurity's "Coordinated Vulnerability Disclosure" practices, the research team withheld their findings from public release for nearly a year. During this time, they worked with screening providers to develop and deploy software patches that improve the detection of these AI-generated sequences. This responsible approach ensured that the security gap was closed before it could be widely known or exploited, demonstrating a successful model for managing dual-use AI risks.
  • Regulatory Context and Government Action: The study aligns with growing concerns in Washington and other world capitals about AI-exacerbated biological risks. The White House, through a 2023 Executive Order on AI, has already mandated the development of stronger standards for nucleic acid synthesis screening. Beginning in April 2025, federal funding for life sciences research is contingent on using DNA providers who adhere to this new, more robust screening framework, creating a powerful incentive for industry-wide adoption.
  • Limitations of Current Screening: The core issue exposed by the study is that current biosecurity screening is primarily "list-based," meaning it checks for sequences present on a pre-defined list of threats. This static defense is inherently vulnerable to novel threats created by generative AI, which excels at producing functionally equivalent but structurally unique outputs. Experts argue that future screening systems will need to incorporate AI themselves, moving from simple sequence matching to predicting a DNA sequence's function.
  • Broader Implications for AI and Biorisk: While this study focused on bypassing DNA screening, it highlights a wider set of concerns about the intersection of AI and biology. Experts have warned that AI could lower the barrier for bad actors by assisting in various stages of bioweapon development, from identifying potential pathogens to bridging knowledge gaps in their creation and deployment. This has led to calls for greater oversight of powerful AI models and the data they are trained on.
  • Future Developments and Next Steps: Experts agree that this is not a one-time fix but the beginning of a continuous arms race between benevolent and malevolent uses of AI in biology. Future steps will likely involve creating AI-powered defensive tools that can analyze the predicted structure and function of a designed protein, not just its sequence. David Relman, a microbiologist at Stanford University, warned, "It's not just that we have a gap — we have a rapidly widening gap, as we speak," emphasizing the need for proactive and adaptive security measures.
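The "list-based" weakness described above can be made concrete with a minimal sketch. All sequences below are made-up placeholders, not real biological data, and the matching logic is deliberately simplified; real screening tools use fuzzy, alignment-based comparison (BLAST-style), but the same principle applies once a variant drifts far enough from every listed signature.

```python
# Toy model of list-based biosecurity screening and why a "paraphrased"
# variant can evade it. Sequences are hypothetical stand-ins.

DENYLIST = {"MKQLADERT", "GGSHHWLPK"}  # placeholder "sequences of concern"

def list_based_screen(sequence: str, denylist: set[str]) -> bool:
    """Flag an order only if it contains an exact listed sequence."""
    return any(threat in sequence for threat in denylist)

# An exact copy of a listed sequence is caught by the screen.
exact_hit = list_based_screen("AAAMKQLADERTAAA", DENYLIST)

# A variant with a few substitutions -- the kind an AI design tool could
# make while trying to preserve structure and function -- no longer
# matches any list entry, so a purely list-based screen misses it.
paraphrased = "AAAMKELADSRTAAA"  # two residues changed from MKQLADERT
missed = list_based_screen(paraphrased, DENYLIST)

print(exact_hit, missed)
```

This is why the article's experts argue that future screening must move beyond sequence matching toward predicting what a designed sequence would actually do.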

Editorial Process: This article was drafted using AI-assisted research and thoroughly reviewed by human editors for accuracy, tone, and clarity. Based on reporting from https://www.npr.org. All content undergoes human editorial review to ensure accuracy and neutrality.

Reviewed by: Norman Metanza
