How AI and genetic engineering are challenging traditional biosecurity frameworks
In 2001, letters containing a mysterious white powder began appearing in U.S. government and media offices, ultimately sickening 22 people and killing five. The anthrax attacks revealed a terrifying reality: the tools of modern biological research could be turned into weapons [1].
In response, the U.S. government dramatically strengthened the regulatory system controlling access to the world's most dangerous pathogens—the Select Agent Program. For two decades, this system has provided a framework for securing these high-consequence agents.
But today, a technological revolution is testing its foundations. The convergence of synthetic biology and artificial intelligence is making it increasingly possible to design or reconstruct pathogens digitally, bypassing the physical controls that have been the cornerstone of biosecurity for generations.
This article explores how scientists and policymakers are racing to reinvent biological security for an age where the most dangerous threats may not exist in a vial, but as lines of code in a computer database.
The United States' Select Agent Program regulates biological agents and toxins deemed to have the potential to pose a severe threat to public health, animal health, or plant health [1, 9].
Initially established in 1996 and significantly expanded after the 2001 anthrax attacks, the program maintains a comprehensive list of regulated pathogens and toxins known as the Select Agents and Toxins List (SATL) [1, 9].
The program operates under stringent regulations requiring laboratories that possess, use, or transfer these agents to implement extensive biosafety and biosecurity measures, including facility registration, personnel screening, detailed inventory controls, and comprehensive security plans [9].
In 2010, a crucial refinement introduced the "Tier 1" designation for a subset of select agents that "present the greatest risk of deliberate misuse with most significant potential for mass casualties or devastating effects to the economy, critical infrastructure, or public confidence" [1].
These pathogens are subject to even more stringent regulations, including enhanced personnel reliability checks, additional physical security requirements, and stricter access controls [1].
| Agent Name | Type | Primary Concern | Risk Level |
|---|---|---|---|
| Bacillus anthracis (Anthrax) | Bacterium | Aerosolizable spores; inhalational infection has high mortality | Tier 1 |
| Yersinia pestis (Plague) | Bacterium | Pneumonic form highly contagious | Tier 1 |
| Ebola virus | Virus | High mortality rate, public fear | Tier 1 |
| Botulinum neurotoxin | Toxin | Extreme potency, potential for mass exposure | Tier 1 |
| Variola major virus (Smallpox) | Virus | Eradicated disease, lack of population immunity | Tier 1 |
| Foot-and-mouth disease virus | Virus | Potential for devastating economic impact | Tier 1 |
Traditional biosecurity has focused on controlling physical samples of pathogens. However, synthetic biology is fundamentally changing this paradigm.
The cost of DNA sequencing and synthesis has dropped exponentially, making it increasingly feasible to reconstruct pathogens from digital blueprints rather than obtaining physical samples [4, 6].
This creates a new vulnerability: the potential for someone to synthesize a dangerous pathogen using commercially available DNA fragments and standard laboratory equipment, completely bypassing the physical security measures of the Select Agent Program.
"DNA synthesis technology is rapidly diminishing barriers to acquisition of pathogens and synthetic biology may enable the accidental or deliberate creation of entirely novel pathogens unrelated to current ones" [4].
Artificial intelligence is now accelerating these trends at an unprecedented pace. Generative AI tools can design novel protein sequences with specific functions—including potentially harmful ones—that bear little resemblance to any known natural pathogens [3].
This creates a critical gap in traditional biosecurity screening, which relies on detecting sequence similarity to known threats.
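To see that gap concretely, consider a minimal sketch of a homology-only screen. This is purely illustrative: the sequence database, the 80% identity threshold, and the use of Python's standard-library similarity measure as a stand-in for a real alignment tool such as BLAST are all assumptions, not any provider's actual screening method.

```python
# Toy homology screen: flag a query protein only if it closely matches a
# known sequence of concern. Sequences and the 0.80 threshold are hypothetical.
from difflib import SequenceMatcher

SEQUENCES_OF_CONCERN = {
    # A real database would hold curated toxin and pathogen sequences.
    "hypothetical_toxin_A": "MKTLLVAGGASLALAQDEKTTGCSHWPRT",
}

IDENTITY_THRESHOLD = 0.80  # flag anything at least 80% similar to a known threat


def homology_screen(query: str) -> list[str]:
    """Return the names of known threats the query closely resembles."""
    hits = []
    for name, reference in SEQUENCES_OF_CONCERN.items():
        similarity = SequenceMatcher(None, query, reference).ratio()
        if similarity >= IDENTITY_THRESHOLD:
            hits.append(name)
    return hits


# An AI-designed protein that preserves a toxin's function but shares little of
# its sequence scores well below the threshold and passes this check unflagged.
print(homology_screen("MSEQWNNVRTPPLDGKAIYFECLMQRS"))  # -> []
```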
Researchers recently demonstrated that AI-powered protein design tools can create proteins with dangerous functions but minimal sequence homology to known toxins, allowing them to potentially evade current screening methods [3].
"Generative protein design tools pose a growing biosecurity risk because they have the potential to produce functionally dangerous proteins with little homology to sequences of concern" [3].
| Beneficial Applications | Potential for Misuse |
|---|---|
| Vaccine development, therapeutic design | Reconstruction of pathogens from digital sequences |
| Gene therapy, agricultural improvement | Enhanced pathogen virulence or drug resistance |
| Drug discovery, enzyme engineering | Novel toxins or pathogenic elements that evade detection |
| High-throughput research and testing | Reduced technical barriers for malicious actors |
In 2025, a multi-institutional research team set out to address the critical gap in biosecurity screening revealed by AI-designed proteins. Their landmark study, published in Science, tested a novel function-based screening approach to detect hazardous biological sequences that might evade traditional detection methods [3].
The researchers recognized that current industry standards rely primarily on sequence homology—comparing new sequences to databases of known threats. But with AI now able to design novel sequences with dangerous functions but minimal sequence similarity to known toxins, this approach is becoming increasingly inadequate.
1. Sequence dataset: The team created a diverse set of protein sequences including known toxins, benign natural proteins, and AI-generated novel sequences with structural similarities to toxins but low sequence homology.
2. Model training: They trained machine learning models to recognize functional domains and structural motifs associated with toxicity, independent of exact sequence matches (a simplified sketch of this idea follows the list).
3. Blinded testing: The team tested both traditional homology-based screening and their new function-based approach against a blinded set of sequences, including some designed to evade detection while maintaining toxic functions.
4. Experimental validation: Potential hits from both methods were experimentally validated for their functional properties using high-throughput toxicity assays.
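To illustrate the general idea behind the function-based step, here is a simplified sketch, not the study's actual pipeline. It uses dipeptide-composition features as a crude stand-in for the learned structural and functional representations a real system would rely on, and trains a scikit-learn logistic regression on a handful of hypothetical labeled sequences.

```python
# Minimal function-based screening sketch. All sequences, labels, and the
# feature choice below are hypothetical and for illustration only.
from itertools import product

import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]


def featurize(seq: str) -> np.ndarray:
    """Dipeptide-composition vector; a real system would use learned embeddings."""
    counts = np.array([seq.count(d) for d in DIPEPTIDES], dtype=float)
    return counts / max(len(seq) - 1, 1)


# Hypothetical training data: 1 = known toxin, 0 = benign protein.
train_seqs = ["MKTLLCCGHWKK", "MAAAGGGSSSLL", "MKCCHWHWKKTL", "MSSSAAAGGGDE"]
train_labels = [1, 0, 1, 0]

X = np.stack([featurize(s) for s in train_seqs])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score a novel sequence: a high probability would route the order to review,
# even if the sequence shares little homology with any listed toxin.
novel = "MKLCCHWWKKGT"
risk = clf.predict_proba(featurize(novel).reshape(1, -1))[0, 1]
print(f"predicted toxicity risk: {risk:.2f}")
```

Because the resulting score depends on features associated with function rather than on overall similarity to any single listed toxin, a low-homology design can still receive a high risk score.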
| Research Tool | Type | Function in the Experiment |
|---|---|---|
| Protein Sequence Databases | Data Resource | Source of known toxin and non-toxin sequences for training and testing |
| Machine Learning Algorithms | Software | Predictive models for detecting functional motifs associated with toxicity |
| High-Throughput Toxicity Assays | Laboratory Method | Experimental validation of predicted toxic sequences |
| Synthetic DNA Constructs | Biological Material | Test cases for screening effectiveness |
| LLM-Based Design Tools | Software | Generation of novel protein sequences for challenge testing |
The study demonstrated that while traditional homology-based methods failed to detect 42% of functionally hazardous proteins with low sequence similarity to known toxins, the hybrid function-based approach successfully identified 89% of these potential threats [3].
Addressing the biosecurity challenges of synthetic biology and AI requires moving beyond static lists of known pathogens toward more adaptive governance frameworks.
Function-based screening must be integrated into international standards for DNA synthesis [3]. This approach would screen orders based on the predicted function of encoded proteins rather than solely their similarity to known sequences of concern.
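As a rough illustration of what such integration might look like at a synthesis provider, the sketch below combines a homology check with a function-based risk score on a translated order and routes anything suspicious to human review. The helper callables, the single-reading-frame translation via Biopython, and the 0.5 risk cutoff are illustrative assumptions, not an existing industry standard.

```python
# Illustrative order-screening pipeline combining both checks. The two callables
# are assumed to behave like the earlier sketches; the cutoff is hypothetical.
from typing import Callable

from Bio.Seq import Seq  # Biopython, used here only for translation


def screen_order(
    dna: str,
    homology_screen: Callable[[str], list[str]],
    predict_toxicity_risk: Callable[[str], float],
    risk_cutoff: float = 0.5,
) -> str:
    """Return a routing decision for a DNA synthesis order."""
    # Translate the ordered sequence; a production system would also examine
    # all six reading frames and fragmented or codon-shuffled orders.
    protein = str(Seq(dna).translate(to_stop=True))

    if homology_screen(protein):  # similarity to listed sequences of concern
        return "escalate_for_human_review"
    if predict_toxicity_risk(protein) >= risk_cutoff:  # predicted function
        return "escalate_for_human_review"
    return "approve"
```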
The Select Agent List should become more dynamic, with clearly defined, evidence-based pathways for de-listing or de-tiering agents as new countermeasures reduce their potential threat [1].
The global nature of biotechnology development demands international coordination. Disparate regulations across countries could create "screening havens" where less scrupulous DNA synthesis companies operate [3].
Promising initiatives like the International Gene Synthesis Consortium have established voluntary screening standards, but these require updating to address AI-designed biological sequences [3].
The Biological Weapons Convention (BWC), the principal international agreement prohibiting biological weapons, faces challenges in addressing these new technologies as it traditionally focused on tangible pathogens rather than intangible design information [6]. Updating these frameworks to address the digital dimension of biosecurity threats is an urgent priority.
As with earlier phases of biosecurity regulation, the challenge lies in balancing risk management with scientific progress. Overly restrictive regulations could stifle innovation in critical areas like vaccine development and therapeutic design [5].
The solution likely lies in risk-proportionate oversight that applies the most stringent controls to the highest-risk activities while facilitating beneficial research.
"Effective biosecurity solutions for AI-enabled biotechnology will require continued multi-stakeholder cooperation and shared technical standards" [6].
This includes engaging not just governments and research institutions, but also commercial DNA synthesis providers, AI developers, and the broader scientific community.
The regulation of biological select agents is undergoing its most significant transformation since the program's inception. The convergence of synthetic biology and artificial intelligence is blurring the line between physical and digital biological threats, challenging the fundamental premises of our current control systems.
Yet the scientific community has repeatedly demonstrated its capacity to innovate in both technology and governance. By developing new screening methodologies, advocating for evidence-based policies, and fostering international cooperation, researchers and policymakers are working to build a biosecurity framework capable of addressing not just the threats of yesterday, but those emerging on the horizon.
The future of biological security will depend on maintaining this delicate balance—harnessing the extraordinary potential of synthetic biology while protecting against its risks in our increasingly interconnected world.