Modern biotechnology has the tools to edit genes and design drugs, yet thousands of rare diseases still have no cure. According to executives at Insilico Medicine and GenEditBio, the long-missing ingredient has been enough skilled people to do the work. Artificial intelligence, they say, is becoming the force multiplier that lets scientists tackle problems the industry has long left untouched.
Speaking this week at Web Summit Qatar, Insilico president Alex Aliper outlined his company’s goal of developing “pharmaceutical superintelligence.” Insilico recently launched “MMAI Gym,” which aims to train general-purpose large language models such as ChatGPT and Gemini to perform as well as specialized models.
The goal is to create a multimodal, multitasking model that, Aliper says, can solve many different drug discovery tasks simultaneously with superhuman accuracy.
“We really need this technology to increase the productivity of our pharmaceutical industry and to address the shortage of labor and talent in this space, because there are still thousands of diseases without a cure, without any treatment options, and there are thousands of rare disorders that are neglected,” Aliper told TechCrunch. “So we need smarter systems to deal with this problem.”
Insilico’s platform ingests biological, chemical and clinical data to generate hypotheses about disease targets and candidate molecules. By automating steps that once required legions of chemists and biologists, Insilico says it can navigate vast design spaces, identify high-quality therapeutic candidates and even repurpose existing drugs—all with dramatically reduced cost and time.
For example, the company recently used its AI models to determine whether existing drugs could be repurposed to treat ALS, a rare neurological disorder.
But the labor bottleneck doesn’t end at drug discovery. Even when AI can identify promising targets or treatments, many diseases require intervention at a more fundamental biological level.
GenEditBio is part of the “second wave” of CRISPR gene editing, in which the process is moving away from editing cells outside the body (ex vivo) and toward precise delivery inside the body (in vivo). The company’s goal is to make gene editing a one-time injection directly into affected tissue.
“We’ve developed a proprietary ePDV, or engineered protein delivery vehicle, and it’s a virus-like particle,” GenEditBio co-founder and CEO Tian Zhu told TechCrunch. “We learn from nature and use AI machine learning methods to mine natural resources and find which types of viruses have an affinity for certain tissue types.”
The “natural resources” Zhu is referring to are GenEditBio’s vast library of thousands of unique, non-viral, non-lipid polymer nanoparticles—essentially delivery vehicles designed to safely deliver gene-editing tools to specific cells.
The company says the NanoGalaxy platform uses artificial intelligence to analyze data and identify how chemical structures correlate with specific tissue targets (such as the eye, liver or nervous system). The AI then predicts which modifications to a delivery vehicle’s chemistry will help it carry a payload without provoking an immune response.
GenEditBio tests its ePDVs in vivo in wet labs, and the results are fed back to the AI to improve its predictive accuracy for the next round.
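The predict–test–retrain loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not GenEditBio’s actual code: the feature names, the toy assay, and the naive weight-update rule are all invented stand-ins for the company’s (undisclosed) models and in vivo readouts.

```python
# Hypothetical sketch of a design-test-learn loop: a model scores candidate
# delivery vehicles by their chemical features, the library is assayed, and
# the results are fed back to sharpen the model for the next round.
from itertools import combinations

FEATURES = ["ligandA", "ligandB", "peg", "cationic"]  # invented feature names

def predict_score(model, candidate):
    """Score a candidate as the sum of its learned per-feature weights."""
    return sum(model.get(f, 0.0) for f in candidate)

def wet_lab_assay(candidate):
    """Stand-in for an in vivo readout: pretend 'ligandA' drives tissue uptake."""
    return 1.0 if "ligandA" in candidate else 0.0

def retrain(model, results, lr=0.1):
    """Nudge each feature's weight toward the observed outcome (naive update)."""
    for candidate, outcome in results:
        error = outcome - predict_score(model, candidate)
        for f in candidate:
            model[f] = model.get(f, 0.0) + lr * error
    return model

# Small combinatorial library: every two-feature particle design.
library = list(combinations(FEATURES, 2))

model = {}
for _ in range(5):  # each round: assay the library in parallel, then retrain
    results = [(c, wet_lab_assay(c)) for c in library]
    model = retrain(model, results)

best = max(library, key=lambda c: predict_score(model, c))
print(best)  # the loop learns to favor ligandA-bearing designs
```

The point of the sketch is the feedback structure, not the model: each lab round produces labeled data that improves the next round’s predictions, which is why Zhu later calls such data sets “gold for AI systems.”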
Efficient, tissue-specific delivery is a prerequisite for in vivo gene editing, Zhu says. She argues that her company’s approach lowers the cost of goods and standardizes a process that has historically been difficult to scale.
“It’s like taking an over-the-counter drug [that works] for many patients, which makes medicines more affordable and accessible to patients worldwide,” said Zhu.
Her company recently received FDA approval to begin CRISPR therapy trials for corneal dystrophy.
Combating the persistent data problem
As with many AI-based systems, advances in biotechnology ultimately face a data problem. Modeling the cutting edge of human biology requires far more high-quality data than researchers can currently obtain.
“We still need more ground truth data coming from patients,” Aliper said. “The data set is heavily biased in the Western world, where it’s generated. I think we need to make more efforts at the local level, to have a more balanced set of raw data or ground truth data, so that our models are also better able to deal with that.”
Aliper said Insilico’s automated labs generate multi-level biological data from disease samples at scale, without human intervention, which it then feeds into its AI-powered discovery platform.
Zhu says the data AI needs already exists in the human body, shaped by thousands of years of evolution. Only a small fraction of DNA directly “codes” proteins, while the rest acts more like an instruction manual for how genes behave. This information has historically been difficult for humans to interpret, but is increasingly accessible to artificial intelligence models, including recent efforts such as Google DeepMind’s AlphaGenome.
GenEditBio takes a similar approach in the lab, testing thousands of delivery nanoparticles in parallel rather than one at a time. The resulting data sets, which Zhu calls “gold for AI systems,” are used to train the company’s models and, increasingly, to support collaborations with external partners.
One of the next big efforts, according to Aliper, will be to build digital human twins to conduct virtual clinical trials, a process he says is “still in its infancy.”
“We are at a plateau with about 50 drugs approved by the FDA every year, and we should see growth,” Aliper said. “There is an increase in chronic disorders because we are aging as a global population … I hope that in 10 to 20 years we will have more treatment options for individualized treatment of patients.”
