
AI holds promise in scientific research, but can’t substitute for humans, experts say
With the Trump administration making sweeping cuts to staff and research grants at science-related agencies, artificial intelligence could offer a tempting way to keep labs going, but scientists say there are limits to the technology’s uses.
In the last few months, the Trump-appointed leaders of the National Institutes of Health, the Centers for Disease Control and Prevention and the Department of Health and Human Services have moved to cut thousands of jobs and billions of dollars in federal grants that fund university research and laboratory needs.
The federal government may be eyeing artificial intelligence to bridge a gap created by these cuts. In February, the U.S. Department of Energy’s national labs partnered with AI companies OpenAI and Anthropic for an “AI Jam Session,” a day for 1,000 scientists across various disciplines to test the companies’ AI models and share feedback. Some figures in Trump’s cabinet have suggested that artificial intelligence models may be a good substitute for human physicians.
But scientists and builders of AI say it’s not that simple.
AI is playing a major role in scientific discovery — last year’s Nobel Prize in Chemistry was awarded to three scientists for discoveries that used AI to predict the shape of proteins and to invent new ones.
But we aren’t looking at a future where researchers and doctors can be replaced with algorithms, said Jennifer Kang-Mieler, department chair and professor of biomedical engineering at Stevens Institute of Technology in New Jersey.
“It’s a tool they may use to enhance clinical decision-making,” she said. “But I think that clinical expertise is not going to be something that we can completely match with AI.”
Kang-Mieler and other researchers say AI has its limitations but is playing an increasingly important role in analyzing data, speeding up lab work, assisting in diagnostics, making personalized treatment plans and cutting some costs related to research.
AI uses in scientific labs and healthcare
Artificial intelligence technologies have been a part of some healthcare and laboratory settings, like image recognition in radiology, for at least a decade, said Bradley Bostic, chairman and CEO of healthcare software company hc1, based in Indiana. But Bostic said the industry is still early in exploring its uses.
“It feels to me similar to 1999, with the World Wide Web,” Bostic said. “We didn’t realize how early days it was. I think that’s where we are right now with AI and specifically in healthcare.”
While AI’s potential is nearly endless, its best uses in scientific and healthcare settings are repetitive, operational tasks, Bostic said. Ideally, AI makes processes more efficient and frees up humans’ time for more important work.
Stephen Wong, the John S. Dunn Presidential Distinguished Chair in Biomedical Engineering at Houston Methodist, uses machine learning, deep learning and large language models every day in his lab, which researches cost-effective strategies for disease management.
He said he uses AI models for image analysis, medical imaging, drug screening and processing massive datasets in genomics and in proteomics, the study of proteins, as well as for sifting through existing research and lab data. His goal is to cut down on tedious tasks and make sense of large-scale data.
“Even tasks like locating crucial information buried in lab notebooks, scientific publications and patents become far more efficient,” he said.
Efficiency is also the goal of Kang-Mieler’s research, which was funded last fall by an NIH grant. Kang-Mieler and colleague Yu Gan are developing an AI-powered diagnostic tool for retinopathy of prematurity (ROP), an eye disorder in premature infants that can lead to vision loss.
There was a lack of quality images for AI models to train on, Kang-Mieler said, so they are using images of animal eyes that feature ROP to create “synthetic” images of what the condition would look like in humans. The neural networks in the AI model will learn how to categorize those synthetic images and eventually assist eye doctors in spotting ROP. Before AI tools, this process would have been done by the human eye and would have taken much longer, Kang-Mieler said.
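For readers curious what “training a neural network to categorize images” looks like in practice, here is a minimal, hypothetical sketch in Python with PyTorch. The folder name, two-class labels and choice of model are illustrative assumptions, not details of Kang-Mieler and Gan’s actual system.

```python
# Hypothetical sketch, not the Stevens team's pipeline: train a small image
# classifier on a folder of synthetic ROP images using PyTorch/torchvision.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed directory layout: synthetic_rop/{rop,healthy}/*.png
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("synthetic_rop", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Start from a pretrained backbone and swap in a two-class head (ROP vs. no ROP).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # compare predictions to labels
        loss.backward()                        # compute gradients
        optimizer.step()                       # update the network
```

In a real diagnostic tool, the synthetic training images would be followed by validation on real patient images before any clinical use.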
“The way I saw it was also that if we can be really successful in developing and doing this, we can actually take this into other types of diseases, rare diseases, that are hard to diagnose,” she said.
Automation and human capital
Many scientific labs involve a lot of physical work, like handling liquids, following steps at specific times and sometimes working with hazardous materials. With AI algorithms and hardware, much of that work can be done without humans physically present, researchers at the University of North Carolina are finding.
Ron Alterovitz, the Lawrence Grossberg Distinguished Professor in the Department of Computer Science, has worked with Jim Cahoon, chair of the Department of Chemistry, on an approach to make lab work more autonomous. The pair have studied how an AI model could instruct an autonomous robot to execute lab processes, and how AI models could then turn experimental results into findings. Alterovitz called it a “make and test” model.
“So once people can set it in motion, the AI comes up with a design, the robotic automation will make and test it, and the AI will analyze the results and come up with a new design,” he said. “And this whole loop can essentially run autonomously.”
The pair published their findings last fall, saying there are several levels of automation a lab could deploy, from assistive automation, where tasks like liquid handling are automated and humans do everything else, all the way up to the fully automated loop Alterovitz described.
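As a rough illustration of that fully automated loop, here is a minimal, hypothetical Python sketch. The function names and the fake measurements are placeholders standing in for an AI design model, a robotic platform and an analysis model; none of this is the UNC group’s actual software.

```python
# Hypothetical sketch of the design -> make-and-test -> analyze loop described above.
import random

def propose_design(history):
    # Stand-in for an AI design model: here, just a random parameter choice.
    return {"temperature_c": random.uniform(20, 80)}

def make_and_test(design):
    # Stand-in for the robotic make-and-test step: here, a fake yield measurement.
    return {"yield": random.random()}

def analyze(design, measurement):
    # Stand-in for AI analysis that turns raw measurements into a scored result.
    return {"design": design, "score": measurement["yield"]}

def autonomous_lab_loop(n_rounds):
    history = []
    for _ in range(n_rounds):
        design = propose_design(history)       # AI comes up with a design
        measurement = make_and_test(design)    # robot makes and tests it
        result = analyze(design, measurement)  # AI analyzes the results
        history.append(result)                 # and the loop starts again
    return history

if __name__ == "__main__":
    best = max(autonomous_lab_loop(10), key=lambda r: r["score"])
    print(best)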
Alterovitz sees many benefits to automated labs. Robots offer a safer way to handle hazardous materials and allow researchers to conduct experiments 24 hours a day, instead of just when lab techs are clocked in. The robots also provide high accuracy and precision and can replicate experiments easily, he said.
“If you ask two different people to do the same synthesis process, there’ll be subtle differences in how they do some of the details that can lead to some variance in the results sometimes,” Alterovitz said. “With robots, it’s just done the same way every time, very repeatedly.”
While there are fears that AI and automation will cut jobs in science, Alterovitz said the technology frees humans to take on higher-level tasks. Many labs are already facing a shortage of trained technicians, who do a majority of the physical work involved.
AI-assisted labs will likely heighten the need for other types of jobs, like data scientists, AI specialists and interdisciplinary experts who can bridge technology with real-world scientific applications, Wong said.
To continue innovating and learning new things, labs will still need the “chemical intuition” and problem-solving skills that trained scientists have, Alterovitz said.
AI’s limitations
Kang-Mieler said AI’s current limitations are part of what keeps the industry from rushing to apply the technology to everything. AI models are only as good as the data sets they’re trained on, which can carry biases or incomplete information that won’t paint a full picture.
And AI models can’t do an essential function of researchers, Kang-Mieler said — discover new information.
“I suppose that AI models can help formulate new hypotheses, but I don’t think that capability is the same as discovery,” Kang-Mieler said. “Current AI models are not developed to make independent discoveries or have original thoughts.”
Bostic has built other technology companies in his career, but said the stakes in scientific research and healthcare are much higher. Inaccurate data in an AI model could lead to a missed diagnosis or another serious problem for a patient. He said the best approach is what he calls “reinforcement learning through human feedback.”
“This is where you don’t have models that are just running independent of people,” Bostic said. “You have the models that are complementing the people and actually being informed by the people.”
As the tech industry evolves, Bostic said, AI will play a role in shortening drug trials, providing patients more specialized care and helping research teams make do with fewer skilled workers. But it’s not a fix-all, set-it-and-forget-it solution.
“I don’t see a scenario where clinical decisions are being independently made by machines and there aren’t the experts — who are trained and seeing the total picture of what’s going on with the patient — involved with those decisions anytime soon,” he said.