
Science Communication

Can AI help people trust scientists?

Simple summaries written by artificial intelligence help the public understand research
By David Markowitz
Jan. 12, 2025

Artificial intelligence-generated summaries of scientific papers make complex information more understandable for the public compared with human-written summaries, according to my recent paper. AI-generated summaries not only improved public comprehension of science but also enhanced how people perceived scientists.

Smoothing out the complexity can help with comprehension.

I used a popular large language model to create simple summaries of scientific papers; this kind of text is often called a significance statement. The AI-generated summaries used simpler language – they were easier to read according to a readability index and used more common words, like “job” instead of “occupation” – than summaries written by the researchers who had done the work.
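The article does not say which readability index was used, so the following is only an illustrative sketch of one widely used measure, the Flesch Reading Ease score, with a deliberately rough vowel-group syllable heuristic; it is not the analysis from the paper, and the example sentences are made up.

    import re

    def count_syllables(word):
        # Very rough heuristic: each run of vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        # Standard Flesch Reading Ease formula; higher scores mean easier text.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        if not sentences or not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * len(words) / len(sentences)
                - 84.6 * syllables / len(words))

    # Jargon-heavy phrasing vs. a plain-language rewrite (hypothetical examples)
    complex_text = "Occupational attainment is associated with socioeconomic stratification."
    simple_text = "The kind of job you get is linked to your place in society."
    print(flesch_reading_ease(complex_text))   # lower score: harder to read
    print(flesch_reading_ease(simple_text))    # higher score: easier to read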

In one experiment, I found that readers of the AI-generated statements had a better understanding of the science, and they provided more detailed, accurate summaries of the content than readers of the human-written statements.

I also investigated what effects the simpler summaries might have on people’s perceptions of the scientists who performed the research. In this experiment, participants rated the scientists whose work was described in the simpler texts as more credible and trustworthy than the scientists whose work was described in the more complex texts.

In both experiments, participants did not know who wrote each summary. The simpler texts were always AI-generated, and the complex texts were always human-generated. When I asked participants who they believed wrote each summary, they ironically thought the more complex ones were written by AI and simpler ones were written by humans.

It can feel like you need a Ph.D. to understand science research published in a journal.

Why it matters

Have you ever read about a scientific discovery and felt like it was written in a foreign language? If you’re like most people, new scientific information is probably hard to understand – especially if you try to tackle a science article in a research journal.

In an era where scientific literacy is crucial for informed decision-making, the ability to communicate and grasp complex ideas is more important than ever. Trust in science has been declining, and one contributing factor may be the challenge of understanding scientific jargon.

This research points to a potential solution: using AI to simplify science communication. By making scientific content more approachable, AI-generated summaries may help to restore trust in scientists and, in turn, encourage greater public engagement with research. The question of trust is particularly important, as people often rely on science in their daily lives, from eating habits to medical choices.

What still isn’t known

As AI continues to evolve, its role in science communication may expand, especially if using generative AI becomes more commonplace or sanctioned by journals. Indeed, the academic publishing field is still establishing guidelines for the use of generative AI. By simplifying scientific writing, AI could contribute to more engagement with complex issues.

While the benefits of AI-generated science communication are perhaps clear, ethical considerations must also be weighed. There is some risk that relying on AI to simplify scientific content may remove nuance, potentially leading to misunderstandings or oversimplifications. There’s always the chance of errors, too, if no one pays close attention.

Additionally, transparency is critical. Readers should be informed when AI is used to generate summaries to avoid potential biases.

Simple science descriptions are clearer and more beneficial than complex ones, and AI tools can help produce them. But scientists could also achieve the same goals by working harder to minimize jargon and communicate clearly – no AI necessary.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


David Markowitz

David Markowitz is an associate professor of communication at Michigan State University.
