
School of Data Science faculty improve AI reliability in medical field


MARCH 17, 2025 — Researchers at the UTSA School of Data Science are working to make artificial intelligence (AI) answer medical questions more reliably. Their project, Reducing Hallucination of LLMs for Causality-Integrated Personal Medical History Question Answering, received a $35,000 grant in 2024 through the Collaborative Seed Funding Grant program, sponsored by the School of Data Science (SDS) and the Open Cloud Institute.

The research team is led by Ke Yang, an assistant professor of computer science, along with Anthony Rios, an assistant professor of information systems and cybersecurity, and Yuexia Zhang, an assistant professor of management science and statistics. All three are SDS Core Faculty members.

Their work focuses on reducing AI hallucinations, a term used to describe when AI confidently provides false or misleading information. Large language models (LLMs), such as ChatGPT, generate responses based on patterns in data, but they do not have real-world understanding. This means they can sometimes sound convincing while being completely wrong. The issue is made worse when AI learns from flawed, outdated or biased information.

Some AI mistakes are harmless — such as misidentifying Toronto as Canada’s capital — but others, particularly in health care, can be much more serious. A recent Harvard study found that many people preferred ChatGPT’s medical advice over responses from doctors, highlighting the need for more reliable AI in medicine.

Incorrect AI-generated medical advice could lead to misdiagnosed conditions or incorrect treatment recommendations. This makes it essential to improve accuracy.

“We found that there’s not much work targeting AI hallucinations in specialized fields, especially those with high risks such as health care, finance and employment,” Yang said.


Ke Yang discussed her research on AI hallucinations and large language models at the 2024 Los Datos Conference.


To improve AI accuracy, UTSA researchers are working on ways to give AI better context before it generates a response.

“We observed that AI sometimes gives incorrect answers because it doesn’t have enough background information on medical questions,” Yang said. “To address this, we proposed to extract previously known knowledge about diagnoses and practices to help AI think more logically. We then structured this information in a way that allows AI to process it more effectively using another AI system.”

The team is developing an AI model that can fact-check itself. To achieve this, they created a Causal Knowledge Graph (CKG), a structure that organizes information from trusted medical sources into cause-and-effect relationships. This structure helps AI recognize connections between medical concepts, allowing it to provide more accurate and reliable answers.
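The cause-and-effect structure of a causal knowledge graph can be sketched in a few lines. The graph below is purely illustrative (made-up edges, not the team's data or code); it just shows how storing causes and effects lets a system trace indirect consequences:

```python
# Illustrative sketch of a tiny causal knowledge graph: an adjacency
# map from a cause to its direct effects. Contents are made-up examples.
causal_edges = {
    "smoking": ["lung damage", "hypertension"],
    "hypertension": ["heart disease", "stroke"],
    "lung damage": ["shortness of breath"],
}

def effects_of(cause, graph, depth=2):
    """Collect direct and indirect effects of `cause`, up to `depth` hops."""
    found = set()
    frontier = [cause]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for effect in graph.get(node, []):
                if effect not in found:
                    found.add(effect)
                    next_frontier.append(effect)
        frontier = next_frontier
    return found
```

Following chains of edges like this is what lets a model connect, say, smoking to stroke even though no single edge states that link directly.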

By integrating this external data with the user’s original question, the model gains a better understanding of the context of the user’s question, making its answers more accurate. If successful, the team expects its model to generate hallucination-free answers, even in areas where the AI has received little to no prior training.
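Combining retrieved knowledge with the user's question is often done by building an augmented prompt. The sketch below is an assumption about how such a step could look, not the team's actual method; the function name and prompt format are invented for illustration:

```python
# Illustrative sketch (not the team's code): prepend verified causal
# facts to the user's question so the model answers with explicit context.
def build_augmented_prompt(question, facts):
    """Format retrieved facts plus the question as one prompt string."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (f"Known medical relationships:\n{context}\n\n"
            f"Question: {question}")
```

Grounding the model in stated facts this way narrows the gap between what it "knows" and what the question actually requires.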

One challenge with using external data sources is ensuring AI pulls only the most relevant context for each question. To solve this, the team is also developing a system that filters information more effectively using subgraphs — smaller, targeted sections of data. These act like an index, helping the AI focus only on the most useful information instead of searching through everything it has learned.
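The subgraph idea described above can be sketched as a simple filter over the graph's edges. This is an illustrative assumption about the approach, with made-up data; the team's actual subgraph extraction is likely more sophisticated:

```python
# Illustrative sketch: extract a small, targeted subgraph so the model
# only sees edges relevant to the question. Graph contents are made up.
causal_edges = {
    "smoking": ["lung damage", "hypertension"],
    "hypertension": ["heart disease", "stroke"],
    "poor diet": ["obesity", "hypertension"],
}

def relevant_subgraph(graph, question_terms):
    """Keep only edges whose cause or effect mentions a question term."""
    sub = {}
    for cause, effects in graph.items():
        kept = [e for e in effects
                if any(t in cause or t in e for t in question_terms)]
        if kept:
            sub[cause] = kept
    return sub
```

Like an index, the filter hands the model a handful of relevant edges instead of the entire graph.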

Beyond improving AI-generated medical answers, the team wants to create a benchmark database — a collection of hallucination-free question-and-answer pairs that could serve as a standard for other AI researchers. This resource would serve as a testing tool, allowing developers to evaluate AI models against verified data and improve overall performance across various applications.
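A benchmark of verified question-and-answer pairs lends itself to a simple evaluation loop. The format and scoring below are illustrative assumptions (exact-match scoring on invented examples), not the team's benchmark:

```python
# Hypothetical benchmark format: verified question/answer pairs.
benchmark = [
    {"question": "Can hypertension lead to stroke?", "answer": "yes"},
    {"question": "Does aspirin cure diabetes?", "answer": "no"},
]

def accuracy(model_fn, benchmark):
    """Score a model's answers against verified ones (exact match)."""
    correct = sum(
        model_fn(item["question"]).strip().lower() == item["answer"]
        for item in benchmark
    )
    return correct / len(benchmark)
```

Running different models through the same verified pairs is what makes such a collection useful as a shared standard.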

“Our project helps create open-source tools for researchers and develop new AI solutions that improve reliability in high-risk fields like health care,” Yang said.



Yang believes this research could extend beyond health care, improving AI accuracy in various fields.

“This work has the potential to bolster trust in AI and encourage people to use it for a variety of important applications,” she said.

By reducing hallucinations and improving reliability, UTSA researchers are making AI a safer and more trustworthy tool, particularly in areas where accuracy is critical.

Christopher Reichert



UTSA Today is produced by University Strategic Communications, the official news source of The University of Texas at San Antonio.

Send your feedback to news@utsa.edu.



