As trust in traditional sources of health information declines in the United States, some people are turning to artificial intelligence to answer their health-related questions. A 2024 KFF poll found that around one in six adults regularly seek health advice from AI.
Relying on AI for health information can lead to mixed and potentially dangerous results. Your health care provider should always be your primary source for health guidance.
But if you use AI to find information about your health, this guide will help you navigate it more safely and responsibly.
What are AI chatbots?
AI chatbots answer user questions in a conversational tone. When ChatGPT launched in late 2022, it set off a wave of popular AI chatbots, including those introduced by social media platforms, search engines, and software companies.
Some people use chatbots as a substitute for or in addition to traditional search engines like Google. In March 2025, online users visited AI chatbots 7 billion times. That’s more than double the traffic recorded at the same time in 2024. And the growth shows no signs of slowing.
AI as a source of health (dis)information
Use of AI as an information source has become increasingly common. Nearly half of young adults interact with AI tools several times a week, according to the KFF poll.
Over a third (36 percent) of adults trust that AI-generated health information is accurate, and a quarter of adults under age 30 say they regularly use AI chatbots for health information.
Older adults are more skeptical about asking AI their health questions. Three-quarters of people 50 and older say that they have little to no trust in AI-generated health information, according to a 2024 University of Michigan study.
Additionally, one in five older adults were not confident that they could identify inaccurate or false health information. These concerns are not unfounded.
A 2023 study found that people are more likely to believe AI-generated disinformation than false claims spread by humans. And a 2024 analysis tested the effectiveness of built-in safeguards meant to prevent AI from generating false health claims, such as claiming that sunscreen causes skin cancer.
Researchers found the safeguards to be “inconsistently implemented” and that some chatbots pulled information from clickbait headlines and fake testimonials.
A recent study identified AI chatbots’ tendency to overgeneralize and misrepresent scientific studies, and researchers in a June 2024 report received potentially dangerous health advice about mental health and eating disorders from a social media chatbot.
But not all findings related to health information and AI are negative. Some researchers are trying to figure out how to use AI tools to stop the spread of health myths.
For example, a University of Michigan study used AI-generated messages tailored to the intended audience to correct misconceptions about vaccines. However, the author stressed that “public health communication is too important to be left entirely to AI.”
Similarly, AI tools may be useful to detect false claims and generate fact checks of common health myths on social media. A University of California, San Diego study found machine learning, a branch of AI that learns patterns from data without explicit programming, to be better at identifying deception than humans.
Additionally, a University of Kansas study found that people preferred to “speak” anonymously to chatbots rather than with a human when they felt embarrassed by a health topic. With appropriate safeguards, the use of AI in these scenarios could encourage people to seek medical attention for awkward and stigmatized health conditions.
How to use AI to find accurate health information
AI can be a useful tool to answer your health-related questions as long as you use it with care.
“Generative AI tools use all the information that’s out there,” said Dr. Margaret Lozovatsky, a pediatrician and the vice president of digital health innovations at the American Medical Association, in a 2024 interview.
But she cautioned that “they don’t always have a way to assess whether it’s good information or bad information.”
Tips to use AI as a source for health information
- Use clear, specific prompts: How you ask AI for information is almost as important as what you ask. You can improve AI responses by avoiding overly general prompts and being specific about how you want the answer delivered. A prompt like “How does Ozempic work?” might elicit a long, overly complicated response that is difficult to understand. For a concise and simple answer, you could instead write, “Provide a brief explanation of how weight loss drugs like Ozempic work and make it understandable to a 10-year-old.”
- Ask AI to use reputable sources—with examples: You can also direct AI chatbots to use only reputable sources to generate answers. Be sure to give examples of the types of sources you want. For the Ozempic prompt above, you might add, “Please only cite information from reputable academic and peer-reviewed journals like Nature and Science, trusted health organizations like the CDC and WHO, and respected news sources like The New York Times and Reuters.”
- Always ask for and check the sources: As with any information you read online, you shouldn’t trust AI-generated responses that don’t cite sources. You can ask AI chatbots to “please include citations or references” in all answers. Then you can check that the sources are real, up-to-date, and credible.
- Push for better information: Asking follow-up or clarifying questions can help improve AI responses. If you ask for a list of risk factors for heart disease, you can follow up by requesting a more detailed summary of some or all of the risk factors, along with strategies to reduce heart disease risk. You can even get AI chatbots to improve their own answers by using prompts like, “Rewrite the response for a non-native English speaker” and, “Identify gaps or inaccuracies in the previous response.”
- Beware of AI “hallucinations”: AI tools have been known to generate inaccurate or absurd “information” with no explanation, a phenomenon called AI hallucination. For example, if you ask a chatbot to summarize a scientific paper, it might make up references that don’t exist. Use more than one source to compare and verify any information you get from chatbots.
- Use AI tools for explanations, not diagnoses: AI chatbots can be helpful for summarizing complex topics, explaining unfamiliar terms, and identifying resources for further reading. But they cannot diagnose an illness or recommend a treatment, and should never be used to do so.
- Never rely solely on AI: AI is a tool that can help you find and decipher health information, but it shouldn’t be considered a reliable source on its own. Think of AI-generated answers as a starting point, and remember to use multiple sources for health information.
- Confirm AI-generated information with a health care provider: Your health care provider is your best source for health information. Even if you get a “satisfactory” answer from an AI tool, you still need to follow up with an expert.
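If you find yourself asking chatbots health questions often, the tips above can be combined into a reusable prompt template. Below is a minimal Python sketch of that idea; the function name, parameters, and default source list are illustrative assumptions, not part of any real chatbot product or API.

```python
# Sketch: assemble a health-information prompt that applies the tips above
# (a specific question, a target audience, a reputable-source constraint,
# and an explicit request for citations). Everything here is illustrative;
# you would paste the resulting text into the chatbot of your choice.

def build_health_prompt(question, audience="a general reader",
                        sources=("the CDC", "the WHO",
                                 "peer-reviewed journals like Nature")):
    parts = [
        question,
        f"Make the answer understandable to {audience}.",
        "Please only cite information from reputable sources such as "
        + ", ".join(sources) + ".",
        "Include citations or references for every claim.",
    ]
    return " ".join(parts)

prompt = build_health_prompt(
    "Provide a brief explanation of how weight loss drugs like Ozempic work.",
    audience="a 10-year-old",
)
print(prompt)
```

The point is simply that specificity, an audience level, source constraints, and a citation request travel together in every question you ask, rather than being remembered one tip at a time.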
“It’s fun to try generative AI, but you should always be skeptical of the source,” said Dr. F. Perry Wilson, a Yale Medicine doctor, in a 2024 article.
“In the end, trust your doctors, as we are the ones who have the responsibility to look out for your best interest.”
