Americans increasingly verify AI chatbot answers using Google or other sources


Although AI chatbots like Microsoft Copilot, ChatGPT, and Grok are now part of daily life for many Americans, even regular users don't fully trust the answers these tools give them.

A new survey from ChatOn shows that while conversational tools are widely used for quick answers, writing tasks, and idea generation, concerns about accuracy and privacy are shaping user behavior. Rather than blindly trusting responses, many users turn to Google and other sources to double-check what they're told.

SEE ALSO: AI is fueling an explosive rise in fraud and digital identity crime

ChatOn's report reveals how Americans interact with AI, how often they rely on it, and the habits they’ve developed to be confident about the results they receive. Even as AI usage increases, the responses to the survey suggest most users approach AI with a blend of interest and caution rather than total trust.

The most common AI use case remains searching for information or getting quick answers, with 74 percent of respondents saying this is their primary reason for using a chatbot.

Writing and editing short messages followed at 65 percent, while brainstorming ideas came in third at 54 percent.


Frequency of use varies considerably: 22 percent turn to AI several times a day, 14 percent once a day, 36 percent a few times a week, and 16 percent a few times a month.

When it comes to AI chatbot proficiency, nearly half of respondents rated themselves as intermediate users. 24 percent felt advanced, 3 percent considered themselves experts, 20 percent described themselves as casual users, and 4 percent as beginners.

Users seem to prefer to learn by doing. 46 percent say they experiment with prompts to improve results, 27 percent consult friends, colleagues, or classmates, and 21 percent turn to online tutorials, while 29 percent don't actively try to build new skills.

Accuracy-checking habits are revealing. 48 percent say they ask follow-up questions to check answers, 42 percent rephrase prompts to get better results, and 39 percent verify AI-generated information by looking it up on Google or elsewhere. ChatOn says these behaviors suggest users treat AI as a starting point rather than a final authority.

Privacy concerns are also a factor. More than half avoid sharing sensitive personal details. 42 percent won’t upload confidential files, and 36 percent avoid discussing work-related issues. These boundaries reflect an awareness of the potential risks in using AI.

Respondents also noted problems with errors. 39 percent said they sometimes received irrelevant replies, 36 percent encountered outdated information, 33 percent contradictions, and 19 percent the old favorite: invented sources or references.

Dmitry Khritankov, Product Director at ChatOn, said the results highlight the gap between comfort and deeper understanding. "While most users rate their AI proficiency as intermediate or higher, their strong interest in improving their skills shows that familiarity doesn't equal mastery. This presents an opportunity for us as an AI chatbot developer to deliver even greater value by making AI more understandable, accessible, and safe," he said.

What do you think of these findings? Share your thoughts in the comments below.

Photo Credit: Ico Maker/Shutterstock
