Hi Community,
I’d like to understand how your teams are handling ungrounded and hallucinated responses from the bot.
Hallucinations are relatively easy to spot, since they usually come without citations. Ungrounded responses, however, tend to create more confusion for end users: in most cases, users don't read the full cited article and rely mainly on the bot's summary, which can be misleading.
For example, when a user asks to unlock their account, the bot surfaces an article about password reset and confirms that this will also unlock the account, which isn't always true. This sets incorrect expectations and confuses users.
I’d be interested to learn:
- How are you detecting or mitigating ungrounded responses?
- Are there any best practices you’ve adopted to reduce user confusion in such cases?
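For reference, the kind of detection I have in mind could be sketched very naively as a lexical-overlap check between the bot's answer and the cited article, flagging answers whose content words mostly don't appear in the source. The stopword list, threshold, and example texts below are all illustrative assumptions, not a production approach:

```python
import re

# Tiny illustrative stopword list; a real system would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "will",
             "this", "that", "your", "in", "it", "for", "also"}

def content_words(text):
    """Lowercased content words, ignoring stopwords."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def grounding_score(answer, source):
    """Fraction of the answer's content words that also appear in the source."""
    answer_words = content_words(answer)
    if not answer_words:
        return 1.0
    return len(answer_words & content_words(source)) / len(answer_words)

def is_grounded(answer, source, threshold=0.7):
    """Flag an answer as ungrounded when too few of its words come from the source."""
    return grounding_score(answer, source) >= threshold

article = "To reset your password, open the sign-in page and choose Forgot password."
grounded = "Open the sign-in page and choose Forgot password to reset your password."
ungrounded = "Resetting your password will also unlock your account automatically."

print(is_grounded(grounded, article))    # True
print(is_grounded(ungrounded, article))  # False
```

This is obviously crude (no stemming, so "resetting" won't match "reset", and paraphrases score poorly), which is partly why I'm curious whether teams are using semantic or NLI-style groundedness checks instead.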
Looking forward to your insights and experiences.