
Presentation options for user feedback in Agent Studio

Related products: Agent Studio
  • December 16, 2025
  • 1 reply
  • 29 views


Our use case is similar to a CSAT survey. In Classic Paths, we presented a series of static questions to the user, and they clicked on the appropriate button (Very Satisfied = 5 through Very Dissatisfied = 1).

We need additional features in order to replicate this functionality in Agent Studio:
- A “Human in the Loop” Action for Conversational Processes. This Action would always pause the conversational process to solicit an interaction with the user.
- More control over how options are presented to the user.
- The ability to show options to the user as buttons.

Our Conversational Process has 7 “input” slots to capture the user feedback gathered over the course of the conversation. Each slot is set to “Always prompt user,” with the desired question in the slot description and a resolver strategy listing the possible responses. Even with this setting, the bot still infers the user input, because many of the slots have identical text in their resolver strategies. An explicit Action to ask for user input as part of the Conversational Process would be welcomed.
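To illustrate the ambiguity, here is a rough sketch of the setup in Python. The slot names, keys, and structure below are made up for illustration; this is not Agent Studio's actual configuration schema:

```python
# Hypothetical representation of the seven feedback slots -- the slot
# names, keys, and values below are made up, not Agent Studio's schema.
LIKERT_OPTIONS = [
    "Very Satisfied",
    "Satisfied",
    "Neutral",
    "Dissatisfied",
    "Very Dissatisfied",
]

QUESTIONS = [
    "How satisfied were you overall?",
    "How easy was it to get what you needed?",
    # ... remaining survey questions
]

# Because every slot lists the same response options, a single user
# utterance like "Very Satisfied" can plausibly resolve more than one slot.
slots = {
    f"feedback_q{i}": {
        "prompt_behavior": "Always prompt user",  # the setting described above
        "description": question,                  # the question shown to the user
        "resolver_options": LIKERT_OPTIONS,       # identical across all slots
    }
    for i, question in enumerate(QUESTIONS, start=1)
}
```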

We show ‘Very Satisfied’ to the user as the first option. The bot always picks this as ‘Option 1’ and lets users select it by pressing 1. However, Very Satisfied is a 5, and our users are familiar with that value because of how ServiceNow records this information. Asking users to select 1 for Very Satisfied is confusing, and I do not see a way to prevent this numbering from being shown to users.
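For clarity, the mismatch between display position and recorded score looks like this (illustrative Python only; the labels match our survey, but the code is not ServiceNow's):

```python
# Labels in the order they are shown to the user (option 1 first)...
DISPLAY_ORDER = ["Very Satisfied", "Satisfied", "Neutral",
                 "Dissatisfied", "Very Dissatisfied"]

# ...versus the CSAT score each label records in ServiceNow.
SCORES = {"Very Satisfied": 5, "Satisfied": 4, "Neutral": 3,
          "Dissatisfied": 2, "Very Dissatisfied": 1}

# A user who presses 1 selects option 1, "Very Satisfied" -- which is
# then recorded as a 5. That inversion is the source of the confusion.
choice = 1
label = DISPLAY_ORDER[choice - 1]
print(label, "->", SCORES[label])  # Very Satisfied -> 5
```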

We would like to have control over the presentation used when soliciting user input. Along these lines, support for buttons and for button labels with emojis would be welcomed.

1 reply

Kevin Mok
  • Community Manager
  • January 13, 2026

Hey @ffranzwa - I understand your request, and it makes sense. The experience you're building resembles the old dialog builders from before the agentic era. Conversational Processes are meant to be conversational; they're not designed for building dialogs.

I suggest making the CSAT survey more conversational by having the Assistant ask all the questions at once and letting users reply with a rating from 1 to 5 for each.

Something along the lines of:

Hi {{user}}, we'd love to get your quick feedback on your recent experience. Using a scale of 1–5 (where 1 is the lowest and 5 is the highest), please reply with a number for each question in order:

(1) How satisfied were you overall with our service?

(2) How easy was it to get what you needed?

(3) How helpful was the AI in resolving your request?

(4) How confident are you in using this service again in the future?

Simply respond with a sequence like "4, 5, 4, 5." Thank you for your time!
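If it helps, here's a rough sketch of how a batched reply like that could be parsed and validated. This is plain Python for illustration; the function name and question count are assumptions, not an Agent Studio API:

```python
import re

NUM_QUESTIONS = 4  # questions in the survey message above

def parse_ratings(reply: str, expected: int = NUM_QUESTIONS) -> list[int]:
    """Extract ratings from a reply like '4, 5, 4, 5' and validate them."""
    ratings = [int(n) for n in re.findall(r"\d+", reply)]
    if len(ratings) != expected:
        raise ValueError(f"expected {expected} ratings, got {len(ratings)}")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("each rating must be between 1 and 5")
    return ratings

print(parse_ratings("4, 5, 4, 5"))  # -> [4, 5, 4, 5]
```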

 

I think that approach would make more sense, and it would keep your users more engaged and productive when answering the questions, rather than answering them one by one.