5. Not Enough Info to Make Decision

Slot Resolver strategy inference suggestions

Related products:Agent Studio
  • February 18, 2026
  • 4 replies
  • 28 views


Problems:

  1. We do not have much influence over how slots are inferred (the descriptions and the three inference options are nice, but they are not sufficient in many use cases).
  2. Slot Resolver Strategies handle inference very poorly when the options to choose from are objects with multiple attributes.
    1. I can make it work by outputting only the name of each value (a list of strings instead of a list of objects), but each option object also has a description value that I want considered to help choose the correct option.
    2. This approach also means the API call is made twice for resolver strategies: once in the resolver strategy to get the string list, and again in the conversational process to get the full object, which is needed for further actions.
  3. If a slot is inferred incorrectly, the user can ask for a change, but that change is not handled reliably, if at all.
  4. Asking for all options of a slot does not always work, and when it does, the list is usually cut off after around 20.

Current Workaround:

  • Use a generate_structured_value_action that takes in a description slot (and possibly others, like the user) and does the selection
    • This lets me pass it multiple attributes to use for selection and build a detailed prompt on how to choose the proper values.
    • I also have it return a confidence score, so if it isn't confident in the choice, I can exit the plugin with information asking the user for more detail.
    • One negative here is that it lacks the full context of the conversation. I can try to grab the context with a slot, but that does not always work reliably.
  • Use plugin, conversational process, and slot descriptions to influence behavior as much as possible for choosing and displaying options, but this does not always work reliably.

While this workaround is working well, I want slots themselves to be able to handle this sort of behavior instead of having to build around the problems.
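For anyone trying the same workaround, the confidence-gating logic described above can be sketched roughly as follows. This is an illustrative Python sketch, not Agent Studio code: pick_option, the threshold, and the llm_choice shape (standing in for the structured output of the generate_structured_value_action) are all assumptions.

```python
# Hypothetical sketch of the confidence-gated selection workaround.
# `llm_choice` stands in for the structured output of a
# generate_structured_value_action-style step; it is NOT a real API.

CONFIDENCE_THRESHOLD = 0.7  # illustrative cutoff, tune per use case

def pick_option(options, llm_choice):
    """Return the full option object if the model is confident,
    otherwise None so the plugin can exit and ask for more detail."""
    if llm_choice["confidence"] < CONFIDENCE_THRESHOLD:
        return None  # exit the plugin and prompt the user for specifics
    # Map the chosen name back to the full object to recover id/bu/description.
    by_name = {o["name"]: o for o in options}
    return by_name.get(llm_choice["name"])

options = [
    {"name": "option 1", "id": 1234, "bu": "IT"},
    {"name": "option 2", "id": 12345, "bu": "Security"},
]

confident = pick_option(options, {"name": "option 2", "confidence": 0.9})
unsure = pick_option(options, {"name": "option 2", "confidence": 0.4})
```

The key design choice is returning the whole object (not just the name) on a confident match, so downstream actions get the id without a second lookup.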

Proposed solutions:

  1. For a slot, give us a text box to define the system_prompt and user_input, just like the generate_structured_value_action. This should work seamlessly with resolver strategies; the system prompt could define the logic for how to use inputs to choose the proper slot value and when it is necessary to ask the user for an exact value.
  2. For slots that are not set to always infer, always output a human-readable version of the selected value on the action when it is selected. At any time afterwards, the user may request to change the value. If they do, the conversational process should go back to the action where the slot is set as required and move forward from there. Slots set as required in following steps should be reconsidered in this case.
  3. If a user asks to see all options of a slot that has a resolver strategy, all options should be presented to the user and not cut off after ~20. The most I currently have is around 130 options for one slot. Most slots are under 20, but I have multiple over this.
  4. Provide the conversation context as another system-provided variable that can be leveraged in a conversational process, like data and meta_info.
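On #3, until a higher limit exists, one possible mitigation (a sketch of an approach, not platform behavior) is to page a long option list in chunks under the observed ~20 cutoff:

```python
# Sketch: present a long option list in pages below the observed ~20 cutoff.
# The page size and option names here are illustrative.

def paginate(items, page_size=15):
    """Yield successive pages of at most `page_size` items."""
    for start in range(0, len(items), page_size):
        yield items[start:start + page_size]

names = [f"option {i}" for i in range(1, 131)]  # ~130 options, as in the post
pages = list(paginate(names))  # 9 pages: eight of 15 and a final page of 10
```

Each page could be shown in turn, with the user asking for "more" to advance, so no option is silently dropped.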

4 replies

Ajay Merchia
  • Community Manager
  • March 23, 2026

This is really detailed, Andrew — and the generate_structured_value_action with confidence scoring is a clever workaround.

That said, a lot of what you're describing should be achievable with the right slot configuration today, and we'd love to understand more about what you're building end-to-end — it's a little hard to tell from the post alone exactly where the friction is. Would you be open to walking us through the specific use case?

  • What does the object schema look like for the slots where inference is struggling?
  • When you say mid-conversation changes aren't reliable — what does the user say, and what does the bot do in response?
  • How many of your plugins are using the structured value workaround today?

That way we can best advise on whether there's a configuration path or a real gap here.


Ajay Merchia
  • Community Manager
  • March 23, 2026
Updated idea status: New → Not Enough Info to Make Decision

  • Author
  • Inspiring
  • March 27, 2026

For choosing some options, there is a description and other fields that are useful in helping the bot and user choose the correct option. For example, I could have a list of options returned from an API like the following:

[
  {
    "name": "option 1",
    "id": 1234,
    "bu": "IT",
    "description": "Only for use by IT. There are other sites specifically for other BUs."
  },
  {
    "name": "option 2",
    "id": 12345,
    "bu": "Security",
    "description": "Only for use by Security. There are other sites specifically for other BUs."
  },
  {
    "name": "option 3",
    "id": 123456,
    "bu": "Marketing",
    "description": "Only for use by Marketing. There are other sites specifically for other BUs."
  }
]

If I pass this full object to a slot resolver, it cannot resolve properly: the value is an object, not a string, and it fails to function. If I instead pass a string list of just the names, it works, but then the bu and description are completely lost and cannot be used to help make the decision. The id is also lost, so I cannot use it in a further API call without making the same API call again.

In newer situations, I am using the list-of-strings approach, but this results in making the API call to get the options twice: first in the slot resolver and second in the conversational process, just so I can filter the options to get the id. This makes the plugin slower, doubles the API calls, and loses the description and bu attributes that would help make the decision. It would therefore be preferable for the slot to be an object, reducing the number of API calls and the plugin length.
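To make the cost concrete, the pattern being described might look roughly like this (fetch_options is a made-up stand-in for the real API call; nothing here is Agent Studio code):

```python
# Illustrative sketch of the double-API-call pattern described above.

def fetch_options():
    # In reality this is a network call; here it just returns sample data.
    return [
        {"name": "option 1", "id": 1234, "bu": "IT"},
        {"name": "option 2", "id": 12345, "bu": "Security"},
    ]

# Call 1: the slot resolver can only consume a flat list of strings,
# so bu, description, and id are stripped before inference.
names_for_resolver = [o["name"] for o in fetch_options()]

# ... the resolver infers a name, e.g. "option 2" ...
resolved_name = "option 2"

# Call 2: the conversational process must re-fetch the SAME data
# just to map the resolved name back to an id for later API calls.
full = next(o for o in fetch_options() if o["name"] == resolved_name)
```

If the slot could carry the object itself, the second fetch (and the lost metadata) would disappear.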

For proposed solution #2, the bot does show the selected value most of the time, but not 100% of the time, and changing the value to something else does not work reliably.

Proposed solutions #3 and #4 are still applicable.


  • Author
  • Inspiring
  • March 27, 2026

To fully answer your questions:

  1. The object schema is similar to the JSON shown above.
  2. If I say “I need help with XYZ”, the slot resolves to a long, made-up description like “user needs help with XYZ and has done abc troubleshooting steps”. I am sometimes shown this description before the ticket is created. If I say no, the description is just “I need help with XYZ”, it might change the value shown to the user to that string, but the slot still resolves to “user needs help with XYZ and has done abc troubleshooting steps”.
    1. Another issue could be decision trees. If a slot is inferred incorrectly as “A”, more or less information may be collected via the decision tree. If the conversational process doesn’t go back to where that slot was made mandatory, the request may be missing information, or may have collected incorrect information.
  3. I have around 4 plugins where this was preferred over the list-of-strings approach. They are high-usage plugins, however (onboarding / access requests).