Slot Resolver strategy inference suggestions

Related products: Agent Studio
  • February 18, 2026
  • 0 replies
  • 11 views


Problems:

  1. We have little control over how slots are inferred (descriptions and the three inference options are helpful, but not sufficient in many use cases).
  2. Slot Resolver Strategies infer poorly when the options to choose from are objects with multiple attributes.
    1. I can make it work by outputting only the name of each value (a list of strings instead of a list of objects), but each option object also has a description that I want considered when choosing the correct option.
    2. This approach also means the API calls are made twice for resolver strategies: once in the resolver strategy to get the string list, and again in the conversational process to get the full object needed for further actions.
  3. If a slot is inferred incorrectly, the user can ask for a change, but that change is not handled reliably, if at all.
  4. Asking to see all options of a slot does not always work, and when it does, the list is usually cut off after around 20 entries.
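To make problem 2 concrete, here is a minimal sketch of the two shapes the options can take (the option names and descriptions are invented for the example, not from my actual plugin):

```python
# Options as a plain list of strings — inference works, but the model
# only ever sees the names
options_as_strings = ["Standard Shipping", "Express Shipping"]

# Options as objects with multiple attributes — the descriptions carry
# exactly the information that should guide inference, but the resolver
# strategy does not use them today
options_as_objects = [
    {"name": "Standard Shipping", "description": "Delivers in 5-7 business days"},
    {"name": "Express Shipping", "description": "Delivers in 1-2 business days"},
]

# The string workaround flattens the objects and loses the descriptions
names_only = [o["name"] for o in options_as_objects]
```

The flattening is what forces the second API call later: the conversational process has to re-fetch the full object once the resolver has picked a name.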

Current Workaround:

  • Use a generate_structured_value_action that takes in a description slot (and possibly others, such as the user) and performs the selection
    • This lets me pass it multiple attributes to use for the selection and build a detailed prompt describing how to choose the proper values.
    • I also have it return a confidence score, so if it is not confident in the choice, I can exit the plugin with information asking the user to provide more detail.
    • One downside is that the action lacks the full context of the conversation. I can try to capture the context with a slot, but that does not always work reliably.
  • Use plugin, conversational process, and slot descriptions to influence, as much as possible, how options are chosen and displayed properly, but this also does not always work reliably.
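The workaround above can be sketched roughly as follows. This is my illustration only: the function name, JSON shape, and 0.7 threshold are assumptions, not an Agent Studio API.

```python
import json


def select_option(options, user_description, llm_call):
    """Sketch of the generate_structured_value_action workaround: show the
    model every attribute of every option, then ask for a chosen name plus
    a confidence score."""
    prompt = (
        "Choose the option that best matches the user's request.\n"
        f"User request: {user_description}\n"
        "Options:\n"
        + "\n".join(f"- {o['name']}: {o['description']}" for o in options)
        + '\nReply as JSON: {"name": "...", "confidence": 0.0-1.0}'
    )
    result = json.loads(llm_call(prompt))
    if result["confidence"] < 0.7:  # threshold is an assumption
        return None  # low confidence: exit the plugin and ask for more detail
    # Look the name back up to recover the full object needed downstream
    return next(o for o in options if o["name"] == result["name"])
```

Because the prompt includes every attribute, the descriptions participate in the choice — which is exactly what the built-in resolver strategy cannot do today.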

While this workaround works reasonably well, I want slots themselves to be able to handle this sort of behavior instead of my having to build around the problems.

Proposed solutions:

  1. For a slot, give us text boxes to define the system_prompt and user_input, just like generate_structured_value_action. This should work seamlessly with resolver strategies: the system prompt would define the logic for how inputs are used to choose the proper slot value and when it is necessary to ask the user for an exact value.
  2. For slots that are not set to always infer, always output a human-readable version of the selected value on the action when it is set. At any point afterwards, the user may request to change the value. If they do, the conversational process goes back to the action where the slot is required and moves forward from there. Slots set as required in subsequent steps should be reconsidered in this case.
  3. If a user asks to see all options of a slot that has a resolver strategy, present all of them to the user without the ~20-entry cutoff. The largest I have currently is around 130 options for one slot; most slots are under 20, but I do have several over this.
  4. Provide the conversation context as another system-provided variable that can be leveraged in a conversational process, like data and meta_info.
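If proposals 1 and 4 were adopted, a slot definition might look something like this. Every field name here is hypothetical — a sketch of the requested configuration, not an existing Agent Studio schema:

```python
# Hypothetical slot configuration combining proposals 1 and 4
slot_config = {
    "name": "selected_product",  # invented slot name for illustration
    "resolver_strategy": "inference",
    # Proposal 1: prompt fields like generate_structured_value_action's,
    # so option descriptions and selection logic can shape inference
    "system_prompt": (
        "Choose the product using both its name and its description. "
        "If no option clearly matches, ask the user for an exact value."
    ),
    # Proposal 4: a system-provided context variable, alongside data
    # and meta_info, usable inside the prompt
    "user_input": "{conversation_context}",
}
```

This would let the selection logic live on the slot itself, removing the duplicated API call and the context-capture slot from the workaround.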