Question

Agent Studio software lookup: routing conflicts + candidate list capped at 6 + “confirm by typing” loop (sanitized config included)

  • February 4, 2026

hundleymf

Hi Moveworks Community — we’re looking for help troubleshooting routing/trigger conflicts and candidate list selection behavior for an Agent Studio software lookup experience. I’m including sanitized configuration details below (no internal URLs, no people info).

1) What we built (use case + goal)

We built an Agent Studio plugin that lets users search an approved-software SharePoint list by software title, tolerating spacing variants and misspellings (e.g., “Power BI” vs “PowerBI”). Output should show Title + Approval Status + (Restrictions only when applicable) + Corning Standard + Order/Access URL.

2) What Moveworks engineering has already done (important context)

We worked directly with a Moveworks engineer and have already implemented multiple changes/guardrails:

  • Added steerability / deprioritization for forms to reduce surfacing of software request forms in these flows (forms may still be evaluated, but the goal is to prevent forms from becoming the final answer).
  • Implemented a Dynamic Slot Resolver by creating a new HTTP action to fetch software titles from SharePoint and using it as the resolver strategy for the software slot. This was intended to prevent empty results that cause fallback to other tools (forms/knowledge).
  • Noted why fallbacks still happen: forms can still be triggered/evaluated as the model calls multiple tools; when our plugin returns empty data (software_data = []), the assistant is more likely to answer using forms/knowledge.

We still have major gaps in the end-user experience and in ensuring this use case consistently “wins” routing.

3) Current problems (what we’re seeing)

 

A) Routing conflicts (use case not consistently “winning”)

Even for “I need/install/request/order [software title]” style prompts, the assistant sometimes routes to self-install guidance, Software Center, request forms, or knowledge articles instead of using our plugin response end-to-end.

Question: What is the best-practice configuration pattern to ensure our Agent Studio plugin is the final response when the user’s intent matches these software-title prompts (even if other tools get evaluated)?

 

B) Candidate list behavior is capped / incorrect (Adobe example)

Expected: if the user types “Adobe”, the assistant should return all matches (we have ~25) and prompt a 1–25 selection. If the user then enters “2”, it should immediately return details for item #2.

Actual behavior we see:

  • The assistant returns a seemingly random subset of Adobe products, capped at 6 candidate options, even though there are ~25 matches.
  • It asks the user to choose 1–6; after the user picks a number, it then asks for “confirmation” and effectively requires retyping the software name to proceed.

Questions:

  1. Is there a known cap/behavior that limits candidate lists (e.g., 6 options) even when Output Cardinality is set to interpret output as a list of candidate values?
  2. What’s the recommended pattern to support “many matches” (e.g., 25+) with numeric selection that resolves immediately (user enters “2” → show details for item 2), without requiring retyping?

 

C) Misspellings cause a “confirm by typing” loop

We see similar behavior with misspellings. Example pattern:

  • Bot suggests: “1) Strawberry Perl 2) Did you mean something else”
  • If the user selects “1”, the assistant then asks for confirmation and requires typing the name to proceed.

Question: How do we improve the UX to: “No exact match. Here are close matches — pick a number,” and then proceed cleanly with that selection?
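
To make “close matches” concrete, this is the kind of fuzzy matching we could add to our script action (a minimal sketch using Python's stdlib difflib; the function name and cutoff are ours, not anything currently configured):

```python
import difflib

def close_matches(user_query: str, titles: list[str], n: int = 5) -> list[str]:
    """Return up to n fuzzy matches for a possibly misspelled software title."""
    lowered = {t.lower(): t for t in titles}  # case-insensitive comparison
    hits = difflib.get_close_matches(user_query.lower(), lowered.keys(), n=n, cutoff=0.6)
    return [lowered[h] for h in hits]

# close_matches("strawbery perl", ["Strawberry Perl", "Power BI"]) -> ["Strawberry Perl"]
```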

 

4) Additional functional requirements we want (seeking best practices)

  • When presenting multiple options, add a short note that the request will go through the normal review/approval process (wording provided internally).
  • If Corning Standard = “N/A”, omit that field (don’t display it).
  • If Form URL is blank, return no response for that title (a missing URL indicates a data issue and we don't want users proceeding); see the sketch after this list.
  • Improve fuzzy search messaging: “No exact match, but here are close matches — is this what you meant?”

 

5) Sanitized configuration details (no internal URLs/PII)

 

Plugin

Plugin name: IT_Software_Search_Software

  • Uses positive/negative utterance examples in the plugin config (screenshot available).

Note/question: in our negative examples we currently have single-title entries like “Teams”, “Acrobat”, “Adobe”, etc. alongside functional negatives (e.g., “I need an editing software”). Could those bare-title negatives be suppressing routing for exactly the software-title prompts we want this plugin to win?

 

Conversational Process

Conversational Process name: IT_Software_Search_Software_Process

Slot

  • Slot Name: software_name
  • Data Type: string
  • Slot Description: “the specific name of the software application the user is inquiring about.”
  • Resolver Strategy: Dynamic
    • Action: fetch_all_sharepoint_software
    • Input Mapping: { }
    • Output Mapping (current): maps candidate values from SharePoint titles
    • Output Cardinality: Interpret output as a list of candidate values
    • Input Args: { }
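
With this cardinality setting we expect the mapped output to behave as a flat list of candidate titles, conceptually something like this (illustrative only, not actual output):

```python
# Assumed shape of the resolver's candidate values after output mapping:
candidate_values = ["Adobe Acrobat Pro", "Adobe Photoshop", "Adobe Illustrator"]
```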

 

Compound Action

Compound Action name: IT_Software_Check_Software_Status_Ca

Action 1: IT_Software_Search_Software_By_Title_Action

  • Output Key: sharepoint_response
  • Input Arg: software_title = data.software_name_input
  • Progress Update: “Looking up Available Software…”

Action 2: IT_Software_Filter_Software_Results_Script

  • Output Key: filtered_result
  • Input Args:
    • raw_items = data.sharepoint_response.value
    • user_query = data.software_name_input

Return / Output Mapper:

  • software_results = data.filtered_result.software_results

 

Compound Action wrapper (where the model instruction is generated)

Action name: IT_Software_Check_Software_Status_CA

  • Input Mapping: software_name_input: data.software_name
  • Output Mapping:
    • software_data: response.software_results
    • display_instructions_for_model: conditional; if results exist, output “SYSTEM OVERRIDE: DIRECTORY LOOKUP MODE …” (instructions to answer only from software_data, plus formatting rules for Approved/Denied/Restrictions); otherwise “No exact matches found.”
  • Execution behavior: “Wait for this action to fully complete before resuming the rest of this Conversational Process.”
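
In plain-Python terms, that conditional behaves like this (illustrative pseudocode only; the real logic lives in the Agent Studio output mapper, and the instruction text is elided here exactly as in our config):

```python
if software_data:  # response.software_results was non-empty
    display_instructions_for_model = "SYSTEM OVERRIDE: DIRECTORY LOOKUP MODE …"
    # ...followed by rules to answer only from software_data and to format
    # Approved / Denied / Restrictions per our internal wording.
else:
    display_instructions_for_model = "No exact matches found."
```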

 

HTTP Actions

HTTP Action 1: fetch_all_sharepoint_software

  • Base URL: https://graph.microsoft.com/v1.0
  • Auth: OAuth 2.0
  • Query Params: expand = fields
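
For anyone reproducing this outside Agent Studio, the action is equivalent to the standard Graph list-items call sketched below (site_id, list_id, and token are placeholders; the HTTP action handles OAuth itself). One thing we would like to rule out: Graph paginates list items, so anything that reads only the first page could silently truncate the candidate set:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def fetch_all_titles(token: str, site_id: str, list_id: str) -> list[str]:
    """Fetch every Title in the SharePoint list, following @odata.nextLink pages."""
    url = f"{GRAPH}/sites/{site_id}/lists/{list_id}/items?expand=fields"
    titles: list[str] = []
    while url:
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        titles += [item["fields"].get("Title", "") for item in body.get("value", [])]
        url = body.get("@odata.nextLink")  # present when more pages remain
    return titles
```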

HTTP Action 2: IT_Software_Search_Software_By_Title_Action

  • Base URL: https://graph.microsoft.com/v1.0
  • Auth: OAuth 2.0
  • Query Params:
    • expand = fields
    • $filter = startswith(fields/Title, '{{software_title}}')
  • Headers:
    • Prefer = HonorNonIndexedQueriesWarningMayFailRandomly
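
Continuing the same sketch (requests and GRAPH as above), the filtered action looks like this. Note that startswith is a strict prefix match, which is part of why we also rely on the script action for substring matching:

```python
def search_by_title(token: str, site_id: str, list_id: str, software_title: str) -> list[dict]:
    """Plain-Python equivalent of IT_Software_Search_Software_By_Title_Action."""
    resp = requests.get(
        f"{GRAPH}/sites/{site_id}/lists/{list_id}/items",
        params={
            "expand": "fields",
            "$filter": f"startswith(fields/Title, '{software_title}')",
        },
        headers={
            "Authorization": f"Bearer {token}",
            # Required by SharePoint when filtering on a non-indexed column.
            "Prefer": "HonorNonIndexedQueriesWarningMayFailRandomly",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])
```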

 

Script Action

Script Action name: IT_Software_Filter_Software_Results_Script
High-level behavior:

  • Lowercases the user query and filters results where the search term appears in the Title.
  • Builds output objects based on Approval Status values:
    • Denied → Title + ApprovalStatus only
    • Approved → Title + ApprovalStatus + CorningStandard + HowToOrder
    • Approved with restrictions → include Restrictions
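
A minimal sketch of that behavior (Python/APIthon; the exact internal field names are assumed):

```python
def filter_software(raw_items: list[dict], user_query: str) -> dict:
    """Sketch of IT_Software_Filter_Software_Results_Script as described above."""
    q = user_query.lower()
    results = []
    for item in raw_items:
        fields = item.get("fields", {})
        title = fields.get("Title", "")
        if q not in title.lower():          # case-insensitive substring match
            continue
        status = fields.get("ApprovalStatus", "")
        entry = {"Title": title, "ApprovalStatus": status}   # Denied stops here
        if status in ("Approved", "Approved with restrictions"):
            entry["CorningStandard"] = fields.get("CorningStandard")
            entry["HowToOrder"] = fields.get("HowToOrder")
        if status == "Approved with restrictions":
            entry["Restrictions"] = fields.get("Restrictions")
        results.append(entry)
    return {"software_results": results}
```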

 

6) What we want help with (explicit asks)

  1. How to ensure this plugin “wins” routing for software-title prompts without forms/knowledge taking over as final answer.
  2. How to return 25+ candidate values (e.g., Adobe) instead of 6, and let numeric selection work without “confirm by typing.”
  3. Best configuration for fuzzy match and misspelling resolution UX (“no exact match, here are candidates”).
  4. Any known limits or patterns around candidate list display, numeric selection, and confirmation loops.

Thanks in advance — any references, patterns, or product guidance would be appreciated.