
💡 Architecture for an Autonomous Ticket QA Agent

  • January 22, 2026

Hello Community!

I wanted to share a useful architecture for implementing an Autonomous Ticket Quality Assessment Agent using Moveworks. This can be done using any ITSM, but this example is specific to ServiceNow. This approach enables teams to run consistent, scalable QA evaluations across assignment groups, store results directly in ServiceNow, and proactively notify QA leaders with actionable insights.

 

The Use Case: Autonomous Ticket Quality Assessment

The goal is to evaluate the quality of resolved ServiceNow tickets on a recurring basis and provide structured feedback at both the agent and assignment-group level.

On a weekly scheduled trigger, the system:

  • Evaluates tickets by assignment group
     
  • Scores each ticket across standardized QA dimensions defined by the organization
     
  • Stores results back in ServiceNow
     
  • Generates a summarized report for QA managers to review

The Solution: AI-Driven Ticket QA

This architecture uses Agent Studio to combine HTTP Actions, LLM Actions, System Triggers, and ServiceNow persistence to create a closed-loop QA system. Using Moveworks’ built-in LLM Actions enables secure and efficient access to Moveworks’ underlying models for agent reasoning and decision-making. Below is a high-level overview of how this can be implemented in Agent Studio.

 

Architecture Flow

1. Scheduled Job Triggers Plugin to Retrieve Assignment Groups

  • Retrieves the list of ServiceNow assignment groups to evaluate
    (sys_user_group)
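For context, the HTTP Action behind this step is essentially a ServiceNow Table API query. Here is a minimal Python sketch of the equivalent call; the instance URL, credentials, and group filter are placeholders to adapt:

```python
import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder ServiceNow instance
AUTH = ("qa_integration_user", "********")          # placeholder credentials

def get_assignment_groups():
    """Fetch the assignment groups in scope for QA (sys_user_group table)."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/sys_user_group",
        params={
            "sysparm_query": "active=true",           # adjust to limit which groups are evaluated
            "sysparm_fields": "sys_id,name,manager",  # only the fields the flow needs
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["result"]
```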

2. Pull Weekly Tickets Per Assignment Group

  • For each assignment group, fetch tickets created or updated during the evaluation window using the group’s sys_id
    (incident)
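A sketch of the equivalent incident query for one group over a rolling 7-day window; the encoded query and field list are examples to adapt to your evaluation window and data needs:

```python
def get_weekly_tickets(group_sys_id: str):
    """Fetch incidents created or updated in the last 7 days for one assignment group."""
    encoded_query = (
        f"assignment_group={group_sys_id}"
        "^sys_updated_on>=javascript:gs.daysAgoStart(7)"  # evaluation window; adjust as needed
    )
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        params={
            "sysparm_query": encoded_query,
            "sysparm_fields": "sys_id,number,short_description,priority,category,close_notes",
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["result"]
```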

3. Score Each Ticket

For every ticket retrieved, Moveworks runs an LLM-powered QA evaluation using the following built-in LLM Action:

  • mw.generate_structured_value_action

This produces a structured scorecard with 1–5 ratings and justifications across five QA dimensions (a sketch of the scorecard schema follows the list):

  • Classification Accuracy
    Priority, impact, urgency alignment, and correct assignment group/category
     
  • Ticket Documentation
    Field completeness, clarity, and work/resolution notes quality
     
  • Customer Communication
    Timeliness and consistency of updates to the requester
     
  • Categorization
    Correct category and subcategory selection
     
  • Professionalism & Ownership
    Tone, stagnation avoidance, and accountability
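To make the structured output concrete, here is one way the per-ticket scorecard could be expressed as a JSON Schema for the structured-generation step. The exact input arguments of mw.generate_structured_value_action are configured in Agent Studio; this only illustrates the shape of the value being requested:

```python
# Illustrative JSON Schema for the per-ticket scorecard. Field names are examples,
# not the exact Moveworks input arguments.
DIMENSIONS = [
    "classification_accuracy",
    "ticket_documentation",
    "customer_communication",
    "categorization",
    "professionalism_ownership",
]

SCORECARD_SCHEMA = {
    "type": "object",
    "properties": {
        dim: {
            "type": "object",
            "properties": {
                "score": {"type": "integer", "minimum": 1, "maximum": 5},
                "justification": {"type": "string"},
            },
            "required": ["score", "justification"],
        }
        for dim in DIMENSIONS
    },
    "required": DIMENSIONS,
}
```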

 

4. Save Ticket QA Scores in ServiceNow

  • Each ticket’s QA scorecard is written back into a custom ServiceNow table for review and historical tracking
    (u_moveworks_ticket_scorecard)
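A sketch of the corresponding insert via the Table API; the u_ column names below are illustrative and should match however the custom table is actually defined in your instance:

```python
import json

def save_scorecard(ticket_sys_id: str, scorecard: dict) -> str:
    """Insert one QA scorecard row into the custom tracking table."""
    record = {
        "u_ticket": ticket_sys_id,
        "u_classification_accuracy": scorecard["classification_accuracy"]["score"],
        "u_documentation": scorecard["ticket_documentation"]["score"],
        "u_customer_communication": scorecard["customer_communication"]["score"],
        "u_categorization": scorecard["categorization"]["score"],
        "u_professionalism": scorecard["professionalism_ownership"]["score"],
        "u_justifications": json.dumps(scorecard),  # keep the full structured output for audit
    }
    resp = requests.post(
        f"{INSTANCE}/api/now/table/u_moveworks_ticket_scorecard",
        json=record,
        auth=AUTH,
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]
```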
     

📸 (QA scorecard attached to a ServiceNow ticket)

 

 

Assignment Group Reporting & Insights

5. Generate Weekly Assignment Group Summary

Once all ticket scorecards for an assignment group are completed, Moveworks generates a group-level summary using the following built-in LLM Action:

  • mw.generate_text_action

The summary highlights:

  • Average and recurring scoring patterns
     
  • Common quality gaps
     
  • Notable strengths and wins
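One way to pre-aggregate the week's scorecards into the text handed to the summary step is sketched below; how this text is wired into the inputs of mw.generate_text_action is configured in Agent Studio:

```python
def build_group_summary_prompt(group_name: str, scorecards: list[dict]) -> str:
    """Assemble the text handed to the summary-generation step for one group."""
    avg = {
        dim: sum(sc[dim]["score"] for sc in scorecards) / len(scorecards)
        for dim in DIMENSIONS
    }
    lines = [
        f"Weekly QA summary request for assignment group: {group_name}",
        f"Tickets evaluated: {len(scorecards)}",
        "Average scores by dimension:",
    ]
    lines += [f"- {dim}: {score:.1f} / 5" for dim, score in avg.items()]
    lines.append("Summarize recurring patterns, common quality gaps, and notable strengths.")
    return "\n".join(lines)
```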

📸 (Moveworks analysis on ticket scorecards.)

 

📸 (Recommended actions for improvement)

 

Proactive QA Notifications

6. Notify the QA Lead

  • The system looks up the configured QA recipient
     
  • Sends the assignment-group summary via an Agent Studio notify
     

This ensures QA leaders receive timely, actionable insights automatically, without needing to manually compile reports.
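As one example of the recipient lookup in step 6, the group's manager reference (already fetched in step 1) can be resolved to an email address; swap in a role, custom field, or static mapping if that better matches your process. The resolved user then feeds the notify step:

```python
def get_qa_recipient_email(group: dict):
    """Resolve who should receive the group's weekly summary.
    Assumes the group's manager acts as the QA lead; adapt the lookup as needed."""
    manager_link = (group.get("manager") or {}).get("link")
    if not manager_link:
        return None
    resp = requests.get(manager_link, auth=AUTH, headers={"Accept": "application/json"})
    resp.raise_for_status()
    return resp.json()["result"].get("email")
```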

 

Edge Case Handling: No-Ticket Weeks

  • If an assignment group has no tickets during the evaluation period, the system:
     
    • Skips ticket scoring and summarization
       
    • Continues processing remaining groups
       
    • Avoids sending empty or unnecessary notifications
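The guard itself is just an early exit before scoring, roughly like this (score_ticket here is a hypothetical wrapper around the LLM scoring step from step 3):

```python
def evaluate_group(group: dict):
    """Run the weekly QA flow for one assignment group, skipping empty weeks."""
    tickets = get_weekly_tickets(group["sys_id"])
    if not tickets:
        # No tickets this week: skip scoring, summarization, and notification,
        # and move on to the next group.
        return None
    scorecards = [score_ticket(t) for t in tickets]  # score_ticket wraps the LLM scoring step
    for ticket, card in zip(tickets, scorecards):
        save_scorecard(ticket["sys_id"], card)
    return build_group_summary_prompt(group["name"], scorecards)
```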
       

 

Why This Architecture Works Well

  • ✅ Scales QA across multiple assignment groups

  • ✅ Ensures consistent evaluations using structured LLM scoring

  • ✅ Keeps ServiceNow as the authoritative data source

  • ✅ Delivers proactive insights to QA leaders

  • ✅ Minimizes ongoing manual effort
     

 

This approach creates a robust, automated QA feedback loop that drives continuous improvement across support organizations.

Please comment below if you have questions, ideas for enhancements, or have implemented something similar!

 

4 replies

rgeroulo
  • Employee
  • January 22, 2026

Awesome combination of the MW Scheduled Jobs and the native LLM actions🔥


  • Participating Frequently
  • January 22, 2026

Love this architecture. The mw.generate_structured_value_action for scoring is a clever use case, much better than trying to build custom eval logic. And storing it back in ServiceNow keeps everything auditable without needing a separate BI tool.

  • New Participant
  • January 23, 2026

Thanks for sharing the architecture breakdown for this plugin.

As you built this, do you score all resolved tickets, or do you skip low-signal ones like auto-resolved / short tickets?


  • Author
  • Employee
  • January 23, 2026


In this architecture, the default approach is to look at all tickets within the week for each assignment group, so you get a consistent baseline.

That said, the design is intentionally flexible. The ServiceNow query to get tickets is fully configurable, so you can absolutely filter out low-signal tickets if you want—things like auto-resolved incidents, very short-lived tickets, specific close codes, or even certain categories. You can tune this based on what your organization considers “meaningful” work.

The same applies to scope and timeframe: you're not limited to weekly reviews or assignment groups only. You can analyze any time range, subset of tickets, or grouping (agent, service, priority, etc.) simply by adjusting the query logic upstream, while keeping the QA scoring and reporting flow exactly the same.
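For example, the step 2 query could be tightened with extra encoded-query conditions; the close code and category values below are placeholders for whatever your organization considers low-signal:

```python
# Drop-in replacement for the encoded_query inside get_weekly_tickets().
encoded_query = (
    f"assignment_group={group_sys_id}"
    "^sys_updated_on>=javascript:gs.daysAgoStart(7)"
    "^close_code!=Closed/Resolved by Caller"  # e.g. skip self-/auto-resolved tickets
    "^categoryNOT INmonitoring,automated"     # e.g. exclude noisy categories
)
```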