Some organizations limit how long an incident can be re-opened. Today the bot offers to re-open the incident even when policy prevents the case from being re-opened, and apparently that message isn't customizable. Could the bot's re-open prompts be matched to the ITSM re-open policy, so the re-open option isn't shown when it doesn't apply? Alternatively, the prompts could be made customizable, so the user can be told they won't be able to re-open an incident older than X days and should create a new case instead.
The thumbs up/down buttons are much smaller than the voting options in classic. They need to be more accessible, especially for users with special needs; even regular users may not bother giving feedback when it takes concentration to hit the button with the mouse. They should be a bit bigger than they are now.
Copilot should understand when APIs return no records and handle the dialog appropriately. For additional information, please use -
Description: We suggest adding the live chat interaction number (IMSxxxxx) and the request ID to the analytics logs provided via the Moveworks Data API for Smart Hand-off records.

Business Need: These details are essential for accurately tracking live chat sessions, understanding user sentiment, and identifying areas for improvement. They enable more effective analysis of negative feedback and support the creation of reliable internal dashboards for performance monitoring.
Overview
Currently, the platform's analytics focus on user-initiated interactions with our Moveworks bot. However, as customer bots evolve to be more proactive, sending notifications and updates without direct user prompts, we lack visibility into the reach and effectiveness of these communications. To better understand the impact of bot-initiated messages, we propose the addition of analytics for proactive communications. This would provide insights into what types of messages are being sent, how frequently, and how users are engaging with them.

Proposal: Implement Analytics for Bot-Initiated Messaging
We request a new analytics capability to track and analyze any communication that originates from the bot, including but not limited to:
• System-generated notifications (e.g., password reset alerts)
• Employee communications tool messages
• Creator Studio-initiated messages
• Concierge or plugin-based notifications (e.g., ITSM status updates, comments)
• External integration notifications (e.g., ITSM approval workflows)

Key Metrics & Capabilities Requested:
1. Message Volume Tracking – how many bot-initiated messages are being sent over time?
2. Message Source Breakdown – categorization by system, integration, or trigger type.
3. Recipient Insights – who is receiving these messages? Are certain groups more (or less) engaged than others?
4. Engagement Metrics – how do users interact with bot-initiated messages (e.g., click rates, dismissals, response rates)?
5. Impact Measurement – correlation between notifications and user actions (e.g., did a status change notification lead to a follow-up inquiry?).

Benefits:
• Visibility – understand the scale and scope of bot-initiated communications.
• Optimization – identify which messages are effective and refine strategies accordingly.
• User Experience Improvement – reduce notification fatigue by analyzing engagement trends.
• Data-Driven Decision Making – leverage insights to enhance proactive support efforts.
By implementing analytics for proactive bot messaging, we can ensure that these communications are both meaningful and effective, ultimately improving the user experience and the value of the Moveworks bot.
It would be great if the Analytics provided in the Moveworks Portal could be integrated as a source with our Bot, allowing Admins and Analytics Users of Moveworks to query the Bot with questions vs. manually navigating to the portal and clicking through tabs and options. Example: Hey Bot, what has been the trend of active users over the last three months? Hey Bot, what are the top three feedback trends raised by users? This could even be extended to Roles and Permissions (does John Smith have access to analytics?) and Employee Experience Insights.
Requesting a new feature that would allow us to track the performance of the Interception skill run on General Request, Work Request, and Incident tickets. Currently, we can obtain raw data from our Customer Success Team upon request, but we all agree this method is not sustainable in the long term, as it is time-consuming for all parties involved.

We believe it would be beneficial to have a feature that lets us measure and track this skill. This would provide insights into the effectiveness of the Interception skill and aid in enhancing user interaction. Furthermore, it would help us understand whether our bot is providing the correct information/form or whether users are disregarding it. This information would be invaluable in shaping our education strategy.

Therefore, we kindly request that this metric be incorporated into the standard analytics dashboard for easy tracking. We believe this addition would greatly enhance our ability to monitor and improve the performance of our bot in the Interception skill.
We noticed that Webhook notifications do not give users the option to react to them, which keeps us from easily collecting feedback on their functionality. Would it be possible to add 👍/👎 feedback to Webhook plugins as well, perhaps as a selectable option on the plugin's launch configuration page? I'm working around the issue by adding a feedback form to our notification. Thanks in advance for reviewing this idea 😊
Allow users to toggle Dark Mode on and off.
Hi Team,

Product Idea: Execution-Rate Limits and User Verification Workflow for Plugin Actions

Problem
Plugins can be triggered repeatedly within a short time window, whether due to automation loops, misconfiguration, or unintended user behavior. When a Plugin executes more than an expected number of times in a defined period, it introduces risks such as:
- Unintended system changes being applied repeatedly
- Increased load on downstream systems
- Potential security concerns if repeated execution indicates misuse or compromised credentials
- Loss of user trust if actions occur without clear intent

There is currently no native mechanism in Moveworks to rate-limit Plugin executions or to validate authenticity when abnormal execution patterns occur.

Proposed Capability
Introduce a configurable execution-limit framework for Plugins, allowing administrators to define:
- Max execution count (e.g., 10 executions)
- Time window (e.g., within 1 hour, 24 hours, etc.)
- Trigger conditions (per user, per device, per workflow, or global)

When the threshold is exceeded, Moveworks should automatically:
- Pause further executions of the Plugin.
- Notify the end user (or admin) that the Plugin has been triggered unusually often.
- Request confirmation from the user to validate authenticity.
- Resume or block execution based on the response.

Regards,
Sravani S
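To make the proposal concrete, here is a minimal sketch of the sliding-window limit logic being requested. This is plain Python, not a Moveworks API; the `plugin`/`user` identifiers and the per-user scoping are assumptions for illustration only:

```python
from collections import defaultdict, deque


class PluginRateLimiter:
    """Sliding-window execution limiter (illustrative sketch only).

    Models the proposed "max N executions per time window, per user"
    rule. A real implementation would live server-side and would also
    trigger the notify/confirm workflow when allow() returns False.
    """

    def __init__(self, max_executions=10, window_seconds=3600):
        self.max_executions = max_executions
        self.window_seconds = window_seconds
        # (plugin, user) -> timestamps of recent executions
        self._events = defaultdict(deque)

    def allow(self, plugin, user, now):
        events = self._events[(plugin, user)]
        # Drop timestamps that have fallen out of the window.
        while events and now - events[0] > self.window_seconds:
            events.popleft()
        if len(events) >= self.max_executions:
            return False  # pause execution; ask the user to confirm
        events.append(now)
        return True
```

Scoping the deque key per user (or per device/workflow, as the idea suggests) is what makes the "trigger conditions" configurable; a global limit would simply key on the plugin alone.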
For now, the bot only offers one button to reopen tickets, no matter what kind of ticket it is (incident, request, problem, task...). The button is called "Re-open issue". This is not logical, because a simple request is not an issue. I propose we rename the button to "Re-open ticket" so that it is neutral and works with all kinds of tickets.
As a product owner/manager, I’m often presenting a roadshow deck to stakeholders at different levels of the organization. As Moveworks continues to evolve, it would be incredibly helpful to have a self-service repository of “default” architecture slides and marketing materials that we could pull from as needed. While I can always reach out to my CSM, having this content available on demand would make building internal decks much easier.
Problems:
- We do not have much influence over how slots are inferred (descriptions and the three inference options are nice, but are not sufficient in many use cases).
- Slot Resolver Strategies handle inference poorly when the options to choose from are objects with multiple attributes. I can make it work by outputting only the name of each value (a list of strings instead of a list of objects), but I also have description values for each option object that I want considered to help choose the correct option. This approach also means the API calls are made twice for resolver strategies: once in the resolver strategy to get the string list, and again to get the object in the conversational process, which is needed for further actions.
- If a slot is inferred incorrectly, the user can ask for a change, but the bot doesn't handle that change very reliably, if at all.
- Asking for all options of a slot does not always work, and when it does, the list is usually cut off after around 20.

Current Workaround:
- Use a generate_structured_value_action that takes in a description slot (and possibly others, like the user) and does the selection. This lets me pass it multiple attributes to use for selection and build out a detailed prompt on how to choose the proper values. I also have it return a confidence score, so if it isn't confident in the choice, I can exit the plugin and ask for more detail. One downside is that it then lacks the full context of the conversation; I can try to grab the context with a slot, but that does not always work reliably.
- Use plugin, conversational process, and slot descriptions to influence behavior as much as possible for choosing and displaying options, but this also does not always work reliably.

While this is working well, I want slots themselves to handle this sort of behavior instead of building around the problems.

Proposed solutions:
- For a slot, give us a text box to define the system_prompt and user_input, just like the generate_structured_value_action. This should work seamlessly with resolver strategies. The system prompt can define the logic for how to use inputs to choose the proper slot value and when it is necessary to ask the user for an exact value.
- For slots that are not set to always infer, always output a human-readable version of the value selected on the action when it is selected. At any time afterwards, the user may request to change the value. If they do so, the conversational process goes back to the action where the slot is set as required and moves forward from there. Slots set as required in following steps should be reconsidered in this case.
- If a user asks to see all options of a slot that has a resolver strategy, all options are presented and not cut off after ~20. The most I currently have is around 130 options for one slot; most are under 20, but I do have several over this.
- Provide the conversation context as another system-provided variable that can be leveraged in a conversational process, like data and meta_info.
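The confidence-gated selection in the workaround above can be sketched generically. This is plain Python, not Moveworks Creator Studio syntax; in the real workaround an LLM does the matching, so the keyword-overlap scoring, the option fields (`name`, `description`), and the 0.5 threshold here are all illustrative stand-ins:

```python
def select_option(query, options, min_confidence=0.5):
    """Pick the option whose name/description best matches the query.

    Stand-in for the generate_structured_value_action workaround: score
    each option against the user's description using both attributes,
    and return (option, confidence). When confidence is below the
    threshold, return (None, confidence) so the plugin can exit and ask
    the user for an exact value instead of guessing.
    """
    query_words = set(query.lower().split())
    best, best_score = None, 0.0
    for opt in options:
        # Both the name and the description contribute to the match,
        # which is the point of passing objects rather than bare strings.
        text = f"{opt['name']} {opt['description']}".lower()
        overlap = query_words & set(text.split())
        score = len(overlap) / len(query_words) if query_words else 0.0
        if score > best_score:
            best, best_score = opt, score
    if best_score < min_confidence:
        return None, best_score  # not confident: ask for an exact value
    return best, best_score
```

Because the selection returns the full option object, the conversational process does not need a second API call to map a chosen string back to its object.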