
    Jared

Feature Request: Analytics for Proactive ("Pushed") Bot Communications — On Roadmap

Overview

Currently, the platform’s analytics focus on user-initiated interactions with our Moveworks bot. However, as customer bots evolve to become more proactive, sending notifications and updates without direct user prompts, we lack visibility into the reach and effectiveness of these communications. To better understand the impact of bot-initiated messages, we propose adding analytics for proactive communications. This would provide insight into what types of messages are being sent, how frequently, and how users engage with them.

Proposal: Implement Analytics for Bot-Initiated Messaging

We request a new analytics capability to track and analyze any communication that originates from the bot, including but not limited to:
• System-generated notifications (e.g., password reset alerts)
• Employee communications tool messages
• Creator Studio-initiated messages
• Concierge or plugin-based notifications (e.g., ITSM status updates, comments)
• External integration notifications (e.g., ITSM approval workflows)

Key Metrics & Capabilities Requested:
1. Message Volume Tracking – How many bot-initiated messages are sent over time?
2. Message Source Breakdown – Categorization by system, integration, or trigger type.
3. Recipient Insights – Who is receiving these messages? Are certain groups more (or less) engaged than others?
4. Engagement Metrics – How do users interact with bot-initiated messages (e.g., click rates, dismissals, response rates)?
5. Impact Measurement – Correlation between notifications and user actions (e.g., did a status change notification lead to a follow-up inquiry?).

Benefits:
• Visibility – Understand the scale and scope of bot-initiated communications.
• Optimization – Identify which messages are effective and refine strategies accordingly.
• User Experience Improvement – Reduce notification fatigue by analyzing engagement trends.
• Data-Driven Decision Making – Leverage insights to enhance proactive support efforts.
By implementing analytics for proactive bot messaging, we can ensure these communications are meaningful and effective, ultimately improving the user experience and increasing the value of the Moveworks bot.
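To make the requested metrics concrete, they could be computed from a simple log of message events. The sketch below is purely illustrative: the event schema and field names are hypothetical assumptions, not an existing Moveworks analytics export.

```python
from collections import Counter

# Hypothetical bot-initiated message events; field names are illustrative,
# not an actual Moveworks schema.
events = [
    {"source": "ITSM", "recipient": "alice", "action": "clicked"},
    {"source": "ITSM", "recipient": "bob", "action": "dismissed"},
    {"source": "Creator Studio", "recipient": "alice", "action": "replied"},
    {"source": "ITSM", "recipient": "carol", "action": "clicked"},
]

def summarize(events):
    """Compute message volume, source breakdown, and engagement rate."""
    volume = len(events)
    by_source = Counter(e["source"] for e in events)
    engaged = sum(1 for e in events if e["action"] in ("clicked", "replied"))
    return {
        "volume": volume,
        "by_source": dict(by_source),
        "engagement_rate": engaged / volume if volume else 0.0,
    }

print(summarize(events))
```

Even a summary this simple would cover the first two requested metrics (volume tracking and source breakdown) and a basic engagement rate.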

Python Script Actions Support in Agent Studio — Delivered

Introducing Python Script Actions, now Generally Available in Agent Studio across all commercial regions! Building custom logic in your agents just got a massive upgrade.

One of the most consistent requests from developers has been clear: "APIthon is too limited. I need real Python with real libraries." Until now, you were stuck with a restricted scripting subset, making complex data transformations, calculations, and text processing either impossible or painfully hacky. Python Script Actions fix that.

You can now write full Python 3 code directly inside Agent Studio (screenshots of the old and new experience attached), with access to all built-in modules and a curated set of pre-approved external libraries. Python Script Actions plug into Compound Actions and Conversational Processes exactly like APIthon scripts do, but with dramatically more power. Define input arguments from slots or previous step outputs, write your logic, and the value of the last line of code becomes your output. That's it!

What's included:
• Full Python 3 with all standard libraries (json, datetime, re, math, collections, etc.)
• 8 pre-approved external libraries out of the box: pandas, numpy, scikit-learn, beautifulsoup4, nltk, plotly, seaborn, pycryptodomex
• Same integration model as APIthon: define input args, write code, last line = output
• Built-in testing in the Agent Studio editor before you deploy
• 6 supported data types for input arguments: string, integer, number, boolean, array, object

Full documentation: Python Script Actions Reference

Try it out and let us know what you think!
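To illustrate the "last line = output" model described above, here is a minimal sketch of what a Script Action body might look like. The input names (`tickets`, `status_filter`) and the ticket shape are hypothetical; in Agent Studio they would be defined as input arguments mapped from slots or previous step outputs, not hard-coded as below.

```python
# Hypothetical input arguments; in Agent Studio these would come from
# slots or previous step outputs rather than being assigned inline.
tickets = [
    {"id": "INC-1", "status": "open", "priority": 3},
    {"id": "INC-2", "status": "closed", "priority": 1},
    {"id": "INC-3", "status": "open", "priority": 1},
]
status_filter = "open"

# The full standard library is available (unlike APIthon's restricted subset).
from operator import itemgetter

filtered = [t for t in tickets if t["status"] == status_filter]
filtered.sort(key=itemgetter("priority"))  # most urgent first

# The value of the last line of code becomes the action's output.
{"count": len(filtered), "ids": [t["id"] for t in filtered]}
```

The last line is a bare expression rather than an assignment, matching the integration model the post describes: whatever that expression evaluates to is what downstream steps receive.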

Slot Resolver strategy inference suggestions — New

Problems:
• We do not have much influence over how slots are inferred (descriptions and the 3 inference options are nice, but are not sufficient in many use cases).
• Slot Resolver Strategies infer very poorly when the options to choose from are objects with multiple attributes. I can make it work by outputting only the name of each value (a list of strings instead of a list of objects), but each option object also has a description value that I want considered to assist in choosing the correct option. This approach also means the API calls are made twice for resolver strategies: once in the resolver strategy to get the string list, and again in the conversational process to get the object needed for further actions.
• If a slot is inferred incorrectly, the user can ask for a change, but that change is not handled reliably, if at all.
• Asking for all options of a slot does not always work, and when it does, the list is usually cut off after around 20.

Current Workaround:
• Use a generate_structured_value_action that takes in a description slot (and possibly others, like the user) and does the selection. This lets me pass in multiple attributes to use for selection and build a detailed prompt on how to choose the proper values. I also have it return a confidence score, so if it isn't confident in the choice, I can exit the plugin and ask the user to provide more detail. One negative is that it lacks the full context of the conversation; I can try to grab the context with a slot, but that does not work reliably.
• Use plugin, conversational process, and slot descriptions to influence behavior as much as possible for choosing and displaying options, but this does not always work reliably.

While this is working well, I want slots themselves to handle this sort of behavior instead of my building around the problems.

Proposed solutions:
1. For a slot, give us a text box to define the system_prompt and user_input, just like generate_structured_value_action. This should work seamlessly with resolver strategies. The system prompt can define the logic for how to use inputs to choose the proper slot value and when it is necessary to ask the user for an exact value.
2. For slots that are not set to always infer, always output a human-readable version of the selected value on the action when it is selected. At any time afterwards, the user may request to change the value. If they do so, the conversational process goes back to the action where the slot is set as required and moves forward from there. Slots set as required in following steps should be reconsidered in this case.
3. If a user asks to see all options of a slot that has a resolver strategy, present all of them without cutting off after ~20. The largest list I currently have is around 130 options for one slot. Most are under 20, but I do have several over this.
4. Provide the conversation context as another system-provided variable that can be leveraged in a conversational process, like data and meta_info.
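The confidence-gated workaround described above can be sketched in plain Python. Everything here is an illustrative assumption: the option schema, the prompt text, the 0.7 threshold, and the shape of the structured result are not the actual generate_structured_value_action contract.

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff; tune per use case

# Hypothetical resolver options, each with multiple attributes that
# should inform the selection (not just the name).
options = [
    {"name": "VPN Access", "description": "Grants remote network access"},
    {"name": "Wi-Fi Access", "description": "Grants on-site wireless access"},
]

def build_selection_prompt(options, user_request):
    """Build a system prompt exposing every attribute of each option,
    mirroring the detailed-prompt approach described above."""
    lines = [f"- {o['name']}: {o['description']}" for o in options]
    return (
        "Choose the single best option for the request below and return "
        "its name plus a confidence score between 0 and 1.\n"
        "Options:\n" + "\n".join(lines) +
        f"\nRequest: {user_request}"
    )

def handle_selection(result):
    """Exit with a clarification request when the model is unsure."""
    if result["confidence"] < CONFIDENCE_THRESHOLD:
        return {"status": "needs_detail",
                "message": "Please provide more detail about what you need."}
    return {"status": "selected", "value": result["name"]}

# A hypothetical structured result from the generation step:
print(handle_selection({"name": "VPN Access", "confidence": 0.9}))
```

The point of the sketch is the control flow: a low-confidence result exits the plugin with a request for more detail instead of committing to a guess, which is exactly the behavior the proposal asks to have built into slots natively.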