Ally Moderator Workflow Proposal

The Ally Moderator Workflow

The fundamental entities around which Ally organises information are the Actor and the Story. Actors are the members of the community Ally is analysing who take part in a story. The story is the conversation between actors that has been classified and assigned to one or more themes. These themes cover behaviours such as financial fraud, grooming, and hate speech.

Creating a Story

There are two ways a story can be created: automatically by the Ally System, or manually by a moderator. The manual process can be performed using either the Ally Dashboard or the Ally API.
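To make the manual path concrete, a story creation request might look like the following sketch. The base URL, endpoint, and payload fields are illustrative assumptions rather than the actual Ally API.

```python
import requests

# Hypothetical base URL and payload shape, for illustration only.
ALLY_API = "https://ally.example.com/api/v1"

def create_story(api_key: str, actor_ids: list[str], message_ids: list[str], theme: str) -> dict:
    """Manually create a story from a set of actors and messages."""
    response = requests.post(
        f"{ALLY_API}/stories",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "actor_ids": actor_ids,      # the actors taking part in the conversation
            "message_ids": message_ids,  # the messages that make up the story
            "theme": theme,              # e.g. "financial_fraud" or "hate_speech"
        },
    )
    response.raise_for_status()
    return response.json()
```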

The Anatomy of a Story

A story is composed of the messages sent between the associated actors. The messages are colour coded to match these actors, and messages that have been classified by the Ally System are labelled with the classification. Where possible, classified messages include substring highlighting so the cause of the classification can be surfaced. Moderators also have the option to reassign actor colours if they choose to convey meaning with their colour choices. For example, bad actors could be assigned warm colours, and recipients or good actors could be assigned cool colours. By default, however, colours are assigned in the same order for every story, and the specific colours have been chosen to ensure there is sufficient contrast between actors. This combination of labels and colour coding allows for fast scanning of the story, so moderators are able to understand the structure of the conversation and quickly determine whether an action needs to be taken against one or more actors.
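As a rough illustration of these elements, one plausible shape for a story record is sketched below. The field names are assumptions for illustration, not the actual Ally schema.

```python
# Illustrative only: a story ties together actors, colour assignments,
# classified messages, and substring highlights.
story = {
    "id": "story-1042",
    "themes": ["financial_fraud"],
    "actors": [
        {"id": "actor-7", "colour": "#d62728"},  # colours can be reassigned by moderators
        {"id": "actor-9", "colour": "#1f77b4"},
    ],
    "messages": [
        {
            "actor_id": "actor-7",
            "text": "Send me your account details and I'll double your coins",
            "classification": "financial_fraud",
            # substring highlighting: character offsets surfacing the cause
            "highlights": [{"start": 8, "end": 28}],
        },
        {"actor_id": "actor-9", "text": "no thanks", "classification": None},
    ],
}
```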

For Stories that require discussion, moderators are able to annotate both the actors as well as the messages in a story. Being able to add a comment to an actor saying “This is the instigator” or “This is the victim” or adding a comment to a message such as “I think this suggests this person is trying to use social engineering to con the recipient” can be incredibly helpful if a story needs to be reassigned to a new moderator or escalated to a supervisor.

In many cases, a story will not include the entire conversation and will only incorporate the relevant messages plus some messages either side for context. If a moderator decides more context is required and additional messages are available, they are able to include these additional messages at the beginning or end of the story.

Classification Feedback

It is critical that Ally receives feedback from moderators indicating whether classifications are correct or not. This feedback loop can then be used to improve the quality of classifications and tailor Ally to the community being moderated. Mechanisms to support this feedback loop are available within the story view so moderators can efficiently provide their feedback while reviewing the story.
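As a sketch of what providing feedback might look like programmatically, the endpoint and field names below are illustrative assumptions; the dashboard exposes the same feedback loop through the story view.

```python
import requests

# Hypothetical feedback endpoint, for illustration only.
def submit_classification_feedback(api_key: str, message_id: str, correct: bool) -> None:
    """Tell Ally whether a message's classification was correct."""
    requests.post(
        "https://ally.example.com/api/v1/feedback",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"message_id": message_id, "classification_correct": correct},
    ).raise_for_status()
```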

Verbal and Non-Verbal Events

The Ally API supports the ingress of both verbal events (i.e. chat messages) and non-verbal events (i.e. game events). The core of a story is its verbal events, but non-verbal events can be equally useful for providing context. If a player reacts negatively, it is useful to know whether the cause was a message sent by another player or the fact that they just lost the game. This is particularly important when analysing negative responses to messages. For example, when a player receives an unwanted flirtation, they could react by sending a negative message or, more often, by saying nothing at all. If the recipient chooses to leave the “room” or even the game, it is critical to capture these non-verbal events, as choosing to no longer take part in a community or a game has a direct impact on the success of the game.
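The sketch below shows how the two event types might be submitted together; the ingress URL and event shapes are assumptions for illustration.

```python
import requests
from datetime import datetime, timezone

# Hypothetical ingress endpoint, for illustration only.
INGRESS_URL = "https://ally.example.com/api/v1/events"

# A verbal event: a chat message sent by an actor.
verbal_event = {
    "type": "verbal",
    "actor_id": "actor-7",
    "room_id": "room-3",
    "text": "hey, want to team up?",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# A non-verbal event provides context, e.g. the recipient leaving the room.
non_verbal_event = {
    "type": "non_verbal",
    "actor_id": "actor-9",
    "room_id": "room-3",
    "event": "left_room",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

for event in (verbal_event, non_verbal_event):
    requests.post(INGRESS_URL, json=event).raise_for_status()
```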

Workflow States

A story can be in one of multiple states. These states reflect where the story resides within the workflow.

  • Story In Progress: A potential story has been detected by Ally, but the conversation is still in progress.
  • Ready for Moderation: The story is complete and ready for moderation.
  • Moderation in Progress: A moderator has begun the process of reviewing the Story.
  • On Hold: The moderation process is on hold. This could be because the Story was escalated to a supervisor or additional information needs to be gathered to complete the moderation process.
  • Resolved: The story has been resolved, either due to an action taken by a moderator or because the story resolved itself. If a story was in progress but the actors managed to resolve their issue themselves, the story can be considered resolved.
  • Closed Without Action: The story was closed but no action was deemed necessary.
  • Closed With Action: The story was closed and an action against an actor was taken.

{TODO: Add the state transition diagram}

A story can be progressed through the workflow either manually by users of Ally or automatically by Ally, depending on the business logic that has been set up.
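Until the diagram is in place, the sketch below captures one plausible encoding of the states and their transitions. The allowed transitions are governed by the configured business logic, so this mapping is an illustrative assumption rather than the definitive state machine.

```python
from enum import Enum

class StoryState(Enum):
    STORY_IN_PROGRESS = "story_in_progress"
    READY_FOR_MODERATION = "ready_for_moderation"
    MODERATION_IN_PROGRESS = "moderation_in_progress"
    ON_HOLD = "on_hold"
    RESOLVED = "resolved"
    CLOSED_WITHOUT_ACTION = "closed_without_action"
    CLOSED_WITH_ACTION = "closed_with_action"

# One plausible set of transitions; the real rules are configurable.
ALLOWED_TRANSITIONS = {
    StoryState.STORY_IN_PROGRESS: {StoryState.READY_FOR_MODERATION, StoryState.RESOLVED},
    StoryState.READY_FOR_MODERATION: {StoryState.MODERATION_IN_PROGRESS},
    StoryState.MODERATION_IN_PROGRESS: {
        StoryState.ON_HOLD,
        StoryState.RESOLVED,
        StoryState.CLOSED_WITHOUT_ACTION,
        StoryState.CLOSED_WITH_ACTION,
    },
    StoryState.ON_HOLD: {StoryState.MODERATION_IN_PROGRESS},
}

def transition(current: StoryState, target: StoryState) -> StoryState:
    """Move a story to a new state, rejecting disallowed transitions."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move a story from {current.name} to {target.name}")
    return target
```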

Taking Action

During the moderation process, actions can be taken against actors. These actions can include warning, muting, banning, or any other action that the community supports. The list of available actions is tailored for each community, so there is a one-to-one relationship between the actions available in Ally and the actions supported by the community in question. Ally can be integrated into the community via webhooks.
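As an illustration of the webhook integration, Ally might deliver a taken action to a URL the community registers, along the lines of the sketch below. The payload fields and URL are assumptions.

```python
import requests

# Hypothetical webhook delivery: when a moderator takes an action in Ally,
# it is POSTed to an endpoint the community has registered.
action_payload = {
    "story_id": "story-1042",
    "actor_id": "actor-7",
    "action": "mute",          # any action the community supports: warn, mute, ban, ...
    "duration_hours": 24,
    "taken_by": "moderator-3",
}

requests.post(
    "https://game.example.com/hooks/ally-actions",  # registered by the community
    json=action_payload,
).raise_for_status()
```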

Repeat Offenders

To get the most out of Ally, we recommend moderators start with the actor list view. This view can be used as a way of triaging the most problematic actors. We know that a small number of bad actors can have a disproportionate impact on a community. These actors tend to be repeat offenders. By focusing on these actors first, the moderation team will be able to maximise their effectiveness.

The Actor Profile

Each actor has a profile page which aggregates information about their open and closed stories, actions taken against them, and their patterns of behaviour. This view will allow moderators to quickly answer questions such as:

  • Has this actor been muted before?
  • How many hate speech stories have been created against this actor?
  • How frequently are stories created that include this actor?

This information provides the context required for moderators to be able to determine what action is appropriate to take against an actor. If the actor is a repeat offender and repeated temporary mute actions haven’t curbed their behaviour, it may be time to ban them from the game.
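The kind of query that answers these questions might look like the sketch below; the endpoint and field names are illustrative assumptions.

```python
import requests

# Hypothetical profile endpoint, for illustration only.
def get_actor_profile(api_key: str, actor_id: str) -> dict:
    """Fetch the aggregated profile for an actor."""
    response = requests.get(
        f"https://ally.example.com/api/v1/actors/{actor_id}/profile",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    response.raise_for_status()
    return response.json()

profile = get_actor_profile("API_KEY", "actor-7")

# Has this actor been muted before?
has_been_muted = any(a["action"] == "mute" for a in profile["actions"])

# How many hate speech stories have been created against this actor?
hate_speech_stories = sum(1 for s in profile["stories"] if "hate_speech" in s["themes"])
```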

High Level Moderation

The power of Ally comes into its own when we consider the level at which moderation takes place. Chat moderation typically involves reviewing and moderating individual messages. For a decently sized community, the number of messages sent per day can easily be in the tens of millions. Ally allows moderators to evaluate behaviour at the conversation and player levels. Not only does this result in more effective moderation, as moderators are able to address large chunks of the chat stream through targeted actions, but they are also less likely to burn out. Seeing toxic content ad nauseam takes its toll on the people exposed to it. By allowing moderators to evaluate behaviour at a higher level, Ally helps break this cycle. Moderators are able to focus on questions such as “Does this actor repeatedly send explicit sexual content?” instead of “Is this message explicitly sexual?”.

Moderators are also able to apply actions at the actor level, not just the story level. If a moderator is reviewing the behaviour of an actor and finds that many stories fit the same pattern and warrant the same response, these stories can be handled as a batch and resolved or closed together. The result could be closing hundreds of stories in the time it takes to review a few of them. This is especially powerful when it comes to dealing with repeat offenders.
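A batch resolution might be expressed as a single request, sketched below with an assumed endpoint and payload shape.

```python
import requests

# Hypothetical batch endpoint, for illustration only.
def close_stories_in_batch(api_key: str, story_ids: list[str], resolution: str) -> None:
    """Close a set of stories that fit the same pattern in one request."""
    requests.post(
        "https://ally.example.com/api/v1/stories/batch-close",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"story_ids": story_ids, "resolution": resolution},
    ).raise_for_status()

# e.g. close every reviewed story for a repeat offender in one go
close_stories_in_batch("API_KEY", ["story-1042", "story-1043", "story-1044"], "closed_with_action")
```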

Keyboard Shortcuts

Once a moderator is sufficiently familiar with how Ally works, they will find the use of keyboard shortcuts elevates their efficiency even further. The exact set of shortcuts can be tailored to meet the needs of the community moderation team.

Managing Work and the QA Process

Organisations, Products, and User Roles

Ally supports multiple data streams for a single organisation. Each data stream reflects the data from an individual game or product owned by the organisation.

These are the different roles a user can have. At the organisation level, every user is either an Organisation Admin or an Organisation Member:

  • Organisation Admin: Can add other organisation admins and members as well as assign product roles to organisation members for their organisation. The organisation admin is a special user who can access all of the products for their organisation and perform any of the actions available to the product roles.
  • Organisation Member: All users who are not organisation admins.

Organisation members also have one of these roles for each product they are assigned to:

  • Product Admin: Can add/manage other product admins, supervisors, and moderators for the product(s) they are assigned to. Can do everything a product supervisor can.
  • Product Supervisor: Can assign work to the work queue of product moderators and access the QA view for their product. Can do everything a product moderator can.
  • Product Moderator: Can browse content in the Ally Dashboard (messages, actors, stories, and actions). Can create new stories. Can move a story through the workflow states.
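The product roles form a strict hierarchy, with each role inheriting the capabilities of the role below it. A minimal sketch of that hierarchy follows; the permission names are illustrative assumptions.

```python
# Permissions introduced at each level of the product role hierarchy.
# Names are illustrative, not the actual Ally permission model.
PERMISSIONS = {
    "product_moderator": {"browse_content", "create_story", "move_story_state"},
    "product_supervisor": {"assign_work", "access_qa_view"},
    "product_admin": {"manage_product_users"},
}

ROLE_HIERARCHY = ["product_moderator", "product_supervisor", "product_admin"]

def permissions_for(role: str) -> set[str]:
    """Collect a role's own permissions plus everything inherited from lower roles."""
    granted: set[str] = set()
    for level in ROLE_HIERARCHY:
        granted |= PERMISSIONS[level]
        if level == role:
            break
    return granted
```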

NB: Although the Ally System does support multiple organisations, in production we have a different instance of the Ally System for each organisation. Every instance of the Ally System resides within its own Virtual Private Cloud (VPC). This allows us to ensure there isn’t any inappropriate sharing of data between organisations.

Work Management and Assignment

Each product moderator has a work queue that shows the stories that have been assigned to them. A story can only be assigned to a single user at any given time, though it can be reassigned.

There is a view for each user that allows them to see their queue and track their progress.

The QA Process

The primary approach to QA in Ally is spot checking how moderators closed stories and determining whether the actions taken against an actor meet the community management guidelines. Product supervisors and admins can access the QA view as well as see the results of the QA process. Product moderators are able to see the QA feedback for stories they closed.
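Spot checking amounts to sampling a fraction of recently closed stories for review. A minimal sketch, assuming a configurable sampling rate, is shown below.

```python
import random

def sample_for_qa(closed_stories: list[dict], rate: float = 0.05) -> list[dict]:
    """Pick roughly `rate` of the closed stories for a supervisor to review."""
    sample_size = max(1, round(len(closed_stories) * rate))
    return random.sample(closed_stories, k=min(sample_size, len(closed_stories)))
```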

System and Moderator Analytics

We feel feedback is critical to confirming success and finding issues within Ally and the moderation process. This is why Ally will include a view within the dashboard that displays time series analysis and statistics covering both how users work with Ally and how Ally is performing.

This view will enable pinpointing inefficiencies in the moderation process as well as how effective the Ally business logic is. For example, it should be obvious when one of the classifiers is under-performing and requires tweaking or retraining.

Crucially, it will be possible to see when events such as deployments took place when reviewing the time series charts. It is critical for both our team and yours to be able to see how deployed changes have impacted the key performance indicators for Ally and the moderation team.

Iterative Improvement Through Analysis

The Ally analytics view makes it easy to pinpoint performance problems with both automated processes and moderator effectiveness. We’ll work with you to iteratively improve the moderator workflow and overcome performance problems. The analytics view will allow you to see which issues we need to focus on, as well as whether a deployment has resolved the issue at hand.

