Discord Transparency Report: April — Dec 2019

A Note from Jason and Stan

We created Discord in 2015 to bring people together around games, and it’s amazing to see the different ways people today use it to talk with their friends and communities.

However, with tens of millions of people active on Discord every day, keeping bad content and malicious behavior off requires vigilance — from both us and our users. While the Trust & Safety team makes life difficult for bad actors, our users play their part by reporting violations — showing that they care about this community as much as we do.

Last year we published our first-ever transparency report, providing a detailed snapshot of the work we do to ensure a safe and positive experience on Discord. As we continue to invest in Trust & Safety and improve our enforcement capabilities, we’ll have new insights and learnings to share. The next report will land in August as we move to a twice-yearly cadence.

Summary


  • Discord’s overall user base grew significantly in 2019, and reports rose alongside it. We received roughly 176,000 reports over the last nine months of the year, a 12% average monthly increase over the first three months.
  • Our Trust & Safety team also grew in 2019, and we took more enforcement actions than ever before. Discord banned around 5.2 million accounts from April to December 2019 — the overwhelming majority of those for spam violations — and banned nearly 20,000 servers. While this may seem like a sizable increase from our last report, that is largely due to a change in reporting methodology, which we go into greater detail about below.
  • Finally, we were also able to remove a majority of servers responsible for worst-of-the-worst exploitative and extremist content before any bystanders had to encounter them or their content.

Reports received

The Trust & Safety team spends much of its time responding to your reports. We encourage people to report any activity they believe violates our Community Guidelines or Terms of Service, and we investigate every report thoroughly. This chart breaks down the 176,000 reports we received by type of violation:

You may notice that the category breakdown in the above chart looks somewhat different from last year. The “Other Issue” category has shrunk, in part because we’ve added two new categories, “Incomplete Report” and “Platform,” to provide greater detail and context. Broadly speaking, platform violations involve behavior that abuses our API, such as creating a selfbot or spamming our API with invalid requests. Incomplete reports refer to situations where there wasn’t enough information to determine whether a violation took place, so no action was taken.

Responding to reports

The chart below shows how we responded to every report from April to December of last year. (For more information on action rates and what they mean, check out the first Transparency Report.)

The highest action rates typically occur where there’s both a high degree of verifiability and the issue affects a lot of people, like spam or malware. Conversely, if it’s an interpersonal conflict between two people, the action rates tend to be lower because the facts are often difficult to verify. It’s also worth noting that in cases such as self-harm reports, child endangerment, or imminent harm, “actioning” may mean escalating to authorities outside Discord.

You’ll notice that the action rate of harassment reports is relatively low. That’s because harassment is a broad category that includes bad behavior of different types and levels of severity, some of which can be stopped simply by blocking a user or banning them from a server. For example, we receive reports from new users that someone using foul language is harassing them. In those cases, we’ll educate the reporter about the block function but not take any action on the reported user.

On the other hand, the action rate for doxxing — posting personally identifiable information — is fairly high. We take these reports extremely seriously, and if confirmed we almost always ban the violator, in addition to deleting the content. Reports of malware and spam also see fairly high action rates, in part because there isn’t much gray area in what constitutes a violation.

User bans

In our first Transparency Report we only reported bans of users with a verified email address. This was because an unverified account is often a throwaway or duplicate account used by bad actors. However, as noted in the Summary section above, we are changing the reporting methodology, and going forward will report total account bans. To be fully transparent, and to make it easier to compare numbers historically, these are the ban figures from our first report using the new methodology:

We can now provide a more comprehensive overview of 2019. In this chart, we’ve grouped bans by category and tracked them across all four quarters.

You’ll notice this chart uses a logarithmic scale, since spam is such a huge category compared with everything else. In the last nine months, close to four million spambots were banned and removed from the platform, mostly by our anti-spam filters. Still, spammers are constantly evolving their behavior, and we’re investing more resources to catch them even earlier. Ultimately, we don’t think people should have to spend any time thinking about spam or protecting against it.

The second-largest category of user bans is exploitative content. We’ll go into more detail about this category a bit later since we’ve made an aggressive push to remove all users who engage in this behavior.

Server bans

In addition to taking action on individual users, we may also ban servers that violate our Terms or Guidelines:

Increased focus on proactive enforcement

Responding to user reports is an important part of Trust & Safety’s work, but we know there is also violating content on Discord that may go unreported. This is where our proactive efforts come in. Our goal is to stop these bad actors and their activity before anyone else encounters it. We prioritize getting rid of the worst-of-the-worst content because it has absolutely no place on Discord, and because the risk of harm is high.

Exploitative content is a major focus of our proactive work, more specifically non-consensual pornography (NCP), where intimate photos are shared as “revenge porn,” and sexual content related to minors (SCRM). We have a zero-tolerance policy for this activity and when we find it — either through reactive or proactive means — we remove it immediately, along with any users and servers sharing the content. In cases involving child sexual abuse material, we swiftly report the content and the users to the National Center for Missing and Exploited Children.

Another focus area for proactive enforcement is violent extremism, where we continue to take aggressive action on both groups and individuals. Violent extremism, in broad terms, is content where users advocate or support violence as a means to an ideological end. Examples include racially motivated violent groups, religiously motivated groups dedicated to violence, and incel groups.

So, what does proactive work entail? We don’t proactively read the contents of users’ private messages — privacy is incredibly important to us and we try to balance it thoughtfully with our duty to prevent harm. However, we scan 100% of images and videos uploaded to our platform using industry-standard PhotoDNA to detect matches to known child sexual abuse material. When we find someone who is engaging in this type of activity, we investigate their networks and their activity on Discord to proactively uncover accomplices or other sources of the content.
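At its core, the scanning described above is a lookup of each upload’s hash against a database of known hashes. Here is a minimal sketch of that idea in Python, with an important caveat: PhotoDNA computes a proprietary *perceptual* hash that survives resizing and re-encoding, whereas SHA-256 (used here purely as a stand-in) matches exact bytes only. The function names and sample hash values are hypothetical.

```python
import hashlib

# In production, this set would be populated from industry hash lists
# (e.g., those maintained by NCMEC), not hard-coded sample values.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def is_known_match(image_bytes: bytes) -> bool:
    """Return True if the upload's hash appears in the known-hash set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# Every upload is hashed and checked before further handling.
print(is_known_match(b"known-bad-image-bytes"))  # True  -> escalate
print(is_known_match(b"ordinary-screenshot"))    # False -> no action
```

The design point is that the platform never needs to store or compare the prohibited images themselves — only their hashes — which is what makes scanning every upload feasible at scale.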

Here are the results of our proactive work from April through December 2019:

In all three of these areas, over half the servers were removed without bystanders having encountered their content. On average, about 70% of NCP servers were deleted before they were reported, and close to 60% of SCRM and extremist servers were deleted proactively. We’re committed to putting even more resources into these proactive efforts.
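The percentages above are simple ratios: servers removed before any user report, divided by total servers removed in that category. A sketch with hypothetical counts (not Discord’s actual figures):

```python
def proactive_rate(removed_before_report: int, total_removed: int) -> float:
    """Fraction of removed servers taken down before any user report."""
    return removed_before_report / total_removed

# Hypothetical counts, for illustration only:
ncp = proactive_rate(removed_before_report=700, total_removed=1000)
print(f"{ncp:.0%}")  # 70% -> in line with "about 70%" of NCP servers
```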

The appeals process

No system is perfect, and even if our team is confident we took the right action, there may be mitigating circumstances or information that wasn’t originally available. Accordingly, every action taken on an account can be appealed by the user.

When considering an appeal, Trust & Safety reviews the original report and takes into account the severity of the current violation and any previous violations by the user or server. It’s worth noting that most users who successfully appeal do express remorse, take responsibility, or provide a deeper explanation of what happened.

Between April and December 2019, we banned upward of 5.2 million accounts, and slightly under 16,000 of those bans were reversed on appeal. Here’s a breakdown of the unban rate by category:

The outlier in the unban category is platform violations. This is largely due to cases where users have (either knowingly or unknowingly) created a selfbot. It’s common for people to not be aware that this violates our Terms, so we’re often willing to give users who appeal the benefit of the doubt.


Every single malicious act on Discord is one too many. We want Discord to be welcoming for everyone, and we promise to do our part by working aggressively to stop bad behavior before it happens, while respecting people’s privacy.

In 2020 you can expect continued transparency from Discord. Thank you for putting your trust in us and for choosing to build your community here.

Discord Transparency Report: April — Dec 2019 was originally published in Discord Blog on Medium.

