The Discord Identity Breach: What Do We Know?
Is the UK's Online Safety Act flawed or is this the fault of the companies? What should users do?
In early October 2025, Discord disclosed that one of its third-party customer service vendors had been hacked. The attackers stole data used in age verification appeals, including government ID photos, names, email addresses, and support ticket histories.
This was not a breach of Discord, but rather a breach of a third party service provider, 5CA, that we used to support our customer service efforts.
This incident impacted a limited number of users who had communicated with our Customer Support or Trust & Safety teams.
Of the accounts impacted globally, we have identified approximately 70,000 users that may have had government-ID photos exposed, which our vendor used to review age-related appeals.
No messages or activities were accessed beyond what users may have discussed with Customer Support or Trust & Safety agents.
Discord stressed that its main systems weren't breached; the issue lay with a subcontractor. But the damage was done. Up to 70,000 users may have had sensitive ID documents leaked, with hackers claiming access to far more.
The hackers themselves gave an interview to BleepingComputer, in which they described the breach:
According to the threat actor, they gained access to Discord’s Zendesk instance for 58 hours beginning on September 20, 2025. However, the attackers say the breach did not stem from a vulnerability or breach of Zendesk but rather from a compromised account belonging to a support agent employed through an outsourced business process outsourcing (BPO) provider used by Discord.
On October 14th, the outsourced vendor, a company called 5CA, issued a statement insisting that it had not suffered a breach and that it did not handle government IDs:
We are aware of media reports naming 5CA as the cause of a data breach involving one of our clients. Contrary to these reports, we can confirm that none of 5CA’s systems were involved, and 5CA has not handled any government-issued IDs for this client. All our platforms and systems remain secure, and client data continues to be protected under strict data protection and security controls.
We are conducting an ongoing forensic investigation into the matter and collaborating closely with our client, as well as external advisors, including cybersecurity experts and ethical hackers. Based on interim findings, we can confirm that the incident occurred outside of our systems and that 5CA was not hacked. There is no evidence of any impact on other 5CA clients, systems, or data. Access controls, encryption, and monitoring systems are fully operational and, as a precautionary measure, are under heightened review.
Our preliminary information suggests the incident may have resulted from human error, the extent of which is still under investigation. We remain in close contact with all relevant parties and will share verified findings once confirmed.
This is a messy story, and it's currently unclear exactly who was at fault and how the hackers managed to gain access to so much personal user information. Discord and 5CA are pointing fingers at each other. 5CA's insistence that it does not handle government IDs is strange, given that affected users have apparently been contacted by Discord and law enforcement about this very matter.
For millions of families who use Discord as a social space for their children and teens, this should be a wake-up call, and a warning about how safety laws can create new risks.
Why Does This Breach Matter?
The UK’s Online Safety Act was designed to make the internet safer for children. It requires platforms to verify user ages in order to filter the content people see. In practice this means platforms are now collecting, and sometimes outsourcing, large quantities of private identity data, including government-issued identity documents.
Discord’s breach now shows us what can go wrong:
The weakest link is often not the platform itself, but its vendors and as always, human error.
Once identity data leaves a user’s device, it’s at risk.
Children’s and teens’ IDs, including passports and driving licences, are now part of global breach data sets.
This incident highlights why Britain’s Online Safety Act, though perhaps well-intentioned, is flawed. It was intended to protect children from widespread free pornographic material, but its scope has since crept to cover a huge number of gaming and social platforms.
Most of these platforms, Discord included, don’t build their own ID verification systems. They hire third-party providers, often startups with unclear security policies. These vendors become a new class of data risk that the Act doesn’t properly regulate.
The law pushes for more data collection “to protect children.” But every extra database of personal IDs becomes another potential breach, especially when handled by external processors. Since each and every platform may require similar procedures, a teenager may have their personal information scattered across dozens or even hundreds of third-party companies by the time they turn 18.
If a subcontractor then leaks data, who is responsible: the vendor or the platform? The Act’s language doesn’t clearly answer this.
This whole affair now brings into question the UK government’s attempts to introduce Digital ID cards, and centralised identity databases. At present the British system of distributed identity documents offers a level of protection against hackers, but if all of our identity, medical, legal, financial and other information is held by similar third-party vendors then we can expect to see more catastrophic data breaches.
What Parents (and Policymakers) Should Take From This
If an app demands ID verification, ask why and what alternatives exist (e.g. parental consent codes, device-level checks).
Platforms should publicly disclose their third-party processors and publish annual security audits.
Empowering parents and kids to understand risks, rather than automating safety through data collection, remains in our opinion the better long-term fix. Discord offers a range of account security options, including two-factor authentication, and it may also be worth contacting them to ask whether any data held for you or your child can be deleted.
One of the core principles of data protection, enshrined in the GDPR, the DPA 2018 and other laws, as well as in codes of best practice, is data minimisation. Once a user’s ID has been verified, there should be no reason to keep hold of that data, whether it is a passport, a driving licence or anything else. If we must have these checks, then the rule should be verify then delete.
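To make the "verify then delete" rule concrete, here is a minimal Python sketch of how a platform could retain only the boolean outcome of an age check and destroy the uploaded document itself. The verification step is a hypothetical stand-in, not any real provider's API; the point is that the ID file is deleted the moment the check completes, so there is nothing left for a breach to expose.

```python
import os
import tempfile

def verify_age_from_id(id_image_path: str) -> bool:
    """Hypothetical stand-in for a real ID check.

    A production system would extract and validate the date of birth
    from the document; here we simply confirm the file is non-empty.
    """
    return os.path.getsize(id_image_path) > 0

def verify_then_delete(id_image_path: str) -> bool:
    """Return only the verification outcome and destroy the source document.

    After this call the system retains a single flag ("age verified"),
    not the ID itself -- data minimisation in practice.
    """
    try:
        return verify_age_from_id(id_image_path)
    finally:
        # Delete the ID as soon as the check completes, pass or fail.
        os.remove(id_image_path)

# Demo with a throwaway file standing in for an uploaded ID photo.
fd, path = tempfile.mkstemp(suffix=".jpg")
os.write(fd, b"fake-id-image-bytes")
os.close(fd)

verified = verify_then_delete(path)
print(verified, os.path.exists(path))  # True False
```

The `try`/`finally` structure is the essential design choice: deletion happens whether verification succeeds, fails, or raises, so no code path leaves the document sitting in storage.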
The DigiShield View
The Online Safety Act’s biggest weakness is that it confuses protection with control.
Safety requires trust, transparency, and informed families, not more databases.
We will continue tracking how UK legislation shapes digital privacy for children and pushing for smarter, less invasive alternatives.
