Overview
Vercel is a complete platform for the web. It provides developer tools and cloud infrastructure to build, scale, and secure a faster, more personalized web. With Vercel's platform and developer tooling, users can deploy code from their computers to the web in seconds, collaborate with teammates on pre-production versions of their websites, and benefit from other useful features like frontend analytics, instant deployment rollbacks, and integrated testing. Vercel's platform can handle usage needs at enormous scale, mitigate DDoS attacks, route traffic to over 100 edge locations around the globe to improve performance, and automatically cache sites to ensure uptime.
When it comes to content on Vercel's platform and our users, we take seriously our responsibility to preserve freedom of expression while countering illegal content that can harm individuals and society. This is especially important because Vercel is a platform with global reach and our users' hosted content is viewed around the world by people from different cultures, languages, and backgrounds. We act against harmful content through a combination of our policies and terms of service, detection systems, reporting tools, human review, review tools and systems, and content moderation workflows.
In this report, we outline and provide metrics contemplated by the EU Digital Services Act (DSA) regarding our moderation practices for potentially illegal content and policy-violative content in the EU. We are committed to improving and augmenting future iterations with further insights about violative content on our platform.
Content Moderation Measures and Processes
Vercel moderates content in both proactive and reactive ways.
Our proactive moderation happens as a result of internal detection of problematic usage and content on the platform. These detections are based on tooling and methods that look for abusive patterns of usage and abuse-related content. Vercel's platform abuse operations teams and engineers use these tools to investigate abuse-related activity on our platform.
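The detection-method table later in this report lists an `internal_automated_fingerprint` channel. As an illustration only (the names and logic here are hypothetical, not Vercel's actual tooling), fingerprint-based detection can be sketched as hashing deployed content and checking it against a blocklist of fingerprints of known-abusive content:

```python
import hashlib

# Hypothetical blocklist of SHA-256 fingerprints of known-abusive content.
# A real detection pipeline combines many signals; this sketch shows
# fingerprint matching alone.
KNOWN_ABUSIVE_FINGERPRINTS = {
    hashlib.sha256(b"example phishing page").hexdigest(),
}


def fingerprint(content: bytes) -> str:
    """Return a stable fingerprint for a piece of deployed content."""
    return hashlib.sha256(content).hexdigest()


def is_known_abusive(content: bytes) -> bool:
    """Flag content whose fingerprint matches a known-abusive entry."""
    return fingerprint(content) in KNOWN_ABUSIVE_FINGERPRINTS


print(is_known_abusive(b"example phishing page"))  # True
print(is_known_abusive(b"a legitimate site"))      # False
```

Because exact hashing only catches byte-identical copies, real systems typically also use pattern rules (such as the YARA rules referenced in the same table) to match variations of known abusive content.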
Our reactive moderation happens in response to notice given to Vercel by external entities, such as abuse reports filed with our reporting tool vercel.com/abuse, DMCA notices and legal requests submitted to Vercel, and other forms of communication with Vercel. To respond to reported problems, Vercel's abuse operations teams follow specialized content moderation workflows so that their actions and issue resolution outcomes align with our acceptable use policy and terms of service. One example is our handling of copyright infringement claims submitted as Digital Millennium Copyright Act (DMCA) notices. We have provisions in our terms of service and DMCA Policy for how users can contact us to submit a notice. Our team assesses each notice and takes action as appropriate, such as removing the content from our hosting platform.
Vercel's content moderation is performed by our trust & safety team, which includes two sub-teams: an abuse operations team and a platform abuse operations team. Both sub-teams have documented their content moderation processes as workflows, and they educate and train new team members on these workflows in collaborative work sessions. Content moderation actions can include removing a user's access to accounts or teams, removing user or team content from public view, removing user or team content from the user's or team's own access, and blocklisting specific content from the Vercel platform. In 2024, as part of Vercel's DSA compliance efforts, automated email notices are being introduced across content moderation action types so that users and reporters receive immediate, automatic notice about content being moderated, including the rationale and any avenues of recourse for appealing moderation actions.
The number of reports processed that resulted in content moderation, grouped by the channel through which Vercel initially received the report.
| Contact Method | Number of Complaints Received |
|---|---|
| Abuse Form | 360 |
| Email | 6359 |
| Internal | 8024 |
| internal_automated_fingerprint | 0 |
| internal_automated_yara_rule | 0 |
| internal_auto_closed_case | 0 |
| Other | 19 |
| Reporter API | 194 |
| Support Center | 2 |
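The per-channel counts above can be tallied with a short script (the counts are copied from the table; the dictionary structure is just for illustration):

```python
# Reports that resulted in content moderation, keyed by intake channel,
# reproduced from the table above.
reports_by_channel = {
    "Abuse Form": 360,
    "Email": 6359,
    "Internal": 8024,
    "internal_automated_fingerprint": 0,
    "internal_automated_yara_rule": 0,
    "internal_auto_closed_case": 0,
    "Other": 19,
    "Reporter API": 194,
    "Support Center": 2,
}

# Total moderated reports across all channels.
total = sum(reports_by_channel.values())
print(total)  # 14958
```

Internal detection (the `Internal` channel) accounts for more than half of the total, consistent with the proactive moderation practices described earlier.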
The number of reports processed that resulted in content moderation, grouped by the way the problem was surfaced to Vercel.
| Type of Illegal Content/Violation | Detection Method: Self-Identified | Detection Method: Other |
|---|---|---|
| Child sexual abuse material | 126 | 57 |
| Copyright infringements | 0 | 7 |
| Cyber harassment | 0 | 1 |
| General calls or incitement to violence and/or hatred | 0 | 0 |
| Impersonation or account hijacking | 0 | 0 |
| Inauthentic accounts | 15 | 1 |
| Other: Commercial use on Hobby | 0 | 0 |
| Other: DMCA notices | 0 | 1015 |
|  | 0 | 0 |
| Other: Miscellaneous illegal | 13 | 5 |
| Other: Namespace (Namesquatting) | 0 | 0 |
| Other: Namespace (Resale) | 0 | 0 |
| Other: Ownership dispute | 0 | 0 |
| Other: Phishing (Crypto) | 18 | 266 |
| Other: Platform misuse | 3987 | 142 |
| Phishing | 3851 | 4609 |
| Prohibited or restricted products | 0 | 9 |
| Promoting criminal activity | 12 | 2 |
| Trademark infringements | 0 | 809 |
| Unsafe or non-compliant products | 2 | 11 |
The number of external complaints received that resulted in content moderation, grouped by the type of violation.
| Type of Illegal Content/Violation | Number of external complaints received |
|---|---|
| Child sexual abuse material | 57 |
| Copyright infringements | 7 |
| Cyber harassment | 1 |
| General calls or incitement to violence and/or hatred | 0 |
| Impersonation or account hijacking | 0 |
| Inauthentic accounts | 1 |
| Other: Commercial use on Hobby | 0 |
| Other: DMCA notices | 1015 |
|  | 0 |
| Other: Miscellaneous illegal | 5 |
| Other: Namespace (Namesquatting) | 0 |
| Other: Namespace (Resale) | 0 |
| Other: Ownership dispute | 0 |
| Other: Phishing (Crypto) | 266 |
| Other: Platform misuse | 142 |
| Phishing | 4609 |
| Prohibited or restricted products | 9 |
| Promoting criminal activity | 2 |
| Trademark infringements | 809 |
| Unsafe or non-compliant products | 11 |
The number of own initiatives taken that resulted in content moderation, grouped by the type of violation.
| Type of Illegal Content/Violation | Number of Own Initiatives Taken |
|---|---|
| Child sexual abuse material | 126 |
| Copyright infringements | 0 |
| Cyber harassment | 0 |
| General calls or incitement to violence and/or hatred | 0 |
| Impersonation or account hijacking | 0 |
| Inauthentic accounts | 15 |
| Other: Commercial use on Hobby | 0 |
| Other: DMCA notices | 0 |
|  | 0 |
| Other: Miscellaneous illegal | 13 |
| Other: Namespace (Namesquatting) | 0 |
| Other: Namespace (Resale) | 0 |
| Other: Ownership dispute | 0 |
| Other: Phishing (Crypto) | 18 |
| Other: Platform misuse | 3987 |
| Phishing | 3851 |
| Prohibited or restricted products | 0 |
| Promoting criminal activity | 12 |
| Trademark infringements | 0 |
| Unsafe or non-compliant products | 2 |
This figure would show the median time to acknowledgment and action, grouped by Member State. However, for general complaints, Vercel does not collect reporter location data, and no reports from Member State official entities were actioned via content moderation; all reports from Member State official entities were resolved by the Vercel users in question after Vercel passed the requests on to them. Moving forward, Vercel plans to collect reporter location information so that future reports can provide this country-level granularity.
The number of reports actioned, grouped by reason and type of restriction that was applied.
| Type of Violation | Service Suspension | Visibility Restriction | Account Restriction |
|---|---|---|---|
| Child sexual abuse material | 65 | 54 | 64 |
| Copyright infringements | 0 | 7 | 0 |
| Cyber harassment | 0 | 1 | 0 |
| General calls or incitement to violence and/or hatred | 0 | 0 | 0 |
| Impersonation or account hijacking | 0 | 0 | 0 |
| Inauthentic accounts | 13 | 0 | 3 |
| Other: Commercial use on Hobby | 0 | 0 | 0 |
| Other: DMCA notices | 8 | 1001 | 6 |
|  | 0 | 0 | 0 |
| Other: Miscellaneous illegal | 11 | 1 | 6 |
| Other: Namespace (Namesquatting) | 0 | 0 | 0 |
| Other: Namespace (Resale) | 0 | 0 | 0 |
| Other: Ownership dispute | 0 | 0 | 0 |
| Other: Phishing (Crypto) | 154 | 22 | 108 |
| Other: Platform misuse | 2638 | 45 | 1446 |
| Phishing | 4443 | 411 | 3606 |
| Prohibited or restricted products | 6 | 1 | 2 |
| Promoting criminal activity | 7 | 0 | 7 |
| Trademark infringements | 4 | 801 | 4 |
| Unsafe or non-compliant products | 5 | 1 | 7 |
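The external-complaint and own-initiative tables above restate the two columns of the detection-method table: "Self-Identified" counts correspond to own initiatives and "Other" counts to external complaints. A short check over a subset of rows (counts copied from the tables above) confirms the columns agree:

```python
# (self_identified, other) pairs from the detection-method table.
detection = {
    "Child sexual abuse material": (126, 57),
    "Phishing": (3851, 4609),
    "Trademark infringements": (0, 809),
}

# Same rows from the own-initiative and external-complaint tables.
own_initiative = {
    "Child sexual abuse material": 126,
    "Phishing": 3851,
    "Trademark infringements": 0,
}
external = {
    "Child sexual abuse material": 57,
    "Phishing": 4609,
    "Trademark infringements": 809,
}

# Verify the two single-column tables match the two-column table.
for violation, (self_id, other) in detection.items():
    assert own_initiative[violation] == self_id
    assert external[violation] == other
print("columns consistent")
```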
This section would contain the number of reports actioned, grouped by type of violation and Member State. However, for general complaints, Vercel does not collect reporter location data, and no reports from Member State official entities were actioned via content moderation; all reports from Member State official entities were resolved by the Vercel users in question after Vercel passed the requests on to them. Moving forward, Vercel plans to collect reporter location information so that future reports can provide this country-level granularity.