This is a legislative proposal submitted through the Legislative Proposal Request form on the official website of Assemblyman Rob Bonta, representing the 18th California Assembly District.
Submitted by
David Gunderman, Broker Associate - Keller Williams Luxury International Realty
Don Marti, web advertising professional
Require large social media companies to notify advertisers when their ads are shown on illegal or policy-violating content.
Problem - Please describe the problem(s) that the proposal would address (please be specific, with supporting data and sources)
Today, social media companies choose to host a disturbing variety of illegal activity. Besides the high-profile political controversies that these large companies are best known for, social media platforms also host other crimes and scams, including unlawful private firearm sales, cryptocurrency scams, and health-related scams.
Social media companies choose to under-invest in enforcing the law, and even their own policies, because they can get away with it. Social media services design their systems with "filter bubbles" to show specific content to specific individuals, and to hide illegal activity from the advertisers whose ads pay for it.
The big picture problems of social media are the subject of much active research and debate. While this is important work in academic and policy circles, the need for research must not be allowed to delay a much-needed remedy for local California businesses.
Although the main victims of these social media practices are users, the overlooked victims are the advertisers who unknowingly pay to sponsor illegal activity. Social media companies sell a secretly adulterated product: ad placements on activity that the advertiser would not choose to sponsor if aware of it.
Advertisers rarely choose to place their advertising alongside illegal activity, and they often pull their ads when they discover such placements. However, social media companies fail to inform advertisers when their ads do sponsor illegal activity, even when the advertisers are supporting content that the social media companies themselves will not associate with their own end-user-facing brands.
It is impractical, or prohibited by terms of service, for advertisers to monitor every piece of content that their ads are associated with on social media sites. Ads and content are associated automatically, and users may see the same ad in association with many different pieces of content.
Advertisers pay the bills for illegal activity because they lack the information they would need to stop doing so. Social media companies already have that information; they need only disclose it.
Solution - Please describe the proposal and how the proposal would address the problem (please be specific, citing existing law if possible)
We propose a new common-sense requirement for large social media companies: When a company takes action to remove content or substantially limit its circulation, the social media company must disclose to the advertisers affected:
- What content was removed
- On what grounds the content was removed
- How long the content stayed up
- How many users were affected, if known
- The amount that the advertiser was charged for their placement on the content
If the illegal content was itself another advertisement, the company should disclose to the advertisers whose ads appeared nearby. For example, if a legitimate mortgage broker's ad ran alongside an ad for a cryptocurrency scam, and the cryptocurrency scam ad is removed, the mortgage broker should be informed.
Disclosure should include violations of laws that apply to the affected users, even if they do not apply to the advertisers. For example, if a California advertiser's ad ends up on a video that may not be displayed in Germany, and the ad is shown to users in Germany before the video is blocked there, then the California advertiser should be notified even if the video remains available to users in California. The advertiser's reputation is affected in the eyes of the audience it is paying to reach.
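As a concrete illustration only, the sketch below (in Python) shows one hypothetical shape that such a notification record could take, mirroring the disclosure fields listed above plus the two cases just described. Every name and value in it is an assumption made for this example, not any platform's actual data model.

```python
# Hypothetical shape of one advertiser notification under this proposal.
# All field names and sample values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RemovalDisclosure:
    advertiser_id: str
    content_id: str                  # what content was removed
    removal_grounds: str             # the law or policy the content violated
    hours_live: float                # how long the content stayed up
    users_affected: Optional[int]    # None if not known
    amount_charged_usd: float        # what the advertiser paid for placement on or near the content
    adjacent_ad_id: Optional[str] = None            # set when the removed item was another ad shown nearby
    restricted_jurisdictions: Tuple[str, ...] = ()  # e.g. ("DE",) if the content was blocked only in Germany

# Example: the mortgage broker whose ad ran alongside a removed cryptocurrency-scam ad.
notice = RemovalDisclosure(
    advertiser_id="broker-001",
    content_id="ad-crypto-999",
    removal_grounds="fraudulent financial offer",
    hours_live=36.0,
    users_affected=4200,
    amount_charged_usd=212.50,
    adjacent_ad_id="ad-broker-123",
)
print(notice)
```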
Cost - Please describe the estimated cost of proposal and identify the entity that would pay for the proposal. If state would pay, please identify a source for the funding and where you would recommend cutting state spending to pay for the proposal.
The costs of this proposal would be paid by the large social media companies. The proposal would apply to a small number of firms: any company that both distributes user-generated content from millions of California users and accepts advertising from thousands of California advertisers.
The cost of compliance would be minimal, as social media companies already collect, store, and report on the information that they would be required to disclose. They use this information for their own internal reporting and to contribute to industry forums. One company showed the extent of this data collection by discussing research on ad adjacency. The companies also already maintain convenient online reporting interfaces for their advertisers.
The proposal could be implemented as simply as adding an HTML table and a text notification to an existing "advertiser dashboard" web application, along with a database query to support it. The actual implementation cost for each affected firm would be under $50,000.
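To illustrate the modest scale of work being claimed here, the following minimal sketch (Python with an in-memory SQLite database, and entirely hypothetical table names, columns, and sample rows) shows the kind of single join query and notification text an advertiser-dashboard team might add. It is a rough illustration of the claim above under assumed schemas, not a description of any company's systems.

```python
# Rough sketch of the disclosure query and notification described above.
# Table names, columns, and sample rows are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE removals (
    content_id TEXT PRIMARY KEY,
    reason TEXT,               -- grounds for removal
    posted_at TEXT,
    removed_at TEXT,
    users_affected INTEGER     -- NULL if unknown
);
CREATE TABLE ad_placements (
    advertiser_id TEXT,
    content_id TEXT,           -- content the ad appeared on or next to
    charge_usd REAL            -- amount billed for this placement
);
""")

# Sample data for the sketch.
conn.execute("INSERT INTO removals VALUES ('c1', 'policy violation', "
             "'2021-03-01', '2021-03-04', 12000)")
conn.execute("INSERT INTO ad_placements VALUES ('adv-42', 'c1', 310.25)")

# One query produces every field the proposal would require the platform to disclose.
rows = conn.execute("""
    SELECT p.advertiser_id,
           r.content_id,
           r.reason,
           julianday(r.removed_at) - julianday(r.posted_at) AS days_up,
           r.users_affected,
           SUM(p.charge_usd) AS amount_charged
    FROM ad_placements AS p
    JOIN removals AS r ON r.content_id = p.content_id
    GROUP BY p.advertiser_id, r.content_id
""").fetchall()

for advertiser_id, content_id, reason, days_up, users, charged in rows:
    # In a real dashboard this would render as a table row or an emailed notice.
    print(f"Notify {advertiser_id}: content {content_id} was removed for '{reason}' "
          f"after {days_up:.0f} days; about {users} users affected; "
          f"${charged:.2f} charged for placement on this content.")
```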
After social media companies begin the required disclosures under this proposal, they might choose to accelerate their existing plans to improve what are now under-resourced moderation and ad review functions. The cost of this follow-on work would depend on a variety of compensation decisions, but it would likely involve hiring technical, policy, and subject matter experts in the state of California.
- Law enforcement. Incentivizing the early removal of illegal material would help prevent social-media-enabled crime that can otherwise become a problem at the local level. (For example, private gun sales on social media can result in weapons becoming available to those who cannot legally own them.)
- Small business. Many local businesses, including retailers and real estate professionals, face a difficult choice: today, they can only avoid sponsoring criminal activity by refraining from advertising on social media entirely, giving up an easy self-serve tool to reach local customers. Disclosure would enable each business decision maker to buy advertising based on their own values and on which content is delivered to their actual customers.
- Reporting protects First Amendment rights of social media users and advertisers. Some advertisers may choose to continue to support controversial material. For example, fans of the rap group Insane Clown Posse were listed as a gang by the National Gang Intelligence Center. This listing could result in policy violations and content deletions affecting those fans on social sites. If an advertiser chooses not to act on notifications and continues to support content related to Insane Clown Posse, no new law should interfere with their ability to do so. Social media companies also sometimes remove or restrict content in error, including content that advertisers would want to continue to support.
- In pandemic times, rapid accountability for social media advertising is good for public health. Much of the material that is eventually removed from social media sites turns out to be either an ordinary scam based on people's health concerns, or active disinformation about a health-related conspiracy theory. When legitimate health care providers are aware of, and able to disconnect from, misleading health content, consumers are less likely to be confused by misleading messages.
- Because this proposal is simple and could take effect quickly, it can help inform other work by advertisers, social media companies, and policymakers. While questions of how best to design, operate, and moderate social media sites are still actively researched, this proposal provides a simple way to keep stakeholders informed on the state of progress.
Organizational Opposition - Please describe the likely organizations that would oppose the proposal.
- Large social media companies, which are currently able to profit from illegal activity using ad revenue from advertisers who are not aware of where their ads appear.
- Foreign disinformation groups, which benefit from under-moderation of large social sites to spread violence and disease in the USA.
- Disclosure will distract social media companies from other law and policy enforcement projects. See the costs section: this proposal requires only a small amount of work, to expose to advertisers a set of data that already exists within social media companies. It would likely be carried out by an advertiser reporting team, not the developers who work on moderation and public interest projects.
- Advertisers who are concerned about the environment in which their ads appear can already use ad transparency tools such as the NYU Ad Observatory. This proposal would complement, not replace, transparency tools, and would remain effective even if social media companies could force such tools to shut down.
- Most advertisers have not demanded these reports yet. Social media crime and disinformation is an issue of growing concern. Advertisers who choose not to act on the reports required by this proposal would not have to.
- Industry standards for defining relationships between ads and content already exist. For example, see: IAB Measurement Guidelines
- Related: RECOMMENDED NEXT STEPS | Stop Hate for Profit: "Provide audit of and refund to advertisers whose ads were shown next to content that was later removed for violations of terms of service."
- Six Constitutional Hurdles for Platform Speech Regulation | Center for Internet and Society