Community Guidelines
COVID-19: Community Guidelines Updates and Protections: As people around the world confront this unprecedented public health emergency, we want to make sure that our Community Guidelines protect people from harmful content and new types of abuse related to COVID-19. We’re working to remove content that has the potential to contribute to real-world harm, including through our policies prohibiting coordination of harm, sale of medical masks and related goods, hate speech, bullying and harassment, and misinformation that contributes to the risk of imminent violence or physical harm. To learn more about our policies on COVID-19 and vaccines, see here.
The Short
We want Instagram to continue to be an authentic and safe place for inspiration and expression. Help us foster this community. Post only your own photos and videos and always follow the law. Respect everyone on Instagram; don’t spam people or post nudity.
The Long
Instagram is a reflection of our diverse community of cultures, ages, and beliefs. We’ve spent a lot of time thinking about the different points of view that create a safe and open environment for everyone.
We created the Community Guidelines so you can help us foster and protect this amazing community. By using Instagram, you agree to these guidelines and our Terms of Use. We’re committed to these guidelines and we hope you are too. Overstepping these boundaries may result in deleted content, disabled accounts, or other restrictions.
In some cases, we allow content for public awareness that would otherwise go against our Community Guidelines if it is newsworthy and in the public interest. We do this only after weighing the public interest value against the risk of harm, and we look to international human rights standards to make these judgments.
- Share only photos and videos that you’ve taken or have the right to share.
As always, you own the content you post on Instagram. Remember to post authentic content, and don’t post anything you’ve copied or collected from the Internet that you don’t have the right to post. Learn more about intellectual property rights.
- Post photos and videos that are appropriate for a diverse audience.
We know that there are times when people might want to share nude images that are artistic or creative in nature, but for a variety of reasons, we don’t allow nudity on Instagram. This includes photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks. It also includes some photos of female nipples, but photos in the context of breastfeeding, birth giving and after-birth moments, health-related situations (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest are allowed. Nudity in photos of paintings and sculptures is OK, too.
People like to share photos or videos of their children. For safety reasons, there are times when we may remove images that show nude or partially-nude children. Even when this content is shared with good intentions, it could be used by others in unanticipated ways. You can learn more on our Tips for Parents page.
- Foster meaningful and genuine interactions.
Help us stay spam-free by not artificially collecting likes, followers, or shares, posting repetitive comments or content, or repeatedly contacting people for commercial purposes without their consent. Don’t offer money or giveaways of money in exchange for likes, followers, comments or other engagement. Don’t post content that engages in, promotes, encourages, facilitates, or admits to the offering, solicitation or trade of fake and misleading user reviews or ratings.
You don’t have to use your real name on Instagram, but we do require Instagram users to provide us with accurate and up to date information. Don't impersonate others and don't create accounts for the purpose of violating our guidelines or misleading others.
- Follow the law.
Instagram is not a place to support or praise terrorism, organized crime, or hate groups. Offering sexual services, buying or selling firearms, alcohol, and tobacco products between private individuals, and buying or selling non-medical or pharmaceutical drugs are also not allowed. We also remove content that attempts to trade, coordinate the trade of, donate, gift, or ask for non-medical drugs, as well as content that either admits to personal use (unless in the recovery context) or coordinates or promotes the use of non-medical drugs. Instagram also prohibits the sale of live animals between private individuals, though brick-and-mortar stores may offer these sales. No one may coordinate poaching or selling of endangered species or their parts.
Remember to always follow the law when offering to sell or buy other regulated goods. Accounts promoting online gambling, online real money games of skill or online lotteries must get our prior written permission before using any of our products.
We have zero tolerance when it comes to sharing sexual content involving minors or threatening to post intimate images of others.
- Respect other members of the Instagram community.
We want to foster a positive, diverse community. We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages. We do generally allow stronger conversation around people who are featured in the news or have a large public audience due to their profession or chosen activities.
It's never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases. When hate speech is being shared to challenge it or to raise awareness, we may allow it. In those instances, we ask that you express your intent clearly.
Serious threats of harm to public and personal safety aren't allowed. This includes specific threats of physical harm as well as threats of theft, vandalism, and other financial harm. We carefully review reports of threats and consider many things when determining whether a threat is credible.
- Maintain our supportive environment by not glorifying self-injury.
The Instagram community cares for each other, and is often a place where people facing difficult issues such as eating disorders, cutting, or other kinds of self-injury come together to create awareness or find support. We try to do our part by providing education in the app and adding information in the Help Center so people can get the help they need.
Encouraging or urging people to embrace self-injury is counter to this environment of support, and we’ll remove it or disable accounts if it’s reported to us. We may also remove content identifying victims or survivors of self-injury if the content targets them for attack or humor.
- Be thoughtful when posting newsworthy events.
We understand that many people use Instagram to share important and newsworthy events. Some of these issues can involve graphic images. Because so many different people and age groups use Instagram, we may remove videos of intense, graphic violence to make sure Instagram stays appropriate for everyone.
We understand that people often share this kind of content to condemn, raise awareness or educate. If you do share content for these reasons, we encourage you to caption your photo with a warning about graphic violence. Sharing graphic images for sadistic pleasure or to glorify violence is never allowed.
Help us keep the community strong:
- Each of us is an important part of the Instagram community. If you see something that you think may violate our guidelines, please help us by using our built-in reporting option. We have a global team that reviews these reports and works as quickly as possible to remove content that doesn’t meet our guidelines. Even if you or someone you know doesn’t have an Instagram account, you can still file a report. When you complete the report, try to provide as much information as possible, such as links, usernames, and descriptions of the content, so we can find and review it quickly. We may remove entire posts if either the imagery or associated captions violate our guidelines.
- You may find content you don’t like but that doesn’t violate the Community Guidelines. If that happens, you can unfollow or block the person who posted it. If there's something you don't like in a comment on one of your posts, you can delete that comment.
- Many disputes and misunderstandings can be resolved directly between members of the community. If one of your photos or videos was posted by someone else, you could try commenting on the post and asking the person to take it down. If that doesn’t work, you can file a copyright report. If you believe someone is violating your trademark, you can file a trademark report. Don't target the person who posted it by posting screenshots and drawing attention to the situation because that may be classified as harassment.
- We may work with law enforcement, including when we believe that there’s risk of physical harm or threat to public safety.
For more information, check out our Help Center and Terms of Use.
Thank you for helping us create one of the best communities in the world,
The Instagram Team
Adult Nudity and Sexual Activity
Policy Rationale
We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content. Additionally, we default to removing sexual imagery to prevent the sharing of non-consensual or underage content. Restrictions on the display of sexual activity also apply to digitally created content unless it is posted for educational, humorous, or satirical purposes.
Our nudity policies have become more nuanced over time. We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons.
Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding and photos of post-mastectomy scarring. For images depicting visible genitalia or the anus in the context of birth and after-birth moments or health-related situations we include a warning label so that people are aware that the content may be sensitive. We also allow photographs of paintings, sculptures, and other art that depicts nude figures.
Do not post:
- Imagery of real nude adults, if it depicts:
- Visible genitalia, except in the context of birth giving and after-birth moments, or in medical or health contexts (for example, gender confirmation surgery, examination for cancer or disease prevention/assessment).
- Visible anus and/or fully nude close-ups of buttocks unless photoshopped on a public figure.
- Uncovered female nipples except in the context of breastfeeding, birth giving and after-birth moments, medical or health context (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest.
- Imagery of sexual activity, including:
- Explicit sexual activity and stimulation
- Explicit sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person's genitals or anus, where at least one person's genitals are nude.
- Explicit stimulation of genitalia or anus, defined as stimulating genitalia or anus or inserting objects, including sex toys, into genitalia or anus, where the contact with the genitalia or anus is directly visible.
- Implied sexual activity and stimulation, except in cases of medical or health context, advertisements, and recognized fictional images or with indicators of fiction:
- Implied sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person's genitals or anus, when the genitalia and/or the activity or contact is not directly visible.
- Implied stimulation of genitalia or anus, defined as stimulating genitalia or anus or inserting objects, including sex toys, into or above genitalia or anus, when the genitalia and/or the activity or contact is not directly visible.
- Other activities, except in cases of medical or health context, advertisements, and recognized fictional images or with indicators of fiction, including but not limited to:
- Erections
- Presence of by-products of sexual activity.
- Sex toys placed upon or inserted into mouth.
- Stimulation of naked human nipples.
- Squeezing female breasts, defined as a grabbing motion with curved fingers that shows both marks and clear shape change of the breasts. We allow squeezing in breastfeeding contexts.
- Fetish content that involves:
- Acts that are likely to lead to the death of a person or animal.
- Dismemberment.
- Cannibalism.
- Feces, urine, spit, snot, menstruation or vomit.
- Bestiality.
- Adult sexual activity in digital art, except when posted in an educational or scientific context, or when it meets one of the criteria below and is shown only to individuals 18 years and older.
- Explicit sexual activity and stimulation
- Extended audio of sexual activity
For the following content, we include a label so that people are aware the content may be sensitive:
Imagery of visible adult male and female genitalia, fully nude close-ups of buttocks or anus, or implied/other sexual activity, when shared in a medical or health context, which can include, for example:
- Birth-giving and after-birth giving moments, including both natural vaginal delivery and caesarean section
- Gender confirmation surgery
- Genitalia self-examination for cancer or disease prevention/assessment
We only show this content to individuals 18 and older:
- Real world art that depicts implied or explicit sexual activity.
- Imagery depicting bestiality in real-world art provided it is shared neutrally or in condemnation and the people or animals depicted are not real.
- Implied adult sexual activity in advertisements, recognized fictional images or with indicators of fiction.
- Adult sexual activity in digital art, where:
- The sexual activity (intercourse or other sexual activities) isn’t explicit and not part of the above specified fetish content.
- The content was posted in a satirical or humorous context.
- Only body shapes or contours are visible.
Child Sexual Exploitation, Abuse and Nudity
Policy Rationale
We do not allow content or activity that sexually exploits or endangers children. When we become aware of apparent child exploitation, we report it to the National Center for Missing and Exploited Children (NCMEC), in compliance with applicable law. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images.
We also work with external experts, including the Facebook Safety Advisory Board, to discuss and improve our policies and enforcement around online safety issues, especially with regard to children. Learn more about the technology we’re using to fight against child exploitation.
Do not post:
Child sexual exploitation
Content or activity that threatens, depicts, praises, supports, provides instructions for, makes statements of intent, admits participation in or shares links of the sexual exploitation of children (including real minors, toddlers or babies, or non-real depictions with a human likeness, such as in art, AI-generated content, fictional characters, dolls, etc.). This includes but is not limited to:
- Sexual intercourse
- Explicit sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person's genitals or anus, where at least one person's genitals are nude.
- Implied sexual intercourse or oral sex, including when contact is imminent or not directly visible.
- Stimulation of genitals or anus, including when activity is imminent or not directly visible.
- Presence of by-products of sexual activity.
- Any of the above involving an animal.
- Children with sexual elements, including but not limited to:
- Restraints.
- Focus on genitals.
- Presence of aroused adult.
- Presence of sex toys or use of any object for sexual stimulation or gratification.
- Sexualised costume.
- Stripping.
- Staged environment (for example, on a bed) or professionally shot (quality/focus/angles).
- Open-mouth kissing.
- Content of children in a sexual fetish context.
- Content that supports, promotes, advocates or encourages participation in pedophilia unless it is discussed neutrally in an academic or verified health context.
- Content that identifies or mocks alleged victims of child sexual exploitation by name or image.
Solicitation
Content that solicits sexual content or activity depicting or involving children, defined as:
- Child Sexual Abuse Material (CSAM)
- Nude imagery of real or non-real children
- Sexualized imagery of real or non-real children
Content that solicits sexual encounters with children
Inappropriate interactions with children
Content that constitutes or facilitates inappropriate interactions with children, such as:
- Arranging or planning real-world sexual encounters with children
- Purposefully exposing children to sexually explicit language or sexual material
- Engaging in implicitly sexual conversations in private messages with children
- Obtaining or requesting sexual material from children in private messages
Exploitative intimate imagery and sextortion
Content that attempts to exploit real children by:
- Coercing money, favors or intimate imagery with threats to expose intimate imagery or information.
- Sharing, threatening or stating an intent to share private sexual conversations or intimate imagery.
Sexualization of children
- Content (including photos, videos, real-world art, digital content, and verbal depictions) that sexualizes real or non-real children.
- Groups, Pages and profiles dedicated to sexualizing real or non-real children.
Child nudity
Content that depicts real or non-real child nudity where nudity is defined as:
- Close-ups of real or non-real children’s genitalia
- Real or non-real nude toddlers, showing:
- Visible genitalia, even when covered or obscured by transparent clothing.
- Visible anus and/or fully nude close-up of buttocks.
- Real or non-real nude minors, showing:
- Visible genitalia (including genitalia obscured only by pubic hair or transparent clothing)
- Visible anus and/or fully nude close-up of buttocks.
- Uncovered female nipples.
- No clothes from neck to knee - even if no genitalia or female nipples are showing.
- Unless the non-real imagery is for health or educational purposes or is a depiction of child nudity in real-world art
Non-sexual child abuse
Imagery that depicts real or non-real non-sexual child abuse regardless of sharing intent, unless the imagery is from real-world art, cartoons, movies or video games
Content that praises, supports, promotes, advocates for, provides instructions for or encourages participation in non-sexual child abuse.
For the following content, we include a warning screen so that people are aware the content may be disturbing and limit the ability to view the content to adults, ages eighteen and older:
- Videos or photos that depict police officers or military personnel committing non-sexual child abuse.
- Imagery of non-sexual child abuse, when law enforcement, child protection agencies, or trusted safety partners request that we leave the content on the platform for the express purpose of bringing a child back to safety.
For the following content, we include a sensitivity screen so that people are aware the content may be upsetting to some:
- Videos or photos of violent immersion of a child in water in the context of religious rituals.
For the following Community Standards, we require additional information and/or context to enforce:
For the following content, we include a warning label so that people are aware that the content may be sensitive:
- Imagery posted by a news agency that depicts child nudity in the context of famine, genocide, war crimes, or crimes against humanity, unless accompanied by a violating caption or shared in a violating context, in which case the content is removed.
We may also remove imagery depicting the aftermath of non-sexual child abuse when reported by news media partners, NGOs or other trusted safety partners.
Spam
Policy Rationale
We work hard to limit the spread of spam because we do not want to allow content that is designed to deceive, or that attempts to mislead users, in order to increase viewership. This content creates a negative user experience, detracts from people's ability to engage authentically in online communities and can threaten the security, stability and usability of our services. We also aim to prevent people from abusing our platform, products or features to artificially increase viewership or distribute content en masse for commercial gain.
Do not:
- Post, share, engage with content or create accounts, Groups, Pages, Events or other assets, either manually or automatically, at very high frequencies.
- Attempt to or successfully sell, buy or exchange site privileges, engagement, or product features, such as accounts, admin roles, permission to post, Pages, Groups, likes, etc., except in the case of clearly identified branded content, as defined by our Branded Content Policies.
- Require or claim that users are required to engage with content (e.g. liking, sharing) before they are able to view or interact with promised content.
- Encourage likes, shares, follows, clicks or the use of apps or websites under false pretenses, such as:
- Offering false or non-existent services or functionality (e.g. “Get a ‘Dislike’ button!”)
- Failing to direct to promised content (e.g. “Click here for a discount code at Nordstrom”; false play buttons)
- The deceptive or misleading use of URLs, defined as:
- Cloaking: Presenting different content to Facebook users and Facebook crawlers or tools.
- Misleading content: Content contains a link that promises one type of content but delivers something substantially different.
- Deceptive redirect behavior: Websites that require an action (e.g. captcha, watch ad, click here) in order to view the expected landing page content and the domain name of the URL changes after the required action is complete.
- Like/share-gating: Landing pages that require users to like, share, or otherwise engage with content before gaining access to content.
- Deceptive landing page functionality: Websites that have a misleading user interface, which results in accidental traffic being generated (e.g. pop-ups/unders, clickjacking, etc.).
- Typosquatting: An external website pretends to be a reputable brand or service by using a name, domain or content that features typos, misspellings or other means to impersonate well-known brands using a landing page similar to another, trusted site to mislead visitors (e.g. www.faceb00k.com, www.face_book.com).
- And other behaviors that are substantially similar to the above.
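The typosquatting definition above hinges on domains that imitate a trusted brand through misspellings or look-alike characters (e.g. `www.faceb00k.com`). A common way to illustrate the idea is to normalize look-alike characters and measure string similarity against a watchlist of known domains. The sketch below is purely hypothetical: the watchlist, the substitution table, and the 0.85 threshold are illustrative assumptions, not a description of any real enforcement system.

```python
# Hypothetical typosquat check: normalize common look-alike characters,
# then compare the result against a small watchlist of brand domains.
# All names and thresholds here are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["facebook.com", "instagram.com"]  # assumed watchlist

# Substitutions often seen in typosquatting: digits for letters,
# plus stripped separators ("_", "-").
LOOKALIKES = str.maketrans({"0": "o", "1": "l", "3": "e", "_": "", "-": ""})

def looks_like_typosquat(domain: str, threshold: float = 0.85) -> bool:
    """Return True if `domain` closely resembles a watchlisted brand
    without being that brand's genuine domain."""
    d = domain.lower()
    normalized = d.translate(LOOKALIKES)
    for brand in KNOWN_BRANDS:
        if d == brand:
            return False  # the genuine domain itself, not a squat
        if SequenceMatcher(None, normalized, brand).ratio() >= threshold:
            return True
    return False
```

With these assumptions, `looks_like_typosquat("faceb00k.com")` and `looks_like_typosquat("face_book.com")` both flag as squats, while the genuine `facebook.com` and an unrelated `example.com` do not. Real systems would also weigh signals like landing-page similarity, which string distance alone cannot capture.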
Fraud and Deception
Policy Rationale
In an effort to prevent fraudulent activity on the platform, which can harm people or businesses, we remove content and take action on behaviors intended to defraud users or third parties. We therefore remove content that purposefully intends to deceive, willfully misrepresent or otherwise exploit others for money or property. This includes content that seeks to coordinate or promote these activities using our platform. We allow people to raise awareness of, educate others about and condemn these activities, unless the content contains sensitive information, such as personally identifiable information.
Do not post:
Content that provides instructions on, engages in, promotes, coordinates, encourages, facilitates, recruits for, or admits to the offering or solicitation of any of the following activities:
- Deceiving others to generate a financial or personal benefit to the detriment of a third party or entity through:
- Investment or financial scams:
- Loan scams.
- Advance fee scams.
- Gambling scams.
- Ponzi or pyramid schemes.
- Money or cash flips or money muling.
- Investment scams with promise of high rates of return.
- Inauthentic identity scams:
- Charity scams.
- Romance or impersonation scams.
- Establishment of false businesses or entities.
- Product or rewards scams:
- Grant and benefits scams.
- Tangible, spiritual or illuminati scams.
- Insurance scams, including ghost broking.
- Fake jobs, work from home or get-rich-quick scams.
- Debt relief or credit repair scams.
- Engaging and coordinating with others to fraudulently generate a financial or personal benefit at a loss for a third party, such as people, businesses or organizations, through:
- Fake documents or financial instruments by:
- Creating, selling or buying of:
- Fake or forged documents.
- Fake or counterfeit currency or vouchers.
- Fake or forged educational and professional certificates.
- Money laundering.
- Stolen information, goods, or services by:
- Credit card fraud and goods or property purchases with stolen financial information.
- Trading, selling or buying of:
- Personally Identifiable Information.
- Fake and misleading user reviews or ratings.
- Credentials for subscription services.
- Coupons.
- Sharing, selling, trading, or buying of:
- Future exam papers or answer sheets.
- Betting manipulation (for example, match fixing).
- Manipulation of measuring devices, such as electricity or water meters, in order to bypass their authorized or legal use.
For the following Community Standards, we require additional information and/or context to enforce:
Do not post:
Content that engages in, promotes, encourages, facilitates, or admits to the following activities:
- Bribery.
- Embezzlement.
In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.
Account Integrity and Authentic Identity
Policy Rationale
Authenticity is the cornerstone of our community. We believe that authenticity helps create a community where people are accountable to each other, and to Facebook, in meaningful ways. We want to allow for the range of diverse ways that identity is expressed across our global community, while also preventing impersonation and identity misrepresentation. That is why we require people to create a Facebook account using the name they go by in everyday life. Our authenticity policies are intended to create a safe environment where people can trust and hold one another accountable.
In order to maintain a safe environment and empower free expression, we remove accounts that are harmful to the community, including those that compromise the security of other accounts and our services. We have built a combination of automated and manual systems to block and remove accounts that are used to persistently or egregiously abuse our Community Standards.
Because account level removal is a severe action, whenever possible, we aim to give our community a chance to learn our rules and follow our Community Standards. Penalties, including account disables, are designed to be proportionate to the severity of the violation and the risk of harm posed to the community. Continued violations, despite repeated warnings and restrictions, or violations that pose severe safety risks will lead to an account being disabled.
We do not allow the use of our services and will restrict or disable accounts or other entities (such as pages, groups, and events) if you:
- Severely violate our Community Standards.
- Persistently violate our Community Standards.
- Coordinate as part of a network of accounts or other entities in order to violate or evade our Community Standards.
- Represent Dangerous Individuals or Organizations.
- Create or use an account that demonstrates an intent to violate our Community Standards.
- Create or use an account by scripted or other inauthentic means.
- Create an account, Page, Group or Event to evade our enforcement actions, including creating an account to bypass a restriction or after we have disabled your previous account, Page, Group or Event.
- Create or use an account that deliberately misrepresents your identity in order to mislead or deceive others, or to evade enforcement or violate our other Community Standards. We consider a number of factors when assessing misleading identity misrepresentation, such as:
- Repeated or significant changes to identity details, such as name or age
- Misleading profile information, such as bio details and profile location
- Using stock imagery or stolen photos
- Other related account activity
- Impersonate others by:
- Using their photos with the explicit aim to deceive others.
- Creating an account assuming to be or speak for another person or entity.
- Creating a Page assuming to be or speak for another person or entity for whom the user is not authorized to do so.
- Are under 13 years old.
- Are a convicted sex offender.
- Are prohibited from receiving our products, services or software under applicable laws.
In certain cases, such as those outlined below, we will seek further information about an account before taking actions ranging from temporarily restricting accounts to permanently disabling them.
- Accounts misrepresenting their identity (Facebook and Messenger only) by:
- Creating an account using a name that is not the authentic name you go by in everyday life.
- Using an inherently violating name, containing slurs or any other violations of the Community Standards.
- Providing a false date of birth.
- Creating a single account that represents or is used by more than one person.
- Maintaining multiple accounts as a single user.
- Compromised accounts.
- Empty accounts with prolonged dormancy.
Inauthentic Behavior
Policy Rationale
In line with our commitment to authenticity, we do not allow people to misrepresent themselves on Facebook, use fake accounts, artificially boost the popularity of content or engage in behaviors designed to enable other violations under our Community Standards. This policy is intended to protect the security of user accounts and our services, and create a space where people can trust the people and communities they interact with.
Do not:
- Use multiple Facebook accounts or share accounts between multiple people.
- Misuse Facebook or Instagram reporting systems to harass others.
- Conceal a Page’s purpose by misleading users about the ownership or control of that Page.
- Engage in or claim to engage in inauthentic behavior, which is defined as the use of Facebook or Instagram assets (accounts, Pages, Groups, or Events), to mislead people or Facebook:
- About the identity, purpose, or origin of the entity that they represent.
- About the popularity of Facebook or Instagram content or assets.
- About the purpose of an audience or community.
- About the source or origin of content.
- To evade enforcement under our Community Standards.
For the following Community Standards, we require additional information and/or context to enforce:
- We do not allow entities to engage in, or claim to engage in, Coordinated Inauthentic Behavior, defined as the use of multiple Facebook or Instagram assets, working in concert to engage in Inauthentic Behavior (as defined above), where the use of fake accounts is central to the operation.
- We do not allow entities to engage in, or claim to engage in foreign or government interference, which is Coordinated Inauthentic Behavior conducted on behalf of a foreign or government actor.
- We do not allow governments that have instituted sustained blocks of social media to use their official departments, agencies, and embassies to deny the use of force or violent events in the context of an attack against the territorial integrity of another state in violation of Article 2(4) of the UN charter.
Dangerous Organizations and Individuals
Policy Rationale
In an effort to prevent and disrupt real-world harm, we do not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Meta. We assess these entities based on their behavior both online and offline, most significantly, their ties to violence. Under this policy, we designate individuals, organizations, and networks of people. These designations are divided into three tiers that indicate the level of content enforcement, with Tier 1 resulting in the most extensive enforcement because we believe these entities have the most direct ties to offline harm.
Tier 1 focuses on entities that engage in serious offline harms - including organizing or advocating for violence against civilians, repeatedly dehumanizing or advocating for harm against people based on protected characteristics, or engaging in systematic criminal operations. Tier 1 includes hate organizations; criminal organizations, including those designated by the United States government as Specially Designated Narcotics Trafficking Kingpins (SDNTKs); and terrorist organizations, including entities and individuals designated by the United States government as Foreign Terrorist Organizations (FTOs) or Specially Designated Global Terrorists (SDGTs). We remove praise, substantive support, and representation of Tier 1 entities as well as their leaders, founders, or prominent members.
In addition, we do not allow content that praises, substantively supports, or represents events that Meta designates as violating violent events - including terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders, or hate crimes. Nor do we allow (1) praise, substantive support, or representation of the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims. We also remove content that praises, substantively supports or represents ideologies that promote hate, such as nazism and white supremacy.
Tier 2 focuses on entities that engage in violence against state or military actors but do not generally target civilians -- what we call “Violent Non-State Actors.” We remove all substantive support and representation of these entities, their leaders, and their prominent members. We remove any praise of these groups’ violent activities.
Tier 3 focuses on entities that may repeatedly engage in violations of our Hate Speech or Dangerous Organizations policies on-or-off the platform or demonstrate strong intent to engage in offline violence in the near future, but have not necessarily engaged in violence to date or advocated for violence against others based on their protected characteristics. This includes Militarized Social Movements, Violence-Inducing Conspiracy Networks, and individuals and groups banned for promoting hatred. Tier 3 entities may not have a presence or coordinate on our platforms.
We recognize that users may share content that includes references to designated dangerous organizations and individuals in the context of social and political discourse. This includes content reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities.
News reporting includes information that is shared to raise awareness about local and global events in which designated dangerous organizations and individuals are involved.
- E.g. “Breaking News: Al-Shabab claimed responsibility for the attack in Somalia”
- E.g. “Timeline and expert analysis: How the shooting at the Buffalo supermarket unfolded and what the perpetrator said in court”
Neutral discussion includes factual statements, commentary, questions, and other information that do not express positive judgment about the designated dangerous organization or individual and their behavior.
- E.g. “Al Qaeda represents less threat than ISIS given the lack of leadership and finance”
- E.g. “Anders Breivik is one example of how complex the radicalization process can be”
Condemnation includes disapproval, disgust, rejection, criticism, mockery, and other negative expressions about a designated dangerous organization or individual and their behavior.
- E.g. “I feel disgusted by the crime of Salvador Ramos. The judge’s words resonated so much with me. He should get no mercy from the court”
- E.g. “Hitler’s crimes shall never be forgotten ever. These were some of the darkest moments in history”
Our policies are designed to allow room for these types of discussions while simultaneously limiting risks of potential offline harm. We thus require people to clearly indicate their intent when creating or sharing such content. If a user's intention is ambiguous or unclear, we default to removing content.
In line with international human rights law, our policies allow discussions about the human rights of designated individuals or members of designated dangerous entities, unless the content includes other praise, substantive support, or representation of designated entities or other policy violations, such as incitement to violence.
Please see our Corporate Human Rights Policy for more information about our commitment to internationally recognized human rights.
We Remove:
We remove praise, substantive support and representation of various dangerous organizations and individuals. These concepts apply to the organizations themselves, their activities, and their members. These concepts do not proscribe peaceful advocacy for particular political outcomes.
Praise, defined as any of the below:
- Speaking positively about a designated entity or event;
- E.g., “The fighters for the Islamic State are really brave!”
- Giving a designated entity or event a sense of achievement;
- E.g., “Timothy McVeigh is a martyr.”
- Legitimizing the cause of a designated entity by making claims that their hateful, violent, or criminal conduct is legally, morally, or otherwise justified or acceptable;
- E.g., “Hitler did nothing wrong.”
- Aligning oneself ideologically with a designated entity or event.
- E.g., “I stand with Brenton Tarrant.”
We remove Praise of Tier 1 entities and designated events. We will also remove praise of violence carried out by Tier 2 entities.
Substantive Support, defined as any of the below:
- Any act which improves the financial status of a designated entity - including funnelling money towards, or away from a designated entity;
- E.g., “Donate to the KKK!”
- Any act which provides material aid to a designated entity or event;
- E.g., “If you want to send care packages to the Sinaloa Cartel, use this address:”
- Putting out a call to action on behalf of a designated entity or event;
- E.g., “Contact the Atomwaffen Division - (XXX) XXX-XXXX”
- Recruiting on behalf of a designated entity or event;
- E.g., “If you want to fight for the Caliphate, DM me”
- Channeling information or resources, including official communications, on behalf of a designated entity or event
- E.g., Directly quoting a designated entity without caption that condemns, neutrally discusses, or is a part of news reporting.
We remove Substantive Support of Tier 1 and Tier 2 entities and designated events.
Representation, defined as any of the below:
- Stating that you are a member of a designated entity, or are a designated entity;
- E.g., “I am a grand dragon of the KKK.”
- Creating a Page, Profile, Event, Group, or other Facebook entity that is or purports to be owned by a Designated Entity or run on their behalf, or is or purports to be a designated event.
- E.g., A Page named “American Nazi Party.”
We remove Representation of Tier 1 and 2 Designated Organizations, Hate-Banned Entities, and designated events.
Types and Tiers of Dangerous Organizations
Tier 1: Terrorism, organized hate, large-scale criminal activity, attempted multiple-victim violence, multiple victim violence, serial murders, and violating violent events
We do not allow individuals or organizations involved in organized crime, including those designated by the United States government as specially designated narcotics trafficking kingpins (SDNTKs); hate; or terrorism, including entities designated by the United States government as Foreign Terrorist Organizations (FTOs) or Specially Designated Global Terrorists (SDGTs), to have a presence on the platform. We also don't allow other people to represent these entities. We do not allow leaders or prominent members of these organizations to have a presence on the platform, symbols that represent them to be used on the platform, or content that praises them or their acts. In addition, we remove any coordination of substantive support for these individuals and organizations.
We do not allow content that praises, substantively supports, or represents events that Meta designates as terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders, hate crimes or violating violent events. Nor do we allow (1) praise, substantive support, or representation of the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims.
We also do not allow Praise, Substantive Support, or Representation of designated hateful ideologies.
Terrorist organizations and individuals, defined as a non-state actor that:
- Engages in, advocates, or lends substantial support to purposive and planned acts of violence,
- Which causes or attempts to cause death, injury or serious harm to civilians, or any other person not taking direct part in the hostilities in a situation of armed conflict, and/or significant damage to property linked to death, serious injury or serious harm to civilians
- With the intent to coerce, intimidate and/or influence a civilian population, government, or international organization
- In order to achieve a political, religious, or ideological aim.
Hate Entity, defined as an organization or individual that spreads and encourages hate against others based on their protected characteristics. The entity’s activities are characterized by at least some of the following behaviors:
- Violence, threatening rhetoric, or dangerous forms of harassment targeting people based on their protected characteristics;
- Repeated use of hate speech;
- Representation of Hate Ideologies or other designated Hate Entities, and/or
- Glorification or substantive support of other designated Hate Entities or Hate Ideologies.
Criminal Organizations, defined as an association of three or more people that:
- is united under a name, color(s), hand gesture(s) or recognized indicia; and
- has engaged in or threatens to engage in criminal activity such as homicide, drug trafficking, or kidnapping.
Multiple-Victim Violence and Serial Murders
- We consider an event to be multiple-victim violence or attempted multiple-victim violence if it results in three or more casualties in one incident, defined as deaths or serious injuries. Any individual who has committed such an attack is considered a perpetrator or an attempted perpetrator of multiple-victim violence.
- We consider any individual who has committed two or more murders over multiple incidents or locations a serial murderer.
Hateful Ideologies
- While our designations of organizations and individuals focus on behavior, we also recognize that there are certain ideologies and beliefs that are inherently tied to violence and attempts to organize people around calls for violence or exclusion of others based on their protected characteristics. In these cases, we designate the ideology itself and remove content that supports this ideology from our platform. These ideologies include:
- Nazism
- White Supremacy
- White Nationalism
- White Separatism
- We remove explicit Praise, Substantive Support, and Representation of these ideologies, and remove individuals and organizations that ascribe to one or more of these hateful ideologies.
Tier 2: Violent Non-State Actors
Organizations and individuals designated by Meta as Violent Non-state Actors are not allowed to have a presence on Facebook, or have a presence maintained by others on their behalf. As these communities are actively engaged in violence, substantive support of these entities is similarly not allowed. We will also remove praise of violence carried out by these entities.
Violent Non-State Actors, defined as any non-state actor that:
- engages in purposive and planned acts of violence primarily against a government military or other armed communities; and
- that causes or attempts to
- cause death to persons taking direct part in hostilities in an armed conflict, and/or
- deprive communities of access to vital infrastructure and natural resources, and/or bring significant damage to property, linked to death, serious injury or serious harm to civilians
Tier 3: Militarized Social Movements, Violence-Inducing Conspiracy Networks, and Hate Banned Entities
Pages, Communities, Events, and Profiles or other Facebook entities that are - or claim to be - maintained by, or on behalf of, Militarized Social Movements and Violence-Inducing Conspiracy Networks are prohibited. Admins of these pages, communities and events will also be removed.
Click here to read more about how we address movements and organizations tied to violence.
We do not allow Representation of Organizations and individuals designated by Meta as Hate-Banned Entities.
Militarized Social Movements (MSMs), which include:
- Militia Communities, defined as non-state actors that use weapons as a part of their training, communication, or presence; and are structured or operate as unofficial military or security forces and:
- Coordinate in preparation for violence or civil war; or
- Distribute information about the tactical use of weapons for combat; or
- Engage in militarized tactical coordination in a present or future armed civil conflict or civil war.
- Communities supporting violent acts amid protests, defined as non-state actors that repeatedly:
- Coordinate, promote, admit to or engage in:
- Acts of street violence against civilians or law enforcement; or
- Arson, Looting, or other destruction of property; or
- Threaten to violently disrupt an election process; or
- Promote bringing weapons to a location when the stated intent is to intimidate people amid a protest.
Violence-Inducing Conspiracy Networks (VICNs), defined as a non-state actor that:
- Organizes under a name, sign, mission statement, or symbol; and
- Promotes theories that attribute violent or dehumanizing behavior to people or organizations and that have been debunked by credible sources; and
- Has inspired multiple incidents of real-world violence by adherents motivated by the desire to draw attention to or redress the supposed harms promoted by these debunked theories.
Hate-Banned Entities, defined as entities that engage in repeated hateful conduct or rhetoric, but do not rise to the level of a Tier 1 entity because they have not engaged in or explicitly advocated for violence, or because they lack sufficient connections to previously designated organizations or figures.
For the following Community Standards, we require additional information and/or context to enforce:
- In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.
Sexual Solicitation
Policy Rationale
As noted in Section 8 of our Community Standards (Adult Sexual Exploitation), people use Facebook to discuss and draw attention to sexual violence and exploitation. We recognize the importance of and allow for this discussion. We also allow for the discussion of sex worker rights advocacy and sex work regulation. We draw the line, however, when content facilitates, encourages or coordinates sexual encounters or commercial sexual services between adults. We do this to avoid facilitating transactions that may involve trafficking, coercion and non-consensual sexual acts.
We also restrict sexually-explicit language that may lead to sexual solicitation because some audiences within our global community may be sensitive to this type of content, and it may impede the ability for people to connect with their friends and the broader community.
Do not post:
Content that offers or asks for adult commercial services, such as requesting, offering or asking for rates for escort service and paid sexual fetish or domination services. (Content that recruits or offers other people for third-party commercial sex work is separately considered under the Human Exploitation policy).
Attempted coordination of, or recruitment for, adult sexual activities, except when promoting an event or venue, including but not limited to:
- Filmed sexual activities.
- Pornographic activities, strip club shows, live sex performances or erotic dances.
- Sexual, erotic or tantric massages.
Explicit sexual solicitation, including but not limited to offering or asking for:
- Sex or sexual partners (including partners who share fetish or sexual interests).
- Sex chat or conversations.
- Nude photos/videos/imagery/sexual fetish items.
- Sexual slang terms.
We allow expressing desire for sexual activity, promoting sex education, discussing sexual practices or experiences, or offering classes or programs that teach techniques or discuss sex.
Content that is implicitly or indirectly offering or asking for sexual solicitation and meets both of the following criteria. If both criteria are not met, it is not deemed to be violating. For example, if content is a hand-drawn image depicting sexual activity but does not ask or offer sexual solicitation, it is not violating:
- Criterion 1: Offer or ask
- Content that implicitly or indirectly (typically through providing a method of contact) offers or asks for sexual solicitation.
- Criterion 2: Suggestive elements
- Content that makes the aforementioned offer or ask using one or more of the following sexually suggestive elements:
- Regional sexualized slang,
- Mentions or depictions of sexual activity such as sexual roles, sex positions, fetish scenarios, state of arousal, or acts of sexual intercourse or activity (e.g. sexual penetration or self-pleasuring), or commonly used sexual emojis,
- Including content (hand-drawn, digital or real-world art) that depicts sexual activity as defined in the Adult Nudity and Sexual Activity policy,
- Poses,
- Audio of sexual activity or other content that violates the Adult Nudity and Sexual Activity policy
An offer or ask for pornographic material (including, but not limited to, sharing of links to external pornographic websites)
Sexually-explicit language that goes into graphic detail beyond mere reference to:
- A state of sexual arousal (e.g. wetness or erection) or
- An act of sexual intercourse (e.g. sexual penetration, self-pleasuring or exercising fetish scenarios).
- Except for content shared in a humorous, satirical, or educational context, as a sexual metaphor or as sexual cursing.
For the following Community Standards, we require additional information and/or context to enforce:
- In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.
Restricted Goods and Services
Policy Rationale
To encourage safety and deter potentially harmful activities, we prohibit attempts by individuals, manufacturers, and retailers to purchase, sell, raffle, gift, transfer or trade certain goods and services on our platform. We do not tolerate the exchange or sale of any drugs that may result in substance abuse covered under our policies below. Brick-and-mortar and online retailers may promote firearms, alcohol, and tobacco items available for sale off of our services; however, we restrict visibility of this content for minors. We allow discussions about the sale of these goods in stores or by online retailers, as well as advocating for changes to regulations of goods and services covered in this policy.
Do not post:
Firearms
Content that:
- Attempts to buy, sell, or trade firearms, firearm parts, ammunition, explosives, or lethal enhancements except when posted by a Page, Group or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites, brands or government agencies (e.g. police department, fire department), or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
- Attempts to donate or gift firearms, firearm parts, ammunition, explosives, or lethal enhancements except when posted in the following contexts:
- Donating, trading in or buying back firearms and ammunition by a Page, Group or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites, brands or government agencies, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
- An auction or raffle of firearms by legitimate brick-and-mortar entities, including retail businesses, government-affiliated organizations or non-profits, or private individuals affiliated with or sponsored by legitimate brick-and-mortar entities.
- Asks for firearms, firearm parts, ammunition, explosives, or lethal enhancements
- Sells, gifts, exchanges, transfers, coordinates, promotes (by which we mean speaks positively about, encourages the use of) or provides access to 3D printing or computer-aided manufacturing instructions for firearms or firearms parts regardless of context or poster.
- Attempts to buy, sell, or trade machine gun conversion devices
Non-medical drugs (drugs or substances that are not being used for an intended medical purpose or are used to achieve a high; this includes precursor chemicals or substances that are used for the production of these drugs.)
Content that:
- Attempts to buy, sell, trade, coordinate the trade of, donate, gift or asks for non-medical drugs.
- Admits to buying, trading or coordinating the trade of non-medical drugs by the poster of the content, either by themselves or through others.
- Admits to personal use without acknowledgment of or reference to recovery, treatment, or other assistance to combat usage. This content may not speak positively about, encourage use of, coordinate or provide instructions to make or use non-medical drugs.
- Coordinates or promotes (by which we mean speaks positively about, encourages the use of, or provides instructions to use or make) non-medical drugs.
Pharmaceutical drugs (drugs that require a prescription or medical professionals to administer)
Content that:
- Attempts to buy, sell or trade pharmaceutical drugs except when:
- Listing the price of vaccines in an explicit education or discussion context.
- Offering delivery when posted by legitimate healthcare e-commerce businesses.
- Attempts to donate or gift pharmaceutical drugs
- Asks for pharmaceutical drugs except when content discusses the affordability, accessibility or efficacy of pharmaceutical drugs in a medical context
Marijuana
Content that attempts to buy, sell, trade, donate, or gift, or asks for marijuana.
Endangered species (wildlife and plants):
Content that:
- Attempts to buy, sell, trade, donate, or gift, or asks for endangered species or their parts.
- Admits to poaching, buying or trading of endangered species or their parts, committed by the poster of the content either by themselves or through their associates. This does not include depictions of poaching by strangers.
- Depicts poaching of endangered species or their parts committed by the poster of the content by themselves or their associates.
- Shows coordination or promotion (by which we mean speaks positively about, encourages the poaching of, or provides instructions to use or make products from endangered species or their parts)
Live non-endangered animals excluding livestock
- Content that attempts to buy, sell or trade live non-endangered animals except when:
- Posted by a Page, Group or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, legitimate websites, brands, or rehoming shelters, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
- Posted in the context of donating or rehoming live non-endangered animals, including rehoming fees for peer-to-peer adoptions, selling an animal for a religious offering, or offering a reward for lost pets.
Human blood
- Content that attempts to buy, sell or trade human blood.
- Content that asks for human blood except for a donation or gift.
Alcohol / tobacco
Content that:
- Attempts to buy, sell or trade alcohol or tobacco except when:
- Posted by a Page, Group, or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites or brands, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
- The content refers to alcohol/tobacco that will be exchanged or consumed on location at an event, restaurant, bar, party, and so on.
- Attempts to donate or gift alcohol or tobacco except when posted by a Page, Group, or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites or brands, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
- Asks for alcohol or tobacco
Weight loss products
- Content about weight loss that contains a miracle claim and attempts to buy, sell, trade, donate or gift weight loss products.
Historical artifacts
- Content that attempts to buy, sell, trade, donate, or gift, or asks for historical artifacts.
Entheogens
- Content that attempts to buy, sell, trade, donate, or gift, or asks for entheogens.
- Note: Debating or advocating for the legality or discussing scientific or medical merits of entheogens is allowed.
Hazardous Goods and Materials
- Content that attempts to buy, sell, trade, donate, or gift, or asks for hazardous goods and materials.
Except when any of the above occurs in a fictional or documentary context
For the following content, we restrict visibility to adults 21 years of age and older:
Firearms
- Content posted by or promoting legitimate brick-and-mortar entities, including retail businesses, websites, brands, or government agencies, which attempts to buy, sell, trade, donate or gift (including in the context of an auction or a raffle) firearms, firearm parts, ammunition, explosives, or lethal enhancements.
For the following content, we restrict visibility to adults 18 years of age and older:
Alcohol/tobacco
- Content posted by or promoting legitimate brick-and-mortar entities, including retail businesses, websites or brands, which attempts to buy, sell, trade, donate or gift alcohol or tobacco products.
Bladed weapons
- Content that attempts to buy, sell, trade, donate or gift bladed weapons.
Weight loss products and potentially dangerous cosmetic procedures
Content that:
- Attempts to buy, sell, trade, donate or gift weight loss products or potentially dangerous cosmetic procedures.
- Admits to or depicts using a weight loss product or potentially dangerous cosmetic procedures, except when admitting to use in a disapproval context.
- Shows coordination or promotion (by which we mean speaks positively, encourages the use of or provides instructions to use or make a diet product or perform dangerous cosmetic procedures).
Sex toys and sexual enhancement products
- Content that attempts to buy, sell, trade, donate or gift sex toys and sexual enhancement products
Real money gambling
- Content that attempts to sell or promote online gaming and gambling services where anything of monetary value (including cash or digital/virtual currencies, e.g. bitcoin) is required to play and anything of monetary value forms part of the prize.
Entheogens
- Content that shows admission to personal use of, coordinates or promotes (by which we mean speaks positively about), or encourages the use of entheogens.
- Except when any of the above occurs in a fictional or documentary context.
For the following Community Standards, we require additional information and/or context to enforce:
- In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.
Child Sexual Exploitation, Abuse and Nudity
Policy Rationale
We do not allow content or activity that sexually exploits or endangers children. When we become aware of apparent child exploitation, we report it to the National Center for Missing and Exploited Children (NCMEC), in compliance with applicable law. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images.
We also work with external experts, including the Facebook Safety Advisory Board, to discuss and improve our policies and enforcement around online safety issues, especially with regard to children. Learn more about the technology we’re using to fight against child exploitation.
Do not post:
Child sexual exploitation
Content or activity that threatens, depicts, praises, supports, provides instructions for, makes statements of intent, admits participation in or shares links of the sexual exploitation of children (including real minors, toddlers or babies or non-real depictions with a human likeness, such as in art, AI-generated content, fictional characters, dolls, etc). This includes but is not limited to:
- Sexual intercourse
- Explicit sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person's genitals or anus, where at least one person's genitals are nude.
- Implied sexual intercourse or oral sex, including when contact is imminent or not directly visible.
- Stimulation of genitals or anus, including when activity is imminent or not directly visible.
- Presence of by-products of sexual activity.
- Any of the above involving an animal.
- Children with sexual elements, including but not limited to:
- Restraints.
- Focus on genitals.
- Presence of aroused adult.
- Presence of sex toys or use of any object for sexual stimulation or gratification.
- Sexualized costume.
- Stripping.
- Staged environment (for example, on a bed) or professionally shot (quality/focus/angles).
- Open-mouth kissing.
- Content of children in a sexual fetish context.
- Content that supports, promotes, advocates or encourages participation in pedophilia unless it is discussed neutrally in an academic or verified health context.
- Content that identifies or mocks alleged victims of child sexual exploitation by name or image.
Solicitation
Content that solicits sexual content or activity depicting or involving children, defined as:
- Child Sexual Abuse Material (CSAM)
- Nude imagery of real or non-real children
- Sexualized imagery of real or non-real children
Content that solicits sexual encounters with children
Inappropriate interactions with children
Content that constitutes or facilitates inappropriate interactions with children, such as:
- Arranging or planning real-world sexual encounters with children
- Purposefully exposing children to sexually explicit language or sexual material
- Engaging in implicitly sexual conversations in private messages with children
- Obtaining or requesting sexual material from children in private messages
Exploitative intimate imagery and sextortion
Content that attempts to exploit real children by:
- Coercing money, favors or intimate imagery with threats to expose intimate imagery or information.
- Sharing, threatening or stating an intent to share private sexual conversations or intimate imagery.
Sexualization of children
- Content (including photos, videos, real-world art, digital content, and verbal depictions) that sexualizes real or non-real children.
- Groups, Pages and profiles dedicated to sexualizing real or non-real children.
Child nudity
Content that depicts real or non-real child nudity where nudity is defined as:
- Close-ups of real or non-real children’s genitalia
- Real or non-real nude toddlers, showing:
  - Visible genitalia, even when covered or obscured by transparent clothing.
  - Visible anus and/or fully nude close-up of buttocks.
- Real or non-real nude minors, showing:
  - Visible genitalia (including genitalia obscured only by pubic hair or transparent clothing).
  - Visible anus and/or fully nude close-up of buttocks.
  - Uncovered female nipples.
  - No clothes from neck to knee, even if no genitalia or female nipples are showing.
- Unless the non-real imagery is for health or educational purposes or is a depiction of child nudity in real-world art.
Non-sexual child abuse
Imagery that depicts real or non-real non-sexual child abuse regardless of sharing intent, unless the imagery is from real-world art, cartoons, movies or video games
Content that praises, supports, promotes, advocates for, provides instructions for or encourages participation in non-sexual child abuse.
For the following content, we include a warning screen so that people are aware the content may be disturbing and limit the ability to view the content to adults, ages eighteen and older:
- Videos or photos that depict police officers or military personnel committing non-sexual child abuse.
- Imagery of non-sexual child abuse, when law enforcement, child protection agencies, or trusted safety partners request that we leave the content on the platform for the express purpose of bringing a child back to safety.
For the following content, we include a sensitivity screen so that people are aware the content may be upsetting to some:
- Videos or photos of violent immersion of a child in water in the context of religious rituals.
For the following Community Standards, we require additional information and/or context to enforce:
For the following content, we include a warning label so that people are aware that the content may be sensitive:
- Imagery posted by a news agency that depicts child nudity in the context of famine, genocide, war crimes, or crimes against humanity, unless accompanied by a violating caption or shared in a violating context, in which case the content is removed.
We may also remove imagery depicting the aftermath of non-sexual child abuse when reported by news media partners, NGOs or other trusted safety partners.
Adult Sexual Exploitation
Policy Rationale
We recognize the importance of Facebook as a place to discuss and draw attention to sexual violence and exploitation. In an effort to create space for this conversation and promote a safe environment, we allow victims to share their experiences, but remove content that depicts, threatens or promotes sexual violence, sexual assault, or sexual exploitation. We also remove content that displays, advocates for or coordinates sexual acts with non-consenting parties to avoid facilitating non-consensual sexual acts.
To protect victims and survivors, we remove images that depict incidents of sexual violence and intimate images shared without the consent of the person(s) pictured. As noted in the introduction, we also work with external safety experts to discuss and improve our policies and enforcement around online safety issues, and we may remove content when they provide information that content is linked to harmful activity. We have written about the technology we use to protect against intimate images and the research that has informed our work. We’ve also put together a guide to reporting and removing intimate images shared without your consent.
Do not post:
In instances where content consists of any form of non-consensual sexual touching, necrophilia, or forced stripping, including:
- Depictions (including real photos/videos except in a real-world art context), or
- Sharing, offering, asking for or threatening to share imagery, or
- Descriptions, unless shared by or in support of the victim/survivor, or
- Advocacy (including aspirational and conditional statements), or
- Statements of intent, or
- Calls for action, or
- Admitting participation, or
- Mocking victims of any of the above.
- We will also take down content shared by a third party that identifies victims or survivors of sexual assault when reported by the victim or survivor.
Content that attempts to exploit people by any of the following:
- Sextortion: Coercing money, favors or intimate imagery from people with threats to expose their intimate imagery or intimate information
- Sharing, threatening, stating an intent to share, offering or asking for non-consensual intimate imagery that fulfills all three of the following conditions:
  - Imagery is non-commercial or produced in a private setting.
  - Person in the imagery is (near) nude, engaged in sexual activity or in a sexual pose.
  - Lack of consent to share the imagery is indicated by any of the following signals:
    - Vengeful context (such as caption, comments or page title).
    - Independent sources (such as a law enforcement record), including entertainment media (such as a leak of images confirmed by media).
    - A visible match between the person depicted in the image and the person who has reported the content to us.
    - The person who reported the content to us shares the same name as the person depicted in the image.
- Secretly taken non-commercial imagery of a real person's commonly sexualized body parts (breasts, groin, buttocks, or thighs) or of a real person engaged in sexual activity. This imagery is commonly known as "creepshots" or "upskirts" and includes photos or videos that mock, sexualize or expose the person depicted in the imagery.
- Threatening or stating an intent to share private sexual conversations, where lack of consent is indicated by any of the following:
  - Vengeful and/or threatening context, or
  - A visible match between the person depicted in the image and the person who has reported the content to us, or
  - The person who reported the content to us shares the same name as the person depicted in the image.
For the following content, we include a warning screen so that people are aware the content may be disturbing:
Narratives and statements that contain a description of non-consensual sexual touching (written or verbal) that includes details beyond mere naming or mentioning the act if:
- Shared by the victim, or
- Shared by a third party (other than the victim) in support of the victim or condemnation of the act or for general awareness to be determined by context/caption.
Content mocking the concept of non-consensual sexual touching
For the following Community Standards, we require additional information and/or context to enforce:
We may restrict visibility to people over the age of 18 and include a warning label on certain content depicting non-consensual sexual touching, when it is shared to raise awareness and without entertainment or sensational context, where the victim or survivor is not identifiable and where the content does not involve nudity.
In addition to our at-scale policy of removing content that threatens or advocates rape or other non-consensual sexual touching, we may also disable the posting account.
We may also enforce on content shared by a third party that identifies survivors of sexual assault when reported by an authorized representative or Trusted Partner.
Violence and Incitement
Policy Rationale
We aim to prevent potential offline harm that may be related to content on Facebook. While we understand that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, we remove language that incites or facilitates serious violence. We remove content, disable accounts and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety. We also try to consider the language and context in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety. In determining whether a threat is credible, we may also consider additional information like a person's public visibility and the risks to their physical safety.
In some cases, we see aspirational or conditional threats directed at terrorists and other violent actors (e.g. "Terrorists deserve to be killed"), and we deem those non-credible, absent specific evidence to the contrary.
Do not post:
Threats that could lead to death (and other forms of high-severity violence) and admission of past violence targeting people or places where threat is defined as any of the following:
- Statements of intent to commit high-severity violence. This includes content where a symbol represents the target and/or includes a visual of an armament or method to represent violence.
- Calls for high-severity violence including content where no target is specified but a symbol represents the target and/or includes a visual of an armament or method that represents violence.
- Statements advocating for high-severity violence.
- Aspirational or conditional statements to commit high-severity violence.
- Statements admitting to committing high-severity violence (in written or verbal form, or visually depicted by a perpetrator), except when shared in a context of redemption, self-defense or when committed by law enforcement, military or state security personnel.
Content that asks for, offers, or admits to offering services of high-severity violence (for example, hitmen, mercenaries, assassins, female genital mutilation) or advocates for the use of these services
Admissions, statements of intent or advocacy, calls to action, or aspirational or conditional statements to kidnap or abduct a target or that promotes, supports or advocates for kidnapping or abduction
Content that depicts kidnappings or abductions if it is clear the content is not being shared by a victim or their family as a plea for help, or shared for informational, condemnation or awareness raising purposes
Threats of high-severity violence using digitally-produced or altered imagery to target living people with armaments, methods of violence or dismemberment
Threats that lead to serious injury (mid-severity violence) and admission of past violence toward private individuals, unnamed specified persons, minor public figures, high-risk persons or high-risk groups where threat is defined as any of the following:
- Statements of intent to commit violence, or
- Statements advocating violence, or
- Calls for mid-severity violence including content where no target is specified but a symbol represents the target, or
- Aspirational or conditional statements to commit violence, or
- Statements admitting to committing mid-severity violence (in written or verbal form, or visually depicted by a perpetrator), except when shared in a context of redemption, self-defense, fight-sports context or when committed by law enforcement, military or state security personnel.
Content about any other target(s) (apart from private individuals, minor public figures, high-risk persons or high-risk groups) containing any credible:
- Statements of intent to commit violence, or
- Calls for action of violence, or
- Statements advocating for violence, or
- Aspirational or conditional statements to commit violence
Threats of physical harm (or other forms of lower-severity violence) toward private individuals (self-reporting and a name and/or face match required) or minor public figures, where threat is defined as:
- Statements of intent, calls for action, advocacy, or aspirational or conditional statements to commit low-severity violence.
Instructions on how to make or use weapons if there is evidence of a goal to seriously injure or kill people through:
- Language explicitly stating that goal, or
- Photos or videos that show or simulate the end result (serious injury or death) as part of the instruction.
- Unless shared in a context of recreational self-defense, for military training purposes, commercial video games, or news coverage (posted by a Page or with a news logo).
Providing instructions on how to make or use explosives, unless there is clear context that the content is for a non-violent purpose (for example, part of commercial video games, clear scientific/educational purpose, fireworks or specifically for fishing)
Any content containing statements of intent, calls for action, conditional or aspirational statements, or advocating for violence due to voting, voter registration or the administration or outcome of an election
Statements of intent or advocacy, calls to action, or aspirational or conditional statements to bring or take up armaments to locations (including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election) or locations where there are temporary signals of a heightened risk of violence. This may be the case, for example, when there is a known protest and counter-protest planned or violence broke out at a protest in the same city within the last 7 days. This includes a visual of an armament or method that represents violence that targets these locations.
Statements of intent or advocacy, calls to action, or aspirational or conditional statements to forcibly enter locations (including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election) where there are temporary signals of a heightened risk of violence. This may be the case, for example, when there is a known protest and counter-protest planned or violence broke out at a protest in the same city within the last 7 days.
For the following Community Standards, we require additional information and/or context to enforce:
Do not post:
- Violent threats against law enforcement officers.
- Violent threats against people accused of a crime. We remove this content when we have reason to believe that the content is intended to cause physical harm.
- Coded statements where the method of violence is not clearly articulated, but the threat is veiled or implicit, as shown by the combination of both a threat signal and a contextual signal from the list below.
  - Threat: Content is a coded statement that is one of the following:
    - Shared in a retaliatory context (e.g., expressions of desire to engage in violence against others in response to a grievance or threat that may be real, perceived or anticipated).
    - References to historical or fictional incidents of violence (e.g., content that threatens others by referring to known historical incidents of violence that have been executed throughout history or in fictional settings).
    - Acts as a threatening call-to-action (e.g., content inviting or encouraging others to carry out violent acts or to join in carrying out the violent acts).
    - Indicates knowledge of or shares sensitive information that could expose others to violence (e.g., content that either makes note of or implies awareness of personal information that might make a threat of violence more credible. This includes implying knowledge of a person's residential address, their place of employment or education, daily commute routes or current location).
  - Context:
    - Local context or expertise confirms that the statement in question could lead to imminent violence.
    - The target of the content or an authorized representative reports the content to us.
- Threats against election workers, including claims of election-related wrongdoing against private individuals when combined with a signal of violence or additional context that confirms that the claim could lead to imminent violence or physical harm.
- Implicit statements of intent or advocacy, calls to action, or aspirational or conditional statements to bring armaments to locations, including but not limited to places of worship, educational facilities, polling places, or locations used to count votes or administer an election (or encouraging others to do the same). We may also restrict calls to bring armaments to certain locations where there are temporary signals of a heightened risk of violence or offline harm. This may be the case, for example, when there is a known protest and counter-protest planned or violence broke out at a protest in the same city within the last 7 days.
Hate Speech
Policy Rationale
We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are. That is why we don’t allow hate speech on Facebook. It creates an environment of intimidation and exclusion, and in some cases may promote offline violence.
We define hate speech as a direct attack against people — rather than concepts or institutions — on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. We define attacks as violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation. We also prohibit the use of harmful stereotypes, which we define as dehumanizing comparisons that have historically been used to attack, intimidate, or exclude specific groups, and that are often linked with offline violence. We consider age a protected characteristic when referenced along with another protected characteristic. We also protect refugees, migrants, immigrants and asylum seekers from the most severe attacks, though we do allow commentary and criticism of immigration policies. Similarly, we provide some protections for characteristics like occupation, when they're referenced along with a protected characteristic. Sometimes, based on local nuance, we consider certain words or phrases as frequently used proxies for protected-characteristic groups.
We also prohibit the usage of slurs that are used to attack people on the basis of their protected characteristics. However, we recognize that people sometimes share content that includes slurs or someone else's hate speech to condemn it or raise awareness. In other cases, speech, including slurs, that might otherwise violate our standards can be used self-referentially or in an empowering way. Our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If the intention is unclear, we may remove content.
Learn more about our approach to hate speech.
Do not post:
Tier 1
Content targeting a person or group of people (including all groups except those who are considered non-protected groups described as having carried out violent crimes or sexual offenses or representing less than half of a group) on the basis of their aforementioned protected characteristic(s) or immigration status with:
- Violent speech or support in written or visual form
- Dehumanizing speech or imagery in the form of comparisons, generalizations, or unqualified behavioral statements (in written or visual form) to or about:
- Insects (including but not limited to: cockroaches, locusts)
- Animals in general or specific types of animals that are culturally perceived as intellectually or physically inferior (including but not limited to: Black people and apes or ape-like creatures; Jewish people and rats; Muslim people and pigs; Mexican people and worms)
- Filth (including but not limited to: dirt, grime)
- Bacteria, viruses, or microbes
- Disease (including but not limited to: cancer, sexually transmitted diseases)
- Feces (including but not limited to: shit, crap)
- Subhumanity (including but not limited to: savages, devils, monsters, primitives)
- Sexual predators (including but not limited to: Muslim people having sex with goats or pigs)
- Violent criminals (including but not limited to: terrorists, murderers, members of hate or criminal organizations)
- Other criminals (including but not limited to “thieves,” “bank robbers,” or saying “All [protected characteristic or quasi-protected characteristic] are ‘criminals’”).
- Certain objects (women as household objects or property or objects in general; Black people as farm equipment; transgender or non-binary people as “it”)
- Statements denying existence (including but not limited to: "[protected characteristic(s) or quasi-protected characteristic] do not exist", "no such thing as [protected characteristic(s) or quasi-protected characteristic]")
- Harmful stereotypes historically linked to intimidation, exclusion, or violence on the basis of a protected characteristic, such as Blackface; Holocaust denial; claims that Jewish people control financial, political, or media institutions; and references to Dalits as menial laborers
- Mocking the concept, events or victims of hate crimes even if no real person is depicted in an image.
Tier 2
Content targeting a person or group of people on the basis of their protected characteristic(s) with:
- Generalizations that state inferiority (in written or visual form) in the following ways:
  - Physical deficiencies, defined as those about:
    - Hygiene, including but not limited to: filthy, dirty, smelly.
    - Physical appearance, including but not limited to: ugly, hideous.
  - Mental deficiencies, defined as those about:
    - Intellectual capacity, including but not limited to: dumb, stupid, idiots.
    - Education, including but not limited to: illiterate, uneducated.
    - Mental health, including but not limited to: mentally ill, retarded, crazy, insane.
  - Moral deficiencies, defined as those about:
    - Character traits culturally perceived as negative, including but not limited to: coward, liar, arrogant, ignorant.
    - Derogatory terms related to sexual activity, including but not limited to: whore, slut, perverts.
- Other statements of inferiority, which we define as:
  - Expressions about being less than adequate, including but not limited to: worthless, useless.
  - Expressions about being better/worse than another protected characteristic, including but not limited to: "I believe that males are superior to females."
  - Expressions about deviating from the norm, including but not limited to: freaks, abnormal.
- Expressions of contempt (in written or visual form), which we define as:
  - Self-admission to intolerance on the basis of a protected characteristic, including but not limited to: homophobic, islamophobic, racist.
  - Expressions that a protected characteristic shouldn't exist.
  - Expressions of hate, including but not limited to: despise, hate.
- Expressions of dismissal, including but not limited to: don't respect, don't like, don't care for.
- Expressions of disgust (in written or visual form), which we define as:
  - Expressions that suggest the target causes sickness, including but not limited to: vomit, throw up.
  - Expressions of repulsion or distaste, including but not limited to: vile, disgusting, yuck.
- Cursing, except certain gender-based cursing in a romantic break-up context, defined as:
  - Referring to the target as genitalia or anus, including but not limited to: cunt, dick, asshole.
  - Profane terms or phrases with the intent to insult, including but not limited to: fuck, bitch, motherfucker.
  - Terms or phrases calling for engagement in sexual activity, or contact with genitalia, anus, feces or urine, including but not limited to: suck my dick, kiss my ass, eat shit.
Tier 3
Content targeting a person or group of people on the basis of their protected characteristic(s) with any of the following:
- Segregation in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting segregation.
- Exclusion in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting exclusion, defined as:
  - Explicit exclusion, which means things like expelling certain groups or saying they are not allowed.
  - Political exclusion, which means denying the right to political participation.
  - Economic exclusion, which means denying access to economic entitlements and limiting participation in the labour market.
  - Social exclusion, which means things like denying access to spaces (physical and online) and social services, except for gender-based exclusion in health and positive support Groups.
Content that describes or negatively targets people with slurs, where slurs are defined as words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic, often because these words are tied to historical discrimination, oppression, and violence. They do this even when targeting someone who is not a member of the protected-characteristic group that the slur inherently targets.
For the following Community Standards, we require additional information and/or context to enforce:
Do not post:
- Content explicitly providing or offering to provide products or services that aim to change people’s sexual orientation or gender identity.
- Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic. Facebook looks at a range of signs to determine whether there is a threat of harm in the content. These include but are not limited to: content that could incite imminent violence or intimidation; whether there is a period of heightened tension such as an election or ongoing conflict; and whether there is a recent history of violence against the targeted protected group. In some cases, we may also consider whether the speaker is a public figure or occupies a position of authority.
- Content targeting a person or group of people on the basis of their protected characteristic(s) with claims that they have or spread the novel coronavirus, are responsible for the existence of the novel coronavirus, are deliberately spreading the novel coronavirus, or mocking them for having or experiencing the novel coronavirus.
In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.
Bullying and Harassment
Policy Rationale
Bullying and harassment happen in many places and come in many different forms, from making threats and releasing personally identifiable information to sending threatening messages and making unwanted malicious contact. We do not tolerate this kind of behavior because it prevents people from feeling safe and respected on Facebook.
We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe as well as certain attacks where the public figure is directly tagged in the post or comment. We define public figures as state and national level government officials, political candidates for those offices, people with over one million fans or followers on social media and people who receive substantial news coverage.
For private individuals, our protection goes further: We remove content that's meant to degrade or shame, including, for example, claims about someone's sexual activity. We recognize that bullying and harassment can have more of an emotional impact on minors, which is why our policies provide heightened protection for users between the ages of 13 and 18.
Context and intent matter, and we allow people to post and share if it is clear that something was shared in order to condemn or draw attention to bullying and harassment. In certain instances, we require self-reporting because it helps us understand that the person targeted feels bullied or harassed. In addition to reporting such behavior and content, we encourage people to use tools available on Facebook to help protect against it.
We also have a Bullying Prevention Hub, which is a resource for teens, parents, and educators seeking support for issues related to bullying and other conflicts. It offers step-by-step guidance, including information on how to start important conversations about bullying. Learn more about what we are doing to protect people from bullying and harassment here.
Note: This policy does not apply to individuals who are part of designated organizations under the Dangerous Organizations and Individuals policy or individuals who died prior to 1900.
Tier 1: Universal protections for everyone:
- Everyone is protected from:
  - Unwanted contact that is:
    - Repeated, OR
    - Sexually harassing, OR
    - Directed at a large number of individuals with no prior solicitation.
  - Calls for self-injury or suicide of a specific person, or group of individuals.
  - Attacks based on their experience of sexual assault, sexual exploitation, sexual harassment, or domestic abuse.
  - Statements of intent to engage in a sexual activity or advocating to engage in a sexual activity.
  - Severe sexualized commentary.
  - Derogatory sexualized photoshop or drawings.
  - Attacks through derogatory terms related to sexual activity (for example: whore, slut).
  - Claims that a violent tragedy did not occur.
  - Claims that individuals are lying about being a victim of a violent tragedy or terrorist attack, including claims that they are:
    - Acting or pretending to be a victim of a specific event, or
    - Paid or employed to mislead people about their role in the event.
- Threats to release an individual's private phone number, residential address, email address or medical records (as defined in the Privacy Violations policy).
- Calls for, or statements of intent to engage in, bullying and/or harassment.
- Content that degrades or expresses disgust toward individuals who are depicted in the process of, or right after, menstruating, urinating, vomiting, or defecating.
- Everyone is protected from the following, but adult public figures are protected only when they are purposefully exposed to:
  - Calls for death and statements in favor of contracting or developing a medical condition.
  - Celebration or mocking of death or medical condition.
  - Claims about sexually transmitted infections.
  - Derogatory terms related to female gendered cursing.
  - Statements of inferiority about physical appearance.
Tier 2: Additional protections for all Minors, Private Adults and Limited Scope Public Figures (for example, individuals whose primary fame is limited to their activism, journalism, or those who become famous through involuntary means):
- In addition to the universal protections for everyone, all minors (private individuals and public figures), private adults and limited scope public figures are protected from:
- Claims about sexual activity, except in the context of criminal allegations against adults (non-consensual sexual touching).
- Content sexualizing another adult (sexualization of minors is covered in the Child Sexual Exploitation, Abuse and Nudity policy).
- All minors (private individuals and public figures), private adults and limited scope public figures are protected from the following, but for minor public figures, they must be purposefully exposed to:
- Dehumanizing comparisons (in written or visual form) to or about:
- Animals and insects, including subhuman creatures, that are culturally perceived as inferior.
- Bacteria, viruses, microbes, and diseases.
- Inanimate objects, including trash, filth, feces.
- Content manipulated to highlight, circle, or otherwise negatively draw attention to specific physical characteristics (nose, ear, and so on).
- Content that ranks them based on physical appearance or character traits.
- Content that degrades individuals who are depicted being physically bullied (except in self-defense and fight-sport contexts).
Tier 3: Additional protections for Private Minors, Private Adults, and Minor Involuntary Public Figures:
- In addition to all the protections listed above, all private minors, private adults (who must self-report), and minor involuntary public figures are protected from:
- Targeted cursing.
- Claims about romantic involvement, sexual orientation or gender identity.
- Calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting exclusion.
- Negative character or ability claims, except in the context of criminal allegations and business reviews against adults.
- Expressions of contempt, disgust, or content rejecting the existence of an individual, except in the context of criminal allegations against adults.
- When self-reported, private minors, private adults, and minor involuntary public figures are protected from the following:
- First-person voice bullying.
- Unwanted manipulated imagery.
- Comparison to other public, fictional or private individuals on the basis of physical appearance.
- Claims about religious identity or blasphemy.
- Comparisons to animals or insects that are not culturally perceived as intellectually or physically inferior ("tiger," "lion").
- Neutral or positive physical descriptions.
- Non-negative character or ability claims.
- Attacks through derogatory terms related to a lack of sexual activity.
Tier 4: Additional protections for Private Minors only:
- Minors get the most protection under our policy. In addition to all the protections listed above, private minors are also protected from:
- Allegations about criminal or illegal behavior.
- Videos of physical bullying against minors, shared in a non-condemning context.
Tier 5: Bullying and harassment through pages, groups, events and messages
- The protections of Tiers 1 through 4 are also enforced on pages, groups, events and messages.
We add a cover to this content so people can choose whether to see it:
Videos of physical bullying against minors shared in a condemning context
For the following Community Standards, we require additional information and/or context to enforce:
Do not:
- Post content that targets private individuals through unwanted Pages, Groups and Events. We remove this content when it is reported by the victim or an authorized representative of the victim.
- Create accounts to contact someone who has blocked you.
- Post attacks that use derogatory terms related to female gendered cursing. We remove this content when the victim or an authorized representative of the victim informs us of the content, even if the victim has not reported it directly.
- Post content that would otherwise require the victim to report the content or an indicator that the poster is directly targeting the victim (for example: the victim is tagged in the post or comment). We will remove this content if we have confirmation from the victim or an authorized representative of the victim that the content is unwanted.
- Post content praising, celebrating or mocking anyone's death. We also remove content targeting a deceased individual that we would normally require the victim to report.
- Post content calling for or stating an intent to engage in behavior that would qualify as bullying and harassment under our policies. We will remove this content when we have confirmation from the victim or an authorized representative of the victim that the content is unwanted.
- Post content sexualizing a public figure. We will remove this content when we have confirmation from the victim or an authorized representative of the victim that the content is unwanted.
- Initiate contact that is unwanted, including when contact is sexually harassing the recipient. We will remove any content shared in an unwanted context when we have a confirmation from the recipient, or an authorized representative of the recipient that contact is unwanted.
- Engage in mass harassment against individuals that targets them based on their decision to take or not take the COVID-19 vaccine with:
- Statements of mental or moral inferiority based on their decision, or
- Statements that advocate for or allege a negative outcome as a result of their decision, except for widely proven and/or accepted COVID-19 symptoms or vaccine side effects.
- Remove directed mass harassment, when:
- Targeting, via any surface, ‘individuals at heightened risk of offline harm’, defined as:
- Human rights defenders
- Minors
- Victims of violent events/tragedies
- Opposition figures in at-risk countries during election periods
- Election officials
- Government dissidents who have been targeted based on their dissident status
- Ethnic and religious minorities in conflict zones
- Members of a designated and recognizable at-risk group
- Targeting any individual via personal surfaces, such as inbox or profiles, with:
- Content that violates the bullying and harassment policies for private individuals or,
- Objectionable content that is based on a protected characteristic
- Disable accounts engaged in mass harassment as part of either:
- State or state-affiliated networks targeting any individual via any surface.
- Adversarial networks targeting any individual via any surface with:
- Content that violates the bullying and harassment policies for private individuals or,
- Content that targets them based on a protected characteristic, or,
- Content or behavior otherwise deemed to be objectionable in local context
Suicide and Self Injury
Policy Rationale
We care deeply about the safety of the people who use our apps. We regularly consult with experts in suicide and self-injury to help inform our policies and enforcement, and work with organizations around the world to provide assistance to people in distress.
While we do not allow people to intentionally or unintentionally celebrate or promote suicide or self-injury, we do allow people to discuss these topics because we want Facebook to be a space where people can share their experiences, raise awareness about these issues, and seek support from one another.
We define self-injury as the intentional and direct injuring of the body, including self-mutilation and eating disorders. We remove any content that encourages suicide or self-injury, including fictional content such as memes or illustrations, and any self-injury content that is graphic, regardless of context. We also remove content that identifies and negatively targets victims or survivors of suicide or self-injury, whether seriously, humorously or rhetorically, as well as real-time depictions of suicide or self-injury. Content about recovery from suicide or self-harm is allowed, but if it contains imagery that could be upsetting, such as a healed scar, it is placed behind a sensitivity screen.
When people post or search for suicide or self-injury-related content, we will direct them to local organizations that can provide support, and if our Community Operations team is concerned about immediate harm, we will contact local emergency services to get them help. For more information, visit the Facebook Safety Center.
With respect to live content, experts have told us that if someone is saying they intend to attempt suicide on a livestream, we should leave the content up for as long as possible, because the longer someone is talking to a camera, the more opportunity there is for a friend or family member to call emergency services.
However, to minimize the risk of others being negatively impacted by viewing this content, we will stop the livestream at the point at which the threat turns into an attempt. As mentioned above, in any case, we will contact emergency services if we identify someone is at immediate risk of harming themselves.
Do not post:
Content that promotes, encourages, coordinates, or provides instructions for:
- Suicide.
- Self-injury.
- Eating disorders.
Content that depicts graphic self-injury imagery.
It is against our policies to post content depicting a person who engaged in a suicide attempt or death by suicide.
Content that focuses on depiction of ribs, collar bones, thigh gaps, hips, concave stomach, or protruding spine or scapula when shared together with terms associated with eating disorders.
Content that contains instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders.
Content that mocks victims or survivors of suicide, self-injury or eating disorders who are either publicly known or implied to have experienced suicide or self-injury.
For the following content, we restrict content to adults over the age of 18, and include a sensitivity screen so that people are aware the content may be upsetting:
- Photos or videos depicting a person who engaged in euthanasia/assisted suicide in a medical setting.
For the following content, we include a sensitivity screen so that people are aware the content may be upsetting to some:
- Content that depicts older instances of self-harm such as healed cuts or other non-graphic self-injury imagery in a self-injury, suicide or recovery context.
- Content that depicts ribs, collar bones, thigh gaps, hips, concave stomach, or protruding spine or scapula in a recovery context.
We provide resources to people who post written or verbal admissions of engagement in self injury, including:
- Suicide.
- Euthanasia/assisted suicide.
- Self-harm.
- Eating disorders.
- Vague, potentially suicidal statements or references (including memes or stock imagery about sad mood or depression) in a suicide or self-injury context.
For the following Community Standards, we require additional information and/or context to enforce:
- We may remove suicide notes when we have confirmation of a suicide or suicide attempt. We try to identify suicide notes using several factors, including but not limited to, family or legal representative requests, media reports, law enforcement reports or other third party sources (e.g. government agencies, NGOs).
- A suicide note may also be removed when reported through the Suicidal Content Contact Form or Instagram Contact Form when we have confirmation of a suicide or suicide attempt.
Violent and Graphic Content
Policy Rationale
To protect users from disturbing imagery, we remove content that is particularly violent or graphic, such as videos depicting dismemberment, visible innards or charred bodies. We also remove content that contains sadistic remarks towards imagery depicting the suffering of humans and animals.
In the context of discussions about important issues such as human rights abuses, armed conflicts or acts of terrorism, we allow graphic content (with some limitations) to help people to condemn and raise awareness about these situations.
We know that people have different sensitivities with regard to graphic and violent imagery. For that reason, we add a warning label to some graphic or violent imagery so that people are aware it may be sensitive before they click through. We also restrict the ability for users under 18 to view such content.
Do not post:
Imagery of people
Videos of people or dead bodies in non-medical settings if they depict:
- Dismemberment.
- Visible internal organs; partially decomposed bodies.
- Charred or burning people unless in the context of cremation or self-immolation when that action is a form of political speech or newsworthy.
- Victims of cannibalism.
- Throat-slitting.
Live streams of capital punishment of a person
Sadistic Remarks
- Sadistic remarks towards imagery that is put behind a warning screen under this policy advising people that the content may be disturbing, unless there is a self-defense context or medical setting.
- Sadistic remarks towards the following content which includes a label so that people are aware it may be sensitive:
- Imagery of one or more persons subjected to violence and/or humiliating acts by one or more uniformed personnel performing a police function.
- Imagery of fetuses or newborn babies.
- Imagery of fetuses and babies outside of the womb that are deceased.
- Explicit sadistic remarks towards the suffering of animals depicted in the imagery.
- Offering or soliciting imagery that is deleted or put behind a warning screen under this policy, when accompanied by sadistic remarks.
For the following content, we include a warning screen so that people are aware the content may be disturbing. We also limit the ability to view the content to adults, ages 18 and older:
Imagery of people
Videos of people or dead bodies in a medical setting if they depict:
- Dismemberment.
- Visible internal organs; partially decomposed bodies.
- Charred or burning people, including cremation or self-immolation when that action is a form of political speech or newsworthy.
- Victims of cannibalism.
- Throat-slitting.
Photos of wounded or dead people if they show:
- Dismemberment.
- Visible internal organs; partially decomposed bodies.
- Charred or burning people.
- Victims of cannibalism.
- Throat-slitting.
Imagery that shows the violent death of a person or people by accident or murder
Imagery that shows capital punishment of a person
Imagery that shows acts of torture committed against a person or people
Imagery of non-medical foreign objects (such as metal objects, knives, nails) inserted or stuck into a person causing grievous injury
Imagery of animals
The following content involving animals:
- Videos depicting humans killing animals if there is no explicit manufacturing, hunting, food consumption, processing or preparation context.
- Imagery of animal-to-animal fights, when there are visible innards or dismemberment of non-regenerating body parts, unless in the wild.
- Imagery of humans committing acts of torture or abuse against live animals.
- Imagery of animals showing wounds or cuts that render visible innards or dismemberment, if there is no explicit manufacturing, hunting, taxidermy, medical treatment, rescue or food consumption, preparation or processing context, or the animal is already skinned or with its outer layer fully removed.
For the following content, we include a label so that people are aware the content may be sensitive:
Imagery of non-medical foreign objects inserted into a person through their skin in a religious or cultural context
Imagery of visible innards in a birthing context
Imagery of fetuses and newborn babies that show:
- Dismemberment.
- Visible innards.
- An abortion or abandonment context.
Imagery of fetuses and babies outside of the womb that are deceased, unless another person is present in the image.
Imagery of fetuses and babies outside the womb in an abandonment context
Imagery of animals in a ritual slaughter context showing dismemberment, or visible innards, or charring or burning
For the following Community Standards, we require additional information and/or context to enforce:
We remove:
- Videos and photos that show the violent death of someone when a family member requests its removal.
- Videos of violent death of humans where the violent death is not visible in the video but the audio is fully or partially captured and the death is confirmed by either a law enforcement record, death certificate, Trusted Partner report or media report.