Meta’s Oversight Board to Tackle AI-Created Explicit Imagery

Meta’s Oversight Board is stepping into the spotlight once again to scrutinize the social media giant’s guidelines around AI-generated content, particularly explicit images of public figures. The board has taken up two significant cases involving AI-generated explicit imagery, both of which raise questions about how effectively Meta polices such content across its platforms, Facebook and Instagram.

Despite Meta’s existing policies against nudity, these instances spotlight a challenging facet of digital content moderation – the handling of explicit AI-generated imagery, commonly referred to as “deepfake porn.” This type of content, especially when it targets female celebrities, politicians, and other public figures, has emerged as a sophisticated form of online harassment. The increasing prominence of such malicious content has ignited discussions on the need for more stringent regulations to address the issue.

Among the cases taken up by the Oversight Board, one involves an Instagram post displaying a nude AI-generated image of an Indian woman. The account responsible for the post was found to specialize exclusively in sharing AI-generated images of Indian women. Although the post was reported to Meta, the report was closed after 48 hours without review. A further appeal by the user met the same fate, and only after the case was escalated to the Oversight Board did Meta remove the contentious post.

The second case under the board’s review originated in a Facebook group dedicated to AI art. It featured an AI-generated image of a nude woman being groped by a man, made to resemble a well-known American public figure, whose name was also mentioned in the post’s caption. Meta’s systems removed the post automatically, recognizing it from previous reports. The user’s subsequent appeal to restore the post was automatically denied, leading them to seek the Oversight Board’s intervention.

The decision to review cases from two distinct geographical regions—India and the United States—reflects the Oversight Board’s intention to examine potential inconsistencies in Meta’s content moderation policies and practices across different markets and languages. Helle Thorning-Schmidt, co-chair of the Oversight Board, emphasized the importance of assessing whether Meta’s efforts in content moderation extend equitable protection to all women globally, acknowledging that the company’s response times and effectiveness vary significantly from one region to another.

As part of its review process, the Oversight Board is soliciting public input on the matter over the next two weeks. The board’s findings, expected in the coming weeks, will include decisions on the specific cases as well as broader policy recommendations for Meta. This initiative follows an earlier review in which the board’s deliberations led Meta to commit to more comprehensive labeling of AI-generated content on its platforms, highlighting the board’s growing influence on Meta’s content policy formulation and enforcement. The current engagement represents a further step toward addressing the complex challenges posed by AI-generated explicit content and its implications for social media governance.

Source: Sensi Tech Hub