Building rules in public: Twitter announces its new approach to synthetic and manipulated media

  • By Nelly All


    As part of Twitter’s responsibility to create rules that are fair and set clear expectations for everyone on the service, Twitter announced last fall that it would seek public input, in Arabic, English, Hindi, Spanish, Portuguese, and Japanese, on how it should address synthetic and manipulated media. Today, it is sharing what it learned, how that feedback shaped the update to the Twitter Rules, how it will treat this content when it identifies it, and a new feature people will see on Twitter as part of this change.

     

    Learnings

    Through a survey on its initial draft of this rule, as well as Tweets that included the hashtag #TwitterPolicyFeedback, Twitter gathered more than 6,500 responses from people around the world. It also consulted with a diverse, global group of civil society and academic experts on its draft approach. Overall, people recognize the threat that misleading altered media poses and want Twitter to do something about it. Here are some of the top-line findings:

     

    • Twitter should give me more information: Globally, more than 70 percent of people who use Twitter said “taking no action” on misleading altered media would be unacceptable. Respondents were nearly unanimous in their support for Twitter providing additional information or context on Tweets that have this type of media. 

     

    • This type of content should be labeled: Nearly 9 out of 10 respondents said placing warning labels next to significantly altered content would be acceptable, and roughly the same proportion said it would be acceptable to alert people before they Tweet misleading altered media. 

     

    Compared to placing warning labels, respondents were somewhat less supportive of removing or hiding Tweets that contained misleading altered media. For example, 55 percent of those surveyed in the US said it would be acceptable to remove all such media. When asked for open-ended thoughts on the proposed rule, people who opposed removing all altered media raised concerns about censorship and the impact on free expression.

    • If it is likely to cause harm, it should be removed: More than 90 percent of people who shared feedback support Twitter removing this content when it’s clear that it is intended to cause certain types of harm.

    • There should be enforcement action when sharing this content: More than 75 percent of people believe accounts that share misleading altered media should face enforcement action. Such actions could include requiring people to delete their Tweet or suspending their account.

    What’s the new rule?

    People may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, Twitter may label Tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.

     

    Twitter will use the following criteria to consider Tweets and media for labeling or removal under this rule:

    1. Are the media synthetic or manipulated?
    In determining whether media have been significantly and deceptively altered or fabricated, some factors Twitter considers include:

     

    • Whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing; 

    • Any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) that has been added or removed; and

    • Whether media depicting a real person have been fabricated or simulated.

    2. Are the media shared in a deceptive manner?
    Twitter will also consider whether the context in which media are shared could result in confusion or misunderstanding, or suggests a deliberate intent to deceive people about the nature or origin of the content, for example by falsely claiming that it depicts reality.

    Twitter also assesses the context provided alongside media, for example:

    • The text of the Tweet accompanying or within the media

    • Metadata associated with the media

    • Information on the profile of the person sharing the media

    • Websites linked in the profile of the person sharing the media, or in the Tweet sharing the media

    3. Is the content likely to impact public safety or cause serious harm?
    Tweets that share synthetic and manipulated media are subject to removal under this policy if they are likely to cause harm. Some specific harms Twitter considers include:

    • Threats to the physical safety of a person or group

    • Risk of mass violence or widespread civil unrest

    • Threats to the privacy or ability of a person or group to freely express themselves or participate in civic events, such as: stalking or unwanted and obsessive attention; targeted content that includes tropes, epithets, or material that aims to silence someone; voter suppression or intimidation
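    Taken together, the three criteria above form a decision procedure: the new rule mandates removal only when media are manipulated, shared deceptively, and likely to cause harm, while labeling is available for manipulated media more generally. The following is a minimal illustrative sketch of that logic, not Twitter's actual implementation; the function name and boolean inputs are hypothetical stand-ins for the policy questions.

    ```python
    def enforcement_action(is_manipulated: bool,
                           shared_deceptively: bool,
                           likely_to_cause_harm: bool) -> str:
        """Hypothetical mapping of the three policy criteria to an outcome.

        Mirrors the rule text: deceptively shared synthetic/manipulated
        media likely to cause harm are subject to removal; other
        manipulated media may be labeled for added context.
        """
        if not is_manipulated:
            return "no action"  # the rule applies only to altered or fabricated media
        if shared_deceptively and likely_to_cause_harm:
            return "remove"     # all three criteria met: removal under the policy
        return "may label"      # manipulated, but removal criteria not fully met
    ```

    For instance, a fabricated video shared with a caption falsely claiming it depicts reality, in a context likely to incite violence, would meet all three criteria and be subject to removal, while the same clip shared as clearly labeled satire might at most receive a label.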

     
