The new crisis misinformation strategy attempts to reduce the spread of incorrect or misleading information.
Posts judged to be in violation of the new crisis misinformation policy are buried behind a notice of possible "damage to crisis-affected communities". The tweet, however, is still viewable with a click of a button for "accountability purposes."
Tweets containing the following types of claims will be flagged with a warning:
- False news coverage, event reporting, or information that misrepresents the situation on the ground
- False charges about the use of force, invasions of territorial sovereignty, or weapon use
- Allegations of war crimes or mass atrocities against specific communities that are demonstrably incorrect or deceptive
- False information about the international community's response, sanctions, defensive activities, or humanitarian efforts
Infringing content will not be amplified or recommended across the platform, and likes, retweets, and shares will be disabled as well. The notice also contains a link to further information about Twitter's approach to crisis misinformation. According to Yoel Roth, Twitter's head of safety and integrity, strong commentary, efforts to debunk or fact-check, and personal anecdotes or first-person accounts are not covered by the new policy.
"In addition to our previous work to make trustworthy information more available during crisis events," Roth stated, "this new strategy will help us prevent the spread of the most conspicuous, erroneous content, particularly those which might cause significant harm."
The social network requires verification from multiple reliable, publicly available sources, such as conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and others, to assess the truth of online claims.
These new guidelines are now focused on international armed conflict—specifically, the situation in Ukraine—but Twitter wants to "update and broaden the policy" to encompass other types of crises.