The company defines deepfakes as “any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.” While the definition kind of makes sense, Twitter will need to make sure its AI doesn’t classify memes as deepfakes. Expanding on this in its survey, the social network gives the example of a piece of media that could make someone sick (huh?). Another example of a policy violation is adding or removing people from the original piece of content. Twitter also outlined steps it might take to flag a tweet with doctored media:
- Place a notice next to tweets that share synthetic or manipulated media.
- Warn people before they share or like tweets with synthetic or manipulated media.
- Add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.
The survey asks basic questions such as “Should Twitter remove tweets with deepfakes or keep them with labels?” It also asks what the platform should do with various kinds of deepfakes that might harm someone’s physical safety, mental health, or reputation. If the company adopts these labels for categorizing deepfakes, it might have to deal with a lot of disputes from users. Damien Mason, a digital privacy advocate at ProPrivacy, a comparison site for privacy tools, said labeling manipulated content merely sidesteps political catastrophes.

You can take Twitter’s survey – available in English, Hindi, Arabic, Spanish, Portuguese, and Japanese – here. If you have specific feedback, you can share it using #TwitterPolicyFeedback. Wonder who’ll sort through all those tweets.