{"id":3035,"date":"2022-01-06T10:41:26","date_gmt":"2022-01-06T09:41:26","guid":{"rendered":"https:\/\/passion.media\/?p=3035"},"modified":"2022-01-06T10:41:40","modified_gmt":"2022-01-06T09:41:40","slug":"is-identifying-a-virtual-bot-still-possible","status":"publish","type":"post","link":"https:\/\/blog.mym.com\/en\/is-identifying-a-virtual-bot-still-possible\/","title":{"rendered":"Is identifying a virtual bot still possible?"},"content":{"rendered":"\n
Hey you!
The border between the real and the virtual is becoming more and more blurred: the rise of the metaverse, the popularity of fictitious influencers… in short, the phenomenon is spreading. A recent study published by the Swiss Federal Institute of Technology in Lausanne found that 20% of global trends on Twitter are created from scratch by bots. According to the same study, no fewer than 108,000 fake accounts are involved in these operations on the platform.
Originally, it was easy to tell a bot from an account run by a real person. Today, the task is more delicate. Not only is the line blurring, but the confusion can also lead to other abuses: bots sometimes create phishing apps that can be used for disinformation, boycott, defamation and hate campaigns. This is what happened during the last American elections.
Recently, researchers from the University of Pennsylvania and Stony Brook University (New York) looked at a key question: how can human users be distinguished from bots? The idea was to determine whether it is still possible today for an ordinary user to tell an account genuinely run by a human from one managed entirely by a virtual bot. The researchers analyzed more than three million tweets written by three thousand bot accounts and as many authentic profiles, then classified them according to 17 distinctive characteristics (age, gender, personality traits, emotion, etc.). The result? It would be very difficult, if not impossible, for a human observer to tell a real profile from a fake one. So how can you protect yourself from the associated risks?
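To make that methodology concrete, here is a minimal sketch of the general idea: score accounts as human or bot from per-account features. This is not the researchers' actual pipeline; the feature names, synthetic data and classifier choice below are illustrative assumptions.

```python
# Minimal sketch (illustrative only): classifying accounts as human or bot
# from per-account features, loosely inspired by the approach described above.
# The features and synthetic data are assumptions, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-account features: posting rate, mean sentiment,
# follower/following ratio, and reply ratio.
n = 1000
humans = rng.normal(loc=[5, 0.1, 1.0, 0.4], scale=0.5, size=(n, 4))
bots = rng.normal(loc=[50, 0.0, 0.1, 0.05], scale=0.5, size=(n, 4))

X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In this toy setup the classes are easy to separate; the study's point is precisely that real bots mimic the human feature distribution closely enough that a human observer, and even simple classifiers, struggle.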
The study also shows another, rather reassuring, element. Analyzed individually, bots can hardly be detected. Analyzed as a group, however, they quickly reveal similar patterns, and their artificial side is exposed. It would therefore be possible to regulate them and integrate them into a content moderation policy. This is in line with a fundamental trend: pressure on platforms is increasing to encourage them to set up ethics committees and stricter, better-regulated publication rules. This is, for example, what the French platform MYM has chosen to put in place with an independent ethics committee, which defines the rules for publishing content on the platform and ensures they are respected. It is a way to guarantee the quality and variety of the content, the user experience and the work of the content creators. So, will this type of measure become widespread?
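To close, here is a minimal sketch of the group-level idea described above: individually, each account's feature vector looks plausible, but a coordinated bot farm tends to produce many near-identical accounts, which density-based clustering can flag. The synthetic data and thresholds are illustrative assumptions, not the study's method.

```python
# Minimal sketch (illustrative only): bots that pass one by one can still be
# caught as a group, because a bot farm reuses one template with tiny
# variations. DBSCAN flags dense clusters of near-identical accounts.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Hypothetical feature vectors: organic accounts are spread out,
# while the bot farm's accounts sit almost on top of each other.
organic = rng.normal(0.0, 1.0, size=(200, 4))
bot_farm = rng.normal(2.0, 0.05, size=(50, 4))  # near-identical accounts
X = np.vstack([organic, bot_farm])

labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)

# Any dense cluster of many near-duplicate accounts is suspicious;
# isolated accounts are labeled -1 (noise) and left alone.
for cluster in set(labels) - {-1}:
    size = int(np.sum(labels == cluster))
    print(f"cluster {cluster}: {size} near-identical accounts -> review as a group")
```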