How attackers can manipulate social media recommendations

AI-based recommendations are something we encounter all the time. On shopping sites, streaming services and social media, we're constantly shown content the AI thinks we'll like.

But how easy would it be for an attacker to manipulate these recommendations to promote conspiracy theories or spread disinformation?

Andy Patel, a researcher with cyber security company F-Secure's Artificial Intelligence Center of Excellence, recently completed a series of experiments to learn how simple manipulation techniques can affect AI-based recommendations on a social network.

Patel collected data from Twitter and used it to train collaborative filtering models (a type of machine learning used to encode similarities between users and content based on previous interactions) for use in recommendation systems. He then carried out experiments that involved retraining these models on data sets containing additional retweets between selected accounts (thereby poisoning the data) to see how the recommendations changed.
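To make the idea concrete, here is a minimal sketch of the kind of collaborative filtering model the research describes. It is not Patel's actual code; the data, account IDs and function names are illustrative assumptions. Retweet interactions are encoded as a binary user-by-account matrix, factorised, and account-to-account similarities are read off the resulting embeddings to produce "accounts you might like" recommendations.

```python
# Illustrative sketch only: a toy collaborative filtering model built from
# synthetic retweet data, not the code used in the F-Secure experiments.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_accounts, n_factors = 200, 50, 16

# Hypothetical interaction data: interactions[u, a] = 1 if user u retweeted account a.
interactions = (rng.random((n_users, n_accounts)) < 0.05).astype(float)

def train_account_embeddings(matrix, k):
    """Factorise the interaction matrix and return one embedding per account."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (np.diag(s[:k]) @ vt[:k]).T          # shape: (n_accounts, k)

def most_similar_accounts(embeddings, account_id, top_n=5):
    """Rank accounts by cosine similarity to the given account's embedding."""
    norms = np.linalg.norm(embeddings, axis=1) + 1e-12
    sims = embeddings @ embeddings[account_id] / (norms * norms[account_id])
    ranked = np.argsort(-sims)
    return [a for a in ranked if a != account_id][:top_n]

embeddings = train_account_embeddings(interactions, n_factors)
print("Accounts recommended alongside account 0:",
      most_similar_accounts(embeddings, account_id=0))
```

In a model like this, accounts end up close together simply because the same users interact with both of them, which is exactly the property the injected retweets exploit.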

By selecting appropriate accounts to retweet, and by varying both the number of accounts performing the retweets and the number of retweets they published, Patel found that even a very small number of retweets was enough to manipulate the recommendation system into promoting the accounts whose content was shared through the injected retweets.
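A rough illustration of that retweet-injection step follows. It is an assumption of how such a poisoning experiment might be scripted rather than the study's own method: a handful of controlled accounts retweet both a popular "anchor" account and the account being promoted, the toy model is retrained, and the promoted account's rank in the anchor's recommendations is compared before and after.

```python
# Illustrative poisoning sketch on synthetic data, not the study's own code.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_accounts, k = 200, 50, 16
interactions = (rng.random((n_users, n_accounts)) < 0.05).astype(float)

def account_embeddings(matrix, k):
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (np.diag(s[:k]) @ vt[:k]).T

def rank_of(embeddings, anchor, target):
    """Position of `target` in the similarity ranking around `anchor` (0 = top)."""
    norms = np.linalg.norm(embeddings, axis=1) + 1e-12
    sims = embeddings @ embeddings[anchor] / (norms * norms[anchor])
    order = [a for a in np.argsort(-sims) if a != anchor]
    return order.index(target)

POPULAR, PROMOTED = 3, 42   # hypothetical account ids
N_SHILLS = 5                # number of controlled accounts injecting retweets

print("rank before poisoning:",
      rank_of(account_embeddings(interactions, k), POPULAR, PROMOTED))

# Inject the poisoned retweets: each controlled account retweets both the popular
# account and the promoted account, creating a co-occurrence the model reads as similarity.
poisoned = interactions.copy()
shills = rng.choice(n_users, size=N_SHILLS, replace=False)
poisoned[shills, POPULAR] = 1.0
poisoned[shills, PROMOTED] = 1.0

print("rank after poisoning:",
      rank_of(account_embeddings(poisoned, k), POPULAR, PROMOTED))
```

Because only a few extra interactions are needed to pull two embeddings together, the attack scales cheaply, which matches the finding that even a very small number of injected retweets shifted the recommendations.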

Of course, the social networks themselves use algorithms to produce recommendations and promote stories. Patel explains how it's possible to game the system. "The way to attack these things is called 'shilling', essentially what you do is you create a large number of users in the system. Then you perform actions with those users in order to like, promote or demote a piece of content or an item in a store. So, essentially the way this works is that people come together and collaborate and perform these actions, or people create false or fake accounts, and then they perform actions like re-tweeting and liking things until they get an understanding of the effect."

This often relies on networks of fake accounts that all follow each other. Most of these don't post original tweets of their own; instead they amplify tweets from elsewhere, generating large numbers of retweets across the network.

"On Twitter is that there are hundreds of thousands of these fake accounts," adds Patel. "I think it's quite common for people in these circles to have their main account and a bunch of other accounts that they control. They do this stuff manually, it's not automated, it's not really bots, it's just people with several accounts and they use those to try and promote the content that they want to promote. But given how these social networks work, it probably means that some of the time they're just promoting that content to other people who are doing similar stuff."

What can social networks do to protect themselves against this kind of manipulation? Patel suggests they need to cut off the influence at its source. "They tend to suspend the accounts that are close to the original content that gets retweeted but they don't do anything to those accounts doing the retweets." This could involve using some form of validation, such as requiring ID from users when they open an account.

You can read more about the research on the F-Secure blog.

