AI's challenge to Internet freedom
In October 2020, observing International Internet Day, I spoke about the threats to Internet freedom. A lot has happened and changed in less than four years, but the threats have not gone away. On the contrary, Internet users and their freedoms are in more danger now than ever.
In February 2024, as we observe Safer Internet Day, it is necessary to reiterate that there is no safety without freedom, online or offline, especially now that the enemies of both are equipped with the most powerful tool for cyber oppression yet: Artificial Intelligence (AI).
AI as a tool for oppression and deception
Annual reporting by the non-profit organization Freedom House shows that Internet freedom has been declining globally for 13 consecutive years. What is new is captured in the title of the report’s latest installment, “The Repressive Power of Artificial Intelligence”: governments all over the world are using AI to restrict freedom of speech and persecute the opposition.
This oppression is both direct and indirect. Directly, AI models supercharge the detection and removal of prohibited speech online; dissenting opinions cannot spread when they are taken down so quickly. AI-based facial recognition can also help identify protesters, making it unsafe for them to have any of their images shared on social media.
Indirectly, AI advances oppressive goals by spreading misinformation. Two factors play an important role here. First, chatbots and other AI-based tools enable automation that cost-effectively distributes large volumes of false information across platforms. Second, AI tools can generate fake images, videos, and audio that distort reality. These fabrications promote general distrust of publicly available information even when they are identified as fake, and distrust, in turn, undermines people's capacity for coordinated action.
Threats to safer Internet
The AI-boosted power of governments to monitor and suppress online activity also directly threatens individual safety. Opposition leaders and ordinary citizens who express dissenting views can be cyberbullied or censored. Automation in tracking and identifying people online allows for frightening efficiency in making them disappear.
Furthermore, opposing factions, whether private or public persons or organizations, become targets of state-mandated cyberattacks. These, too, can be supercharged by new advances in AI, making them all the more dangerous and damaging. Thus, it is easy to see how AI-powered surveillance simultaneously undermines both freedom and safety.
However, threats to online safety come not only from powerful forces. The Safer Internet Day initiative is, in many ways, about how private individuals threaten one another over the Internet, from cyberbullying to identity theft. AI tools are now also readily available to any Internet user, at least to some extent. Some of the ways they are being used are particularly disturbing.
CSAM is on the rise
It is bad enough when AI technology is used to create explicit and pornographic deepfakes of adults. Both governments and private individuals do this to discredit and hurt people or for personal gratification. Even worse is when the technology is used to produce child sexual abuse material (CSAM).
AI-generated CSAM and other explicit material are already circulating online. The fact that a simple prompt is now all it takes to create child pornography presents unprecedented challenges to law enforcement and other agencies fighting for a safer Internet. Firstly, the resources available to remove all such material from websites are already insufficient, and the expected proliferation of AI-generated content will make the situation even worse.
Secondly, investigating new cases of real child abuse and tracking active abusers becomes more complicated. Investigators must now distinguish fakes and manipulated versions of previously known content from newly surfaced depictions of actual child exploitation. And in cases where the material does not depict a real child, there are legal puzzles as to how its creation and possession should be treated.
Finally, manipulating pictures of fully dressed minors to create ultra-realistic sexualized versions opens entirely new avenues for child exploitation. It is also a devastating blow to the campaign for a safer Internet.
Reversing the tide: AI for a better Internet
The fear of being flooded with AI-generated CSAM drives support for a proposed EU bill that would oblige messaging platforms to scan private messages for CSAM and grooming activity. The proposal also draws criticism stemming from a different fear: that once the EU turns to such measures, it will start slipping toward the kind of oppressive surveillance witnessed elsewhere.
While solutions balancing privacy and safety in this area are still up for discussion, organizations should take protective steps on the public Internet. AI is dangerous here because of its speed and scale: it automates content creation and other tasks that would otherwise take considerable time and resources. The answer is to make the same AI-driven automation work for good, and this is already being done.
Before a wave of AI-produced CSAM threatened the Internet, the Communications Regulatory Authority of Lithuania (RRT) had already used an AI-powered tool to remove real CSAM from websites. As part of our Project 4β, Oxylabs developed this tool pro bono to automate RRT’s tasks and improve results.
Using the data from this project, researchers from Surfshark have estimated that over 1,700 websites in the EU may contain unreported CSAM. Surfshark’s analysis shows that there is plenty to do for automated scanning solutions on the public Internet.
This is where AI can serve both Internet freedom and safety. To advance its use as a tool for good, we as a society can:
- Continue to improve AI-based web scraping to detect and accurately identify all CSAM.
- Invest in training convolutional neural networks (CNNs) to create AI models that efficiently distinguish real content from fake (see the sketch after this list).
- Equip investigative journalists with AI-based and other data collection tools so that they can extract and report information hidden by oppressive governments.
- Explore the possibilities of AI as a tool for cybersecurity, concentrating on exposing fake news while safeguarding data that can be used for personal identification.
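To make the CNN recommendation above a little more concrete, here is a minimal sketch of a binary real-versus-fake image classifier in PyTorch. It is illustrative only: the architecture, the 64x64 input size, and the random stand-in batch are assumptions for demonstration, not the actual tooling used by RRT, Oxylabs, or anyone else. A production detector would train on a large curated dataset, typically starting from a pretrained backbone rather than a tiny network like this.

```python
# Minimal sketch: a small CNN that classifies images as "real" vs "fake".
# All details here (layers, input size, dummy data) are illustrative assumptions.
import torch
import torch.nn as nn

class RealVsFakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB input, 64x64
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)     # logits: [real, fake]

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = RealVsFakeCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch of 8 random 64x64 images with dummy labels (0 = real, 1 = fake);
# in practice this would come from a curated, labeled dataset.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.4f}")
```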
This is, of course, just the beginning. Other ways in which AI can enhance our cybersecurity will emerge as the field continues to develop.
Summing up
When facing its threats, it is easy to forget that AI is neither good nor bad in itself. It does not have to oppress or endanger us. We can develop it to protect us, online and off.
Similarly, Internet freedom does not have to make us less safe. Safety and freedom are not opposites; thus, we do not need to sacrifice one for the other. Balanced correctly, freedom makes us safer while safety liberates.
Julius Černiauskas is CEO at Oxylabs