Facebook shuts down abortion group's page for 'promotion or encouragement of drug use'


Just yesterday we wrote about the repeated closure of an atheist Facebook group by the social network. Now, in a similar act that has more than a slight whiff of censorship to it, Facebook has "unpublished" the page of Women On Web, a group that puts women who live in places that have abortion restrictions in contact with doctors.
These doctors can provide women with abortion pills, and it appears that this is why Facebook has found Women On Web to be engaged in the "promotion or encouragement of drug use". The group describes itself as "a place for the pro-abortion rights, pro-choice, and reproductive justice communities to engage in discussion and share news."
Atheist group claims Facebook keeps deleting its page


An atheist Facebook page with more than 1.6 million fans is being repeatedly deleted by the social network, claims the group behind it. Atheist Republic says that a coordinated campaign has resulted in the page being removed at least three times.
It is thought that Facebook’s automated removal process may be to blame, with algorithms taking the page offline after a series of reports. The page is used to actively criticize religion, and it is believed that an orchestrated fightback from religious groups is leading to its repeated automatic removal.
Facebook uses newspaper ads to warn about fake news and gives tips to help spot it


Facebook's fight against fake news has been taken to the printed press. The social networking giant has taken out a series of ads in UK newspapers giving tips about how to spot fake news. The ad campaign comes as Brits prepare to go to the polls and vote in the snap General Election in a month's time.
The issue of fake news really came to prominence in the run-up to the US election, and research has shown that Facebook has become a tool used as part of campaigning to spread propaganda. In addition to the print ads, Facebook has closed down thousands of UK accounts and is expanding its automated system for spotting fake news to the UK.
Facebook to hire just 3,000 more people to moderate the content of its 1.9 billion users


There's been a lot of disturbing content on Facebook in recent weeks and months -- murders, rapes, assaults, shootings and more. After a fatal shooting was shared on the site, Facebook vowed to do more to combat this sort of material.
As the social network announced that it now has 1.9 billion users, it also announced that it is to employ an extra 3,000 people to help moderate content. This brings the company's total number of moderators to 7,500. Can this possibly be enough to manage the posts of nearly 2 billion people, and who on earth would want to be tasked with viewing some of the most gruesome content to determine whether or not it should be removed?
Chrome extension Who Targets Me? reveals how Facebook is used for election propaganda


Social media is powerful, so it's really little wonder that the likes of Facebook are used for propaganda. We already know that advertising can be very carefully targeted for maximum impact, and this can prove important when it comes to getting across a political message.
With the UK on the verge of an early general election -- one that will be fought with Brexit and Scottish Independence looming large -- political campaigns are getting underway, including on Facebook. To help educate voters about how they are being besieged by political parties, a free Chrome extension called Who Targets Me? has been launched. It reveals just how personal information made available on the social network is used.
Facebook denies allowing advertisers to target people based on their emotional state


A leaked internal document shows that Facebook is capable of identifying people according to their emotional state. The document, seen by The Australian, shows how the social network can monitor users' posts and determine when they are feeling "stressed, defeated, overwhelmed, anxious, nervous, stupid, silly, useless, or a failure."
The leak pertains to Facebook's Australian office and suggests that algorithms can be used to detect "moments when young people need a confidence boost." It raises serious ethical questions about Facebook's capabilities, but the company denies it is doing anything wrong.
Twitter announces 16 live streaming partners including Bloomberg for 24-hour news


Twitter continues to place ever-increasing importance on video on its network, and the company has announced a new batch of partners that will bring a host of live-streamed programming to the service.
There are 16 streaming partners in total, including Bloomberg, which will bring a 24-hour rolling news service to Twitter. Other partners include Live Nation, which will deliver live concert performances, and the NBA, PGA and MLB to cater for the sports side of things.
Facebook updates Rights Manager so content owners can earn ad income from pirated videos


Like Google, Facebook places great importance on advertising. The social network not only earns money from ads itself, but also allows companies and individuals to do so by displaying ads in videos. Pirates were quick to spot an easy way to earn money -- steal someone else's popular video and watch the ad revenue roll in.
Now Facebook is fighting back in a way that has already been used to some extent by YouTube. There is a new "claim ad earnings" option in the Rights Manager tool which enables the owner of a particular video to bag the ad revenue when their material is pirated. But the updates to Rights Manager are more far-reaching than this.
Report: Facebook really is used for propaganda and to influence elections


It's something that many people have expected for some time, and now we know that it's true. Facebook has admitted that governments around the world have used the social network to spread propaganda and try to influence the outcome of elections.
In the run-up to the US election, there was speculation that powerful groups had been making use of Facebook to influence voters by spreading fake news. Now, in a white paper, Facebook reveals that through the use of fake accounts, targeted data collection and false information, governments and organizations have indeed been using the social network to control the news, shape the political landscape, and create different narratives and outcomes.
Google still hasn't given up on Google+ and Topics is the latest attempt to keep people on the service


Google+ finds itself the butt of many a joke, but the company behind the service is happy to proclaim that "millions of people use" it. There are already numerous ways to discover content on Google+, and now there's yet another: Topics.
This is essentially Google's take on the idea of related content, and it's a bid to keep users on the site -- or in the app -- for longer. Not content to let people read only what they set out to read, Google will now suggest additional "Topics to explore".
New report shows the number of requests for user data Facebook receives from global governments


Today Facebook publishes its Global Government Requests Report, revealing just how many data requests the social network has received from governments around the world. This time around, the report covers the second half of 2016, and it shows a mixed bag of figures.
While the number of items that had to be restricted due to contravention of local laws dropped, the number of government data requests increased by 9 percent compared to the previous six months. Facebook is well aware that it faces scrutiny and criticism for its willingness to comply with data requests, and the company tries to allay fears by saying: "We do not provide governments with 'back doors' or direct access to people's information."
Facebook is testing pre-emptive related articles in News Feed


The "related articles" feature of Facebook's News Feed is nothing new -- in fact it has been with us for more than three years. But now the social network is trialling a new way of displaying related content; rather than waiting until you have clicked on a story to suggest related stories you might be interested in, Facebook will instead be offering these suggestions before you read an article.
As well as giving users the chance to read more about a topic from different sources, Facebook says that it will help people to discover articles which have been fact-checked. It is -- almost by accident, it seems -- another way for Facebook to tackle fake news.
Facebook responds to the Cleveland murder shared on the social network


Over the weekend, it was suggested that Steve Stephens used Facebook Live to livestream himself fatally shooting a man in his 70s. He went on to use the social network to admit to other murders, as well as saying that he wanted to "kill as many people as I can."
Despite rumors of a murder having been committed live on Facebook, the social network issued a statement clarifying that, while Stephens had broadcast on Facebook Live over the weekend, the footage had actually been uploaded rather than livestreamed. Whether broadcast live or not, the story -- once again -- brings into question Facebook's content vetting procedures.
Investigation finds Facebook mods fail to remove illegal content such as extremist and child porn


That Facebook is fighting against a tide of objectionable and illegal content is well known. That the task of moderating such content is a difficult and unenviable one should come as news to no one. But an investigation by British newspaper The Times found that even when illegal content relating to terrorism and child pornography was reported directly to moderators, it was not removed.
More than this, the reporter involved in the investigation found that Facebook's algorithms actively promoted the groups that were sharing the illegal content. Among the content Facebook failed to remove were posts praising terrorist attacks and Islamic State, and others calling for more attacks to be carried out. Failure to remove illegal content once reported is, under British law, a crime in itself.
Taking the pulse of social media to drive healthcare policy


A new study from UK think tank Demos in conjunction with health charity The King's Fund looks at how the internet and in particular social media can be used to shape health policy.
It reveals that 43 percent of internet users have now used the web to access health information, up from just 18 percent in 2007. Alongside well-administered official sources, unregulated online forums have grown to be valuable spaces for users to discuss conditions and treatments, ask questions, and share advice with those who have had similar experiences.
BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.