Meta is training its AI using an entire nation’s data… with no opt-out


The question of how to train and improve AI tools is one that triggers fierce debate, and it has come into sharp focus as it becomes clear just how Meta is teaching its own artificial intelligence.
The social media giant is -- perhaps unsurprisingly to many -- using data scraped from Facebook and Instagram posts, but only in Australia. Why Australia? Unlike Europe, where the General Data Protection Regulation (GDPR) required Meta to give users a way to opt out of having their data used in this way, Australian users have not been afforded the same opportunity. What does this mean?
Tweak your settings in X if you don't want Elon Musk using your data to train Grok AI


Artificial intelligence is underpinned by the data used to train it, and even at this early stage of the game this has already proved controversial. In addition to complaints about the use of copyrighted content to train AIs, concern has also been voiced about the use of personal data.
Elon Musk, unsurprisingly, wants a slice of the AI pie and is looking to train up the Grok AI model. X, the social platform formerly known as Twitter, is being used as a source of training data, meaning that your tweets (sorry, posts) are, by default, being sucked up for this very purpose.
Apple Developer Academy to equip students with AI skills


Apple announced today an extensive expansion of its educational offerings, introducing a dedicated artificial intelligence (AI) curriculum for all Apple Developer Academy students and mentors. Starting this fall, the program will be accessible to learners across 18 academies in six countries, including Brazil, Indonesia, Italy, Saudi Arabia, South Korea, and the United States.
This initiative aims to integrate AI training as a fundamental component of the Apple Developer Academy's core curriculum, alongside established areas such as coding, design, and marketing. The curriculum is designed to prepare students for the burgeoning iOS app economy and beyond, equipping them with the skills necessary to contribute effectively to their local communities and the global market.
Hitachi and Microsoft forge billion-dollar alliance to spearhead generative AI revolution


Hitachi and Microsoft have announced a strategic collaboration projected to be worth billions of dollars, aiming to drive generative AI innovations across various sectors.
As part of the agreement, Hitachi will integrate Microsoft's array of technologies, including Azure OpenAI Service, Dynamics 365, Copilot for Microsoft 365, and GitHub Copilot, into its Lumada solutions. This move is poised to enhance the Lumada business, targeting a revenue goal of 2.65 trillion yen (approximately $18.9 billion) for FY2024, and to streamline processes for its 270,000 employees.
ZT Systems unveils ACX200 platform with NVIDIA Blackwell Superchip for advanced AI


ZT Systems today announced its ACX200 platform featuring the NVIDIA GB200 Grace Blackwell Superchip. Powered by NVIDIA Blackwell Tensor Core GPUs and NVIDIA Grace CPUs interconnected via lightning-fast NVIDIA NVLink technology, the ZT Systems ACX200 is a liquid-cooled rack-integrated server platform designed for accelerated computing. This combination of NVIDIA Blackwell GPUs with ZT’s hyperscale-focused design and integration capabilities enables cutting-edge accelerated computing for exascale AI training and inference workloads.
“The ZT Systems ACX200 accelerates our customers’ capability to deliver AI at unprecedented scale, with dramatically improved performance and energy efficiency,” said Tom Lattin, ZT Systems VP, Platform Engineering. “This system is designed to help the most advanced service providers harness the power of NVIDIA Blackwell as they roll out next-generation artificial intelligence applications. With innovative rack- and cluster-level configurations optimized for their workloads and ZT’s worldwide deployment expertise, customers can rapidly operationalize this powerful technology across their global datacenter footprints.”
Poisoning the data well for Generative AI


The secret to generative AI’s success is data: vast volumes of it are used to train the large language models (LLMs) that underpin generative AI’s ability to answer complex questions and to find and create new content. Good quality data leads to good outcomes. Bad, deliberately poisoned, or otherwise distorted data leads to bad outcomes.
As ever more organizations implement generative AI tools into their business systems, it’s important to reflect on what attackers can do to the data on which generative AI tools are trained.
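The mechanics are easy to illustrate with a toy example. The sketch below is plain Python and nothing like real LLM training: it builds a frequency-based lookup "model" from a hypothetical corpus of question-and-answer records, and a handful of injected records targeting a fact that is rare in the clean data is enough to flip the model's output, while well-represented facts are untouched. All names and figures here are made up for illustration.

```python
# Toy illustration of training-data poisoning (not how real LLM training works):
# this "model" simply learns whichever answer appears most often for a question,
# so a small, targeted injection can flip what it "knows" about a rare fact.
from collections import Counter, defaultdict

# Hypothetical clean corpus: one fact is well represented, another is rare.
clean_corpus = (
    [("capital of france", "paris")] * 50
    + [("author of dune", "frank herbert")] * 3   # rare fact in the clean data
)

# Attacker injects only a few poisoned records aimed at the rare fact.
poisoned_records = [("author of dune", "someone else")] * 5

def train(corpus):
    """'Train' by counting answer frequencies per question."""
    model = defaultdict(Counter)
    for question, answer in corpus:
        model[question][answer] += 1
    return model

def answer(model, question):
    """Return the most frequent answer seen during training."""
    return model[question].most_common(1)[0][0]

poisoned_model = train(clean_corpus + poisoned_records)
print(answer(poisoned_model, "capital of france"))   # still "paris"
print(answer(poisoned_model, "author of dune"))      # now "someone else"
```

Real models are far more complex, but the same asymmetry applies: an attacker does not need to outweigh the whole corpus, only the slice of it that covers the fact or behavior they want to distort.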
Google can now train AI with Reddit posts thanks to new Data API access deal


Google and Reddit have "expanded and deepened" their existing partnership in a deal that will give the search giant near-unfettered access to content posted on the site. As well as helping users to find Reddit content via the search engine, the move also means that Redditors' posts will be used to train Google's AI models.
Reddit says that in giving Google access to its Data API, it will be "easier to discover and access the communities and conversations people are looking for". The deal is being pushed by both companies as a way of promoting the "open internet", but it is proving controversial.
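For a sense of the kind of public content the Data API exposes, here is a minimal sketch using the third-party PRAW wrapper (pip install praw). This is purely illustrative of the API's public, read-only surface, not of how Google's licensed access is arranged; the credentials and subreddit name are placeholders.

```python
# Illustrative only: pull titles and bodies of recent top posts from one community
# via Reddit's Data API, using the PRAW wrapper in read-only mode.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder app credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="example-script/0.1 by u/your_username",
)

for submission in reddit.subreddit("MachineLearning").top(time_filter="week", limit=5):
    print(submission.title)
    print(submission.selftext[:200])     # first 200 characters of the post body
```

Content like this, posted publicly by users, is exactly the sort of conversational text that makes the deal attractive for training AI models, and exactly why it is contentious.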