Articles about EU AI Act

EU urged to pause rollout of new AI rules

Businesses and politicians are calling on the EU to pause the rollout of its wide-ranging new AI Act, which became law last year. The act relies on technical standards that have been slow to emerge, and provisions such as the new rules for General-Purpose AI (GPAI) models, due to apply from August 2, still lack essential guidance.

Swedish prime minister Ulf Kristersson has called the new rules ‘confusing’ and said he worried that continuing the rollout could leave Europe falling behind technologically, or leave specific applications unavailable on the European market.

Continue reading

Compliance with new European legislation increases info security workloads

A new report finds that 90 percent of the professionals surveyed expect compliance with DORA, the NIS2 Directive, and/or the EU AI Act to add to their workload.

The study, from cloud-based risk and compliance platform AuditBoard, shows that information security professionals feel the weight of compliance efforts most, with 38 percent expecting to be affected to a great extent, compared with 29 percent of risk management professionals and 28 percent of IT professionals. Increased workloads could lead to a greater risk of non-compliance as teams struggle to keep up with daily tasks.

Continue reading

UK government sets out plans to turbocharge AI use

Artificial intelligence will be unleashed across the UK to deliver a decade of national renewal under a new plan announced by the government.

The Prime Minister has agreed to take forward all 50 recommendations set out in the AI Opportunities Action Plan released last year, with the aim of making the UK ‘irresistible’ to AI firms looking to start, scale, or grow their business.

Continue reading

Why businesses can't go it alone over the EU AI Act

When the European Commission proposed the first EU regulatory framework for AI in April 2021, few would have imagined how quickly such systems would evolve over the following three years. Indeed, according to the 2024 Stanford AI Index, in the past 12 months alone chatbots have gone from scoring around 30-40 percent on the Graduate-Level Google-Proof Q&A (GPQA) benchmark to around 60 percent. Because GPQA questions are four-option multiple choice, random guessing scores roughly 25 percent, so chatbots have gone from being only marginally better than chance to nearly as good as the average PhD scholar.

The benefits of such technology are almost limitless, but so are the ethical, practical, and security concerns. The landmark EU AI Act (EUAIA) was adopted in March this year in an effort to address these concerns by ensuring that any AI systems used in the European Union are safe, transparent, and non-discriminatory. It provides a framework for establishing:

Continue reading

UK tech execs want more government oversight of AI

Research from IT consultancy Zartis shows 72 percent of UK tech executives want more AI regulation, and almost a third (29 percent) want 'a lot more'.

The study, carried out by Censuswide, surveyed 100 senior technology executives in the UK. Around 80 percent say that global government coordination of AI regulation is important to their company, and 41 percent say it is very important.

Continue reading

What the EU AI Act means for cybersecurity teams and organizational leaders

On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act), establishing the world’s first extensive legal framework dedicated to artificial intelligence. The act imposes EU-wide rules that emphasize data quality, transparency, human oversight, and accountability. With potential fines of up to €35 million or 7 percent of global annual turnover, whichever is higher, it has profound implications for a wide range of companies operating within the EU.

The AI Act categorizes AI systems according to the risk they pose, with stringent compliance requirements for high-risk categories. The framework prohibits certain AI practices deemed unacceptable and sets out obligations for entities involved at every stage of the AI system lifecycle, including providers, importers, distributors, and users.

Continue reading

How machine identity can close a critical AI accountability gap in the EU AI Act

European lawmakers are plowing ahead with what could be one of the most important pieces of legislation in a generation. The EU AI Act will take a notably more proactive approach to regulation than current proposals in the US and UK. But experts have spotted a critical loophole, introduced in amendments to the legislation, that could expose citizens and societies to AI risk rather than protect them from it.

In short, this loophole could undermine the entire purpose of the proposed law, and it must be closed. To do that, legislators need to prioritize machine identities as a way to enhance AI governance, accountability, security, and trust. Time is running out.

Continue reading
