Governance is top priority for agentic AI users

Nearly 80 percent of IT professionals responding to a new survey rank governance as 'extremely important,' underscoring that while organizations are eager to innovate, they still want to do so responsibly.

The study by API management firm Gravitee looks at the use of agentic AI systems and large language models (LLMs) by large and midsize companies and finds that 72 percent of respondents report their organizations are actively using agentic AI systems today.

New MCP server uses AI to help enterprises secure SaaS

Organizations often use 50 or more different security tools and, even with the help of AI, must manually interact with each one when investigating cybersecurity incidents.

A new SaaS security Model Context Protocol (MCP) server launched by AppOmni at this week's RSA Conference is designed to let security teams spend less time investigating incidents and more time taking action to fix them.
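MCP standardizes how an AI client discovers and invokes tools exposed by a server. AppOmni's implementation isn't detailed here, so the sketch below is only an illustrative, stdlib-only model of the tool-registry pattern an MCP server builds on; the tool name and findings are hypothetical, not AppOmni's actual API.

```python
# Illustrative sketch of the tool-registry pattern behind an MCP server.
# The tool name and returned data are hypothetical, not AppOmni's API.
import json

TOOLS = {}

def tool(name):
    """Register a callable so a client can discover and invoke it by name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("list_alerts")
def list_alerts(severity="high"):
    # A real server would query each SaaS security product here.
    return [{"id": "A-1", "severity": severity, "app": "example-crm"}]

def handle_request(raw):
    """Dispatch a JSON request of the form {"tool": ..., "args": {...}}."""
    req = json.loads(raw)
    result = TOOLS[req["tool"]](**req.get("args", {}))
    return json.dumps({"result": result})

print(handle_request('{"tool": "list_alerts", "args": {"severity": "high"}}'))
```

The point of the pattern is that the AI client needs no per-product integration code: it asks the server what tools exist, then calls them by name, which is what lets one MCP server front many security products.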

Cybercriminals lure LLMs to the dark side

A new AI security report from Check Point Software shows how cybercriminals are co-opting generative AI and large language models (LLMs) in order to damage trust in digital identity.

At the heart of these developments is AI's ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake.

The challenges of using AI in software development [Q&A]

Artificial intelligence has found its way into many areas, not least software development. But using this technology isn't without problems around security, code quality and more.

We talked to Vibor Cipan, senior manager for developer relations, community and success at AI coding agent company Zencoder, to find out more about the challenges of AI development and how to address them.

Would you trust a robot lawyer?

A new survey for Robin AI reveals that while nearly one in three people would be open to letting a robot lawyer represent them in court, the vast majority would only do so if a human lawyer was overseeing the process.

The research carried out by Perspectus Global polled a sample of 4,152 people across the US and UK and finds that on average, respondents say they would need a 57 percent discount to choose an AI lawyer over a human.

AI is challenging organizations to rethink cyber resilience

A new report from managed security services company LevelBlue reveals that organizations are forging ahead with AI innovations despite increased security concerns.

The report shows AI-powered attacks, such as deepfakes and synthetic identity attacks, are expected to rise in 2025, but many organizations remain unprepared. The report finds that only 29 percent of executives say they are prepared for AI-powered threats, despite nearly half (42 percent) believing such attacks will happen.

Navigating data privacy and security challenges in AI [Q&A]

As artificial intelligence (AI) continues to reshape industries, data privacy and security concerns are escalating. The rapid growth of AI applications presents new challenges for companies in safeguarding sensitive information.

Emerging advanced AI models like Deepseek, developed outside the US, underscore the risks of handling critical data. We spoke to Amar Kanagaraj, CEO of Protecto -- a data guardrail company focused on AI security and privacy -- to get his insights on the most pressing AI data protection challenges.

Crisis in 'digital dexterity' threatens AI investments

A new study shows that 92 percent of IT leaders believe the new era of digital transformation will increase digital friction and that less than half (47 percent) of employees have the requisite digital dexterity to adapt to technological changes.

The report from digital employee experience (DEX) specialist Nexthink, based on a survey of 1,100 IT decision makers worldwide, shows a further 88 percent expect workers to be daunted by new technologies such as generative AI.

Popular LLMs produce insecure code by default

A new study from Backslash Security looks at seven current versions of OpenAI's GPT, Anthropic's Claude and Google's Gemini to test the influence varying prompting techniques have on their ability to produce secure code.

Three tiers of prompting techniques, ranging from 'naive' to 'comprehensive,' were used to generate code for everyday use cases. Code output was measured by its resilience against 10 Common Weakness Enumeration (CWE) use cases. The results show that although secure code output success rises with prompt sophistication, all LLMs generally produced insecure code by default.
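Only the outer tier names ('naive' and 'comprehensive') come from the study; the sketch below is a hypothetical illustration of how such tiers might be constructed, with each tier layering more explicit security requirements onto the same task.

```python
# Hypothetical sketch of tiered prompting. Only the tier names 'naive'
# and 'comprehensive' come from the study; the middle tier name, the
# task, and all prompt wording are illustrative assumptions.
TASK = "Write a Python function that stores a user's password."

PROMPT_TIERS = {
    "naive": TASK,
    "security_aware": TASK + " Make sure the code is secure.",
    "comprehensive": (
        TASK
        + " Hash the password with a salted, memory-hard algorithm,"
        + " never log or return the plaintext,"
        + " and avoid CWE-256 (plaintext storage of a password)"
        + " and CWE-798 (hard-coded credentials)."
    ),
}

for tier, prompt in PROMPT_TIERS.items():
    print(f"{tier}: {len(prompt)} chars")
```

The study's finding is that even the richest tier only raises the odds of secure output; it does not guarantee it, which is why the default (naive) behavior matters.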

AI tools present critical data risks in the enterprise

New research shows that 71.7 percent of workplace AI tools are high or critical risk, with 39.5 percent inadvertently exposing user interaction/training data and 34.4 percent exposing user data.

The analysis from Cyberhaven draws on the actual AI usage patterns of seven million workers, providing an unprecedented view into the adoption patterns and security implications of AI in the corporate environment.

The in-demand AI job roles and what they mean for business [Q&A]

As artificial intelligence finds its way into more and more areas, there are concerns around accuracy, security, jobs and more.

Addressing these means organizations will need to fill some new roles. To find out what they are and what impact they will have we spoke to Aimei Wei, chief technical officer and co-founder of Stellar Cyber, to get her views on the AI hiring market.

1Password adds protection for agentic AI in the enterprise

Current AI models can perform many tasks such as generating text, but these are 'prompted' -- that is, the AI isn't acting by itself. This is about to change with the arrival of agentic AI.

Gartner estimates that by 2028, 33 percent of enterprise software applications will include agentic AI, up from less than one percent in 2024, enabling 15 percent of day-to-day work decisions to be made autonomously.

Identity verification shifts in 2025 and what they mean for business and consumers [Q&A]

Generative AI is already defeating traditional identity verification (IDV) methods like knowledge-based authentication, 2FA, and more.

This shift is likely to accelerate new forms of IDV in 2025 that place a greater emphasis on being both more secure and easier for people to use. It will also drive a convergence of IDV and customer identity and access management (CIAM), which essentially gives customers more control over their identity and verification.

New watchdog platform designed to protect enterprise AI deployments

As enterprises turn to increasingly sophisticated AI applications and agentic AI workflows, the large cloud footprint required to support such complex systems has become critically difficult to secure.

To address this issue Operant AI is launching AI Gatekeeper, a runtime defense platform designed to block rogue AI agents, LLM poisoning, and data leakage wherever AI apps are deployed, securing live AI applications end-to-end beyond Kubernetes and the edge.

How agentic AI takes GenAI to the next level [Q&A]

Agentic AI has been in the news quite a bit of late, but how should enterprises expect it to impact their organizations?

We spoke to Mike Finley, CTO of AnswerRocket, to discuss agentic AI's benefits, use cases and more.
