New edge appliances allow organizations to deploy AI securely


Enterprises and governments share a common interest in safeguarding private information, but the rollout of AI systems can unwittingly put sensitive data at risk.
Trusted AI specialist Seekr is announcing a new all-in-one AI system -- built for government agencies -- to ensure that AI can be deployed in air-gapped environments, standalone data centers, and contested settings.
The impact of AI in the legal sector [Q&A]


AI is changing many industries. In the legal sector it's altering how businesses operate, automating routine tasks and boosting productivity for lawyers.
We spoke to Alon Shwartz, COO and co-founder of Trellis AI, to find out more about AI’s transformative effect on the legal world.
A third of employees keep their AI use secret


A new survey finds that 32 percent of employees who use GenAI tools at work say they're keeping it a secret from their employer.
The research from Ivanti finds that some keep quiet about their AI use because they like the 'secret advantage' it offers (36 percent); others because they worry their job may be cut (30 percent); and 27 percent have AI-fueled imposter syndrome, saying they don’t want people to question their ability.
Mid-market business and IT leaders disagree on AI opportunities


IT and business leaders from UK mid-market organizations have conflicting views on the role of AI in enabling growth and driving productivity, according to new research.
The report from Node4, based on responses from over 600 IT and business leaders across multiple sectors, shows IT leaders rank investment in AI among their top two strategies for improving productivity and efficiency, but it only just makes business leaders' top five.
Combating misinformation with AI document management [Q&A]


Many organizations rush to implement AI chatbots without addressing their document management issues first, but when these systems deliver incorrect information, it can create significant risks.
But while AI is part of the problem it can also be part of the solution. We spoke to Stéphan Donzé, CEO of AODocs, to find out more.
Governance is top priority for agentic AI users


Nearly 80 percent of IT professionals responding to a new survey rank governance as 'extremely important,' underscoring the fact that while organizations are eager to innovate, they still want to do so responsibly.
The study by API management firm Gravitee looks at the use of agentic AI systems and Large Language Models (LLMs) by large and midsize companies and finds 72 percent of respondents report that their organizations are actively using agentic AI systems today.
New MCP server uses AI to help enterprises secure SaaS


Organizations often use 50 or more different security tools and, even with the help of AI, must manually interact with each one when investigating cybersecurity incidents.
A new SaaS security Model Context Protocol (MCP) server launched by AppOmni at this week's RSA Conference is designed to let security teams spend less time investigating incidents and more time taking action to fix them.
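MCP itself is an open protocol that lets AI assistants discover and call external tools through a standard interface. As a rough illustration only, a minimal sketch using the reference MCP Python SDK might expose a SaaS-security check like this; the server name, tool, and returned data are invented for the example and are not AppOmni's actual API.

# Hypothetical sketch of an MCP tool (pip install mcp); names and data are invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("saas-security")

@mcp.tool()
def list_risky_grants(app: str, min_severity: str = "high") -> list[dict]:
    """Return OAuth grants in a SaaS app at or above a severity threshold."""
    # A real server would query the vendor's API here; this returns canned data.
    return [{"app": app, "grant": "third-party-mail-sync", "severity": "critical"}]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an AI assistant can invoke it

Wired into an assistant, a request like 'show me critical grants in Salesforce' would then resolve to a single tool call rather than a manual console session.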
Cybercriminals lure LLMs to the dark side


A new AI security report from Check Point Software shows how cybercriminals are co-opting generative AI and large language models (LLMs) to undermine trust in digital identity.
At the heart of these developments is AI's ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake.
The challenges of using AI in software development [Q&A]


Artificial intelligence has found its way into many areas, not least software development. But using this technology isn't without problems around security, code quality and more.
We talked to Vibor Cipan, senior manager -- developer relations, community and success -- at AI coding agent Zencoder, to find out more about the challenges of AI development and how to address them.
Would you trust a robot lawyer?


A new survey for Robin AI reveals that while nearly one in three people would be open to letting a robot lawyer represent them in court, the vast majority would only do so if a human lawyer were overseeing the process.
The research carried out by Perspectus Global polled a sample of 4,152 people across the US and UK and finds that on average, respondents say they would need a 57 percent discount to choose an AI lawyer over a human.
AI is challenging organizations to rethink cyber resilience


A new report from managed security services company LevelBlue reveals that organizations are forging ahead with AI innovations despite increased security concerns.
The report shows AI-powered attacks, such as deepfakes and synthetic identity attacks, are expected to rise in 2025, yet many organizations remain unprepared. Only 29 percent of executives say they are ready for AI-powered threats, even though 42 percent believe such attacks will happen.
Navigating data privacy and security challenges in AI [Q&A]


As artificial intelligence (AI) continues to reshape industries, data privacy and security concerns are escalating. The rapid growth of AI applications presents new challenges for companies in safeguarding sensitive information.
Emerging advanced AI models like DeepSeek, developed outside the US, underscore the risks of handling critical data. We spoke to Amar Kanagaraj, CEO of Protecto -- a data guardrail company focused on AI security and privacy -- to get his insights on the most pressing AI data protection challenges.
Crisis in 'digital dexterity' threatens AI investments


A new study shows that 92 percent of IT leaders believe the new era of digital transformation will increase digital friction and that less than half (47 percent) of employees have the requisite digital dexterity to adapt to technological changes.
The report from digital employee experience (DEX) specialist Nexthink, based on a survey of 1,100 IT decision makers worldwide, shows a further 88 percent expect workers to be daunted by new technologies such as generative AI.
Popular LLMs produce insecure code by default


A new study from Backslash Security looks at seven current versions of OpenAI's GPT, Anthropic's Claude and Google's Gemini to test the influence varying prompting techniques have on their ability to produce secure code.
Three tiers of prompting techniques, ranging from 'naive' to 'comprehensive,' were used to generate code for everyday use cases, and the output was measured for resilience against 10 Common Weakness Enumeration (CWE) categories. The results show that although secure code output improves with prompt sophistication, all of the LLMs produced insecure code by default.
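The study's own prompts and scoring harness aren't reproduced here, but SQL injection (CWE-89) -- a classic example of the kind of weakness such tests cover, though not confirmed as one of the ten used -- illustrates the gap. A minimal sketch, assuming a Python/sqlite3 setting, contrasts the string-built query a naive prompt typically yields with the parameterized form a security-aware prompt elicits.

import sqlite3

def find_user_naive(conn: sqlite3.Connection, name: str):
    # Insecure (CWE-89): interpolated input can rewrite the query;
    # name = "x' OR '1'='1" would return every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(conn: sqlite3.Connection, name: str):
    # Safe: the placeholder binds the input as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()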
AI tools present critical data risks in the enterprise


New research shows that 71.7 percent of workplace AI tools carry high or critical risk, with 39.5 percent inadvertently exposing user interaction/training data and 34.4 percent exposing user data.
The analysis from Cyberhaven draws on the actual AI usage patterns of seven million workers, providing an unprecedented view into the adoption patterns and security implications of AI in the corporate environment.