Shadow AI a major concern for enterprise IT


A new report reveals that nearly 80 percent of IT leaders say their organization has experienced negative outcomes from employee use of generative AI, including false or inaccurate results from queries (46 percent) and leaking of sensitive data into AI (44 percent).
Notably the survey of 200 US IT directors and executives from Komprise shows that 13 percent say that these poor outcomes have also resulted in financial, customer or reputational damage.
The challenges and opportunities of generative AI [Q&A]


The promise of GenAI is undeniable, it offers transformative potential to streamline workflows, boost efficiencies, and deliver competitive advantage. Yet, for many organizations, the journey to implement AI is far from straightforward.
Obstacles typically fall into three categories: strategic, technological, and operational. We spoke with Dorian Selz, CEO and co-founder of Squirro, to explore these obstacles in more detail, as well as looking at some of the biggest misconceptions enterprises have when starting their GenAI journey.
How failure to identify AI risks can lead to unexpected legal liability [Q&A]


Use of generative AI is becoming more common, but this comes with a multitude of inherent risks, security and data privacy being the most immediate. Managing these risks may seem daunting, however, there is a path to navigate through them, but first you have to identify what they are.
We talked to Robert W. Taylor, Of Counsel with Carstens, Allen & Gourley, LLP to discuss how a failure to identify all the relevant risks can leave businesses open to to unexpected legal liabilities.
GenAI vulnerable to prompt injection attacks


New research shows that one in 10 prompt injection atempts against GenAI systems manage to bypass basic guardrails. Their non-deterministic nature also means failed attempts can suddenly succeed, even with identical content.
AI security company Pangea ran a Prompt Injection Challenge in March this year. The month-long initiative attracted more than 800 participants from 85 countries who attempted to bypass AI security guardrails across three virtual rooms with increasing levels of difficulty.
Cybercriminals lure LLMs to the dark side


A new AI security report from Check Point Software shows how cybercriminals are co-opting generative AI and large language models (LLMs) in order to damage trust in digital identity.
At the heart of these developments is AI's ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake.
How agentic AI takes GenAI to the next level [Q&A]


Agentic AI has been in the news quite a bit of late, but how should enterprises expect it to impact their organizations?
We spoke to Mike Finley, CTO of AnswerRocket, to discuss Agentic AI's benefits, use cases and more.
Organizations fix under half of exploitable vulnerabilities


The latest State of Pentesting report from Cobalt reveals that organizations are fixing less than half of all exploitable vulnerabilities, with just 21 percent of GenAI app flaws being resolved.
It also highlights a degree of over-confidence with 81 percent of security leaders saying they are 'confident' in their firm's security posture, despite 31 percent of the serious findings discovered having not been resolved.
AI contributes to a more complex privacy landscape


Despite many organizations reporting significant business gains from using GenAI, data privacy is still a major risk. Notably, 64 percent of respondents to a new survey worry about inadvertently sharing sensitive information publicly or with competitors, yet nearly half admit to inputting personal employee or non-public data into GenAI tools.
The latest Data Privacy Benchmark Study from Cisco, with input from from 2,600 privacy and security professionals across 12 countries, shows an increased focus on investing in AI governance processes, an overwhelming 99 percent of respondents anticipate reallocating resources from privacy budgets to AI initiatives in the future.
Exploring the security risks underneath generative AI services


Artificial intelligence has claimed a huge share of the conversation over the past few years -- in the media, around boardroom tables, and even around dinner tables. While AI and its subset of machine learning (ML) have existed for decades, this recent surge in interest can be attributed to exciting advancements in generative AI, the class of AI that can create new text, images, and even videos. In the workplace, employees are turning to this technology to help them brainstorm ideas, research complex topics, kickstart writing projects, and more.
However, this increased adoption also comes with a slew of security challenges. For instance, what happens if an employee uses a generative AI service that hasn’t been vetted or authorized by their IT department? Or uploads sensitive content, like a product roadmap, into a service like ChatGPT or Microsoft Copilot? These are some of the many questions keeping security leaders up at night and prompting a need for more visibility and control over enterprise AI usage.
70 percent of organizations are developing AI apps


Over 70 percent of developers and quality assurance professionals responding to a new survey say their organization is currently developing AI applications and features, with 55 percent stating that chatbots and customer support tools are the main AI-powered solutions being built.
The research from Applause surveyed over 4,400 independent software developers, QA professionals and consumers explored common AI use cases, tools and challenges, as well as user experiences and preferences.
How GenAI is set to change procurement [Q&A]


In recent years generative AI has made its way into many areas of business, helping to transform and streamline processes. However, its potential in the procurement space remains relatively unexplored.
We talked to Kevin Frechette, CEO of Fairmarkit, to find out how enterprises can exploit GenAI to gain agility, efficiency, and smarter decision-making in their sourcing decisions.
DeepSeek outperforms US models in new AI Trust Score


Chinese AI models (like DeepSeek) are outperforming US models like Meta Llama in specific categories such as sensitive information disclosure according to a new AI Trust Score introduced by Tumeryk.
It evaluates AI models across nine key factors, including data leakages, toxic content, truthfulness, and bias. This enables CISO’s to ensure their AI deployments are secure, compliant, and trustworthy, and offers developers solutions for addressing any issues in their AI applications.
How GenAI adoption introduces network and security challenges [Q&A]


Enterprises are increasingly using GenAI to transform their organization. As they move ahead, they're evaluating their preparedness from a business, safety, skills, and product level. But there's another key factor at the backend that's being overlooked: the network.
Full GenAI adoption introduces significant new challenges and demands on the network, such as bandwidth strain and unique security vulnerabilities. If these demands aren't accommodated, organizations won't realize the benefits of GenAI.
GenAI is changing enterprise priorities with privacy a major concern


The latest Enterprise Cloud Index (ECI) survey from Nutanix shows that that while 80 percent of organizations have already implemented a GenAI strategy, implementation targets vary significantly.
Organizations are eager to leverage GenAI for productivity, automation, and innovation, but they also face critical hurdles in the form of data security, compliance, and IT infrastructure modernization. 95 percent of respondents agree that GenAI is changing their organization’s priorities
IT industry today faces same issues that aggravated 1990s manufacturing: How can we take a cue from history?


Until the late 1990s, manufacturing reigned as the lifeblood of the global economy -- leading in productivity, employment, growth, and investments across all points of the world. However, once we neared the close of the 20th century, manufacturing found its Achilles heel in the compounded complexity accrued from outdated processes, an over-reliance on human labor that simply couldn’t meet its extreme needs, supply chain disruptions, and rising costs.
I fear that today, the information technology industry finds itself at all-too-familiar cross-roads. Why is this?
Recent Headlines
Most Commented Stories
© 1998-2025 BetaNews, Inc. All Rights Reserved. About Us - Privacy Policy - Cookie Policy - Sitemap.