New platform lets developers build more accurate AI apps faster


High-quality retrieval is key to delivering the best user experience in AI search and retrieval-augmented generation (RAG) applications.
Knowledge platform Pinecone has announced new vector database capabilities combined with proprietary AI models to help developers build more accurate AI applications, faster and more easily.
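Retrieval in a RAG pipeline boils down to ranking stored documents by vector similarity to a query. Here's a minimal sketch of that idea, using a toy bag-of-words "embedding" as a stand-in for the learned embedding models and approximate-nearest-neighbor indexes a vector database like Pinecone actually provides:

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    # Real systems use learned embedding models instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query.

    This brute-force scan over every document is exactly what a
    vector database replaces with an index at scale.
    """
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "vector databases store embeddings",
    "retrieval augmented generation grounds llm answers",
    "a recipe for banana bread",
]
print(retrieve("retrieval grounding generation", docs, k=1))
# → ['retrieval augmented generation grounds llm answers']
```

Better retrieval quality, as the announcement stresses, comes from better embeddings and ranking, not from the scan itself.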
Managing the shift to machine-to-machine communication [Q&A]


As AI continues to evolve, it will enable machines to communicate in new, dynamic, autonomous ways without human intervention.
This machine-to-machine growth has huge potential to impact industries from smart factories to energy. We spoke to John Kim, CEO of API communication specialist Sendbird, to discuss these changes and how they will affect business.
The crucial role of data pipelines in building strong GenAI apps [Q&A]


For GenAI to live up to its promise, a reliable flow of data is key. AI models are only as good as the data pipeline connections bringing in quality data.
Outdated connections mean more hallucinations and untrustworthy results, with data engineers left hopelessly trying to manually integrate hundreds of AI data feeds. We spoke to Rivery co-founder and CEO Itamar Ben Hemo to discuss why good data pipelines are key to success.
Addressing AI challenges for the enterprise [Q&A]


With more and more businesses keen to benefit from the possibilities that AI offers, it seems like everyone is jumping on the bandwagon. But this raises a number of implementation and management challenges, especially now as enterprise AI workloads begin to scale.
We spoke to Tzvika Zaiffer, solutions director at Spot by NetApp, to discuss how these challenges can be addressed and the best practices that are emerging to ensure that implementations go smoothly.
Google calls the AI fuzz to find vulnerabilities


Not familiar with 'fuzzing'? It's a software testing technique that involves feeding invalid, unexpected, or random data into a program to detect coding errors and security vulnerabilities.
Back in August 2023, Google introduced AI-Powered Fuzzing, using large language models (LLMs) to improve fuzzing coverage and find more vulnerabilities automatically -- before malicious attackers could exploit them.
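The technique can be sketched in a few lines. The `parse_age` target and its planted length bug below are hypothetical stand-ins; real fuzzers like those in Google's OSS-Fuzz are coverage-guided and corpus-driven rather than purely random:

```python
import random
import string

def parse_age(text):
    # Hypothetical target with a planted bug: inputs longer than
    # 10 characters trigger a crash instead of a clean rejection.
    if len(text) > 10:
        raise RuntimeError("simulated buffer overrun")
    return int(text)  # raises ValueError on non-numeric input

def fuzz(target, trials=1000, seed=0):
    """Feed random strings to `target`, collecting any input that
    raises something other than the expected ValueError."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        length = rng.randint(0, 20)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except ValueError:
            pass  # clean rejection of malformed input -- not a bug
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

crashes = fuzz(parse_age)
print(f"found {len(crashes)} crashing inputs")
```

Every crash the loop records points at the length bug, which is the whole appeal of fuzzing: it surfaces failure cases no one thought to write a test for.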
Why you might soon find yourself talking to adverts


We've probably all shouted at an advert on TV or muttered darkly at one that pops up when surfing the web, but how would you feel about ads you can actually converse with?
Communications company GMS has developed Generative Response Ads, a technology that enables consumers to engage in real-time conversations directly within ad spaces using AI.
With AI agents, Microsoft aims to change the way you work


We're constantly told that AI will make our lives easier by taking on the tedious everyday tasks that we don't really like doing. Who wouldn't want an AI agent to do some of their office donkey work?
That's what Microsoft is offering with new out-of-the-box, purpose-built agents in Microsoft 365 Copilot that will take on unique roles, working alongside or on behalf of a team or organization to handle simple, mundane tasks as well as complex, multi-step business processes.
Use of GenAI in development raises security concerns


Most developers (85 percent) and security teams (75 percent) have security concerns over relying on GenAI to develop software.
A report from Legit Security, based on a survey of over 400 security professionals and software developers across North America, finds 96 percent of security and software development professionals report that their companies use GenAI-based solutions for building or delivering applications.
Meet Daisy, the AI granny designed to waste scammers' time


We all know how frustrating it can be to get scam phone calls, whether they're pretending to be your bank or trying to claim your computer needs fixing.
Of course, it can be fun to keep them talking and string them along for a while, but most of us don't have the time to do that. Now, though, UK telco Virgin Media O2 has created an AI pensioner specifically designed to waste the scammers' time so we don't have to.
AI redefines priorities for IT leaders


A new survey from Flexera shows that 42 percent of IT leaders believe that integrating AI would make the biggest difference to their organizations.
The study surveyed 800 IT leaders from the US, UK, Germany and Australia to determine how IT decision makers' priorities have evolved over the past 12 months and outline their focus for next year.
New defense suite is designed to secure AI workloads


As organizations increasingly adopt AI capabilities, the most common and dangerous attacks often go undetected by static code scanning or traditional security methods.
The only effective way to stop common AI attacks, such as prompt injection and zero-day vulnerabilities, is through active runtime detection and defense. Operant AI is launching a new 3D Runtime Defense Suite aimed at protecting live cloud applications, including AI models and APIs in their native environments.
New tool helps prepare workforces for cyber threats


Humans are generally the weakest link in the cybersecurity chain, so training and awareness are essential alongside technology to keep organizations safe.
With the launch of its AI Scenario Generator, Immersive Labs enables organizations to seamlessly generate threat scenarios for crisis simulations to ensure their workforces are ready for the latest threats.
Navigating the world of disinformation, deepfakes and AI-generated deception [Book Review]


Online scams aren't anything new, but thanks to artificial intelligence they're becoming more sophisticated and harder to detect. We've also seen a rise in disinformation and deepfakes, many of them made possible, or at least more plausible, by AI.
This means that venturing onto the internet is increasingly like negotiating a digital minefield. With FAIK, Perry Carpenter, risk management specialist at KnowBe4, sets out to dissect what makes these threats work and the motivations behind them, as well as offering some strategies to protect yourself.
AI degradation -- what is it and how do we address it? [Q&A]


Many in the industry believe that AI is degrading because it's being starved of human-generated data. This leads to models being trained on the output of older models, which increases the risk of hallucinations and errors.
But how big an issue is this, and what can we do to fix it? We spoke to Persona CEO and co-founder Rick Song to find out.
Businesses turn to humans to combat AI threats


A new survey from HackerOne shows 67 percent of respondents believe an external, unbiased review of GenAI is the most effective way to uncover AI safety and security issues as AI red teaming gathers momentum.
Nearly 10 percent of security researchers now specialize in AI technology, and 48 percent of security leaders consider AI to be one of the greatest risks to their organizations, according to the report -- based on data from 500 global security leaders and more than 2,000 hackers on the HackerOne platform.
© 1998-2025 BetaNews, Inc. All Rights Reserved.