Will AI transform how we secure APIs? [Q&A]


Digital services, including Generative AI, rely heavily on Application Programming Interfaces (APIs) to access and relay data. But securing these conduits can be difficult, so is this a problem that AI could help solve?
We spoke to James Sherlow, systems engineering director, EMEA, at Cequence Security, to find out how Generative AI might be used to address API security.
Google launches new AI risk assessment tool


Last year Google launched its Secure AI Framework (SAIF) to help people safely and responsibly deploy AI models.
Today it's adding to that with a new tool that can help others assess their security posture, apply these best practices, and put SAIF principles into action.
AI expected to be the most important tech in 2025


A new study by the IEEE -- the world's largest technical professional organization -- focuses on what are likely to be the most important technologies in 2025, along with future technology trends, including expectations for AI's market growth, benefits, uses, and skill sets.
The study surveyed over 350 CIOs, CTOs, IT directors, and other technology leaders in Brazil, China, India, the UK and US at organizations with more than 1,000 employees. It finds that 58 percent believe AI will be the most important technology next year, while 26 percent say cloud computing and 24 percent robotics.
Majority of SaaS applications and AI tools are unmanaged


A new report reveals that 90 percent of SaaS applications and 91 percent of AI tools within enterprises remain unmanaged, suggesting a widespread vulnerability that continues to grow.
The study from Grip Security highlights the limitations of traditional security strategies in combating 'SaaS risk creep': the number of SaaS applications used in an enterprise has increased by 40 percent over the last two years.
The CEO's digital playbook for 2025 [Q&A]


As we head towards the end of the year, the pace of challenges posed by technologies like AI shows no signs of letting up.
So what should CEOs be doing to ensure that their workforces are equipped to deal with the changes and ensure that their business remains competitive? We spoke to Mike Lee, general manager at AND Digital, to find out.
Cloud attacks grow in cost and scale


A new report from Sysdig highlights the growing cost and scale of cloud attacks and the evolution of tactics being used by attackers.
Among the findings is that over $100,000 per day can be lost to AI resource jacking. It hasn't taken long for threat actors to leverage stolen cloud access to exploit large language models (LLMs), as illustrated by an LLMjacking attack that left one victim on the hook for $30,000 in just three hours; left unchecked, such an operation can cost more than $100,000 per day.
Good observability drives productivity for developer and ops teams


A new report from Splunk looks at the role of observability within today's increasingly complex IT environments.
Based on a survey of 1,850 ITOps and developer professionals, it finds enterprises with good observability resolve issues faster, boost developer productivity, control costs and improve customer satisfaction. Due to such benefits, 86 percent of all respondents plan to increase their observability investments.
Hanging on the telephone set to be replaced by messaging services


It was 175 years ago that Italian inventor Antonio Meucci came up with the technology that would later be improved and popularized by Alexander Graham Bell to become the telephone.
New research from cloud communications company Sinch finds that newer technologies are starting to change how we communicate -- particularly with businesses -- offering richer, more interactive, and personalized experiences.
AI outpaces ability to secure data


The rush to embrace AI, especially generative AI and large language models, has outpaced most organizations' ability to keep their data safe and effectively enforce security protocols, according to a new report.
The study from Swimlane shows that 74 percent of cybersecurity decision-makers are aware of sensitive data being input into public AI models, even though 70 percent have established protocols in place.
Over 80 percent of hackers believe the AI threat landscape is moving too fast to secure


A new report from Bugcrowd finds 82 percent of ethical hackers and researchers on the platform believe that the AI threat landscape is evolving too fast to adequately secure.
Based on responses from 1,300 users of the platform, the report also finds that 71 percent say AI adds value to hacking, compared to only 21 percent in 2023. In addition, hackers are increasingly using generative AI solutions, with 77 percent now reporting the adoption of such tools -- a 13 percent increase from 2023.
Don't fancy making that presentation? Let your avatar do it


New research finds that 95 percent of workers would allow an AI avatar to perform tasks in a virtual meeting, such as making presentations, for them.
The study of 4,000 people worldwide from business travel management platform TravelPerk shows employees prefer to assign admin-focused tasks in meetings to AI avatars, such as reminding them of deadlines (61 percent) or scheduling meetings (54 percent), which can then free them to focus on more 'human' interactions.
Why safe use of GenAI requires a new approach to unstructured data management [Q&A]


Large language models generally train on unstructured data such as text and media. But most enterprise data security strategies are designed around structured data (data organized in traditional databases or formal schemas).
The use of unstructured data in GenAI introduces new challenges for governance, privacy and security that these traditional approaches aren't equipped to handle.
CISOs concerned about attackers using AI


Data from a recent survey conducted by RSA Conference shows that 72 percent of Fortune 1000 CISOs say they have already seen threat actors using generative AI against their organization.
AI-generated phishing emails are the top threat, with 70 percent of CISOs reporting that they've observed highly tailored phishing emails targeting their business. Other top GenAI threats include vishing (37 percent), automated hacking (22 percent), deepfakes (21 percent) and misinformation (17 percent).
Evaluating LLM safety, bias and accuracy [Q&A]


Large language models (LLMs) are making their way into more and more areas of our lives. But although they're improving all the time, they're still far from perfect and can produce some unpredictable results.
We spoke to Anand Kannappan, CEO of Patronus AI, to discuss how businesses can adopt LLMs safely and avoid the pitfalls.
Public sector and infrastructure come under attack as malicious web requests rise


The number of malicious web requests rose by 53.2 percent in the first half of 2024, compared to the same period last year, according to a new study.
The report from German cybersecurity company Myra finds that for the first quarter of 2024, the number of malicious requests on websites, online portals and web APIs increased by 29.8 percent compared to 2023. In the second quarter, the growth was even more pronounced at 80 percent.
BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.
© 1998-2025 BetaNews, Inc. All Rights Reserved. About Us - Privacy Policy - Cookie Policy - Sitemap.