CrowdStrike integrates Falcon cybersecurity with NVIDIA NIM Agent Blueprints to support secure generative AI development


CrowdStrike has announced its integration of the Falcon cybersecurity platform with NVIDIA NIM Agent Blueprints, aimed at helping developers securely utilize open-source foundational models and accelerate generative AI innovation.
Developing enterprise-grade generative AI applications involves a complex process that requires blueprints for standard workflows—such as customer service chatbots, retrieval-augmented generation, and drug discovery—to streamline development. Ensuring the security of these models and the underlying data is essential for maintaining the performance and integrity of generative AI applications.
97 percent of organizations worried about AI security threats


A new report from Deep Instinct shows that 97 percent of security professionals are concerned that their organization will suffer an AI-generated security incident.
In addition, 75 percent have had to change their cybersecurity strategy in the last year due to the rise in AI-powered cyber threats, with 73 percent expressing a greater focus on prevention capabilities.
Keeping AI data and workloads secure and accessible


AI is already revolutionizing whole industries and professions. New applications and projects appear regularly across every sector, limited, it seems, only by the bounds of our own imagination. That means AI workloads will be critical to organizations across the board; the question is: how can we ensure AI applications are stable, secure and accessible?
Many companies depend on trusted backups to guard against data loss and outages. From a data protection perspective this makes sense; however, backups aren't best suited to business continuity and disaster recovery (DR), particularly for the most important data and workloads, such as AI.
AI and security: It is complicated but it doesn't need to be


AI is growing in popularity, and this trend is only set to continue. Gartner, for example, predicts that approximately 80 percent of enterprises will have used generative artificial intelligence (GenAI) application programming interfaces (APIs) or models by 2026. However, AI is a broad and ubiquitous term that, in many instances, covers a range of technologies.
Nevertheless, AI represents a breakthrough in the ability to process logic differently, which is attracting attention from businesses and consumers alike as they experiment with various forms of AI today. At the same time, the technology is attracting similar attention from threat actors, who are realizing that it could be a weakness in a company's security, even as it serves as a tool that helps companies identify and address those weaknesses.
Embracing the future: How AI is transforming security and networking


Network management and security should go hand in hand. However, making these services work has become more complicated and riskier due to the growth of the public cloud, the use of software applications, and the need to integrate different solutions together.
This complex network security domain requires more skilled cybersecurity professionals. But as this need becomes obvious, so does the glaring skills gap. In the UK, half of all businesses face a fundamental shortfall in cybersecurity skills, and 30 percent grapple with more complex, advanced cybersecurity expertise deficiencies.
How organizations can stay secure in the face of increasingly powerful AI attacks


It’s almost impossible to escape the hype around artificial intelligence (AI) and generative AI. The application of these tools is powerful. Text-based tools such as OpenAI’s ChatGPT and Google’s Bard can help people land jobs, significantly cut down the amount of time it takes to build apps and websites, and add much-needed context by analyzing large amounts of threat data. As with most transformative technologies, there are also risks to consider, especially when it comes to cybersecurity.
AI-powered tools have the potential to help organizations overcome the cybersecurity skills gap. Yet the same technology that is helping companies transform their businesses is also a powerful weapon in the hands of cybercriminals. In a practice sometimes referred to as offensive AI, cybercriminals use AI to automate scripts that exploit vulnerabilities in an organization's security systems or to make social engineering attacks more convincing. There is no doubt that offensive AI represents a growing threat that security teams must prepare for.
Security researchers can pocket financial rewards in the new Microsoft AI Bounty Program


Microsoft now has a bug bounty program that aims to find issues in artificial intelligence. Specifically, the Microsoft AI Bounty Program is focused on tracking down vulnerabilities in the company’s own AI-powered "Bing experience". This catch-all term covers a surprising number of products and services.
Interestingly, with this bounty program Microsoft is only offering rewards for the discovery of vulnerabilities considered Critical or Important. Those that are deemed of Moderate or Low severity will go unrewarded.
AI for the good guys: Practical lessons for AI and cyber risk


Threat actors are early adopters. Cyber defense is brimming with uncertainties, but one dynamic you can be confident about is that threat actors will leverage everything available to exploit a target. In 2023, this means the rise of artificial intelligence-enabled attacks, from AI-generated social engineering scripts to powerful automation designed to find and exploit vulnerabilities and spread laterally through systems and networks.
Security teams therefore need to be prepared to meet the challenge of cloud-scale threats on both a technical and an organizational level. That means anticipating threats that exist beyond technical vulnerabilities, including, for example, social engineering and DDoS. This is part of the challenge of modern cybersecurity -- the attack surface comprises not just the entirety of IT infrastructure, its endpoints, and all the data it uses and stores, but also its users. It is too large to be managed effectively by hand.
Get '10 Machine Learning Blueprints You Should Know for Cybersecurity' (worth $39.99) for FREE


Machine learning in security is harder than other domains because of the changing nature and abilities of adversaries, high stakes, and a lack of ground-truth data.
This book will prepare machine learning practitioners to effectively handle tasks in the challenging yet exciting cybersecurity space. It begins by helping you understand how advanced ML algorithms work and shows you practical examples of how they can be applied to security-specific problems with Python -- by using open source datasets or instructing you to create your own.
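As an illustration of the kind of security-specific ML task the book describes, here is a minimal pure-Python sketch of a Laplace-smoothed naive Bayes classifier that flags phishing-style subject lines. The toy dataset and all function names are hypothetical examples for this summary, not material from the book, which points readers to real open-source datasets instead.

```python
import math
from collections import Counter

# Hypothetical toy corpus -- real work would use an open-source
# dataset such as those the book directs readers to.
phishing = [
    "urgent verify your account now",
    "your password expires click here",
    "claim your prize reward now",
]
benign = [
    "meeting agenda for tomorrow",
    "quarterly report attached",
    "lunch plans this week",
]

def token_counts(docs):
    """Count whitespace-delimited token frequencies over a class corpus."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

phish_counts = token_counts(phishing)
benign_counts = token_counts(benign)
vocab = set(phish_counts) | set(benign_counts)

def log_likelihood(text, counts):
    """Laplace-smoothed log-probability of the text under one class model."""
    total = sum(counts.values())
    return sum(
        math.log((counts[tok] + 1) / (total + len(vocab)))
        for tok in text.split()
    )

def classify(text):
    """Pick whichever class model assigns the text higher likelihood."""
    if log_likelihood(text, phish_counts) > log_likelihood(text, benign_counts):
        return "phishing"
    return "benign"
```

With so few training examples this is only a demonstration of the technique; the book's point is that security data (changing adversaries, scarce ground truth) makes such models far harder to get right in practice than this sketch suggests.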
How machine learning safeguards organizations from modern cyber threats


2024 is fast approaching, and it seems likely that the new year will bring the same torrent of sophisticated malware, phishing, and ransomware attacks as 2023. Not only are these long-standing threats showing few signs of slowing down, they are increasing by as much as 40 percent, with federal agencies and public sector services being the main targets.
Meanwhile, weak points like IoT and cloud vulnerabilities are making it tougher for cybersecurity pros to secure the wide attack surface that these edge devices create.
OpenAI launches bug bounty program to help boost ChatGPT security


As the world goes crazy for AI, many are voicing concerns about the numerous artificial intelligence systems that are rapidly gathering fans. ChatGPT is one of the tools that has exploded in popularity, and now OpenAI, the company behind the system, has launched a bug bounty program to help track down flaws and problems.
The company is calling on "the global community of security researchers, ethical hackers, and technology enthusiasts" to unearth vulnerabilities, bugs and security flaws. With the OpenAI Bug Bounty Program, it is possible to earn anything from $200 to $20,000 for sharing discoveries, with the size of the payment being dependent on the severity of the problem found.
BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.
© 1998-2025 BetaNews, Inc. All Rights Reserved.