New risks, new opportunities and democratization -- AI predictions for 2024

How AI is weaponized for cyberattacks


A new report from Abnormal Security highlights real-world examples of how AI is being used to carry out cyberattacks.
Generative AI allows scammers to craft unique email content for every message, making detection that relies on matching known malicious text strings far more difficult.
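To illustrate the limitation the report describes, here is a minimal sketch (the phrases and function are hypothetical, not from Abnormal Security's product): a scanner keyed to known malicious strings flags verbatim template reuse but misses an AI-generated paraphrase with the same intent.

```python
# Hypothetical blocklist of known-bad phishing phrases.
KNOWN_BAD_PHRASES = [
    "verify your account immediately",
    "your mailbox has exceeded its storage limit",
]

def matches_signature(email_body: str) -> bool:
    """Return True if the email contains any known malicious string."""
    body = email_body.lower()
    return any(phrase in body for phrase in KNOWN_BAD_PHRASES)

reused_template = "Please verify your account immediately or lose access."
ai_paraphrase = "To keep using your inbox, confirm your credentials today."

print(matches_signature(reused_template))  # True -- verbatim phrase reuse
print(matches_signature(ai_paraphrase))    # False -- same intent, new wording
```

Because every generated email can be worded differently, defenders increasingly look at behavioral signals (sender patterns, intent) rather than fixed text signatures.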
Almost 90 percent say they're prepared for password-based attacks -- but half still fall for them


A new report from Axiad shows that 88 percent of IT professionals feel their company is prepared to defend against a password-based cyberattack, yet 52 percent say their business has fallen victim to one within the last year.
Based on over 200 responses from US IT pros, the study shows 39 percent think phishing is the most feared cyberattack, while 49 percent say it's the attack most likely to happen.
'Composite AI' could be key to successful artificial intelligence in the enterprise


New research shows that businesses are increasing their investments in AI across many areas, but there are challenges and risks that they need to manage.
The study of 1,300 tech leaders from Dynatrace shows 98 percent are concerned that generative AI could be susceptible to unintentional bias, error, and misinformation. In addition, 95 percent are concerned that using generative AI to create code could result in leakage and improper or illegal use of intellectual property.
Generative AI: Approaching the crossroads of innovation and ethics


As the recent hype and excitement around Generative AI (GenAI) begins to settle somewhat, we are entering a critical phase where innovation must be more closely aligned with ethical considerations. The impact of AI is already evident in various aspects of life, pointing to a future where, ideally, its use is not only widespread but also guided by principled decision-making. In this context, the emphasis should be on using AI to address appropriate problems, not just any problem.
In particular, the early iterations of GenAI platforms have demonstrated their potential but also the need for careful application. In many organizations, GenAI has already improved both customer and employee experiences, with advanced chatbots capable of mimicking human interaction taking automated customer service to a whole new level by providing quick and relevant responses. In an ideal world, this use case highlights AI’s dual purpose: to enhance human capabilities while maintaining a focus on human-centred experiences.
36 percent of IT workers worry that AI will take their jobs


A new study finds that 36 percent of IT workers are very concerned that generative AI tools will take their jobs in the next five years -- 17 points higher than for other office workers.
However, the report from Ivanti finds office workers are six times more likely to say that generative AI benefits employers than employees.
Generative AI sparks excitement and uncertainty


A new survey from Betterworks shows that the arrival of generative AI has generated excitement, experimentation, innovation, fear, and uncertainty among employees and organizations.
The research, conducted by Propeller Insights, shows over half of employees are using GenAI at work for complex activities and believe it has the potential to reduce bias across a range of processes, despite the fact that only 41 percent of organizations are actively evaluating it or have made GenAI a priority.
Why structured data offers LLMs tremendous benefits -- and a major challenge [Q&A]


ChatGPT and other LLMs are designed to train and learn from unstructured data -- namely, text. This has enabled them to support a variety of powerful use cases.
However, these models struggle to analyze structured data, such as numerical and statistical information organized in databases, limiting their potential.
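One common workaround -- a sketch of a general technique, not something the article itself describes -- is to serialize structured rows into natural-language sentences before handing them to an LLM, since models trained on free text handle prose better than raw tables. The CSV data and function below are hypothetical:

```python
import csv
import io

# Hypothetical structured data, as it might sit in a database export.
RAW_CSV = """region,quarter,revenue_usd
EMEA,Q1,1200000
APAC,Q1,950000
"""

def rows_to_sentences(raw_csv: str) -> list[str]:
    """Serialize each CSV row as a short natural-language statement
    suitable for inclusion in an LLM prompt."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [
        f"In {row['quarter']}, the {row['region']} region earned "
        f"${int(row['revenue_usd']):,} in revenue."
        for row in reader
    ]

for sentence in rows_to_sentences(RAW_CSV):
    print(sentence)
# In Q1, the EMEA region earned $1,200,000 in revenue.
# In Q1, the APAC region earned $950,000 in revenue.
```

Serialization like this trades precision for readability; for large tables or exact aggregates, pairing the LLM with a query engine is usually more reliable than prompting it with raw numbers.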
Generative AI sees rapid adoption in the enterprise


Generative AI has seen rapid adoption in the enterprise with 67 percent of respondents to a new study reporting that their companies are currently using generative AI, and 38 percent of this group saying that their companies have been working with AI for less than a year.
However, the report from O'Reilly shows many are still in the early stages of the AI journey: just 18 percent report having applications in production, and there are multiple bottlenecks for enterprises looking to implement these technologies. The first is identifying appropriate use cases (53 percent), followed by legal issues, risk, and compliance (38 percent).
Enterprises lack in-house skills for generative AI adoption


Only 38 percent of executives say their organization has the in-house expertise to adopt generative AI for innovation, according to a new study from the IBM Institute for Business Value.
Generative AI promises to upgrade ecosystem innovation by transforming the entire workflow. A large majority of executives say generative AI will greatly improve ideation (80 percent), discovery (82 percent), collaboration with partners for innovation (77 percent), and innovation execution (74 percent).
Organizations flock to generative AI despite security concerns


A new survey of over 900 global IT decision makers shows that although 89 percent of organizations consider GenAI tools like ChatGPT to be a potential security risk, 95 percent are already using them in some form within their businesses.
The research for Zscaler, carried out by Sapio Research, also reveals 23 percent of those using GenAI aren't monitoring the usage at all, and 33 percent have yet to implement any additional GenAI-related security measures -- though many have it on their roadmap.
Organizations turn to GenAI to combat downtime


Downtime-producing incidents such as application outages and service degradation are putting organizations at risk of losing up to $499,999 per hour on average, so it's no surprise they're turning to AI to help improve their response.
A new State of DevOps Automation and AI report from Transposit shows 84.5 percent of respondents either believe AI can significantly streamline their incident management processes and improve overall efficiency or are excited about the opportunities AI presents for automating certain aspects of incident management.
Phishing emails increase over 1,200 percent since ChatGPT launch


A new survey of over 300 cybersecurity professionals from SlashNext looks at cybercriminal behavior and activity on the Dark Web, particularly as it relates to leveraging generative AI tools and chatbots, and finds a startling 1,265 percent increase in malicious phishing emails since the launch of ChatGPT in November 2022.
It also shows a 967 percent increase in credential phishing specifically, and that 68 percent of all phishing emails are text-based Business Email Compromise (BEC) attacks.
Ethical hackers help organizations avoid cyber incidents


Ethical hacking company HackerOne has announced that its ethical hacker community has surpassed $300 million in total all-time rewards on the HackerOne platform.
The company's 2023 Hacker-Powered Security Report also shows 30 hackers have earned more than a million dollars on the platform, with one hacker surpassing four million dollars in total earnings.
How organizations can stay secure in the face of increasingly powerful AI attacks


It’s almost impossible to escape the hype around artificial intelligence (AI) and generative AI. The application of these tools is powerful. Text-based tools such as OpenAI’s ChatGPT and Google’s Bard can help people land jobs, significantly cut down the amount of time it takes to build apps and websites, and add much-needed context by analyzing large amounts of threat data. As with most transformative technologies, there are also risks to consider, especially when it comes to cybersecurity.
AI-powered tools have the potential to help organizations overcome the cybersecurity skills gap. Yet the same technology that is helping companies transform their businesses is also a powerful weapon in the hands of cybercriminals. In a practice sometimes referred to as offensive AI, cybercriminals use AI to automate scripts that exploit vulnerabilities in an organization's security system, or to make social engineering attacks more convincing. There's no doubt that it represents a growing threat to the cybersecurity landscape, and one that security teams must prepare for.
© 1998-2025 BetaNews, Inc. All Rights Reserved. Privacy Policy - Cookie Policy.