New risks, new opportunities and democratization -- AI predictions for 2024

AI, particularly widely available tools like ChatGPT, has made big waves in 2023. Is this set to continue into next year or will we see a different approach to using the technology?

Here are the views of some industry experts.

Ofer Klein, co-founder and CEO of Reco.ai, foresees new risks. "The incredibly fast adoption of generative AI tools will lead to new data risks, such as privacy violations, fake AI tools, phishing, etc. Organizations will need to put together AI safety standards to keep their customer and employee data safe. Having a SaaS security solution that can identify connected generative AI tools will be critical."

Crystal Morin, cybersecurity strategist at Sysdig, thinks compliance will become more important. "Compliance will be the next big thing for AI, dominating media coverage at least for the first three to six months of 2024. 2023 brought the cybersecurity and AI Executive Orders and the SEC disclosure rules. With the disclosure rules taking effect and the AI EO released at the end of 2023, following the year's obsession with generative AI, media attention will center on these. Who is or isn't meeting security compliance requirements will be the biggest media trend, alongside security disclosures and attack analysis."

Dale 'Dr. Z' Zabriskie, CISSP CCSK, field CISO at Cohesity, believes generative AI and security will come together in the worldwide fight against cybercrime and advanced persistent threats. "Attackers will be leveraging AI tools to entice employees via social engineering to click and act recklessly, exploit zero-day vulnerabilities, and much more at a much faster rate. We can expect both adversaries and innovative defenders to leverage AI, and it will be a force multiplier in both of their efforts."

Mick Baccio, global security advisor at Splunk SURGe, echoes this:

Less technologically sophisticated nations will face a lower barrier to entry for launching disinformation campaigns with AI, opening the floodgates for script kiddies to enter the cyber underworld on a global scale.

We anticipate new types of assaults in 2024, including commercial and economic disinformation campaigns, with more targeted attacks against companies’ brands and reputations.

Debbie Gordon, founder and CEO of Cloud Range, also thinks AI will become more important in a security context. "The cybersecurity landscape will change significantly, with more emphasis on human-centric defense strategies and advanced technologies. AI and machine learning (ML) will become important in identifying and mitigating threats, which will allow practitioners to focus more on strategic decision-making, creating a level playing field with threat actors. Virtual environments will see improvements in user experience, while the importance of human elements such as training and awareness will become more widely recognized."

Siroui Mushegian, Barracuda CIO, thinks we'll see deceptive AI techniques increase:

Generative AI is a double-edged sword for the cybersecurity industry -- it's used to make defenders faster and more capable, but it's also doing the same thing for adversaries. Attackers have become more deceptive in their techniques and harder to detect as generative AI gets better and better at impersonating humans, making traditional signs of social engineering harder to identify from the first point of contact.

These trends will continue into 2024, and become even more dangerous. It's important that the industry’s capabilities continue to keep pace with attackers' use of emerging technologies like generative AI and 5G in the coming year.

Joe Kim, president and CEO at Sumo Logic, says, "Sec and dev teams will double down on AI, including AI-generated hallucinations: One-third of organizations are already using generative AI in at least one business function, according to a McKinsey global survey from August. In 2024, we can expect this number to grow as sec and dev teams leverage AI-driven analytics to proactively address vulnerabilities, identify new threats, speed up development cycles, and promote efficiencies and collaboration within teams. By partnering with AI, human security and engineering pros will discover unexpected benefits such as AI-generated hallucinations, where AI generates potential scenarios or vulnerabilities that may not be immediately evident, prompting human experts to pre-emptively address these unforeseen risks. AI will not replace the important role that humans play in solving observability and security challenges, but will only make teams more efficient, proactive and collaborative."

Ala Shaabana, co-founder and COO of Opentensor Foundation, thinks we'll start to see a greater democratization of AI. "To date, most AI tools are owned and controlled by large tech companies such as Microsoft, Google and Meta. That will begin to change next year as a groundswell of AI programmers works collectively to develop AI tools that are open and accessible to the public. The decentralization of AI will allow the best AI tools to rise to the top through market demand and allow programmers to monetize them if they wish. Decentralized AI will be a counterweight to the monopolization of AI technology by the big tech giants."

Adrian Reece, VP of data science at Prodoscore, breaks down how AI will shape business in 2024:

Conversational interfaces with AI and large language models (LLMs) have simplified tasks that once required advanced degrees, making high-level tasks more accessible than ever before.

Integrating AI into business processes is a game changer, likely leading to increased productivity and a healthier bottom line as more is performed with less overhead.

Ultimately, leveraging AI means businesses can accomplish more with fewer resources, as AI handles intricate knowledge-based work with ease.

Akhilesh Tripathi, CEO of Digitate, thinks AI will be good for employees. "The conversation around automation and AI has focused heavily on how these technologies will threaten or harm workers. But the reality is that they will do far more good for employees than harm. By leveraging automation and AI, workers can become more efficient, eliminate cumbersome, manual tasks, and free themselves to take on higher-value projects. According to a new survey, 82 percent of IT decision-makers found that automation has improved employee productivity and 74 percent said it has improved employee satisfaction."

Danny Allan, CTO of Veeam, believes that 2024 won't be the year that AI radically changes things. "We've seen the hype around blockchain and Web3 in years past, and generative AI is no different. While much of the 'AI hype' occurred in 2023 and LLMs have been around for nearly a decade, we can't expect any groundbreaking use cases and broader adoption for quite some time -- in the next ten years or so. Instead, generative AI will have the greatest impact on individual productivity, in areas such as marketing. Next year, the primary focus will be on use cases for internal processes rather than external, customer-facing ones. We, of course, need to be aware of the distinction."

However, Mona Ghadiri, senior director of product management at BlueVoyant thinks AI's evolution will continue to unlock new possibilities. "There's a lot to be said in terms of where AI will go in 2024, but the only experts in AI thus far are those adapting to the change. While many will say it's overkill because we see it in the headlines every day, the truth is we just need to get past the initial hype phase. Right now it's easy to be self-conscious, but once we get to the second, third and fourth generations of this thing, we can comfortably benefit from it because we fully understand its risks."

Image credit: BiancoBlue/depositphotos.com

