To get to AGI, we must first solve the AI challenges of today, not tomorrow

Artificial-intelligence, AI

If the World Economic Forum in Davos was any indication, AI safety and security will be this year’s top priority for AI developers and enterprises alike. But first, we must overcome hype-driven distractions that siphon attention, research, and investment away from today’s most pressing AI challenges.

In Davos, leaders from across the technology industry gathered, previewing innovations, and prophesying what’s to come. The excitement was impossible to ignore, and whether it is deserved or not, the annual meeting has built a reputation for exacerbating technology hype cycles and serving as an echo chamber for technology optimists. 

But from my perspective, there was a lot more to it. Amidst all the Davos buzz, many conversations took on the challenge of assessing critical AI challenges across development and security, and outlining a path forward.

Sam Altman and Satya Nadella took on the real and present threats of LLM-generated misinformation and deep fakes -- both serious threats as nearly half of the world’s population braces for an election this year. I paneled a session alongside Yann Lecun, Max Tegmark, and Seraphina Goldfarb-Tarrant, where we discussed the need to overcome durable adoption challenges like cost and accessibility, the path to artificial general intelligence (AGI), and how we understand the utility and security of today’s AI systems.

With talk of AGI and AI-powered economies continuing beyond Davos, it’s easy to lose sight of the challenges looming ahead. But to bring these long-promised AI systems and their impact to life, we first must solve the challenges of the Large Language Models (LLMs) of today and the autonomous AI systems of tomorrow.

LLMs Inherited Challenges and Created Their Own 

LLMs have drastically changed the makeup of enterprise technology across industries. There is no shortage of excitement. However, some have begun to feel disillusioned, questioning what AI prospects are real and which are merely hype. After all, the benefits of LLMs are matched equally by new and familiar safety and security challenges. 

The threat of bias and toxicity come to mind. Misinformation and security breaches threaten to disrupt elections and compromise privacy. Deep fakes are set to run rampant this year, claiming victims like Taylor Swift and President Biden with explicit content and impersonations. This is just the tip of a very large iceberg that’s yet to surface. 

As we forge ahead towards AGI, more challenges will be uncovered. And the solutions to today’s challenges will undoubtedly translate to future AI systems. Solutions to combat LLM-generated misinformation today might become the underpinnings of the controls used on AGI systems. Preventative measures to thwart prompt injection and data poisoning will extend far beyond LLMs, too. 

Putting off the questions and challenges of today ignores the reality that these AI systems are the foundations of future intelligence AI and AGI systems.

After LLMs -- and Before AGI -- Comes the Internet of Agents

Between now and an AGI future, a lot of development remains. In the quest for greater AI-driven productivity, humans remain the limiting factor. That will change in the next evolution of AI. 

Today’s human-to-AI systems will be phased out in favor of AI-to-AI systems as LLMs are refined and become more capable and accurate. Human-in-the-loop approaches will be replaced by light human supervision that merely ensures AI agents are operating as expected. The Internet of Agents (IoA), an interconnected system of intelligent agents with specific assignments, is the natural next step for AI.

Imagine a scenario where an AI agent can detect a bug within an enterprise application’s code, assign a patch to a coding agent powered by an LLM, and push it live through an agent tasked with managing enterprise production environments. This could take all of several minutes. Whereas human intervention could stretch that timeline to hours or even days.

Whether we like it or not, the “invisible hand” of the market will push this vision forward. As trust in AI systems builds, enterprise executives and development teams will cede control over these systems in the name of efficiency, productivity, and profitability.

AGI is Coming, But We Still Have a Long Way to Go.

It’s easy to imagine the Internet of Agents quickly transforming into AGI, but that remains a far-fetched possibility. This, my fellow AI House panelists and I (despite our different backgrounds and positions) agreed on. Yann Lecun, one of the “godfathers of AI,” compared the intelligence of AI today to a cat with the cat coming out on top. While AI systems are improving and becoming more capable, they simply do not have the diversity of skills and intelligence that humans have.

Premonitions of an AI doomsday and calls to prevent it are more symbolic than realistic. Today, AI is more likely to support an intricate phishing attack or generate harmful content than power an AGI-powered robot. Enterprises are more susceptible to a complex hack of their sensitive data than they are to be hijacked by an all-knowing, power-hungry AGI.

As We Look Beyond the Hype, We Must Make a Choice

AI is not unlike the once-new and exciting technologies that came before it. The ebbs and flows of the technology ecosystem have seen countless innovations -- think cloud computing, blockchain, and Web3 – cause hysteria. The fates of these technologies are often sealed by the reactions, decisions, and investments made before, during, and after the hype. Cloud computing was once a written-off alternative to legacy on-premise systems; today it stands as the most foundational piece to the enterprise infrastructure stack despite the many scale, safety, and security challenges it presented.

We do not know what challenges AI has in store; even the pioneers of AI are left pondering what the future holds. But we cannot dread a possible eventuality while we continue to build towards it. Instead of letting the unknown drive us toward disillusionment and distrust, we should investigate the AI systems of today; building, refining, and securing new solutions and architectures that will support stronger and safer AI today and tomorrow.

David Haber is Co-Founder and CEO of Lakera.

Comments are closed.

© 1998-2024 BetaNews, Inc. All Rights Reserved. Privacy Policy - Cookie Policy.