The good, the bad and the scary of AI -- all in one week
AI has been very much top of the agenda this week. We've had President Biden's executive order on AI, we've had the AI Safety Summit in the UK, we've even had Collins Dictionary choosing AI as its word of the year (not to be confused with the three-toed sloth beloved of Scrabble players).
Today we also have new research from SnapLogic looking at how generative AI is being used, viewed, and adopted within large enterprises.
As you can imagine, all of this results in a bit of a mixed picture. Should we be embracing the benefits of AI? Or should we be worried about becoming slaves to our new AI overlords?
The good news from the SnapLogic study of over 900 office workers is that 47 percent of respondents believe generative AI could save them a day's worth of work per week. Among those who already use gen-AI for work, 67 percent say it saves them between one and five hours each week.
However, 68 percent say they don't have a good enough understanding of gen-AI for their current role, though 53 percent want to learn more.
Jeremiah Stone, Chief Technology Officer at SnapLogic, says, "Mid-level workers are the backbone of many large companies and as they both ‘do’ the work and oversee other people’s work, their perspective on generative AI is invaluable. We were honestly surprised to see the contrast between the number of people who recognize that generative AI can save a considerable amount of labor, compared to the number of people not currently using it at all. There’s a lot of lost productivity, exacerbated by the fact that some people are likely using gen-AI incorrectly or in ways that could actually pose a risk to their employer."
We've also seen a raft of activity this week as politicians try to get to grips with what the development of AI might mean. President Biden's new executive order is the first of its kind requiring new safety guidelines and research on AI's impact. This has sparked a good deal of comment in the industry.
Billy Biggs, VP of public sector at WalkMe, says, "As the Executive Order indicates, it is the responsibility of both the public and private sectors to manage AI and its growth potential, as well as its risk factors. For the workforce and people on the frontlines of this technology, we must prioritize safe, secure and trusted applications of artificial intelligence technologies."
Platform.sh's VP of data privacy and compliance, Joey Stanford, says, "President Biden's executive order on AI is certainly a step in the right direction and the most comprehensive to date; however, it's unclear how much impact it will have on the data security landscape. AI-led security threats pose a very complex problem and the best way to approach the situation is not yet clear. The order attempts to address some of the challenges but may end up not being effective or quickly becoming outdated in the approach. For instance, a few AI developers like Google and OpenAI have agreed to use watermarks but nobody knows how this is going to be done yet, so we don’t know how easy it's going to be to bypass/remove the watermark. That said, it is still progress and I'm glad to see that."
Peter Guagenti, president of Tabnine, the creators of the industry's first and most widely used AI-powered assistant for developers, says, "Corporate control over models and the data they are trained on is critical. As the White House’s announcement called out, 'AI not only makes it easier to extract, identify, and exploit data -- it also heightens incentives to do so because companies use data to train AI systems.' Given this, protecting our privacy with AI is incredibly important. And it's not just about Americans' privacy; it's also about intellectual property and copyright held by business entities. Big Tech has been completely unconstrained in their competitive practices for the last 25 years, and unsurprisingly their monopolistic tendencies are now playing out across AI. Case in point: there are currently pending lawsuits against the companies behind the large scale models for copyright infringement, and directly against Microsoft, in particular, for training its code-generation models on code sourced from private code repositories without the permission of the code creators. Data used in models must be explicitly allowed and fully transparent, an ongoing and persistent problem for AI that urgently needs to be dealt with."
Meanwhile, across the Atlantic at the AI Safety Summit, 28 countries have signed the Bletchley Declaration, described as 'a world-first agreement' towards establishing a shared understanding of the opportunities and risks posed by frontier AI.
High-profile attendee Elon Musk made headlines by calling AI "one of the biggest threats to humanity," but he also struck a more positive note: "What we're really aiming for here is to establish a framework for insight so that there's at least a third-party referee, an independent referee, that can observe what leading AI companies are doing and at least sound the alarm if they have concerns."
Joseph Thacker, researcher at SaaS security pioneer AppOmni, says:
Experts are split on the actual concerns around AI destroying humanity, but it's clear that AI is an effective tool that can (and will) be used by forces for both good and bad. The Bletchley Declaration acknowledges the potential of AI in improving human wellbeing, fostering innovation, and protecting human rights. At the same time, it recognizes the risks, including those from frontier AI, and emphasizes the need for risk-based policies and safety testing. The approach is very similar to the White House Executive Order on AI Safety. And both adequately outline the correct steps for ensuring safety -- identifying and communicating risks, and requiring transparency and testing. The penalties for not doing so, and the way in which these requirements play out internally at organizations, will largely determine their success.
The declaration doesn't cover adversarial attacks on current models or adversarial attacks on systems which let AI have access to tools or plugins, which may introduce significant risk collectively even when the model itself isn't capable of anything critically dangerous.
Chris Grove, expert in critical infrastructure cybersecurity at Nozomi Networks, says, “Large, multilateral agreements can be complex and difficult to legislate, but it does look like some efforts are close to fruition. At the very least, they're further ahead than they were previously. That point alone makes the summit a success, and a much-needed step in the right direction."
“If AI is unregulated for too long, we will undoubtedly see a rise in AI prejudice and ethical issues, including the loss of jobs. We're already seeing deep concerns in this area. In fact, Ivanti found that 63 percent of IT workers and 44 percent of office workers worry generative AI will take their job within five years," says Dr. Srinivas Mukkamala, AI authority leader and chief product officer at Ivanti. "As the AI Safety Summit continues, I hope we see further regulatory clarity that goes beyond lip service, as well as a commitment to diversity of thought in the official development of these regulations to prevent certain pockets of society being disproportionately impacted."
Image Credit: Wayne Williams