Is over-focusing on privacy hampering the push to take full advantage of AI?
In 2006, British mathematician Clive Humby declared that data is the new oil -- and so could be the fuel source for a new, data-driven Industrial Revolution.
Given that he and his wife helped Tesco make £90m from its first Clubcard trial, he should know. And it looks like the “derricks” out there really are pumping that informational black gold up to the surface: the global big data analytics market is predicted to be worth more than $745bn by 2030 -- and, while it may not be the most dependable metric, Big Tech is throwing billions at AI at a rate described as “some of the largest infusions of cash in a specific technology in Silicon Valley history”.
Understanding the risks of integrating GenAI in GRC programs: A framework for compliance teams
NIST's recently proposed AI RMF Generative AI Profile aims to help organizations understand AI risks, both internal and from third-party vendors. While GenAI adoption is rising across sectors, compliance managers remain cautious about incorporating AI into their own programs. Despite all the hype around AI, a survey conducted by The Wall Street Journal among approximately 300 compliance professionals revealed that only one-third currently use GenAI within their compliance programs.
Collaborative efforts between NIST and prominent organizations including OpenAI and Microsoft are underway to expedite standards and recommendations for the responsible deployment of AI. As organizations grapple with implementing GenAI themselves, it is just as important to understand how third parties are integrating the technology, so that corporate risk can be properly evaluated and regulatory and compliance reporting improved.
The double-edged sword of AI in cybersecurity
As artificial intelligence (AI) continues to advance, its impact on cybersecurity grows more significant. AI is an incredibly powerful tool in the hands of both cyber attackers, who use it to conduct attacks, and defenders, who use it to deter and counter threats -- making it pivotal to the evolving landscape of digital threats and security defenses.
The incorporation of AI into malicious social engineering campaigns ushers in a new era in which cyber threat actors are more convincingly deceptive. With access to vast amounts of data, threat actors can increase both the reach and the effectiveness of large-scale phishing campaigns, or use that same data to spread disinformation online.
Apple takes a gamble on AI, but rolls a critical miss on dice
At its WWDC yesterday, Apple unveiled its first major foray into modern artificial intelligence, or "Apple Intelligence" as it prefers to call it.
The company may have been slow to adopt the technology, but it’s now going all-in. Apple Intelligence will be baked into the upcoming iOS 18, iPadOS 18, and macOS Sequoia, offering new writing tools for rewriting, proofreading, and summarizing text across apps, Genmoji for personalized emojis, and a significantly improved Siri.
DuckDuckGo AI Chat gives anonymous and private access to GPT-3.5, Claude 3, Llama 3 and Mixtral
For all of the excitement currently surrounding artificial intelligence, there are also plenty of concerns. Not only are people worried about the power of AI, but there is also a great deal of apprehension about the privacy and security of ChatGPT and other tools of its ilk.
Stepping up with a solution is privacy-centric firm DuckDuckGo. With the newly launched DuckDuckGo AI Chat, it offers "anonymous access to popular AI models, including GPT-3.5, Claude 3, and open-source Llama 3 and Mixtral". There is also the promise that chats will not be used to train AI models.
NVIDIA overtakes Apple as AI boom propels company value over $3 trillion
With its market value rocketing to $3.1 trillion, NVIDIA has become the second most valuable company in the world. A five percent rise in its share price pushed the chipmaker ahead of Apple, leaving only Microsoft worth more.
The soaring value of the firm is due in no small part to its heavy involvement and investment in AI. Having started life in the 1990s as a minor player in the graphics chip market, NVIDIA has ridden the artificial intelligence tidal wave.
Generative AI: Productivity dream or security nightmare?
The field of AI has been around for decades, but its current surge is rewriting the rules at an accelerated rate. Fueled by increased computational power and data availability, this AI boom brings with it opportunities and challenges.
AI tools fuel innovation and growth by enabling businesses to analyze data, improve customer experiences, automate processes, and innovate products -- at speed. Yet, as AI becomes more commonplace, concerns about misinformation and misuse arise. With businesses relying more on AI, the risk of unintentional data leaks by employees also goes up. For many though, the benefits outweigh any risks. So, how can companies empower employees to harness the power of AI without risking data security?
Measuring AI effectiveness beyond productivity metrics
Last year was a milestone for AI, marked by enthusiasm, optimism, and caution. AI-powered developer tools promise to boost productivity by generating code and automating repetitive, tedious tasks. A year later, organizations are struggling to quantify the impact of their AI initiatives and are reevaluating their metrics to ensure they reflect the desired business outcomes.
Measuring developer productivity has historically been a challenge, with or without AI-powered developer tools. Last year, McKinsey & Company described developer productivity measurement as a “black box,” noting that in software development, “the link between inputs and outputs is considerably less clear” than in other functions.
Raspberry Pi AI Kit brings artificial intelligence to the Raspberry Pi 5
It was surely only a matter of time before AI made its way to the Raspberry Pi and today sees the launch of the Raspberry Pi AI Kit. Developed in close collaboration with Hailo, it provides a way of seamlessly integrating local, high-performance, power-efficient inferencing into a number of applications.
The Raspberry Pi AI Kit comprises the M.2 HAT+ preassembled with a Hailo-8L AI accelerator module. Installed on a Raspberry Pi 5, the AI Kit allows users to build AI vision applications, running in real time, with low latency and low power requirements.
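To illustrate the kind of real-time vision pipeline the kit targets, here is a minimal sketch of a frame-by-frame detection loop. The camera capture and inference calls are deliberate stand-ins (the actual kit ships with its own Hailo runtime and camera integration, which are not reproduced here); only the generic post-processing step is shown concretely.

```python
# Illustrative sketch only: `capture_frame` and `run_inference` are stand-ins
# for the real camera and Hailo-8L accelerator calls provided with the AI Kit.

def filter_detections(detections, min_confidence=0.5):
    """Keep only detections at or above a confidence threshold --
    a typical post-processing step in an on-device vision loop."""
    return [d for d in detections if d["confidence"] >= min_confidence]

def detection_loop(capture_frame, run_inference, handle):
    """Generic real-time loop: grab a frame, run inference, act on hits."""
    while True:
        frame = capture_frame()        # e.g. an image array from the camera
        if frame is None:              # camera closed -> stop the loop
            break
        raw = run_inference(frame)     # work offloaded to the AI accelerator
        handle(filter_detections(raw))
```

Because the hardware-facing calls are injected as plain functions, the loop and its post-processing can be exercised with stubs before any camera or accelerator is attached.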
It's not all artificial: The 4 types of intelligence CTOs need to get the most out of AI
Enterprises plan to spend roughly $35.5 million on IT modernization in 2024, with over a third going to AI to boost productivity. But it’s not all sunshine and rainbows. At the same time, 64 percent of IT leaders worry about rushing to adopt generative AI without understanding what's needed to use it effectively and safely. And while 75 percent of organizations have experimented with generative AI, only 9 percent have adopted the technology widely. There’s so much more potential to tap into.
Getting the best out of AI to supercharge operations comes down to intelligence. After all, AI is only as intelligent as those using it. There are four types of intelligence that CTOs need to build, and they have nothing to do with coding or complicated technology. It's about cultivating the soft skills and human talent needed to direct AI responsibly.
Compliance and cybersecurity in the age of AI [Q&A]
Artificial Intelligence is dramatically transforming the business landscape. It streamlines operations, provides critical insights, and empowers businesses to make data-driven decisions efficiently. Through machine learning, predictive analytics, and automation, AI assists in identifying trends, forecasting sales, and streamlining supply chains, leading to increased productivity and improved business outcomes. It isn't, unfortunately, without problems.
We talked to Matt Hillary, Vice President of Security and CISO at Drata, about the issues surrounding AI when it comes to critical security and compliance.
Unlocking cybersecurity success: The need for board and CISO alignment
The C-Suite’s perception of cybersecurity has evolved dramatically over the past decade. It’s gone from being an afterthought for technology departments to worry about, to a cornerstone for business survival and operational strategy. The heightened awareness of cybersecurity stems from a deeper grasp of the legal, reputational and financial implications of data breaches. This, combined with regulatory pressures such as the original NIS directive, has forced leaders to enhance their organizations’ cybersecurity measures.
The result is that 75 percent of organizations now report that cybersecurity is a high priority for their senior management team. While on the surface this should be celebrated, dig deeper and conversations between CISOs and the wider C-Suite often revolve only around high-profile or user-centric security risks. More technical and advanced threats, such as those related to application security, are overlooked. The race to embrace AI, along with increasingly complicated cloud infrastructures, has also made communicating cybersecurity priorities even more difficult for CISOs.
Out of the shadows and into the light: Embracing responsible AI practices amid bias and hallucinations
The path to widespread AI is a bumpy one. While its potential to enhance consumer experiences and streamline business operations through personalization, autonomy, and decentralized reasoning is evident, the technology comes with inherent risks.
AI can produce conclusions that aren’t true, spread misinformation and in some cases, perpetuate existing biases. This -- the darker side of AI’s impact -- can leave business leaders facing financial, legal, and reputational damage.
Artificial Intelligence: What are 4 major cyber threats for 2024?
AI is one of the most powerful innovations of the decade, if not the most powerful. Yet with that power also comes the risk of abuse.
Whenever any new, disruptive technology is introduced to society, if there is a way for it to be abused for the nefarious gain of others, wrongdoers will find it. Thus, the threat of AI is not inherent to the technology itself, but rather an unintended consequence of bad actors using it for purposes that wreak havoc and cause harm. If we do not do something about these cyber threats posed by the misuse of AI, the legitimate, beneficial uses of the technology will be undermined.
Move over Google, LLMs are taking over!
When Google was founded in 1998, it ushered in a new era of access to information. The groundbreaking search engine wasn’t the first to debut (that was World Wide Web Wanderer in 1993), but it was the one that caught on. By 2004, Google was fielding over 200 million searches per day; by 2011, that number had exploded to about three billion daily searches. By that time, the word “Google” had morphed from just the name of the search engine to a verb that meant “to use an internet search engine.” Twenty years later, Google still dominates the market with an almost 84 percent share as of late 2023.
Though Google is still the most popular search engine, new technology has emerged that could threaten its supremacy -- LLMs. The use of this technology is growing at an astonishing rate. In fact, in February 2024, ChatGPT generated over 1.6 billion visits.
© 1998-2024 BetaNews, Inc. All Rights Reserved.