Poisoning the data well for Generative AI
The secret to generative AI’s success is data. Vast volumes of data that are used to train the large language models (LLMs) that underpin generative AI’s ability to answer complex questions and find and create new content. Good quality data leads to good outcomes. Bad, deliberately poisoned, or otherwise distorted data leads to bad outcomes.
As ever more organizations integrate generative AI tools into their business systems, it’s important to reflect on what attackers can do to the data on which those tools are trained.
The role of APIs within Large Language Models and the shift towards a multi-LLM world
With the arrival of Large Language Models (LLMs) such as ChatGPT, BERT, Llama, and Granite, the operational dynamics within the enterprise sector have significantly changed. LLMs introduce unique efficiencies, paving the way for innovative business solutions. LLMs currently stand at the forefront of technological advancement, offering enterprises the tools to automate complex processes, enhance customer experiences, and obtain actionable insights from large datasets.
The integration of these models into business operations marks a new chapter in digital transformation and therefore requires a closer look at their development and deployment.
Microsoft sprinkles some AI magic onto PowerToys v0.81.0 with new Advanced Paste tool
Hitting a new release cycle, Microsoft has unleashed PowerToys v0.81.0 complete with a brand-new module. With the arrival of the Advanced Paste utility, users gain access to an AI-powered clipboard that makes it possible to paste copied text in any format needed.
A range of keyboard shortcuts lets you paste text as plain text, markdown, or JSON, but there are plenty more features baked into Advanced Paste. Harnessing AI, the module also accepts natural language descriptions of what you want to do to the copied text. There are plenty more changes and additions besides.
How a curious, learning-oriented culture promotes innovation
Technology companies are experiencing tremendous challenges on several fronts. On the one hand, emerging technologies like Generative AI (Gen AI) are opening up new possibilities for revenue and growth. On the other, a tight labor market in general and an even tighter IT labor market mean companies can’t just “buy” talent -- they’ve got to “build” it as well.
Encouraging creativity can reap significant rewards for businesses. People can think more deeply and freely about problem-solving and develop creative solutions by sparking curiosity. Research has shown that greater creativity can result in fewer decision-making errors, more innovation, reduced group conflict, more open communication and sharing of information, and better team performance.
Harnessing generative AI to create a new breed of supercharged lawyers and law firms
Traditionally the legal sector has lagged behind other industries when it comes to embracing new technologies. However, generative AI is proving to be the exception to the rule. Its potential to transform the profession, driving lawyers and firms to unlock new levels of productivity and efficiency, is too great to be ignored.
Growing numbers of law firms are putting their money where their mouths are, with global spending on legal AI software tools already at over $1 billion and projected to grow at almost 20 percent (CAGR) annually across the rest of the decade, reaching an estimated $37bn by the end of 2024. There’s a huge societal and industry shift underway; lawyers and law firms must act now or risk being left behind.
New Recall tool could be Microsoft's best use of AI in Windows 11 yet -- and its most private
Microsoft Build kicks off today but -- as is usually the case -- there have been various pre-event announcements, not least of which is the unveiling of AI-powered Copilot+ PCs. The hardware side of things is both powerful and exciting, with huge implications for not only computing capabilities, but also privacy.
This new breed of computers features neural processing units (NPUs), meaning AI tasks can be performed on-device without the need to transmit data over the internet. One of Microsoft’s first tools to take advantage of this is Recall (once known as AI Explorer), an astonishingly powerful workflow tool that records and maintains a timeline of your computing activities and gives you a way to instantly locate content you have been working on. Microsoft describes it as like having a photographic memory, but it is perhaps better thought of as the ultimate productivity assistant.
Cyber security and artificial intelligence -- business value and risk
In the current era of digitalization, cybersecurity has become a topmost priority for businesses, regardless of their size and nature. With the growing dependence on digital infrastructure and data, safeguarding against cyber threats has become crucial to ensure uninterrupted business operations. However, the evolving nature of cyberattacks poses significant challenges for traditional security measures.
This is where Artificial Intelligence (AI) emerges as a game-changer, offering substantial benefits and inherent risks in cybersecurity.
Balancing the rewards and risks of AI tools
AI’s promise of time and money saved has captivated employees and business leaders alike. But the real question is… is it too good to be true? As enticing as these rewards may be, the risks of this new technology must also be seriously considered.
Balancing the risks and rewards of AI is giving many organizations pause as they grapple with the right way to adopt it. Every deployment in every organization is going to look different -- meaning that the balance of risk and reward is also going to look different depending on the scenario. Here, we’ll talk through the promised rewards and the potential pitfalls of adopting generative AI technologies, as well as some guiding questions to help determine if it’s the right move for your business.
How RAG completes the generative AI puzzle
Generative AI entered the global consciousness with a bang at the close of 2022 (cue: ChatGPT), but making it work in the enterprise has amounted to little more than a series of stumbles. Shadow AI use in the enterprise is sky high as employees are making day-to-day task companions out of AI chat tools. But for the knowledge-intensive workflows that are core to an organization’s mission, generative AI has yet to deliver on its lofty promise to transform the way we work.
Don’t bet on this trough of disillusionment lasting very long, however. A process called retrieval augmented generation (RAG), which grounds a model’s answers in documents retrieved from an organization’s own data, is unlocking the kinds of enterprise generative AI use cases that previously were not viable. Companies such as Meta, Google, Amazon, Microsoft, OpenAI and a number of AI startups have been aggressively rolling out enterprise-focused RAG-based solutions.
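As a rough illustration of the RAG pattern referred to above -- retrieve relevant documents, augment the prompt with them, then generate -- here is a minimal Python sketch. The document store, keyword-overlap retriever and function names are hypothetical stand-ins, not any vendor's actual implementation; a production system would typically use vector embeddings for retrieval and pass the augmented prompt to a hosted LLM.

```python
from typing import List

# Hypothetical in-memory "knowledge base" standing in for an enterprise document store.
DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Enterprise plans include single sign-on and audit logging.",
]

def retrieve(query: str, docs: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap with the query (a stand-in for vector search)."""
    query_terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(query_terms & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, context: List[str]) -> str:
    """Augment the user's question with the retrieved context before sending it to an LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\n\nContext:\n{joined}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "What is the refund policy for purchases?"
    prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(prompt)  # In a real system this augmented prompt would go to an LLM to generate the answer.
```

The point of the pattern is that the model answers from current, organization-specific context rather than from whatever was in its static training data.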
Walking the AI tightrope in IAM: finding the right balance for your organization
Identity and access management (IAM) is the foundation for control and productivity in today’s digital business environments. Ensuring the right people have the right level of access to the resources they need whenever they need them -- and that the wrong people don’t -- is a core responsibility for administrators and security teams. In a typical hybrid, distributed, multi-cloud environment, with thousands of identities to manage dynamically as the business evolves, the scale of the challenge is considerable. Enter Artificial Intelligence (AI), the seductive solution to all large-scale data-intensive challenges. AI has enormous potential for streamlining the many workloads associated with IAM and lifting the burden on stretched administrative and security teams.
We are undoubtedly experiencing an AI gold rush, but there are tensions in this brave new world. Our recent SME IT Trends report reflects this reality; while 87 percent of UK IT administrator respondents plan to implement AI initiatives in the next two years and 70 percent believe that their organization should be investing in AI, a significant minority (15 percent) say their organization is moving too fast on AI. They are in conflict with the 22 percent who think their business is moving too slowly. That amounts to more than a third of SME IT administrators who are uncomfortable with their company’s AI adoption rate.
A practical solution to the AI challenge: Why it matters that the AI Safety Institute has embraced open source
To be a world leader in AI, the UK must leverage its position as Europe’s number one in open source software. As the PM said on Friday, open source “creates start-ups” and “communities”. The UK’s open source community has flourished under the radar of the UK tech sector in recent years. OpenUK’s 2023 report showed that 27 percent of UK tech sector Gross Value Added was attributable to the business of open source in the UK.
On the back of the AI Safety Summit last November, the UK has not taken the European Union’s route of a legislative solution. We will soon see the outcome of the EU’s gamble on being the first in the world to legislate. That very prescriptive legislation will likely be out of date before it comes into use, and it may engender regulatory capture in AI innovation in the EU. Few beyond big tech will be able to manage the compliance program necessary to meet the regulation. The risk is obvious and real.
The key technologies fueling chatbot evolution
Most of us are familiar with chatbots on customer service portals and government department websites, and through services like Google Bard and OpenAI’s ChatGPT. They are convenient, easy to use, and always available, leading to their growing use for a diverse range of applications across the web.
Unfortunately, most current chatbots are limited by their reliance on static training data. The information these systems output can be obsolete, limiting our ability to get real-time answers to our queries. They also struggle with contextual understanding and complex queries, produce inaccuracies, and adapt poorly to our evolving needs.
The importance of people, process and expertise for cyber resilience in the AI age
No business is immune to the cyber threats that exist today, ranging from malicious software and ransomware to AI threats and more, which strike daily, weekly, and often even more frequently. To counter them, companies must have strategies to minimize the potential damage of an attack by protecting data and planning how to recover from a cyberattack as quickly and effectively as possible.
The increased adoption of AI by everyone from employees to cyber criminals is adding further risk and complexity to the security landscape. While cybercriminals are incorporating AI into their arsenal to enhance their attack strategies, employees are unwittingly helping these attackers gain their sought-after prize: data. Many employees today are experimenting with generative AI models to assist with their jobs, but in doing so they feed vast amounts of data, ranging from personal details to company information, into these systems, often without the organization’s knowledge.
With the new GPT-4o model OpenAI takes its ChatGPT to the next level
Pioneering AI firm OpenAI has launched the latest edition of its LLM, GPT-4o. The flagship model is being made available to all ChatGPT users free of charge, although paying users will get faster access to it.
There is a lot to this update, but OpenAI highlights improvements to capabilities across text, voice and vision, as well as faster performance. Oh, and if you were curious, the "o" in GPT-4o stands for "omni".
OpenAI launches a ChatGPT app for macOS; Windows users will have to wait
In a bid to make its AI chatbot more accessible, OpenAI has announced a new desktop ChatGPT app. There are already third-party desktop apps, but now there is an official option too.
It joins the existing mobile apps that are available for iOS and Android and, unusually, it is macOS users who get their hands on the desktop app before Windows users.