A practical solution to the AI challenge: Why it matters that the AI Safety Institute has embraced open source


To be a world leader in AI, the UK must leverage its position as Europe’s number one in open source software. As the PM said on Friday, open source “creates start-ups” and “communities”. The UK’s open source community has flourished largely under the radar of the wider UK tech sector in recent years. OpenUK’s 2023 report showed that 27 percent of UK tech sector Gross Value Added was attributable to the business of open source from the UK.
On the back of the AI Safety Summit last November, the UK has not taken the European Union’s route to a legislative solution. We will soon see the outcome of the EU’s gamble in being the first in the world to legislate. The very prescriptive legislation will likely be out of date before it is in use and this may engender regulatory capture in AI innovation in the EU. Few beyond big tech will be able to manage the compliance program necessary to meet the regulation. The risk is obvious and real.
The key technologies fueling chatbot evolution


Most of us are familiar with chatbots on customer service portals, government departments, and through services like Google Bard and OpenAI’s ChatGPT. They are convenient, easy to use, and always available, leading to their growing use for a diverse range of applications across the web.
Unfortunately, most current chatbots are limited by their reliance on static training data. The information these systems produce can be obsolete, limiting our ability to get real-time answers to our queries. They also struggle with contextual understanding, inaccuracies, handling complex queries, and limited adaptability to our evolving needs.
The importance of people, process and expertise for cyber resilience in the AI age


No business is immune to today’s cyber threats, which range from malicious software and ransomware to AI-powered attacks and which strike daily, weekly, and often even more frequently. To counter them, companies must have strategies in place to minimize the potential damage of an attack by protecting data and putting plans in place to recover from a cyberattack as quickly and effectively as possible.
The increased adoption of AI by everyone from employees to cyber criminals is adding further risk and complexity to the security landscape. While cybercriminals are incorporating AI into their arsenal to enhance their attack strategies, employees are unwittingly helping these attackers gain their sought-after prize: data. Many employees today are experimenting with generative AI models to assist with their jobs, but in doing so they feed vast amounts of data, ranging from personal details to company information, into these systems, often without the organization’s knowledge.
With the new GPT-4o model OpenAI takes its ChatGPT to the next level


Pioneering AI firm OpenAI has launched the latest edition of its LLM, GPT-4o. The flagship model is being made available to all ChatGPT users free of charge, although paying users will get faster access to it.
There is a lot to this update, but OpenAI highlights improvements to capabilities across text, voice and vision, as well as faster performance. Oh, and if you were curious, the "o" in GPT-4o stands for "omni".
OpenAI launches a ChatGPT app for macOS; Windows users will have to wait


In a bid to make its AI chatbot more accessible, OpenAI has announced a new desktop ChatGPT app. There are already third-party desktop apps, but now there is an official option too.
It joins the existing mobile apps that are available for iOS and Android and, unusually, it is macOS users who get their hands on the desktop app before Windows users.
The critical intersection between AI and identity management


Today, almost every organization and most individuals are using or experimenting with Artificial Intelligence (AI). There are plenty of examples of how it is changing businesses for the better, from marketing and HR to IT teams. What was once computationally impossible, or prohibitively expensive to do, is now within reach with the use of AI.
According to Gartner, approximately 80 percent of enterprises will have used generative AI (GenAI) APIs or models by 2026. As AI drives value for organizations, it is fueling further demand and adoption.
Unmasking the impact of shadow AI -- and what businesses can do about it


The AI era is here -- and businesses are starting to capitalize. Britain’s AI market alone is already worth over £21 billion and is expected to add £1 trillion of value to the UK economy by 2035. However, the threat of “shadow AI” -- unauthorized AI initiatives within a company -- looms large.
Its predecessor -- “shadow IT” -- has been well understood (albeit not always well managed) for a while now. Employees using personal devices and tools like Dropbox, without the supervision of IT teams, can increase an organization’s attack surface -- without the C-suite ever knowing. Examples of shadow AI include customer service teams deploying chatbots without informing the IT department, unauthorized data analysis, and unsanctioned workflow automation tools (for tasks like document processing or email filtering).
Microsoft eases its foot off the accelerator for Copilot development in Windows 11


The world has gone crazy for AI, and Microsoft has jumped feet-first into the technology. Copilot is just one of the company’s tools in this field, but not everyone is completely in love with this digital assistant.
For anyone who is of the opinion that things are moving too fast when it comes to Copilot, there is some good news. With the release of the latest beta build of Windows 11, Microsoft says that it is slowing down the rollout of new Copilot experiences.
Get 'Enterprise Transformation to AI and the Metaverse' (worth $59.99) for FREE


Enterprise Transformation to AI and the Metaverse provides guidance on how organizations can respond effectively to a rapidly converging collection of advanced technologies, methods, and models often referred to as “the metaverse.”
The arrival of the metaverse will likely lead to one of the most disruptive eras in modern history. We will see our personal, social, professional, and business lives change just as dramatically as they did with the arrival of the personal computer, the Internet, and the smartphone.
What the EU AI act means for cybersecurity teams and organizational leaders


On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act), establishing the world’s first extensive legal framework dedicated to artificial intelligence. This imposes EU-wide regulations that emphasize data quality, transparency, human oversight, and accountability. With potential fines reaching up to €35 million or 7 percent of global annual turnover, the act has profound implications for a wide range of companies operating within the EU.
The AI Act categorizes AI systems according to the risk they pose, with stringent compliance required for high-risk categories. This regulatory framework prohibits certain AI practices deemed unacceptable and meticulously outlines obligations for entities involved at all stages of the AI system lifecycle, including providers, importers, distributors, and users.
The increasing sophistication of synthetic identity fraud


Synthetic identity fraud is most commonly associated with fraud in banking or against credit unions but is often mistakenly overlooked in digital commerce. With fraudsters becoming more sophisticated in how they use synthetic identities, it’s a tactic that fraud fighters need to watch for and guard against.
Synthetic identity fraud is when a fraudster takes a piece of real identifying information belonging to a legitimate individual and combines it with other identifying information that is either fake or real but belongs to someone else.
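The combination described above can be illustrated with a minimal sketch. This is purely hypothetical code, not any vendor's fraud model: the record fields, the `bureau_file` lookup, and the `looks_synthetic` check are all invented for illustration, showing how a real identifier paired with non-matching fabricated details produces a detectable inconsistency.

```python
# Illustrative sketch only: a toy record blending one real identifier
# with fabricated attributes, plus a toy consistency check. All field
# names and logic here are hypothetical, not a real fraud-detection API.

def build_synthetic_identity(real_ssn: str) -> dict:
    """Combine a genuine SSN with fabricated personal details."""
    return {
        "ssn": real_ssn,            # real, taken from a legitimate person
        "name": "Alex Invented",    # fabricated
        "dob": "1999-01-01",        # fabricated
        "address": "1 Nowhere Ln",  # fabricated, or borrowed from someone else
    }

def looks_synthetic(record: dict, bureau_file: dict) -> bool:
    """Flag records whose SSN is on file but whose name or date of
    birth does not match the details held for that SSN."""
    on_file = bureau_file.get(record["ssn"])
    if on_file is None:
        return False  # unknown SSN: a different risk signal entirely
    return (record["name"] != on_file["name"]
            or record["dob"] != on_file["dob"])
```

In practice, of course, detection relies on far richer signals (credit-file depth, application velocity, device data), but the core idea is the same: a real identifier attached to details that do not belong to it.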
How AI will shape the future of the legal industry


The Department for Science, Innovation and Technology (DSIT) announced a £6.4 million grant for small and medium-sized enterprises (SMEs) to invest in AI skills-based training. This development is the latest in a string of AI funding initiatives across the UK corporate sector, indicating that 2024 is the year emerging technologies will revolutionize the workplace in every respect.
AI technology is transforming business functions across industries. However, the legal sector, in particular, has demonstrated tremendous progress. Often portrayed as laggards when it comes to embracing innovation, legal’s cautious, conservative approach to tech adoption has become a thing of the past in the age of AI. A recent survey from the Legal Services Board (LSB) found that over 95 percent of legal businesses said implementing new technologies has made them more responsive to clients’ needs. Moreover, 60 percent of surveyed legal businesses found their clients expect them to power their legal services through tech innovation.
AI-powered data management: Navigating data complexity in clinical trials


The data flood gates have opened wide for clinical trial research. In fact, the amount of data gathered may be more akin to a tsunami or a monsoon. For decades, researchers struggled with a lack of data available in clinical trials; however, they may have received more than they asked for. Research shows that the biopharmaceutical industry generates up to a trillion gigabytes of data annually and clinical trials, one of the principal contributors to these data points, generate an average of up to 3 million data points per trial. This influx of sources can make it challenging to discern relevant from superfluous information, complicating analysis and delaying critical decision-making.
An increase in decentralization paired with expanded collection methods in clinical trials has increased access to and accumulation of data. Information gathered from remote monitoring devices, electronic health records (EHRs), laboratory tests, surveys and questionnaires, and third-party databases all contributes to the data challenge in clinical trials. In reality, the number of touchpoints across clinical trials, from sponsors to clinical research organizations (CROs) to site staff, combined with the complexity and disparity of data sources, leads to challenges in ensuring data quality.
Workforces need the skills to defend against AI-enabled threats


It’s no secret that artificial intelligence (AI) is transforming software development. From automating routine tasks to enhancing code efficiency and optimizing testing processes, AI is helping developers save time, money, and resources. It can also analyze code to detect bugs, security vulnerabilities, and quality issues more effectively than traditional models. If you’re thinking there’s a "but" coming, you’re right.
The downside is that AI technologies can also enhance the capabilities of malware developers. As such, the proliferation of AI is not necessarily fueling new kinds of cyberattack; it is simply creating an even distribution of enhanced proficiency for both legitimate and malicious actors.
Get 'Coding with AI For Dummies' (worth $18) for FREE


Coding with AI For Dummies introduces you to the many ways that artificial intelligence can make your life as a coder easier. Even if you’re brand new to using AI, this book will show you around the new tools that can produce, examine, and fix code for you.
With AI, you can automate processes like code documentation, debugging, updating, and optimization. The time saved thanks to AI lets you focus on the core development tasks that make you even more valuable.
© 1998-2025 BetaNews, Inc. All Rights Reserved.