Articles about Large Language Model

New fully open and transparent large language model launches -- it’s Swiss, of course

The Swiss have something of a reputation for being methodical -- particularly when it comes to things like banking -- so it’s no surprise that they take a similar approach to creating a large language model.

EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS) have today released Apertus, a large-scale, open, multilingual LLM. Apertus -- Latin for ‘open’ -- the name highlights its distinctive feature, that the entire development process, including its architecture, model weights, and training data and recipes, is openly accessible and fully documented.

Continue reading

Can AI master classic text adventures? Someone went on a quest to find out

AI playing text adventure game

Large language models (LLMs) have shown impressive results in many areas, but when it comes to playing classic text adventure games, they often struggle to make it past even the simplest of puzzles.

A recent experiment by Entropic Thoughts tested how well various models could navigate and solve interactive fiction, using a structured benchmark to compare results across multiple games. The takeaway was that while some models can make reasonable progress, even the best require guidance and struggle with the skills these classic problem-solving games demand.

Continue reading

AI adoption accelerates security risks in hybrid cloud

Hybrid cloud infrastructure is under mounting strain from the growing influence of artificial intelligence, according to a new report.

The study, from observability specialist Gigamon, of over 1,000 global security and IT leaders, shows breach rates have surged to 55 percent during the past year, representing a 17 percent year-on-year rise, with AI-generated attacks emerging as a key driver of this growth.

Continue reading

GenAI vulnerable to prompt injection attacks

New research shows that one in 10 prompt injection atempts against GenAI systems manage to bypass basic guardrails. Their non-deterministic nature also means failed attempts can suddenly succeed, even with identical content.

AI security company Pangea ran a Prompt Injection Challenge in March this year. The month-long initiative attracted more than 800 participants from 85 countries who attempted to bypass AI security guardrails across three virtual rooms with increasing levels of difficulty.

Continue reading

OpenAI releases GPT 4.1 models and Elon Musk should be terrified for Grok

OpenAI has just thrown a serious wrench into the AI landscape with the release of three new models: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. They’re dramatic improvements over GPT-4o, raising the bar for what AI can actually do. If Elon Musk wasn’t already nervous about Grok falling behind, he probably should be now. In comparison, Grok is starting to look a bit… dusty.

At the top of the stack is GPT-4.1, which now dominates in critical areas like coding, long-context comprehension, and instruction following. This model scores 54.6 percent on SWE-bench Verified, a benchmark designed to measure real-world software development ability. That puts it well above GPT-4o and even higher than GPT-4.5, which it’s now set to replace. Developers relying on these models to generate accurate patches or edit large codebases are going to find GPT-4.1 a lot more practical.

Continue reading

New LLM-powered engine helps secure complex cloud environments

While moving systems to the cloud delivers many benefits, it also leads to complex dynamic environments that can be a real challenge when it comes to keeping them secure.

With the launch of a new Large Language Model (LLM)-powered cloud detection engine, Sweet Security aims to cut through the noise and allow security teams to tackle these environments with greater precision and confidence.

Continue reading

New LLM assistant helps pinpoint security issues

Malicious actors have been quick to exploit AI, but often security teams are under-equipped with AI solutions to ensure adequate defense.

Red Sift is launching an upskilled LLM assistant that identifies and diagnoses misconfigurations and exposures across email, domains, and internet-facing assets, supporting security teams to prevent incidents before they happen.

Continue reading

Making LLMs safe for use in the enterprise [Q&A]

Large language models (LLMs) in a business setting can create problems since there are many ways to go about fooling them or being fooled by them.

Simbian has developed a TrustedLLM model that uses multiple layers of security controls between the user and the GenAI models in order to create a safer solution.

Continue reading

LLMs vulnerable to prompt injection attacks

As we've already seen today AI systems are becoming increasingly popular targets for attack.

New research from Snyk and Lakera looks at the risks to AI agents and LLMs from prompt injection attacks.

Continue reading

New tool lets enterprises build their own secure gen AI chatbots

Many companies have blocked access to public LLMs like ChatGPT due to security and compliance risks, preventing employees from taking advantage of the benefits of generative AI for day-to-day use.

Even when employees do have access, mainstream LLMs lack the ability to query an organization’s internal data, making insights unreliable and considerably limiting enterprise value for chat applications.

Continue reading

How AI is having an impact on software testing [Q&A]

Artificial intelligence is making its way into many areas of the tech industry, with the introduction of large language models making it much more accessible.

One of the areas where it's having a big impact is software testing, where it allows companies to provide better support to existing software teams and refocus their efforts on development.

Continue reading

The double-edged sword: Navigating data security risks in the age of Large Language Models (LLMs)

Large language models (LLMs) have emerged as powerful business and consumer tools, capable of generating human-quality text, translating languages, and even assisting in business use cases. Their ability to improve efficiency, cut costs, enhance customer experiences and provide insights make them extremely attractive for employees and managers across all industries.

As with all emerging technologies, however, security concerns regarding the interaction of these advancements with sensitive data must be addressed. With LLMs, these risks are compounded by the vast amounts of data they must use to provide value, leading to concerns about data breaches, privacy violations, and the spread of misinformation.

Continue reading

Why structured data offers LLMs tremendous benefits -- and a major challenge [Q&A]

Digital data

ChatGPT and other LLMs are designed to train and learn from unstructured data -- namely, text. This has enabled them to support a variety of powerful use cases.

However, these models struggle to analyze structured data, such as numerical and statistical information organized in databases, limiting their potential.

Continue reading

Understanding LLMs, privacy and security -- why a secure gateway approach is needed

AI Safety

Over the past year, we have seen generative AI and large language models (LLMs) go from a niche area of AI research into being one of the fastest growing areas of technology. Across the globe, around $200 billion is due to be invested in this market according to Goldman Sachs, boosting global labor productivity by one percentage point. That might not sound like much, but it would add up to $7 trillion more in the global economy.

However, while these LLM applications might have potential, there are still problems to solve around  privacy and data residency. Currently, employees at organisations can unknowingly share sensitive company data or Personal Identifiable Information (PII) on customers out to services like OpenAI. This opens up new security and data privacy risks.

Continue reading

Understanding large language models: What are they and how do they work?

intelligence

In recent years, large language models (LLMs) have revolutionized the field of natural language processing (NLP) and artificial intelligence (AI). These sophisticated models are used widely in AI solutions, such as OpenAI's ChatGPT, and have been designed to understand and generate human-like text, enabling them to perform various language-based tasks. People are incredibly excited by the potential of this technology which is poised to revolutionize how we live and work. However, to understand the true potential of LLMs, it is crucial that people know how they function.

LLMs, at their core, are neural networks trained on vast amounts of text data. They learn to predict the next word in a sentence by analyzing patterns and relationships within the training data. Through this process, they develop an understanding of grammar, syntax, and even semantic nuances. By leveraging this knowledge, these models can generate coherent and contextually relevant responses when given a prompt or query.

Continue reading

BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.

Regional iGaming Content

© 1998-2025 BetaNews, Inc. All Rights Reserved. About Us - Privacy Policy - Cookie Policy - Sitemap.