Key developments and challenges in LLMs [Q&A]


Large language models (LLMs) have undergone rapid evolution in recent years, but they are often viewed as something of a 'black box': a lack of transparency makes it difficult to understand how decisions are made, trace errors, or identify biases within the model.
We spoke to Pramod Beligere, vice president and generative AI practice head at Hexaware, to discuss this, along with the tools being developed -- such as explainable AI and interpretable models -- to make AI systems more understandable, trustworthy and accountable.
AI agents and what they mean for the enterprise [Q&A]


Artificial intelligence is creeping into more and more areas of business, and consequently we’re seeing it used for ever more complex work.
We spoke to Trey Doig, CTO and co-founder at Echo AI, to find out more about AI agents and how they can help solve problems and carry out tasks.
Why safe use of GenAI requires a new approach to unstructured data management [Q&A]


Large language models generally train on unstructured data such as text and media. But most enterprise data security strategies are designed around structured data (data organized in traditional databases or formal schemas).
The use of unstructured data in GenAI introduces new challenges for governance, privacy and security that these traditional approaches aren't equipped to handle.
Evaluating LLM safety, bias and accuracy [Q&A]


Large language models (LLMs) are making their way into more and more areas of our lives. But although they're improving all the time, they're still far from perfect and can produce unpredictable results.
We spoke to Anand Kannappan, CEO of Patronus AI, to discuss how businesses can adopt LLMs safely and avoid the pitfalls.
How businesses need to address the security risks of LLMs [Q&A]


Businesses are increasingly adopting AI and large language models in search of greater efficiency and savings. But these tools also present risks when it comes to cybersecurity.
We spoke to Aqsa Taylor, director of product management at Gutsy, to learn more about these risks and what organizations can do to address them.
SK hynix announces PCB01 SSD for AI-enabled PCs


SK hynix has announced its newest solid-state drive, the PCB01, built around an 8-channel PCIe Gen5 (fifth-generation) interface aimed at raising data transfer speeds.
The PCB01 features sequential read and write speeds of up to 14GB/s and 12GB/s respectively, notably higher than those of previous models. These speeds are intended to support operations involving the large language models (LLMs) used in AI training and inference tasks.
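To put those speeds in context, here is a back-of-envelope estimate (our own illustration, not from SK hynix's announcement) of why sequential read bandwidth matters for on-device LLM work: model weights have to stream from the SSD into memory before inference can start.

# Rough load-time estimate for LLM weights at PCB01's claimed read speed.
# Model sizes are illustrative fp16 approximations, not benchmark data.
READ_SPEED_GBPS = 14  # sequential read, GB/s

models = {
    "7B params, fp16 (~14 GB)": 14,
    "13B params, fp16 (~26 GB)": 26,
    "70B params, fp16 (~140 GB)": 140,
}

for name, size_gb in models.items():
    seconds = size_gb / READ_SPEED_GBPS
    print(f"{name}: ~{seconds:.0f} s to load at {READ_SPEED_GBPS} GB/s")

At 14GB/s, even a 70B-parameter model's weights stream in roughly ten seconds, versus around 40 seconds on a typical 3.5GB/s PCIe Gen3 drive.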
Get 'Unlocking the Secrets of Prompt Engineering' (worth $35.99) for FREE


Unlocking the Secrets of Prompt Engineering propels you into the world of large language models (LLMs), empowering you to create and apply effective prompts for diverse applications, from content creation and chatbots to coding assistance.
Starting with the fundamentals of prompt engineering, this guide provides a solid foundation in LLM prompts, their components, and applications. Through practical examples and use cases, you'll discover how LLMs can be used for generating product descriptions, personalized emails, social media posts, and even creative writing projects like fiction and poetry.
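As a flavor of the component-based approach such guides teach, here is a minimal illustrative prompt (our own sketch, not an excerpt from the book) split into role, context, task and output format; the product details are invented.

# A minimal prompt assembled from the typical components: role,
# context, task, and output format. All specifics are hypothetical.
prompt = (
    "Role: You are an experienced e-commerce copywriter.\n"
    "Context: The product is a 750 ml stainless-steel insulated bottle.\n"
    "Task: Write a three-sentence description highlighting durability.\n"
    "Format: Plain prose, no bullet points, under 60 words."
)
print(prompt)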
New platform aims to enhance AI research accuracy


AI is making its way into more and more areas of life and work. In some areas though, particularly scientific research, it's vitally important to ensure the accuracy of results.
Norwegian company Iris.ai has developed a method to measure the factual accuracy of AI-generated content, using tests of precision and recall, fact tracing, and data extraction.
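In this setting, precision asks how many of the facts the model stated are correct, while recall asks how many of the expected facts it actually reproduced. A minimal sketch, assuming a naive exact-match comparison of extracted fact strings (real systems, including Iris.ai's, match far more loosely):

# Toy precision/recall over extracted facts, using set comparison.
# The fact strings are invented; a real pipeline matches semantically.
generated = {"water boils at 100C", "the sun is a planet",
             "DNA is a double helix"}
reference = {"water boils at 100C", "DNA is a double helix",
             "light travels at 3e8 m/s"}

true_positives = generated & reference
precision = len(true_positives) / len(generated)  # correct / all stated
recall = len(true_positives) / len(reference)     # correct / all expected

print(f"precision={precision:.2f}, recall={recall:.2f}")  # 0.67, 0.67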
Most AI detectors can't tell if a phishing email was written by a bot


The latest Phishing Threat Trends Report from Egress, based on data from its Egress Defend email security tool, reveals that AI detectors can't reliably tell whether nearly three-quarters of phishing emails have been written by a chatbot.
Because they utilize large language models (LLMs), most detector tools become more accurate with longer samples, often requiring a minimum of 250 characters to work at all. With 44.9 percent of phishing emails falling below that 250-character minimum, and a further 26.5 percent falling below 500 characters, AI detectors currently either won't work reliably or won't work at all on 71.4 percent of attacks.
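The 71.4 percent figure is simply the two length buckets combined: 44.9 percent under 250 characters plus 26.5 percent between 250 and 500. A hedged sketch of how such a length gate might look in practice (our own illustration, not Egress's implementation):

# Illustrative length gate like the one the report describes: detectors
# need a minimum number of characters before their verdict is reliable.
MIN_USABLE = 250    # below this, many detectors won't run at all
MIN_RELIABLE = 500  # below this, accuracy degrades per the report

def detector_applicability(email_body: str) -> str:
    n = len(email_body)
    if n < MIN_USABLE:
        return "detector won't run"   # 44.9% of phishing emails
    if n < MIN_RELIABLE:
        return "unreliable verdict"   # a further 26.5%
    return "detector applicable"

print(detector_applicability("Click here to reset your password."))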
Mayo Clinic embraces Microsoft 365 Copilot


Mayo Clinic, one of the world's leading medical care providers, has begun deploying Microsoft 365 Copilot, an early large-scale test of generative AI for enterprise productivity in healthcare.
Microsoft 365 Copilot is a generative AI service that combines large language models (LLMs) with an organization's data in Microsoft 365. The service is designed to streamline routine tasks, freeing staff to focus on more critical work.