Understanding LLMs, privacy and security -- why a secure gateway approach is needed


Over the past year, generative AI and large language models (LLMs) have gone from a niche area of AI research to one of the fastest-growing areas of technology. According to Goldman Sachs, around $200 billion is due to be invested in this market worldwide, boosting global labor productivity by one percentage point. That might not sound like much, but it would add up to $7 trillion more in the global economy.
However, while these LLM applications have clear potential, there are still problems to solve around privacy and data residency. Today, employees can unknowingly share sensitive company data or customers' Personally Identifiable Information (PII) with external services like OpenAI, opening up new security and data privacy risks.
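One way to reduce that risk is to route all LLM traffic through a gateway that strips sensitive data before it ever leaves the organisation. The sketch below is a minimal illustration of that idea rather than a description of any particular product; the regular-expression patterns and function names are our own assumptions, and a production gateway would rely on dedicated PII detection, logging and policy controls.

```python
import re

# Illustrative patterns only -- real gateways use dedicated PII detection services.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before the prompt leaves the network."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


def gateway_forward(prompt: str) -> str:
    """Sanitise the prompt, then hand it off to the external LLM service."""
    safe_prompt = redact(prompt)
    # The call to the upstream provider (e.g. the OpenAI API) would go here;
    # it is omitted so the sketch stays self-contained and runnable offline.
    return safe_prompt


print(gateway_forward("Email jane.doe@example.com about card 4111 1111 1111 1111"))
```

In practice the sanitised prompt would then be forwarded to the upstream provider, with the gateway keeping an audit trail of what was removed and why.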
How investing in 'prompt engineering' training can contribute to business success [Q&A]


While some might argue that generative AI is eliminating the need for certain jobs, it's also increasing the need for new roles and skills such as 'prompt engineering'.
With many people looking to upskill in this area to produce better results from AI tools like ChatGPT, and some companies creating new roles to stay ahead of AI's fast-paced developments, we spoke to Mike Loukides, vice president of content strategy for O'Reilly Media, to find out more about prompt engineering and why it’s important.
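As a rough illustration of what the skill involves (the example below is ours, not drawn from the interview), much of prompt engineering comes down to spelling out the role, audience, constraints and output format that a bare request leaves implicit.

```python
# A bare request leaves the model to guess the audience, scope, and format.
basic_prompt = "Explain large language models."

# An engineered prompt pins down role, audience, length, and output shape.
engineered_prompt = (
    "You are a technology analyst writing for business executives.\n"
    "Explain what large language models are in no more than 150 words.\n"
    "Avoid jargon, include one concrete workplace example, and end with a "
    "one-sentence takeaway as a bullet point."
)
```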
Understanding large language models: What are they and how do they work?


In recent years, large language models (LLMs) have revolutionized the fields of natural language processing (NLP) and artificial intelligence (AI). These sophisticated models are widely used in AI solutions such as OpenAI's ChatGPT, and are designed to understand and generate human-like text, enabling them to perform a range of language-based tasks. People are understandably excited by the potential of a technology that is poised to change how we live and work. To understand that potential, however, it is crucial to know how LLMs actually function.
LLMs, at their core, are neural networks trained on vast amounts of text data. They learn to predict the next word in a sentence by analyzing patterns and relationships within the training data. Through this process, they develop an understanding of grammar, syntax, and even semantic nuances. By leveraging this knowledge, these models can generate coherent and contextually relevant responses when given a prompt or query.
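A minimal sketch of that next-word prediction step is shown below, assuming the Hugging Face transformers and PyTorch packages are installed and using the small, publicly available GPT-2 model as a stand-in for much larger LLMs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small, publicly available model stands in for the much larger LLMs behind ChatGPT.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models are trained to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

# The final position holds the model's score for every possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Picking one of these high-probability tokens, appending it to the prompt and repeating the loop is, at a high level, how such models generate whole passages of text.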
Microsoft is about to launch multi-modal GPT-4 complete with video


Microsoft has revealed that it is about to launch GPT-4, and this time it is about more than just text and chat. The next iteration of the AI technology is described as multi-modal, meaning it supports much more -- including video.
The news came at an event in Germany called KI im Fokus (AI in Focus) on Thursday. Here, Microsoft Germany's CTO and Lead Data & AI STU, Andreas Braun, said: "We will introduce GPT-4 next week, there we will have multimodal models that will offer completely different possibilities -- for example videos".