Organizations flock to generative AI despite security concerns


A new survey of over 900 global IT decision makers shows that although 89 percent of organizations consider GenAI tools like ChatGPT to be a potential security risk, 95 percent are already using them in some form within their businesses.
The research for Zscaler, carried out by Sapio Research, also reveals 23 percent of those using GenAI aren't monitoring the usage at all, and 33 percent have yet to implement any additional GenAI-related security measures -- though many have it on their roadmap.
Why ChatGPT won't solve your real-time translation needs


New technologies debut almost every day. This constant barrage of novel tools creates a perpetual cycle of overshadowing -- someone is always introducing a new technology that eclipses the previous innovation, and then something even newer comes out, and the cycle repeats itself. However, OpenAI’s ChatGPT broke that cycle.
Since ChatGPT’s debut in late 2022, the generative AI tool has exploded in popularity. It took just two months for the platform to reach 100 million users, a speed that shattered the previous record for fastest-growing app. The creators of ChatGPT expect the tool to generate $200 million this year and project that number will grow to $1 billion next year. Other businesses, like Google and Grammarly, are taking note. Both organizations have developed their own generative AI tools to enhance their business operations.
Get 'ChatGPT For Dummies' (worth $12) for FREE


ChatGPT For Dummies demystifies the artificial intelligence tool that can answer questions, write essays, and generate just about any kind of text it’s asked for.
This powerful example of generative AI is widely predicted to upend education and business. In this book, you’ll learn how ChatGPT works and how you can operate it in a way that yields satisfactory results.
How to supercharge your productivity with AI: Tips and tools to work smarter, not harder


Productivity matters in today's fast-paced work environment. AI can boost efficiency by eliminating unnecessary tasks, automating repetitive procedures, surfacing valuable insights, and streamlining workflows, improving both professional productivity and cloud data management.
This piece explores the potential of artificial intelligence (AI) to enhance productivity.
How AI can help secure the software supply chain [Q&A]


Securing the software supply chain presents many challenges. To make the process easier, OX Security recently launched OX-GPT, a ChatGPT integration aimed specifically at improving software supply chain security.
We spoke to Neatsun Ziv, co-founder and CEO of OX Security, to discuss how AI can present developers with customized fix recommendations and cut and paste code fixes, allowing for quick remediation of critical security issues across the software supply chain.
Can AI be sneakier than humans?


We've all heard about how AI is being used to improve cyberattacks, for example by creating better phishing emails, but does AI really have the same potential for sneakiness as humans?
New research from IBM X-Force has set out to answer the question, ‘Do the current Generative AI models have the same deceptive abilities as the human mind?’
How investing in 'prompt engineering' training can contribute to business success [Q&A]


While some might argue that generative AI is eliminating the need for certain jobs, it's also increasing the need for new roles and skills such as 'prompt engineering'.
With many people looking to upskill in this area to produce better results from AI tools like ChatGPT, and some companies creating new roles to stay ahead of AI's fast-paced developments, we spoke to Mike Loukides, vice president of content strategy for O'Reilly Media, to find out more about prompt engineering and why it’s important.
Microsoft already has some major improvements planned for Windows Copilot including UI upgrade


Windows 11 users in many parts of the world are now able to access Windows Copilot -- although, notably, not in the EU (privacy law, natch).
Much has been made of Microsoft's AI-powered assistant, and while it is still early days for the ChatGPT-based tool, Microsoft is already working on significant interface changes. The focus is on making Windows Copilot more customizable rather than forcing a one-size-fits-all design onto everyone.
With the launch of Bard Extensions, Google brings AI to many more of its products and services


The relentless march of AI shows no signs of slowing, and Google is eager to steal a piece of the action from OpenAI's ChatGPT with its own Bard. It is with this aim in mind that Google has launched a huge update to its generative artificial intelligence tool in the form of Extensions.
With Bard Extensions, Google is achieving two things. Firstly, it expands the capabilities of Bard by letting it tap into the power and data of its other services including YouTube and Maps. Secondly, it makes the likes of Gmail and Drive more useful by harnessing the power of AI.
Get '10 Machine Learning Blueprints You Should Know for Cybersecurity' (worth $39.99) for FREE


Machine learning in security is harder than other domains because of the changing nature and abilities of adversaries, high stakes, and a lack of ground-truth data.
This book will prepare machine learning practitioners to effectively handle tasks in the challenging yet exciting cybersecurity space. It begins by helping you understand how advanced ML algorithms work and shows you practical examples of how they can be applied to security-specific problems with Python -- by using open source datasets or instructing you to create your own.
How organizations can safely adopt generative AI [Q&A]


Generative AI tools like ChatGPT have been in the news a lot recently. While they offer many benefits, they also bring risks, which have led some organizations to ban their use by staff.
However, the pace of development means that this is unlikely to be a viable approach in the long term. We talked to Randy Lariar, practice director of big data, AI and analytics at Optiv, to discover why he believes organizations need to embrace the new technology and shift their focus from preventing its use in the workplace to adopting it safely and securely.
Countering the rise of AI criminals


As generative AI tools continue to expand, new doors are being opened for fraudsters to exploit weaknesses. Have you experimented with generative AI tools like ChatGPT yet? From beating writer’s block to composing ad copy, creating travel itineraries, and kickstarting code snippets, there’s something for everyone. Unfortunately, "everyone" includes criminals.
Cybercriminals are early adopters. If there’s a shiny new technology to try, you can bet that crooks will explore how to use it to commit crimes. The earlier they can exploit this technology, the better -- this will give them a head start on defenses being put in place to block their nefarious activities. If tech helps boost the scale or sophistication of criminal attacks, it’s extra attractive. It’s no wonder cybercriminals have been loving tools like ChatGPT.
Researchers feel overwhelmed by errm… research


A new study finds that 66 percent of researchers are overwhelmed by the quantity of published work they have to review.
The survey, by research platform Iris.ai, of 500 corporate R&D workers shows that 69 percent spend at least three hours a week reviewing research documents, with 19 percent of those spending over five hours. AI could help to address this problem but is not being widely used.
OpenAI is bringing some exciting new features to ChatGPT this week


Artificial intelligence is not a technology that stands still, and the same is true of its users. As people have become increasingly familiar with AI tools, and used to working with the likes of ChatGPT, they are becoming more demanding.
In response to this, OpenAI has announced a number of significant updates that will be rolling out to ChatGPT over the course of the next few days. Among the changes are suggestions for initial queries to put to the AI, as well as recommended replies so you can delve deeper into your research.
Understanding large language models: What are they and how do they work?


In recent years, large language models (LLMs) have revolutionized the field of natural language processing (NLP) and artificial intelligence (AI). These sophisticated models are used widely in AI solutions, such as OpenAI's ChatGPT, and have been designed to understand and generate human-like text, enabling them to perform various language-based tasks. People are incredibly excited by the potential of this technology which is poised to revolutionize how we live and work. However, to understand the true potential of LLMs, it is crucial that people know how they function.
LLMs, at their core, are neural networks trained on vast amounts of text data. They learn to predict the next word in a sentence by analyzing patterns and relationships within the training data. Through this process, they develop an understanding of grammar, syntax, and even semantic nuances. By leveraging this knowledge, these models can generate coherent and contextually relevant responses when given a prompt or query.
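The predict-the-next-word objective described above can be illustrated with a deliberately tiny sketch. Real LLMs use deep neural networks trained on billions of words, but a simple bigram model (counting which word most often follows each word in a toy corpus, an illustrative assumption, not how production models work) captures the same core idea:

```python
from collections import Counter, defaultdict

# Toy training corpus (hypothetical example text).
corpus = (
    "the model predicts the next word . "
    "the model learns patterns in the text . "
    "the text trains the model ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))   # most frequent follower of "the" is "model"
```

An LLM does the same thing in spirit, but instead of raw counts it learns dense numerical representations that let it generalize to word sequences it has never seen, which is where the grasp of grammar, syntax, and semantics mentioned above comes from.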
© 1998-2025 BetaNews, Inc. All Rights Reserved. Privacy Policy - Cookie Policy.