How AI can help secure the software supply chain [Q&A]
Securing the software supply chain presents many challenges. To make the process easier, OX Security recently launched OX-GPT, a ChatGPT integration aimed specifically at improving software supply chain security.
We spoke to Neatsun Ziv, co-founder and CEO of OX Security, to discuss how AI can present developers with customized fix recommendations and cut-and-paste code fixes, allowing for quick remediation of critical security issues across the software supply chain.
Can AI be sneakier than humans?
We've all heard about how AI is being used to improve cyberattacks -- by creating better phishing emails, for example -- but does AI really have the same potential for being sneaky as humans?
New research from IBM X-Force has set out to answer the question, ‘Do the current Generative AI models have the same deceptive abilities as the human mind?’
How investing in 'prompt engineering' training can contribute to business success [Q&A]
While some might argue that generative AI is eliminating the need for certain jobs, it's also increasing the need for new roles and skills such as 'prompt engineering'.
With many people looking to upskill in this area to produce better results from AI tools like ChatGPT, and some companies creating new roles to stay ahead of AI's fast-paced developments, we spoke to Mike Loukides, vice president of content strategy for O'Reilly Media, to find out more about prompt engineering and why it’s important.
Microsoft already has some major improvements planned for Windows Copilot including UI upgrade
Windows 11 users in many parts of the world are now able to access Windows Copilot -- although, notably, not in the EU (privacy law, natch).
Much has been made of Microsoft's AI-powered assistant, and while it is still early days for the ChatGPT-based tool, Microsoft is already working on significant interface changes. The focus is on making Windows Copilot more customizable rather than forcing a one-size-fits-all design onto everyone.
With the launch of Bard Extensions, Google brings AI to many more of its products and services
The relentless march of AI shows no signs of slowing, and Google is eager to steal a piece of the action from OpenAI's ChatGPT with its own Bard. It is with this aim in mind that Google has launched a huge update to its generative artificial intelligence tool in the form of Extensions.
With Bard Extensions, Google is achieving two things. Firstly, it expands the capabilities of Bard by letting it tap into the power and data of its other services including YouTube and Maps. Secondly, it makes the likes of Gmail and Drive more useful by harnessing the power of AI.
Get '10 Machine Learning Blueprints You Should Know for Cybersecurity' (worth $39.99) for FREE
Machine learning in security is harder than other domains because of the changing nature and abilities of adversaries, high stakes, and a lack of ground-truth data.
This book will prepare machine learning practitioners to effectively handle tasks in the challenging yet exciting cybersecurity space. It begins by helping you understand how advanced ML algorithms work and shows you practical examples of how they can be applied to security-specific problems with Python -- by using open source datasets or instructing you to create your own.
How organizations can safely adopt generative AI [Q&A]
Generative AI tools like ChatGPT have been in the news a lot recently. While these tools offer many benefits, they also bring risks, which have led some organizations to ban their use by staff.
However, the pace of development means that this is unlikely to be a viable approach in the long term. We talked to Randy Lariar, practice director of big data, AI and analytics at Optiv, to discover why he believes organizations need to embrace the new technology and shift their focus from preventing its use in the workplace to adopting it safely and securely.
Countering the rise of AI criminals
As generative AI tools continue to expand, new doors are being opened for fraudsters to exploit weaknesses. Have you experimented with generative AI tools like ChatGPT yet? From beating writer’s block to composing ad copy, creating travel itineraries, and kickstarting code snippets, there’s something for everyone. Unfortunately, "everyone" includes criminals.
Cybercriminals are early adopters. If there’s a shiny new technology to try, you can bet that crooks will explore how to use it to commit crimes. The earlier they can exploit this technology, the better -- this will give them a head start on defenses being put in place to block their nefarious activities. If tech helps boost the scale or sophistication of criminal attacks, it’s extra attractive. It’s no wonder cybercriminals have been loving tools like ChatGPT.
Researchers feel overwhelmed by errm… research
A new study finds that 66 percent of researchers are overwhelmed by the quantity of published work they have to review.
The survey, by research platform Iris.ai, of 500 corporate R&D workers shows that 69 percent spend at least three hours a week reviewing research documents, with 19 percent of those spending over five hours. AI could help to address this problem but is not being widely used.
OpenAI is bringing some exciting new features to ChatGPT this week
Artificial intelligence is not a technology that stands still, and the same is true of its users. As people have become increasingly familiar with AI tools, and used to working with the likes of ChatGPT, they are becoming more demanding.
In response to this, OpenAI has announced a number of significant updates that will be rolling out to ChatGPT over the course of the next few days. Among the changes are suggestions for initial queries to put to the AI, as well as recommended replies so you can delve deeper into your research.
Understanding large language models: What are they and how do they work?
In recent years, large language models (LLMs) have revolutionized the field of natural language processing (NLP) and artificial intelligence (AI). These sophisticated models are used widely in AI solutions, such as OpenAI's ChatGPT, and have been designed to understand and generate human-like text, enabling them to perform various language-based tasks. People are incredibly excited by the potential of this technology, which is poised to transform how we live and work. However, to understand the true potential of LLMs, it is crucial that people know how they function.
LLMs, at their core, are neural networks trained on vast amounts of text data. They learn to predict the next word in a sentence by analyzing patterns and relationships within the training data. Through this process, they develop an understanding of grammar, syntax, and even semantic nuances. By leveraging this knowledge, these models can generate coherent and contextually relevant responses when given a prompt or query.
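To make the "predict the next word" idea concrete, here is a minimal sketch of next-token prediction in Python, using the openly available GPT-2 model through the Hugging Face transformers library. GPT-2 is only a small stand-in (the models behind ChatGPT are far larger and not publicly downloadable), and the prompt, variable names and top-five display are illustrative choices, not anything prescribed by the article.

```python
# Minimal sketch of next-token prediction with GPT-2 via Hugging Face
# transformers (pip install transformers torch). GPT-2 stands in for the
# much larger models that power tools like ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The software supply chain is"          # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a score (logit) to every token in its vocabulary;
# softmax turns the scores at the final position into a probability
# distribution over possible next words.
with torch.no_grad():
    logits = model(**inputs).logits
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations of the prompt.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  {float(prob):.3f}")

# Repeatedly picking (or sampling) a next token and feeding it back in is
# what produces a full response; generate() wraps that loop.
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

That loop of "score every token, pick one, append it, repeat" is all that text generation is; scale, training data and fine-tuning are what separate this toy example from a production LLM.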
Cybercriminals get their very own generative AI
We've already seen how generative AI can be used in cyberattacks, but now it seems there's an AI model aimed squarely at cybercriminals.
Every hero has a nemesis, and it looks like ChatGPT's could be FraudGPT. Research from security and operations analytics company Netenrich shows that recent activity on dark web forums reveals the emergence of FraudGPT, which has been circulating on Telegram channels since July 22.
New tool uses AI to help ensure AI-generated content is fit for humans
Experts reckon that over 90 percent of internet content could be AI-generated by the end of the decade. But we all know that AI isn't perfect; it can introduce biases and errors.
Checking material to ensure it's suitable for the target audience is therefore essential. User experience research platform WEVO is launching a new research tool, WEVO 3.0, to ensure that AI-generated products and experiences are well received by their target human audience.
When putting AI to work, remember: It's just a talented intern
Artificial intelligence (AI) models have been generating a lot of buzz as valuable tools for everything from cutting costs and improving revenues to playing an essential role in unified observability.
But for as much value as AI brings to the table, it’s important to remember that AI is the intern on your team. A brilliant intern, for sure -- smart, hard-working and quick as lightning -- but also a little too confident in its opinions, even when it’s completely wrong.
The official ChatGPT app for Android is just days away -- but you can pre-order now!
The popularity of OpenAI's ChatGPT led to a seemingly endless stream of fake mobile apps popping up in Google Play. Now, a couple of months after the official app was released for iOS, ChatGPT for Android is due to land in the coming days.
OpenAI has announced that the Android version of the ChatGPT app is launching in the last week of July, but the company has not revealed a precise date. If you want to be sure to get hold of the app as soon as possible, you can pre-register, and it will be installed the moment it is released.