Countering the rise of AI criminals
As generative AI tools continue to expand, new doors are being opened for fraudsters to exploit weaknesses. Have you experimented with generative AI tools like ChatGPT yet? From beating writer’s block to composing ad copy, creating travel itineraries, and kickstarting code snippets, there’s something for everyone. Unfortunately, "everyone" includes criminals.
Cybercriminals are early adopters. If there’s a shiny new technology to try, you can bet that crooks will explore how to use it to commit crimes. The earlier they can exploit this technology, the better -- this will give them a head start on defenses being put in place to block their nefarious activities. If tech helps boost the scale or sophistication of criminal attacks, it’s extra attractive. It’s no wonder cybercriminals have been loving tools like ChatGPT.
Building cyber resilience in an age of AI
Cybersecurity remains one of the most important business investments amid new threats, including those presented by generative AI. However, as businesses invest in ways to mitigate cyber risk, many are uncertain whether the increased spending is actually bolstering their organization's security posture -- often because they lack proof.
With new research highlighting that fewer organizations feel confident their business can withstand a cyber attack, how can businesses build -- and prove -- organization-wide preparedness for threats?
Education, not a watchdog, should power AI regulation
Earlier this year, several prominent tech leaders came together to sign a letter advocating a pause in the development of advanced AI models, citing their potentially "profound risk to society and humanity." This was swiftly followed by British Prime Minister Rishi Sunak proposing the creation of a new UK-based watchdog dedicated to the AI sector.
Although the move garnered mixed responses, an essential aspect seems to have been overlooked amid this debate -- a legislation-led institution may not be the most effective or comprehensive approach to regulating AI.
You need to adopt AI if you haven't already
New technologies will always receive encouragement and criticism from all sides, and artificial intelligence (AI) is no different. People have various opinions, but it’s here to stay and will continue to change how business is conducted.
Not too long ago, it was difficult for people to imagine how the internet and websites could impact their lives. Some dismissed it as a trend and saw little merit in its business applications. Now, companies need to amplify their online presence.
Does the UK really have the potential to be an AI superpower?
Earlier this year, Prime Minister Rishi Sunak announced his desire to cement the UK's status as an AI superpower. And it has been all hands on deck since then, with an AI summit set to take place in November, government funds being channeled into research, and ongoing discussions around regulation. The UK is certainly determined to secure a podium position in the AI race.
It isn't difficult to understand why such high importance is being placed on AI at a governmental level. AI is predicted to increase UK GDP by up to 10.3 percent by 2030 -- the equivalent of an additional £232 billion -- so embracing it could hugely benefit the economy, while also boosting productivity and efficiency for businesses of all sizes and sectors. In the current economic climate, when budgets are squeezed and workforces are stretched, AI has the potential to be hugely transformative. As Plamen Minev, Technical Director, AI and Cloud at Quantum, explains:
How AI and vector search are transforming analytics [Q&A]
Organizations have more data than ever, but unlocking the information it contains in order to make decisions can be a challenge.
The marriage of real-time analytics and AI with vector search is a potential game changer for any business that has large amounts of data to crunch. We spoke to Rockset CEO and co-founder Venkat Venkataramani to find out more.
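For readers unfamiliar with the technique, the sketch below shows the core idea behind vector search: documents and queries are embedded as numeric vectors, and results are ranked by similarity. This is a minimal illustration only -- the embedding values are invented and it bears no relation to Rockset's actual implementation; production systems typically use approximate nearest-neighbor indexes to stay fast at scale.

```python
import math

def cosine_similarity(a, b):
    """Measure how closely two vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- in a real system these come from a trained embedding model.
documents = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.2, 0.8, 0.1],
    "doc3": [0.1, 0.2, 0.9],
}

def search(query_vector, top_k=2):
    """Rank all documents by similarity to the query vector."""
    scored = [(cosine_similarity(query_vector, vec), doc_id)
              for doc_id, vec in documents.items()]
    return sorted(scored, reverse=True)[:top_k]

print(search([0.85, 0.15, 0.05]))  # doc1 ranks highest
```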
Americans want data privacy and they worry about AI
A new survey of over 1,000 Americans reveals that people are deeply concerned about their personal data, believe that data is priceless, want a national privacy law, and are pessimistic about what the rise of AI means for personal data.
The study, conducted for PrivacyHawk by Propeller Research, shows that 45 percent are very or extremely concerned about their personal data being exploited, breached, or exposed, and over 94 percent are at least generally concerned. Only 5.7 percent are not concerned at all about their personal data risk.
Researchers feel overwhelmed by errm… research
A new study finds that 66 percent of researchers are overwhelmed by the quantity of published work they have to review.
The survey of 500 corporate R&D workers, by research platform Iris.ai, shows that 69 percent spend at least three hours a week reviewing research documents, with 19 percent of those spending over five hours. AI could help to address this problem but is not yet being widely used.
The future of AI lies in open source
I'm almost getting sick of hearing about AI and its ability to change the world for the better, for the worse, for who knows what? But when you get to the heart of what AI is and how it can be applied to unlock value in businesses and everyday life, you have to admit that we're standing on the edge of a revolution. This revolution is likely to change our lives significantly in the short term, and perhaps tremendously so in the medium term.
It wasn't that long ago that I felt sold short by the promise of AI. About eight years ago I saw someone demonstrating a machine's ability to recognize certain flowers. Although impressive, it was a clunky experience, and while I could imagine applications, it didn't excite me. Fast forward a few years, and my real moment of surprise came when I found thispersondoesnotexist. My brain couldn't work out why these were not real people, and it stuck with me. My next big moment was podcast.ai and its first AI-generated discussion between Joe Rogan and Steve Jobs. But, as for everyone else on the planet, the real breakthrough was ChatGPT and the conversation I had with the 'Ghost in the Machine'.
Google launches new Transparency Center as a central hub for policy information
Google has announced a new online hub called Transparency Center, where it will provide information about the policies that relate to its various products and services, including AI-related policies.
The company says that in the Transparency Center, visitors can find details about the decisions and processes that resulted in certain policies, access transparency reports and more. The hub can also be used to report policy violations to Google.
Uncertainty and lack of preparedness hold back enterprise adoption of AI
IT leaders say AI solutions will allow them to accomplish more tasks in a day (78 percent) or improve their work-life balance (70 percent).
But despite this, a survey of 2,500 global IT leaders from chip maker AMD finds nearly half (46 percent) say their organization isn't ready to implement AI. Just 19 percent say their organization will prioritize AI within the next year, while 44 percent forecast a five-year timeline.
Detection needs to improve to combat evolving malware
Critical infrastructure protection specialist OPSWAT has released its latest Threat Intelligence Trends survey, looking at how organizations are managing the current threat landscape and preparing for future challenges.
It finds that 62 percent of organizations recognize the need for additional investment in tools and processes to enhance their threat intelligence capabilities. Only 22 percent have fully mature threat intelligence programs in place, though, with most indicating that they are only in the early stages or still need to invest further.
How AI is going to shape the developer experience [Q&A]
Recent developments in generative AI have led to a good deal of debate around whether jobs are at risk. Since new AI applications like OpenAI Codex and Copilot can write code, developers could be among those under threat.
We spoke to Trisha Gee, lead developer evangelist at Gradle, to find out more about how AI is likely to change the way developers work.
Microsoft is finally killing off Cortana in Windows 11 as Windows Copilot heralds an AI future
Cortana may have been Microsoft's response to Siri, but while Apple's digital assistant prevails, the Windows maker's offering has slipped into insignificance. Never much loved by users, Microsoft's Cortana has been in its death throes for a while, and now the company is finally moving on.
With the release of Windows 11 Build 25921 a few days ago, Microsoft has introduced the option to uninstall the Cortana app, and this is just the tip of the iceberg. The company had already announced plans to stop supporting Cortana in Windows as a standalone app, and that time has now come. A Microsoft Store update is also being used to forcibly deprecate the tool.
Understanding large language models: What are they and how do they work?
In recent years, large language models (LLMs) have revolutionized the field of natural language processing (NLP) and artificial intelligence (AI). These sophisticated models, used widely in AI solutions such as OpenAI's ChatGPT, are designed to understand and generate human-like text, enabling them to perform a wide range of language-based tasks. People are incredibly excited by the potential of this technology, which is poised to transform how we live and work. However, to understand the true potential of LLMs, it is crucial that people know how they function.
LLMs, at their core, are neural networks trained on vast amounts of text data. They learn to predict the next word in a sentence by analyzing patterns and relationships within the training data. Through this process, they develop an understanding of grammar, syntax, and even semantic nuances. By leveraging this knowledge, these models can generate coherent and contextually relevant responses when given a prompt or query.
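To make that concrete, here is a toy next-word predictor -- a simple bigram counter, vastly simpler than the deep neural networks inside real LLMs, but built on the same objective of predicting the next word from the words seen so far. The corpus and numbers are invented purely for illustration.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[prev_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" half the time here
```

An LLM pursues the same goal at an incomparably larger scale: instead of counting word pairs in one sentence, it tunes billions of neural network weights over vast text corpora, which is what lets it pick up the grammar, syntax, and semantic nuances described above.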