Companies lack policies to deal with GenAI use
While 27 percent of security experts perceive AI and deepfakes to be the biggest cybersecurity threats to their organisations, not all organisations have a responsible use policy in place.
KnowBe4 has today released the third part of a survey of over 200 information security professionals carried out at Infosecurity Europe 2024. It finds that 31 percent of security professionals admit their company does not currently have a 'responsible use' policy covering generative AI.
"AI and deepfakes are the latest tools in the arsenal of cybercriminals, primarily used in social engineering campaigns," says Javvad Malik, lead security awareness advocate KnowBe4. "While the technology itself is undeniably impressive, it's crucial to recognise that the real threat lies in how these tools are wielded to manipulate and deceive unsuspecting individuals. We must invest in helping individuals make better security decisions, providing them with the knowledge and tools they need to identify and avoid potential threats."
The study also shows that nine percent of respondents say they have policies in place but know employees are still using GenAI carelessly. Only two in five (41 percent) have policies that have been agreed and signed by employees.
In addition, 10 percent have policies in place but know users have not yet agreed to or signed them. Over a third (38 percent) of security professionals say they have witnessed employees using GenAI at work, and 31 percent admit to using it themselves.
"As AI becomes increasingly integrated into business processes, organisations must establish clear guidelines and policies around its responsible use," adds Malik. "This includes considering the ethical implications of AI, ensuring transparency and accountability, and mitigating potential biases or unintended consequences."
You can get the full report on the KnowBe4 site.
Image credit: Dmyrto_Z/depositphotos.com