AI will create larger issues in 2020

Many of the predictions we saw around artificial intelligence (AI) for 2019 leaned toward one extreme or the other -- ranging from the notion that AI will no longer be a thing to the idea that it will realize its full potential and fundamentally change how industries work. Advancing AI is an incremental process, and it's unrealistic to think the world will suddenly abandon it entirely or dramatically accelerate its development.

But in the security industry, we have still seen progress surrounding AI, as we've gotten better at using machine learning to recognize behaviors and flag security anomalies. In most cases, security technology can now correlate that anomalous behavior with threat intelligence and contextual data from other systems. It can also leverage automated investigative actions to give an analyst a clear picture of whether something is malicious, with minimal human intervention.
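To make that workflow concrete, here is a minimal, illustrative sketch -- not LogRhythm's implementation -- of the pattern described above: an unsupervised model flags anomalous login events, and a hypothetical threat intelligence lookup enriches each flagged event before it reaches an analyst. The feature set, IP address, and THREAT_INTEL feed are invented for the example.

```python
# Illustrative only: anomaly detection on login events, then enrichment with a
# hypothetical threat intelligence feed before handing results to an analyst.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical login telemetry: [hour_of_day, megabytes_transferred, failed_attempts]
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(13, 2, 500),     # logins clustered around business hours
    rng.normal(5, 1.5, 500),    # modest data transfer
    rng.poisson(0.2, 500),      # rare failed attempts
])
suspicious = np.array([[3.0, 250.0, 9.0]])  # 3 a.m., huge transfer, many failures
events = np.vstack([normal, suspicious])

# Unsupervised model learns a baseline and flags events that deviate from it
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)               # -1 = anomalous, 1 = normal

# Hypothetical enrichment step: correlate anomalies with threat intel / context
THREAT_INTEL = {"203.0.113.7": "known C2 infrastructure"}   # placeholder feed

def enrich(event, source_ip="203.0.113.7"):
    """Attach contextual data so the analyst sees a fuller picture."""
    return {
        "event": event.tolist(),
        "threat_intel": THREAT_INTEL.get(source_ip, "no match"),
        "verdict_hint": "likely malicious" if source_ip in THREAT_INTEL else "review",
    }

for event, flag in zip(events, flags):
    if flag == -1:
        print(enrich(event))   # analyst reviews; corrective action stays manual
```

Note that the model only surfaces and enriches the anomaly; the decision to act on it is deliberately left to a human, which is exactly the trust gap discussed next.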

In essence, AI is handling the bulk of a security analyst's work. As a society, though, we still don't trust AI enough to take it to the next level -- fully trusting it to take corrective action on the anomalies it identifies. Those actions still require human intervention and judgment. And I think the security use case mirrors where the tech industry stands with AI as a whole.

Even so, we can expect to see developments in AI throughout 2020 -- for both benign and malicious purposes. What's one of the possibilities? My prediction for 2020 is that an insider will manipulate AI to put an innocent person in prison.

Because people train AI, it adopts the same human biases we thought it would ignore. That hasn't stopped the legal system from employing it, however. Just last year, a judge ordered Amazon to turn over Echo recordings in a double murder case. With AI already primed to make biased decisions based on the information it receives, an insider could exploit this by feeding it false information to more directly implicate someone in a crime. In making AI more human, we also increase the likelihood that it makes mistakes.

While we’ll certainly see opportunities to expand upon AI’s capabilities, we shouldn’t expect to see the types of breakthroughs that are necessary to realize what people tend to imagine when they think of true AI -- technologies that require zero human intervention to achieve meaningful results.

James Carder is CSO and Vice President of LogRhythm Labs. He brings more than 22 years of experience working in corporate IT security and consulting for the Fortune 500 and U.S. Government. At LogRhythm, he develops and maintains the company's security governance model and risk strategies, protects the confidentiality, integrity, and availability of information assets, and oversees both threat and vulnerability management and the security operations center (SOC). He also directs the mission and strategic vision for the LogRhythm Labs machine data intelligence, threat, and compliance research teams.
