AI -- separating the reality from the hype to deliver better solutions [Q&A]
Artificial intelligence is making its way into more and more areas of our lives. But how much of this is hype and how much is genuine innovation? And can improved AI learning models deliver better commercial solutions?
We spoke to Ben Lamm, CEO of Hypergiant Industries, a company at the cutting edge of solving problems with AI, to find out.
BN: AI seems to be a buzz phrase in everyone's marketing at the moment. How can we distinguish the genuine from the hype?
BL: You should assume a solid majority of it is hype. Most people are still building automation solutions and few people are anywhere near building platforms that qualify as true artificial intelligence. If someone is trying to sell you an 'AI platform' that they say will benefit your business and there is no specific discussion of access to data, use cases, or training of models against your data, then it is probably hype.
True AI tends to be a critical part of the solution to real world problems. Hype AI tends to be added as decoration on conventional rule-driven software, or deployed in contrived situations rigged for success. True AI systems create a process flow in which the systems involved interact with each other intelligently and with minimal human intervention.
When we talk about artificial general intelligence, we would be dealing with a program that has been structured to create and build new things on its own -- like a human. Most solutions you see on the market are simply programs that use data to make decisions. Those programs aren't sentient, and they don't have any of the characteristics of human brains.
If you're unsure whether something is AI, it's probably not. Even a computer program that outperforms humans at a game like Go is just a program that can rapidly process many if/then scenarios with prediction patterns and probability matrices. The same program cannot create a new and different game and then win at it.
Culturally, we have softened the definition of AI over time -- so now when we talk about AI, we often use it as shorthand for programs that make a human task easier or more effective. To determine what is hype, ask yourself: does this simplify a process, or can it create a new process unaided? If it's the latter, it's true AI. If it's the former, we are probably still calling it AI anyway.
A fair number of companies are talking about AI as if they have it while keeping it mysterious. That's a clear indication of potential hype, as AI is an entire field with multiple different parts, all of which have practical applications and require an enormous amount of data to train the appropriate models and deliver value to businesses. If someone is talking about AI without talking about data, or talking about data in a buzzwordy, hand-wavy way, it's probably hype.
BN: Do businesses that don't adopt AI risk being left behind?
BL: Technology advancement is an immutable force, and like all such forces it will destroy those that don't adopt or adapt to it. Advances in machine intelligence and automated processing make businesses more efficient and reduce labor costs, production times, error rates, and system inefficiencies. These improvements cut costs and improve products, which helps companies grow market share and increase margins. For most large companies, or companies in competitive industries, AI will be critical to their success.
BN: How can AI learning models be improved, and what's the importance of 'supervised' learning?
BL: Bias is a well-discussed problem with AI models. Most models are biased because they are built from biased data, and all data is biased in some basic way by the very nature of how it was collected.
We need to start to look at various other ways of improving the data. One hypothesis that we are piloting is a program called the Hypergiant Ethical Reasoning Model. The Ethical Reasoning Model is designed to ensure a rigorous ethical review for any AI use case proposed by our clients. The three core steps are:
- Establishing Goodwill (the use case has a positive intent)
- Applying the Categorical Imperative Test (the use case can be applied broadly without a negative ethical impact)
- Conforming to the Law of Humanity (the use case has a direct benefit for people and society)
This helps us to limit what we work on to ensure that we are building AI models from a thoughtful position. We need more people thinking about the broad impacts of technology in order to improve the models, and we need to have many more checks on the technology to ensure that it is working exactly as we want and not in ways harmful to the environment.
Finally, we improve AI by improving technology as an industry: that means more diversity of ideas, of people, of backgrounds, and of points of view. When we have people with the same backgrounds, cultures, and viewpoints in any field, we end up with limited solutions full of blind spots.