IBM launches new toolkit to boost understanding of and trust in AI
AI and machine learning have demonstrated some impressive abilities in recent years, but the models behind the technology, and the reasons they reach the decisions they do, are often hard for the people interacting with them to understand.
To help people gain insight into machine decision making, IBM Research is launching AI Explainability 360, a comprehensive open source toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.
AI Explainability 360 includes eight new algorithms developed by IBM Research, along with quantitative metrics for measuring explainability. Because it's open source, others can build on the work and learn from each other. It follows the successful releases of the Adversarial Robustness 360 Toolbox (2018) and the AI Fairness 360 Toolkit (2018), and is the latest effort from IBM Research underscoring IBM's commitment to trust and transparency.
"To provide explanations in our daily lives, we rely on a rich and expressive vocabulary: we use examples and counterexamples, create rules and prototypes, and highlight important characteristics that are present and absent," says Aleksandra Mojsilovic, writing on the IBM Research blog. "When interacting with algorithmic decisions, users will expect and demand the same level of expressiveness from AI."
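One of the techniques the quote alludes to is example-based explanation: justifying a decision by pointing to a similar, representative past case (a "prototype"). As a purely illustrative sketch of that idea (this is not the AI Explainability 360 API; the data and function names here are invented for the example), a minimal nearest-prototype lookup might look like:

```python
# Illustrative sketch: explain a decision by retrieving the most similar
# training example ("prototype"). Not the AI Explainability 360 API --
# just the underlying example-based-explanation idea in plain Python.

def squared_distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def explain_by_prototype(instance, training_data):
    """Return the (features, label) pair in training_data closest to instance."""
    return min(training_data, key=lambda ex: squared_distance(ex[0], instance))

# Hypothetical loan-approval history: (income, debt) -> decision
data = [
    ((50, 10), "approve"),
    ((20, 30), "deny"),
    ((60, 5), "approve"),
]

# "Your application resembles this past case, which was approved."
proto = explain_by_prototype((55, 8), data)
print(proto)  # ((50, 10), 'approve')
```

A real prototype-selection algorithm would also ensure the chosen examples are diverse and representative of the whole dataset, rather than simply nearest, but the retrieval-and-show step above is the core of how such explanations read to an end user.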
You can find out more about the toolkit on the IBM Research site and see an overview of what it offers in the video below.