IBM fairness toolkit aims to eliminate bias in data sets
IBM is announcing changes to its AI Fairness 360 toolkit to increase its functionality and make it available to a wider range of developers.
AIF360 is an open source toolkit containing over 70 fairness metrics and 11 state-of-the-art bias mitigation algorithms developed by the research community. It helps developers examine, report on, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.
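To give a sense of what a group-fairness metric measures, here is a minimal sketch of statistical parity difference, one of the standard metrics in this space: the gap in favorable-outcome rates between an unprivileged and a privileged group. The helper function below is illustrative only and is not the AIF360 API.

```python
# Hypothetical helper (not part of AIF360) computing statistical parity
# difference: P(favorable | unprivileged) - P(favorable | privileged).
# A value of 0 means parity; negative means the unprivileged group
# receives the favorable outcome less often.

def statistical_parity_difference(labels, groups, favorable=1, privileged=1):
    priv = [y for y, g in zip(labels, groups) if g == privileged]
    unpriv = [y for y, g in zip(labels, groups) if g != privileged]
    rate = lambda ys: sum(1 for y in ys if y == favorable) / len(ys)
    return rate(unpriv) - rate(priv)

# Toy data: the privileged group (1) gets the favorable outcome 3/4 of the
# time, the unprivileged group (0) only 1/4 of the time.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(labels, groups))  # -0.5
```

Metrics like this are what the toolkit reports across a model's lifecycle, so developers can quantify disparities before choosing a mitigation algorithm.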
The update brings increased compatibility with the popular data science tools scikit-learn and R. This includes the ability to swap in debiasing algorithms and fairness metrics, plus out-of-the-box support for datasets such as Adult Census Income and Bank Marketing. Bias mitigation algorithms on offer include Reweighing, Adversarial Debiasing, and Calibrated Equalized Odds.
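Of the algorithms named above, Reweighing (due to Kamiran and Calders) is the simplest to sketch: it is a pre-processing step that assigns each (group, label) cell the weight P(group) × P(label) / P(group, label), so that group membership and outcome look statistically independent once weights are applied. The function below is an illustrative reimplementation of that formula, not the library's own API.

```python
from collections import Counter

# Illustrative sketch of the Reweighing idea (not AIF360's implementation):
# weight(g, y) = P(g) * P(y) / P(g, y), estimated from counts. Cells that
# are under-represented relative to independence get weights above 1.

def reweighing_weights(groups, labels):
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    cell_counts = Counter(zip(groups, labels))
    return {
        (g, y): (group_counts[g] * label_counts[y]) / (n * c)
        for (g, y), c in cell_counts.items()
    }

groups = [1, 1, 1, 1, 0, 0, 0, 0]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
weights = reweighing_weights(groups, labels)
# The rare favorable outcome in the unprivileged group is up-weighted:
print(weights[(0, 1)])  # 2.0
```

With these weights applied, the weighted favorable-outcome rate is the same for both groups, which is exactly the condition a downstream classifier is then trained under.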
Writing on the IBM Developer blog, Willie Tejada, GM and chief developer advocate at IBM Cloud and Cognitive Software, says: "AI fairness is an important topic as machine learning models are increasingly used for high-stakes decisions. Machine learning discovers and generalizes patterns in the data and therefore, could replicate systematic advantages of privileged groups. To ensure fairness, we must analyze and address any cognitive bias that might be present in our training data or models."
These efforts come as part of IBM's broader mission to develop and apply tools that wire AI systems for trust, fairness, and accountability. In 2018, the company launched Watson OpenScale, which it describes as the first solution of its kind to give business users and other non-data scientists the ability to monitor their AI and ML models for algorithmic bias and to see explanations of AI outputs.
You can find out more on the IBM Developer blog.