Assessing AI Bias in Enterprise Systems: A Harvard Business Review Case Study

The case study "Programming a 'Fairer' System: Assessing Bias in Enterprise AI Products (A)" by Mary Gentile and Adriana Krasniansky, published on December 8, 2020, delves into a critical ethical dilemma surrounding the use of Artificial Intelligence in the criminal justice system. The central focus is on COMPAS, an AI software developed by Northpointe, Inc., designed to predict a defendant's likelihood of recidivism.
The Problem:
Timothy Brennan, the founder and CEO of Northpointe, originally created COMPAS to standardize decision-making and reduce human bias in court rulings. However, a 2016 investigative report by ProPublica found that COMPAS disproportionately flagged Black defendants as high risk of reoffending compared with White defendants, even among those who did not go on to reoffend. This finding presents a significant challenge for Brennan and Northpointe.
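To make the claim concrete, the disparity at issue is a gap in false positive rates: how often defendants who did not reoffend were nonetheless scored as high risk, broken down by race. Below is a minimal sketch of that kind of audit. The column names ("race", "risk_score", "reoffended"), the 1-10 decile scale, and the high-risk cutoff of 7 are hypothetical illustrations, not COMPAS's actual schema or methodology.

```python
# A minimal sketch of the disparity check an auditor might run.
# All column names, scores, and the threshold are hypothetical.
import pandas as pd

def false_positive_rate_by_group(df: pd.DataFrame, threshold: int = 7) -> pd.Series:
    """Share of defendants who did NOT reoffend but were flagged high risk, per group."""
    non_reoffenders = df[df["reoffended"] == 0]
    flagged = non_reoffenders["risk_score"] >= threshold  # boolean Series
    return flagged.groupby(non_reoffenders["race"]).mean()

# Tiny hypothetical dataset (scores on a 1-10 decile scale).
df = pd.DataFrame({
    "race":       ["Black", "Black", "Black", "White", "White", "White"],
    "risk_score": [8, 6, 9, 3, 7, 2],
    "reoffended": [0, 0, 1, 0, 0, 0],
})
print(false_positive_rate_by_group(df))
```

A large gap between the per-group rates is the kind of signal the investigative report described; a real audit would of course use the full case data and test whether the gap persists after controlling for prior record and offense type.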
The Dilemma:
Brennan faces a complex situation:
- Investigating Bias: He needs to thoroughly investigate the claims of bias within COMPAS.
- Protecting Intellectual Property: Adjusting the software's code to address bias could reduce its predictive performance or expose proprietary algorithms to competitors.
- Maintaining Trust and Reputation: The company's reputation and the integrity of the justice system are at stake.
The Case Study's Focus (A Case):
This "A" case study focuses on Brennan's challenge in organizing a response to investigate the bias. It requires students to consider how he should approach the investigation, manage the technical and ethical complexities, and balance the need for fairness with the protection of his company's intellectual property and product performance.
Key Considerations for Students:
- How should Brennan structure the investigation?
- What are the ethical implications of deploying biased AI?
- What are the potential technical solutions for mitigating bias? (One common post-processing approach is sketched after this list.)
- How can Northpointe communicate its response to stakeholders (courts, public, legal professionals)?
- What are the long-term consequences for the company and the justice system if the bias is not addressed effectively?
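On the mitigation question, one standard post-processing idea from the fairness literature (illustrative only; the case does not prescribe a fix, and this is not Northpointe's method) is to choose per-group score thresholds that hold each group's false positive rate at or below a chosen target. The function, data, and target rate below are all hypothetical.

```python
# Sketch of threshold post-processing: pick, for each group, the smallest
# high-risk cutoff whose false positive rate stays under a target.
# Purely illustrative; scores, groups, and the 0.2 target are hypothetical.
import numpy as np

def threshold_for_target_fpr(scores: np.ndarray,
                             reoffended: np.ndarray,
                             target_fpr: float) -> int:
    """Smallest cutoff whose FPR on non-reoffenders is <= target_fpr."""
    negatives = scores[reoffended == 0]
    for t in range(int(scores.min()), int(scores.max()) + 2):
        if np.mean(negatives >= t) <= target_fpr:
            return t
    return int(scores.max()) + 1  # flag no one if target is unreachable

# Hypothetical 1-10 decile scores for two groups.
rng = np.random.default_rng(0)
for group in ("A", "B"):
    scores = rng.integers(1, 11, size=200)
    reoffended = rng.integers(0, 2, size=200)
    print(group, "cutoff:", threshold_for_target_fpr(scores, reoffended, 0.2))
```

A caveat worth raising in discussion: fairness metrics can conflict. When base rates of reoffending differ across groups, a score generally cannot be both well calibrated and equal in error rates across groups, a tension that sits at the center of the COMPAS debate.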
Related Content:
The document also lists related products and topics, including "Programming a 'Fairer' System: Assessing Bias in Enterprise AI Products (B)," "How IBM Is Working Toward a Fairer AI," and "Will AI Reduce Gender Bias in Hiring?". These titles point to a broader discussion of AI ethics, fairness, and bias mitigation across applications.
Product Details:
- Product #: UV8182
- Pages: 5
- Publication Date: December 08, 2020
- Source: Darden School of Business
- Price: $11.95 (USD)
- Related Topics: Race, AI and machine learning, Cognitive bias.
The case study serves as a valuable tool for understanding the real-world challenges of implementing AI responsibly and ethically, particularly in sensitive domains like the justice system.
Original article available at: https://store.hbr.org/product/programming-a-fairer-system-assessing-bias-in-enterprise-ai-products-a/UV8182