1. Recognize bias → Create better algorithms
Can we humans learn to recognize our biases before we turn the machines loose, fully automating them? Here’s a sample of recent caveats about decision-making failures: while improving some lives, we’re making others worse.
Yikes. From HBR: Hiring algorithms are not neutral. If you set up your resume-screening algorithm to duplicate a particular employee or team, you’re probably violating ethical norms, and quite possibly the law as well.
Our biases are well established, yet we continue to repeat our mistakes. Amos Tversky and Daniel Kahneman brilliantly challenged traditional economic theory while producing evidence of our decision bias. View this recap of our founder’s recent Papers We Love talk on behavioral economics and bias in software design. Their early research identified three key, potentially flawed heuristics (mental shortcuts) commonly employed for decision-making: representativeness, availability, and anchoring/adjustment. The implications for today’s software development must not be overlooked.
Algorithms might be making the poor even less equal. In Automating Inequality, Virginia Eubanks argues that the poor “are the testing ground for new technology that increases inequality.” She contends that our “moralistic view of poverty… has been wrapped into today’s automated and predictive decision-making tools.” These algorithms can make it harder for people to get services while forcing them to deal with an invasive process of personal data collection. As examples, she profiles a Medicaid application process in Indiana, homeless services in Los Angeles, and child protective services in Pittsburgh.
Prison-sentencing algorithms are also taking fire. “Imagine you’re a judge, and you have a commercial piece of software that says we have big data, and it says this person is high risk… Now imagine I tell you I asked 10 people online the same question, and this is what they said. You’d weigh those things differently.” [Wired article] Dartmouth researchers claim that a popular risk-assessment algorithm predicts recidivism about as well as a random online poll. Science Friday also covered similar issues with prison-sentencing algorithms.
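The Dartmouth comparison boils down to measuring two sets of predictions against the same observed outcomes. A minimal sketch of that accuracy comparison, using made-up numbers rather than the study’s actual data:

```python
# Hypothetical illustration: comparing a risk-score model's accuracy
# against an untrained-crowd baseline. All figures are invented, not
# taken from the Dartmouth study.

def accuracy(predictions, outcomes):
    """Fraction of predictions that match the observed outcomes."""
    matches = sum(p == o for p, o in zip(predictions, outcomes))
    return matches / len(outcomes)

# 1 = reoffended, 0 = did not (hypothetical outcomes)
outcomes   = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
model_says = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # model's "high risk" flags
crowd_says = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]  # online poll's guesses

print(f"model: {accuracy(model_says, outcomes):.0%}")
print(f"crowd: {accuracy(crowd_says, outcomes):.0%}")
```

If the two percentages come out roughly equal, the expensive model offers no predictive edge over asking strangers online, which is essentially the study’s headline claim.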
2. Lack of acceptance → Analytics denied
Not every baseball manager is enamored with the explosion of available analytics. Great New York Times story about the extreme reluctance of a Mexican League team to change its conventional decision-making. Baseball enthusiast and Johns Hopkins computer science professor Anton Dahbura learned the hard way that The Analytics Guy Failed to Compute One Thing: How to Be Accepted in Mexico. Said a team VP: “It’s completely new down here so, yeah, it’s been a bit of a culture clash.”
3. Soft skills training → 250% ROI
Encouraging results on the value of ‘soft’ skills training for workers on the factory floor. (When will we stop referring to these crucial, hard-to-master capabilities as soft skills?) Namrata Kala and colleagues ran a randomized controlled trial in five Bangalore factories. They documented substantial returns [pdf here] from a 12-month soft skills training program focused on communication, decision-making, time and stress management, and financial literacy. ROI was measured as boosts in worker productivity, faster completion of complex tasks, short-term improvements in attendance, and increased employee retention.
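The headline 250% figure follows the standard ROI formula, net benefit divided by cost. A minimal sketch with hypothetical cost and benefit numbers (not the study’s actual data), just to show how such a figure is computed:

```python
# Hypothetical ROI calculation. The cost and benefit figures below are
# invented, chosen only to illustrate how a 250% return works out.

def roi(total_benefit, total_cost):
    """Return on investment as a percentage of cost."""
    return (total_benefit - total_cost) / total_cost * 100

training_cost = 10_000       # hypothetical cost of the training program
measured_benefit = 35_000    # hypothetical productivity/retention gains

print(f"{roi(measured_benefit, training_cost):.0f}% ROI")  # 250% ROI
```

The point of the formula: a 250% ROI means each dollar spent on training returned that dollar plus $2.50 in measured benefits.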
4. Gather evidence → Retain employees
On Science for Work, Iulia Alina Cioca explains The Thorny Issue of Employee Turnover: An Evidence-Based Approach.
Posted by Tracy Allison Altman