The advantages of diversity in the tech industry have been studied and reported on extensively. Companies that hire women and minorities outperform their competitors and see higher financial returns. New studies, covered by FastCompany, show that industry leaders are well aware of the value of diversity. Yet we aren’t seeing the rush toward inclusion that we would expect. Advice on increasing diversity tends to focus on improving the language of recruitment and creating a friendlier working environment. However, a key component of the modern hiring landscape gets overlooked.

Take a moment to think about what makes a good employee. What qualities do they have? What type of impact do they have on their company?

Once you have an answer, pause and reflect on the decision-making process you just used. What information helped you decide? Did you think about someone you’ve worked with in the past? What was it that made them good? Did you think about a friend or family member who works hard at their job? Did you skip straight to thinking about workers in general terms and the positive qualities they might have?

No matter how you came to your answer, your brain went through some process of making a decision: comprehending the question, recalling the things you know, making judgments about which things matter most, and outputting a response. Now, if I were to ask you whether John Doe is a good employee, you could compare him to the list of qualities in your head and come up with an answer. That answer may be more or less objective, but to some extent is grounded in your personal opinions, experiences, and biases.

An algorithm is a process for making and replicating these types of decisions on a large scale. Algorithms are used every day for everything from deciding which advertisements to show you online to sorting college admissions. One field that has become highly reliant on algorithmic decision-making is hiring. When there are too many applications for a human being to reasonably sort through, the job is often passed off to an algorithm that pulls out a short list of top-tier applicants. It’s a cheap, efficient solution.

The problem with this method is that algorithms are very good at duplicating patterns and very bad at innovating on them. If an algorithm is trained on the resumes of proven employees in an industry that is mostly male and white or Asian, it’s going to look for candidates who are similar. Even if algorithms aren’t told to categorize applicants by race or gender, they can easily pick up on implicit signals and use them to recommend (or disregard) some applicants over others. Before the process ever reaches a human being, important decisions have already been made.
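
To make that concrete, here is a toy sketch in Python. Everything in it is made up (the proxy feature, the data, the hiring history); it is not any real screening system, just an illustration of how a model trained on biased hiring outcomes can penalize a group it was never explicitly told about.

```python
# Toy sketch with synthetic data and a hypothetical proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hidden attribute (never given to the model): 1 = woman, 0 = man.
gender = rng.integers(0, 2, size=n)

# Resume feature that correlates with gender, e.g. membership in a
# women's professional society.
proxy = ((gender == 1) & (rng.random(n) < 0.8)).astype(float)

# A genuinely job-relevant feature, independent of gender.
skill = rng.normal(size=n)

# Historical labels: past hiring favored men regardless of skill,
# so the bias is baked into the training data.
hired = (skill + 1.5 * (gender == 0) + rng.normal(size=n)) > 1.0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print("weight on skill:", round(model.coef_[0][0], 2))  # positive
print("weight on proxy:", round(model.coef_[0][1], 2))  # negative: the
# model reproduces the gender bias through the proxy alone.
```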

We invited Cathy O’Neil, data scientist, AI advisor, and author of the new book Weapons of Math Destruction, into our office to talk about how the proliferation of algorithmic decision-making is influencing society.

In Cathy’s words, algorithms are “opinions embedded in mathematics.” They are not objective and independent; instead, they depend on how their creator defined success in a particular area of life. Algorithms make decision-making fast and standardized, but they also replicate past successes and mistakes indiscriminately. Without careful examination, which many algorithms are not given, they can become what Cathy calls a “weapon of math destruction.” These algorithms are defined by three qualities. They are:

  1. Widespread and used in a way that narrows or opens up opportunities for people on a large scale;
  2. Opaque and therefore unaccountable without options to appeal results;
  3. Destructive, creating a feedback loop that, however well-intentioned, ends up undermining the algorithm’s own goals.

These types of algorithms are much more prevalent than we may realize. They are used in education to evaluate both students and teachers, in criminal justice to determine sentences and set parole options, and online to advertise predatory education opportunities to the poor. In all cases, these algorithms create unfair systems that often target those who are already at a disadvantage.

How do we avoid making and maintaining these systems? The first and largest step is clearing away the myths surrounding algorithms and treating them less as magic and more as tools. Algorithms need to be created from explicit goals and principles, evaluated for biases and fairness, and made accessible to auditors and to the people affected by their results. These changes will require both new legislation and a shift in how we perceive data scientists: not as solution wizards, but as translators of values into mathematics. Once we have made these changes, we can begin to expand the culture of practice around data science to include features such as education in coding ethics and laws of liability.
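
As a small illustration of what “evaluated for biases and fairness” can mean in practice, here is a hedged sketch of one common audit step: comparing a model’s selection rates across groups, using the “four-fifths rule” from US employment guidelines as a threshold. The group labels and selections below are made up for the example.

```python
# Audit sketch: selection rates per group and the four-fifths rule.
from collections import Counter

def selection_rates(groups, selected):
    """Fraction of applicants selected, per group."""
    totals, picks = Counter(), Counter()
    for g, s in zip(groups, selected):
        totals[g] += 1
        picks[g] += int(s)
    return {g: picks[g] / totals[g] for g in totals}

def disparate_impact(groups, selected):
    """Ratio of lowest to highest group selection rate; a value
    below 0.8 flags possible adverse impact (four-fifths rule)."""
    rates = selection_rates(groups, selected)
    return min(rates.values()) / max(rates.values())

# Hypothetical output of a screening model:
groups   = ["A", "A", "A", "B", "B", "B", "B", "B"]
selected = [ 1,   1,   0,   1,   0,   0,   0,   0 ]
print(selection_rates(groups, selected))   # {'A': ~0.67, 'B': 0.2}
print(disparate_impact(groups, selected))  # 0.3 -> flagged
```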

In the case of the tech industry, if we want diversity, we need to either think creatively about how to teach our algorithms to prize diversity, or take a critical look at this part of the selection process. An algorithm may still be the right tool for the job, but it needs to be specially tailored to reflect the changing values and priorities of the industry.
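
As one hedged idea of what that tailoring could look like, here is a simplified version of the “reweighing” technique from the fairness literature, on entirely synthetic data: training examples are weighted so an underrepresented group carries as much total weight as the majority in defining what a good hire looks like.

```python
# Simplified reweighing sketch (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_balanced_weights(groups):
    """Return per-example weights so every group sums to weight 1.0."""
    groups = np.asarray(groups)
    weights = np.empty(len(groups), dtype=float)
    for g in np.unique(groups):
        mask = groups == g
        weights[mask] = 1.0 / mask.sum()
    return weights

# Toy historical data: group "B" is underrepresented four to one.
rng = np.random.default_rng(1)
groups = np.array(["A"] * 80 + ["B"] * 20)
X = rng.normal(size=(100, 3))        # hypothetical resume features
y = rng.integers(0, 2, size=100)     # hypothetical hire/no-hire labels

model = LogisticRegression()
model.fit(X, y, sample_weight=group_balanced_weights(groups))
# Group labels inform the training weights only; they never appear
# as a model feature.
```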

To read more about the impact algorithms have on our lives and what needs to be done about it, you can buy Cathy’s book on Amazon.