AI Bias: Mitigating Your Risk


Bias is an inherent aspect of even the most rational human intelligence. In theory, artificial intelligence should reduce the risk of bias, because removing human involvement from decision-making should remove the opportunity for malicious or unconscious bias. Accordingly, businesses are adopting artificial intelligence more widely today to help process large amounts of data and documents efficiently and accurately. AI is used to aid decision-making for a wide range of applications in housing, banking, hiring, and loan approvals.

Data privacy laws like the European Union’s General Data Protection Regulation (GDPR) govern consumers’ access to their data and include provisions that allow them to ask how AI-driven decisions were made and that require a human to oversee the results of automated data processing. These transparency mechanisms create human oversight that can reveal possible biases so that mistakes can be corrected, and the requirements are broad enough to apply to AI companies.

In the real world, however, bias can creep into AI systems, and systems intended to make objectively fair decisions may themselves be prone to bias. Humans design and train artificial intelligence systems, and humans gather the data used to train them. Consequently, the risk of bias is real. How can we understand AI bias, minimize its prevalence, and develop risk-management strategies to deal with potential claims of bias?

What Is AI Bias?

AI bias refers to the tendency of an artificial intelligence system to produce skewed results that are systematically prejudiced against individuals or groups based on factors that should have no bearing on outcomes. Bias can be caused by unrepresentative, limited, or incomplete data sets. Biased data sets used to train AI systems create biased outcomes.

An AI system vendor creating a biased system could face liability from customers who expected to procure a fair and effective system. Operators purchasing a biased system could face discrimination suits from individuals affected by biased outcomes once the system is in operation. For example, a system dedicated to sorting through resumes could develop a racial bias based on input criteria and training, leading to unintentional discriminatory hiring practices that could cost an employer millions of dollars in settlements and penalties, as well as a profound loss of reputation.

How Bias Happens

AI systems trained on large amounts of data will perpetuate the practices embedded in the data they are given. Keeping with our previous example of (potentially) discriminatory hiring practices, if a hiring software vendor looks only at past successful hires to train its AI system to identify future successful candidates, the system may learn the preference shown in previous years and decades for privileged segments of society and avoid recommending qualified candidates from groups that have faced discrimination in the past.
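
To make this concrete for technically inclined readers, here is a minimal sketch, using synthetic data and a generic scikit-learn classifier, of how a screening model trained on skewed historical hires reproduces that skew. Nothing here reflects any actual vendor’s system; the data, features, and numbers are all illustrative assumptions.

    # Minimal, hypothetical sketch: a screening model trained on historical
    # hires that favored one group reproduces the skew on new, equally
    # qualified candidates. Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic history: a qualification score and a group label.
    group = rng.integers(0, 2, size=n)   # 0 = historically favored, 1 = not
    score = rng.normal(0, 1, size=n)     # identically distributed across groups

    # Past hiring decisions favored group 0 regardless of qualification.
    hired = (score + 1.5 * (group == 0) + rng.normal(0, 0.5, size=n)) > 1.0

    # Group membership enters as a feature, as happens implicitly when
    # proxies (zip code, school, hobbies) correlate with group.
    X = np.column_stack([score, group])
    model = LogisticRegression().fit(X, hired)

    # Evaluate on fresh candidates with identical qualification distributions.
    test_score = rng.normal(0, 1, size=n)
    test_group = rng.integers(0, 2, size=n)
    pred = model.predict(np.column_stack([test_score, test_group]))

    for g in (0, 1):
        rate = pred[test_group == g].mean()
        print(f"group {g} recommendation rate: {rate:.2%}")
    # The historically favored group is recommended far more often,
    # even though both groups are equally qualified.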

AI producers and users must be aware that the data selected to train AI could eventually come under intense scrutiny in court during a discrimination case. More laws are being proposed at the state, national, and international levels, including the federal Algorithmic Accountability Act of 2019, that would require documentation of the logic guiding how AI systems make decisions. Even without these new proposals, government regulators may use existing general laws against unfair or deceptive trade practices to bring actions against software manufacturers and purchasers using systems that create biased outcomes. In either case, businesses should give consumers more insight into and control over their personal data, and they will need to govern carefully how they train their AI systems and which data sets they use.

Risks of AI Bias

AI decision-making will only become more prevalent and powerful as time goes on. This technology is already used by online recruiting agencies like ZipRecruiter to match candidates to job opportunities instantly. Instant matching is already used in various other industries, including credit and housing, to give end users rapid feedback that, until recently, would have been possible only through expensive, lengthy human reviews. As this technology continues to penetrate every sector of our lives, however, it brings a unique and complicated risk profile. All companies intending to profit from AI’s power, no matter the scale, should develop robust strategies to insulate themselves from as much risk as possible.

A company using an artificial intelligence system perceived as discriminatory may be subject to complaints at minimum, and to crippling lawsuits in severe cases. Claims can arise from federal or state governments or from individuals. Beyond the obvious legal fees involved in these cases, the expense of investigating and remediating the root issues could be enormous. Those costs can be avoided if a company proactively assesses its data and systems for a propensity toward bias, conducting its own bias and explainability audit before the product or service is offered to customers and regularly thereafter. By governing their own data, AI companies can show a defensible explanation for an AI system’s decisions when it really counts: in court. Similar issues can arise in other sensitive areas such as housing and credit.
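
For readers who want a sense of what a “defensible explanation” could look like in practice, the sketch below records each feature’s contribution to a single decision from a simple logistic-regression model. The feature names, data, and record format are illustrative assumptions, not a prescribed legal or technical standard.

    # Purely illustrative sketch: keep a per-decision record of each
    # feature's contribution so the system's reasoning can be reproduced
    # later for an audit or in litigation. All data here is hypothetical.
    import json
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Tiny hypothetical training set: [qualification score, years experience].
    X = np.array([[0.9, 5], [0.2, 1], [0.8, 4], [0.1, 2], [0.7, 6], [0.3, 1]])
    y = np.array([1, 0, 1, 0, 1, 0])
    model = LogisticRegression().fit(X, y)

    def explain_decision(model, feature_names, x):
        """Record the log-odds contribution of each feature for one candidate."""
        contribs = model.coef_[0] * np.asarray(x, dtype=float)
        return {
            "inputs": dict(zip(feature_names, map(float, x))),
            "log_odds_contributions": dict(zip(feature_names, map(float, contribs))),
            "intercept": float(model.intercept_[0]),
            "recommended": bool(model.predict([x])[0]),
        }

    record = explain_decision(model, ["score", "experience"], [0.6, 3])
    print(json.dumps(record, indent=2))  # store alongside the decision itself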

Reducing Bias

For companies that are either vendors or operators of AI systems, it’s important to understand how bias is introduced in the first place and to have an operating plan going forward. The following steps will help minimize the risk of bias:

  1. Use good data. This involves finding and using data sets that haven’t been subjected to bias. Designing the database properly and collecting appropriate data are the first steps.

  2. Properly train the AI. Applications intended for widespread use often require massive amounts of training data. This data needs to be vetted internally for bias before being used to train the AI.

  3. Human interaction. Make sure that someone reviews the data from a human perspective, rather than just running an automated set of inquiries.

  4. Test. Build in testing processes so you understand, as best you can, how the AI perceives data and why it returns the results it does (a minimal sketch of one such check appears after this list).

  5. Holistic assessment. Before making a product or service available to customers, there should be an overall assessment to ensure it’s free of bias.

  6. Human auditing of results. It would be beneficial to have an external auditor confirm that the system produces unbiased results and to have an accounting firm issue an audit report.

  7. Continuous improvement. Regularly review the results of assessments and testing, then act on the areas of improvement they reveal.
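
As referenced in step 4, here is a minimal sketch of one common bias check: comparing selection rates across groups against the “four-fifths rule” heuristic drawn from U.S. employment guidelines, under which a group’s selection rate below 80% of the highest group’s rate is generally treated as evidence of adverse impact. The sample data and threshold are illustrative assumptions; a real audit would use production predictions and criteria vetted by counsel.

    # Minimal sketch of a selection-rate bias check using the four-fifths
    # rule. The data below is hypothetical; real audits use production output.
    from collections import defaultdict

    def selection_rates(predictions):
        """predictions: iterable of (group_label, selected_bool) pairs."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, sel in predictions:
            totals[group] += 1
            selected[group] += int(sel)
        return {g: selected[g] / totals[g] for g in totals}

    def four_fifths_check(predictions, threshold=0.8):
        rates = selection_rates(predictions)
        top = max(rates.values())
        # Flag any group whose rate falls below the threshold relative
        # to the highest group's rate.
        return {g: (r / top >= threshold, r) for g, r in rates.items()}

    # Hypothetical audit sample: (group, was the candidate recommended?)
    sample = [("A", True)] * 40 + [("A", False)] * 60 \
           + [("B", True)] * 25 + [("B", False)] * 75
    for group, (passed, rate) in four_fifths_check(sample).items():
        print(f"group {group}: rate {rate:.0%}, passes four-fifths rule: {passed}")
    # Group B's 25% rate is only about 63% of group A's 40% rate, so it fails.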

SVLG is Your Partner in AI Bias Management

I help my clients by working with data science and auditing experts who can guide them in insulating their companies from many of the risks associated with AI bias and beyond. The company sorting through resumes presents a perfect example: while I don’t practice employment law, I work with employment counsel from our firm to analyze potential risks. Alongside this legal analysis, I work with my clients’ technologists to understand how they’re implementing the system. Once we have a holistic view of the data, the system, the company, and its operations, I provide guidance on a development process that avoids introducing sources of bias. I can also facilitate third-party audits to detect bias and help commercialize a product or service after rigorous testing and the implementation of continuous-improvement systems. SVLG is here to help you use your AI systems to their maximum value, with minimal bias risk.
