Technology Bias in AI: How to Harness It Effectively

Harnessing AI Ethically

Whether we are conscious of it or not, humans are inherently biased. Our experiences create internal preferences that we are often not even aware of, and whether we want them to or not, our biases shape our decision-making. Many leading advocates of artificial intelligence (AI) have championed the technology as a cure for human bias, while many of its biggest critics warn that AI will do more harm than good to ethical, equitable outcomes in society. But what is the truth? Can AI be biased, and if so, how can brands mitigate that bias in their own activities?

How does AI become biased?

AI is not inherently biased, but the technology’s outputs are only as good as the data used to inform it. Put garbage in and you can expect to get garbage out. For example, consider an organization that is looking to AI for its hiring activities. To ‘train’ the software to find candidates who are a good fit for the company, the business would use data from its current and former staff to outline what the ideal candidate looks like. If the organization is, and has historically been, male-dominated, the AI will learn to favour male candidates. The same applies to ethnicity.
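
To make the ‘garbage in, garbage out’ point concrete, here is a minimal sketch in Python of the kind of audit a team might run on its training data before feeding it to a screening tool. The historical_hires list is made up for illustration; if the historical hire rate differs sharply between groups, a model trained on that data is likely to learn the same skew.

```python
# Audit hypothetical historical hiring records before training a screening model.
from collections import defaultdict

historical_hires = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": True},
    {"gender": "female", "hired": False},
]

counts = defaultdict(lambda: {"hired": 0, "total": 0})
for record in historical_hires:
    group = counts[record["gender"]]
    group["total"] += 1
    group["hired"] += record["hired"]  # True/False adds as 1/0

for gender, group in counts.items():
    rate = group["hired"] / group["total"]
    print(f"{gender}: historical hire rate {rate:.0%}")
```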

This perpetuates stereotypes and power imbalances in the workplace. Often our own human bias gets in the way and blinds us to the issues right in front of us. If we are not aware that AI puts out what we put into it, we run the risk of confirmation bias, fooling ourselves that our way was the right way all along even when it is not. And if we fail to understand AI’s limitations, we end up putting blind faith in the technology and trusting its outcomes implicitly. But AI is not all-knowing, and it cannot always right the wrongs in the data used to inform it.

Impacts on customers

This lack of diversity can have major impacts on the overall functioning of your organization if left unaddressed. Without diverse voices in the room, members of your audience may be either underrepresented or misrepresented in your company’s decision-making. If your staff is predominantly white and male, who on your team can properly represent the perspectives of your female or POC customers? AI can help generate insights into these groups, but it falls on human staff to turn those insights into action. If you allow biased hiring practices to shape who is on hand to make these decisions, you risk overlooking opinions and opportunities relevant to various customer segments.

Think of AI as a mirror that reflects truths about ourselves back to us. The makeup of our organization is reflected in the hiring recommendations AI makes. The needs, demographics and feelings of our customers are reflected in the insights AI generates and the initiatives it produces, which can have major impacts on the customer journey.

One of the top post-pandemic priorities for sales and marketing teams is offering personalization at scale, and rightfully so. In a 2021 Accenture survey, 91% of respondents said they are more likely to shop with brands that offer relevant, personalized experiences than with ones that do not. The benefits for the business are clear as well, with Adweek reporting that personalization can reduce customer acquisition costs by up to 50% while increasing marketing spend efficiency by up to 30%.

Typically, personalization takes the form of tailored product recommendations or website experiences. Other times it comes via dynamic pricing or personalized offers. If you have ever abandoned your online shopping cart and received an email afterwards offering you a discount to complete your purchase, you have experienced this practice first-hand. We tend to see personalized discounting in e-commerce and retail journeys where item prices are fixed, while dynamic pricing is more often used in service-based industries where prices vary on a customer-by-customer basis.
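
As a rough illustration of the abandoned-cart offer described above, here is a simple rule-based sketch; the cart_offer function, the 24-hour threshold and the 10% discount are illustrative assumptions, not a reference implementation.

```python
# Hypothetical abandoned-cart win-back rule.
from datetime import datetime, timedelta
from typing import Optional

ABANDON_THRESHOLD = timedelta(hours=24)  # how long a cart sits before we reach out
DISCOUNT_RATE = 0.10                     # 10% nudge to complete the purchase

def cart_offer(last_activity: datetime, completed: bool, cart_total: float) -> Optional[float]:
    """Return a discount amount if the cart qualifies for a win-back offer."""
    if completed:
        return None
    if datetime.now() - last_activity < ABANDON_THRESHOLD:
        return None
    return round(cart_total * DISCOUNT_RATE, 2)

print(cart_offer(datetime.now() - timedelta(days=2), completed=False, cart_total=80.0))
```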

Similar to our hiring example, dynamic pricing is informed by current and historical data. For example, a bank may train its AI on data from current and past customers to identify factors that might indicate a customer is likely to default on a loan. The AI may scan incoming applications, check whether these risk factors are present, and render a decision without any intervention from a human worker at all. Similarly, an insurance tool might use data from existing customers to generate a policy or price for new customers with similar needs and habits. While these practices can save time for human staff and help offer customers fairer prices, bias can easily slip in. If left unchecked, fewer loans, insurance policies, or product discounts could be offered to underserved populations.
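
To show what checking a system like this might look like, here is a simplified sketch that compares approval rates across groups. The approve rule and the applicant records are hypothetical stand-ins for a real underwriting model.

```python
# Compare how often an automated loan rule approves applicants in different groups.
def approve(applicant: dict) -> bool:
    # Placeholder decision rule; a real system would use a trained model's score.
    return applicant["credit_score"] >= 650

applicants = [
    {"group": "A", "credit_score": 700}, {"group": "A", "credit_score": 640},
    {"group": "B", "credit_score": 660}, {"group": "B", "credit_score": 600},
    {"group": "B", "credit_score": 610},
]

rates = {}
for a in applicants:
    stats = rates.setdefault(a["group"], [0, 0])  # [approved, total]
    stats[0] += approve(a)
    stats[1] += 1

for group, (approved, total) in rates.items():
    print(f"group {group}: {approved}/{total} approved ({approved / total:.0%})")

# A common rule of thumb: flag the system for review if one group's approval rate
# falls below roughly 80% of another's (the "four-fifths" rule).
```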

Overcoming bias

So how do we fix it? The first step is understanding that AI is a malleable technology whose outcomes are shaped by the data it is fed, and becoming more conscious of what we are putting in. If you are aware that your staff skews heavily toward one demographic and that your data reflects this, you can train the AI to seek out candidates who are different rather than similar. AI can be moulded and shaped.
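
One way to act on that idea, sketched below under the assumption that the training pipeline accepts per-record weights, is to reweight an imbalanced dataset so every group carries equal influence during training. The records and groups here are hypothetical; real pipelines would typically pass these weights to the model’s training step rather than stop at printing them.

```python
# Reweight a hypothetical, imbalanced training set so each group contributes equally.
from collections import Counter

training_records = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"},
]

group_counts = Counter(r["gender"] for r in training_records)
n_groups = len(group_counts)
total = len(training_records)

for record in training_records:
    # Weight so that every group carries the same total weight during training.
    record["weight"] = total / (n_groups * group_counts[record["gender"]])

print(training_records)
```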

AI should never go unsupervised. While it can perform many tasks autonomously and often without issue, certain activities should require human intervention and review, such as decisions about loans or finances that have a major impact on a customer’s life. When acting on AI-generated insights, think critically about the impact on the business and the customer before following through on a recommendation. Is there any cause for concern?
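
A simple way to put this guardrail in code is to route decisions based on their stakes and the model’s confidence. The sketch below assumes a model that returns a score between 0 and 1 plus a flag for high-impact decisions; the function name and thresholds are illustrative.

```python
# Route automated decisions to a person when the stakes are high or the model is unsure.
def route_decision(score: float, high_impact: bool,
                   auto_approve: float = 0.9, auto_reject: float = 0.2) -> str:
    """Decide automatically only when the stakes are low and the model is confident."""
    if high_impact:
        return "human review"   # loans, insurance, anything life-changing
    if score >= auto_approve:
        return "auto-approve"
    if score <= auto_reject:
        return "auto-reject"
    return "human review"       # uncertain cases also go to a person

print(route_decision(score=0.95, high_impact=True))   # -> human review
print(route_decision(score=0.95, high_impact=False))  # -> auto-approve
```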

Additionally, try not to generalize when it comes to your customers. Segmentation is a common facet of marketing, and AI allows us to do it more intelligently and effectively. Cast aside your own beliefs about what certain customer groups may want and listen to the data. Gather feedback from your customers about their experiences with your AI-powered initiatives to highlight potential issues or shortcomings. AI is only as fair as we allow it to be, and it requires periodic check-ins to ensure it is doing the job properly. So long as you understand that, you are on the right track for harnessing it ethically.
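
One lightweight form of that periodic check-in is to compare each segment’s outcomes from one review period to the next and flag large swings for a human look, as in the sketch below. The segments, rates and 10-point threshold are made up for illustration.

```python
# Flag segments whose offer rate has shifted sharply since the last review period.
previous = {"segment_1": 0.62, "segment_2": 0.58}   # offer rates last period (hypothetical)
current = {"segment_1": 0.64, "segment_2": 0.41}    # offer rates this period (hypothetical)

for segment in current:
    change = current[segment] - previous.get(segment, current[segment])
    if abs(change) > 0.10:
        print(f"{segment}: offer rate moved {change:+.0%}; review before the next cycle")
```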