More and more companies are relying on Artificial Intelligence (AI) in their offerings: algorithms suggest products and services, power chatbots, and translate texts.
Pegasystems, a provider of customer engagement software for sales, marketing, service, and operations, however, warns against rushed decisions and urges companies to use responsible Artificial Intelligence. It defines the four central characteristics of responsible AI as follows:
Businesses should ensure that their AI treats all genders, races, age groups, social classes, income groups, religious communities, and sexual orientations equally. To do this, they need to build bias-free AI models, proactively monitor them, and constantly analyze their results. Admittedly, fairness is often not the top priority when deploying AI, but if algorithms make discriminatory decisions, the consequences for a company's reputation and business can be severe.
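The monitoring described above can be as simple as routinely comparing outcome rates across groups. Below is a minimal sketch of such a check; the group labels, the record format, and the 0.8 threshold (the common "four-fifths" rule of thumb) are illustrative assumptions, not Pegasystems' method.

```python
# Sketch of a disparate-impact check over (group, approved) decision records.
from collections import defaultdict

def approval_rates(records):
    """Return the approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(records, threshold=0.8):
    """Flag disparity: the lowest group rate must be >= threshold * highest."""
    rates = approval_rates(records)
    return min(rates.values()) >= threshold * max(rates.values())

# Hypothetical audit sample: group A is approved 2/3 of the time, B only 1/3.
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
disparity_found = not passes_four_fifths(records)  # True for this sample
```

Running such a check continuously on production decisions is one way to "proactively monitor" a model rather than discovering bias from customer complaints.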
In case of doubt, companies should be able to show how their AI arrived at a decision. This is especially true for highly regulated industries such as financial services or insurance. For example, a financial services company recently came under fire because it apparently offered women worse credit card conditions than men with the same income background. Because the company could not explain how its AI had reached the relevant decisions, it was widely regarded as misogynistic. Companies should therefore use transparent, explainable algorithms for regulated and high-risk use cases.
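What a "transparent and explainable algorithm" can look like in practice is an additive model whose per-feature contributions sum to the final score, so every decision can be traced back to named inputs. The sketch below uses invented feature names, weights, and threshold purely for illustration.

```python
# Sketch of an explainable credit decision: a linear scorecard whose
# per-feature contributions can be reported alongside the outcome.
WEIGHTS = {"income": 0.4, "credit_history_years": 0.5, "existing_debt": -0.6}
BIAS = 1.0        # base score every applicant starts from
THRESHOLD = 2.0   # minimum score for approval

def explain_decision(applicant):
    """Return the decision plus each feature's additive contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {"approved": score >= THRESHOLD,
            "score": score,
            "contributions": contributions}

result = explain_decision(
    {"income": 3.0, "credit_history_years": 2.0, "existing_debt": 1.0}
)
# Every point of the score traces back to a named feature, so the company
# can state exactly why an application was approved or declined.
```

Unlike an opaque model, this structure lets a company answer a regulator or a customer with concrete reasons instead of "the algorithm decided".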
The need for companies to show empathy towards their customers has never been greater than in the stressful times of the coronavirus pandemic; hence, empathetic AI is called for. Such AI makes decisions that are relevant, helpful, and put customers' needs first. To do this, it must be able to take a customer's full context into account. Only then can the AI know what is needed at a specific moment.
A few years ago, Microsoft's Twitter bot Tay hit the headlines when Twitter users were invited to interact with it. Within just a few hours the experiment turned into a nightmare, as the underlying AI model became misogynistic and racist through the data it ingested from Twitter conversations. This case makes it clear that companies need robust AI: an AI that cannot easily be manipulated, thanks to built-in protection mechanisms, rules, and guidelines.
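One form such a protection mechanism can take is an ingestion guardrail that screens user input before a model is allowed to learn from it, which is exactly the safeguard the Tay incident lacked. The blocklist and length rule below are illustrative assumptions; a production filter would be far more sophisticated.

```python
# Sketch of an ingestion guardrail: user messages are screened against
# rules before being admitted to a model's training data.
BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder terms, not a real blocklist

def safe_to_learn_from(message: str) -> bool:
    """Accept a message for training only if it passes all guard rules."""
    words = message.lower().split()
    if any(term in words for term in BLOCKED_TERMS):
        return False  # contains a blocked term
    if len(words) < 3:
        return False  # too short to be a meaningful training example
    return True

incoming = ["hello how are you", "slur1 something hateful", "ok"]
training_batch = [m for m in incoming if safe_to_learn_from(m)]
# Only "hello how are you" survives the filter.
```

The key design point is that the filter sits between the outside world and the learning loop, so hostile input can be observed without being absorbed.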
"Fairness, transparency, empathy, and robustness are the cornerstones of responsible Artificial Intelligence. With them, companies can ensure that their Artificial Intelligence reflects the values of our society and focuses on customer needs," Pegasystems explained. "This is not only an ethical imperative, but also has a positive long-term effect on a company's revenue."