Ethical AI for developers requires transparency


AI can help developers gain efficiencies and meet deadlines, but it takes additional tooling to create ethical AI – AI that makes transparent, explainable decisions and doesn't plunge users into murky moral waters.

This is according to Don Schuerman, CTO and vice president of product strategy and marketing at Pegasystems, who said that Pega has built such anti-bias and explainability mechanisms into its AI. As a result, Pega's AI avoids some of the issues that plague AI-based development tools, such as sexist output and copyright infringement, Schuerman said.

According to Schuerman, transparency and explainability are key to avoiding bias in AI decision-making, especially when it comes to businesses that must meet regulatory guidelines, such as financial institutions that must follow fair lending laws.

In this Q&A, Schuerman talks about the challenges facing AI development, how ethical AI could take customer service back to the 1970s, and what’s next for Pega.

How does Pega's approach to AI differ from other AIs, such as GitHub Copilot, which has faced copyright issues?


Don Schuerman: We don't build one-size-fits-all models that are then applied generically to every client we have. We provide a decision engine that our clients put their own data into, and then they build models unique to their business.

Another term you hear a lot in the industry right now is explainability, which matters when you're deciding how to interact with a customer, what products to offer, whether to suggest a particular service, or how to assess the risk of a particular transaction. You need to be able to explain how you made the decision, both for regulatory reasons and, in some cases, so that users learn to trust the models and can see how decisions are made.


We built what we call the "transparency switch," which lets you control the level of transparency and explainability of your models. Suppose I'm deciding what type of ad image to show for a particular offer. Maybe I don't need much explainability to select that image. But if I'm deciding whether to offer someone a loan – boy, it had better be really explainable why and how I made that decision.
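As an illustration only – the decision types, thresholds, and model names below are invented, and this is not Pega's actual "transparency switch" API – the underlying idea can be sketched as: the level of explainability a decision requires determines which model family you are allowed to use.

```python
# Hypothetical required transparency per decision type (1 = opaque is fine,
# 5 = the decision must be fully explainable, e.g. for regulators).
TRANSPARENCY_REQUIRED = {
    "ad_image_selection": 1,  # low stakes: a black-box model is acceptable
    "loan_offer": 5,          # regulated decision: must be explainable
}

def choose_model(decision_type: str) -> str:
    """Pick an interpretable model family when the decision demands explainability."""
    level = TRANSPARENCY_REQUIRED.get(decision_type, 5)  # unknown: default to strictest
    if level >= 3:
        return "logistic_regression"  # coefficients explain each decision
    return "gradient_boosted_trees"   # more opaque, acceptable for low stakes

print(choose_model("loan_offer"))          # logistic_regression
print(choose_model("ad_image_selection"))  # gradient_boosted_trees
```

The point of the sketch is the default: anything not explicitly marked low-stakes falls back to the most explainable option.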

One of the big challenges in the AI world is that AI is trained on data, and data may contain, for better or worse, the biases of our society. So even if your AI predictors – the information you use – aren't explicitly aligned with protected classes, you can still make decisions or track trends that correlate with something protected, like race or sexual orientation. So we've built ethical bias testing into the platform, so our clients have the tools to test and validate their algorithms and make sure they don't encode those biases.
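Pega's actual implementation isn't shown here, but one common form such a bias test takes is the "four-fifths rule": the selection rate for one group should be at least 80% of the rate for the most-favored group. A minimal sketch, with invented outcome data:

```python
def selection_rate(outcomes):
    """Fraction of positive decisions (e.g. loans approved) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical approval decisions (1 = approved), split by a protected attribute
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]  # 80% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("potential bias: fails the four-fifths rule")
```

Note the test says nothing about *why* the rates differ; it only flags the disparity so a human can investigate, which matches the "tools to test and validate" framing above.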

But the people who created the ethical bias test feature have their own biases. So how do you make sure this feature itself isn’t biased?

Schuerman: I think part of that is getting as broad a set of perspectives as possible into the conversation, both from our own internal developer community and from our customer community and community advisors – the people we talk to in the industry.

The first step in addressing our biases is being aware of them. That won't guarantee that every customer in the world will always follow best practices around bias, but we're bringing it to the fore. We ask people to consider it when they think about how they deploy their AI models, and we also ask them to think about empathy for their customers.

If you give your customers too many AI-generated suggestions, won't they tune them out?

Schuerman: One of the clients we work with explained that their goal was to take banking back to the 1970s – not the bell-bottoms and disco, but using AI to recapture, as much as possible, the personal relationship you would have had with your local banker, whom you knew personally and saw in the local branch each week. You don't have that anymore.

We need to use some of the AI tools to have that knowledge and understanding of the customer, regardless of which part of the business the customer engages with. Maybe it's someone in the contact center. Maybe it's someone at the branch who just started this week. Maybe it's the customer in a self-service channel. We always have this "I understand you, I know you, I know your goals and your needs" [message]. That's what we're trying to do: make it a human experience, but at scale.

What can last month’s release of Pega Infinity 8.8 do for developers that they couldn’t do before?

Schuerman: This latest release gives developers the ability to apply AI and its decision-making broadly across a process, to ask: "Are there opportunities to improve efficiency? Can I predict that this process will miss its service-level agreement – that we're going to miss a deadline?" They have an AI model that predicts this and automatically escalates the process before they miss a deadline and either have to explain it to a customer or, in the worst case, pay regulatory fines.
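The predict-then-escalate pattern described here can be sketched as follows. This is an assumption-laden illustration, not Pega Infinity's API: the case IDs, the 0.7 threshold, and the idea that a model has already scored each case are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    miss_probability: float  # SLA-miss risk from a predictive model, assumed given
    escalated: bool = False

def escalate_at_risk(cases, threshold=0.7):
    """Escalate every case the model predicts is likely to miss its SLA."""
    for case in cases:
        if case.miss_probability >= threshold:
            case.escalated = True  # e.g. reassign the work, notify a manager
    return [c.case_id for c in cases if c.escalated]

cases = [Case("PX-101", 0.15), Case("PX-102", 0.82), Case("PX-103", 0.71)]
print(escalate_at_risk(cases))  # ['PX-102', 'PX-103']
```

The design point is that escalation happens on a *predicted* miss, before the deadline actually passes, which is what saves the awkward customer conversation or the fine.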

What's next for Pega in the area of ethical AI?

Schuerman: We work with a lot of companies that now find themselves in a world where they can't deploy a single app for payment exceptions, because they have to keep their Swiss customer data in Switzerland, their UK customer data in the UK, and their Singapore customer data in Singapore. They must have distributed versions of the application. We support that architecturally, but we're also thinking about connecting these physically distributed applications: how do you tie them together into one holistic experience?

Editor’s note: This Q&A has been edited for clarity and brevity.
