As AI remakes finance, legal concerns pile up

Just as some people are starting to fear that driverless cars may do more harm than good, there is growing concern that the use of artificial intelligence — letting software and machines make decisions rather than people — could create legal and ethical problems in financial services.

Letting a software program decide on its own who may open a checking account, who can get a loan and what rates they should be charged could have unintended consequences, including people being excluded from mainstream finance.

Some would argue there is nothing fairer than an algorithm — pure math, after all, cannot be biased, right?

"A machine or a robot is going to base decisions on facts and criteria," said Steve Palomino, director of financial transformation at Redwood Software. "Removing the human element means you're going to apply criteria based on facts and circumstances, not based on preference or personal biases or experiences." A machine could also keep better records about exceptions, he said.

BIAS CONCERN
One flaw in this argument is that algorithms are written by people, who can be and usually are biased, often in subtle ways they themselves fail to recognize. Those biases can be built into algorithms. For example, last year Carnegie Mellon University researchers tested how Google's algorithms deliver ads. They found that Google showed ads for higher-paying jobs more often to people who had selected "male" in their ad settings than to those who had selected "female."

At an extreme, an artificial intelligence engine making credit decisions could approve only people who graduated from Ivy League schools, for example, or who have household incomes above $300,000.

Artificial intelligence, by its very nature, cannot be completely controlled the way today's rules-based systems are. In AI, computers learn what to do over time the way people generally do: by receiving information, making decisions based on that data and observing the results. As they learn from mistakes and from good choices, they modify their own rules and algorithms and start to draw their own conclusions.
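To make that distinction concrete, here is a minimal, purely illustrative Python sketch (every name and number in it is invented) of a model that rewrites its own approval rule from observed outcomes instead of following a fixed, human-written one:

```python
# Purely hypothetical toy model: no human sets the final rule; it
# emerges from a feedback loop over simulated loan outcomes.
import random

random.seed(0)

threshold = 600  # starting rule: approve applicants scoring above this

def repaid(score):
    """Simulated repayment: higher scores repay more often."""
    return random.random() < min(0.95, score / 850)

for _ in range(1000):
    score = random.randint(300, 850)
    if score > threshold:          # the model's current rule
        if repaid(score):
            threshold -= 1         # good loan: the rule loosens itself
        else:
            threshold += 5         # default: the rule tightens itself

print(f"Learned approval threshold: {threshold}")
```

The point is not the arithmetic but the control flow: after enough iterations, the operative rule is one that no person wrote down or reviewed.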


Amitai Etzioni, a professor of international affairs and director of the Institute for Communitarian Policy Studies at George Washington University, noted that driverless cars are instructed not to speed, but they are also designed to learn. When they are around other cars that are speeding, they will speed.

The same principle could apply to AI-delivered mortgage decisions. "The bank tells the program, under no circumstances should you use race as a criterion," he said. "The program goes out there and sees that risks are associated with income, education and ZIP codes. And the program says, you know what, race is a factor, because education and ZIP code associate with race. Why don't I use race as my criterion?"
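A small synthetic experiment shows how this happens even when the protected attribute is withheld from the model. In the sketch below (the data and the 80/20 correlation are fabricated purely for illustration), a "blind" rule that looks only at ZIP code still produces sharply different approval rates across groups, because ZIP code stands in for group membership:

```python
# Fabricated data: group membership is never shown to the decision rule,
# but it is correlated with ZIP code, so the "blind" rule inherits it.
import random

random.seed(1)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # protected attribute
    high_income_zip = random.random() < (0.8 if group == "A" else 0.2)
    applicants.append((group, high_income_zip))

def approve(high_income_zip):
    """A rule that sees only the ZIP-code feature, never the group."""
    return high_income_zip

approval_rate = {
    g: sum(approve(z) for grp, z in applicants if grp == g)
       / sum(1 for grp, _ in applicants if grp == g)
    for g in ("A", "B")
}
print(approval_rate)  # roughly {'A': 0.80, 'B': 0.20}
```

Removing the forbidden input does not remove its signal; a learning system will reconstruct it from whatever correlates with it.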

AI programs lack a conscience. "It's hard to encode morals and colorblindness into a machine — its only purpose is to look for solutions for the company," said Steve Ehrlich, associate at Spitzberg Partners, a New York corporate advisory and investment firm.

CHECKS AND BALANCES?
Etzioni suggested that what's needed are AI "guardians" — companion AI systems that make sure the artificial intelligence engines do not stray from certain values.

Couldn't an AI guardian end up learning bad habits, too?

"That question has been asked since Plato," Etzioni said. "Who will guard the guardians? In the end, a human being would have to be in the loop."

This leads to a third issue — the inner workings of artificial intelligence programs tend to be hidden, even from their creators. In other words, AI "darkens the black box," Etzioni said. The decisions financial institutions make about whom they take on as customers, to whom they lend, the rates they charge, and so forth become unknowable.

These are not just problems for traditional financial institutions. Many fintech companies rely heavily on algorithms and black-box automated decision-making. The marketplace lender Social Finance has declared itself a "FICO-free zone," meaning it does not use the FICO score most lenders rely on to determine creditworthiness.

But Social Finance will not answer questions about what data it does use in its algorithms. Similarly, Ron Suber, the president of the marketplace lender Prosper, has said his company analyzes 500 pieces of data on each borrower, but it will not say which data points those are.

Ehrlich said letting artificial intelligence engines make financial decisions also raises privacy issues around the data being fed into the machines.

"Say a company wants to look at your social media or your search engine history to determine your creditworthiness," he said. "They go into Facebook and find a picture of you that you didn't upload — it's a picture of you at a bachelor party or gambling at a casino — and that data gets fed into the algorithm." The bank should at least tell the customer it plans to use that information.

Etzioni offers another example: a bank's AI program could "learn" that a customer has cancer based on hospital or doctor bills or buying patterns (e.g. a wig, vitamin supplements, anti-nausea medication) and call in her loan.

Of course, data privacy issues exist with or without artificial intelligence. But the success of an AI program hinges on its ability to analyze massive amounts of data. Before IBM's Watson competed on the game show "Jeopardy!" it consumed 200 million pages of structured and unstructured content, including the full text of Wikipedia. Without all that data, Watson would have been stumped by many of the questions.

Unlike IBM in that instance, banks cannot just throw everything into their AI engines, Ehrlich said.

"It's tempting to just suck up as much as you can and keep it in one central repository, without saying what you'll be using it for in the future," he said. Banks need to be upfront about what information they collect, how it is collected and what it is used for. This is also a concern for Facebook's payment services and PayPal's credit services, where the companies have personal, intimate information about their customers.

RECOURSE
Another issue with automating decisions using AI, especially if they are executed automatically through smart contracts, is the lack of recourse.

"If we're not careful, we may not be doing the good we think we are by automating everything," said Christine Duhaime, a Toronto lawyer who specializes in anti-money-laundering rules, counterterrorist financing and foreign asset recovery and is the founder of the Digital Finance Institute. "The reason is because the more we automate, the less easy it is to talk to a human about what your problem is. Try to reach Google. I don't know that I've ever succeeded in reaching Google for anything in my life."

Ehrlich also noted that if an automated decision produces a negative outcome for a customer, that customer needs a way to contest it.

There is also an extra onus on the company in this case to make sure all the data used in the decision is accurate and up to date, he said. Automated decisions should not be allowed, he said, unless users give explicit consent, the company has appropriate technical safeguards and privacy policies in place, and the company uses only data to which it has authorized access and that is accurate and properly categorized.
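Ehrlich's conditions read like a gate placed in front of the decision engine. As an illustration only (the field names and the one-year freshness window are assumptions, not anything Ehrlich specified), they might be encoded like this:

```python
# Hypothetical pre-decision gate: the automated decision runs only if
# every input passes consent, authorization, categorization and
# freshness checks. All field names here are invented.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DataRecord:
    value: object
    consented: bool          # user gave explicit consent for this use
    authorized_source: bool  # firm has authorized access to the source
    category: str            # properly categorized, e.g. "income"
    last_verified: date      # stand-in for "accurate and up to date"

def may_automate(records, max_age_days=365):
    today = date.today()
    return all(
        r.consented
        and r.authorized_source
        and r.category != "unknown"
        and today - r.last_verified <= timedelta(days=max_age_days)
        for r in records
    )

records = [DataRecord(72_000, True, True, "income", date.today())]
print("Automated decision permitted:", may_automate(records))
```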

And there is a danger in AI of excluding people who do not use computers or mobile devices because they are disabled or elderly, Duhaime noted. "Because we're talking to machines, we're going to lose a lot of customer service and we're going to lose a lot of ability to solve the banking problems we have," she said. "That's going to create other banking problems that can never get solved because there aren't humans ever available to solve them."

AI systems could be used to build technology for disabled people, she said. "If we don't accommodate disability in a lot of this, I think we'll be causing more harm than good."
