How can clients be protected against robos gone bad?
Robo advisors are pitched as conflict-free, automated advisors. But can they be designed to work against client interests?
Some scholars say more questions need to be asked about the digital advice models being developed.
“Regulators need to learn more about how these things are actually working,” says Tom Baker, a professor at the University of Pennsylvania Law School and co-author of a recent paper on the future of robo regulation.
With the digital investment space now projected to take a $1 trillion bite out of client assets by 2020, watchdogs could have a lot of catching up to do if they don’t act fast, he warns.
FINRA and the SEC have released recommendations and exam guidelines for the use of robo advice by RIAs. But Baker is concerned that too little inquiry has been made into the algorithms at the heart of every robo platform that suggest investment products.
These algorithms synthesize millions of data points — from broad market data to client risk tolerances — and suggest suitable products. But what if those equations are skewed to suggest one product over another?
“Regulators need to get involved and ask for clear explanations of exactly what algorithms are doing and why,” Baker says. “Then, test the algorithm — that’s not rocket science.”
Not only do algorithms have to treat all products equally, he says, they need to be technically sound as well. Because the equations are built by people, human error could be introduced at a fundamental level that could compromise the entire system. “The other issue is incompetence,” he says. “They might just be badly coded.”
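Baker’s “test the algorithm” suggestion can be made concrete with a toy sketch. The scoring rule, product fields, and fund names below are invented for illustration — no real platform works exactly this way — but the test captures the idea: given two products that are identical except for fees, an unbiased recommender should pick the cheaper one.

```python
# Hypothetical sketch of a regulator-style bias check on a robo
# recommendation function. All names and the scoring rule are invented.

def score(product, risk_tolerance):
    """Toy suitability score: closeness to the client's risk
    tolerance, net of the product's fee."""
    fit = 1.0 - abs(product["risk"] - risk_tolerance)
    return fit - product["fee"]

def recommend(products, risk_tolerance):
    """Return the highest-scoring product for this client."""
    return max(products, key=lambda p: score(p, risk_tolerance))

# Two products identical except for fees.
low_fee  = {"name": "Fund A", "risk": 0.5, "fee": 0.0025}
high_fee = {"name": "Fund B", "risk": 0.5, "fee": 0.0100}

# An unbiased algorithm must recommend the cheaper product here.
pick = recommend([low_fee, high_fee], risk_tolerance=0.5)
assert pick["name"] == "Fund A", "algorithm favors the higher-fee product"
```

A skewed or badly coded platform — say, one that quietly adds a bonus for in-house funds — would fail a test like this, which is why Baker argues that auditing such algorithms is “not rocket science.”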
One particularly troubling scenario is a bait and switch on a hybrid advice platform, Baker says.
Investors may see one product available on the firm’s website and decide to learn more about it. But when they call their advisor over the phone, suddenly that product no longer exists. “The website could say one thing,” Baker says, “but the advisor could switch the product out for one with a higher fee.”
And while algorithms can theoretically skew which products are suggested, the platform’s architecture itself can influence how clients choose the recommended products, Baker says.
“There’s been a lot of research by people in marketing and behavioral economics around ways that decisions can be structured for good or ill,” Baker says.
For example, displaying some products in different colors than others — or simply manipulating the order in which they appear — can have a serious impact on the client’s final decision.
“Now, imagine that being done in an evil way,” Baker says, adding that a more lucrative product with higher fees could be displayed in green, while competitors’ products show up in orange.
Baker is quick to note there is no evidence of any manipulation. “There isn’t any indication that choice architecture is being used to mislead people,” he says, “but it sure could be.”