(TND) — People appear just as willing to sue when they're wronged by an artificial intelligence algorithm as when they're wronged by a human decision-maker, according to a RAND experiment.
As more companies, organizations and agencies leverage AI tools for a greater variety of tasks, there’s a risk that those algorithms could spit out wrong and harmful decisions.
It might be an AI system rejecting the job application of a highly qualified candidate. Or it could be an AI system unfairly flagging an application for unemployment benefits as fraudulent.
But will people hold AI accountable?
“There's been some concern that, if you're on the receiving end of a bad algorithmic decision, you might not even know whom you could sue. But, as it turned out, at least in our experiment, that didn't stop people,” RAND researcher Elina Treyger, a lawyer and political scientist, said in an article on the organization’s website.
Treyger and her team surveyed thousands of people about two scenarios.
The first was the job rejection scenario.
The second involved a legitimate benefits claim that was flagged as potentially fraudulent.
Different groups considered the two scenarios, some as if they involved human decision-makers and others as if they involved AI decision-makers.
They found people tended to judge algorithms more harshly than humans for otherwise identical decisions. People were more likely to perceive AI decisions as unfair, error-prone and nontransparent, according to RAND.
And people voiced a willingness to take legal action against a harmful AI decision.
In the job applicant scenario, 24.6% were willing to sue after an AI decision, while 27.5% were willing to sue following a bad human-made decision.
About 48% said they would be willing to file a complaint or an appeal or even join a class-action lawsuit based on an AI decision in the job applicant scenario.
In the second scenario, the one involving an application for unemployment benefits, 43.4% were willing to file a lawsuit following a detrimental AI decision, compared with 41% after a human-made decision.
Following an AI decision in that scenario, more than 87% would be willing to file a complaint, and more than 61% would be willing to join a class-action lawsuit.
Anton Dahbura, an AI expert and the co-director of the Johns Hopkins Institute for Assured Autonomy, said there’s “absolutely” merit in the idea of pursuing a legal remedy to an adverse AI-powered result.
Problems will happen, he said. We’re using AI to address very complex issues.
And sometimes we’re willing to give up control in the pursuit of getting more accomplished with AI tools.
“We're saying, ‘Well, I can accomplish more in the sum of things, if I let it do the decision-making.’ But the trade-off is it's not always going to get it right,” said Dahbura, who wasn’t involved with the RAND experiment.
Placing the blame is “going to be messy,” he said.
People at different stages of the AI supply chain, he’s found, might be willing to point fingers. Perhaps it’s a company that incorporates AI into one of its products or services, but it blames the developer if something goes wrong.
“There's going to be a big game of passing the buck,” he said. “And so, there are legal and there are ethical challenges ahead.”
The RAND researchers said policymakers should consider spelling out specific legal rights for people to contest AI decisions.
Dahbura expects we’ll get some sort of AI-focused consumer rights eventually, but it’ll probably take years to sort things out.
“I really think that this kind of issue cuts to the heart of how we accept AI-enabled applications in society,” he said.
Treyger, the RAND researcher, said we have existing standards, including antidiscrimination laws, that could be applied now to an algorithmic context.
“But it's not always so clear,” she said. “And so one implication of our study is, yes, we should establish pretty clearly these rights in the law.”