As AI Comes to Life So Does the Potential for Litigation


In a time when Amazon’s Alexa can converse in near-natural language and human connections are mediated by the internet, perhaps it’s not surprising that someone might accidentally come under a robot’s romantic spell. But that’s not what men using a popular dating website signed up and, more pertinently, paid for.

In re Ashley Madison Customer Data Security Breach Litigation, 18 men sued ruby Corp, formerly Avid Life Media, in the Eastern District of Missouri over payments for credits to communicate with other users by email or real-time chat. The plaintiffs allege they were unaware that some of the website’s estimated 38 million members were fake “bot” profiles.

“This is the first case where a chat bot or a computer-automated program that interacts with humans is being held accountable for use as a mechanism for fraud,” David Walton, partner and co-chair of the Privacy, Data and Cyber Security practice at Cozen O’Connor in Philadelphia, told PacerMonitor.

According to ad verification company Adloox, invalid traffic cost brands $12.5 billion in 2016, and agency group The Partnership projects that the cost of ad fraud will rise by some $4 billion in 2017.

“It’s a huge issue outside of just dating websites,” Mr. Walton told PacerMonitor. “It’s fake advertising.”

At the heart of the Ashley Madison case are the scope of liability and whether plaintiffs reasonably relied on the fake female chat bots.

Scope of liability refers to the legal obligation imposed on anyone performing acts that could foreseeably harm others; to recover, plaintiffs must show a breach of that duty of care, according to Walton. Reasonable reliance is a fundamental question of tort law that typically turns on the individual facts of a case and is decided by a jury, unless adjudicated by a judge.

“When a developer designs an AI product or program and it’s employed by a vendor, that creates a duty to the user,” said Walton. “What the courts may struggle with in future AI litigation is how far that duty extends.”

In the precedent-setting 1928 scope-of-liability case Palsgraf v. Long Island Railroad Co., the New York Appellate Division affirmed a verdict for the plaintiff, who had been injured on a train platform when a package of fireworks dropped by a boarding passenger exploded, but the Court of Appeals of New York reversed the ruling.

Among the dissenters was Judge William S. Andrews, who noted that negligence may be defined roughly as an act or omission which unreasonably does or may affect the rights of others, or which unreasonably fails to protect oneself from the dangers resulting from such acts.

“Every one owes to the world at large the duty of refraining from those acts that may unreasonably threaten the safety of others,” Andrews wrote. “When injuries do result from our unlawful act we are liable for the consequences. It does not matter that they are unusual, unexpected, unforeseen and unforeseeable. But there is one limitation. The damages must be so connected with the negligence that the latter may be said to be the proximate cause of the former.”

According to the complaint, the use of bots was instrumental in driving income to the website, defendants had benefited from extracting money under those false pretenses, and plaintiffs had been harmed by making payments for worthless services.

“The consumer was not getting what they thought they were buying regardless of one’s judgment of the service itself,” Ben Snipes, a product director with Wolters Kluwer Legal & Regulatory U.S. in New York, told PacerMonitor News.

Because the Ashley Madison website is designed to facilitate intimate relationships for individuals who are married or in committed relationships, most of the affected men brought claims under pseudonyms to reduce the risks associated with being publicly identified in the litigation. The plaintiffs in the bot suit also sued for damages arising from a data security breach that allegedly occurred in July 2015.

The data security breach allegedly resulted in the electronic theft of their personally identifiable and financial information on August 18 and 20, 2015.

“For many of the website’s members, the fact that this information has been made public has caused and will continue to cause irreparable harm, including public humiliation, ridicule, divorce, extortion, loss of employment and increased substantial risk of identity theft and other types of fraud, among other catastrophic personal and professional harm,” wrote the plaintiffs’ attorney John J. Driscoll in court filings.

An $11.2 million settlement fund was proposed in an order signed by U.S. District Judge John A. Ross on July 21, 2017. Subject to final approval in November, the settlement stipulates that the defendants do not admit wrongdoing and categorically deny the material allegations and claims.

As a result, the case will produce no legally binding precedent for use in similar AI cases, according to Walton.

“But the $11.2 million settlement is precedential itself in the sense that it’s going to raise awareness around these types of AI incidents, which will create more lawsuits by mere fact that there’s been an [eight-figure] settlement,” said Walton.

As a result of the suit, disclaimers informing users of various services that they are interacting with a bot and not a human could soon become de rigueur.

“It’s a wake-up call because now AI developers would do well to be concerned about whether the end-user thinks the bot is a human being and whether bots will pass the Turing Test,” said Snipes.

The Turing Test, conceived in the 1950s by Alan Turing, is a measure of a machine’s ability to imitate or exhibit human behavior.

The proposed Ashley Madison multimillion-dollar settlement could potentially raise the legal bar to the point that stating a service, product or website employing AI technology is “for amusement purposes only” will no longer be an adequate legal defense.

“If a consumer’s attorney can successfully argue that the method of notice and consent was not adequate given the type of service and potential harm, then the defense may not be deemed adequate by a Court,” Snipes said. “Further, courts may determine that a service knew or should have known that the information and service they provided was actually being used for purposes other than just amusement.”

In the interest of avoiding payouts as handsome as the one ruby Corp is poised to make, don’t be surprised if and when conversations with Siri or Alexa start with “Hi, I’m Siri the bot” or “I’m Alexa the bot.”
