Texas has secured a groundbreaking settlement with an artificial intelligence company accused of exaggerating the effectiveness of the healthcare tools it supplied to state hospitals.

The company, Pieces Technologies, had been under investigation by Attorney General Ken Paxton’s office for allegedly making false and misleading statements about the accuracy and safety of its products before the settlement was reached in late August.

“AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use. Anything short of that is irresponsible and unnecessarily puts Texans’ safety at risk,” stated Paxton.

Legal documents showed that Pieces was given real-time healthcare data from at least four Texas hospitals. The company then fed that information into its AI tools to summarize patients’ conditions and recommend treatments to hospital staff.

But according to Paxton’s office, Pieces Technologies’ products were far less accurate than the company had claimed when pitching them to Texas hospitals, putting patients at risk.

One advertising campaign even claimed that Pieces’ AI products had an error rate of only “<1 per 100,000.” The state’s subsequent investigation found that figure to be inaccurate and determined that Pieces may have violated the Texas Deceptive Trade Practices Act by marketing it as such.

Paxton’s statement concludes:

Hospitals and other healthcare entities must consider whether AI products are appropriate and train their employees accordingly.

However, Pieces took issue with how the attorney general’s office framed the settlement.

The company told Law360 that the agreement, officially known as an assurance of voluntary compliance, made no overt mention of the risks of the company’s AI tools.

Pieces also maintained that it “accurately set forth and represented” the error rate of its AI product, continuing to deny any wrongdoing even while agreeing to the settlement.

“Despite the disappointing and damaging misrepresentation of this agreement in the Texas OAG’s press release, Pieces will continue to work collaboratively at both state and national levels with organizations that share a common commitment to advancing the delivery of high quality and safe health care across communities,” contended Pieces.

Ultimately, the settlement stipulates that Pieces must inform customers about any potential harm from its products, disclose the metrics and benchmarks it uses to advertise its AI tools, and refrain from making unsubstantiated claims about them.

In return, Paxton’s office will drop its litigation against the company.

David Dunmoyer, the campaign director for Better Tech for Tomorrow at the Texas Public Policy Foundation, told Texas Scorecard that Paxton is “right over the target” for demanding that AI companies be more “transparent and ethical.”

“This case is a harrowing example of the harm that can befall patients when AI companies are more interested in rapacious profit over developing responsibly for the sake of humanity,” asserted Dunmoyer.

“This settlement signals that the ‘move fast and break things’ California model of tech is not welcome in Texas when companies use highly consequential technology in ways that deceptively harm consumers,” he added.

Luca H. Cacciatore is a journalist for Texas Scorecard. He is an American Moment inaugural fellow and former welder.