Machine learning can revolutionize healthcare, but it also carries legal risks
In a preview of the HIMSS Machine Learning & AI for Healthcare event, Carium general counsel Matt Fisher explains two areas of potential liability concerning artificial intelligence – and how healthcare organizations can protect themselves.
Machine learning and artificial intelligence promise enormous benefits for healthcare. But as Matt Fisher, general counsel for the virtual care platform Carium, pointed out, those same tools can leave organizations open to possible liabilities.
"It's still an emerging area," Fisher explained in an interview with Healthcare IT News. "There are a bunch of different questions about where the risks and liabilities might arise."
Fisher, who is moderating a panel on the subject at the HIMSS Machine Learning & AI for Healthcare event this December, described two main areas of legal concern: cybersecurity and bias. (HIMSS is the parent organization of Healthcare IT News.)
When it comes to cybersecurity, he said, the potential issues lie not so much with the consequences of using a model as with the process of training it. "If big companies are contracting with a healthcare system, we're going to be working to develop new systems to analyze data and produce new outcomes," he said.
And all that data could represent a juicy target for bad actors. "If a health system is transferring protected health information over to a big tech company, not only do you have the privacy issue, there's also the security issue," he said. "They need to make sure their systems are designed to protect against attack."
Some hospitals victimized by ransomware have faced the double whammy of lawsuits from affected patients who say the health systems should have done more to protect their information.
And a breach is a matter of when, not if, said Fisher. Synthetic or de-identified data can help alleviate the risk, he said, provided those data sets are sufficient for training.
"Anyone working with sensitive information needs to be aware of and thinking about that," he said.
Meanwhile, if a device relies on a biased algorithm and produces a less-than-ideal outcome for a patient, that could lead to claims against the manufacturer or a healthcare organization. Research has shown, for instance, that biased models may worsen the disproportionate impact the COVID-19 pandemic has already had on people of color.
"You've started to see electronic health record-related claims come up in malpractice cases," Fisher pointed out. If a patient experiences a negative result from a device at home, they could bring the claim against a manufacturer, he said.
And a clinician relying on a device in a medical setting who doesn't account for varied outcomes for different groups of people might be at risk of a malpractice lawsuit. "When you have these types of issues widely reported and talked about, it presents more of a favorable landscape to try and find people who have been harmed," said Fisher.
In the next few years, he said, "We'll start to see those claims arise."
Addressing and preventing such legal risks depends on the situation, said Fisher. When an organization is going to subscribe to or implement a tool, he said, it should screen the vendor: Ask questions about how an algorithm was developed and how the system was trained, including whether it was tested on representative populations.
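One concrete way to probe the "tested on representative populations" question is to ask a vendor for, or compute during a pilot, performance broken out by demographic group. The sketch below assumes a labeled validation set with a demographic column and simply compares a model's accuracy across subgroups; the column names and the flagging threshold are illustrative assumptions, not a regulatory standard.

```python
# Hypothetical sketch: compare a model's accuracy across demographic subgroups
# on a labeled validation set. Column names and the flagging threshold are
# illustrative assumptions, not a standard.
from collections import defaultdict

def subgroup_accuracy(rows, group_key="race_ethnicity"):
    """rows: iterable of dicts with 'prediction', 'label', and a group column."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for row in rows:
        group = row[group_key]
        totals[group] += 1
        hits[group] += int(row["prediction"] == row["label"])
    return {g: hits[g] / totals[g] for g in totals}

def flag_gaps(accuracies, max_gap=0.05):
    """Flag subgroups whose accuracy trails the best-performing group by more
    than max_gap (an arbitrary threshold chosen for illustration)."""
    best = max(accuracies.values())
    return {g: acc for g, acc in accuracies.items() if best - acc > max_gap}

if __name__ == "__main__":
    validation = [
        {"prediction": 1, "label": 1, "race_ethnicity": "A"},
        {"prediction": 0, "label": 1, "race_ethnicity": "B"},
        {"prediction": 1, "label": 1, "race_ethnicity": "B"},
        {"prediction": 0, "label": 0, "race_ethnicity": "A"},
    ]
    accs = subgroup_accuracy(validation)
    print(accs)             # {'A': 1.0, 'B': 0.5}
    print(flag_gaps(accs))  # {'B': 0.5}
```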
"If it's going to be directly interacting with patient care, consider building [the device's functionality] into informed consent if appropriate," he said.
Fisher said he hopes panel attendees leave the discussion inspired to engage in discourse about the legal risks at their own organizations. "I hope it spurs people to think about it and to start a dialogue," he said.
Ultimately, he said, while an organization can take steps to reduce liability, it can't fully shield itself from the threat of legal action. "You can never prevent a case from being brought," he said, but "you can try to set yourself up for the best footing."