Medical Device Makers May Soon Face Increased Risk of Product Liability Lawsuits
A new push to bring more artificial intelligence (AI)-based medical devices to market could also open the door to more product liability lawsuits against manufacturers if the technology changes after the device is approved by the Food and Drug Administration (FDA). If a device changes significantly enough, it could effectively become a different product from the one the FDA originally approved. Manufacturers would then lose the liability immunity they obtained at approval, and consumers would have more leeway to sue on the allegation that the product has become defective. In other words, a “gray area” would emerge as to what is preempted, especially when it comes to state tort liability.
While the FDA is reportedly seeking to update its regulations to help companies develop new, innovative technologies, the agency will still have to approve medical devices before they hit the market, and it has worked under the same approval processes (in which federal law generally preempts state personal injury lawsuits) for years. Those processes include requiring pre-market review anytime a device’s algorithm undergoes a “major” change, which usually translates into a significant amount of red tape for companies. Companies currently rely on a U.S. Supreme Court decision for their preemption from product liability claims. In that decision, the Court held that the Medical Device Amendments of 1976 to the Food, Drug, and Cosmetic Act preempt state law tort claims against manufacturers whose devices go through the FDA’s premarket approval process.
New Proposal Could Interfere With Preemption Guarantees
According to a recent discussion paper put out by the FDA, the agency is now considering what it calls a “life cycle-based regulatory framework” that would allow an algorithm to “adapt to actual experiences” without undergoing additional reviews, as long as the device itself remained “safe and effective.” The agency is accepting public comments on the proposal through early June. Still, it is unclear whether these regulations would be rigorous enough to support legal preemption for the companies.
As attorneys who regularly represent manufacturers in product liability defense, we would argue that preemption should apply at all times to devices requiring premarket approval (as long as any changes still fall within the scope of what the FDA originally approved) because the FDA would essentially be approving the methodology of the device, not just the device itself. Medical technologies in particular have become more of a process than a fixed object that stays the same over time. Of course, if the algorithm changes the device beyond what it was originally approved for, preemption becomes more of an open question.
Other Options Regarding Protection from Liability Also Unclear Under New Proposal
Other options for manufacturers of AI devices include seeking clearance under what is known as Section 510(k) for devices that are “substantially equivalent” to devices already on the market, or what is known as “de novo” classification for new devices that present low-to-moderate risk. Under the new proposal, it is unclear what exactly would be required for non-high-risk AI devices; in other words, would they be similar enough to existing devices to qualify for 510(k) clearance? And at what point would an algorithm create a “new product,” requiring the manufacturer to reapply for approval?
To date, the FDA has cleared AI devices only with what are known as “locked algorithms.” The proposal, by contrast, would set the stage for “machine-learning algorithms” that can automatically change to incorporate feedback from user data. If the proposed regulation were implemented, device makers would have to maintain what are known as “good machine learning practices” that would provide algorithmic transparency and allow manufacturers to submit modification plans during premarket review, spelling out anticipated changes to a device’s performance or intended use. Ideally, these plans would also include methods for controlling such changes. Manufacturers would also be expected to collect and monitor performance data in order to identify ways of improving their devices.
Contact Our Texas Product Liability Defense Attorneys To Find Out More
Opening the door to additional liability risk for manufacturers in exchange for providing the market with innovation is a dangerous concept. Our Brownsville product liability defense attorneys provide skilled legal advice and representation to manufacturers concerned about facing liability claims. Contact us today at Colvin, Saenz, Rodriguez & Kennamer, L.L.P. to find out more.