
Unregulated AI is already at work in your doctor’s office


“There’s no good evidence, and yet they’re being used in patient-facing situations, and that’s really bad,” Brown University computer scientist Suresh Venkatasubramanian said of AI systems being adopted by doctors.

Venkatasubramanian has a unique point of view on the subject. He helped draft the blueprint for an AI bill of rights that the Biden administration issued in October 2022. The blueprint called for strong human oversight to ensure AI systems do what they’re supposed to do.

But the document is still just a piece of paper; President Joe Biden has not asked Congress to ratify it, and no lawmakers have moved to do so.

There is evidence that Venkatasubramanian’s concern is justified. New York City has formed a Coalition to End Racism in Clinical Algorithms and is pressuring health systems to stop using AI that the coalition says relies on data sets that underestimate Black patients’ lung capacity and their ability to deliver vaginally after a C-section, and overestimate their muscle mass.

Even some AI developers are worried about how doctors use their systems. “Sometimes when our users got used to our product, they started to blindly trust it,” said Eli Ben-Joseph, co-founder and CEO of Regard, a company whose technology, which integrates into health systems’ medical records, has made 1.7 million diagnoses.

Regard has safeguards in place that alert clinicians if they move too quickly or don’t read all of the system’s output.

Congress, however, is far from a consensus on what to do, despite holding a summit with tech industry leaders last month.

The Food and Drug Administration, which Biden has tasked with taking the lead, has authorized new AI products before they hit the market without the kind of comprehensive data required of drug companies and medical device makers. The FDA determined that only 3.5 percent of the AI products it approved were high-risk, requiring a higher level of data similar to that demanded of other high-risk devices and drugs.

Troy Tazbaz, the director of the agency’s Digital Health Center of Excellence, said the FDA recognizes it needs to do more. AI products made for health care that are similar to ChatGPT, the bot that can pass medical exams, require “a very different paradigm” to regulate, he explained. But the agency is still working on it.

Meanwhile, the adoption of AI in health care is advancing even though the systems, Venkatasubramanian said, are “incredibly fragile.” In diagnosing patients, he sees risks of error and the possibility of racial bias. He suspects that doctors will too easily rely on the judgments of the systems.

Almost all of the 10 builders of the technology who spoke to POLITICO acknowledged the dangers of deploying it without oversight.

“There are probably already a number of examples today, and there will be more in the next year, where organizations are deploying large language models in a way that is actually not very safe,” said Ross Harper, founder and CEO of Limbic, a company that uses AI in a behavioral therapy app.

“They started to blindly trust it”

Limbic has secured medical device certification in the U.K., and Harper said the company is making progress in the U.S. despite regulatory uncertainty.

“It would be a mistake not to take advantage of these new tools,” he said.

Limbic’s chatbot, which the company said is the first of its kind in the United States, works through a smartphone app, along with a human therapist.

Patients can send messages to the bot about what they’re thinking and feeling, and the bot follows therapy protocols to respond, using artificial intelligence and a separate statistical model to ensure responses are accurate and helpful.

A therapist provides input for the AI to guide its conversations. And the AI briefs the therapist with notes from its chats, informing the patient’s future therapy sessions.

Another company, Talkspace, uses AI that it says can help flag people at risk of suicide after analyzing conversations with therapists.

Other AI products create and summarize patient charts, as well as review them and suggest a diagnosis.

Much of it is intended to lighten overworked doctors’ burdens.

Safety and innovation

Students of the technology said AI systems that change or “learn” as they gain more information could become more or less useful over time, changing their safety or effectiveness profile.

And determining the impacts of these changes is made even more difficult because companies closely guard the algorithms at the heart of their products — a proprietary “black box” that protects intellectual property but stands in the way of regulators and external researchers.

The Office of the National Coordinator for Health Information Technology at HHS has proposed a rule aimed at more transparency about AI systems used in health care, but it does not focus on the safety or effectiveness of those systems.

“How do we actually regulate something like this without necessarily losing the pace of innovation?” Tazbaz asked, addressing the agency’s main challenge for AI. “I always say that innovation always has to work within a parameter, a safety parameter.”

There are no existing regulations that specifically address the technology, so the FDA is planning a new system.

Tazbaz believes the FDA will need a process of ongoing audits and certifications of AI products, hoping to ensure continued safety as systems change.

The FDA has already approved about 700 devices that use artificial intelligence, mostly in radiology, where the technology has shown promise in reading X-rays. FDA Commissioner Robert Califf said at an August meeting that he believed the agency has done well with predictive AI systems, which take data and guess an outcome.

But many products now in development use newer, more advanced technology capable of responding to human queries, which Califf called a “kind of scary area” of regulation. Experts said such systems present even greater challenges for regulators.

And there’s another risk, too: Overly onerous rules could quash innovation that could benefit patients by making care better, cheaper, and more equitable.

The agency is being careful not to slow the growth of new technology, Tazbaz said, by talking with industry leaders, listening to their concerns and sharing the agency’s thinking.

The World Health Organization’s approach is not unlike Washington’s: concern, guidance, and discussion. But with no regulatory powers of its own, the WHO recently suggested that its member governments pick up the pace.

AI models “are being deployed rapidly, sometimes without a full understanding of how they might work,” the body said in a statement.

Still, whenever it moves to tighten the rules, the FDA can expect pushback.

Some industry leaders have suggested that doctors are themselves a sort of regulator, as they are experts who make the final decision independently of AI co-pilots.

Others argue that even the current approval process is too complicated (and burdensome) to support rapid innovation.

“I feel like I’m the technology killer,” said Brad Thompson, an attorney at Epstein Becker Green who advises companies on their use of AI in health care, “by fully inform[ing] them of the regulatory landscape.”

‘Would I personally feel safe?’

In the past, Thompson would have gone to Congress with his concerns.

But lawmakers aren’t sure what to do about AI, and legislation has stalled as House Republicans select a new speaker and lawmakers work toward a deal to fund the government in fiscal 2024.

“That avenue is not available now or for the foreseeable future,” Thompson said of attempts to update the regulations through Congress, “and it breaks my heart.”

Senate Majority Leader Chuck Schumer recently convened an AI forum to try to figure out what Congress should do about the technology across all sectors. The House also has an AI task force, though its output is likely tied to the chamber’s ability to resolve its funding and leadership challenges.

Rep. Greg Murphy (R-N.C.), co-chair of the Physicians Caucus, said he wants to let state governments take the lead in regulating the technology.

Sen. Bill Cassidy (R-La.), the ranking Republican on the committee that oversees health policy, has said Congress should do more, but without hindering innovation.

Cassidy’s plan addresses many of the concerns raised by researchers, regulators and industry leaders, but he has not proposed any legislation to implement it.

Given the uncertainty, some of the big players in health tech are deliberately targeting “low-risk, high-reward” AI projects, as Garrett Adams of electronic health records giant Epic put it. This includes writing notes, summarizing information, and acting more like a secretary than a co-pilot for doctors.

But the implementation of these technologies could lay the groundwork for more aggressive advances. And several companies are charging ahead, even suggesting that their products will inevitably replace doctors.

“We want to eventually transition parts of our technology to become autonomous, to fully automate and take the doctor or nurse out of the loop,” Ben-Joseph said, suggesting a time frame of 10 or 20 years.

Count Tazbaz among the skeptics.

“I think the medical community needs to effectively look at the responsibilities,” he said of AI used to diagnose patients. “Would I personally feel safe? I think it depends on the use case.”
