Guidelines for use of AI in healthcare are on track, says CHAI

In a progress update, the Coalition for Health AI (CHAI) announced it will meet this month to finalize its consensus-driven framework and share recommendations by year-end.

WHY IT MATTERS

CHAI first convened in December to develop consensus and mutual understanding, with two goals: to temper the rush to buy artificial intelligence and machine learning products in healthcare, and to arm health IT decision-makers with academic research and vetted guidelines that help them choose dependable technologies that deliver value.

Through October 14, CHAI is accepting public comments on its work examining testability, usability and safety, the focus of a July workshop the organization held with subject matter experts from healthcare and other industries.

Previously, CHAI produced a sizeable paper on bias, equity and fairness based on a two-day convening and accepted public comments until the end of last month. The result will be a framework, the Guidelines for the Responsible Use of AI in Healthcare, that intentionally fosters resilient AI assurance, safety and security, according to the October 6 progress update.

“Application of AI brings a tremendous benefit for patient care, but so does its potential to exacerbate inequity in healthcare,” said Dr. John Halamka, president of Mayo Clinic Platform and cofounder of the coalition, in the update.

The coalition says it is also working to build a toolset and guidelines for the patient care journey, from chatbots to patient records, so that populations are not adversely affected by algorithmic bias. 

“The guidelines for ethical use of an AI solution cannot be an afterthought. Our coalition experts share a commitment to ensure patient-centered and stakeholder-informed guidelines can achieve equitable outcomes for all populations,” Halamka remarked in the update.

The progress update comes on the heels of this week's release of the White House Blueprint for an AI Bill of Rights.

CHAI was formed by Change Healthcare, Duke AI Health, Google, Johns Hopkins University, Mayo Clinic, Microsoft, MITRE, Stanford Medicine, UC Berkeley, UC San Francisco and others, and is being observed by the U.S. Food and Drug Administration and the National Institutes of Health, and now, the Office of the National Coordinator for Health IT, according to the announcement.

Some of these organizations are also part of the Health AI Partnership, led by the Duke Institute for Health Innovation, and are developing open source guidance and curricula based on best practices for AI cybersecurity. DIHI is currently soliciting funding applications from faculty, staff, students and trainees across Duke University and Duke University Health System for innovation projects that apply automation to improve healthcare operational efficiency.

ONC has also focused on this evolving space, using its blog series to discuss what it might take to get the best out of algorithms to drive innovation, increase competition and improve care for patients and populations.

“What we know from studies to date is that AI/ML-driven predictive technology may positively or negatively impact patient safety, introduce or propagate bias, and result in increased or reduced costs. In short, results have been mixed. But the interest – and potential benefit – remains high,” wrote ONC authors Kathryn Marchesini, Jeff Smith and Jordan Everson in a June blog post.

National need is driving a national framework for health AI that promotes transparency and trustworthiness, according to Dr. Brian Anderson, cofounder of the coalition and chief digital health physician at MITRE. 

“The enthusiastic participation of leading academic health systems, technology organizations and federal observers demonstrates the significant national interest in ensuring that health AI serves all of us,” he said in the CHAI progress update. 

THE LARGER TREND

The AI collaboration was also launched to address compromised programs that present risks of harm to clinicians and patients, and to boost discernment and understanding amid the proliferation of AI software across the healthcare industry.

CHAI researchers are also preparing to develop an online curriculum to help educate health IT leaders, and to set standards for how staff should be trained and how AI systems should be supported and maintained.

“These systems can embed systemic bias into care delivery, vendors can market performance claims that diverge from real-world performance and the software exists in a state with little-to-no software best practice guidance,” according to the CHAI launch statement.

But many in healthcare believe that by defining fairness and efficiency goals upfront in the machine learning process, and by designing systems to achieve those goals, slanted outcomes can be prevented and the benefits of AI in healthcare operations and patient care realized.
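To make that idea concrete, here is a minimal sketch of what an upfront fairness goal might look like in practice: a hypothetical pre-deployment "fairness gate" that checks a model's predictions on held-out data against a demographic parity threshold set before training begins. The data, group labels and threshold below are illustrative only, and assurance frameworks like CHAI's envision far more than a single metric.

```python
# Minimal sketch: encode a fairness goal as an explicit, testable
# acceptance criterion before a model ships. All names and data here
# are hypothetical; a real pipeline would use audited datasets and
# multiple metrics (equalized odds, calibration and so on).

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two demographic groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        n_pos, n_total = rates.get(group, (0, 0))
        rates[group] = (n_pos + (1 if pred == 1 else 0), n_total + 1)
    positive_rates = [pos / total for pos, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Fairness goal defined upfront: gap must stay under 5 percentage points.
MAX_PARITY_GAP = 0.05

# Hypothetical held-out predictions and group memberships.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > MAX_PARITY_GAP:
    print(f"Fairness gate FAILED: parity gap {gap:.2f} exceeds {MAX_PARITY_GAP}.")
else:
    print(f"Fairness gate passed: parity gap {gap:.2f}.")
```

On this toy data the gate fails (a 0.20 gap between groups A and B), illustrating how a goal defined before deployment can block a slanted model rather than leaving bias to be discovered after the fact.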

ON THE RECORD

“It is inspiring to see the commitment of the White House and U.S. Department of Health and Human Services towards instilling ethical standards in AI,” said Halamka in the update. 

“As a coalition we share many of the same goals, including the removal of bias in health-focused algorithms, and look forward to offering our support and expertise as the policy process advances,” he said.

Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]

Healthcare IT News is a HIMSS publication.
