News: Responsible AI use in healthcare guidelines released by new coalition
Guidance on meeting clinical and quality standards while using artificial intelligence (AI) in healthcare has been released by the Coalition for Health AI (CHAI), an organization launched last year that comprises leading academic and healthcare institutions and several tech giants, with oversight from federal agencies. In its guidelines, titled Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare, CHAI offers recommendations to help providers adopt and use the technology responsibly.
The 24-page document builds on the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, as well as the AI Risk Management Framework from the National Institute of Standards and Technology, part of the U.S. Department of Commerce, HealthLeaders reported.
“In a world with increasing adoption of artificial intelligence for healthcare, we need guidelines and guardrails to ensure ethical, unbiased, appropriate use of the technology,” John Halamka, MD, MS, president of the Mayo Clinic Platform and a co-founder of CHAI, said in a press release. “Combating algorithmic bias cannot be done by any one organization, but rather by a diverse group. The blueprint will follow a patient-centered approach in collaboration with experienced federal agencies, academia, and industry.”
Laura Adams, senior advisor at the National Academy of Medicine (one of the collaborating academic institutions), said the organization is developing separate guidelines for responsible AI development and adoption in healthcare delivery. She emphasized that now is the time to honor, reinforce, and align efforts on this issue nationwide.
“The challenge is so formidable and the potential so unprecedented,” she said. “Nothing less will do.”
Editor’s note: To read HealthLeaders’ coverage of this story, click here. To read CHAI’s guidelines, click here.