
Photo: Leon Neal/Getty
The U.S. Department of Health and Human Services has given employees across its agencies access to OpenAI's ChatGPT, but the security guardrails in place remain unclear at this time.
WHY IT MATTERS
HHS Deputy Secretary Jim O'Neill has cautioned users to watch for potential bias and "treat answers as suggestions," according to an agency email obtained by FedScoop.
O'Neill also specified that agencies subject to HIPAA rules may not disclose protected health information to the tool, according to the report.
The news website 404 Media says it obtained the same email and reported that O'Neill described the tool as secure and encouraged its widespread use across the health department.
"You can input most internal data, including procurement sensitive data and routine non-sensitive personally identifiable information, with confidence," he said, according to that article.
HHS has not responded to a request for comment from Healthcare IT News; this article will be updated if the agency responds.
THE LARGER TREND
ChatGPT and tools like it have been a key area of exploration at health systems of all shapes and sizes since OpenAI changed the conversation about artificial intelligence in 2022. Large language models like GPT hold significant potential for healthcare.
Some providers are using their own LLM software to interact with data in electronic health records. For instance, Stanford Medicine is piloting a homegrown tool called ChatEHR that uses an LLM similar to OpenAI's GPT-4 to automate the summarization of patient charts, answer clinicians' questions about patient medical histories and perform other administrative tasks.
But there are risks. AI hallucinations can cause significant damage in healthcare.
Artificial intelligence "struggles with context," explained Dr. Jay Anders, chief medical officer at Medicomp Systems, a clinical AI vendor. "If I'm discussing a physical exam, it might introduce elements that have nothing to do with physical examinations. It loses track of what we're actually talking about."
A recent study from the Icahn School of Medicine at Mount Sinai compared six LLMs and found that all of them were susceptible to adversarial hallucination attacks.
"Our results highlight that caution should be taken when using LLM to interpret clinical notes," they said in their report published this week in Nature last month.
More recently, a tool developed by researchers at Mount Sinai was shown to find and reduce biases in datasets used to train machine learning models – helping improve AI's accuracy.
ON THE RECORD
"I'm excited to move us forward by making ChatGPT available to everyone in the Department effective immediately," said O'Neill in the email sent to HHS employees, according to 404 Media.
He noted that some divisions, such as FDA, "have already benefitted from specific deployments of large language models to enhance their work, and now the rest of us can join them. This tool can help us promote rigorous science, radical transparency, and robust good health. As Secretary Kennedy said, 'The AI revolution has arrived.'"
Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.