Decision Support
In a wide-ranging HIMSS19 week conversation with Healthcare IT News, Dr. Kyu Rhee discusses artificial intelligence, value-based care, healthcare quality, and how client health system Health Quest is dealing with PCMH, MIPS, CPC+ and HEDIS efforts.
Experts from Microsoft, AMA and Cleveland Clinic weigh the serious considerations that must be addressed as AI and machine learning increasingly embed themselves in clinical and consumer applications.
The vendor’s new system leverages its machine learning engine to offer an interactive retrospective analysis of order utilization, matched up against the evidence-based interventions proven to affect quality outcomes.
The provider of case management services to various Indiana Medicaid programs linked family caregivers with remote coaches to relieve caregiver burden and improve outcomes.
CEO Dan Burton explains what the billion-dollar mark means to the company and the industry, where its technology is headed – and where it stands on an IPO.
An antibiotic prescribing app developed at Capital and Coast District Health Board (DHB) has improved doctors’ adherence to prescribing guidelines.
Developed by the DHB’s Infection Services with computer science students from the Victoria University of Wellington, the Empiric app gives prescribers easy mobile access to antibiotic guidelines and assists with clinical decision making.
Empiric prescribing is when a doctor chooses an antibiotic before knowing exactly which micro-organisms are involved, prescribing according to the symptoms instead. Between 35 and 50 per cent of hospital inpatients are on antibiotics at any one time.
Most large DHBs will have their own empiric antibiotic prescribing guidelines, which are often used by the smaller local DHBs.
Previously these guidelines were either in a booklet or on a website, which meant doctors had to either carry a paper copy or find a computer terminal to look them up.
Infection Services clinical leader Dr Michelle Balm says that while adherence to the guidelines was good, there was room for improvement.
“We wanted to make it a lot easier for prescribers to make good clinical decisions about antibiotics use as close to bedside as possible,” Balm said.
Empiric is automatically downloaded on to all the DHB smartphones that are given to junior doctors in place of the traditional pager.
Figures show all junior doctors are using it weekly and most on a daily basis, and adherence to the guidelines has increased since the app was introduced. Doctors report that it increases confidence around prescribing.
The app is also free to download from the Apple and Android app stores and has been downloaded 700 times outside the DHB.
Balm says that while there are regional differences, the prescribing guidelines are broadly applicable across New Zealand, and that the DHB made Empiric freely available for anyone to use, “in the interest of transparency and to try and get a national discussion going on this topic”.
Empiric takes prescribers through a set of questions to produce a personalised – rather than generic – prescription recommendation. Features include options for when a patient has an allergy and when there is a risk of a multi-drug resistant organism.
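For readers curious what such a question-driven flow might look like in code, here is a minimal, purely illustrative sketch in Python. The question names, branching rules and "agent X/Y" outputs are hypothetical placeholders, not the Capital and Coast DHB guidelines or the Empiric app's actual logic.

```python
# Illustrative sketch of a question-driven prescribing aid.
# All rules and recommendations below are placeholders, not real guidance.

from dataclasses import dataclass


@dataclass
class PatientAnswers:
    indication: str            # e.g. "community-acquired pneumonia"
    penicillin_allergy: bool   # allergy branch, as the article mentions
    mdro_risk: bool            # multi-drug resistant organism risk branch


def recommend(answers: PatientAnswers) -> str:
    """Walk the answers through simple branching rules and return a
    personalised (rather than generic) recommendation string."""
    if answers.mdro_risk:
        return "Escalate: discuss with infection services (placeholder rule)"
    if answers.indication == "community-acquired pneumonia":
        if answers.penicillin_allergy:
            return "Alternative agent X (placeholder allergy pathway)"
        return "First-line agent Y (placeholder standard pathway)"
    return "No rule matched: consult the full guideline"


if __name__ == "__main__":
    answers = PatientAnswers("community-acquired pneumonia", True, False)
    print(recommend(answers))
```

In a real app of this kind, the rules would be authored and maintained by the infection services team rather than hard-coded, but the basic pattern – a short questionnaire that branches on allergy and resistance risk to reach a specific recommendation – is the same.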
This article first appeared on eHealthNews.nz.
“At HIMSS19, we’ll highlight the shifts from retrospective analytics to predictive analytics, a health system chart to a longitudinal record and plan, and population health to personalized health,” a top Cerner exec says.
A new report from the Duke-Margolis Center for Health Policy explores some of the policy changes that should be made to enable safer and more effective deployment of artificial intelligence in healthcare.
As AI and machine learning become de facto ingredients in many key clinical technologies, a better understanding of how they can best be leveraged for optimal analytics and decision support is the goal of the study, "Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care."
WHY IT MATTERS
The Duke report takes stock of the existing legal and regulatory landscape for algorithm-based CDS and diagnostic support software, and lays out some essential priorities to work toward in the years ahead to ensure safe deployment of AI in clinical settings.
These aren't just theoretical concerns. AI and ML are making inroads all over healthcare, of course, and current legislation and regulatory policy – whether it's the massive 21st Century Cures Act or FDA's new updates to the Software Pre-Cert Pilot Program – are workable for now but not yet optimal for a future that promises to evolve at a dizzying pace.
The Duke-Margolis paper, meant as a "resource for developers, regulators, clinicians, policy makers, and other stakeholders as they strive to effectively, ethically, and safely incorporate AI as a fundamental component in diagnostic error prevention and other types of CDS," looks at some of the major challenges and opportunities facing AI in the years ahead.
Stakeholders like those listed above will need to grapple with big questions, more than a dozen researchers and authors write. Such as:
Making a case for the value of more widespread adoption of these technologies. Such evidence would include how the software improves patient outcomes, boosts quality, lowers the cost of care and gives clinicians relevant information in a manner they find "useful and trustworthy."
Assessing the potential risk of using those products in clinical settings. "The degree to which a software product comes with information that explains how it works and the types of populations used to train the software will have significant impact on regulators’ and clinicians’ assessment of the risk to patients when clinicians use this software," said Duke researchers. "Product labeling may need to be reconsidered and the risks and benefits of continuous learning versus locked models must be discussed."
Seeing to it that such systems are deployed in a way that's both flexible and ethical. More and more health systems will need to develop best practices that can mitigate any bias that could be introduced by the training data used to develop software, they explained. That's the only way to ensure that "data-driven AI methods do not perpetuate or exacerbate existing clinical biases."
Also, these organizations will have to think hard about the data implications as the products scale up into settings that may be different from initial use cases. And, of course, "new paradigms are needed for how to best protect patient privacy," according to the report.
THE LARGER TREND
As the technological capabilities and clinical applications of AI-enabled decision support continue to expand, the Duke researchers said more regulatory clarity from agencies such as FDA, which has signaled an appetite for much wider approval of machine learning apps, is needed to protect patients from wanton use of the "black box" algorithms that many have warned about.
In addition, there are other major areas that need ironing out. Among them: proper allowances for patient privacy and data access, and the ability for these fast-emerging technologies to demonstrate value and ROI for providers. In all of those, hospitals and health systems have an active role to play.
Then there are all sorts of other technical questions that exist – but haven't necessarily been answered, certainly not on a consistent or widespread basis. Such as: how new approaches to labeling different software might improve understanding of its inner workings; how to weigh the relative risks and benefits of locked versus continuously learning models of AI; how to evaluate its performance over time most effectively; how to mitigate data bias; how to assess "algorithmic adaptability" and more.
ON THE RECORD
"AI is now poised to disrupt health care, with the potential to improve patient outcomes, reduce costs, and enhance work-life balance for health care providers, but a policy process is needed," said Greg Daniel, deputy director for policy at Duke-Margolis, in a statement.
"Integrating AI into healthcare safely and effectively will need to be a careful process, requiring policymakers and stakeholders to strike a balance between the essential work of safeguarding patients while ensuring that innovators have access to the tools they need to succeed in making products that improve the public health," he said.
"AI-enabled clinical decision support software has the potential help clinicians arrive at a correct diagnosis faster, while enhancing public health and improving clinical outcomes," added Christina Silcox, managing associate at Duke-Margolis and co-author of the report. "To realize AI’s potential in health care, the regulatory, legal, data, and adoption challenges that are slowing safe and effective innovation need to be addressed."
Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a publication of HIMSS Media.
Thanks to consumerism in the healthcare industry, providers are looking to improve patient experience and satisfaction in their care. HIMSS19 attendees who are focused on this topic for their organizations should check out these curated sessions and events.
Minor differences in co-payments sometimes mask exponential differences between similar generic formulations.