
'Enter generative AI not as a moonshot but as digital duct tape'

Dr. Matt Crowson, the genAI expert at Wolters Kluwer Health, dives into the company's new survey of caregivers and execs on the technology and explains where hospitals and health systems still need to do the work to get it up and running properly.
By Bill Siwicki
Jack (right) and he who feeds Jack, Dr. Matt Crowson, director of AI/genAI product management at Wolters Kluwer
Photo: Dr. Matt Crowson

Nursing workforce concerns top the list of healthcare priorities that caregivers and executives want generative AI to address, according to the "2025 Future Ready Healthcare Survey Report" from Wolters Kluwer Health.

85% of respondents cited "recruiting/retaining nursing staff" as a top priority, while 76% identified "reducing clinician burnout" as a main concern.

Leaders are focusing on the basics to keep the enterprise running: GenAI-driven technologies are likely to be part of the solution for longstanding challenges such as the burden of prior authorizations (67%), electronic health record management (62%), cybersecurity preparedness (68%) and telehealth/virtual care support (65%), the survey found.

The "2025 Future Ready Healthcare Survey Report" is based on a nationally representative survey conducted by Ipsos, an independent marketing research firm, in early 2025. Respondents included physicians, nurses, pharmacists, allied health professionals, administrators and medical librarians across the U.S.

We spoke with Dr. Matt Crowson, director of AI/genAI product management at Wolters Kluwer Health, for a deep dive into the survey and his expert analysis of what the results mean.

Q. Please explain the nurse staffing and workforce concerns that are at the top of the priority list in your study for healthcare genAI applications.

A. The nursing workforce has specific requirements and unique challenges that nurse leaders are acutely aware of. From my vantage point, healthcare's workforce crunch is no longer a slow-leak problem – it's a full-on blow-out. Independent projections from the National Council of State Boards of Nursing warn that 1.6 million U.S. nurses could leave by 2029. This is a talent drain roughly equal to the population of Philadelphia.

Yet the crisis isn't only about warm bodies at the bedside. Nurses went to school to deliver care, not to get bogged down in layers of documentation and digital red tape. Still, they're drowning in paperwork.

The survey found that 67% of health system leaders flag prior-authorization paperwork as a genAI target, and 62% point to soul-sucking EHR click-fests. That drag on clinical time fuels burnout, which now ranks alongside staffing and cost pressures as a top-three strategic threat: 82% of executives put "fix staffing" first, 77% aim to wring out administrative friction, and 76% want to slash burnout ASAP.

In short, the workforce pipeline leaks at both ends when people exit faster than new grads can enter, and the remaining staff is buried under process sludge.

Where does generative AI fit? Think of it as a digital shop-vac, not a pink-slip machine. Half of all respondents believe genAI can expand capacity for innovation by automating inbox triage, data wrangling and routine decisions.

Clinicians want to vaporize middle-layer administrative toil and free up humans for the work only humans can do. GenAI could also be a magnet for recruitment and upskilling, with some envisioning partnerships with universities to shorten onboarding and keep veterans in the game longer.

Bottom line: The staffing emergency is a two-headed beast – too few clinicians and too much clerical drag. GenAI won't conjure new nurses out of thin air, but deployed against documentation, scheduling and decision support bottlenecks, it can buy time, sanity and budget while the pipeline is rebuilt.

The mandate now is ruthless prioritization: map workflows, quantify the hassle cost and drop AI where it erases real minutes instead of adding another dashboard no one opens. External research from the NCSBN and the American Hospital Association backs the same prescription: cut waste, invest in people, and let tech pick up the clipboard so nurses can pick up the stethoscope.

Q. Your study says leaders are focusing on the basics to keep the enterprise running. What is the connection with genAI-driven technologies?

A. We're in firefighter mode, hoses trained on the most boring, low-glamour fires. Health system leaders told us their 2025 priority is to "keep the lights on": pay the staff, file the paperwork and stay out of the regulators' bad books. Eight out of ten executives highlight workflow optimization (think prior-auth ping-pong, eligibility lookups and endless inbox triage) as a top goal – yet barely six in ten feel even minimally ready to implement systems for the problem.

What's fueling the panic? Administrative sludge is now eating clinical time and balance sheets: hospital labor costs jumped $42.5 billion between 2021 and 2023 and now account for nearly 60% of total expenses. When two-thirds of respondents also admit that prior-authorization chores and EHR usage are slowing productivity, "back to basics" stops sounding lazy and begins to look like a survival tactic.

Enter generative AI not as a moonshot but as digital duct tape. Our survey shows most organizations deliberately point genAI at the grunt work first. Leaders see quick wins in automating denial letters, summarizing encounter notes, pre-filling claims and translating payer policy legalese into plain English.

These are all tasks where large language models can hoover up unstructured text and spit out structured answers with negligible patient-safety risk. A mantra in many hospital C-suites these days is "if a bot can shave 30 seconds off every chart, that's millions back to the bedside."
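
That math is easy to sanity-check. Here is a back-of-envelope sketch in Python; the chart volumes, staffing level and labor rate below are illustrative assumptions, not figures from the survey.

```python
# Back-of-envelope: what "30 seconds per chart" is worth at scale.
# Every input below is an illustrative assumption, not a survey figure.
seconds_saved_per_chart = 30
charts_per_clinician_per_day = 20   # assumed documentation volume
clinicians = 2_000                  # assumed mid-sized health system
working_days_per_year = 250
loaded_cost_per_hour = 120.0        # assumed blended clinical labor cost, USD

hours_saved = (seconds_saved_per_chart * charts_per_clinician_per_day
               * clinicians * working_days_per_year) / 3600
dollars_saved = hours_saved * loaded_cost_per_hour

print(f"{hours_saved:,.0f} clinical hours/year, roughly ${dollars_saved:,.0f}")
# Under these assumptions: ~83,333 hours/year, roughly $10,000,000
```

Under those assumed inputs, 30 seconds per chart really does compound into eight figures a year, which is why the mantra has stuck.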

The instinct is supported by external evidence. For instance, the American Hospital Association estimates that every 1% efficiency gain in rev-cycle operations can free up enough cash to fund dozens of nursing FTEs.

But brute-forcing algorithms into broken processes gives you expensive chaos. Our data flag a readiness gap: plenty of enthusiasm, thin governance. Only one in five organizations has published guardrails for genAI use; fewer mandate formal staff training. Successful leaders start with old-school process mapping: clock the baseline, find the bottleneck, then drop the model where it deletes a measurable minute.

They also benchmark outputs against neutral yardsticks such as the National Academy of Medicine's AI transparency guidelines to avoid black-box creep. In short, focusing on "basics" doesn't mean small thinking; it means staging genAI where it alleviates today's administrative choke-points, buys headroom for tomorrow's clinical innovation, and keeps the finance team from pulling the fire alarm again next quarter.

Q. Formal genAI policies and guidance are scarce, your study finds. What needs to be done?

A. Indeed, our survey shows that only 18% of healthcare organizations have published any authorized-use policy for generative AI, and just 20% require staff training on the technology. Other basics are missing, too: fewer than half have rules for validating output, and barely 42% spell out how a model should plug into day-to-day workflows.

The lowest score of all, 31%, covers the line between what a clinician owns and what the algorithm owns. In plain English, most hospitals let pilots fly without a safety checklist.

Step one: Pick a strong team to lead the charge. A governance framework is not a big consultancy slide deck. It needs accountable humans who understand clinical risk, data science and liability. That starts with hiring or appointing a cross-functional governance team: a physician champion who knows bedside realities, a data scientist who can explain model drift, an ethicist or compliance lead who keeps regulators happy, and an IT security pro who locks the back door.

Give that team real decision rights and a budget or stop pretending the organization is "AI-ready."

Step two: Borrow homework from people who have done it before. Do not reinvent rules in a vacuum. Anchor policy language to reputable public frameworks like the National Academy of Medicine's six pillars for trustworthy AI, the WHO ethics guidance on AI in health, and the NIST AI Risk Management Framework.

These documents outline critical guardrails – data stewardship, transparency, human oversight and continuous monitoring – that every hospital can adopt without paying a consultant to "discover" them.

Step three: Make it real, not theoretical. Translate those guardrails into operational playbooks. Require a model card and bias audit before any deployment. Embed a RACI matrix so staff know who reviews outputs, who signs off on updates and who calls a time-out if the algorithm misbehaves. Mandate annual refresher training, just as you handle HIPAA or CPR recertification.
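
To make "operational playbook" concrete, here is a minimal sketch in Python of a pre-deployment gate: no documented model card, bias audit and named owners, no launch. The field names, checklist and example use case are illustrative assumptions, not a published standard or a Wolters Kluwer artifact.

```python
# Illustrative pre-deployment gate: the governance council blocks any
# model that lacks a documented model card, bias audit and named owners.
# Field names and example values are assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)
    bias_audit_completed: bool = False
    clinical_reviewer: str = ""   # who reviews outputs (the "R" in RACI)
    signoff_owner: str = ""       # who approves updates (the "A" in RACI)

def ready_to_deploy(card: ModelCard) -> bool:
    """Deployment proceeds only when the governance basics are on file."""
    return all([card.intended_use, card.training_data_summary,
                card.bias_audit_completed, card.clinical_reviewer,
                card.signoff_owner])

card = ModelCard(
    name="denial-letter-drafter",   # hypothetical clerical use case
    intended_use="Draft payer appeal letters for human review",
    training_data_summary="De-identified historical appeal letters",
    bias_audit_completed=True,
    clinical_reviewer="Physician champion, revenue cycle",
    signoff_owner="AI governance council",
)
assert ready_to_deploy(card), "Checklist incomplete; deployment blocked."
```

The point is not these particular fields; it is that when the gate is code, "required before any deployment" is enforced by the pipeline rather than by memory.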

Finally, create a transparent issue-tracking pipeline so clinicians can flag odd outputs and see how each fix was handled. Governance only works when the people closest to the patient believe the system listens and acts.

Bottom line: The cure for "policy scarcity" is neither more enthusiasm nor another pilot. It is a hard-edged governance system staffed by professionals, backed by proven external standards and wired directly into clinical workflows. Hire the talent, adopt a public framework, operationalize it, and then your fancy AI curiosity becomes a disciplined program instead of a compliance nightmare.

Q. As a result of these various takeaways from your study, concerns about appropriate implementation of genAI persist. How can hospitals and health systems overcome these concerns?

A. Concerns linger because the governance plumbing is still missing. As I stated, not enough hospitals have a policy that spells out how staff may use large language models, and only one in five requires any formal training on the technology. Frontline clinicians notice the vacuum.

Some survey respondents worry that overreliance on genAI could blunt clinical decision-making skills before the tools are fully vetted. Without clear road rules, every new pilot feels like a trust exercise run on good vibes. This is precisely the scenario that fuels hesitation from boards, regulators and malpractice insurers.

The fastest antidote is qualified leadership. Governance is a contact sport that demands people who have developed or shipped production AI in high-risk settings, not just AI conference influencers with a flashy slide deck. A cross-functional council comprising a clinician champion, data scientist, compliance attorney and security lead needs the budget and authority to approve, monitor and, if necessary, yank any model from service.

External playbooks that lay out concrete checkpoints for transparency, bias mitigation and human-in-the-loop oversight already exist. Adopting these frameworks off the shelf lets a hospital move quickly without reinventing doctrine.

Next comes operational muscle. Classify use cases by patient-safety risk, then start with the clerical tier: draft denial letters and summarize visit notes where errors are annoying, not harmful. Embed automated monitoring for drift and hallucinations into the CI/CD pipeline so updates cannot deploy unless they pass predefined tests.
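
What might such a CI/CD gate look like? A minimal sketch in Python follows; the drift and hallucination metrics are deliberately crude stand-ins, and the thresholds are assumptions, since a production team would substitute its own validated checks.

```python
# Illustrative CI gate: a model update must pass predefined drift and
# hallucination checks, or the job fails and the release cannot deploy.
# Both metrics are crude stand-ins; the thresholds are assumptions.
import sys

MAX_DRIFT = 0.10                # assumed ceiling on output-length shift
MAX_HALLUCINATION_RATE = 0.02   # assumed ceiling on flagged outputs

def length_drift(candidate: list[str], baseline: list[str]) -> float:
    """Crude drift proxy: relative change in mean output length."""
    def mean_len(outputs: list[str]) -> float:
        return sum(len(o) for o in outputs) / len(outputs)
    return abs(mean_len(candidate) - mean_len(baseline)) / mean_len(baseline)

def hallucination_rate(candidate: list[str], sources: list[str]) -> float:
    """Crude faithfulness proxy: share of outputs containing words
    absent from their source document."""
    flagged = sum(
        1 for out, src in zip(candidate, sources)
        if any(word not in src.lower() for word in out.lower().split())
    )
    return flagged / len(candidate)

def ci_gate(candidate: list[str], baseline: list[str],
            sources: list[str]) -> None:
    drift = length_drift(candidate, baseline)
    halluc = hallucination_rate(candidate, sources)
    if drift > MAX_DRIFT or halluc > MAX_HALLUCINATION_RATE:
        # Nonzero exit fails the CI job, so the update cannot ship.
        sys.exit(f"BLOCKED: drift={drift:.2f}, hallucination={halluc:.2%}")
    print(f"PASSED: drift={drift:.2f}, hallucination={halluc:.2%}")
```

Wired into the deployment pipeline, a gate like this turns "human oversight and continuous monitoring" from a policy sentence into a step no update can skip.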

Make end-user training as routine as HIPAA refreshers, and publish an open ticket queue so clinicians can flag dodgy outputs and see resolutions in real time. These steps turn abstract principles into muscle memory for the entire workforce.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
