
As AI makes cybersecurity threats and attacks more sophisticated, the same technology has also become indispensable for shoring up the healthcare industry's defences.
At HIMSS25 APAC, Kumar Krishnamurthy Venkateswaran, CIO of Indian private hospital chain Narayana Health, warned that malicious actors can easily target healthcare organisations using AI.
"With AI tools today, you can create original phishing messages through which you can launch attacks. By paying $5, you can launch a denial-of-service attack," he said. "This is the next-generation digital theft that is happening."
With the use of AI, hackers can now send a million breach attempts per second. "It's just nearly impossible for us to resolve all those events," Kumar stressed.
Fortunately, he said, healthcare organisations can also deploy AI-based automated defence systems to thwart this deluge of attacks. To fortify their defences, they must build out their digital health security and privacy architecture.
According to Kumar, "AI is going to play a huge part in ensuring that the data is looked at and appropriate defence mechanisms are initiated or triggered to resolve these."
"AI in healthcare – from a cybersecurity perspective – has to be something real-time, active, and continuously learning. It should not be an afterthought. It should not launch a defence after an event has happened."
Proceed with caution
However, the use of AI is not without risks. "It is also important that we look out for risks," Kumar said.
One of the biggest risks, he said, is a lack of explainability. He described a potential scenario in which an AI shuts an entire network down after detecting attacks on a certain firewall port, without explaining its rationale. "You can't have an AI like that. You can't have the AI run amok. It is very important to have explainability built in so that we are able to understand the reasoning behind all this."
Every AI decision must be reviewed by a human, he stressed. "For every AI system that you build, please [appoint] appropriate, knowledgeable subject matter experts to ensure that these decisions (like remediations) are reviewed, analysed, and then approved accordingly," Kumar appealed.
"AI should augment and not replace human judgment. It should be like a secondary decision support for a security analyst, a security manager, a security head, or a CSO."
He also underscored the need to mask and anonymise data from data sources properly before connecting them to AI systems.
In seeking AI-powered solutions, the hospital CIO advised considering those that can easily adapt to an existing architecture and immediately launch effective defensive or offensive actions upon spotting a threat, such as a denial-of-service attack.
Kumar suggested "self-healing AI" with minimal human involvement. "If you're able to have this kind of system, it can effectively complement and help us to create a much more secure healthcare environment."