The University Medical Centre Groningen (UMCG) uses an AI chatbot to help answer the hundreds of questions it receives from patients weekly, easing the workload of already overstretched healthcare providers.
In early December, EU policymakers reached a political agreement on the AI Act, which is set to become a global benchmark for regulating this increasingly widespread technology. Shortly before, the UMCG hospital in the north of the Netherlands began using AI to help answer the hundreds of questions it receives from patients.
In an interview with Euractiv, the Chief Medical Information Officer at UMCG, Dr Tom van der Laan, urged the EU not to over-regulate artificial intelligence, so that the technology can ease the region's healthcare provision constraints.
"Let's not regulate this to death. It might be our only chance to have some level of healthcare soon for older people instead of being constrained and of a lower quality than what we're used to," van der Laan said.
"Not being able to use this technology is going to have graver consequences than using it and maybe exceeding the risk profile a little bit," he added.
Dr Chatbot
The medical officer is spearheading the UMCG's use of AI, and it is the first hospital in Europe to use the technology to draft answers to patient emails, which a healthcare professional then checks before they are sent.
Every week, the UMCG receives more than 1,200 written questions on various topics, including medication use and pain management, increasing the administrative burden on doctors and other healthcare professionals.
While van der Laan praised AI's capabilities, he said that healthcare remains human work.
"Artificial intelligence can assist and make work easier, but healthcare professionals are irreplaceable in healthcare for the time being," he said.
Nonetheless, he said that AI would change the field of medicine and that it came at a good time, with ageing populations and fewer people with the skills to care for them.
More time with patients
According to van der Laan, healthcare providers spending less time on administrative tasks means they have more time with patients. Additionally, healthcare providers may need to keep their replies brief when answering questions via email because of their immense workload.
The AI may be able to tweak the replies to make them sound more empathetic – the AI wished one patient a happy holiday at the end of one email, a line that might have gone unsent by a physician in a rush.
Van der Laan also uses AI to help with patient rounds and to summarise his patients' medication changes.
"It's like asking a human language question to another physician. It'll give you an answer," said van der Laan.
The US company Epic developed the AI tool; it also supplies the hospital's electronic patient record (EPD) software and has a strategic partnership with Microsoft.
A spokesperson for Epic told Euractiv that AI data processing for European customers takes place in a secure environment in Europe.
"Healthcare organisations in the EU and around the world use our AI tools to increase efficiency, make clinicians' working lives easier and more satisfying, and improve the patient experience," they said.
The AI used during the trial was not a self-learning system, so the chatbot did not learn from the patient data. It was integrated within the EPD to keep patient data secure, rendering it inaccessible to the supplier. It uses OpenAI's GPT-4 model, but the hospital can switch at any time if, for example, a specialised medical model emerges in the future.
When in doubt, the AI refers patients to humans
In Sweden, an investigation was launched after a triage chatbot failed to correctly prioritise one in five patients, as Euractiv reported.
In the Netherlands, van der Laan said that rather than giving inaccurate information, their model sometimes said it lacked the necessary information to give an accurate answer, and in those instances, it recommended that the patient contact a (human) healthcare professional.
"The main limitation is that we're teaching it not to give medical advice, so that it does not cross the line and become a medical device," he said.
In the coming weeks, more Dutch hospitals with an EPD from the same supplier will use this tool, in collaboration with other hospitals from the EPIC Dutch Association. According to van der Laan, its use in various American hospitals has been very positive.
[By Christoph Schwaiger, Edited by Vasiliki Angouridi | Euractiv.com]