One in five GPs use AI such as ChatGPT for daily tasks, survey finds

A fifth of GPs are using artificial intelligence (AI) tools such as ChatGPT to help with tasks such as writing letters for their patients after appointments, according to a survey.

The survey, published in the journal BMJ Health and Care Informatics, spoke to 1,006 GPs. They were asked whether they had ever used any form of AI chatbot in their clinical practice, such as ChatGPT, Bing AI or Google’s Gemini, and were then asked what they used these tools for.

One in five of the respondents said they had used generative AI tools in their clinical practice and, of these, almost a third (29%) said they had used them to generate documentation after patient appointments, while 28% said they had used the tools to suggest a different diagnosis.

A quarter of respondents said they had used the AI tools to suggest treatment options for their patients. These AI tools, such as ChatGPT, work by generating a written answer to a question posed to the software.

The researchers said the findings showed that “GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning”.

However, the researchers went on to question whether the use of these AI tools could risk harming and undermining patient privacy, “since it is not clear how the internet companies behind generative AI use the information they gather”.

They added: “While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice.”

Dr Ellie Mein, medico-legal adviser at the Medical Defence Union, said the use of AI by GPs could raise issues including inaccuracy and patient confidentiality.

“This is an interesting piece of research and is consistent with our own experience of advising MDU members,” Mein said. “It is only natural for healthcare professionals to want to find ways to work smarter with the pressures they face. Along with the uses identified in the BMJ paper, we have found that some doctors are turning to AI programs to help draft complaint responses for them. We have cautioned MDU members about the issues this raises, including inaccuracy and patient confidentiality. There are also data protection considerations.”

She added: “When dealing with patient complaints, AI-drafted responses may sound plausible but can contain inaccuracies and reference incorrect guidelines, which can be hard to spot when woven into very eloquent passages of text. It is vital that doctors use AI in an ethical way and comply with relevant guidance and regulations. Clearly this is an evolving area and we agree with the authors that current and future doctors need greater awareness of the benefits and risks of using AI at work.”
