Chat GP

Twenty-four-year-old Maddy Forbes sits in the chair at her local GP. It’s clearly a busy day at the office. Outside, the waiting room is completely full. Inside, the doctor seems distant, ready to move on to the next patient.

With a blood pressure monitor strapped to her arm, Forbes watches as her doctor copies her blood test results, along with her age and date of birth, and pastes them into ChatGPT. As the doctor reads Forbes the AI chatbot’s response word-for-word, her blood pressure begins to rise. Puzzled, the doctor tells her to try to relax.

“Of course, my blood pressure is going up,” Forbes thinks, “I’m angry at what’s going on right now.”

Forbes says the doctor read the AI response immediately as it was generated, taking no time to consider what it was saying. Photo: Chloe Woods.

“I ended up questioning her about it. I was wondering if this was legitimate advice,” Forbes says.

“She minimised the tab, and said it was what she recommended.”

“I left quite angry and upset.”

When Forbes voiced her concerns to the clinic about the doctor’s behaviour, she was surprised when they didn’t try to deny it.

“[The practice manager] kept calling me ‘darling’ on the phone. She kept saying that I’m young, that I know a lot about technology, that I should be used to AI now … You can have this all in writing, but I’m telling you it’s not that big of a deal.”

Following an email from the practice, Forbes’ concern grew.

“Dr. — has confirmed that during your appointment, she used ChatGPT – a generative AI tool – to assist in summarising medical information, and that no identifiable personal data was entered into the platform.”

“Please be assured that the doctor involved obtained verbal consent from the patient prior to using ChatGPT.”

“While Dr — believed she had your verbal consent to use this tool, we understand and respect that you did not feel adequately informed or comfortable with its use during your consultation.”

Part of the email Maddy Forbes received from her GP clinic

“She did not ask for consent,” Forbes says.

“She didn’t say anything about it at all, she just opened ChatGPT and used it without asking. And I wouldn’t have given consent for that specific use of my information.”

Forbes says she was also never asked to sign paperwork with the clinic regarding the use of AI.

During her appointment, the response Forbes’ doctor read from ChatGPT advised increasing her dose of a drug called thyroxine. When a different GP later reviewed the same blood test, Forbes was told her levels indicated no increase was needed.

“I wouldn’t call any of this sufficient. I thought it was just really lazy,” she says.

Using online tools, Forbes found the email she received from the clinic was likely AI-generated as well.
Video: Chloe Woods. Reporter: Chloe Woods.

Following the clinic’s response, Forbes decided to lodge a formal complaint with the Australian Health Practitioner Regulation Agency (AHPRA).

Dr Syed Zulqarnain Gilani, a member of ECU’s Centre for Artificial Intelligence and Machine Learning, works with artificial intelligence to train machines in specific health-related tasks. He says he doesn’t believe the public are well-informed about how generative AI software like ChatGPT creates its answers.

“These platforms have been trained on huge amounts of data, so they can generate new text. The idea is to learn a lot, then generate new content. When you put in a query, you are asking ChatGPT to generate something new,” he says.

“Now, it’s not a human. It doesn’t know whether it’s generating something real or not. It’s just mathematical equations.”

Zulqarnain Gilani says reliability differs from one AI platform to another. He likens the reliability of generative AI to the tuning of a dial.

Zulqarnain Gilani says the more an AI bot is tuned to create variations in text, the more likely it is to generate wrong information. Graphic: GIPHY.

“If you turn the dial a little bit, it will generate new text, but it won’t be very different from the existing one. If you turn the dial a lot, you can generate text that is vastly different from the original, but it may generate hallucinations and wrong information.

“All researchers in this domain remind people that ChatGPT and AI models, they’re not search engines. There’s a huge difference,” he says.

“A search engine is where you put in a prompt and search from an existing database, whereas ChatGPT generates new text. So, if you ask it a question, and then repeat that question, it is unlikely to give you the same answer again.”

“If you’re an expert in a field and you ask ChatGPT a question about that field, you will notice the answers are very vague. It uses words and sentences that sound very highbrow, but inside it’s just hollow.”
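The ‘dial’ Zulqarnain Gilani describes corresponds to a sampling setting most chatbot platforms call temperature. As a minimal sketch only, here is how a developer might turn that dial using OpenAI’s Python library; the model name and prompt are illustrative, not drawn from this story.

```python
# Minimal sketch of the "dial" Zulqarnain Gilani describes: the sampling
# setting called temperature. Model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = "In one sentence, what is thyroxine prescribed for?"

for temperature in (0.0, 1.0, 2.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 = conservative, 2 = highly varied
    )
    print(f"temperature={temperature}:",
          response.choices[0].message.content)
```

At a low setting the model keeps choosing its most probable words, so repeated runs give near-identical answers; at higher settings it samples more freely, which is why asking the same question twice rarely returns the same response, and why the risk of hallucination grows.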

Zulqarnain Gilani says people’s concerns regarding AI and patient personal information are valid.

“ChatGPT and other AI platforms, they say they don’t use your information for training. Generally, that’s what they claim. But what happens in the background is anyone’s guess. It’s difficult to ascertain exactly how these platforms use your data,” he says.

Six months after she submitted her complaint, AHPRA responded to Forbes. It had decided not to take any further action.

Former AMA WA President Dr Mark Duncan-Smith says there aren’t any specific laws about the use of generative AI tools in doctors’ offices, but there are clear guidelines for WA and Australia.

“The forms of AI used in medicine fall into roughly two categories. There are ones for the transcription of notes, and then there’s decision-making AI which provides answers to questions or gives you guidelines for treatment programs,” he says.

AI transcription is becoming increasingly popular in Australian doctors’ offices. Medical AI tools such as Heidi and Lyrebird record and transcribe GP appointments to help doctors save time when writing patients’ notes.

These tools can make treatment suggestions, but Duncan-Smith says doctors need to rely on their own judgement.

“AI has the potential in the future to be very effective in medicine but ultimately it still needs to be the doctor making the decision. AI shouldn’t make decisions; it should recommend them,” he says.

Forbes’ main concern about her doctor’s use of ChatGPT was the uncertainty she felt about the security of her personal information.

“Because ChatGPT is essentially an open forum, data that goes into it is not adequately protected. It really shouldn’t be used for decision-making with patient confidential information,” Duncan-Smith says.

“It can be used for general purposes, to guide a clinician either with information or with some decision options, but it’s not a proper medical tool as such.”

How would you feel if your doctor searched your symptoms on ChatGPT? Video: Chloe Woods.

Duncan-Smith says there is an absolute requirement for doctors to explain the use of AI if they’re using it in a consultation.

“The first requirement of AHPRA’s regulation is accountability. If a doctor uses AI, then ultimately that healthcare professional is still responsible for any outputs from the AI,” he says.

“There needs to be transparency, this is very very important. The patients have to be informed that AI is being used.”

Dr Mark Duncan-Smith

Duncan-Smith says he believes the use of ChatGPT is becoming more commonplace at the doctor’s office.

“I don’t think that it is appropriate at this stage to be using specific patient information. Using de-identified information may be acceptable, but ultimately this is about patient confidentiality. Being able to trust that what the patient tells the doctor, stays with the doctor,” he says.

“Ultimately, any output from the AI is still the responsibility of the doctor. We all know that sometimes AI comes out with absurd suggestions and that needs to be kept an eye on.”

Health Consumers’ Council WA executive director Clare Mullen says Forbes is not the only patient with concerns about AI in healthcare.

“AI is already changing people’s experiences of healthcare – in both good and bad ways,” Mullen says.

“The experience she has outlined sounds awful and would not meet anyone’s expectations of the expertise you have a right to expect when a GP is being funded (either by the patient, or by Medicare) to provide a specialist service.”

“Already, we’re aware that some consumers are tapping into ChatGPT to make sense of symptoms or health-related information in lay terms,” Mullen says.

A report published earlier this year by the University of Sydney estimated 9.9 per cent of Australians had turned to ChatGPT for medical advice within the past six months.

It found people from groups with barriers to accessing healthcare were more likely to use ChatGPT for health reasons. This includes those born in non-English speaking countries, people who do not speak English at home, and people with limited healthcare literacy.

The report concluded there was an ‘urgent need’ to equip the wider community with the skills to use generative AI safely.

This year in Ireland, a 37-year-old man reported he had been diagnosed with stage-four throat cancer after delaying a visit to a health professional because ChatGPT had advised him his sore throat was ‘highly unlikely’ to be cancerous.

“While it seems the tools are not yet mature enough to be confidently relied upon to give consistently accurate advice, it’s not hard to imagine that at some point in the future, as these tools become more reliable, that consumers may turn to them to help make sense of test results, or to seek advice about possible treatments,” Mullen says.

Forbes decided not to return to the GP clinic after being a patient there for more than five years.

“I have had hesitation in going to a new doctor, mostly because I now have to re-give them all of my medical information.”

“I wonder ‘Are you going to sit there in front of me and use ChatGPT? Or are you going to actually talk to me about the problem?'”

Maddy Forbes

“I’d honestly been delaying seeing a GP because I just didn’t have the time or the money, and I knew that it would all relate to my weight. That was always a conversation I was going to have with my GP no matter what.”

Statistics from last year’s ABS study into patient experiences show more patients are delaying their GP appointments due to cost. The numbers also show women are more likely than men to delay accessing health services.

“I do often dread going to the doctor’s because they’ll immediately ask for your height and weight and then tell you about your BMI, which I do not like at all. Being a short person, but also a plus-size person, it’s difficult because they chalk everything up to weight, no matter if you have a genuine issue or not,” says Forbes.

Forbes says she often feels like doctors are quick to recommend weight-loss drugs like Ozempic rather than taking the time to listen to her symptoms.

The Royal Australian College of General Practitioners’ Health of the Nation report shows average consult lengths are rising as more patients present with complex issues. The cost of GP appointments continues to increase while Medicare rebates stagnate, and the proportion of practices bulk-billing all patients dropped from 13 to 12 per cent in the last year.

Despite her experience, Forbes says she still believes AI can be a useful tool in the GP’s office.

“AI can be used for a lot of good things. Copying and pasting someone’s blood test results in it? No. Using it to help you transcribe your notes? Sure, go ahead.”