Use of ChatGPT to obtain health information in Australia, 2024: insights from a nationally representative survey

The Medical Journal of Australia

Julie Ayre, Erin Cvejic, Kirsten J McCaffery

Since the launch of ChatGPT in 2022,1 people have had easy access to a generative artificial intelligence (AI) application that can provide answers to most health-related questions. Although ChatGPT could massively increase access to tailored health information, the risk of inaccurate information is also recognised, particularly with early ChatGPT versions, and its accuracy varies by task and topic.2 Generative AI tools could be a further problem for health services and clinicians, adding to the already large volume of medical misinformation.3 Discussions of the benefits and risks of the new technology for health equity, patient engagement, and safety need reliable information about who is using ChatGPT, and the types of health information they are seeking.

To examine the use of ChatGPT in Australia for obtaining health information, we surveyed a nationally representative sample of adults (18 years or older) drawn from the June 2024 wave of the Life in Australia panel.4 Participants who completed the Life in Australia survey online or by telephone were asked how often they used ChatGPT for health information purposes during the preceding six months, the type of questions they asked, and their trust in the responses. Participants who were aware of ChatGPT but had not used it for health information purposes were asked about their intentions to do so in the following six months. Health literacy was assessed using a validated single-item screener: “If you need to go to the doctor, clinic or hospital, how confident are you filling out medical forms by yourself?”5 Demographic information was derived from previously collected panel data. Residential postcode-based socio-economic standing was classified according to the Index of Relative Socio-economic Advantage and Disadvantage (IRSAD; by quintile).6 Participant responses were weighted to the Australian population using propensity scores. Associations between respondent characteristics and survey responses were assessed using simple logistic regression; we report odds ratios (ORs) with 95% confidence intervals (CIs). Analyses were conducted in SPSS 26. Unless otherwise stated, we report unweighted results (further study details: Supporting Information, part 1). Our study was approved by the University of Sydney Human Research Ethics Committee (2024/HE000247).
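For a binary respondent characteristic, the simple logistic regression described above is equivalent to the classic 2×2 odds ratio with a Wald 95% confidence interval. The sketch below illustrates that calculation in Python using hypothetical counts (not study data); the function name and inputs are illustrative assumptions, not the authors' SPSS procedure.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table.

    a, b: outcome present / absent in the exposed group
    c, d: outcome present / absent in the unexposed group
    Equivalent to exponentiating the coefficient (and its Wald CI)
    from a simple logistic regression with one binary predictor.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for a 2x2 table
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(or_)
    lower = math.exp(log_or - z * se)
    upper = math.exp(log_or + z * se)
    return or_, lower, upper

# Hypothetical example: 20/100 ChatGPT users vs 10/100 non-users report an outcome
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

A CI that excludes 1.0 indicates an association significant at the conventional 0.05 level, which is how the ORs reported below can be read.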
