Generative AI in medical writing: co-author or tool?

Richard Armitage is a GP and Honorary Assistant Professor at the University of Nottingham’s Academic Unit of Population and Lifespan Sciences. He is on X: @drricharmitage

ChatGPT is now one year old. This large language model (LLM), which was created by OpenAI and made freely available to the public on 30 November 2022, made such a broad and disruptive impact before its first birthday that many believe the dawn of generative AI constitutes a technological era of similar import to that of electrical power.1 In late 2023, while GPT-4 (the latest model underlying ChatGPT) still leads the user-friendly LLM landscape, it faces growing competition from the likes of Meta’s Llama, Microsoft’s Bing AI, Quora’s Poe, Anthropic’s Claude 2, and Google’s Bard.

The generative power of these AI tools is rapidly disrupting almost every industry, including clinical medicine, healthcare and health systems, and medical writing.2 Indeed, authors who submit their manuscripts to The Lancet and its sub-journals are now required to make a declaration regarding their use of generative AI and AI-assisted technologies in their work, attesting to their responsibility for the article’s contents. The Lancet declares that “generative AI is not an author”, and dictates that “these tools should only be used to improve language and readability”.3 But are these statements – the first a factual claim, the second a normative assertion – entirely true? Let’s deal with each in turn.

Firstly, is generative AI an author? To answer this, we must first ascertain what constitutes an author in the context of medical writing. To succeed in this domain, authors must competently demonstrate a variety of capabilities, including idea generation, literature searching, evidence review, statistical analysis, information synthesis, summarisation of findings, formulation of conclusions, manuscript writing and abstract generation. These aptitudes are in addition to the basic requirement to produce written work in academic language that is concise, readable, and free of spelling and grammatical errors. As of December 2023, the leading LLMs clearly possess all of these capabilities to degrees approaching, and sometimes exceeding, those of human authors (and are clearly super-human in terms of speed),2,4,5,6,7 such that leading medical journals have taken public positions on the use of LLMs in the works they publish.3,8,9,10,11 It seems clear, therefore, that generative AI could be considered an author with regard to its proficiencies in medical writing (although it does not – at least for now – have the capacity to autonomously decide to act as an author, but must be prompted to do so by the human who controls it).

Secondly, should generative AI only be used to improve language and readability in medical writing, or should the capabilities of LLMs be harnessed to conceive of, formulate and improve such works? Before responding, it must first be acknowledged that these technologies simply will be used for this purpose, regardless of whether they ought to be. A complete absence of their influence in medical writing would require not a single instance of their use across the 1.3 million papers (most of which have multiple human collaborators) added to the MEDLINE database alone each year.12 Given the rapid and widespread uptake of LLMs within the last 12 months, such perfect abstinence is deeply improbable. Against the backdrop of this reality, should generative AI be used for this purpose? Here, a straightforward yet powerful consequentialist argument can be mounted, which supports their use if they bring about the best outcome (which is, in our domain, the improved health of our patients through the influence of high-quality medical writing). This argument supports the immediate deployment of generative AI in medical writing, since its utility in augmenting the output of human authors has already been established.

As such, it seems that generative AI can be perceived as an author, and a strong ethical case can be made for the full utilisation of its capabilities to bring about better patient outcomes. This raises a further question: should LLMs be recognised as independent co-authors of medical writings in the same manner as all other (human) collaborators?

I think not, for three main reasons. Firstly, the human author’s prowess in wielding generative AI will soon be considered a necessary component of the author’s skillset, in a manner akin to their proficiency with word processors and internet browsers. Since no author recognises MS Word or Google Chrome – both widely used tools in the production of medical writings – as collaborators in their work, LLMs should similarly not be recognised as co-authors, but merely regarded as tools that authors master and deploy in the production of their writings. Secondly, the landscape of LLMs is rapidly expanding (indeed, custom versions of ChatGPT can already be created by individual users).13 This means that recognition of individual LLMs as co-authors would soon become a meaningless exercise, since they might not be available to, or understandable by, those who do not use them (in addition to LLMs themselves being both uncontactable by readers of the works and unaware of any authorship recognition bestowed upon them). Thirdly, assigning authorship to generative AI might serve to transfer accountability for the work at least partially away from the human co-author. Since LLMs do not harbour legally recognised personhood,14 this raises the question of to whom that accountability is transferred (the owner of the LLM, the engineers who built it, or some other entity entirely). These ambiguities, combined with the fact that the human author autonomously chooses to utilise generative AI in their work, mean that authorship should be assigned exclusively to humans.

Accordingly, while generative AI already demonstrates impressive capabilities that often meet and even exceed those of human authors, and while a strong case can be made for its power to be deployed in medical writing, these technologies should not be recognised as independent co-authors alongside their human collaborators. Instead, they should be regarded as indispensable tools that augment human authors, enhance their capabilities, and constitute a newly required proficiency in the human author’s skillset.

Featured Image/Author Statement: Generative AI (DALL·E 3) was used to produce the article’s image; generative AI was not otherwise used in the production of this article.

References

  1. UC Berkeley Sutardja Center for Entrepreneurship & Technology. “AI is the New Electricity”: Insights from Dr. Andrew Ng. 06 October 2023. https://scet.berkeley.edu/ai-is-the-new-electricity-insights-from-dr-andrew-ng/ [accessed 06 December 2023]
  2. B Gordijn and H ten Have. ChatGPT: evolution or revolution? Medicine, Health Care and Philosophy 19 January 2023; 26: 1-2. DOI: 10.1007/s11019-023-10136-0
  3. The Lancet. Information for authors. https://www.thelancet.com/pb-assets/Lancet/authors/tl-info-for-authors-1690986041530.pdf [accessed 31 December 2023]
  4. T Dave, SA Athaluri and S Singh. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Frontiers in Artificial Intelligence 04 May 2023; 6: 1169595. DOI: 10.3389/frai.2023.1169595
  5. AS Doyal, D Sender, M Nanda, et al. ChatGPT and Artificial Intelligence in Medical Writing: Concerns and Ethical Considerations. Cureus 10 August 2023; 15(8): e43292. DOI: 10.7759/cureus.43292
  6. S Biswas. ChatGPT and the future of medical writing. Radiology 02 February 2023; 307(2). DOI: 10.1148/radiol.223312
  7. H Li, JT Moon, S Purkayastha et al. Ethics of large language models in medicine and medical research. The Lancet Digital Health June 2023; 5(6): e333-e335. DOI: 10.1016/S2589-7500(23)00083-3
  8. Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature January 2023; 613(7945): 612. DOI: 10.1038/d41586-023-00191-1
  9. M Hosseini, LM Rasmussen and DB Resnik. Using AI to write scholarly publications. Accountability in Research January 2023: 1-9. DOI: 10.1080/08989621.2023.2168535
  10. A Flanagin, K Bibbins-Domingo, M Berkwits et al. Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge. Journal of the American Medical Association 2023; 329(8): 637–639. DOI: 10.1001/jama.2023.1344
  11. HH Thorp. ChatGPT is fun, but not an author. Science 2023; 379: 313. DOI: 10.1126/science.adg7879
  12. National Library of Medicine. Citations Added to MEDLINE® by Fiscal Year. 22 December 2022. https://www.nlm.nih.gov/bsd/stats/cit_added.html [accessed 06 December 2023]
  13. OpenAI. Introducing GPTs. 06 November 2023. https://openai.com/blog/introducing-gpts [accessed 06 December 2023]
  14. VAJ Kurki. ‘The Legal Personhood of Artificial Intelligences’ in A Theory of Legal Personhood. Oxford: Oxford University Press, 2019 (online edn, Oxford Academic, 19 September 2019). DOI: 10.1093/oso/9780198844037.003.0007

Ethics of the Ordinary is a regular column on BJGP Life that explores ethical and moral concerns relevant to general practice and primary care.
