
Reflective portfolios sit at the heart of UK GP training. They are reviewed at the Annual Review of Competence Progression (ARCP) and are intended to demonstrate a trainee’s development across professional capabilities — evidence that they are ready for independent practice.1
At the centre of this system is a simple assumption: that reflective writing reflects authentic reflection. That the words in a portfolio represent a trainee’s thinking — their clinical judgement, their insight, their professional growth.2
Artificial intelligence has made that assumption untenable.
Recently, as a GP Trainer and Programme Director, I encountered an AI tool capable of generating reflective entries mapped directly to the Royal College of General Practitioners curriculum. With a brief clinical scenario and a selected capability, it produced a structured reflection aligned to expected portfolio frameworks — discussing clinical reasoning, patient-centred care, ethical considerations, and learning needs. The result was polished, coherent, and entirely plausible. It was also unsettling.
At that point, an uncomfortable question becomes unavoidable: if AI can produce the reflection, what exactly are we assessing?
Portfolios were designed in an era when reflective writing required time, effort, and personal engagement. The process itself was the point: trainees stepped back from clinical encounters, examined their decision-making, and articulated lessons for future practice. The portfolio entry was both the method and the evidence of reflection. AI fractures that link, and we are left assessing a product that can be generated without the process that was meant to give it meaning.
Yet, for many trainers, this will not feel entirely new. Even before AI, there was a quiet recognition that portfolios were an imperfect proxy. Well-written entries did not always reflect deep insight, and a real understanding of a trainee's development has always come from elsewhere: from conversations in debriefs after difficult consultations, in tutorials where uncertainty is explored, and in moments when trainees explain their reasoning out loud. It is in these interactions that trainers see how someone thinks, not just how they write.
Clinical judgement, ethical reasoning, and professional identity are not easily reducible to text. They are revealed through dialogue, challenge, and time. They are built through relationships. Artificial intelligence does not replace this. It exposes how little of it we were actually assessing.
Perhaps the most important implication of AI in medical education is that it pushes us back towards the relational foundations of training. General practice has long been distinctive in the closeness of the trainer–trainee relationship. As artificial intelligence becomes increasingly capable of producing documentation, the educational system will need to place greater emphasis on what cannot be automated: human judgement, mentorship, and trust. Trainers will need to rely less on the written portfolio as evidence and more on their longitudinal understanding of the trainee.
If AI reduces the emphasis on producing lengthy written reflections, it may free both trainees and educators to focus on what truly matters: meaningful discussion, thoughtful supervision, and shared reflection grounded in real clinical work.
Portfolios will likely continue to have a role. They provide structure, encourage documentation of experiences, and allow progress to be tracked over time. But their function may need to evolve: from evidence of reflection to prompts for it. The need for reflection in medical training remains; if anything, AI sharpens it. But AI also reminds us that reflection is not simply something that is written. It is something that is discussed, challenged, and understood between people.
In the age of artificial intelligence, the future of assessment in GP training may lie not in what is written in the portfolio, but in the conversations that surround it — and in the trust between trainer and trainee that no algorithm can replace.
References
1. Royal College of General Practitioners. Workplace Based Assessment (WPBA). https://www.rcgp.org.uk/mrcgp-exams/wpba [accessed 21/3/26]
2. Lim JY, Ong SYK, Ng CYH, et al. A systematic scoping review of reflective writing in medical education. BMC Med Educ. 2023 Jan 9;23(1):12. doi: 10.1186/s12909-022-03924-4. PMID: 36624494; PMCID: PMC9830881.