
In meetings where ideation or problem solving is required, how long is it before a colleague exclaims, “I put (our question) into AI and it says this”?
The AI-phant in the room is that we are personifying this tool, recruiting it into our teams without so much as an application process. Generative AI does not think or understand. It generates outputs based on how we interact with an agent and how that agent has been trained. It is therefore important for users to have AI literacy,1 which includes prompt writing and interpreting outputs in context.
The language we use when talking about AI is also not neutral. When we say “AI thinks,” “AI knows,” “AI decided,” or “AI says,” we implicitly attribute human cognitive qualities such as intention, judgment, and understanding to what is a statistical system predicting likely sequences of words.
Personifying AI influences how people instinctively receive its outputs. Research in cognitive science and human-computer interaction shows that anthropomorphism increases perceived authority, competence, and trust in a system,2 even when users know it is not a human mind. This effect can be magnified in groups, where the introduction of an apparently objective algorithmic voice may shift the tone of discussion and suppress dissent.
In professional discussions, especially clinical, academic, and policy contexts, language that assigns agency to AI risks:
• Over-elevating automated output: If “AI says X,” colleagues may feel that disagreeing means challenging an authority rather than questioning a tool.
• Obscuring responsibility: “The AI recommended…” displaces accountability from the human who chose the tool, framed the question, and interpreted the answer.
• Masking uncertainty: Statistical prediction is misremembered as fact.
• Flattening nuance: We speak about AI as if it were a monolithic entity, when outputs vary across models, versions, and prompts, and prompts in turn depend on who writes them.
More accurate alternatives include:
• “The model generated…”
• “When prompted with…, the system output…”
• “This is one possible response produced by…”
These formulations maintain clarity about who is doing what, preserving the distinction between human thinking and machine-generated text. They also support better critical engagement, reminding us that outputs are artefacts, not insights.
A call for thoughtful engagement
Consider your personal and organisational values. How do you want to balance leveraging technology with the human dimensions of creativity and criticality?
Consider the data used to train your agent. Ask how this might introduce bias or omit important perspectives.
Upskill in prompt writing. Web search evolved to cope with poorly written queries, but there are still strategies for making queries more effective. Similarly, more nuanced AI prompts can lead to more useful outputs. For example, instead of “What’s the best strategy for X?” try “What are the various strategies for X, considering the context of Y?”, or use a framework such as TRACI (Task, Role, Audience, Create, Intent).3
When sharing AI-generated suggestions, share your methods. Which agent and version did you use? What framing did you try? Transparency helps colleagues adopt a more thoughtful approach and reduces the illusion that AI outputs are “truths.”
Critically assess outputs before sharing them. Consider what else is being said in the meeting, your domain knowledge, relevant literature, and whether the output follows logically from the question you asked.
Avoid personifying language in descriptions of AI. It reinforces unhelpful intuitions about what these tools can do.
Encourage your teams to practise AI literacy. This includes understanding the limitations of generative systems, interpreting outputs critically, and recognising when human expertise must take precedence.
Featured photo by Google DeepMind on Unsplash.