Large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude can be useful tools in psychiatric practice, helping with tasks including searching for information, managing administrative work and supporting education. This article demystifies how these systems work by explaining their core operational principles and noting their key limitations, including the risks of confabulation (fabricating information), sycophancy and knowledge cut-offs. It provides practical guidance on mitigating these risks through structured ‘prompt engineering’ and offers a safety framework for integrating LLMs into low-risk administrative and educational workflows. The article stresses the importance of approaching these technologies with caution: independently verifying their output, adhering to UK data protection law and upholding the principles of best practice in patient care. The goal is to help clinicians use these powerful but fallible technologies wisely, ensuring that patient safety and professional responsibility remain paramount as they explore them.