
A persistent and troubling error has surfaced across leading artificial intelligence platforms: despite Donald Trump’s inauguration for a second term in January 2025, many widely used AI models, including ChatGPT and Perplexity, continue to refer to him as the “former” president rather than the current one. This discrepancy, while seemingly minor, exposes deeper issues about how AI models handle rapidly changing real-world information and the risks of misinformation in the digital age.
A Recurring Error Across Platforms
Reports from users and AI communities have documented repeated instances where AI chatbots and virtual assistants incorrectly label Trump as a former president, even months after his return to office. One user, Sylvestre Conceicao, highlighted the issue in a public complaint: “The AI has been incorrectly referring to Donald Trump as a ‘former US President,’ even though he is currently serving as President of the United States after his second inauguration in January 2025.” This is not an isolated incident; similar errors have been observed in multiple interactions.
Why Do AI Models Make This Mistake?
The core of the problem lies in how AI models are trained and updated. Most large language models, including those behind ChatGPT and Perplexity, are trained on static snapshots of data with a fixed knowledge cutoff, so their built-in knowledge inevitably lags behind real-time events. Unless that training data or an attached knowledge base is refreshed, these models continue to rely on outdated information. As a result, they may default to referring to Trump as a “former president,” reflecting the period between 2021 and early 2025 when he was out of office.
Additionally, AI models are designed to avoid making unverified claims about recent events, often erring on the side of caution by sticking to information last confirmed in their training data. This conservative approach, while generally intended to prevent the spread of false information, can ironically result in persistent factual errors when major events—like a presidential inauguration—are not incorporated promptly.
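The difference between relying on a model’s frozen training data and supplying it with up-to-date context at query time can be illustrated with a minimal sketch. The example below assumes the OpenAI Python SDK with an API key set in the environment; the model name, date, and prompts are illustrative only, not a documented fix from any vendor.

```python
# Minimal sketch: the same question asked with and without grounding context.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

question = "Who is the current president of the United States?"

# 1) Relying only on the model's training data: the answer reflects whatever
#    the knowledge cutoff contained, which may be out of date.
stale = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice; the cutoff problem is model-agnostic
    messages=[{"role": "user", "content": question}],
)

# 2) Grounding the request with context supplied at query time: the model is
#    told the current date and a verified fact, so it need not guess.
grounded = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Today's date is 2025-06-01. Verified fact: Donald Trump was "
                "inaugurated for a second term as U.S. President in January 2025 "
                "and is the sitting president. Prefer this context over training data."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print("Without grounding:", stale.choices[0].message.content)
print("With grounding:   ", grounded.choices[0].message.content)
```

The second call does not make the model “know” more; it simply moves the burden of currency from the training pipeline to the application layer, which is why answers can differ so sharply between products built on the same underlying model.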
Broader Implications: Misinformation and Public Trust
This factual inaccuracy is not just a technical glitch; it has broader implications for public trust in AI and the fight against misinformation. As AI systems become more integrated into newsrooms, customer service, and public information channels, their authority as sources of truth increases. Repeated factual errors, especially about high-profile political figures, can undermine confidence in these tools and contribute to confusion or the spread of outdated narratives.
The issue is particularly acute in an era where AI-generated content is already fueling concerns about misinformation and political manipulation. As AI becomes a key player in shaping public opinion, ensuring that these systems reflect current realities is critical for both democratic discourse and the responsible development of technology.
Industry Response and the Path Forward
Leading AI companies, including OpenAI and Anthropic, have acknowledged the challenge of keeping models current and have taken steps to address misinformation, especially around elections and major political events. However, as these recent errors show, the technical and operational hurdles remain significant.
Experts suggest that more frequent updates, real-time data integration, and mechanisms for rapid correction of factual errors are needed to ensure AI systems remain accurate and trustworthy. Until then, users are advised to approach AI-generated information—especially about fast-changing events—with a critical eye.
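One way to picture such a rapid-correction mechanism is a lightweight fact-override layer that screens model output against a curated, instantly updatable table of current facts, independent of any retraining cycle. The sketch below is a hypothetical illustration; the fact table, patterns, and function names are assumptions for demonstration, not a feature of any existing platform.

```python
# Minimal sketch of a rapid-correction layer: screen a model's draft answer
# against a curated fact table before it reaches the user.
import re

# Curated facts that can be updated the moment an event happens,
# independently of any model retraining cycle (illustrative content).
FACT_OVERRIDES = {
    "us_president": {
        "current": "Donald Trump",
        "since": "January 2025",
        "stale_patterns": [r"\bformer\s+(US\s+)?President\s+(Donald\s+)?Trump\b"],
    }
}

def flag_stale_claims(answer: str) -> list[str]:
    """Return warnings for phrasings that contradict the curated fact table."""
    warnings = []
    for topic, fact in FACT_OVERRIDES.items():
        for pattern in fact["stale_patterns"]:
            if re.search(pattern, answer, flags=re.IGNORECASE):
                warnings.append(
                    f"Possible outdated claim about '{topic}': "
                    f"{fact['current']} has held the office since {fact['since']}."
                )
    return warnings

# Example: a draft answer containing the exact error described in this article.
draft = "Former US President Donald Trump announced a new policy today."
for warning in flag_stale_claims(draft):
    print(warning)
```

A layer like this cannot replace timely training data or live retrieval, but it shows how a small, human-maintained list of high-stakes facts could catch the most visible errors within hours of an event rather than months.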