When users provide personal inputs (e.g., their own face for a profile picture), an AI image generator should ensure those specific features are not unintentionally embedded in or carried over into subsequent, unrelated image generations. This prevents 'weird and uncanny' results and addresses privacy concerns about how user data influences the model.
I keep seeing dodgy AI-generated images and people blaming LLMs for them. I don’t know who needs to hear this, but AI image generators are not LLMs. An LLM is a Large Language Model, and it cannot generate that map or those photos of past presidents.

As it stands, ChatGPT is also not an LLM. It is an AI system interface that can send your request to different AI models that may or may not be LLMs, such as:

- GPT-5 (LLM) — handles most language and reasoning tasks.
- GPT-5 Thinking (LRM) — a Large Reasoning Model with a deeper reasoning mode for harder problems.
- DALL·E 3 (not an LLM) — creates images from text prompts.

The router now decides whether to use standard GPT-5, GPT-5 Thinking, or an image model like DALL·E based on the task. So for all those distorted images you are seeing (like my weird AF nightmare face below 😅), the LLM did not make them; the image generator did.
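To make the routing idea concrete, here is a minimal sketch of what "send your request to different models" could look like. This is purely illustrative: the model names come from the post above, but ChatGPT's actual router is not public, and the keyword rules here are invented assumptions for the example.

```python
# Illustrative sketch only: ChatGPT's real routing logic is not public.
# Model names are from the post; the keyword rules are invented for illustration.

def route_request(prompt: str) -> str:
    """Pick a backend model for a request, roughly as the post describes."""
    text = prompt.lower()
    # Image-generation requests go to an image model, not an LLM.
    if any(kw in text for kw in ("draw", "image", "picture", "photo of")):
        return "DALL·E 3"
    # Harder reasoning tasks go to the deeper reasoning model.
    if any(kw in text for kw in ("prove", "derive", "step by step")):
        return "GPT-5 Thinking"
    # Everything else goes to the standard LLM.
    return "GPT-5"

print(route_request("draw me a map of Europe"))   # an image model handles this
print(route_request("summarise this article"))    # the LLM handles this
```

The point of the sketch: by the time your prompt reaches a model, a dispatcher has already decided whether an LLM is involved at all, which is why the distorted image is the image model's doing, not the LLM's.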