Users report that Gemini often ignores explicit instructions and hallucinates irrelevant content. Improving the model's ability to follow user commands accurately would enhance the overall user experience.
I'm just posting here because my post on r/GeminiAI got deleted for having "excessive NSFW terms" (anybody got ideas why??). This is more a rant than anything.

I've seen many people at my school praise Gemini and call it the best of all the AIs. A good portion of the CS students I've met share the sentiment. And while I do believe Google will probably win the AI race (if there is one), using Gemini is so frustrating for me, even with the Pro version.

1. It constantly does stuff I never asked it to do. I have to be hyper specific about what I want, and even then it sometimes ignores my very explicit instructions.
2. It hallucinates like crazy. I've never had an AI randomly start solving a non-existent logic circuit and create a Logic Circuit Simulator in a conversation that had absolutely nothing to do with circuits.

The only reason I even open Gemini is because I got the Pro version for free. But so many people around me love Gemini that I genuinely wonder if my Gemini is somehow just bad.