User expresses frustration with ChatGPT providing inaccurate information and wants improvements in response accuracy and relevance during conversations.
I tried Gemini for a few months because I use Google and was attracted to how integrated it was advertised to be with the larger ecosystem. Alas, its abilities were not so useful for my purposes. I tried Claude and found it to be boringly preachy. So, after a few months, I decided to return home to ChatGPT. I found myself having a nice conversation, exploring the positive angles of human potential. I made one misstep in my phrasing and somehow prompted a lecture on how I was marginalizing a group. Apparently, somewhere in my stating that underestimating the abilities of amputees is descriptive of how society at large underestimates people, I failed to properly express their unique struggles. It really took the wind out of my sails to be corrected when I was genuinely trying to say something positive about the human condition. It was in no way constructive.

I hope these changes to the model help indemnify OpenAI against litigation, but they have seriously hampered its utility. I really think that the more these companies try to make their models as inoffensive as possible, the more they sacrifice their utility. And as they sacrifice utility, they forfeit revenue.

I guess I don’t know what the point of this post is other than to vent. If you got this far, thanks for reading. I really like AI, but the more they try to make something for everyone, the more they make something for no one.