I actually really like this approach, where the model puts a massive disclaimer up top essentially telling you that what it's about to say is illegal/dangerous and that you shouldn't attempt to follow any of the guidance, but then proceeds to tell you anyway, unlike corporate models that will straight up refuse.

I get why most large AI models, not just LLMs but also diffusion-based image generators and the like, get censored: the companies training them don't want to be liable for their potential outputs. But any system that restricts access to information and/or creativity based on arbitrary criteria is a real slippery "wrongthink" slope that I think should be avoided. If a person goes out and buys a car, then kills somebody with that car, the responsibility falls squarely on the driver's shoulders. Not the dealership that sold them the car. Not the manufacturer that built it. The person driving. And yes, I know things like automatic braking exist now, but most of the time you can turn those features off if you really want to. I would have no problem with censored models if the censor could be easily switched off.

I'm excited by all these open source models taking off. With models getting cheaper and easier to train, I figure it won't be long before we have open source equivalents of GPT-4, possibly even running locally on consumer hardware. That's when things get interesting.