My point was that it’s fine to use it as a tool to dig up relevant sources (which should be verified before sharing), but not to rely on, or necessarily even share, its output in these cases.
Ideally we’d have a dedicated tag for it with a UI element users could click to add it. However, I don’t think there are any existing plugins for this forum software that add anything like this. And I don’t think it makes sense to spend time/resources on developing and maintaining such a plugin. Or maybe I can ask ChatGPT to do it?
So let’s use the Hide Details feature for now. Instead of remembering the markup, you can also insert it with two clicks in the UI:
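For reference, the markup itself looks like this (assuming the forum runs Discourse, where Hide Details uses BBCode-style tags; the summary text is just an example):

```text
[details="LLM output (unverified)"]
Paste the AI-generated text here.
[/details]
```

This renders as a collapsed block that readers have to click to expand, which keeps the AI output clearly separated from human-written posts.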
Yes, please flag instances of it. I’ll see if I can add a custom flag for it; for now, just use Inappropriate or a custom message.
There’s now an option to flag a post as containing unmarked LLM/AI output:

