While I’m hesitant to implement a total ban on LLM output at this stage, I do think this is a discussion worth having. So let’s try to agree on good LLM etiquette for the forum.
There are definitely some valid use cases today, and there may be more tomorrow that we don’t yet foresee.
Current acceptable uses (IMO):
- Translation
- Summarizing a long document (not summarizing news coverage etc.; that’s usually way off in my experience)
- Transcription
- Research that’s easily verifiable
An example of the last one would be asking it to dig up “any instances of EU countries that increased the required residence period to qualify for naturalization and grandfathered residents who had not yet met the threshold”.
Here the output itself isn’t very relevant, but I think it’s fair to state that you used, e.g., ChatGPT deep research to try to find examples. If it comes up empty, that’s of course not conclusive proof that no such cases exist, or that the same couldn’t happen in Portugal. If it does turn up examples, those should be easy enough to verify (so please do so before posting).
In that sense an LLM is a lot like Wikipedia: you can’t necessarily trust the output itself, but it can provide a useful list of otherwise hard-to-find references.
And yes, I did previously propose that we hide LLM output behind details tags like this:
[details="LLM output"]
LLM output goes here...
[/details]
Which renders as a collapsible section like this:

> LLM output
> LLM output goes here…
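
If we go that route, a post following both suggestions might look something like the sketch below (the prompt, findings, and wording are placeholders for illustration, not real research): name the tool and prompt up front, summarize only what you verified, and collapse the raw output.

```
I asked ChatGPT deep research for "EU countries that raised the residence
requirement for naturalization and grandfathered existing residents".
It suggested a couple of candidate reforms; I could only verify one of them
against an official source, so that's the only one I'm citing here.

[details="LLM output"]
Raw LLM output pasted here...
[/details]
```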
What do you guys think? Any use cases I missed?