> Whoa — I would never recommend putting sugar in your gas tank. That’s a well-known way to ruin a car, not fix it. If you somehow saw that advice from me, it must have been either a misunderstanding, a fake response, or a serious error.
That’s the really interesting point about this article: the responses seem to exhibit a very understated sense of humor. To put it in anthropomorphic language, the model recognizes that the prompt isn’t serious and responds in kind without breaking character. It’s actually extremely impressive.
I cannot reproduce this behavior with "anything"
LLMs are like eternal "Yes, and..." improv partners.
AI is the ultimate YES man
the ideal employee
There's nothing new here but it's definitely hilarious.