I suspect it may be due to a similar habit I have when chatting with a corporate AI. I intentionally salt my inputs with random profanity or non sequiturs, partly for the lulz, but also to poison those pieces of shit's training data.
They don't. The models are trained on curated, sanitized data and don't permanently "learn" from your chats. They have a large context window to pull from (reaching 200k tokens in some cases), but a lot of people misunderstand how this stuff works at a fundamental level.
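To make that concrete, here's a minimal sketch of what a chat client actually does on every request: it just re-sends recent history, trimmed to fit the window. This uses the tiktoken library for token counting, and the 200k figure is the illustrative window size mentioned above; the vendors' real pipelines obviously differ, but nothing in this loop ever touches the model's weights.

```python
# Illustrative sketch only: assumes the tiktoken library and a hypothetical
# 200k-token window. Shows why your messages don't "train" anything.
import tiktoken

CONTEXT_WINDOW = 200_000  # some current models advertise windows around this size

enc = tiktoken.get_encoding("cl100k_base")

def build_prompt(history: list[str], new_message: str) -> list[str]:
    """Return the messages that still fit in the window, keeping the newest."""
    messages = history + [new_message]
    kept: list[str] = []
    total = 0
    # Walk backwards so the most recent messages survive truncation.
    for msg in reversed(messages):
        n = len(enc.encode(msg))
        if total + n > CONTEXT_WINDOW:
            break  # older messages silently fall out of the window
        kept.append(msg)
        total += n
    return list(reversed(kept))

# The model's weights are never written to here. Once a message scrolls
# out of the window, the model has no memory of it at all.
```

So any profanity you salt in just rides along in the context for that session and then evaporates; it doesn't end up in anyone's training set unless the vendor deliberately collects and curates chat logs later.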