"Google Gemini tried to kill me"
"Google Gemini tried to kill me"
I followed these steps, but just so happened to check on my mason jar 3-4 days in and saw tiny carbonation bubbles rapidly rising throughout.
I thought that might just be part of the process, but double-checked with a Google search on day 7 (when there were no bubbles in the container at all).
Turns out I had just grown a botulism culture, and garlic in olive oil specifically is a fairly common way to grow this biotoxin.
Had I not checked on it 3-4 days in, I'd have been none the wiser and would have Darwinned my entire family.
Prompt with care and never trust AI, dear people...
Statements from LLMs are to be seen as hallucinations unless proven otherwise by classic research.
We don't need a fancy word that makes it sound like AI is actually intelligent when talking about how AI is frequently wrong and unreliable. AI being wrong is like someone who misunderstood something, or took a joke literally, repeating it as fact.
When people are wrong, we don't call it hallucinating unless their senses are altered. AI doesn't have senses.
It's not a "fancy word" here, but a technical term. An AI making things up is actually called a hallucination.