I've been putting a lot of thought into automating an AI bot that would use some source of randomness to generate seed prompts, feed those through an image generator, and post the results.
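Something like this minimal Python sketch is what I'm picturing. The word pools are arbitrary, and generate_image / post_image are hypothetical placeholders for whatever image API and posting mechanism you'd actually wire up, not real library calls:

    import random

    # Arbitrary word pools; any corpus or random-word source would do.
    ADJECTIVES = ["melancholy", "neon", "baroque", "submerged", "colossal"]
    SUBJECTS = ["lighthouse", "octopus", "typewriter", "cathedral", "astronaut"]
    STYLES = ["oil painting", "pixel art", "ukiyo-e print", "blueprint"]

    def random_seed_prompt() -> str:
        """Combine random fragments into a seed prompt."""
        return f"a {random.choice(ADJECTIVES)} {random.choice(SUBJECTS)}, {random.choice(STYLES)}"

    def generate_image(prompt: str) -> bytes:
        """Hypothetical stand-in for a real image-generation API call."""
        raise NotImplementedError("wire up your image generator here")

    def post_image(image: bytes, caption: str) -> None:
        """Hypothetical stand-in for a real posting call (forum, social media, etc.)."""
        raise NotImplementedError("wire up your posting target here")

    if __name__ == "__main__":
        prompt = random_seed_prompt()
        post_image(generate_image(prompt), caption=prompt)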
It then gave me step-by-step instructions for using the OCR feature in Microsoft Word to import text from a picture, and in step 3 admitted that the feature doesn't exist. There were six steps.
Some of the current thinking on the shortcomings of LLMs actually draws on human cognitive science, and on what can be learned from people with neurological impairments. Human language abilities are thought to be strongly dissociated from other reasoning abilities, because individuals with aphasia can lose the ability to speak or comprehend language yet still solve mathematical problems, engage in logical reasoning, enjoy music, categorize objects and events, etc.
There's evidence that LLMs develop a crude world model for performing reasoning tasks, yet it's inextricably tied up with their language functionality (since they are ONLY language-based). The hope for future research is to develop AIs whose world models and planning faculties are decoupled from the language-analysis module, which would mitigate hallucination and aid interpretability.
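To make the "decoupled" idea concrete, here's a toy Python sketch of that kind of architecture: language modules on the edges, a non-linguistic state and planner in the middle. Everything in it (the fact lookup, the planner rule) is purely illustrative, not any real proposed system:

    # Language module: map text to structured facts (stubbed with a lookup).
    def parse(utterance: str) -> dict[str, bool]:
        known = {"the door is locked": {"door_locked": True}}
        return known.get(utterance.lower(), {})

    # Planner: reasons over facts, never over raw text.
    def plan(state: dict[str, bool], goal: str) -> list[str]:
        if goal == "open_door" and state.get("door_locked"):
            return ["unlock_door", "open_door"]
        return ["open_door"]

    # Language module again: render the plan back into prose.
    def verbalize(steps: list[str]) -> str:
        return "First " + ", then ".join(s.replace("_", " ") for s in steps) + "."

    state = parse("The door is locked")
    print(verbalize(plan(state, "open_door")))  # First unlock door, then open door.

The point of the shape is that the middle step never touches language, so in principle it can't "hallucinate" a fluent-sounding but false sentence; it can only produce plans over facts it actually holds.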
I was following it correctly up until the part where you have to place a child on your laptop. I wish these things would let you know the parts required beforehand.
I'm actually impressed by the reasonably coherent (though nonsensical) text. If you think about how generative AI works, it's surprising it can form words in images at all.
Microsoft's image generator has been getting better and better at text. There are still plenty of problems, especially with small text, but someone on another forum was able to get it to output this with a very small prompt:
It's not AGI that's terrifying, but how willingly people let anything take over control. LLMs are "just" predictive text generation with a lot of extras that make the output come out really convincing sometimes, and yet so many individuals and companies basically handed over the keys without even second-guessing the answers.
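For anyone who hasn't internalized what "predictive text generation" means at the core, here's a toy next-word sampler in Python. A real LLM is essentially this loop with a neural network conditioned on the whole preceding context instead of a hand-written lookup table; the table and weights below are made up:

    import random

    # Toy "model": each word maps to candidate next words with weights.
    # An LLM does the same job at enormous scale: predict a probability
    # distribution over the next token given everything that came before.
    NEXT = {
        "the": (["cat", "dog", "answer"], [0.4, 0.4, 0.2]),
        "cat": (["sat", "ran"], [0.6, 0.4]),
        "dog": (["barked", "sat"], [0.5, 0.5]),
        "sat": (["quietly"], [1.0]),
    }

    def generate(start: str, length: int = 4) -> str:
        words = [start]
        for _ in range(length):
            choices, weights = NEXT.get(words[-1], (["."], [1.0]))
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat quietly ."

Nothing in that loop knows whether what it emits is true; it only knows what tends to come next. Scale that up and polish it, and you get text convincing enough that people stop checking.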
These past few years have shown that if (and it's a big if) AGI/ASI comes along, we are so screwed, because we can't even handle dumber tools well. LLMs in the hands of willing idiots can be a disaster in themselves, and it's possible we're already there.