I think a better way to view it is as a search engine that works at the word level of granularity. When library indexing systems were invented, they let us look up knowledge at the book level. Search engines allowed lookups at the document level. LLMs allow lookups at the word level, meaning all previously transcribed human knowledge can be synthesized into a response. That's huge, and it becomes extra huge because an LLM can also draw on programming knowledge, letting it metaprogram and perform complex tasks accurately. You can also hook one up to external APIs so it can do even more. What we have is basically a program that can write itself based on the entire corpus of human knowledge, and that will have a tremendous impact.
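To make the "hook it up to external APIs" part concrete, here's a minimal sketch of the tool-calling loop most agent frameworks use. Everything here is illustrative: `fake_llm` stands in for a real model call, and `get_weather` stands in for a real external API.

```python
import json

def get_weather(city: str) -> str:
    # Stand-in external API; a real agent would hit an HTTP endpoint here.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(prompt: str) -> str:
    # A real model decides this dynamically; we hard-code one tool request.
    # Convention: a JSON reply means "call this tool", plain text means "done".
    if "TOOL_RESULT" not in prompt:
        return json.dumps({"tool": "get_weather", "args": {"city": "Oslo"}})
    return "It's sunny in Oslo."

def run_agent(user_query: str) -> str:
    prompt = user_query
    for _ in range(5):  # cap iterations to avoid runaway tool calls
        reply = fake_llm(prompt)
        try:
            call = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain text: the model has finished
        result = TOOLS[call["tool"]](**call["args"])
        prompt += f"\nTOOL_RESULT: {result}"  # feed result back to the model
    return reply

print(run_agent("What's the weather in Oslo?"))  # → It's sunny in Oslo.
```

The loop is the whole trick: the model emits a structured tool request, the harness executes it and appends the result, and the model continues with that new information. Real frameworks add schemas and error handling, but the shape is the same.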
The next step is to dig deeper and not get stuck on the most popular semantic trap.
Then you can begin your journey, man.
There are so, so many LLM chains that do way more than parrot. "Stochastic parrot" is just the latest popular catchphrase.
It's tiring to keep explaining this, because even shallow research shows there's more going on than the "it's a parrot" comment suggests. We are all parrots, in that sense. And the label is largely irrelevant to the AI safety and usefulness debates anyway.
Most LLM implementations use frameworks precisely to build up different kinds of understanding. The tooling is rough, sure, but it's just not true that they only parrot known things; they build internal representations, especially when you look at agent networks.