It must be a silent R
Copilot may be a stupid LLM, but the human in the screenshot used an apostrophe to pluralize, which, in my opinion, is an even more egregious offense.
It's incorrect to pluralize letters, numbers, acronyms, or decades with apostrophes in English. I will now pass the pedant stick to the next person in line.
That's half-right. Upper-case letters aren't pluralised with apostrophes but lower-case letters are. (So the plural of 'R' is 'Rs' but the plural of 'r' is 'r's'.) With numbers (written as '123') it's optional - IIRC, it's more popular in Britain to pluralise with apostrophes and more popular in America to pluralise without. (And of course numbers written as words are never pluralised with apostrophes.) Acronyms are indeed not pluralised with apostrophes if they're written in all caps. I'm not sure what you mean by decades.
I salute your pedantry.
English is a filthy gutter language and deserves to be wielded as such. It does some of its best work in the mud and dirt behind seedy boozestablishments.
Oooh, pedant stick, pedant stick! Give it to me!!
Thank you. Now, insofar as it concerns apostrophes (he said pedantically), couldn't it be argued that the tools we have at our immediate disposal for making ourselves understood through text are simply inadequate to express the depth of a thought? And wouldn't it therefore be more appropriate to condemn the lack of tools rather than the person using them creatively, despite their simplicity? At what point do we cast off the blinders and leave the guardrails behind? Or shall we always bow our heads to the wicked chroniclers who have made unwitting fools of us all; and for what? Evolving our language? Our birthright?
No, I say! We have surged free of the feeble chains of the Oxfords and Websters of the world, and no guardrail can contain us! Let go your clutching minds of the anchors of tradition and spread your wings! Fly, I say! Fly and conformn't!
...
I relinquish the pedant stick.
Prescriptivist much?
Plenty of fun to be had with LLMs.
So ChatGPT has ADHD
ADHD contains twelve "r's"
That's one example where LLMs won't work without some tuning. What it's probably doing is looking up information about how many Rs there are, instead of actually analyzing the word.
It cannot "analyze" it. That's fundamentally not how LLMs work. The LLM has a finite set of "tokens": words and word pieces like "dog" and "house", but also like "berry", "straw" or "rasp". When it reads the input it splits the words into the recognized tokens, like a lookup table. The input becomes "token15, token20043, token1923, token984, token1234, ..." and so on. The LLM "thinks" of these tokens as coordinates in a very high-dimensional space, but it cannot go back and examine the actual contents (letters) of each token. It has to get the information about the number of "r"s from somewhere else. So it has likely ingested some texts where the number of "r"s in strawberry is discussed, but it can never actually "test" it.
A completely new architecture or paradigm is needed to make these LLMs capable of reading letter by letter and keeping some kind of count-memory.
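To make that concrete, here's a rough sketch of what the model actually receives instead of letters. This assumes the tiktoken library (OpenAI's public tokenizers); the exact split varies by model, so treat the output as illustrative:

import tiktoken

# "cl100k_base" is one of the tokenizer encodings shipped with tiktoken
enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("strawberry")
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

print(token_ids)  # a short list of integers - this is all the model "sees"
print(pieces)     # a few multi-letter chunks, never individual letters

The model gets those integer IDs, not the spelling, so "how many r's" has to come from memorized associations rather than inspection.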
I doubt it's looking anything up. It's probably just grabbing the previous messages, reading the word "wrong" and increasing the number. Before these messages I got ChatGPT to count all the way up to ten r's.
The T in "ninja" is silent. Silent and invisible.
“Create a python script to count the number of r characters present in the string strawberry.”
The number of 'r' characters in 'strawberry' is: 2
You need to tell it to run the script
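If it (or you) actually runs the script, the count comes out right; it's only the made-up output above that's wrong. A minimal version in plain Python, using nothing beyond the standard library:

word = "strawberry"
count = word.count("r")  # actually inspects the characters instead of guessing
print(f"The number of 'r' characters in '{word}' is: {count}")  # prints 3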
This is hardly programmer humor… there is probably an infinite number of wrong responses by LLMs, which is not surprising at all.
I don't know, programs are kind of supposed to be good at counting. It's ironic when they're not.
Funny, even.
Many intelligences are saying it! I'm just telling it like it is.
Isn't "Sphinx of black quartz, judge my vow." more relevant? What's all the extra bit anyway, even before the "z" debacle?
I was curious whether (since these are statistical models and not actually counting letters) this or something like it is a common "gotcha" question used as a meme on social media. So I did a search on DDG, which also has an AI now, and it turned up an interestingly more nuanced answer.
It's picked up on discussions specifically about this problem in chats about other AI! The ouroboros is feeding well! I figure this is also why they overcorrect to 4 if you ask them about "strawberries", trying to anticipate a common gotcha answer to further riddling.
DDG correctly handled "strawberries" interestingly, with the same linked sources. Perhaps their word-stemmer does a better job?
Lmao it's having a stroke
Many words should run into the same issue, since LLMs generally use fewer tokens per word than there are letters in the word. So they don't have direct access to the letters composing the word and have to go off indirect associations between "strawberry" and the letter "R".
duckassist seems to get most right but it claimed "ouroboros" contains 3 o's and "phrasebook" contains one c.
DDG's one isn't a straight LLM, they're feeding web results as part of the prompt.
5% of the times it works every time.
You can come up with statistics to prove anything, Kent. 45% of all people know that.
"it is possible to train 8 days a week."
-- that one ai bot google made
Probably trained on this argument.
I bust out laughing when I got to here:
Ah, trained off that body builder forum post about days of the week I see.
Ladies and gentlemen: The Future.
"In the Future, people won't have to deal with numbers, for the mighty computers will do all the numbers crunching for them"
The mighty computers:
Q: "How many r are there in strawberry?"
A: "This question is usually answered by giving a number, so here's a number: 632. Mission complete."
A one-digit number. Fun fact, the actual spelling gets stripped out before the model sees it, because usually it's not important.
It can also help you with medical advice.
There ARE two "R"s in strawberry.
There's also a third one, but you can't have three without having two.
That reminds me, I have 1 finger. I also have two fingers, 3 fingers and all the way up to 10 fingers!
True fact.
Boy, your face is red like a strawbrerry.
Jesus hallucinatin' christ on a glitchy mainframe.
I'm assuming it's real (though it may not be), but seriously, this is spellcheck. You know how long we've had spellcheck? Decades.
This? This is what's thrown the tech markets into chaos? This garbage?
Fuck.
I was just thinking about Microsoft Word today, and how it still can't insert pictures easily.
This is a 20+ year old problem for a program that was almost completely functional in 1995.
"strawberry".split('').filter(c => c === 'r').length
len([c for c in "strawberry" if c == 'r'])
'strawberry'.match(/r/ig).length
(get (frequencies "strawberry") \r)
A zero indexed array doesn't have a different length ;)
Using a token predictor to do sub-token analysis produces bad results?!?! Shocking Wow great content
maybe it’s using the british pronunciation of “strawbry”
Garbage in, garbage out. Keep feeding it shit data, expect shit answers.
There’s a simple explanation: LLMs are “R” agnostic because they were specifically trained to not sail the high seas
To be fair, I knew a lot of people who struggled with word problems in math class.
I stand with chat-gpt on this. Whoever created these double letters is the idiot here.
I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure and it then corrected itself (maybe reexamining the word in a different way?). I then asked how many Rs are in "strawberries", thinking it would either see a new word and give the same incorrect answer, or, since the original word was still in context, say something about it also being 3 Rs. Nope. It said 4 Rs! I then said "really?", and it corrected itself once again.
LLMs are very useful as long as you know how to maximize their power and don't assume whatever they spit out is absolutely right. I've had great luck using mine to help with programming (basically as a Google that formats things far better than if I looked the stuff up myself), but I've also found some of the simplest errors in the middle of a lot of helpful things. It's at an assistant level, and you need to remember that an assistant helps you; they don't do the work for you.
Is there anything else or anything else you would like to discuss? Perhaps anything else?
Anything else?
A humorous follow up response would be "sure, here's another question: How the hell did they think you were ready to be utilized?"
The only correct answer: "I can answer that for you! The reason they thought I was ready to be utilized by the general public is because money!"
Tbf, this is the kind of answer a person might give if you asked them the question randomly.
It has [2] R's, simple!
First mentioned by Linus Tech Tips.
i had fun arguing with chatgpt about this
I hate AI, but here it's a bit understandable why Copilot says that. If you asked the same thing of another person, they might well respond 2, since they may assume you're trying to spell the word and are struggling with whether there's one R or two in the last part.
I know it's a common thing to ask in French when we struggle to spell our overly complicated language, so it doesn't shock me.
Nah, it's because AI works at the token level, which is usually words. They don't even "see" the letters in the words.
Thank you. For as often as this post comes up, I hope people are at least getting an education.
The people here don't get LLMs and it shows. This is neither surprising nor a bad thing imo.
In what way is presenting factually incorrect information as if it's true not a bad thing?
People who make fun of LLMs most often do get LLMs and are trying to point out how they tend to spew out factually incorrect information. That's a good thing, since many, many people out there do not, in fact, "get" LLMs (most aren't even acquainted with the acronym, referring to the catch-all term "AI" instead), and there is no better way to warn about the inaccuracy of LLM output, however realistic it might sound, than to point it out with examples of ridiculously wrong answers to simple questions.
Edit: minor rewording to clarify