This always cracks me up, because it's a perfect example of a snake eating its own tail. "Based" was originally just a shortened way of saying "based in reality" or "based in fact", but new people didn't get the original context, so it just became its own word. Then the uninitiated started making the "Based? Based on what?" joke, completely oblivious to the original meaning.
Why do the leaders in AI know so little about it? Transformers are completely incapable of maintaining any internal state, yet techbros somehow think one will magically emerge. Sometimes machine learning can be more of an art than a science, but they seem to think it's alchemy. They think they're drawing pentagrams out of acyclic graphs, but they're really just summoning a mirror of their own stupidity.
It's really unfortunate, since they drown out all the news about novel and interesting methods of machine learning. KANs, DNCs, Mamba: they all have a lot of promise, but they can't get any recognition because transformers are the laziest and most dominant method.
Honestly, I think we need another winter. All this hype is drowning out any decent research, so all we're getting are bogus tests and experiments that are irreproducible because they're so expensive. It's crazy how unscientific these 'research' organizations are. And OpenAI is being paid by Microsoft to basically jerk off Sam Altman. It's plain shameful.
The issue with Sonnet 3.5, in my limited testing, is that even with explicit, specific, and direct prompting, it can't perform anywhere near human ability, and it will often make very stupid mistakes. I developed a program that essentially lets an AI program, rewrite, and test a game, but Sonnet will consistently take lazy routes, use incorrect syntax, and repeatedly call the same function over and over for no reason. If you can program the game yourself, it's a quick way to prototype, but unless you know how to properly format JSON and fix strange artefacts, it's just not there yet.
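The core loop is roughly this (a minimal sketch, not the actual program; the model name, prompts, and file names are placeholders, and I'm assuming the official Anthropic Python SDK):

```python
import json
import subprocess
import anthropic  # assuming the official Anthropic Python SDK

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_model(prompt: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

def run_game(source_path: str) -> str:
    # Launch the generated game; if it survives ten seconds without
    # crashing, call that a pass for prototyping purposes.
    try:
        result = subprocess.run(
            ["python", source_path], capture_output=True, text=True, timeout=10
        )
        return result.stdout + result.stderr
    except subprocess.TimeoutExpired:
        return ""  # still running after 10s: no crash, good enough

prompt = 'Write a tiny Pong clone in pygame. Reply ONLY with JSON: {"code": "..."}'
for attempt in range(5):
    reply = ask_model(prompt)
    try:
        code = json.loads(reply)["code"]  # this is where malformed JSON bites you
    except (json.JSONDecodeError, KeyError) as err:
        prompt = f"Your last reply was not valid JSON ({err}). Same task, try again."
        continue
    with open("game.py", "w") as f:
        f.write(code)
    output = run_game("game.py")
    if "Traceback" not in output:
        break  # no crash; hand it to a human from here
    prompt = f"The game crashed with:\n{output}\nFix it. Same JSON format."
```

Most of the pain in practice is in that `json.loads` line, which is exactly the "properly format JSON" problem I mentioned.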
I don't know what GNV/HYDRA is, but I like the sound of it. What's the repo called?
Recently, research has suggested that LLMs can solve moderately harder problems if prompted to use "chain of thought" reasoning (CoT). In CoT, the LLM essentially pretends to think the problem through, producing a few intermediate steps before committing to an answer. Of course, this doesn't really stop them from giving bad solutions to established problems, but it does make them better at novel ones.
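In practice it's just a change to the prompt. A minimal sketch (the bat-and-ball question is the classic example, and "Let's think step by step" is the standard zero-shot CoT trigger from the literature):

```python
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

# Direct prompting: the model tends to blurt out the intuitive (wrong) answer.
direct_prompt = f"Q: {question}\nA:"

# Zero-shot chain-of-thought: the appended trigger makes the model emit its
# intermediate steps first, which is where the accuracy gain comes from.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```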
This whole thing reminds me of the fable of the frog and the scorpion crossing the river. It is simply the nature of the scorpion to act like a scorpion, regardless of what intelligence we ascribe to it.
One theory about nightmares is that they serve as exposure therapy for stressors. If your nightmares are too extreme, maybe you could set aside some time during the day to enter a calmer environment and try to review them without getting overwhelmed. It might desensitize you a little and make them less severe.
I haven't had severe nightmares since I was a preteen, but when I did, they tended to be pretty stressful (being kidnapped, being abandoned, my friends committing suicide in front of me, etc.). I'm not sure exactly why they stopped, but I eventually 'burnt out', and I just stopped really caring about them.
Can't say I turned out great, but I can say I don't have nightmares anymore.
Everything can be done in constant time, at least at runtime, with a sufficiently large look-up table. It's easy! If you want to simulate the universe exactly, you just need a table with n×m entries, where n is the number of Planck volumes in the universe and m is the number of quantum fields. Then you just compute all of them at compile time, and you have O(1) time complexity at runtime.
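The principle in miniature (a toy sketch; the table size and the choice of sine are arbitrary stand-ins for the universe, and building the table at import time is Python's nearest analogue to compile time):

```python
import math

TABLE_SIZE = 1 << 16
# Pay the full cost up front: precompute every answer we'll ever need.
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(x: float) -> float:
    # "Runtime" is now a single O(1) table lookup; all the real work
    # already happened during the precompute above.
    index = round((x % (2 * math.pi)) / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SIN_TABLE[index]

print(fast_sin(1.0), math.sin(1.0))  # close enough, and "constant time"
```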
The cure to saying stupid things is death, too. Funny, that.
In fact, the only thing that can't be cured by death is dying.
There are bindings in Java and C++, but Python is the industry standard for AI. The machine learning libraries are actually written in C++ and exposed through Python bindings. Python doesn't tend to slow things down, since machine learning is GPU-bound anyway. There are also library-specific languages that urge you to write Pythonic code which then compiles down to C++.
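For example, in PyTorch (assuming a CUDA-capable machine), the Python lines return almost immediately; the actual math runs inside compiled C++/CUDA kernels, so the interpreter is never the bottleneck:

```python
import torch

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b  # dispatched asynchronously to a compiled GPU kernel; Python moves on
torch.cuda.synchronize()  # only now do we wait for the GPU to actually finish
```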
I completely agree that it's a stupid way of doing things, but it is how OpenAI reduced the vocab size of GPT-2 and GPT-3. As far as I know (I have only read the comments in the source code), the conversion is done as a preprocessing step. Here's the code to GPT-2: https://github.com/openai/gpt-2/blob/master/src/encoder.py I did apparently make a mistake, though: the vocab reduction is done through a LUT instead of a simple mod.
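Paraphrasing that LUT from memory (see the link for the exact code), the gist is: map each of the 256 possible byte values to a printable Unicode character, so arbitrary UTF-8 byte strings become clean-looking strings that the BPE merges can operate on.

```python
def bytes_to_unicode():
    # Bytes that are already printable and not whitespace keep their own codepoint.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            # Everything else (control chars, whitespace, etc.) is shifted
            # past 255 so it still maps to a unique printable character.
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, [chr(c) for c in cs]))

BYTE_TO_CHAR = bytes_to_unicode()
print("".join(BYTE_TO_CHAR[b] for b in "héllo".encode("utf-8")))  # -> hÃ©llo
```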
Can't find the exact source (I'm on mobile right now), but the code for the GPT-2 encoder uses a UTF-8-byte-to-Unicode look-up table to shrink the vocab size. https://github.com/openai/gpt-2/blob/master/src/encoder.py
This might be happening because of the 'elegant' (incredibly hacky) way OpenAI encodes multiple languages into their models. Instead of using all character sets, they apply a modulo to each character, so all Unicode characters get represented by a small range of values. On the back end, it somehow detects which language is being spoken and uses that character set for the response. Seeing as the last line seems to be the same mathematical expression you asked about, my guess is that your equation just happened to map perfectly onto some sentence that makes sense in that weird language.
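A toy version of the folding I mean (the modulus here is made up for illustration; per my other comment, the real encoder turned out to use a LUT rather than a plain mod, but the collision effect is similar):

```python
FOLD = 256  # made-up modulus, purely to show the collision effect

def fold(ch: str) -> int:
    # Squash every codepoint into a small range; distinct characters from
    # different scripts can land on the same ID.
    return ord(ch) % FOLD

# Cyrillic 'д' (1076) and Armenian 'Դ' (1332) both fold to 52:
print(fold("д"), fold("Դ"))
```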
I don't know about that guy, but I used to have a speech impediment that meant I couldn't pronounce the letter R. I went to several speech therapists, so I started to enunciate every other letter, but that made people think I had a British accent. Anyway, I eventually learned how to say R, so now I have a speech impediment that makes me sound like a British person doing a fake American accent.
Yeah, mine. EYYYYOOOOO! (I may or may not have ED)
Oh, so the goal is to get the certain doom?
"An anaconda that is sprung?" What does that mean?
At first, I thought you were referencing the Old Testament.
The tweet is referring to saying "The [group] are xyz" instead of saying "[group] people are xyz".