"Gentlemen, it's come to our attention that everyone who could pay to use our product is paying to use our product. Unfortunately, it also means we're no longer growing infinitely like we promised the shareholders we would. How do we fix this?"
The infinite growth mindset is so fucking stupid. Like, you're still making an insane amount of money, what's the fucking problem?
Because stock bros be lazy AF. They can't even be bothered to buy and sell stock based on who is and isn't doing well, so they use investment firms that offer safe and risky bets like ETFs and futures, respectively. Ultimately, everyone really just wants to buy one stock, have it make money that exponentially makes even more money quarter after quarter forever, and then do whatever they'd actually do if money were no object (i.e. actually live life).
We don't live in a world where desire/need scales evenly with this desire for exponential, eternal growth. Therefore capitalists, who promise this impossible prospect to Wall Street, exploit human fears, desires, and needs however they can (union busting, lobbying, etc.) to keep growth up for as long as they can.
George Carlin probably put this concept best in what has to be one of the greatest bits of all time, on The Big Club. Although not directly related to this topic per se, it points to the truth of how this game between government politicians and capitalist corporate overlords is played. All the while, the majority of people, who are not in "The Big Club", are basically fucked.
Business people really are just monkeys chasing shiny things. They tend to be less developed emotionally and are often very insecure on top of the entitlement. All they have is the chase, nothing else.
The most useless degree a university can grant is one in business administration.
No. Money has to make money with those sweet sweet interest payments. It's baked into the system. How else are you going to maintain a small elite of filthy rich people?
Ever more wealth must be constantly created in the world purely to service the interest on all that debt out there, otherwise you would get defaults and banks failing.
It's not by chance that everywhere the "solution" for the 2008 Crash (which happened mainly due to over-indebtedness in the mortgage segment) was to lower interest rates to pretty much zero - it weakens the pressure on the entire system to constantly grow merely to generate the additional wealth needed to pay the interest on the debt.
You'll also notice that as soon as interest rates went up just a bit, bank profits massively grew.
It writes my most boring emails so that I can save a scrap of mental energy for parenting properly after work. Even though my WPM ranges between 70 and 90 with >98% accuracy, I would rather save some of that mental energy to respond more thoughtfully as a dad.
Of note, I do not give one cold shit about GPT's "growth". It's a linguistic power tool that needs to be carefully handled if you use it for any valuable work.
Of note, I do not give one cold shit about GPT's "growth"
I mean, if you like the platform, its growth is tied to its continued existence and free usability. Still in the honeymoon phase as long as it's growing.
Just wanted to say that "GPT" is a general term and not just a name. OpenAI tried to trademark it but couldn't because of that. It's as if Nintendo was trying to trademark the word "Kart" because of "Mario Kart".
This was inevitable, not sure why it's newsworthy. ChatGPT blew up because it brought LLM tech to the masses in an easily accessible way and was novel at the mainstream level.
The majority of people don't have a use for chat bots day-to-day, especially one that's as censored and outdated as ChatGPT (its dataset is from over 2 years ago). Casual users would want it for simple stuff like quickly summarizing current events, or even as a Google search-like repository of info. Can't use it for that when even seemingly innocuous queries/prompts are met with ChatGPT scolding you for being offensive, or reminding you that its dataset is old and not current. Sure, it was fun to have it make your grocery lists and workout plans, but that novelty eventually wears off as it's not very practical all the time.
I think LLMs in the form of ChatGPT will truly become ubiquitous when they can train in real time on up-to-date data. And since that's very unlikely to happen in the near future, I think OpenAI has quite a bit of progress left to make before their next breakout moment comes again. Sora did wow the mainstream (anyone in the AI scene has been well aware of AI-generated video for a while now), but OpenAI has already said they're not making that publicly available for now (which is a good thing for obvious reasons, unless strict safety measures are implemented).
I'm genuinely surprised anytime I get anything remotely useful from any of the AI chatbots out there. About half the responses are beyond basic-level shit that I could've written on my own or just found by Googling it, or it'll give just plain wrong information. It's almost useless for important, fact-based information if you can't trust any of its responses. So the only thing it's good for is brainstorming creative ideas or porn, and the majority of them out there won't touch anything even mildly titillating. You're just left with an overly sensitive chatbot where crafting a good prompt takes about as much work as writing the answer out yourself.
I tried playing a game of 20 Questions with one of them (my word was "donkey", it was way off and even cheated a bit) and it kind of scolded me at the end because I told it the thing wasn't bigger than a house, as if I was the one who got that fact wrong.
Same, I don't like my own habit of compulsively writing long nervous texts, but the side effect is that I can quickly and easily write most of what people want from LLMs myself.
The P in GPT is Pretrained. It's core to the architecture design. You would need to use some other ANN design if you wanted it to continuously update, and there is a reason we don't use those at scale at the moment: they scale much worse than pretrained transformers.
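A toy sketch of the distinction (the class names are invented and this is nothing like a real transformer): a pretrained model's weights are read-only at inference time, while a continually-learning model writes to them on every example it sees.

```python
# Toy illustration of "pretrained": inference reads the weights but
# never updates them, so anything after the training cutoff is unknown.

class PretrainedModel:
    def __init__(self, weights):
        self.weights = dict(weights)   # fixed once training ends

    def predict(self, token):
        # lookup only -- no gradient step, no write
        return self.weights.get(token, "<unk>")

class OnlineModel(PretrainedModel):
    def observe(self, token, target):
        # a continually-learning model would update on every example;
        # this is exactly the step GPT-style models skip at inference
        self.weights[token] = target

frozen = PretrainedModel({"hello": "world"})
online = OnlineModel({"hello": "world"})

online.observe("new", "fact")    # online model absorbs fresh data
print(frozen.predict("new"))     # -> <unk>  (stuck at its cutoff)
print(online.predict("new"))     # -> fact
```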
It's not exactly training, but Google just recently previewed an LLM with a million-token context window that can do effectively the same thing. One of the tests they did was to put a dictionary for a very obscure language (only 200 speakers worldwide) into the context, knowing that nothing about that language was in its original training data, and the LLM was able to translate it fluently.
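The mechanism there is just in-context learning: stuff the reference material into the prompt instead of the weights. A rough sketch of the idea, with a made-up two-word glossary and prompt format (the real demo used a full grammar reference, not a toy list like this):

```python
# Sketch: "training-free" language knowledge via long context.
# Everything here is illustrative -- the glossary entries and prompt
# wording are invented, not from any real model or demo.

def build_prompt(glossary: dict[str, str], sentence: str) -> str:
    # serialize the reference material straight into the context window
    entries = "\n".join(f"{src} = {dst}" for src, dst in glossary.items())
    return (
        "You are a translator. Use only this glossary:\n"
        f"{entries}\n\n"
        f"Translate: {sentence}"
    )

glossary = {"miri": "water", "tok": "drink"}   # hypothetical entries
prompt = build_prompt(glossary, "miri tok")
print(prompt)
```

With a million-token window, that `entries` string can be an entire dictionary rather than two lines, which is why it can stand in for retraining.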
OpenAI has already said they’re not making that publicly available for now
This just means that OpenAI is voluntarily ceding the field to more ambitious companies.
Gemini is definitely poised to bury ChatGPT if its real-world performance lives up to the curated examples they've demonstrated thus far. As much as I dislike that it's Google, I am still interested to try it out.
This just means that OpenAI is voluntarily ceding the field to more ambitious companies.
Possibly. While text to video has been experimented with for the last year by lots of hobbyists and other teams, the end results have been mostly underwhelming. Sora's examples were pretty damn impressive, but I'll hold judgment until I get to see more examples from common users vs cherry-picked demos. If it's capable of delivering that level of quality consistently, I don't see another model catching up for another year or so.
This was always going to be limited. Eventually, it doesn't matter how much data you dump in; it won't be unique enough to train anything new into the model.
I somehow think its usefulness isn’t tied to consumers logging in and just using it. Eventually, this thing will be bundled into a personal assistant like Siri or Alexa and that will be how most people use it.
Additionally, businesses are going to try to use it in a lot of different ways… replacing phone and chat support, writing ad copy, articles, code, legal documents, etc, etc, etc. It still feels a bit early for companies to have adopted it. I imagine it will just take a lot of work integrating it. ChatGPT also needs to be able to DO things, and wiring that all up is some work. For example, if I call a company for support with a product and ask for a refund, the AI needs to have access to the company’s systems to be able to do that task. It also has to do it reliably and correctly all the time.
Microsoft has a library that does exactly what you describe - Semantic Kernel. You can register plugins that do all sorts of things in the real world, and the AI API responds with instructions on how to use these plugins.
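The plugin pattern boils down to something like this (a generic Python sketch, not Semantic Kernel's actual API; the plugin name, JSON shape, and refund function are all invented for illustration):

```python
# Generic tool-dispatch sketch: the model returns a structured
# "call this plugin" instruction, and the host app executes it.

import json

def issue_refund(order_id: str, amount: float) -> str:
    # in a real deployment this would hit the company's billing backend
    return f"refunded ${amount:.2f} on order {order_id}"

# registry of capabilities the AI is allowed to invoke
PLUGINS = {"issue_refund": issue_refund}

def dispatch(model_output: str) -> str:
    # the LLM emits JSON naming a registered plugin and its arguments
    call = json.loads(model_output)
    fn = PLUGINS[call["plugin"]]
    return fn(**call["args"])

# pretend the model replied with this structured instruction:
reply = '{"plugin": "issue_refund", "args": {"order_id": "A17", "amount": 25.0}}'
print(dispatch(reply))   # -> refunded $25.00 on order A17
```

The hard part the parent comment points at is exactly that registry: every real-world action still has to be wired up, secured, and made reliable by the company itself.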
I'm very sceptical of LLMs, but this is the closest this technology has come to actually being useful.
I get that the article is about user count, but that's really about perceived usefulness, and about more decent AI competitors to ChatGPT Pro.
They literally only just released text-to-video, which they say will be used as a foundation for AGI reasoning.
They have also hinted that training on GPT-5 has begun and that it will be faster to train than GPT-4.
Just before that, Google came out with a new model that can keep track of 10 million tokens, beats Gemini Pro, and is also much faster to train. Gemini Pro is barely a month old.
This will not be a quiet year for AI. If there's any flatline, it's going to be vertical; Google's progress nearly is. Most AI progress benefits the entire industry over time.
Obviously you're not evolved enough to realize that AI is THE FUTURE of all things and everything is better with AI! A child in a poor environment was saved by AI! A king who was mean was dethroned with AI! Everyone was made happy by AI! Say it! SAY IT!!!
The image is AI. Look at the keys on that keyboard. What fucking language is that?
And for you AI fanbois who have to downvote this because nooooooo AI images future - pffft. Your boos mean nothing to me, I've seen what makes you cheer.
I'm downvoting you because you're annoying and a detriment to the conversation, not because you recognised an AI generated image, which really didn't require an inspection of the keyboard to determine.
It pisses me off when I ask it to do something specific and it comes back with some verbose response about all the things I should think about or look into if I were to want to do that thing. Like, bitch, I’m not asking for advice, do it.
Especially when it gets the situation wrong as often as it does.
OpenAI have the best tech, but are making some really bad choices when it comes to productizing that tech.
Either the tech outpaces their bad decision making, or they are going to get eclipsed by companies catching up to their tech but with better product vision.
As amazing as I find their technology, I wouldn't personally invest in the company.