I feel like the amount of training data required for these AIs serves as a pretty compelling argument as to why AI is clearly nowhere near human intelligence. It shouldn't take thousands of human lifetimes of data to train an AI if it's truly near human-level intelligence. In fact, I think it's an argument for them not being intelligent whatsoever. With that much training data, everything that could be asked of them should be in the training data. And yet they still fail at any task not in their data.
Put simply: a human needs less than one lifetime of training data to be more intelligent than AI. If throwing more training data/compute at the problem hasn't already solved this, I don't think it ever will.
Oh yeah, we're 100% agreed on that. I'm thinking of the AI evangelists who will argue tooth and nail that LLMs have "emergent properties" of intelligence, and that it's simply an issue of training data/compute power before we get some digital god being. Unfortunately these people exist, and they're depressingly common. Their numbers have definitely dwindled since the AI hype died down, though.
You’ve had the entire history of evolution to get the instinct you have today.
Nature vs. nurture is a huge ongoing debate.
Just because it takes longer to train doesn't mean it's not intelligent; kids develop more slowly than chimps do.
Also, "intelligent" doesn't really mean anything. I personally think intelligence is the ability to distill unusable amounts of raw data and intuit a result beneficial to oneself. But very few people agree with me.
I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for actions within that space.
I think we are discovering more that "nature" has little commitment, and is just optimizing preparedness for expected levels of entropy within the functional eco-niche.
Most people haven't even started paying attention to distributed systems building shared enactive models, but those systems are already capable of things that should be considered groundbreaking given the time and money spent on their development.
That being said, localized narrow generative models are just building large individual models of predictive processes that don't, by default, actively update their information.
People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.
But no, corpos are using it, so computer bad, human good, even though the main issue here is the humans who have unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competence.
Strange to equate the other senses to performance in intellectual tasks, but sure. Do you think feeding data from smells, touch, taste, etc. into an AI along with the video will suddenly make it intelligent? No, it will just make it more likely to guess what something smells like. I think it's very clear that our current approach to AI is missing something much more fundamental to thought than that; it's not just a dataset problem.
This is a massive strawman argument. No one is saying you shouldn't have a license to view the content in order to train an AI on it. Most of the information used to train these models is publicly available and licensed for public viewing.
Nvidia's biggest product is absolutely AI by a massive landslide. I'm pretty sure I read that the point of them downloading these videos and doing the training is to build a pipeline for their AI customers to do the same with their own shit. (Can't be bothered to double-check cuz I really don't care.)
So they aren't downloading all this video to make a crazy AI model; they're downloading it to make a tool for their AI customers to use. You may not agree, but improving their product is exactly what they're doing.
So they use VMs to simulate user accounts; in the future this will be blocked, and whatever new AI startup comes along won't have the option to do the same. Competition blocked. Forever.
There are only a handful of video datasets, and all of them are owned by Google through YouTube or by big Hollywood companies like Disney and Netflix.
These companies are foaming at the mouth with rage thinking about what generative AI will do to their industry and how much it will help the currently non-existent indie one. They will do whatever it takes to fence in the playbox and make sure they get to be the toll man.
This was never about whether AI gets to live or not, but about who gets to own it. 404media is essentially a mouthpiece for these corporations, willingly or not, and the strengthening of copyright laws will not help consumers or small-time creators. The only exceptions would be laws that force copyleft licenses onto models (which is not what is being pushed right now) and AOC's deepfake act, which is well thought out imo.
Anyone should be permitted to train on YouTube and Netflix data, and Nvidia might even open source it in any case.
Their Nemotron-4 340B model was released under what is essentially an open-source license (available for commercial use, except for shady things like spamming and collecting biometric data).
Having a robust open-source ecosystem directly benefits Nvidia, since it helps them sell more high-end consumer GPUs.
Obviously, there's a real chance this one won't be open sourced, since it's a video model and there's huge money involved. That doesn't change the fact that having YouTube and Netflix dictate who gets to make video models, and at what cost, is a bad idea.
The guy you are replying to shows up in every AI post defending AI. He is probably heavily invested in this BS or being paid for it; don't waste your time with him.
Obligatory fuck AI and the illiterate bros pushing it.
What kind of videos, though? A lot of such material is very far from the kind of proper educational material we'd show other people to actually teach them anything, let alone educate them well enough to be trustworthy. Video is heavily processed material, with years of preparation behind it once you consider the prior education of the individuals involved in the creative process: the past experiences silently influencing them, their initial knowledge of the subject picked up from school or elsewhere, their misconceptions, the iterations nobody knows about, and many other things we don't usually associate directly with the act of making a video, but that eventually dictate a lot of the decisions and opinions put into it.
It's one thing that the AI has no intelligence in it whatsoever, but pumping it with information and "knowledge" in basically the reverse of that order doesn't help it get any better.
On the other hand, the entire thing is not about making something that works well, but something that sells well. And then there are people putting too much faith in the thing and trusting it with way more than they should (which is also the case with a lot of other tech, admittedly).
Sorry, I disagree with this kind of generalisation. Just because you don't want it doesn't mean everyone else is in the same boat. I am very sure there are certain people who will benefit from this and want it.
"Certain people" do not justify spending billions of dollars and tons of resources to create more and more of the same shit just because there is hype for it.
There's a little country where the only reason its leadership still hasn't been voted out and put behind bars for life is that it constantly invents new subjects for discussion. Some are outrageous, some show them in a good light, but the point is that everyone forgets the real bad things they've done (they are basically a collaborationist puppet government of a neighboring fascist country).
I wonder if that little country is just a reflection of today's world as a whole.
I recently read an article I saw on Lemmy suggesting that the "AI" hype works the same way: https://theluddite.org/#!post/ai-hype (found it). The conclusion is very important.
They are wasting enormous amounts of energy to make those "AI"s, collect training data and so on, to make oligopolized platforms and industries shittier and shittier.
But we are wasting our energy, which is much more limited, to track myriads of false targets. We are like an air defense system being saturated.
No one has ever won a war by sitting in defense. We must search for critical joints to attack.
Also, no: voting for one of two candidates presented to you in some election is not that, and neither is arguing for one of two sides in a discourse presented to you. There are better and worse choices there, but that's not what "attack" means.