The Perfect Response
Then there's always that one guy who's like "what about memes?"
MS Paint memes are waaaaay funnier than AI memes, if only due to being a little bit ass.
Ass is the most important thing in a meme
EDIT: I feel like I could have written this better but I will stand by it nonetheless
Thank you for your service
Plus it lets people with... let's say, no creative ability to speak of, to be nice about it... make memes.
Anyone can have creative ability if they actually try though. And a lot of the low tech, low quality tools can produce some fun results if you embrace the restrictions.
It seems we've come full circle with "copying is not theft"... I have to admit I'm really not against the technology in general, but the people who are currently in control of it are all just the absolute worst people who are the least deserving of control over such a thing.
Is it hypocritical to think there should be rules for corporations that don't apply to real people? Like, why is it the other way around? I can go to jail or get a fine for sharing the wrong files, but some company does it and they just say it's for the "common good" and they "couldn't make money if they had to follow the laws," and they get a fucking pass?
Yeah, I've been a pirate for so long I have zero moral grounds to be against using copyrighted stuff for free...
Except I'm not burning a small nation's worth of energy to download a NOFX album, and I'm not recreating that album and selling it to people when they ask for a copy of Heavy Petting Zoo (I'm just giving them the real songs). So, moral high ground regained?
On the other hand, now I can download all the music I want because I'm training an AI. (Code still under development).
So many in these comments are like "what about the ethically sourced data ones?"
Which ones? Name one.
None of the big ones are. What the fuck is "ethically sourced"? E.g. eBay wants to collect data for AI shit. My mom has an account, and she could opt out of them using her data, but when I told her about it, she said she didn't understand, and she moved on. She just didn't understand what the fuck they are doing and why she should care. But I guess it is "ethically" sourced, since they kinda asked by making it opt-out, I guess.
That surely is very ethical and you cannot criticize it... As we all know, a 50yo adult fucking a 14yo would also be totally cool as long as the 14yo doesn't say no. Right? That is how our moral compass works. /S
Fucking disgusting. All of you tech bros complain about people not getting AI or tech in general, and then talk about "ethically sourced" data. I spit on you.
I love IT, I work in it and I live it, but I have morals and you could too
Edit: after a bunch of messages telling me that I am wrong, I wonder when they will realize that they are making my point. I am saying that it isn't ethically sourced without consent, and uninformed consent isn't consent. And they are telling me, an IT professional with an interest in how machine learning functions ever since AlphaGo, 7 years before the AI hype, that I don't understand it. If I don't understand it, what makes you believe the general public understands and can consent to it? If I am wrong about AI, I am wrong about AI, but I am not wrong about the unethical nature of that data: people don't understand it.
I don't mean to "um achtually" you or diminish the point you're making, but I would like to highlight one example of an ethically trained AI.
Voice Swap pays artists to come in and record data for training; the artists then get royalties any time someone uses their voice. I discovered it through Benn Jordan's video about poisoning music tracks against AI training.
Yeah, except royalties in music are almost always a joke. Those artists are going to make much less off their AI voice than if they actually appeared in studio and the end product is going to be worse. If AI cost the same or more, there would be no market for it. Relevant story about Hollywood actors who sold AI likenesses.
Even if it was actually "ethically trained", the end result is still horrible.
Also, paying to have an AI Snoop Dogg in your song is the lamest shit I've ever heard.
That AI was trained on absolute mountains of data that wasn’t ethically gained, though.
Just because an emerald ring is assembled by a local jeweler doesn't mean the stone didn't come from slave labor in South Africa.
Mozilla's Common Voice seems pretty cool, but I'm not sure if that counts.
It's fun to record the clips.
I've contributed to labeling and scoring some of the Common Voice data before. Definitely a fun little thing to do when you have some free time.
I was also pretty happy when I saw Open Assistant making a fully public, consensually contributed to database for text models, but they unfortunately shut down, and in the end there was only really enough data to fine-tune models rather than creating one from scratch.
Which ones? Name one.
What's wrong with what Pleias or AllenAI are doing? Those are using only data in the public domain or suitably licensed, and are not burning tons of energy in the process. They release everything as open source. For real. Public everything. Not the shit that Meta is doing, or the weights-only DeepSeek.
It's incredible seeing this shit over and over, especially in a place like Lemmy, where the people are supposed to be thinking outside the box and be used to stuff that is less mainstream, like Linux, or, well, the fucking fediverse.
Imagine people saying "yeah, fuck operating systems and software" because their only experience has been Microsoft Windows. Yes, those companies/NGOs are not making the rounds on the news much, but they exist, the same way that Linux existed 20 years ago, and it was our daily driver.
Do I hate OpenAI? Heck, yeah, of course I do. And the other big companies that are doing horrible things with AI. But I don't hate all of AI, because I'm not ignorant enough to see only the worst 99% of it.
AllenAI has datasets based on GitHub, Reddit, Wikipedia and "web pages".
I wouldn't call any of them ethically sourced.
"Webpages" as it is vague as fuck and makes me question if they requested consent of the creators.
"Gutenberg project" is the funniest tho.
Listing GitHub, Reddit and Wikipedia tells me very clearly that they didn't. They might have asked the providers, but that is not the creator. Whether or not the providers have a license for the data is irrelevant on moral grounds unless it was opt-in for the creators. It also has to be clearly communicated. Giving consent is not "not saying no"; it is a yes. Uninformed consent is not consent.
When someone posted on Reddit in 2005 and forgot their password, they can't delete their content from it. They didn't post it knowing it would be used for AI training. They didn't consent to it.
Project Gutenberg... dead authors didn't consent to their work being used to destroy a profession that they clearly loved.
So I bothered to check out one dataset from the names you dropped, and it was unethical. I don't understand why people don't get it.
What is wrong? That you think they are ethical when the first dataset I looked at already isn't.
It's incredible seeing this shit over and over, especially in a place like Lemmy, where the people are supposed to be thinking outside the box and be used to stuff that is less mainstream, like Linux, or, well, the fucking fediverse.
Lemmy is just an open-source Reddit, with all the pros and cons.
What the fuck kind of data could eBay even use to train AI? The fact that people buy Star Trek figurines??
You could train it to analyze sales tactics for different categories of items or even for specific items, then offer the AI's conclusions as an 'AI assistant' locked behind a paywall.
Plenty of use cases for collecting e-commerce data.
To sell you more stuff, that is how Amazon got ahead of the competition.
Thanks for making my point. People don't understand and therefore can't consent and therefore it isn't ethically sourced data.
But I guess it is “ethically” sourced as they kinda asked by making it opt out, I guess.
No.
As your mother's case shows, making it "opt out" is emphatically not the ethical choice. It is the grifter's choice because it comes invariably paired with difficult-to-find settings and explanations that sound like they come from a law book as dictated by someone simultaneously drunk and tripping balls.
The only ethical option is "opt in". This means people give informed consent (or, if they don't bother to read and just click OK, at least they get consented hard like they deserve). This means you have to persuade people that the choice is good for them and not just for the service provider.
TL;DR: Opt-in is the way you do things without icky "I don't understand consent" vibes.
Ethical small data: https://youtu.be/eDr6_cMtfdA
One ethical AI usage I've heard of was a few artists who take an untrained model and train it only on their own artwork.
I don't know who this guy is, but I'm with him at least on this.
He's a weird guy.
Makes a website sourcing screenshots from games for UI inspiration.
Sources UI art from Twitter with a bunch of contributors.
Has a boyfriend.
Active on Twitter, a site where you can post "Heil Hitler" but get blocked for saying "cis".
ITT: People who didn’t check the community name
I mean, it does show up in the feed as normal, and sometimes people feel like it's fine to give a differing perspective to such communities.
To be fair, I thought I blocked this community...
Sure... And you just had to reply with that info.
Damn, I had no idea the Game UI DB guy was so based. Huge respect from me.
Tfw a community is called fuck_ai, so you decide to march in and defend the honour of Sam Altman.
It's the same with c/Linux and folks marching in and defending Windows to the death. Some people just like to be contrarian ¯\\_(ツ)_/¯
Honestly, I didn't intend to block a dozen AI Bros today, but this has been like shooting fish in a barrel.
Every now and then you gotta rattle the trees.
Of course copying is not stealing, but everything else.
Bit of a tangent from the post, but you raise a valid point. Copying is not theft; I suppose piracy is a better term? Which on an individual level I am fully in support of, as are many authors, artists and content creators.
However, I think the differentiation occurs when pirated works are then directly profited off of; at least, that's where I personally draw the distinction. Others may have their own red lines.
e.g. stealing a copy of a textbook because you can't otherwise afford it is fine by me; but selling bootleg copies, or answer keys directly based on it, wouldn't be OK.
If your entire work is derived from other people's works, then I'm not okay with that. There's a difference between influence and plain reproduction. I also think piracy from the consumer side and stealing from the creative side should be looked at differently. Downloading a port of a Sega Dreamcast game is not the same as taking someone else's work and slapping your name on it.
Totally lost, can someone give me a tldr?
It's a person who runs a database of game UIs being contacted by people who want to train AI models on all of the data en masse.
Damn straight!
I haven't heard of this before, but it looks interesting for game devs.
Game UI Database: https://www.gameuidatabase.com/index.php
Me either. Seems like it would be a really handy site to use if you were making your own game and wanted to see some examples or best practices.
Had no idea it existed; I don't make games, though. But it's always so cool to see something built that serves a unique niche I had never thought of before! Consulting used to be like that for me, but after a while it was always the same kinds of business problems, just a different flavor of organization.
This is what luddites destroying factories must have been like lmao
luddites destroying factories
Every time a techbro parrots the word 'luddite' I want to cause them physical harm.
No, it is not - there is empirical evidence that AI is accelerating climate change. Every AI model has been trained on stolen or unethically sourced data. Your strong desire to create something that you can make money off of is not a moral justification for using AI.
Yeah they made sense and had a point, but clowns with bad takes won anyway cuz 'muh capitalism' and ratfucked the human race while their cheerleaders hooted like chimps on meth
Read a book, Slappy
That guy needs help, seriously. But then, this is Fuck-AI
This kind of deranged behaviour will only increase AI adoption not decrease it.
Of course people will see this as an attack on Anti-AI people & not me being concerned. But at this point I'm not surprised
Everyone who disagrees with me has mental health issues
This one surely has; you might not. Also, isn't this the same tactic you lot use?
Did you ask ChatGPT for this opinion?
oh, the luddites have their own instance now, huh? cat's out of the bag, folks. deal with it.
As an artist, all y'all need to chill. The problem is capitalism, and it's not like artists make a living anyway. Democratizing art opens up a lot of possibilities, you technophobes.
all art is stolen. no one has had an original idea since the early 20th century.
Ideas are not art.
no one has had an original idea since the early 20th century.
PROJECTING
The early 20th century? I'd say physical philosophy would beg to differ, and do you see how you just killed your own argument by citing a time period? I think ideas don't have value and that intellectual property stifles innovation. You had me in the first half, where I assumed you meant that people don't just intuit new ideas from nowhere, then you cited a date and lost me.
The general rule was it had to be 25 percent different. This is why AI can't directly copy an image. You may remember some horrendous boundary-pushing of art in the 2000s, like that artist who straight up blew up celebrities' and media influencers' Instagram posts and sold massive photos of them directly, without giving the influencer/celebrity a cent. Avril Lavigne's ex published her song lyric notes and won the case against her. Copyright has always been awful. The Marvin Gaye estate is notorious for bluffing that his IP is stolen, but music can legally sample 6 seconds of any song or sound without permission. Robin Thicke's song was completely different, and when that family is hard up they go after another obscure artist. Don't be swayed. If it's original, it's original. Not like Selena's "Fotos y Recuerdos" and "Back on the Chain Gang" by the Pretenders; that one was blatant, and all she did was change the lyrics, back in the 80s. Copyright changes, but you are protected just like the big guys.

Don't be afraid to create; you'll be missing out on experience. Copy, don't worry about originality, just make art. Trust me, I couldn't paint more than a stroke for years because of fear of being a copycat, infringing, unoriginal. Just copy and copy until you have your own style. I promise it will come. It's impossible for two people to play the Moonlight Sonata exactly the same. I was friends with an Oxford music professor; he can tell who anyone is by the way they play a piano. The nuances are always going to show. You're too original, you're not a robot. Even 3D printers never print the same piece the same way because of environmental factors.
All you need to know is: change your art 25 percent from the original, even if it's just color choice, and anything you publish online is automatically protected in American courts. It doesn't matter if you copy AI; if it's 25 percent different, it's yours. Also, I'll remind you that AI legally cannot duplicate images to the point of infringing copyright. That's why all the images look slightly off; the nuances are set with parameters partially to keep it legal. If courts find it is a copy beyond artistic expression, then in come the hammers and bats to the AI server stacks. Serious.
Missing the point.
The AI companies used people's work without permission to create the AI, then dumped the price of the labor to force the creators out of business. The quality is worse, but good enough for a lot of work. You can say "but that is how capitalism works," but you would be wrong, because they stole in step one.
music can legally sample 6 seconds of any song or sound without permission
I guess it is inevitable that self-centred, ego-stroking bubble communities appear on platforms such as Lemmy, where reasoned, polite discussion is discouraged and opposing opinions are drowned out.
Well, I'll just leave this comment here in the hope someone reads it and realises how bad these communities actually are. There's a lot to hate about AI (especially the companies dedicated to selling it), but not all of it is bad. As with any technology, it's about how you use it, and this kind of community is all about cancelling everything and never advancing or improving.
There is utility in AI, e.g. in medical applications like detecting cancer.
Sadly, the most-funded AI stuff is LLMs and image-generation models. These are bad.
And a lot of AI systems have major issues with racism.
"But ai has potential!!!" Yeah but it isn't there and actively harms people. "But it could..." but it isn't. Hilter could have fought against discrimination but sadly he chose the Holocaust and war. The potential of good is irrelevant to the reality of bad. People hate the reality of it and not the pipe dreams of it.
Hitler
Oh, you had to deliberately Godwin a perfectly good point. Take my upvote nevertheless 😂
The way it's being used sucks tho
Yeah, there are legitimate complaints against GenAI and, most of all, the companies trying to lead it. But having a community where the only accepted point of view is direct and absolute hatred, including towards people trying to look at adequate and ethical usage of the technology, is just plainly bad and stupid, like any other social bubble.
Well, the environmental costs are pretty high... One request to ChatGPT 3.5 was estimated to be about 100 times more expensive than a Google search request. Plus training.
This doesn't change no matter the use case. Same with the copyright issues.
Obviously not all AI is bad, but it's clear the current way GenAI is being developed, and the most popular, mainstream options are unethical. Being against the unethical part requires taking a stand against the normalization and widespread usage of these tools without accountability.
You're not the wise one amongst fools, you're just being a jerk and annoying folks who see injustice and try to do something about it.
I guess it's inevitable that self-centered, pseudo-intellectual individuals like you would appear on platforms such as Lemmy to ask for civility and attention while spouting bullshit.
Also, these "Fuck AI" people are usually the biggest hypocrites; they use AI behind the scenes.
The more I see dishonest, blindly reactionary rhetoric from anti-AI people - especially when that rhetoric is identical to classic RIAA brainrot - the more I warm up to (some) AI.
It is in fact the opposite of reactionary to not uncritically embrace your energy-guzzling, disinformation-spreading, profit-driven "AI".
As much as you don't care about language, it actually means something, and you should take some time to look inwards; you will notice who the reactionary is in this scenario.
"Disinformation spreading" is irrelevant in this discussion. LLM's are a whole separate conversation. This is about image generators. And on that topic, you position the tech as "energy guzzling" when that's not necessarily always the case, as people often show; and profit-driven, except what about the cases where it's being used to contribute to the free information commons?
And lastly, you're leaving out the ableism in being blindly anti-AI. People with artistic skills are still at an advantage over people who lack them, who are too poor to hire skilled artists, and/or who are literally disabled, whether physically or cognitively. The fact is that AI is allowing more people than ever to bring their ideas into reality, where otherwise they would never have been able to.
Yes, I like the unethical thing... but it's the fault of people who are against it. You see, I thought they were annoying, and that justifies anything the other side does, really.
In my new podcast, I explain how I used this same approach to reimagine my stance on LGBT rights. You see, a person with the trans flag was mean to me on twitter, so I voted for—
Wow, using a marginalized group who are actively being persecuted as your mouthpiece, in a way that doesn't make sense as an analogy. Attacking LGBTQI+ rights is unethical, period. Where your analogy falls apart is in categorically rejecting a broad suite of technologies as "unethical" even as plenty of people show plenty of examples of when that's not the case. It's like when people point to studies showing that sugar can sometimes be harmful and then saying, "See! Carbs are all bad!"
So thank you for exemplifying exactly the kind of dishonesty I'm talking about.
You won't be the only one😉
Oh boy here we go downvotes again
Regardless of the model you're using, the tech itself was developed and fine-tuned on stolen artwork with the sole purpose of replacing the artists who made it.
That's not how that works. You can train a model on licensed or open data, and they didn't make it to spite you; even if a large group of grifters are using it that way, those aren't the ones developing it.
If you're going to hate something at least base it on reality and try to avoid being so black-and-white about it.
I think his argument is that the models initially needed lots of data to verify and validate their current operation. Subsequent advances may have allowed those models to be created cleanly, but those advances relied on tainted data, thus making the advances themselves tainted.
I'm not sure I agree with that argument. It's like saying that if you invented a cure for cancer that relied on morally bankrupt means you shouldn't use that cure. I'd say that there should be a legal process involved against the person who did the illegal acts but once you have discovered something it stands on its own two feet. Perhaps there should be some kind of reparations however given to the people who were abused in that process.
I think his argument is that the models initially needed lots of data to verify and validate their current operation. Subsequent advances may have allowed those models to be created cleanly, but those advances relied on tainted data, thus making the advances themselves tainted.
It's not true; you can just train a model from the ground up on properly licensed or open data, you don't have to inherit anything. What you're talking about is called fine-tuning, which is where you "re-train" an existing model to do something specific, because it's much cheaper than training from the ground up.
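For what it's worth, the distinction can be sketched in a few lines. This is a toy illustration in plain Python (no real ML library; all data and numbers are made up for the example): "training from scratch" runs gradient descent from a fresh initialization, while "fine-tuning" just continues gradient descent from weights that were already trained on other data.

```python
# Toy sketch: training from scratch vs. fine-tuning, for a
# one-parameter model y = w * x fit by gradient descent on
# mean squared error. Purely illustrative.

def train(data, w=0.0, steps=200, lr=0.01):
    """Run gradient descent on MSE, starting from weight w."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrain_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying w = 2
task_data = [(1.0, 3.0), (2.0, 6.0)]                  # underlying w = 3

# From the ground up: fresh initialization, full training run.
w_scratch = train(task_data, w=0.0, steps=200)

# Fine-tuning: start from weights already trained on other data,
# then run fewer steps on the new task.
w_pretrained = train(pretrain_data, w=0.0, steps=200)
w_finetuned = train(task_data, w=w_pretrained, steps=100)
```

The point upthread is exactly this: the fine-tune reuses `w_pretrained`, but nothing forces those starting weights to come from unlicensed data; you can also run the "scratch" path entirely on open data.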
Name one that is "ethically" sourced.
And "open data" is a funny thing to say. Why is it open? Could it be open because people who made it didn't expect it to be abused for ai? When a pornstar posted a nude picture online in 2010, do you think they thought of the idea that someone will use it to create deepfakes of random women? Please be honest. And yes, a picture might not actually be "open data" but it highlights the flaw in your reasoning. People don't think about what could be done to their stuff in the future as much as they should but they certainly can't predict the future.
Now ask yourself that same question for any profession. Please be honest and tell us: is that "open data" not just another way to abuse the good intentions of others?
Rejecting the inevitable is dumb. You don't have to like it but don't let that hold you back on ethical grounds. Acknowledge, inform, prepare.
You probably create AI slop and present it proudly to people.
AI should replace dumb monotonous shit, not creative arts.
Ai isn't magic. It isn't inevitable.
Make it illegal and the funding will dry up and it will mostly die. At the very least, it wouldn't threaten the livelihoods of millions of people after stealing their labor.
Am I promoting a ban? No. AI has its use cases, but are current LLMs and image-generation AI any good? No. Should they be banned? Probably.
You could say fascism is inevitable. Just look at the elections in Europe or the situation in the USA. Does that mean we can't complain about it? Does that mean we can't tell people fascism is bad?
No, but you should definitely accept the reality, inform yourself, and prepare for what's to come.
Wait, you don't have to like it, but ethical reasons shouldn't stop you?
They said the same thing about cloning technology. Human clones all around by 2015, it's inevitable. Nuclear power is the tech of the future, worldwide adoption is inevitable. You'd be surprised by how many things declared "inevitable" never came to pass.
Tools have always been used to replace humans. Is anyone using a calculator a shitty person? What about storing my milk in the fridge instead of getting it from the milk man?
I don't have an issue with the argument, but unless they're claiming that any tool which replaced human jobs was unethical, their argument is not self-consistent and thus lacks any merit.
Edit: notice how no one has tried to argue against this
People have begun discussing it, although I suppose it was an unfair expectation to have this discussion here. Regardless, after I originally edited this, you guys did have tons of discussions with me. I do appreciate it, and it seems that most of us support the same things. It mostly seems like an issue of framing, and of looking at things in the now versus the mid-term future.
The people who made calculators didn't steal anything from mathematicians in order to make them work.
Your "argument" is called false equivalence.
The issue isn't automation itself. The issue is the theft, the fact that art cannot be automated and the use of it to further enshittification.
First, the models are based on the theft of OUR data, which is then sold back to us for profit.
Secondly, most AI art is utter crap and doesn't contribute anything to human society. It's shallow slop.
Thirdly, having it literally everywhere while also being completely energy inefficient is absolutely dumb. Why are we building nuclear reactors and coal plants to replace what humans can do for cheap??
Edit: further, the sole purpose of AI is to hoard wealth to a small number of people. Calculators, hammers etc. do not have this function and do not also require lots of energy to use.
I've responded to a lot of that elsewhere, but in short: I agree theft is bad. Capitalism is also bad. Neither of those is inherent to AI or LLMs, though, although theft is definitely the easy way. Art can be automated; nature does it all the time. We can't do it to a high degree now, I will concede.
Quality is of course low; it's new. The progress in the last year has been astounding, and it will continue to improve. Soon this will no longer be a valid argument.
I agree, modern AI is horribly inefficient. It's a prototype, but it's also a hardware issue. Soon there will be much more efficient designs, and I suspect a rather significant alteration to the architecture of the network that may allow for massively improved efficiency. Disclaimer: I am not professionally in the field, but this topic in particular is right up multiple fields of study I have been following for a while.
Edit: somehow missed your edit when writing. To some extent, every tool of productivity exists to exploit the worker. A calculator serves this function as much as anything else. By allowing you to perform calculations more quickly, your productivity massively increases in certain fields, sometimes in excess of thousands of times. Do you see thousands of times the profits of your job prior to the advent of calculators, excluding inflation? Unlikely. Or the equivalent pay of the same number of "calculators" required for your work? Equally unlikely. It's inherent to capitalism.
Would you replace a loved one (a child, spouse, parent, etc.) with an artificial "tool"? Would it matter to you that they're not real, even if you couldn't tell the difference? And if your answer is yes, you'd have no trouble replacing a loved one with an artificial copy, then our views and morals are fundamentally so different that I can't see us ever agreeing.
It's like trying to convince me that having sex with animals is awesome and great and they like it too, and I'm just no thanks, that's gross and wrong, please never talk to me again. I know I don't necessarily have the strongest logic in the AI (and especially "AI art") discussion but that's how I feel.
That's a lot of different questions in a lot of different contexts. If my parent decided to upload their consciousness near the end of their life into a mech suit covered in latex (basically) that was physically indistinguishable from a human (or even not; who am I to judge), and the process of uploading a consciousness was well understood and practiced, then yes, I would respect their decision. If you wouldn't, you either have difficulty placing yourself in hypothetical situations designed to test the limits of societal norms, or you abjectly do not care about your parent's autonomy.
A child? I have no issue adopting. If they happen to be an artificial human, I don't see why that should preclude them from being allowed to have parents.
A spouse? I'm not going to create one to my liking. But if we lived in a world with AI creating other AI that are all sentient, some of which presumably choose to take a physical form in an aforementioned mech, why shouldn't I date them? Your immediate response is sex, but let's ignore that. Is an asexual relationship with a sentient robot OK? What about a friendship with said robot? Are you even allowed to treat a sentient robot as a human? What's the distinction? I'm not attempting a slippery slope; I genuinely would like to hear where your distinctions between what is and isn't acceptable lie. Because I think this miscommunication stems either from a misunderstanding about the possible sentience of AI in the future, or from a lack of perspective on what it might be like from their side.
Edit: just for the record, I don't downvote comments like yours, but someone did, so I had to upvote you.
You're a tool.
This may come as a shock to you, but nobody was working as a refrigerator. Refrigerators didn't replace the milkman; the stores did. Which was fine at first, since those stores were supposed to buy the milk from the milkman and just make it more readily accessible. Then human greed took over: the stores and big-name brands started to fuck over the milkman, and conspired with other big-name stores to artificially increase the price of bread while blaming COVID and inflation. Now some people, although few, are trying to buy back from the milkman, if they can afford or access it.
Those tools that did replace humans, did not steal human work and effort, in order to train themselves. Those tools did not try to replace human creativity with some soulless steaming pile of slop.
You see, I believe open-source, ethically trained AI models can exist, and they can accomplish some amazing things, especially when it comes to making things accessible to people with disabilities, for example. But Edd Coates is specifically talking about art, design and generative AI. So maybe don't come to a community called "Fuck AI", change the original argument, and then expect people to argue against you in good faith.
The "milkman" is a delivery person who works for milk producers. The company that produces milk still exists, the role of the milkman was just made unnecessary due to advances in commercial refrigeration - milk did not have to be delivered fresh, it could be stored and then bought on-demand.
https://en.wikipedia.org/wiki/Milk_delivery
"Human greed" didn't take over to fuck over the milkman, they just didn't need a delivery person any more because milk could be stored on site safely between shipments.
Tons of people do! I browse /all and don't want to block /fuck_ai, because a ton of you do have great discussions with me. I'm not brigading; I have never once sought out this community, but I've always tried to be respectful and I haven't gotten banned. So I'd say all is well.
As far as the crappy stuff goes, that really seems like just another extension of consumerism. Modern art has irked people for a while because some of it is absurdly simplistic, but if people are willing to buy into it, that's on them. LLMs have very limited use cases, and ethically sourcing your data is critically necessary for both ethical and legal reasons. But the world needs to be prepared for the onset of the next generation of AI. It's not going to be sentient quite yet, but general intelligence isn't too far away. Soon one AI will be able to outperform humans on most daily tasks as well as some more specialized tasks. LLMs seemingly took the world by surprise, but if you've been following the tech, the progression has been somewhat obvious. And it is continuing to progress.
Honestly, the biggest concern I have with modern AI, outside of how it's being implemented, is that it is environmentally very bad, but I'm hoping that the growth of the AI bubble will lead to more specialised, energy-efficient designs. I don't remember which paper it was, but researchers were using AI to generatively design more efficient chips, and it was showing promising results. On a couple of the designs they weren't entirely sure how they functioned (they have several strong theories, but they're not certain; not trying to misrepresent this), but when they fucked with them, the chips stopped behaving as predicted/expected (relative to being fucked with, of course; a broken circuit isn't going to function correctly).
And this is where I split with Lemmy.
There's a very fragile, fleeting war between shitty, tech-bro-hyped (but bankrolled) corporate AI and locally runnable, openly licensed, practical tool models without nearly as much funding. Guess which one doesn't care about breaking the law, because everything is proprietary?
The "I don't care how ethical you claim to be, fuck off" attitude is going to get us stuck with the former. It's the same argument as Lemmy vs Reddit, compared to a "fuck anything like reddit, just stop using it" attitude.
What if it was just some modder trying a niche model/finetune to restore an old game, for free?
That's a rhetorical question, as I've been there: a few years ago, I used ESRGAN finetunes to help restore a game and (separately) a TV series. I used some open databases for data. The community loved it. I suggested an update in that same community (who apparently had no idea their beloved "remaster" involved old-school "AI"), and got banned for the mere suggestion.
So yeah, I understand AI hate, oh do I. Keep shitting on Altman and the AI bros. But anyone (like this guy) who wants to bury open-weights AI: you are digging your own graves.
Oh, so you deserve to use other people's data for free, but Musk doesn't? Fuck off with that one, buddy.
Using open datasets means using data people have made available publicly, for free, for any purpose. So using an AI based on that seems considerably more ethical.
To be fair, he did say he "used some open databases for data"
Musk does too, if its openly licensed.
Big difference is:
That's another thing that worries me. All of this is heading in a direction that will outlaw stuff like fanfics, game mods, and fan art, anything "transformative" of an original work and used noncommercially, since pretty much any digital tool can be classified as "AI" in court.
Just make a user interface for your game bro. No need to bring AI into it