I'd be on board if it were actually useful and accurate. But it has proven time and time again to be hot garbage 99% of the time as they shove it down everyone's throat. They keep talking about it being a new age of AI and how it's going to change the world, but so far it's only made the internet a worse place, and everything else it has either left unchanged or made worse.
TBH an open-source distributed ledger system for financial transactions was an amazing idea, and it has helped stabilize a lot of economies around the world whose currencies had become extremely devalued.
NFTs were fucking dumb tho, lol: basically a way to print a receipt for traded goods without any legal enforcement tying the property to the receipt.
NFTs would make sense for things like tradable software licenses. E.g. Steam is going to be forced to allow users to resell their games soonish (they're appealing the ruling, but it's only a matter of time until they lose), and you wouldn't want such a license to be tied to a particular marketplace. So NFTs make sense here: the game publisher mints one, it's freely tradable, and sites like Steam and GOG can look at it, say "yep, this hasn't been tampered with and was minted by the publisher", and serve you the game files. Presumably they'd want you to occasionally buy something on their platform in exchange for letting you use their servers to download games they didn't sell you, or you could pay a small fee for the service.
The NFT itself, of course, doesn't enforce anything. It's just a non-fungible token representing usage rights to the game. Like a CD key, but more secure: for the publisher (the key can't be duplicated or used multiple times; a platform that allowed that might as well go all the way and become a torrent release group), for the buyer (you can check the key's validity before spending money), and for the seller (the buyer can't claim bullshit like "the key didn't work").
What you probably would not do is put that stuff on already-existing blockchains, because why should the industry pay ludicrous transaction fees when it can roll its own chain?
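To make that "hasn't been tampered with and was minted by the publisher" check concrete, here's a minimal sketch of what a storefront could run before serving files. Everything here is made up for illustration: a real chain would use asymmetric signatures (e.g. Ed25519) so storefronts only need the publisher's public key, but HMAC with a shared secret stands in to keep the sketch stdlib-only.

```python
import hashlib
import hmac
import json

# Stand-in for the publisher's signing key (hypothetical; a real system
# would use an asymmetric keypair, with only the public half shared).
PUBLISHER_KEY = b"publisher-secret"

def mint_license(game_id: str, token_id: str) -> dict:
    """Publisher creates a license token signed over its contents."""
    payload = {"game": game_id, "token": token_id}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(PUBLISHER_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_license(token: dict) -> bool:
    """Storefront checks the token is untampered and publisher-minted."""
    body = json.dumps({"game": token["game"], "token": token["token"]},
                      sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token.get("sig", ""))

lic = mint_license("some-game", "tok-0001")
print(verify_license(lic))   # True: valid, untampered token
lic["game"] = "other-game"   # tamper with the token
print(verify_license(lic))   # False: signature no longer matches
```

The non-duplication part (one token, one current owner) is what the chain's ledger itself provides; the sketch only covers the signature check.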
We could call it an "Activation Code" or "Software License Key"...
Jokes aside, how is a key on the blockchain more secure than a CD key? Also remember that this form of validation would require being online, making it equivalent to CD key security in that regard.
A CD doesn't really mean anything by itself; the license and the physical medium generally aren't tied together. If you break the disc but have a backup, you're not pirating anything. I'd say the primary difference from a CD isn't more or less security, it's whether the token is physical or not.
Downloading the game also requires an online connection. You'd only need one when you're buying or selling the license NFT or moving it from one download platform to another, and of course to download the game. Whether you need an online connection to play depends on the game, not the NFT.
Most people have no issue with what we were calling AI before the LLM fad hellscape we're currently in.
No one sane is going to object to using machine learning to optimize the performance of an antenna, or the crash safety of a car frame. People aren't against the existence of AI opponents in video games. No one was ranting about fuzzy search algorithms, or about neural nets on their own. Beyond that, data science has been a thing for ages with no controversy.
The issue is generative AI and how it is being used. The best-case uses just supplant tech that already exists, at higher cost and with worse results. The worst-case uses attempt to cannibalize multiple creative pursuits to remove the need for humans and maximize profits.
Yeah, that's what I meant: one issue with generative AI is how it's being used; another is the lack of compensation for the stolen training data.
But those are problems with human / capitalist incentives, not with the technology itself.
As a developer it's helped me countless times: helping me understand legacy code or new concepts in a conversational way, and helping me write corporate-friendly formal emails. I also use it to discover music, or just to mindlessly chat about nothing.
The technology is genuinely useful.
(I do click through to Stack Overflow and the other sites it links, and I've turned off my ad blocker for some of them.)
Honestly the problem is that it simultaneously works too well and not well enough.
The truth is, it's proven time and time again to be hot garbage about 85% of the time. But that 15% of the time when it works great is why it's being shoved down our throats. That's what's ruining this for everyone: the fact that, on rare occasions, it does actually work...
Yeah, I agree with that. And their solution, instead of actually fixing the problem, is throwing money and computing power at it in the hope that brute force will make it "better." When in reality it hasn't even changed that much in the past year, besides more eloquently saying complete bullshit. Call it a conspiracy, but I think that with nobody ever telling the truth on the internet, LLMs have only taught themselves to bullshit everyone into believing them.
> Yeah, I agree with that. And their solution instead of actually fixing the problem is throwing money and computing power at it in the hope that brute force will make it "better."
Haha, well to be fair, that usually works... Most big problems could be solved by throwing effort and money at them. Hell, when I think about a lot of national issues, education, infrastructure, energy, crime, poverty, most of these could be solved by throwing money at them. And it would take less money than you might guess.
> Call it a conspiracy, but I think with nobody ever telling the truth on the internet, LLMs have only taught themselves to bullshit everyone into believing them.
And yeah, you're definitely not imagining that. I'd say there's something to that theory.
The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.
There's nothing fundamentally wrong with LLMs. Users just need to know their capabilities and limitations and use them correctly. Just like any other tool.
As far as Wikipedia is concerned, there is pretty much no way to use LLMs correctly, because every major model probably includes Wikipedia in its training dataset, and using WP to improve WP is... not a good idea. It shouldn't require an essay to explain why creating and mechanising a feedback loop of bias in an encyclopedia is bad.
That is not inherently true. For example, there was an instance when I read a Wikipedia article and a chart was simply incomplete: entries were left blank even though I knew the data existed. All I had to do was look up those exact items elsewhere on Wikipedia, and the correct numbers were there, readily available.
I think that was when I first created a Wikipedia account for editing. The article was clearly missing information, and I knew filling it in would be both non-controversial and quite easy.
My point is, that first article could definitely be meaningfully improved, using only information already available on Wikipedia.
You're probably assuming that someone would just go to an LLM and say "write a Wikipedia article about subject X"? That wouldn't work well, but that's very far from the only way to use LLMs for Wikipedia work.
For starters, it doesn't have to actually write content at all. You could paste an existing article into an LLM and ask it "What facts in this article lack references to back them up? Are there any weasel-worded statements, or statements that don't appear to follow a neutral point of view?" And get lists of things that require attention.
Or you could paste a poorly-worded article in and tell it to rewrite it with all the same information but better phrasing or structure. You could put a bunch of research materials you've gathered into the LLM's context and tell it to write a summary in the style of a Wikipedia article, with references to the sources for each fact mentioned. Obviously you'd check the LLM's work afterward and probably do some manual editing, but this would be a great time and effort saver to get a first draft written. You could take an existing article and tell the LLM that some particular fact had changed or been discovered to be incorrect and ask it to rewrite the relevant parts to account for that.
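As a rough illustration of that first use case, the "paste an article in, get a list of problems out" idea mostly comes down to wrapping the article in a fixed review prompt. A minimal sketch, where the prompt wording and the `call_llm` function are hypothetical stand-ins rather than any particular model's API:

```python
# Hypothetical sketch: templating a review prompt for an article.
# How the prompt is actually sent to a model (call_llm below) depends
# entirely on which client/API you use; it's just "prompt in, text out".

REVIEW_PROMPT = """You are reviewing a Wikipedia article draft.
List, as bullet points:
- factual claims that lack a supporting reference
- weasel-worded statements
- statements that don't follow a neutral point of view

Article:
{article}
"""

def build_review_prompt(article_text: str) -> str:
    """Fill the fixed checklist template with the article under review."""
    return REVIEW_PROMPT.format(article=article_text)

def review_article(article_text: str, call_llm) -> str:
    """Run the review; call_llm is any function from prompt to reply text."""
    return call_llm(build_review_prompt(article_text))
```

The point of the template is that the same checklist travels with every article, so the human reviewer gets consistently structured output to triage.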
Wikipedia is in many, many languages. You could have a multilingual LLM automatically compare the contents of different language versions of a Wikipedia article and ask it to spot differences in content or tone. You could have an LLM translate an article from one language to another as a starting point for creating an article in that new language.
You could have the LLM check the references of an existing article - look up each referenced work on the web and see whether it genuinely says what the article that's using it as a reference says. It could flag all manner of subtle problems that way. Perhaps the reference sounds biased, or whoever used it as a reference misinterpreted it, or the link was simply incorrect and points to unrelated material. Being able to have an AI do a first-pass check of all that in a completely automated way would save huge amounts of time.
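A first pass at that reference checker could start by simply pulling the cited URLs out of the article's wikitext; the follow-up step (fetch each page, ask the model whether it supports the claim citing it) is left out here since it depends on your HTTP and LLM clients. The `<ref>` parsing below is a deliberately naive sketch, not a full wikitext parser:

```python
import re

# Deliberately naive sketch: extract URLs from <ref>...</ref> tags in
# article wikitext. A real checker would then fetch each URL and ask a
# model whether the source actually supports the sentence citing it.

def extract_ref_urls(wikitext: str) -> list[str]:
    """Return every http(s) URL found inside <ref> tags, in order."""
    urls = []
    for ref_body in re.findall(r"<ref[^>]*>(.*?)</ref>", wikitext, re.DOTALL):
        urls += re.findall(r"https?://[^\s<\"]+", ref_body)
    return urls

article = 'The sky is blue.<ref>See https://example.org/sky for details</ref>'
print(extract_ref_urls(article))  # ['https://example.org/sky']
```

Even this crude extraction step is enough to flag dead or obviously unrelated links before any model gets involved.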
This is all just brainstorming off the top of my head, so I'm sure there's plenty of other good uses that aren't coming to mind.
I don't get the impression you've ever made any substantial contributions to Wikipedia, and thus you have misguided ideas about what would actually be helpful to the editors and conducive to producing better articles. Your proposal about translations is especially telling, because machine-assisted translations (i.e. with built-in tools) existed on WP long before the recent explosion of LLMs.
In short, your proposals either: 1. already exist, 2. would still risk distortion, oversimplification, made-up bullshit, and feedback loops, 3. are likely very complex and expensive to build, or 4. are straight-up impossible.
Good WP articles are written by people who have actually read some scholarly articles on the subject, including those that aren't easily available online (so LLMs are massively stunted by default). Having an LLM re-write a "poorly worded" article would at best be like polishing a turd (poorly worded articles are usually written by people who don't know much about the subject in the first place, so there's not much material for the LLM to actually improve), and more likely it would introduce a ton of biases on its own (as well as the usual asinine writing style).
Thankfully, as far as I've seen the WP community is generally skeptical of AI tools, so I don't expect such nonsense to have much of an influence on the site.
Heh. I fell off of contributing in recent years, but there was a time back in the day when my edit count was in the top hundred or so. Your impression is completely wrong.
Anyway, this discussion here isn't going to affect what the people on Wikipedia are doing, so it doesn't really matter. I linked to the project page above and it's quite clear that even this "AI Cleanup" project is not in any way fundamentally opposed to using AI, they're just focused on ensuring that editors using it are adhering to Wikipedia's guidelines. If you think AI can't do that then clearly your concept of how AI is useful is too limited.