An offsite server is not under your control and is accessible by who knows whom. Surely it is still a privacy concern.
Privacy is like security in that it costs time. Most people won't even spend the time to have a conversation like this, but if something bothers you, finding a spare day becomes easier.
There are many other cameras, but most have the same potential to pull this sort of shit. Sending video to a server you don't control, from cameras you don't control because the firmware is proprietary, isn't going to cut it if privacy is your goal.
People who claim they don't value privacy are simply unaware of how it can affect them. They don't consider the data falling into the wrong hands. At a minimum they presumably don't want criminals gaining unauthorized access, and it should be obvious that governments don't always have their best interests at heart either.
I'm unaware of an ad system that would be AGPL-compatible. If the games are worth playing, someone would probably fork them, remove the ads, and upload the clone to F-Droid.
Worse still, it's not even clear what is being discussed. "Violence" was implied, but that covers a wide range, from a mere shove to a serious shooting.
Percentages can also be misleading when a scale's zero point is arbitrary. A temperature increase expressed as a % in Fahrenheit becomes a rather different % when the same change is converted to Kelvin.
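A quick sketch with made-up numbers (mine, not anyone's measurements): the exact same warming reads as 20% on one scale and under 2% on the other.

    # Hypothetical numbers to show the effect: the same physical warming,
    # expressed as a percentage on two scales with different zero points.
    def f_to_k(f):
        # Fahrenheit -> Kelvin
        return (f - 32.0) * 5.0 / 9.0 + 273.15

    t1_f, t2_f = 50.0, 60.0
    pct_f = (t2_f - t1_f) / t1_f * 100      # 20.0% on the Fahrenheit scale

    t1_k, t2_k = f_to_k(t1_f), f_to_k(t2_f)
    pct_k = (t2_k - t1_k) / t1_k * 100      # ~1.96% on the Kelvin scale

    print(f"{pct_f:.1f}% vs {pct_k:.2f}%")  # 20.0% vs 1.96%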
Headline should have been: porn sites have no spunk. Screw the government and just plug the whole country. Though we'll no longer have easy access, various VPNs will still let us reach around the block with IP protection (just like consuming the BBC's service without a license).
"hallucination refers to the generation of plausible-sounding but factually incorrect or nonsensical information"
Is an output a hallucination when the training data behind it included factually incorrect data? Suppose my input is "is the world flat" and the LLM then, allegedly, faithfully reproduces a flat-earther's writings saying it is.
Artificial neural networks are simplified versions of the neurons arranged in a brain. They're a useful approach when you know what the output should be but don't know what algorithm would produce it from a given input. Claiming that "AI" learns the same way as a complex human brain seems a bit far-fetched. If you want to say human brains are ultimately just an algorithm, then fine, but compare the outputs of the two.
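To make the "known output, unknown algorithm" point concrete, here's a minimal toy sketch (my own example, nothing from the thread): nobody hand-writes weights that compute XOR; gradient descent finds them from input/output pairs alone.

    import numpy as np

    # Toy example: we know the desired outputs (XOR) but not an explicit
    # algorithm, so gradient descent searches for weights that fit.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # hidden layer, 4 "neurons"
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # output "neuron"
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)                    # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)         # backprop of squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0)

    print(out.round(2).ravel())                     # should land near [0, 1, 1, 0]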
AI art may not look like duplication, but it often looks like a derived work, which could trigger copyright infringement (to my non-artist eyes). AI code, on the other hand, looks much closer to duplication to me, and it doesn't seem right that they can use others' code to produce code while ignoring the license because the algorithm "learned like a human". Many software licenses exist to protect users rather than to monopolize, and they get totally ignored for profit.
"Innovative" these days seems to means new ways to fuck-over users, rather than the past where it meant products got better and/or cheaper.
I have hope for running games on Linux that are currently blocked by anti-cheat... but zero hope for client-side anti-cheat actually stopping cheating. It's not as if it has stopped cheating on Windows. Any win eventually becomes a loss as the cheat-makers adapt.
If you're using the minimum amount, in a transformative way that doesn't compete with the original copyrighted source, then it can still be fair use even if it's commercial. (This is not to say that's what LLMs are doing.)
I'd like to blame the voting system for the lack of meaningful voting options.