Trading privacy for convenience has been going on for a while. We use GPS on our smartphones for better directions. We install listening devices to add things to shopping carts and to play music by voice. We install cloud security cameras at home. We accept free WiFi in stores, which gives them our cell phone info and our location. We use digital cash instead of physical cash. We buy things online rather than going to the store. Every device, even a toaster, has a MAC address.
I think we need to be very cautious with the AI narrative, where we are being led to confuse mass surveillance with intelligence and, by doing so, initiate these corporate technologies into the core of our social and governmental institutions.
There was a mental word-search, glitch-in-the-matrix moment right at that point. Read into that what you will, cuz these days all options are valid.
"insinuate" is absolutely the very best word, but publicly one has to walk the fine line between complicity and hair-on-fire alarm, and so "initiate" came out of her mouth.
for the record, I think we are past the face-melting stage.
Signal uses Google's push messaging (FCM), but it can also work over a websocket connection.
And officially it's not on F-Droid because they don't want forks of their app.
Yet the Molly fork supports UnifiedPush, so I can reuse my connection with my XMPP server to deliver notifications from a server I control. Folks have asked for UnifiedPush or MQTT as an alternative to having multiple persistent socket connections open on your device, but Signal doesn't seem to care.
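For anyone unfamiliar with UnifiedPush: delivery is deliberately simple. The distributor hands each app a per-registration endpoint URL, and the application server delivers a message by making a plain HTTP POST of the (usually encrypted) payload to that URL. A minimal sketch, with a made-up endpoint URL and payload:

```python
# Sketch of delivering one notification to a UnifiedPush endpoint.
# The endpoint URL and payload below are hypothetical placeholders;
# in practice the distributor (e.g. one backed by your XMPP server)
# gives the app its endpoint, and the payload is end-to-end encrypted.
import urllib.request


def build_push_request(endpoint: str, payload: bytes) -> urllib.request.Request:
    """Build the POST request that hands one message to the distributor."""
    return urllib.request.Request(
        endpoint,
        data=payload,
        method="POST",
        headers={"Content-Type": "application/octet-stream"},
    )


req = build_push_request(
    "https://push.example.org/UP?token=abc123",  # hypothetical endpoint
    b"encrypted-message-envelope",               # opaque to the distributor
)
# urllib.request.urlopen(req) would actually send it; skipped here.
```

The point is that there's no vendor-specific SDK involved: any server that can make an HTTP POST can deliver notifications, which is why a single self-hosted connection can replace several per-app persistent sockets.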
How would that prevent you from forking the app? F-droid isn't a repository for the code of the app. I don't think this is related at all.
I don't actually know the reason why it's not on F-Droid, but I assume it has some historical reason. It has never been on F-Droid, going back to TextSecure. Moxie Marlinspike was strictly against it afaik. If somebody has more detail on it, feel free to share.
I have yet to meet a person who gives a shit about AI. I have yet to meet a person who has intentionally used AI. It's all marketing bs and a way to mine our data.
I had a classmate who was really ecstatic about AI, like he basically believed it's the second coming of Christ. And then another one who was like "ooh look, I can use this to make neat wallpapers". That was about all the resonance I got from my social circles.
I don't like the idea of LLMs everywhere, but I do use ChatGPT quite a lot as an entry point into topics I don't yet know exist.
I use LLMs just about every day. It's better than web-search for certain things, and is useful for some coding tasks. I think they're over-hyped by some people, but they are useful.
It's also trained on data people reasonably expected would be private (private GitHub repos, Adobe Creative Cloud, etc). Even if it were just public data, it could still be dangerous. For example, it could be possible to give an LLM a prompt like, "give me a list of climate activists, their addresses, and their employers" if it was trained on this data or was good at "browsing" on its own. That's currently not possible due to the guardrails on most models, and I'm guessing they try to avoid training on personal data that's public, but a government agency could build an LLM without those guardrails. That data might be public, but it would take a person quite a bit of work to track down compared to the ease and efficiency of just asking an LLM.