Everything you say to your Echo will be sent to Amazon starting on March 28.
Amazon really got people to pay to be spied on. Wild world we live in bois
They could literally just leave the feature on the device, but then you can't force your users to send you all their data, voices, thoughts, and firstborns
Fuck Amazon, fuck Bezos
If anyone remembers the Mycroft Mark II Voice Assistant Kickstarter and was disappointed when development challenges and patent trolls caused the company's untimely demise, know that hope is not lost for a FOSS/OSHW voice assistant insulated from Big Tech.
FAQ: OVOS, Neon, and the Future of the Mycroft Voice Assistant
Disclaimer: I do not represent any of these organizations in any way; I just believe in their mission and wish them all the success in getting there by spreading the word.
Want to set up a more privacy-friendly solution?
Have a look at Home Assistant! It’s a great open source smart home platform that recently released a local (so not processing requests in the cloud) voice assistant. It’s pretty neat!
I've seen something about this pop up occasionally on my feed, but it's usually a conversation I'm nowhere close to understanding lol
Could you recommend any resources for a complete noob?
I have one big frustration with that: your voice input has to be understood PERFECTLY by the speech-to-text system.
If you have a "To Do" list and say "Add cooking to my To Do list", it will do it! But if the speech-to-text system hears the list name even slightly differently, the system will say it couldn't find that list. Same for the names of your lights, asking for the time, and so on, and you have very little control over this.
HA Voice Assistant either needs to find a PERFECT match, or you need to be running a full-blown LLM as the backend, which honestly works even worse in many ways.
They recently added the option to use LLM as fallback only, but for most people's hardware, that means that a big chunk of requests take a suuuuuuuper long time to get a response.
I do not understand why there's no option to just use the most similar command upon an imperfect matching, through something like the Levenshtein Distance.
Because it takes time to implement. It will come.
I didn't even know this was a feature. My understanding has always been that Echo devices work as follows:

1. Listen locally for the wake word.
2. Once it's heard, stream the audio to Amazon's servers.
3. Transcribe the speech to text.
4. Interpret the request and act on it.
Unless they made some that were able to do step 3 locally entirely I don't see this as a big deal. They still have to do step 4 remotely.
Also, while they may be "always recording", they don't transmit everything. The local buffer just means that if you say "Alexaturnthelightsoff" really fast, it has a better chance of catching the full sentence.
I'm not trying to defend Amazon, and I don't necessarily think this is great news or anything, but it doesn't seem like too too big of a deal unless they made a lot of devices that could parse all speech locally and I didn't know.
It was an unadvertised feature, only available in the US and only in English
No way! The microphones you put all over your house are listening to you? What a shocker!
If you bought these this is on you. Trash them now.
If you traveled back in time and told J. Edgar Hoover that in the future the American public would voluntarily wiretap themselves, he would cream his frilly pink panties.
Maybe I misread the actual text, but it sounds like the exact opposite, that it's going to auto-delete what you say.
I can’t believe people are still voluntarily wiretapping themselves in 2025
How disheartening. I knew going in that there would be privacy issues but I figured for the service it was fine. I also figure my phone is always listening anyway.
As someone with limited mobility, my echo has been really nice to control my smart devices like lights and TV with just my voice.
Are there good alternatives or should I just accept things as they are?
There aren't any immediate drop-in replacements that won't require some work, but there is Home Assistant Voice. It just requires that you also have a Home Assistant server set up, which is the more labor-intensive part. It's not hard, just a lot to learn.
I have always told people to avoid Amazon.
They have doorbells to watch who comes to your house and when.
Indoor and outdoor security cameras to monitor when you go outside, for how long, and why.
They acquired Roomba, which not only maps out your house, but has little cameras too, another angle to monitor you in more personal areas of your house that indoor cameras might not see.
They have the Alexa products meant to record you at all times for their own use and intent.
Why do you think along with Amazon Prime subscriptions you get free cloud storage, free video streaming, free music? They are categorizing you in the most efficient and accurate way possible.
Boycott anything Amazon touches
They backed out of the Roomba deal. Now iRobot is going down the shitter.
I agree with your sentiment and despise Amazon, but they do not own Roomba; the deal fell through.
Christ, finally a win
People are saying don't get an echo but this is the tip of an iceberg. My coworkers' cell phones are eavesdropping. My neighbors doorbells record every time I leave the house. Almost every new vehicle mines us for data. We can avoid some of the problem but we cannot avoid it all. We need a bigger, more aggressive solution if we are going to have a solution at all.
How about regulation? Let's start with saying data about me belongs to me, not to whoever collected the data, as is currently the case
My clunky old bike ain't listening to shit bro. Neither is my android phone using a custom rom.
Jam the mic? https://www.amazon.com/gp/aw/d/B08Y5GGP4D
Works on my phone...
the irony of posting an amazon link...
be aware, everything you say around amazon, apple, alphabet, meta, and any other corporate trash products are being sold, trained on, and sent to your local alphabet agency. it's been this way for a while, but this is a nice reminder to know when to speak and when to listen
Everyone literally carries a personal recording device.
So... if you own an inexpensive Alexa device, it just doesn't have the horsepower to process your requests on-device. Your basic $35 device is just a microphone and a wifi streamer (ok, it also handles buttons and fun LED light effects). The Alexa device SDK can run on a $5 ESP-32. That's how little it needs to work on-site.
Everything you say is getting sent to the cloud where it is NLP processed, parsed, then turned into command intents and matched against the devices and services you've installed. It does a match against the phrase 'slots' and returns results which are then turned into voice and played back on the speaker.
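That slot-matching step can be pictured as template matching against known phrases. A toy Python sketch, with invented intent phrases (real Alexa skills declare these in an interaction model, not as regexes):

```python
import re

# Hypothetical intent templates; {slot} names get captured from the utterance.
INTENTS = {
    "TurnOn":   "turn on the {device}",
    "SetTimer": "set a timer for {minutes} minutes",
}

def match_intent(utterance: str):
    """Try each template; return (intent name, captured slots) or None."""
    for name, template in INTENTS.items():
        pattern = re.escape(template)
        # un-escape the slot placeholders and turn them into capture groups
        pattern = re.sub(r"\\{(\w+)\\}", r"(?P<\1>.+)", pattern)
        m = re.fullmatch(pattern, utterance.lower().strip())
        if m:
            return name, m.groupdict()
    return None

print(match_intent("turn on the kitchen lights"))
# ('TurnOn', {'device': 'kitchen lights'})
```

The captured slot values are then matched against the device and service names you've registered, which is exactly where a misheard name makes the whole request fail.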
With the new LLM-based Alexa+ services, it's all on the cloud. Very little of the processing can happen on-device. If you want to use the service, don't be surprised the voice commands end up on the cloud. In most cases, it already was.
If you don't like it, look into Home Assistant. But last I checked, to keep everything local and not too laggy, you'll need a super beefy (expensive) local home server. Otherwise, it's shipping your audio bits out to the cloud as well. There's no free lunch.
I honestly have no idea why anyone who cares even 1% about their privacy would have ever bought one of these abominations in the first place. If I ever receive one as a gift I will burn it with fire.
I have the things so that I can understand how to protect myself from them. I have a similar thing going on with AI video right now. Hate it but watch the growth to understand it.
Better yet, crack it open and find a way to load alternative firmware onto it
Amazon employee with no piss breaks listening in on my echo:
"How many fucking cats does this guy have? Just choose one name and call it that!"
Edit: "I don't know, Jeff, sell him a fucking Dr. Seuss book or something, the guy's mental."
Publicly, that is. They have no doubt been doing it in secret since they launched it.
Off-device processing has been the default from day one. The only thing changing is the removal of local processing on certain devices, likely because the new backing AI model will no longer be able to run on that hardware.
If you look at the article, it was only ever possible to do local processing with certain devices and only in English. I assume that those are the ones with enough compute capacity to do local processing, which probably made them cost more, and that the hardware probably isn't capable of running whatever models Amazon's running remotely.
I think that there's a broader problem than Amazon and voice recognition for people who want self-hosted stuff. That is, throwing loads of parallel hardware at something isn't cheap. It's worse if you stick it on every device. Companies --- even aside from not wanting someone to pirate their model running on the device --- are going to have a hard time selling devices with big, costly, power-hungry parallel compute processors.
What they can take advantage of is that for a lot of tasks, the compute demand is only intermittent. So if you buy a parallel compute card, the cost can be spread over many users.
I have a fancy GPU that I got to run LLM stuff that ran about $1000. Say I'm doing AI image generation with it 3% of the time. It'd be possible to do that compute on a shared system off in the Internet, and my share of the hardware cost would be about $30. That's a heckofa big improvement.
And the situation that they're dealing with is even larger, since there might be multiple devices in a household that want to do parallel-compute-requiring tasks. So now you're talking about maybe $1k in hardware for each of them, not to mention the supporting hardware like a beefy power supply.
This isn't specific to Amazon. Like, this is true of all devices that want to take advantage of heavyweight parallel compute.
I think one thing that might be worth considering for the self-hosted world is the creation of a hardened network parallel compute node that exposes its services over the network. In a scenario like that, you would have one device (well, or more, but you could just have one) that provides generic parallel compute services. Then your smaller, weaker, lower-power devices --- phones, Alexa-type speakers, whatever --- make use of it over your network, using a generic API.

There are some issues that come with this. It needs to be hardened; it can't leak information from one device to another. Some tasks require storing a lot of state: AI image generation, for example, requires uploading a large model, and you want to cache that. If you have, say, two parallel compute cards/servers, you want to use them intelligently, keeping the model loaded on one of them insofar as is reasonable to avoid needing to reload it. And some tasks are very latency-sensitive, like voice recognition, while others, like image generation, are amenable to batch use, so some kind of priority system is probably warranted. So there are some technical problems to solve.
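Hand-waving away the hardening problem, the caching-plus-priority idea could be sketched like this (everything here is hypothetical, just to make the scheduling idea concrete):

```python
import heapq
import itertools

# Hypothetical job tiers: lower number = served first.
INTERACTIVE, BATCH = 0, 1

class ComputeNode:
    """Toy scheduler for a shared household compute box: interactive jobs
    (e.g. voice recognition) jump ahead of batch jobs (e.g. image
    generation), and the last-used model stays cached to avoid reloads."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # FIFO tie-breaker within a tier
        self.loaded_model = None

    def submit(self, tier: int, model: str, payload: str):
        heapq.heappush(self._queue, (tier, next(self._counter), model, payload))

    def run_next(self) -> str:
        tier, _, model, payload = heapq.heappop(self._queue)
        if model != self.loaded_model:   # simulate the expensive model swap
            self.loaded_model = model
        return f"{model}:{payload}"

node = ComputeNode()
node.submit(BATCH, "sdxl", "render a cat")
node.submit(INTERACTIVE, "whisper", "transcribe utterance")
print(node.run_next())  # whisper:transcribe utterance
```

A real node would expose this over the network and enforce per-client isolation, but the scheduling core really is this small.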
But otherwise, the only real option for heavy parallel compute is going to be sending your data out to the cloud. And even if you don't care about the privacy implications or the possibility of a company going under, as I saw some home automation person once point out, you don't want your light switches to stop working just because your Internet connection is out.
Having per-household self-hosted parallel compute on one node is still probably more-costly than sharing parallel compute among users. But it's cheaper than putting parallel compute on every device.
Linux has some highly isolated computing environments like seccomp that might be appropriate for implementing the compute portion of such a server, though I don't know whether it's too restrictive to permit running parallel compute tasks.
In such a scenario, you'd have a "household parallel compute server", in much the way that one might have a "household music player" hooked up to a house-wide speaker system running something like mpd or a "household media server" providing storage of media, or suchlike.
Easy fix: don't buy this garbage in the first place. It's terrible for the environment, terrible for your privacy, and of dubious value to begin with.
If every man is an onion, one of my deeper layers is curmudgeon. So take that into account when I say fuck all portable speakers. I'm so tired of hearing everyone's shitty noise. Just fucking everywhere. It takes one person feeling entitled to blast the shittiest music available to ruin the day of everyone in a 500yd radius. If this is you, I hope you stub your toe on every coffee table, hit your head on every door jamb, miss every bus.
The part that really gets me is that you have to opt out to keep everything you say from being saved. Bonkers that that isn't the default! There's no good user-facing reason for it: Alexa doesn't remember shit for users, there's no recall feature. You can't say "remember what I told you last night, give me the address for that place, I was drunk and don't remember the name."
Today: "...they will be deleted after Alexa processes your requests."
Some point in the not-so-distant future: "We are reaching out to let you know that your voice recordings will no longer be deleted. As we continue to expand Alexa's capabilities, we have decided to no longer support this feature."
“We lied and paid a $3M fine.”
And finally "We are reaching out to let you know Alexa key phrase based activation will no longer be supported. For better personalization, Alexa will always process audio in background. Don't worry, your audio is safe with us, we highly care about your privacy."
Or simply "...they will be deleted after Alexa processes your request and generates a token for AI training".
They could also transcribe the recording and only save the text. I mean, they absolutely will, and surely already do.
What happens if I buy one and start playing porn on my computer ?
Why not fuck around and find out?
I don't think Google home listens in.
Because I'd absolutely be disappeared by now if it did.
Only a fool would put an Amazon listening device in their home.
Always listening AI. I always thought the future would be awesome but capitalism has figured out a way for it to not be that.
It's always been this way for the cheap speakers. They've no processing power on-board and need the cloud just to tell you the time.
In the age of techno-fascism, the people willingly pay to install the listening devices into their own homes.
Now they can hear me scream “shut the fuck up Alexa!!!!” every time she says “…by the way…” when I just want to know what time it is.
Me while cooking mac and cheese for the kids:
"Echo, set timer for 8 minutes"
Echo: "GOOD EVENING [me], SETTING TIMER FOR 8 MINUTES"
No, shut the fuck up and just set the goddamn timer without the extra fluff. I've seen Ex Machina, I know you have no empathy, so knock off the "nice" shit and do what I fucking ask without anything else.
My family has one in most rooms of our house...ugh
To the recycling bin you go, Alexa
And people wonder why I never bought any of these kinds of things.
It’s not mine, but I frequently tell the one in our house to “fuck off”.
Everything you say to your Echo...
I don't have an Echo.
Wow there are way fewer "so what it's the same as your smartphone" and "everyone does it, google, apple, it's no big deal" comments on Lemmy.
People seem upset about this. I’m over here wondering wtf is an echo?