These types of projects are driven by metrics, and teams have some kind of quota/goal that they need to reach by a certain date to keep the project on schedule. Bonuses or job security may be on the line here, and so you may see some desperate employees “going the extra mile” to reach their goals.
Relatedly, Alexa’s voice-activation sensitivity is essentially a tunable number. It can be adjusted to make the device more sensitive, so that it activates more easily (e.g. it triggers when you say “Alex” instead of “Alexa”). The people who control this value are likely on the team facing that deadline, so the incentive is there to lower the threshold and collect more data by recording personal conversations “accidentally”. Maybe a bad update goes out that causes Alexa to activate randomly, and it gets quickly fixed a few days later, once they’ve collected all the non-Alexa personal conversations they need for their AI.
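To make the “tunable number” idea concrete, here’s a minimal sketch of how a wake-word threshold might work. This is pure illustration: the function names, confidence scores, and threshold values are all invented and have nothing to do with Amazon’s actual implementation.

```python
# Hypothetical sketch: a wake-word detector that fires when the speech
# model's confidence clears a tunable threshold. Lowering the threshold
# makes the device more "trigger-happy", so near-misses start activating it.

WAKE_THRESHOLD = 0.85  # invented default; not a real Alexa parameter

def should_activate(confidence: float, threshold: float = WAKE_THRESHOLD) -> bool:
    """Return True when the model's wake-word confidence clears the threshold."""
    return confidence >= threshold

# A clear "Alexa" scores high and activates at the default threshold.
print(should_activate(0.95))                  # True
# A near-miss like "Alex" scores lower and is ignored at the default...
print(should_activate(0.80))                  # False
# ...but one small change to the threshold, and it activates anyway.
print(should_activate(0.80, threshold=0.75))  # True
```

The point is that the gap between “only responds to its name” and “records half your living-room conversations” can be a one-line configuration change, which is exactly what makes the incentive problem above worth worrying about.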
That’s maybe a bit too far down the paranoia/tinfoil-hat spectrum for some, but history has shown that you can’t give big tech the benefit of the doubt, especially when you see some of the documents from the Google antitrust trial, where executives discuss rolling back new features to improve arbitrary metrics in the short term so that they can collect their quarterly bonuses, even if it hurts consumers.