A challenging feature of machine learning is that exactly how a given system works is opaque. Nobody — not even those who have access to the code and data — can tell what piece of data came together with what other piece of data to result in the finding the program made. This further undermines the notion of informed consent, as we do not know which data results in what privacy consequences. What we do know is that these algorithms work better the more data they have. This creates an incentive for companies to collect and store as much data as possible, and to bury the privacy ramifications, either in legalese or by playing dumb and being vague.
The Latest Data Privacy Debacle (cache)
Even worse than that, you can easily game a learning algorithm (cache). This is no longer about mutual consent (cache), because even the company you consent with has no idea what it is doing. They hardly grasp the power of all this data and are almost blind to the risks.
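To make the "gaming" point concrete, here is a toy sketch (entirely hypothetical, not drawn from the cached article): a naive spam score based on word frequencies, which an attacker defeats simply by padding a message with innocuous filler words. Real systems are more sophisticated, but the same adversarial dynamic applies.

```python
def spam_score(text: str, spam_words=frozenset({"free", "winner", "prize"})) -> float:
    """Fraction of words in `text` that are known spam words (toy model)."""
    words = text.lower().split()
    return sum(w in spam_words for w in words) / len(words)

spam = "free prize winner"
print(spam_score(spam))  # every word is spammy: score is 1.0

# The attacker pads the same payload with harmless filler,
# diluting the score below any plausible threshold.
padded = spam + " " + "hello " * 20
print(spam_score(padded))  # drops to roughly 0.13
```

The model never changed; only the input did. That asymmetry is why even the operator of the system cannot promise what it will or will not conclude.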
If privacy is the next big thing (cache), it must start with an expiry date on pushed data, and that data must be encrypted on the client side. Granted, this makes it harder to share with somebody else, but that's where peer-to-peer comes to the rescue! The keys to your data should not pass through a third-party actor that gets to decide, unilaterally, to whom they are granted.
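As a rough illustration of "client-side encryption with an expiry date", here is a minimal sketch. It is a toy, not real cryptography: the keystream is derived with HMAC-SHA256 in counter mode purely to stay in the standard library, and the function names are my own. An actual client would use an audited library (libsodium/NaCl, or the Python `cryptography` package) and a proper random nonce. The point it shows is architectural: the server only ever stores the opaque blob, and the client refuses to decrypt past the expiry date.

```python
import hashlib
import hmac
import os
import time

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream with HMAC-SHA256 in counter mode (toy).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_with_expiry(key: bytes, plaintext: bytes, ttl_seconds: int) -> dict:
    # Encrypt on the client, attach an expiry timestamp; only this blob
    # is ever pushed to the server, which cannot read the plaintext.
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
    return {
        "nonce": nonce.hex(),
        "ciphertext": ciphertext.hex(),
        "expires_at": time.time() + ttl_seconds,
    }

def decrypt(key: bytes, blob: dict, now=None) -> bytes:
    # The client refuses to decrypt once the expiry date has passed.
    now = time.time() if now is None else now
    if now > blob["expires_at"]:
        raise ValueError("data has expired")
    nonce = bytes.fromhex(blob["nonce"])
    ciphertext = bytes.fromhex(blob["ciphertext"])
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))
```

Note the limit of a client-enforced expiry: an honest client discards expired data, but nothing stops a copy of the blob plus the key from being decrypted elsewhere, which is exactly why the keys themselves must travel peer-to-peer rather than through a third party.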
From social networks to local networks.