What do tech companies do with our big data and how can we counteract misuse?
Ever since the rise of social networks, smartphones and online advertising, companies of all kinds have monitored us on a daily basis. Our behaviour, movements, social relationships, interests, weaknesses and private moments are stored, evaluated and analysed in real time. I have previously written about the techniques that tech companies use to make users addicted to their programmes.
The relationship between data companies and individuals is often compared to a poker game in which one player has an open hand while the other keeps their cards pressed firmly against their chest. Most users try to protect their data with their privacy settings, but do these offer sufficient guarantees? What really happens behind the scenes at tech companies? An example: the iPhone X can identify both faces and expressions with its TrueDepth camera. Apple now wants to give app developers limited access to that face data. Had you known about this development in advance, would you as a user have given your permission? Probably not. The point is that we use tools such as WhatsApp, Telegram and Facebook Messenger without thinking twice about who has access to the data we generate.
More and more devices, such as smartwatches that track our body functions, are monitoring us all the time. And at home, the digital receptors of Google Home and Amazon Echo listen in on our private conversations: when we go to sleep, when we wake up, where we go and what we buy. In recent years, Visa and Mastercard have begun to feed information about their customers' purchases into this universe of digital traces and profiles.
Google has stated that it monitors 'approximately 70 percent of the credit and debit card transactions in the US' via 'external partnerships'. There are examples of this in the Netherlands too: collection agencies violate the privacy act through the mass sale of debtors' information, and data companies have built profiles of virtually every Dutch household in recent years without informing any of these households.
And it only keeps growing. “Facebook now invests in virtual reality, the next step, which will enable people to fully immerse their senses in a virtual world created by the tech companies. Perhaps some people will not see the danger in such an intimate entanglement with machines. But everyone should know: you merge not only with those machines, but also with the companies that manage them.” (Franklin Foer, World Without Mind: The Existential Threat of Big Tech)
The British journalist John Lanchester calls Facebook ‘the largest surveillance-based enterprise in the history of mankind’. Kashmir Hill of Gizmodo recently wrote a deeply unsettling article about Facebook: “Behind the Facebook profile that you have created yourself is another profile, a shadow profile, compiled from the inboxes and smartphones of other Facebook users.” She cites a whole series of examples of people who receive the most unexpected, undesired friend suggestions.
You are being monitored not only through devices near your body or in your home, but also in public spaces, where the number of sensors is growing enormously. Cameras, motion detectors and noise meters are present in ever increasing numbers in shopping streets. This is what Geonovum, an organisation that makes geographical information accessible to municipalities, has observed.
Weapons of Math Destruction
According to Cathy O’Neil, author of the best-selling Weapons of Math Destruction, entire groups are being discriminated against by algorithms. In recent years, automated programmes trained on previously entered data sets have caused numerous scandals. In 2016, when a student searched Google Images for ‘unprofessional haircuts for the workplace’, the results showed mostly photos of black people; when the student changed the first search term to ‘professional’, Google displayed mostly photos of white people. But this is not the result of prejudice on the part of Google’s programmers; it is rather a reflection of how people had labelled images on the Internet.
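The mechanism is easy to see in miniature. The following sketch, with entirely invented data, shows how a model that simply learns from human-applied labels will faithfully echo whatever patterns, including prejudices, those labels contain:

```python
# Toy illustration: a model trained on human-applied labels simply
# reproduces whatever associations those labels contain.
# All tags, labels and counts here are invented for the example.
from collections import Counter, defaultdict

# Invented "image" records: (tag_found_on_image, human_label)
training_data = [
    ("style_a", "unprofessional"), ("style_a", "unprofessional"),
    ("style_a", "professional"),
    ("style_b", "professional"), ("style_b", "professional"),
    ("style_b", "unprofessional"),
]

# "Training": count how often each tag co-occurs with each label.
counts = defaultdict(Counter)
for tag, label in training_data:
    counts[tag][label] += 1

def predict(tag):
    # The model just echoes the majority label from the training set,
    # bias included. No programmer wrote the prejudice in explicitly.
    return counts[tag].most_common(1)[0][0]

print(predict("style_a"))  # unprofessional
print(predict("style_b"))  # professional
```

The point of the sketch: nobody coded a rule that one style is ‘unprofessional’; the skew in the human labels alone determines the output.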
Andrew Reece of Harvard University and Chris Danforth of the University of Vermont are developing an algorithm that analyses Instagram photos to determine whether someone is suffering from depression. That is surely something you don’t want everyone to know, right? Teenagers use Finstagram, ‘fake Instagram’: a second Instagram account where, despite the name, they can still be themselves. On these protected accounts, accessible only to a few good friends, the lesser, selfie-unworthy moments are also shared.
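To give a feel for how photos can betray a mental state at all: such research summarises images as simple colour statistics (how dark, how saturated) and relates those to a diagnosis. The sketch below is a hypothetical, heavily simplified version of that idea; the function names, thresholds and pixel data are invented, and a real system would learn its cut-offs from data rather than hard-code them.

```python
# Hypothetical sketch: reduce a photo to colour statistics and apply
# an invented rule of thumb. Thresholds and pixels are made up.
import colorsys

def photo_features(pixels):
    """pixels: list of (r, g, b) tuples with values in 0..255."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
           for r, g, b in pixels]
    n = len(hsv)
    mean_sat = sum(s for _, s, _ in hsv) / n   # how colourful
    mean_val = sum(v for _, _, v in hsv) / n   # how bright
    return mean_sat, mean_val

def looks_muted(pixels, sat_cut=0.3, val_cut=0.4):
    # Darker, less saturated photos are the kind of signal such
    # studies report; real systems fit these cut-offs, not guess them.
    sat, val = photo_features(pixels)
    return sat < sat_cut and val < val_cut

bright_photo = [(250, 200, 40)] * 10   # vivid, bright pixels
muted_photo = [(60, 65, 70)] * 10      # dark, greyish pixels
print(looks_muted(bright_photo))  # False
print(looks_muted(muted_photo))   # True
```

Even this crude version makes the privacy concern concrete: the inference runs on ordinary public photos, with no consent step anywhere.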
The dark side
“Technology has crossed over to the dark side. It’s coming for you; it’s coming for all of us, and we may not survive its rise.” A terrifying sentence, written by Farhad Manjoo in The New York Times. A recent example is what China is planning to implement: the Chinese government intends to launch its Social Credit System in 2020, with which it wants to evaluate the reliability of its 1.3 billion inhabitants.
In the Financial Times, Rana Foroohar recently argued why we must regulate tech platforms: “Companies must open the black box of their algorithms.” Foroohar harks back to the example of the 19th-century railway barons, who dominated both the economy and society. They were able to keep prices high, buy out competitors and avoid taxes and legislation, until the creation of the Interstate Commerce Commission swiftly put a stop to all this.
Foroohar now advises that we create an Internet Commerce Commission. In the Dutch newspaper Het Financieele Dagblad, Bas Boorsma of Cisco Northern Europe argues for a similar strategy, because “without a modern ethical framework, digitalisation will take over”. Boorsma continues: “We are dealing with two opposing sides: the technology evangelists and the technology doomsayers. Their perspectives are miles apart, while what we really need is a synthesis of these two schools of thought.” Boorsma pleads for a “New Digital Deal” in which ethics and digitalisation come together.
Governments are also struggling with what to do with the data. Some data is now even wilfully left out or no longer visible. “Would you like to have detailed information about arrests, murders or gang-related incidents in 2016? Or what about the melting polar ice? You will not be able to find much on these items. All of the information that could invoke just the slightest bit of resistance is thrown right out the window,” suggests one article in Wired magazine.
This runs directly counter to the open data movement, which is steadily gaining ground and for which governments, especially, make data available.
In a featured article in the German magazine Der Spiegel, Armin Mahler (Chief Science Editor) welcomes the German government’s stated objective of stimulating digitalisation in Germany: “It is correct and important. But it is equally important to simultaneously tackle the problems related to digital capitalism. Otherwise, only a few companies will dominate our future economy.” “Do social media form a threat to our democracy?” was the title of an article in The Economist only a few days ago. Here too, a plea is made for tech companies to put more effort into providing accurate information and fighting fake news and manipulation.
The most spectacular developments in AI come out of a data-intensive technique known as machine learning. Machine learning requires a lot of information to make, test and “train” AI. Data and AI are inseparable from one another.
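How inseparable the two are can be shown in a few lines. In the sketch below, with invented data points, a trivially simple learner (nearest neighbour) classifies new examples purely by comparing them to the data it was trained on; without data, it can do nothing at all:

```python
# Minimal sketch of the train/test cycle that makes machine learning
# so data-hungry. The coordinates and labels are invented.
import math

# "Training" data: labelled examples the model has seen.
train = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]
# "Test" data: held-out examples to check whether it generalises.
test = [((1.1, 1.1), "A"), ((5.1, 4.9), "B")]

def predict(point):
    # 1-nearest-neighbour: the model *is* its data; every prediction
    # is a lookup into the examples it was trained on.
    return min(train, key=lambda ex: math.dist(point, ex[0]))[1]

correct = sum(predict(p) == label for p, label in test)
print(f"accuracy: {correct}/{len(test)}")  # accuracy: 2/2
```

Scale the same loop up to millions of photos, purchases or location traces and you have the commercial appetite for personal data described above.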
According to the philosopher Nick Bostrom, author of the book Superintelligence: “We are at the cusp of developing a self-learning system that transcends human thought. We only have one chance to get this right.”
According to Holger Hoos, Professor of Machine Learning in Leiden, “a human level of artificial intelligence, no matter how intellectually fascinating, is not desirable. Instead, we should concentrate on making artificial intelligence supplement our capacities and compensate for our weaknesses.” “Big data can only quantify; it qualifies nothing. Every piece of information that comes out of it deserves a human appraisal,” says journalist and researcher Timandra Harkness.
The fact is, technology and technological developments cannot be stopped. “The trick is to see technology not as some great big demon that stands across from us and wants to crush us, but as something that shapes us into who we are,” says philosopher Peter-Paul Verbeek.
Arianna Huffington, CEO of Thrive Global, says: “We must embrace technology in order to liberate ourselves from that very same technology, so that we can connect to the people around us again.”
As long as we, the consumers, the individuals, understand what can happen with our data, we can also protect ourselves against it, to a certain degree. Companies have been claiming for years that they don’t do anything with our data, but if that’s really the truth... Maybe we should all switch to Signal (instead of WhatsApp), DuckDuckGo or WolframAlpha (instead of Google), and so on.
Would you like to read more blogs by Erdinc? http://www.dutchcowboys.nl/bloggers/erdinc-sacan