Most of us can't visualise the impact of big data
During the Geopolitics and Global Futures Symposium 2021, Fontys Project Leader for ICT & Business Dr. Mark Madsen spoke about the implications, impact and risks of big data sets. After all, only a few people, including administrators and politicians, really understand the impact of big data and therefore of AI. That is why Madsen argues for broader awareness.
Geneva Centre For Security Policy
The conference is an annual initiative of the GCSP (Geneva Centre For Security Policy), an organisation whose mission is to promote peace, security and stability. Housed in the 'Maison de la Paix', the Swiss UN-affiliated institute facilitates education, training, advice and research. The focus is on looking ahead: what implications do new (technological) developments have for the years to come? Think of space technology, but increasingly also of topics related to cyber warfare. As a speaker, Madsen is associated with the GCSP and speaks annually on big data, this time within the programme section Transformative Technologies and the Future of Global Security.
Abstract, immense and misunderstood
According to Madsen, the abstract idea of big data is widely misunderstood: "Most of us can't visualise the impact of big data, and that in itself is a risk. Think of something like the Dutch DigiD system: what if it falls into the wrong hands? Would you then get a situation like in Afghanistan, where population data fell into the hands of the Taliban and became a tool of persecution instead of social order? The problem is not the technology, but our awareness of these implications. We need to be aware of that."

Big data is huge, Madsen explains, and with the rise of artificial intelligence it will only get bigger: "Look at your phone: how much data can you store on that device? The National Security Agency in the United States makes recordings of conversations in order to analyse them for possible threats. It records many terabytes of data per minute. That is an amount of data we can never check manually, so the analysis is done by AI. But who makes the AI, and how smart is it? We already see false positives far too often: the AI may detect a peak, but the cause is not a threat. What if the AI takes preventive action anyway?"
Data sets are targets
Whether you're talking about advertising or global politics, data is power, and so the large data sets we create are a political target in cyber espionage, says Madsen: "We often use the term cyberwar, but I don't think that term is accurate. There is continuous cyber warfare, but without physical repercussions. It is about disruption and sabotage, but data sets can change that." Besides the example of Afghanistan, he mentions the 2016 US elections: "We know Russia used Facebook to influence them, and that is where the cat-and-mouse game changes. It shows how vulnerable data sets are, but also how manipulable AI is." And it is not only the AI that is vulnerable: the data itself can be tampered with, or copied into other systems. Imagine a flight ban that is registered but later removed. If that data was copied, false data about you as a person is now in circulation. Now imagine this happening on a much larger scale.
Ethical challenge: opportunities and risks
However, Madsen is not convinced that big data sets only have disadvantages: "Big data and AI can mean an awful lot to us. Think of opportunities in preventive medicine, because you can infer and even predict disease outbreaks from data sets. Data can help us deploy disaster relief properly, based on experience." What he draws particular attention to in his talk at the GCSP is the ethical challenge that comes with big data sets and AI. How can we make AI work for us while avoiding the growing risks? "When we use large data sets, the consequences can span the whole spectrum, from beneficial and convenient to invasive and dangerous. We need to be aware of that downside. That is why our choices in how we manage and control data are decisive, and that is where our future challenge lies."
The GCSP offers many opportunities for researchers and educators working on questions of security and peace in relation to ICT. Want to know more? Then contact Mark Madsen.
The question of AI and ethics is an important one. We will pay more attention to it in the coming period, in the run-up to our 'AI & Ethics' debate on 14 October. Want to know more or participate? Send an e-mail to Yvonne van Puijenbroek.