Can any of us truly maintain that we choose what we choose freely in today’s world? What is unethical manipulation and how can we discern it or protect ourselves from it?
On the train to Brussels, as I watch the neat, flat and rainy terrain of the Netherlands gradually give way to the slightly lusher November landscape of northern Belgium, a single thought keeps returning to my mind. Kind of like an annoying wasp.
I have been invited to attend the European Parliament seminar on “fighting disinformation in the digital age”, which is right up my alley as a philosopher of marketing and technology. I can’t wait to learn more about the workings and ethics of cutting-edge communication and data technologies.
What makes a person choose anything? The question that keeps coming back to my mind is: what really made me choose to come to this seminar?
Contemporary scientific consensus holds that free will does not exist. The logical idea that determinism rules and that free choice or free will cannot exist sits very uncomfortably with me as a human being, a writer, and a marketer.
So, my genes combined with my life experiences made me choose to write this article? But what if I didn’t want them to?
If free will indeed does not exist, then how bad can it really be that marketers and political campaigners alike use Big Data and methods like microtargeting and psychological profiling? Or that they deploy bots posing as human social media accounts, neuromarketing, and other advanced science and technology to sway people in their preferred direction?
One of the main reasons I was interested to see what the seminar in Brussels was all about was that Jeroen van den Hoven, professor of Big Data Ethics at Delft University of Technology (TU Delft), would be present during a panel discussion on the second day. So I would be able to ask him precisely that question.
At least, that’s what I thought my reason for choosing to attend was. But who knows?
A large part of the seminar was dedicated to explaining why the European Parliament and European Commission have been closely watching elections around the world — including the ones surrounding Brexit and the infamous Trump election, but also in Eastern Europe, South and Central America, and Sweden — and how the EU is analyzing what effects disinformation and innovative communication technologies can have on the outcome of elections.
The trend is of course alarming, especially considering the political crisis whose consequences, EP Vice-President Pavel Telicka reminds us, are currently visible around Europe. Throughout the continent we are seeing a rise in populism and a wave of local nationalism, both exacerbated by the rise of fake news, or 'misinformation' as the EP members prefer to call it.
Even in the 2018 local elections in the Netherlands, where I live, we learn that both (possibly illegal) microtargeting and Twitter bots were used, mostly to the benefit of Geert Wilders' right-wing PVV.
The EU has a number of projects and task forces dedicated to solving this complex problem. Among them are STOA (panel for Science and Technology Options Assessment) and the initiative the EU helped set up that resulted in a self-regulatory code of practice, signed by Facebook, Mozilla, Google and Twitter among others and directed at fighting fake news online.
There's even talk of raising taxes on online advertising to a level comparable to that on offline advertising. This would be a very bold move indeed, and I am curious to see how seriously the EU is considering making it a reality.
But with the EP elections of May 2019 coming into focus, the aim of the seminar on fighting disinformation was clear. What the European institutions wanted was to reach out to journalists and voters alike, to crank up the debate about misinformation and its effects on the political climate and elections.
When we speak of discerning the correctness of information provided to us, we are essentially speaking about trust and the robustness of records against tampering. If there is anything the much-hyped blockchain and other forms of decentralized technology could bring to the table, it would be solving for trust.
One of the core ideas of Nakamoto's brainchild was the immutability of records that are always public. Additionally, various blockchain initiatives in the cryptosphere have been associated with countering fake news.
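The immutability idea can be sketched with a minimal hash chain. This is a simplified illustration, not how Bitcoin or any particular anti-fake-news project actually works: each record commits to the hash of its predecessor, so quietly editing an earlier entry breaks every link that follows it.

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    # Hash a record's canonical (sorted-key) JSON form.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def append(chain: list, payload: str) -> None:
    # Each new record stores the hash of the previous record.
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"payload": payload, "prev": prev})


def verify(chain: list) -> bool:
    # Recompute every link; any edited record invalidates all later links.
    for i in range(1, len(chain)):
        if chain[i]["prev"] != record_hash(chain[i - 1]):
            return False
    return True


chain = []
append(chain, "article v1 published")
append(chain, "correction issued")
assert verify(chain)

chain[0]["payload"] = "article v1 quietly rewritten"  # tampering
assert not verify(chain)
```

In a real blockchain the chain is replicated across many parties and extended by consensus, which is what makes the tamper-evidence meaningful; this sketch only shows the hash-linking at its core.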
Is the European Parliament looking into using decentralized technology in the fight against fake news? Eva Kaili, MEP and STOA chair, tells us that in her opinion blockchain and decentralized technologies are not yet mature enough for problems of this complexity and scale. However, she admits the philosophy behind them is certainly very interesting and possibly even revolutionary.
The EU is currently looking into use cases for blockchain, potentially combined with A.I., for fighting disinformation, equalizing the media and advertising landscape and for safer use of data and identity.
Carl Miller, research director of CASM at Demos and author of Death of the Gods, tells us a marvellous story about being a 'Fake News Merchant'. He illustrates what an amazing economic opportunity it can be in regions of Eastern Europe where youth unemployment can be as high as 60%. His plea is that we 'not only burn the poppy fields, but also give them something else to grow'.
After Miller's inspiring talk, we learn more about what we ourselves can do to fight fake news, both as journalists and as informed citizens.
Eoghan Sweeney from Firstdraftnews.org takes us on a tour of their free online educational resources, as well as a range of free online tools and their usability for "Intel Techniques" that we can all apply. Among them are staple apps such as Google Maps, Google Earth, Russian Yandex and Wikimapia, and a fantastic app by the name of 'Suncalc', which lets you determine what direction the shadows in a given photo should point if it was indeed taken at the stated time and location.
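The geometry behind a check like Suncalc's can be sketched with textbook solar-position formulas. To be clear, this is a simplified approximation (good to roughly a degree), not Suncalc's own astronomy: from latitude, day of year, and local *solar* time you get the sun's azimuth, and shadows point the opposite way.

```python
import math


def sun_position(lat_deg: float, day_of_year: int, solar_time_h: float):
    """Approximate solar altitude and azimuth in degrees (azimuth from north).

    Uses Cooper's declination approximation and the standard hour-angle
    formulas; input time must be local solar time, not clock time.
    """
    lat = math.radians(lat_deg)
    # Solar declination: +/-23.44 degrees over the year.
    decl = math.radians(
        -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    )
    # Hour angle: 15 degrees per hour away from solar noon.
    hour_angle = math.radians(15.0 * (solar_time_h - 12.0))

    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    alt = math.asin(sin_alt)

    cos_az = ((math.sin(decl) - sin_alt * math.sin(lat))
              / (math.cos(alt) * math.cos(lat)))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:  # afternoon: sun has passed south toward the west
        az = 360.0 - az
    return math.degrees(alt), az


def shadow_azimuth(sun_az_deg: float) -> float:
    # Shadows point directly away from the sun.
    return (sun_az_deg + 180.0) % 360.0


# Winter-solstice noon at 50 degrees north: the sun sits roughly due south
# (azimuth near 180), so shadows in a genuine photo should point roughly north.
alt, az = sun_position(50.0, 355, 12.0)
```

If the shadows in a photo point somewhere clearly incompatible with this direction, the stated time or location is suspect, which is exactly the kind of cheap consistency check the tool automates.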
The main thing, though, is not the tools or the use of them. It's our mindset. The main point is that we firstly raise awareness about mis- and disinformation amongst ourselves, and secondly adopt a thoroughly critical way of thinking about information we receive in any way, shape or form.
And, maybe, that we realize the existence of our online and offline filter bubbles.
What role does the notion of ‘free will’ have to play in all this?
On the second day of the seminar Professor Van den Hoven wonders whether 'Democracy will survive Big Data and A.I.'. He argues that we must all take this issue much more seriously than we have thus far.
People and organizations that use technology like Big Data and A.I., combined with (political) microtargeting and expert knowledge of psychology and psychometrics, can achieve a great deal — even, as he puts it, 'devour human autonomy'. Quite the bold statement.
It can be argued that marketers and businesses also bear responsibility for the misinformation mayhem we're experiencing as a global society. I ask the professor whether he sees a responsibility or a role for businesses in fighting misinformation and manipulation, considering businesses are the paying customers who helped create and sustain our current digital ecosystem.
The main concern Van den Hoven raises is the following: there has been, and continues to be, a strongly asymmetrical build-up of knowledge about persuasion and the use of technology in it. Both politicians and businesses know vastly more about these techniques than the general public does, giving them a huge and unfair advantage.
This needs to be regulated, the professor states, and I agree. But why couldn't a self-regulatory initiative be set up, in which parties and organizations take responsibility into their own hands?
‘Free will’ is a complex and vague construct at best. The same could be said about persuasion, an obviously related concept.
But regardless of your personal stance on the matter — believer in free choice or not, you decide — at some point we must agree that providing information to facilitate a person's choice ends, and unethical manipulation begins. Where does Van den Hoven stand on the matter?
According to the professor of Ethics & Big Data, there is a fairly clear dividing line between convincing on the one hand and manipulation or brainwashing on the other. He illustrates his point with the legal definition and treatment of what is called 'entrapment'. If someone is put in a certain position, with a certain level of cues and coaxing to commit an illegal act, we as a society rule that they cannot be found guilty of choosing to commit the crime.
In the same sense, perhaps, a voter or consumer cannot be viewed as 'freely choosing' what he chooses when some of the techniques and technologies being used all over the world today are applied to him.
Transparency, and the active propagation of ‘Information-Awareness’ is what is owed to citizens, patients and consumers.
On my train ride back to The Netherlands, my mind is still — or even more — dazed by the riddle of free will.
But my resolve is strengthened: I intend to double down and to learn more and talk more about the ethics of the use of Big Data, and the immense impact digital communication technology has on all of our lives.
I already talk about these subjects with my friends, readers, and the people I work with in marketing and tech. Sometimes to their wonder and inspiration, other times to their boredom and indifference. It is somewhat tedious subject matter at times, surely, but I think that may be a price we should be willing to pay for our freedom.
When the GDPR 'hit' our marketing department, I was the one advocating we go beyond the law and stop to seriously think about the impact our own digital marketing practices have on people's lives, and the responsibility that comes with it. Even if that meant generating fewer leads. We, too, used to include Facebook and Google tracking pixels as standard on all of our websites, I'll admit.
Maybe we should all go beyond the GDPR and the constraints as well as the awareness it bestows on us. As politicians, businesses, and individuals alike. To protect our own and each other’s ‘free will’. You decide.