Algorithms Know What We Like – but Do We Like What They Want?
Identity refers to an individual’s beliefs about themselves: who we are and what we believe in. Identity mirrors our values and guides our conception of what is right and what is wrong. It also directs our social behavior, as most of us prefer to interact with people who share our values. Instead of coming ready-made, identity is shaped over the course of one’s life. A baby, for example, is neither vegan nor patriotic.
Sounds so simple — except it is anything but simple.
Ubiquitous digitalization has affected our identities. The law warns us of identity theft, a crime in which an imposter obtains key pieces of personally identifiable information in order to impersonate someone else. Identity theft barely affects an individual’s self-perception, but it can still be a distressing experience. On social media, for example, imposters may share embarrassing content about others, with nasty consequences such as ended friendships or complicated lawsuits. Fraudulent online shopping makes a victim’s everyday life messy, but it may also ruin their credit rating.
The number of identity thefts has increased, but social media also reshapes our identities in more routine ways. For example, when we are exposed to our friends’ posts on social media, we may end up comparing our own lives to those of others who appear to be having fun and feeling happy. Many of us fear missing out, for example when we notice that we have not been invited to the barbecue. Sometimes we may also enjoy others’ difficulties, since they can be interpreted as evidence of the success of our own choices.
We rarely show our true personalities – on social media hardly ever. On the one hand, we express ourselves in positive ways to strengthen our social bonds: we post happy holiday pictures on Instagram and retweet content on Twitter that we think is appropriate and in our interest. On the other hand, our social media behavior reveals embarrassing details and influences how others perceive us.
The Canadian-American sociologist Erving Goffman (1922–1982) famously described social life as a series of performances taking place both on stage and behind the scenes. Somewhat simplifying Goffman’s thinking, perhaps our life is a theatre in which we simultaneously act as scriptwriters and directors while also playing the main roles. Social media provides a context for performances in which we create and maintain roles that we think will promote our aspirations. Social media can also invite hypocrisy, because it is easy to support good causes there without being accountable for them. It offers a low-threshold channel, for example, for protesting against climate change and fighting for gender pay equality.
Scripted and improvised social media performances are conscious identity building. But what if our identities are built on something else, something that is not in our own hands?
Social media, search engines and mobile apps have made our everyday life transparent. More and more of our daily activities can be converted into digital data. Every Facebook like, Google search, YouTube video view and use of a mobile map app creates data sets that can indicate what we like and hate, where we are, and with whom. Machine learning algorithms not only know what we have done; they are also astonishingly good at predicting what we are going to do. By analyzing, combining and selling our behavioral data, Google, Facebook, YouTube and other internet giants transform our digital footprints into their cash flows.
However annoying this commercial use of our data may be, there are more fundamental issues at stake. In the world of algorithms, knowledge probably means more power than ever before. Professor Yuval Noah Harari, the author of the three bestsellers Sapiens: A Brief History of Humankind (2015), Homo Deus: A Brief History of Tomorrow (2016), and 21 Lessons for the 21st Century (2018), has vividly described how disruptive technology might change the very nature of humanity. In the wrong hands, algorithms enable the effective and systematic manipulation of social interactions. The more accurately algorithms can detect our interests and habits, the more vulnerable we become to disinformation and fake news. Obviously, Harari is not the only one who sees dangers to democracy emerging from machine learning algorithms. Many scholars have pointed out that, in order to succeed, democracy requires an informed public.
Cathy O’Neil, the author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016), has claimed that algorithms are making it harder and harder to get unbiased information. Roger McNamee, an early investor in Facebook, wrote in a similar tone in TIME magazine (January 28, 2019) that “on Facebook, information and disinformation look the same; the only difference is that disinformation generates more revenue, so it gets better treatment. To Facebook, facts are not an absolute; they are a choice to be left initially to users and their friends but then magnified by algorithms to promote engagement. […] Facebook’s algorithms promote extreme messages over neutral ones, which can elevate disinformation over information, conspiracy theories over facts.”
A gap has widened between those who collect, store, and mine large quantities of data and those whom data collection targets. This has influenced a wide array of domains, from policy making to policing, from business operations to social and health care, and from entertainment to education (Dow Schüll in Papacharissi 2019), and it has challenged our fundamental notions of human power and agency (Neff & Nagy in Papacharissi 2019).
It seems reasonable to expect that in an algorithmic age the meaning and controllability of identity have changed and become more complex. This is a key argument in Professor John Cheney-Lippold’s book We Are Data: Algorithms and the Making of Our Digital Selves (2018). According to him, we have little control over who we are algorithmically, because our identities are imposed on us by drawing on our every search, like, click and purchase. As a consequence, our behavioral data not only tells others what we have done; it also defines who we are and what we are able to do. The question is not of Orwellian-style surveillance but of a subtler and more mundane dataveillance (see Roger Clarke), based on our more or less voluntary disclosure of private data.
If things go badly, we may end up in a situation that Antoinette Rouvroy (2013) described as algorithmic governmentality. Algorithmic governmentality is based on a logic that “simply ignores the embodied individuals it affects and has as its sole ‘subject’ a ‘statistical body’ […] In such governmental context, the subjective singularities of individuals, their personal psychological motivations or intentions do not matter.”
The more we rely on algorithms in making decisions and value judgements, the more critical it is to ensure that those decisions and judgements are in accordance with our understanding of human agency – the claim that it is humans, not algorithms, who make the most crucial decisions and enact them on the world.
Harri Jalonen, Director of the CoSIE project
Adjunct Professor, Turku University of Applied Sciences