Hypernudging — When algorithms become manipulative

Remember the floor markers in stores indicating a 1.5-meter distance for COVID safety? The small fly in the urinal? Or the “eat your veggies” sign in the canteen? Those are all so-called nudges.

A nudge is any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. [1]


In short: it “helps” you choose, but leaves the choice up to you.

There are different kinds of nudges, from behavior-based ones that make the “better” choice more convenient, to knowledge-based ones that try to change your behavior by increasing your knowledge. And if you take a moment to think about your day-to-day life, you will realize how omnipresent nudges are.

But are they always good for us? What even is “good”? And can’t the same techniques be used, for example in marketing, to slowly facilitate behavioral changes that are convenient for the advertised product?

Definitely! Critics argue that nudges diminish autonomy, threaten dignity, violate liberties, or reduce welfare [2]. Even with pure intentions, the line between lightly steering and manipulating someone can become blurry.

Two main forms of critique arise:

  1. Illegitimate motive critique: this is “active” manipulation in the sense that the nudge may be used for illegitimate purposes
  2. Nudge as deception critique: this is “passive” manipulation where the nudge deliberately exploits cognitive weaknesses to produce the desired behavior

Until now, we have only spoken about the “analog” version, which relies on pictures, stickers, or printed text. But what becomes of nudging when it is paired with the power of Big Data and algorithms?

Big Data is the combination of technology and processes. Technology is hardware capable of sifting, sorting, and interrogating vast quantities of data very quickly. Processes involve mining data for patterns, distilling it into predictive analytics, and applying the analytics to new data. [1]

The game-changer is that Big Data systems find useful correlations within (huge) datasets that an ordinary human would never be capable of spotting.

The recursive feedback loop in such a system then allows dynamic adjustments of:

  1. The individual’s choice environment in response to changes in behavior
  2. Data feedback to the choice architect
  3. Population-wide trends
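This loop can be sketched as a toy re-ranking system (the function names, menu items, and click data below are illustrative assumptions, not taken from the source):

```python
from collections import defaultdict

def rank_options(options, clicks):
    """Reorder the choice environment: most-chosen options come first."""
    return sorted(options, key=lambda o: clicks[o], reverse=True)

clicks = defaultdict(int)  # data feedback to the choice architect
options = ["salad", "burger", "soup"]

# Each observed choice feeds back into the next presentation of the menu;
# aggregating `clicks` across many users would capture population-wide trends.
for choice in ["burger", "burger", "salad"]:
    clicks[choice] += 1
    options = rank_options(options, clicks)

print(options)  # → ['burger', 'salad', 'soup']
```

Even this trivial version shows the recursion: the user's behavior changes the data, the data changes the ranking, and the ranking shapes the next round of behavior.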

The continuous adjustment of the algorithm, while generating and collecting new data, results in new data-driven digital decision-making processes. These come in two configurations. An automated decision-making process issues a decision automatically, without any human intervention beyond the user’s input of the relevant data. A digital decision guidance process, in contrast, is designed such that the targeted individual makes the relevant decision, but the underlying software offers ‘suggestions’ that were automatically identified as rational.

Both can be a form of nudging, and with the additional possibilities Big Data brings to the table, the term Hypernudging originated.

Hypernudging relies on highlighting algorithmically determined correlations between data items within data sets that would otherwise not be observable through human cognition. [1]

For example, an HR tool automatically assesses a candidate and compares her to others. It then outputs the results to the recruiter, highlighting areas in which the candidate exceeds or underperforms the others. The final decision on who to hire still lies in the hands of the recruiter, making this a decision guidance process, but it is also obviously a nudge toward the candidate the algorithm deems best.
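A minimal sketch of such a decision guidance tool (the skill names, scores, and the 10% threshold are hypothetical choices for illustration, not from the source):

```python
def highlight(candidate, pool, threshold=0.1):
    """Flag skills where the candidate deviates from the pool average.

    Returns a dict mapping each skill to 'exceeds', 'underperforms',
    or 'average'. The recruiter still makes the final call; the tool
    merely steers attention toward algorithmically chosen highlights.
    """
    result = {}
    for skill, score in candidate.items():
        avg = sum(p[skill] for p in pool) / len(pool)
        if score > avg * (1 + threshold):
            result[skill] = "exceeds"
        elif score < avg * (1 - threshold):
            result[skill] = "underperforms"
        else:
            result[skill] = "average"
    return result

candidate = {"python": 0.9, "communication": 0.5}
pool = [{"python": 0.6, "communication": 0.7},
        {"python": 0.7, "communication": 0.8}]
print(highlight(candidate, pool))
# → {'python': 'exceeds', 'communication': 'underperforms'}
```

Note that the nudge hides in the parameters: whoever picks the threshold and the skill weights decides which candidates get a flattering highlight.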

Let us think about the Instagram timeline. An automated and highly sophisticated algorithm decides what we see when we open the app. This is an automated decision process, as we have little to no control over which content (from people we follow) gets displayed in which order. Yet it is still only a nudge: we can open the profiles individually and look at the new posts from each account. It is therefore a Hypernudge that highlights certain content on our main page, and going back to the definition of a nudge, we realize that it is not restricting our options; it just makes some of them less convenient.
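The feed example can be reduced to a toy ranking function (the post names and engagement scores below are made up for illustration):

```python
def rank_feed(posts, engagement):
    """Order posts by predicted engagement. No post is removed;
    the nudge lies purely in the ordering."""
    return sorted(posts, key=lambda p: engagement.get(p, 0.0), reverse=True)

posts = ["friend_photo", "meme", "news"]
engagement = {"meme": 0.9, "friend_photo": 0.4, "news": 0.1}

feed = rank_feed(posts, engagement)
# Every post is still reachable (no option is forbidden),
# but low-scoring content takes more scrolling to find.
print(feed)  # → ['meme', 'friend_photo', 'news']
```

This is exactly the nudge definition in miniature: the set of options is untouched, while the choice architecture quietly changes their cost in attention and effort.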

But where does this convenience end? When are we manipulated into seeing certain posts or ads, and into spending more time on the platform just to find the content that interests us?

According to Julie Cohen [3], the ‘self’ as a legal subject is defined by three principal attributes:

  1. it is an autonomous being possessed of liberty rights that are presumed capable of exercise regardless of context
  2. the legal subject possesses the capacity for rational deliberation, and this capacity too is detached from context
  3. the selfhood of the legal subject is transcendent and immaterial; it is distinct from the body in which the legal subject resides

This results in an overall principle that should be protected at all costs:

Individuals are allowed to choose and pursue their different plans or paths of life for themselves without interference from others [4]

Now take a moment and think about some of the digital services you use, especially the big ones like Google, Facebook, and Apple. They are so omnipresent that it is nearly impossible to forgo their services, to the point where one can speak of a huge power asymmetry between these digital service providers and individual users. And this can become very dangerous. As in a mentally abusive relationship, it may seem that you have the choice to leave, to escape the nudge and the manipulation, and to move on from your abuser. But due to this asymmetry, that choice is largely illusory!


All your friends and colleagues still use the services, and your relationships can be severely impacted by leaving, much like ending a relationship when you share friends. You can even be excluded from convenient services that some deem essential; try ordering a cab by phone in some cities.

I myself lived for multiple years with an old Nokia 6230 that had no smartphone features at all. Then I switched to a Nokia 6400 4G with KaiOS, since it supported WhatsApp and communicating only via SMS, phone, or e-mail had become very tedious. In the end, I upgraded to a smartphone running GrapheneOS, a pure Android without Google services.

Sure, there is a choice. But when you try it out yourself, you will realize how technically challenging it can be, and that the loss of convenience is so significant that only a few people are willing to live with it.

This is why, first of all, the creators of a digital service that uses some form of Hypernudging bear responsibility, and secondly, why laws exist to protect the ‘self’ of a user.

This matters especially because Big-Data-based Hypernudging algorithms introduce a third critique in addition to the “Nudge as deception” and “Illegitimate motive” critiques mentioned in the beginning:

3. Lack of transparency critique: algorithms intentionally influence the decisions of others yet operate as “black boxes”

Still, some liberals hold that the targeted individual consented to the use of her data and to the terms of service, and that being notified of the choice architect’s purpose justifies the application of different forms of Hypernudging within those services.

Law provides individuals with a set of rights aimed at enabling them to exercise control over the use of their personal data, with individuals deciding for themselves how to weigh the costs and benefits of personal data sharing [1]

The problem with such a view is that multiple studies show individuals are incapable of giving meaningful consent [5], and it is often impossible for them to adequately assess the risk [6]. On top of that, terms and conditions or privacy statements impose a huge information overload and are filled with complicated language, failing to provide the easy-to-understand and sufficiently detailed information that would enable users to make informed decisions [7].

In the end, there is no final conclusion. Hypernudging is very effective and here to stay. There is an obvious conflict of interest for data-driven companies between collecting as much data as possible, providing users with ML-based services that keep them longer on the platform, and respecting the principal attributes of the ‘self’.

It is important to keep in mind that we hold responsibility when creating a digital service. We are obligated to evaluate at which point we start manipulating the autonomous decisions of a user and exploiting their cognitive weaknesses. Nowadays, merely obtaining consent and using it as a blanket justification is no longer adequate.

This article is largely based on [1] and tries to summarize and simplify its key messages to bring awareness to the topic.

[1] https://www.researchgate.net/publication/303479231_%27Hypernudge%27_Big_Data_as_a_mode_of_regulation_by_design
[2] https://en.wikipedia.org/wiki/Nudge_theory
[3] https://juliecohen.com/configuring-the-networked-self/
[4] “Human Rights, Legal Rights and Social Change”, John Kleinig
[5] https://www.cmu.edu/dietrich/sds/docs/loewenstein/PrivacyHumanBeh.pdf
[6] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2171018
[7] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2567042
