13. Algorithmic awareness - the challenges created by artificial intelligence

What are algorithms?

Today, the concept of an algorithm is mainly associated with programming and the functionality of web services and applications. Originally, however, an algorithm is a mathematical concept. Its general meaning has remained essentially the same: an algorithm is a series of steps for solving a problem or completing a task.

Algorithms are usually thought of as working automatically, but originally algorithms were manual, i.e. performed by humans. For example, the methods taught in primary school for long multiplication and long division are algorithms. Similarly, recipes in a cookbook are algorithms for preparing delicious dishes from certain ingredients by following certain steps.

An algorithm is characterised by the fact that it takes an input, such as starting values or data, and produces a desired result. The desired outcome is determined by the creator of the algorithm. In programming, these are referred to as the input and the output, between which the actual execution of the program takes place.
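As a minimal illustration of this input → steps → output pattern, the following Python sketch implements long division by repeated subtraction; the function name and structure are our own, chosen for clarity rather than taken from any textbook:

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Input: dividend and divisor. Output: quotient and remainder."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("expects a non-negative dividend and a positive divisor")
    quotient, remainder = 0, dividend
    while remainder >= divisor:   # repeat the same step...
        remainder -= divisor      # ...subtract the divisor once more
        quotient += 1             # ...and count how many times it fits
    return quotient, remainder

print(long_division(17, 5))  # -> (3, 2), i.e. 17 = 3 * 5 + 2
```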

Computer program algorithms

Among the most common algorithms used by computers are those behind the various file formats used to store and compress images, audio and video. For example, a digital photograph can be compressed to a fraction of its original file size using the JPEG compression algorithm. Algorithms are also at work when live video is transmitted over a network to viewers, or when Internet servers deliver a particular web page to a user who has typed its address into their browser.
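As a hedged illustration, the widely used Pillow imaging library for Python exposes the JPEG quality setting directly; the file names below are placeholders:

```python
# A sketch using the Pillow library (pip install pillow); the file names
# are placeholders. A lower quality setting makes the JPEG encoder
# discard more detail in exchange for a smaller file.
from PIL import Image

image = Image.open("photo.png").convert("RGB")   # input: the original image
image.save("photo_q90.jpg", "JPEG", quality=90)  # mild compression
image.save("photo_q30.jpg", "JPEG", quality=30)  # heavy compression, much smaller
```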

Sometimes the input data, as well as the operations and results of algorithms, are very complex. The complexity usually stems from the fact that the input used by the algorithm consists of a large amount of previously collected data, or that a large number of different variables or data points are used to perform a single task.

For example, the weather in a particular area can be predicted using previously collected data such as temperature, precipitation, wind and barometric pressure, together with statistical models based on observations. Today's weather forecasting models, however, are based on virtual modelling of the area to be forecast, which simulates real atmospheric phenomena. Algorithms using such modelling are, in effect, based on a mirror image of the real world.
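To make the statistical approach concrete, here is a deliberately toy forecast in Python; the weights are invented for illustration and have no meteorological validity:

```python
# A toy statistical forecast, not a real meteorological model: predict
# tomorrow's temperature from today's observations with fixed weights.
def predict_temperature(temp_c: float, pressure_hpa: float, wind_ms: float) -> float:
    baseline = temp_c                               # persistence: tomorrow ~ today
    pressure_effect = 0.02 * (pressure_hpa - 1013)  # high pressure -> slightly warmer
    wind_effect = -0.1 * wind_ms                    # wind -> slight cooling
    return baseline + pressure_effect + wind_effect

print(round(predict_temperature(15.0, 1020.0, 4.0), 1))  # -> 14.7
```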

Digital twins and recommendation systems

When algorithms are used to predict and influence human behaviour, the term digital twin is sometimes used. It refers to the set of data collected about a person and their activities, combined from different sources. For example, online advertising networks and the recommendation algorithms used by social media and content streaming services aim to offer each user the most suitable option based on the data available about them.
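A minimal sketch of the idea, with all names and numbers invented: interest scores from several sources are merged into one profile, which is then used to pick the best-matching advertisement:

```python
from collections import Counter

# Hypothetical data about one person, collected from different sources.
browsing = Counter({"running": 3, "travel": 1})
purchases = Counter({"running": 2})
app_usage = Counter({"cooking": 4})

profile = browsing + purchases + app_usage   # the combined "digital twin"

ads = {"running shoes": "running", "flight deals": "travel", "knife set": "cooking"}
best_ad = max(ads, key=lambda ad: profile[ads[ad]])  # highest-scoring interest wins
print(best_ad)  # -> "running shoes"
```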

Recommendation systems aggregate data collected both on users and on the items being recommended. The best-known recommendation system is Google's search engine. Google's search was originally based on the PageRank algorithm, the idea being that the value of each web page is measured by how many other websites link to it. At the same time, a page's PageRank value is influenced by the PageRank values of the pages linking to it, as well as by how closely the topics of those linking pages correspond to the linked page.
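The core of PageRank can be written in a few lines. The following Python sketch uses the classic damping factor of 0.85 and a tiny invented web of three pages; real search engines add many more signals on top of this:

```python
links = {          # a hypothetical three-page web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

damping = 0.85
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}        # start with equal ranks

for _ in range(50):                              # iterate until the values settle
    rank = {
        p: (1 - damping) / len(pages)
        + damping * sum(rank[q] / len(links[q])  # each linker donates a share
                        for q in pages if p in links[q])
        for p in pages
    }

print({p: round(r, 3) for p, r in rank.items()})
```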

PageRank is currently just one of many algorithms used by Google search. Since 2004, Google’s search results have been influenced by data collected from users to personalise the search results, i.e. to recommend different web pages to different users. By 2010, Google reported using more than 250 different variables to personalise search results.

Today, Google search results are influenced by, among other things, a user's age, gender, family, occupation, hobbies, location, online purchases, travel, interests and browsing history. Google's recommendation algorithms are not limited to search results: they are used above all in Google's advertising system to select ads that are relevant to users. It may come as a surprise to many that recommendation algorithms also select the news that users see, for example in the news view on Android.

AI algorithms

When an algorithm uses machine learning or some other artificial intelligence technique, it is called an AI algorithm. Machine learning means that the algorithm does not give the same result every time: it is continuously trained with newly collected data, so that it "learns" to improve its results over time.
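A minimal sketch of what "learning from new data" can mean in practice: an estimate of how likely a user is to click a topic, refined after every new observation. The running-average update below is illustrative, not any particular service's method:

```python
class ClickModel:
    def __init__(self) -> None:
        self.shown: dict[str, int] = {}
        self.clicked: dict[str, int] = {}

    def update(self, topic: str, clicked: bool) -> None:
        # every new observation becomes training data
        self.shown[topic] = self.shown.get(topic, 0) + 1
        self.clicked[topic] = self.clicked.get(topic, 0) + int(clicked)

    def score(self, topic: str) -> float:
        shown = self.shown.get(topic, 0)
        return self.clicked.get(topic, 0) / shown if shown else 0.0

model = ClickModel()
for click in (True, True, False):
    model.update("cooking", click)
print(round(model.score("cooking"), 2))  # -> 0.67, sharpened by each data point
```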

The most familiar example of a learning recommendation algorithm is probably the YouTube algorithm, which suggests to users which videos to watch next. YouTube’s suggestions are influenced by previously viewed videos and other data collected by Google, as well as data related to potential suggested videos, such as their topics and average actual viewing times. But instead of only suggesting new videos related to the topics of previously viewed videos, YouTube’s algorithm also suggests videos on topics and channels that the user has not yet viewed.

For YouTube’s AI algorithm, each video suggestion is like a trial balloon thrown to the user, from which the algorithm tries to learn new information: in this case, which video topics are of interest to the user and which are not. A similar type of data collection is used by a number of social media services such as Facebook, Instagram, Twitter and Spotify.
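This trial-balloon behaviour resembles what the machine learning literature calls an epsilon-greedy strategy: mostly recommend what has worked before, but occasionally try something untested to learn more. A toy Python sketch, not YouTube's actual code; the topics and rates are invented:

```python
import random

watch_rate = {"chess": 0.8, "cooking": 0.5, "surfing": None}  # None = not yet tried

def recommend(epsilon: float = 0.2) -> str:
    untried = [t for t, r in watch_rate.items() if r is None]
    if untried and random.random() < epsilon:
        return random.choice(untried)            # explore: throw a trial balloon
    known = {t: r for t, r in watch_rate.items() if r is not None}
    return max(known, key=known.get)             # exploit: the best-known topic

print(recommend())
```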

Despite efforts to develop algorithms that take into account a wide range of user interests, user behaviour still tends to steer algorithms towards one-sided recommendations on narrow topics. For example, if you repeatedly click on Facebook and Instagram posts on the same topic, you will continue to see more and more of the same type of content. This is called algorithmic bias.
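A toy simulation of this feedback loop: every click raises a topic's weight, so the topic is shown even more often, which invites more clicks. The topics and the 1.5 multiplier are invented for illustration:

```python
weights = {"politics": 1.0, "sports": 1.0, "science": 1.0}

def register_click(topic: str) -> None:
    weights[topic] *= 1.5   # the algorithm concludes: "the user likes this"

for _ in range(5):
    register_click("politics")   # the user keeps clicking the same topic

total = sum(weights.values())
print({t: round(w / total, 2) for t, w in weights.items()})
# -> politics now dominates the recommendations
```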

In AI algorithms, bias can also be caused by the training material originally used in machine learning. For example, the Google Translate algorithm used to translate a gender-neutral personal pronoun into "she" or "he" depending on the occupation mentioned in the sentence.

Google was even accused of discrimination because of this, even though the bias stemmed from the kind of material that had been available for training the AI. Today, Google Translate offers both alternatives for such translations.

Facebook algorithms and emotions

Of all social media services, Facebook has made the greatest effort to harness users' emotions in its news feed algorithm. Liking posts has been part of Facebook's functionality almost since the service began. Emotions were truly harnessed in 2016, when Facebook launched the emoji reactions "love", "haha", "wow", "sad" and "angry".

Prior to the introduction of these emoji reactions, Facebook had conducted a practical experiment to see how different posts affected users' actions and emotions. The study found that positive posts evoked positive emotions and negative posts negative ones. Using the data collected from the emoji reactions, Facebook's algorithm was able to select posts for users' news feeds based on their emotional state. For example, if a user frequently clicked on "wow" reactions, they would then see more posts that had received a lot of "wow" reactions.

From 2017, an emoji reaction was given the weight of five regular likes in the news feed recommendation algorithm. Companies and others studying the algorithm soon found that highly emotive posts rose to the top of users' news feeds as a result. This kind of activity, which exploits human behaviour and the algorithms of social media services, is called social media optimisation.
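In scoring terms, the change can be sketched as follows; the formula is our simplification of the weighting described above, not Facebook's actual code, and the numbers are invented:

```python
EMOJI_WEIGHT = 5   # from 2017, one emoji reaction counted as five likes

def engagement_score(likes: int, emoji_reactions: int) -> int:
    # a deliberately simplified ranking score
    return likes + EMOJI_WEIGHT * emoji_reactions

calm_post = engagement_score(likes=100, emoji_reactions=2)   # -> 110
angry_post = engagement_score(likes=10, emoji_reactions=40)  # -> 210
print(calm_post, angry_post)  # the emotive post wins the news feed slot
```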

Indignation and anger proved to be particularly effective emotions on Facebook. With more than two billion users, algorithm changes play a major role: they control both the kind of posts users see and the kind of posts influencers make. So when the algorithm seemed to reward incitement to anger, many publishers started to act accordingly.

The high volume of hate content is one of the reasons why Facebook has been widely criticised for years. Facebook eventually lowered the weight of the "angry" reaction in its algorithm: first to four likes in 2018, then to one and a half likes in 2020, and finally to zero likes in 2021, after thousands of documents leaked by ex-Facebook employee Frances Haugen revealed the information described above.

Do algorithms have too much power?

Emerging data on Facebook’s algorithms has fuelled the debate on whether algorithms have too much power over users of online services. The fact is that algorithms do have an impact on the behaviour of their users. Most often, this influence is seen in the content that is recommended to users.

At the same time, it has been rightly questioned whether even the algorithms’ authors always have control over how algorithms work. AI algorithms in particular sometimes produce results that are difficult to predict in advance.

Facebook's algorithms are very complex: the company has boasted of using more than 10,000 data points to choose what to show each user. With so many different factors influencing what users see, the whole is not easy to manage.

A 2021 document leak revealed that when Facebook introduced emoji reactions, the company had sought to create a mechanism to prevent "angry" reactions from having a disproportionate impact on the visibility of posts. The algorithm had been programmed to halve the visibility score of a post that caused anger in certain situations. However, because of the other variables affecting the algorithm, there was no upper limit to the visibility score, so at worst, posts that garnered "angry" reactions could still receive practically unlimited visibility scores.
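Why the halving did not help can be seen from a short sketch; the function and the numbers are invented to illustrate the leaked description, not taken from Facebook's code:

```python
def visibility(base_signals: float, caused_anger: bool) -> float:
    score = base_signals   # the sum of thousands of other ranking variables
    if caused_anger:
        score *= 0.5       # the intended penalty: halve the score...
    return score           # ...but nothing caps the final result

print(visibility(1_000_000.0, caused_anger=True))  # still 500000.0
```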

Tellingly, while Facebook's news feed algorithm gave disproportionately high visibility to some posts containing disinformation, hate speech and clickbait, the company's own moderators sought to weed out the very same types of content. However, Facebook did not have enough moderators to remove all the damaging posts that the algorithm lifted to the top of the news feed.

Should the algorithms be published?

An often-heard demand is that online giants such as Google, Facebook and Twitter should publish the principles behind their algorithms. These demands relate mainly to the alleged harmfulness of the algorithms, such as their attempts to maximise the time users spend on social media, and their failure to prevent the spread of messages that contain false information or fuel polarisation.

The business of online and social media services is usually based on advertising revenue, i.e. on users clicking on the adverts targeted at them. This naturally gives the services an incentive to keep users on them for as long as possible. It is therefore clear that the algorithms are tuned to do just that, even if the services do not say so themselves. On the other hand, many studies show that spending long periods online and on social media is not conducive to users' well-being. In the operation of the algorithms, the interests of the companies running the services and the interests of the users therefore do not coincide.

Online giants have been reluctant to publish information about their algorithms, citing trade secrets and arguing that publishing the algorithms would only increase their misuse and manipulation by publishers and other online influencers. The argument is not without merit, as there has been a constant race to game and exploit algorithms. On the other hand, it could equally be argued that it is the web giants' responsibility to develop algorithms good enough to detect and prevent attempts at manipulation.

In the debate on the openness of algorithms, it is often forgotten that some of the algorithms' operating principles have already been published. Google, for example, provides a comprehensive, though general, description of the factors influencing its search engine results. Google has also published online, for anyone to read, the nearly 200-page guide used by its own search result evaluators. In addition, Google has produced a number of tools for website developers to test and improve the performance of their websites and, at the same time, their ranking in Google's search results. In this respect, Google can be considered a good example of algorithmic transparency. On the other hand, we have no way of knowing what Google is not telling us.

It is easy to be sceptical about how many users of web services would bother to read hundreds of pages of documents describing the detailed workings of algorithms. In principle, however, the issue is important. If the principles of how algorithms work were published, awareness would increase, mechanisms that have so far been hidden would come to light, and researchers could study them in much greater depth. For users' privacy, the most important thing would be to know in what ways their personal data are used in the algorithms. New EU legislative packages are therefore moving towards requiring greater transparency from online service providers about how their algorithms work.

References:

Google, 2022, Miten tulokset luodaan automaattisesti, https://www.google.com/intl/fi/search/howsearchworks/how-search-works/ranking-results/

Google, 28.7.2022, Search Quality Evaluator Guidelines, https://static.googleusercontent.com/media/guidelines.raterhub.com/fi//searchqualityevaluatorguidelines.pdf

Pönkä, H., 31.10.2021, Infografiikka: Facebookin viha-reaktio ja algoritmin muutokset, https://harto.wordpress.com/2021/10/31/infografiikka-facebookin-viha-reaktio-ja-algoritmin-muutokset/

The Washington Post, 26.10.2021, A whistleblower’s power: Key takeaways from the Facebook Papers, https://www.washingtonpost.com/technology/2021/10/25/what-are-the-facebook-papers/

Wikipedia, 2022a, Luettelo algoritmeista, https://fi.wikipedia.org/wiki/Luettelo_algoritmeista

Wikipedia, 2022b, Tekoäly, https://fi.wikipedia.org/wiki/Teko%C3%A4ly

Wired, 22.2.2010, Exclusive: How Google’s Algorithm Rules the Web, https://web.archive.org/web/20110612022158/http://www.wired.com/magazine/2010/02/ff_google_algorithm/2

Yle, 19.12.2016, Näin sinua ohjataan Facebookissa ja internetissä, https://yle.fi/aihe/artikkeli/2016/12/19/nain-sinua-ohjataan-facebookissa-ja-internetissa

Yle, 12.2.2020, Hölkkääjä päätyy ultrajuoksuvideoihin ja kasvisruuan ystävä vegaanisisältöihin – Youtuben algoritmin tehtävänä on katsojan koukuttaminen, https://yle.fi/aihe/artikkeli/2020/02/12/algoritmin-tehtavana-ei-ole-totuuden-etsiminen-vaan-ihmisten-pitaminen-sivuilla

Harto Pönkä (M.Ed.) has a broad background in e-learning pedagogy, media education, social media and data protection. He has been a trainer since 2008 and has published books and articles on social media. Pönkä provides training and analysis for companies, associations and public administrations. Pönkä works for his companies Innowise and Tweeps.

Artwork: Lumi Pönkä

