From search engines to debit cards: How technology reinforces social inequalities

Essay

Author: Safiyyah Esat

Abstract

Our collective eagerness to integrate technology into every aspect of society, in both our personal lives and at an institutional level—from choosing a romantic partner to policing—has resulted in an abundance of rapidly produced and, consequently, poorly designed digital systems. Because technology is assumed to be inherently objective and unbiased, these systems are created uncritically, allowing discriminatory practices to be codified. In this essay, I will draw upon the works of critical race theorists, Marxists, and labelling theorists to argue that technology reinforces and exacerbates existing social inequalities—notably class and racial hierarchies. I will begin by discussing the extensive harm caused by biased search engines and algorithms before unpacking how the ‘digital divide’ leads to the social exclusion of the poor, and particularly the homeless.

Keywords: technology, artificial intelligence, digital divide, social exclusion, Marxist, critical race theory, social stratification, class, race, police, policing

How to Cite:

Esat, S., (2024) “From search engines to debit cards: How technology reinforces social inequalities”, Essex Student Journal 15(S1). doi: https://doi.org/10.5526/esj.348

Discriminatory search engines

In Algorithms of Oppression, Noble (2018) details how prejudice against women of colour is embedded in search engine results: if you run a Google search for “black girls,” sexually explicit phrases such as “black booty” and “sugary black pussy” are amongst the first results, whilst searching “white girls” presents a radically different—far less pornographic—outcome. The same is true when searching for other combined gendered and racial identities (e.g., Asian women, Latina girls), which yields not only a flood of pornography as the main representation of these women but also an abundance of further stereotypes, with mere scraps of information that may be of value. For women of colour, whose identities are already often diminished in the media (Gill, 2007), this only further demeans them and erodes efforts for recognition and appreciation. Similarly, a viral tweet by Kabir Alli (@iBeKabi, 2016) exposed how searching for “three black teenagers” displays mug shots whilst searching for “three white teenagers” shows pleasant stock photos. Evidently, the presentation of manipulated information is not an isolated event; instead, social stereotypes are routinely reiterated by search engines—in these examples, the notions that black men are dangerous and that women of colour are hypersexual are reinforced. Many scholars, including Howard and Borenstein (2018), Garcia (2016), and Noble (2018), have concluded that prejudice has become automated through computer code, allowing for the “reinforcement of oppressive social relationships and new modes of racial profiling” (Noble, 2018, p. 1).

Despite this, Google remains popular, with billions of searches a day (Google Trends, 2024; Flensted, 2023), and many regard it as a credible source of information. Few are aware of Google’s profit-oriented approach, which algorithmically prioritises its own properties—regardless of their validity—at the top of the search results and essentially blocks sites that attempt to compete (Consumer Watchdog, 2013; Noble, 2012). Unsurprisingly, Marxists such as Mager (2012) have critiqued Google’s monopoly, both because of the concentrated wealth of the “digital oligarchy” (Taplin, 2017) and because knowledge governed by those in power allows for the reinforcement of capitalist ideologies. Reich (2017) argues that creating racial hierarchies (in this case, by reiterating racial stereotypes) divides the working class, preventing revolution and ultimately serving the ruling class/bourgeoisie. Evidently, there is an intrinsic link between class, race, and gender; Google’s commercial motives have resulted in discriminatory—racist and sexist—search engine results.

Beauty AI

Search engines are not the only example of biased algorithms. Beauty AI, a seemingly harmless enterprise, presented the first international beauty contest judged by an algorithm. The 2016 project, designed by Youth Laboratories, involved 6,000 participants from 100 countries, and yet, of the 44 winners, only one finalist had dark skin (Benjamin, 2019). This, understandably, sparked controversy, leading media outlets such as The Guardian to state that “the robots did not like people with dark skin” (Levin, 2016) and prompting revived debates on the unintended results of coding. Although the algorithm was not intentionally built to treat white skin as an indicator of beauty, the input data led the AI to that conclusion (Benjamin, 2019). For critical race theorists, racism is ingrained into the very fabric of society (Parker and Gillborn, 2020); when robots are designed in a world founded on and entrenched in racism, it is unsurprising that racism seeps into technological outputs. Put simply, “biased algorithms are produced by human creators who have their own deeply entrenched biases” (Levin, 2016); whether consciously or not, human programmers’ bias towards whiteness is encoded, reinforcing racial hierarchies.
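To make this mechanism concrete, here is a deliberately minimal sketch in Python. It is not Beauty.AI’s actual system; the single ‘skin tone’ feature, the made-up winner data, and every number in it are hypothetical. It only illustrates the general point above: a model trained on past outcomes that over-represent one group will rate new cases by their resemblance to that group, without race ever being named in the code.

```python
# Toy illustration (not Beauty.AI's real system): a scorer "trained" on
# past contest winners reproduces whatever skew exists in that data.
import random

random.seed(0)

# Hypothetical training data: each entrant is reduced to one "skin tone"
# feature (0 = darkest, 1 = lightest). The example winners are drawn almost
# entirely from light-skinned entrants, mirroring a biased standard of beauty.
past_winners = [random.uniform(0.7, 1.0) for _ in range(43)] + [0.2]

# "Training": the model simply learns the average feature value of winners.
winner_centroid = sum(past_winners) / len(past_winners)

def score(skin_tone: float) -> float:
    """Score a new entrant by closeness to the learned winner profile."""
    return 1.0 - abs(skin_tone - winner_centroid)

# New entrants drawn from across the full range of skin tones.
for tone in [i / 10 for i in range(11)]:
    print(f"skin tone {tone:.1f} -> score {score(tone):.2f}")

# Lighter-skinned entrants score highest, not because the code mentions race,
# but because the input data already encoded a biased notion of beauty.
```

Any real system is far more complex, but its dependence on skewed input data is exactly the same.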

Policing, surveillance, and other real-world consequences

It could be argued that, although the examples provided so far display shameful discrimination, they are victimless; my next example will therefore focus on the real-life consequences of biased algorithms.

Historically, the relationship between ethnic minorities—in particular, black men—and the police has been characterised by intensive and invasive over-policing. A report by Brown (2021) unveils the disproportionate stop and searches that black men face (80% of whom are innocent), as well as the use of violence by the police, with black people “three times more likely” to be tasered than white people (Gayle, 2015). With predictive technology, this habitual racial profiling and police brutality are made far easier. Predictive policing refers to police work that utilises technology to forecast the people and places considered likely to be involved in crime in the near future (Sandhu and Fussey, 2021). Ironically, the goal is to make policing fairer by neutralising the subjectivity of police decisions; the problem, of course, is that policing has routinely targeted marginalised communities, and so inputting unfair data will yield unfair results. For example, a case study conducted by Piliavin and Briar (1964) found that police decisions to arrest young people are primarily based on physical cues, such as ethnicity and class (indicated by clothing); as poor, ethnic-minority men fit best into the police notion of the ‘typical offender,’ they are the most likely to be arrested. A more recent study by Dabney et al. (2017) supports these findings. In contrast, young people from higher classes (not viewed as the ‘typical offender’) are less likely to be arrested, and in the rare cases that they are, they are more likely to talk their way out of being charged (Cicourel, 2017). As labelling theorists put it, police socially construct criminals based on preconceived stereotypes and prejudiced typifications (Muncie, 2010, pp. 139–152), viewing the same behaviour differently depending on who the offender is. This skews arrest statistics, making crime appear to be a predominantly ethnic-minority, working-class phenomenon. When these results are input into predictive software, police officers are instructed by the algorithms to, once again, target poor, ethnic-minority areas. Rather than making policing more objective, predictive policing “facilitates amplifications of prior surveillance practices” (Brayne, 2017), only this time the guise of neutrality prevents officers from taking accountability for targeting only a certain population. As Citron and Pasquale (2014) put it, human judgment is hidden in the black box.
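The feedback loop described above can be illustrated with an equally minimal, hypothetical simulation (again in Python, not modelling any real predictive-policing product; the areas, numbers, and detection assumptions are invented). Two areas have identical underlying offending, but one starts out over-policed and therefore has more recorded crime; because patrols are then allocated according to past records, the disparity never corrects itself.

```python
# Toy simulation of the predictive-policing feedback loop (hypothetical data):
# patrols are sent where crime was previously *recorded*, but recording itself
# depends on where patrols are, so an initial disparity is locked in.

TRUE_OFFENCES = {"area_A": 500, "area_B": 500}   # identical real offending
recorded      = {"area_A": 100, "area_B": 200}   # area_B starts over-policed
TOTAL_PATROLS = 100

for year in range(1, 6):
    total_recorded = sum(recorded.values())
    # The "predictive" step: allocate patrols in proportion to past records.
    patrol_share = {a: recorded[a] / total_recorded for a in recorded}
    patrols = {a: round(TOTAL_PATROLS * patrol_share[a]) for a in recorded}
    # Detection (and hence next year's records) is proportional to patrol presence.
    recorded = {a: TRUE_OFFENCES[a] * patrol_share[a] for a in recorded}
    print("year", year, "patrols:", patrols,
          "recorded:", {a: round(c) for a, c in recorded.items()})
```

Every year the simulation records twice as much crime in area_B as in area_A and sends it twice the patrols, even though the two areas offend at identical rates; the apparently neutral numbers simply reproduce the original pattern of over-policing, which is precisely Brayne’s (2017) point.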

It is also important to note that, regardless of police efforts—and even if the police were not institutionally racist, as critical race theorists argue they are (Alang, 2018)—crime data will always be biased, as it is impossible to record all crimes since not all are reported. In addition, crimes that take place in public (like vandalism) are more visible to the police and more likely to be recorded (Duster, 1997, pp. 260–87), whilst crimes like tax evasion and bribery (often committed by the rich) are less likely to be detected. Undoubtedly, missing crime data combined with patterns of over-policing has resulted in inaccurate and discriminatory algorithms.

Similarly, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is used in the US criminal justice system to predict whether convicted offenders are likely to reoffend; this is determined by the answers given to 137 questions relating to various aspects of the offender (e.g., their age, gender, and geography). COMPAS then rates the risk of recidivism on a scale of 1–10 (1 being very unlikely and 10 being very likely) (Steege, 2021). As with other forms of predictive policing, it is evident that racial profiling is characteristic of this software. In 2014, Brisha Borden, an 18-year-old black woman, was arrested for attempting to steal a bicycle worth around $80. That same year, Vernon Prater, a 41-year-old white man, was arrested for robbing a store. Despite having previously been convicted of armed robbery, Prater was assessed as unlikely to reoffend (with a score of 3), whilst Borden—who had no past history of criminality—was deemed at high risk of recidivism (with a score of 8) (Fortes, 2020). On the whole, I believe it is rather clear that predictive technologies reproduce existing patterns of discrimination; algorithms create self-perpetuating cycles that prevent social progress.

The digital divide and social exclusion

So far, the focus has been on programming causing social inequality, but it is also important to draw attention to the fact that technology is not accessible to all. The ‘digital divide’ (the gap between those who own technology or have internet access and those who do not) has equally perpetuated inequality, specifically reinforcing class divides. The concept links directly to Schiller’s (1996) theory of information inequality and Tichenor et al.’s (1970) research on the knowledge gap; whilst in the past not knowing how to read was a hindrance, today a lack of access to the media can act in the same way. For example, Gomez (2020) reports that, despite being among the most vulnerable to contracting COVID-19, some homeless Americans remained oblivious to the virus, since they had no way of finding out about it.

That is not the only problem the digital divide has caused the homeless. Ragnedda and Ruiu (2020), taking a Bourdieusian perspective, introduce the concept of “digital capital”, arguing that those without access to technology are at a socio-economic disadvantage since they are unable, for example, to research and learn, apply for jobs, or communicate with others. This digital exclusion leads to social exclusion, ultimately reducing “life chances” (a term coined by Weber (1978) referring to the opportunities one has to improve one’s quality of life).

Cashless societies and their impact on homeless individuals: A deliberate oversight?

Additionally, the recent shift towards contactless payment has resulted in fewer (housed) people carrying change, rendering many unable or less willing to give to homeless people (Noone, 2018); and when homeless people are given change, they are increasingly unable to spend it. Since this is a recent shift, there is little literature on the effects of a cashless society on the homeless, though it is certain to exacerbate their financial difficulties, reinforcing class hierarchies. Though it may seem over-sceptical, it is difficult to believe that, during the switch towards a cashless society, the homeless were simply forgotten; it seems, instead, to have been a deliberate choice to further disenfranchise an already unprotected population. Visible methods of reducing the number of homeless people in public spaces, such as ‘anti-homeless spikes’ (metal studs that discourage rough sleeping), have been increasingly installed in urban areas (Petty, 2016). If the government is willing to use hostile architecture against the homeless, why should we assume it is moral enough not to use technology to harm them as well?

Conclusion

Overall, it is abundantly clear that technology—whether through its unequal distribution or its biased coding—reinforces social inequalities. This is particularly concerning since cuts to public education and libraries under austerity policies only deepen our reliance on technology. So, what can be done? The way in which technology reinforces inequality is simply a reflection of society: social stereotypes were not created by search engines, and the homeless were not socially included before the invention of the iPhone. Technology does not create social inequalities but exacerbates them, and so the only way to prevent future discriminatory technology is to create transformative structural change in the real world. It is impossible to create unbiased algorithms if the creators of those algorithms hold deep-rooted prejudices. Likewise, the digital divide faced by the homeless is a problem much bigger than a lack of access to technology; in a country as rich as Britain, why are resources distributed so unequally?

References

Alang, S. (2018) ‘The more things change, the more things stay the same: race, ethnicity, and police brutality’, American Journal of Public Health, 108(9), pp. 1127–1128.

Benjamin, R. (2019) Race after technology: Abolitionist tools for the new Jim code. John Wiley & Sons.

Brayne, S. (2017) ‘Big data surveillance: The case of policing’, American Sociological Review, 82(5), pp. 977–1008.

Brown, J. (2021) Police powers: An introduction, House of Commons Library. Available at: https://researchbriefings.files.parliament.uk/documents/CBP-8637/CBP-8637.pdf (Accessed: 20 April 2022).

Cicourel, A.V. (2017) The social organization of juvenile justice. Routledge.

Citron, D.K. and Pasquale, F. (2014) ‘The scored society: Due process for automated predictions’, Washington Law Review, 89(2), pp. 1–34.

Consumer Watchdog (2013) Consumers Are Charged More As A Result of Google’s Search Monopoly. Available at: https://consumerwatchdog.org/sites/default/files/2019-02/Google_Cheats_Consumers.pdf (Accessed: 20 April 2022).

Dabney, D.A., Teasdale, B., Ishoy, G.A., Gann, T. and Berry, B. (2017) ‘Policing in a largely minority jurisdiction: The influence of appearance characteristics associated with contemporary hip-hop culture on police decision-making’, Justice Quarterly, 34(7), pp. 1310–1338.

Duster, T. (1997) Crack in America: Demon drugs and social justice. Edited by C. Reinarman and H.G. Levine. Berkeley: University of California Press

Flensted, T. (2023) How many people use Google? Statistics & facts. Available at: https://seo.ai/blog/how-many-people-use-google#:~:text=How%20Many%20People%20Use%20Google%20a%20Day%3F,2%20trillion%20global%20searches%20annually (Accessed: 03 January 2024).

Fortes, P.R.B. (2020) ‘Paths to digital justice: Judicial robots, algorithmic decision-making, and due process’, Asian Journal of Law and Society, 7(3), pp. 453–469.

Garcia, M. (2016) ‘Racist in the Machine’, World Policy Journal, 33(4), pp. 111–117.

Gayle, D. (2015). ‘Black people ‘three times more likely’ to be Tasered’, The Guardian, 13 October. Available at: https://www.theguardian.com/uk-news/2015/oct/13/black-people-three-times-more-likely-to-have-taser-used-against-them (Accessed: 20 April 2022).

Gill, R. (2007) Gender and the Media. Cambridge: Polity.

Parker, L. and Gillborn, D. eds. (2020) Critical race theory in education. New York: Routledge.

Gomez, J. (2020) ‘Some homeless Americans unaware of coronavirus as shelters scramble to adhere to guidelines’, ABC News, 10 April. Available at: https://abcnews.go.com/Health/homeless-americans-unaware-coronavirus-shelters-scramble-adhere-guidelines/story?id=69962473&msclkid=284d94a7c6ad11ec978c9c0bc32a45ad (Accessed: 20 April 2022).

Google Trends (2024) Google Trends. Available at: https://trends.google.com/trends/trendingsearches/daily?geo=GB&hl=en-US (Accessed: 03 January 2024).

Howard, A. and Borenstein, J. (2018) ‘The ugly truth about ourselves and our robot creations: the problem of bias and social inequity’, Science and Engineering Ethics, 24(5), pp. 1521–1536.

Kabir Alli [@iBeKabi] (2016) [Twitter], June. Available at: https://twitter.com/iBeKabil/status (Accessed: April 2022). (Account currently suspended)

Levin, S. (2016) ‘A beauty contest was judged by AI and the robots didn't like dark skin’, The Guardian, 8 September. Available at: https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people?msclkid=88c0ecf6c6ad11ec8a4dc8d0af7e8770 (Accessed: 20 April 2022).

Mager, A. (2012) ‘Algorithmic ideology: How capitalist society shapes search engines’, Information, Communication & Society, 15(5), pp. 769–787.

Muncie, J. (2010) The Sage Handbook of Criminological Theory. Edited by E. McLaughlin and T. Newburn. London: Sage.

Noble, S.U. (2018) Algorithms of oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Noble, S.U. (2012) ‘Missed connections: What search engines say about women’, Bitch Magazine (54; Spring), pp. 36–41.

Noone, G. (2018) ‘'Sorry, I've only got my card': can the homeless adapt to cashless society?’, The Guardian, 27 February. Available at: https://www.theguardian.com/cities/2018/feb/27/card-cashless-society-homeless-contactless-payments-britain?msclkid=08fd8341c6ae11ec918fa4bf112c58f0 (Accessed: 23 April 2022).

Petty, J. (2016) ‘The London spikes controversy: Homelessness, urban securitisation and the question of ‘hostile architecture’’, International Journal for Crime, Justice and Social Democracy, 5(1), pp. 67–81.

Piliavin, I. and Briar, S. (1964) ‘Police encounters with juveniles’, American Journal of Sociology, 70(2), pp. 206–214.

Ragnedda, M. and Ruiu, M.L. (2020) Digital capital: A Bourdieusian perspective on the digital divide. Bingley: Emerald Publishing.

Reich, M. (2017) Racial inequality: A political-economic analysis. Princeton: Princeton University Press.

Sandhu, A. and Fussey, P. (2021) ‘The ‘uberization of policing’? How police negotiate and operationalise predictive policing technology’, Policing and Society, 31(1), pp. 66–81.

Schiller, D. (1996) Theorizing communication: A history. New York: Oxford University Press.

Steege, H. (2021) ‘Algorithm-based discrimination by using artificial intelligence. Comparative legal considerations and relevant areas of application’, Eur. J. Privacy L. & Tech., 2021(1), pp. 56–71.

Taplin, J. (2017) Move fast and break things: How Facebook, Google, and Amazon have cornered culture and what it means for all of us. London: Pan Macmillan.

Tichenor, P.J., Donohue, G.A. and Olien, C.N. (1970) ‘Mass media flow and differential growth in knowledge’, Public Opinion Quarterly, 34(2), pp. 159–170.

Weber, M. (1978) Max Weber: selections in translation. Cambridge: Cambridge University Press.

©Safiyyah Esat. This article is licensed under a Creative Commons Attribution 4.0 International Licence (CC BY).
