The rise of Data Capitalism

by Alessandra Palmeri - 28 February 2021

  from Lisbon, Portugal

   DOI: 10.48256/TDM2012_00170

What is data capitalism?

Data capitalism, commonly known as surveillance capitalism, is the unilateral claiming of private human experience as free raw material for translation into behavioural data. It is an economic system in which personal data is treated as a source of profit. Surveillance capitalism arose, and reached its peak, when advertising companies understood the possibilities of using personal data to target consumers more effectively.

Data collection underpins the rise of self-optimization: the process by which individuals, services and societies continuously adjust their behaviour on the basis of measured data in pursuit of better performance. In this way, the reach of capitalism has expanded, drawing an ever larger proportion of social life into data collection and data processing, with major consequences for the control of society and for citizens’ privacy.

The constant economic pressures of capitalism have escalated online monitoring and raised the value of data, with the ultimate goal of generating profit and opening social life up to saturation by data extraction. If communication and information have historically been a key source of power (Castells, 2007), data capitalism results in an asymmetrical distribution of power, weighted toward the actors who have access to data and the capability to make sense of it, for better or worse.

 

A short framework of data capitalism

Although the terminology sounds quite modern, its roots were established in the late 17th century, when England adopted the concept of ‘political arithmetic’: a system that applied numbers to social problems in order to better understand everyday life. In the same period, the Dutch East India Company employed foreigners from South East Asia to translate the cultural practices of colonial subjects into quantifiable categories that Western colonizers could use for social control.

In the 19th century, commercial credit agencies began to develop surveillance networks as a means of evaluating and monitoring the credit of American businesses. By the 1870s, this had evolved into elaborate systems of tracking individuals for the provision of consumer credit (Lauer, 2010). These early cases attributed both political and monetary value to the collection of personal data.

The introduction of database computing substantially augmented corporations’ capacity to collect and file data about individuals. Growth in the use of surveys and polls in the 1950s and 1960s sought to render the post-war “mass society” intelligible as a consumer public to researchers, political pollsters, and marketers (Igo, 2007). By the 1980s, the collection of data about consumers had become largely automated through the recording of credit card purchases and telephone calls.

 

Stepping through the modernization of data

The introduction of Internet commerce brought a new scope and scale of tracking that proved transformative for data collection practices.

Initially, online commerce focused on selling goods over the Internet, seeking profit from the anticipated growth in Internet users. Profitability did not follow, however, and the dotcom businesses came to rely heavily on venture capital investment to survive. The dotcoms took the world by storm in the late 1990s, with valuations rising faster than in any other industry in recent memory, before the bubble burst and profits collapsed in the crash of 2000–2001.

Following the crash, there was demand for new business models that would reshape e-commerce in ways that could leverage the interactivity of Web 2.0. Forrester analyst Mary Modahl had already anticipated its holy grail: “Every day, the Internet generates a mind-boggling amount of new data. Every log-on, every click, every Web site registration, and every e-mail creates a trace of data on a computer. But no one has figured out how to use this information . . . a company that develops the ability to act quickly on data that it collects from the Internet will possess a hard-to-copy advantage.” (Modahl, 2000, p. 137)

 

Orienting data toward profit

Some of the first experiments in leveraging user data were made by IBM with its EasiOrder program, which allowed bots to take over the process of shopping for groceries and bartering over the price of goods. A startup music company, Firefly Network, used “intelligent agent” software developed by MIT researchers to predict which CDs users might like to purchase, based on data collected from their online activities.

Amazon used similar software to make book recommendations to its customers, based both on their own purchases and on the buying patterns of other customers with similar tastes. Notably, Amazon’s CEO referred to the company as more than a retailer: it was an “artificial intelligence company”. This artificial intelligence discourse posits the collection of user data as a means to a benevolent end: the use of technology to augment human capacities.
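To illustrate the logic behind such recommendation software, here is a minimal sketch of item-based collaborative filtering, assuming a toy purchase history: items bought by overlapping sets of customers are treated as similar, and a user is recommended the items most similar to those they already own. The data, function names and similarity measure are invented for illustration and are not Amazon’s or Firefly’s actual systems.

```python
from collections import defaultdict

# Toy purchase history: user -> set of purchased item IDs (illustrative data only).
purchases = {
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_b", "book_c", "book_d"},
    "carol": {"book_a", "book_d"},
}

def jaccard(a, b):
    """Similarity of two sets of buyers: overlap divided by union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(user, purchases, top_n=3):
    """Rank items the user has not bought by their similarity to items already bought."""
    # Invert the data: item -> set of users who bought it.
    buyers = defaultdict(set)
    for u, items in purchases.items():
        for item in items:
            buyers[item].add(u)

    owned = purchases[user]
    scores = defaultdict(float)
    for candidate, candidate_buyers in buyers.items():
        if candidate in owned:
            continue
        for item in owned:
            scores[candidate] += jaccard(candidate_buyers, buyers[item])

    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", purchases))  # ['book_d']
```

The key point is that every recommendation is derived entirely from the behavioural traces of other users, which is precisely what gives accumulated purchase data its commercial value.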

These experiments leveraged technologies unique to the World Wide Web. Cookie technologies, for example, originally developed to enable a site to remember a returning visitor, were repurposed to make activities like collecting items for purchase in a web “shopping cart” possible.

Reporters at The Wall Street Journal later found that the 50 most-visited sites on the Internet placed more than 3,000 tracking files on a test computer, with Dictionary.com the most active site in terms of tracking technology placement (Angwin, 2010). The incorporation of cookie technologies into web browsers laid the groundwork for advertising to become the “business model” of the Internet.
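As a minimal sketch of the underlying mechanism, the snippet below uses Python’s standard library to issue a persistent identifier in a cookie and read it back on a later request; the cookie name, lifetime and values are illustrative assumptions rather than the format used by any particular site or tracker.

```python
from http import cookies
import uuid

# First response: the server assigns a persistent identifier to a new visitor.
jar = cookies.SimpleCookie()
jar["visitor_id"] = uuid.uuid4().hex                 # illustrative cookie name
jar["visitor_id"]["max-age"] = 60 * 60 * 24 * 365    # persist for roughly a year
jar["visitor_id"]["path"] = "/"
print(jar.output())  # the Set-Cookie header sent back to the browser

# Later request: the browser returns the cookie, so the site (or a third-party
# domain embedded in the page) can recognise the same visitor and link this
# visit to earlier activity, such as a saved shopping cart or browsing history.
incoming = cookies.SimpleCookie()
incoming.load("visitor_id=" + jar["visitor_id"].value)
print(incoming["visitor_id"].value)
```

The same mechanism that keeps a shopping cart alive between page loads also allows a third-party domain embedded across many sites to recognise one browser everywhere it appears, which is what turned the cookie into a tracking instrument.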

 

Data capitalism today: Google’s and Facebook’s market behaviour

A new group of third-party advertising companies formed a market ecosystem that treated data as a commodity to be sold and circulated: the data brokers. The data broker industry is both highly complex and relatively opaque. Data brokers act in ways that obfuscate the sources of their data, buying information from other brokers and making it difficult for individuals to retrace the paths through which their own data were collected.

This network of control is highly concentrated around two companies, Google and Facebook, which own the 10 most-loaded third-party domains appearing on the million most-visited sites. In its early days, Google used the keywords that people typed in to improve its search engine, while paying scant attention to the collateral data that came with them, such as users’ keyword phrasing, click patterns and spellings. Pretty soon, however, Google began harvesting this surplus information, along with other details like users’ web-browsing activities, to infer their interests and target them with ads. The model was later adopted by Facebook.
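To make this targeting logic concrete, here is a simplified sketch, under invented assumptions, of how behavioural signals can be aggregated into an interest profile and matched to an ad: each observed event contributes weights to a set of topics, and the ad targeting the highest-weighted topic is selected. The events, topics and ad inventory are hypothetical, and real systems are vastly more elaborate.

```python
from collections import Counter

# Hypothetical mapping from observed events (searches, page visits)
# to inferred interest topics and weights.
EVENT_TOPICS = {
    "searched: best running shoes": {"fitness": 2, "shopping": 1},
    "visited: marathon-training-blog": {"fitness": 3},
    "visited: credit-card-comparison": {"finance": 2, "shopping": 1},
}

# Hypothetical ad inventory keyed by the topic each ad targets.
ADS = {
    "fitness": "Ad: new trail-running shoes",
    "finance": "Ad: cashback credit card",
    "shopping": "Ad: weekend sale",
}

def build_profile(events):
    """Aggregate per-event topic weights into an interest profile."""
    profile = Counter()
    for event in events:
        profile.update(EVENT_TOPICS.get(event, {}))
    return profile

def select_ad(profile):
    """Pick the ad targeting the user's highest-weighted topic."""
    if not profile:
        return "Ad: generic brand campaign"  # fallback when there is no signal
    top_topic, _ = profile.most_common(1)[0]
    return ADS.get(top_topic, "Ad: generic brand campaign")

events = ["searched: best running shoes", "visited: marathon-training-blog"]
profile = build_profile(events)
print(profile)             # Counter({'fitness': 5, 'shopping': 1})
print(select_ad(profile))  # Ad: new trail-running shoes
```

Even this toy version shows why the “surplus” behavioural data matters: the more events a platform observes, the finer-grained the profile and the more precisely ads can be matched to it.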

Facebook’s algorithm categorizes each user’s interests and lists them on a “Your ad preferences” page. A study by the Pew Research Center found that 74% of Facebook users did not know that this list of their traits and interests existed. When directed to the “ad preferences” page, the large majority of users (88%) found that the site had generated some material for them. A majority of users (59%) said these categories reflect their real-life interests, but 27% said they are not very or not at all accurate in describing them, and 51% said they are not comfortable that the company created such a list (Hitlin & Rainie, 2019).

When it comes to politics, about half of Facebook users (51%) are assigned a political “affinity” by the site. Of these users, 73% say the platform’s categorization is at least somewhat accurate, while 27% say it describes them not very or not at all accurately.

 

Which policies protect our data?

The ability to control what information is revealed about oneself over the Internet, and who can access that information, has become a growing concern. These concerns include whether email can be stored or read by third parties without consent, whether third parties can track the websites someone visits, and whether the websites one visits can collect, store, and possibly share personally identifiable information about users.

Laws and regulations related to privacy and data protection are constantly changing, so it is important to keep abreast of legal developments and to continually reassess compliance with data privacy and security regulations. The legal protection of the right to privacy in general, and of data privacy in particular, varies greatly around the world.

Over 80 countries and independent territories have now adopted comprehensive data protection laws, including nearly every country in Europe and many in Latin America and the Caribbean, Asia, and Africa (Greenleaf, 2012). In the European Union, the General Data Protection Regulation (GDPR) has been in force since May 25, 2018. The United States is notable for not having adopted a comprehensive information privacy law, relying instead on limited sectoral and state laws such as the California Consumer Privacy Act (CCPA).

These laws typically set out broad provisions and principles governing the collection, storage and use of personal information, including lawfulness, transparency and accuracy. Yet although data has been an important economic resource for more than 20 years, data protection policies have been operating effectively only in the last five years: the GDPR was implemented only in 2018, and others, such as the Data Protection Act, took effect only on January 1st, 2021.

 

Conclusion

We rushed to the Internet expecting empowerment, the democratization of knowledge and help with real problems, but surveillance capitalism proved simply too lucrative to resist. This economic logic has now spread beyond the tech companies to new surveillance-based ecosystems in virtually every economic sector: from insurance to automobiles, health, education and finance, to every product described as “smart” and every service described as “personalized”.

Surveillance capitalism, invented by Google in 2001, benefitted from a couple of important historical windfalls. One is that it arose in an era of neoliberal consensus around the superiority of self-regulating companies and markets, when state-imposed regulation was considered a drag on free enterprise. A second windfall is that it was invented in 2001, the year of 9/11: in the days leading up to that tragedy, new legislative initiatives around privacy were being discussed, some of which might well have outlawed practices that later became routine operations of surveillance capitalism (Zuboff, 2019).

Moreover, we now depend upon the Internet just to participate effectively in our daily lives, yet the protection we are guaranteed has fallen far behind the amount of personal time we spend online.

It is time to react and reclaim our own privacy. 

 

References 

Dativa. (2018, June 7). Adopting a Virtual Data Protection Officer. Retrieved June 11, 2018.

State of California, Department of Justice, Office of the Attorney General. (2018, October 15). California Consumer Privacy Act (CCPA). Retrieved July 2, 2020.

Data Protection and Privacy Laws. (n.d.). Identification for Development (ID4D), World Bank. https://id4d.worldbank.org/guide/data-protection-and-privacy-laws

Greenleaf, G. (2012, February 6). Global Data Privacy Laws: 89 Countries, and Accelerating. Social Science Electronic Publishing. SSRN 2000034.

Hitlin, P., & Rainie, L. (2019, January 16). Facebook Algorithms and Personal Data. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2019/01/16/facebook-algorithms-and-personal-data/

Keller, M., & Neufeld, J. (2014). Terms of Service: Understanding Our Role in the World of Big Data. New York, NY: Al Jazeera.

Levy, S. (2011). In the Plex: How Google Thinks, Works, and Shapes Our Lives. New York, NY: Simon & Schuster.

Singer, N. (2019, January 18). The Week in Tech: How Google and Facebook Spawned Surveillance Capitalism. The New York Times. https://www.nytimes.com/2019/01/18/technology/google-facebook-surveillance-capitalism.html

The Guardian. (2019, January 20). ‘The goal is to automate us’: Welcome to the age of surveillance capitalism. https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook (accessed February 15, 2021).

West, S. M. (2017). Data Capitalism: Redefining the Logics of Surveillance and Privacy. Business & Society, 58(1), 20–41. https://doi.org/10.1177/0007650317718185

***

Author of the article*: Alessandra Palmeri, specialized in International Relations at Nankai University in Tianjin and holder of a degree in Languages, Cultures and Societies of Asia and Mediterranean Africa from Ca’ Foscari University of Venice.

***


Editor’s Note – Think Tank Trinità Dei Monti

As always, we publish our articles to encourage debates, and to spread knowledge and original and alternative points of view.

* The contents and the opinions of this article belong to the author(s) of this article only.
