Technology self-determination milestones in 2020

The amount of data generated and the overall reliance on internet services swelled because of the pandemic. And because technology – specifically black-box systems – continues to affect the political, industrial and personal spheres of modern life, citizen and state actors in Europe have pushed back in ways that will play a key role in potentially rewriting the future of the internet.

In the 9-12 months since lockdown and social distancing measures were introduced around the world, usage of online government, education and collaboration services grew to unprecedented levels. E-commerce alone grew more from March to April 2020 than in the previous 7 years, and the high demand for streaming pushed Netflix and YouTube to lower video quality in Europe.

These changes, and the continued explosion of data generated as a result, will most likely be structural, not temporary. Because of this, the already present distrust of big tech platforms in Europe (on privacy, security or political interference grounds) might also develop in unknown ways. What follows is a retrospective that tries to understand where it might go.

Protesting an algorithm

In March the UK government closed schools nationwide, and in April it cancelled all 2020 secondary examinations. To determine the qualification grades for this generation of students, the UK's exams regulator Ofqual designed an algorithm that relied on hybrid input from humans (teachers) and machines. The algorithm would take teacher-predicted grades for A Level and GCSE qualifications and standardise, or moderate, them to ensure fairness.

In total, it followed nine steps, including taking into account a school's historic results and predicting the achievement of previous and current students, before arriving at a result (thank you to Jeni Tennison, CEO of the Open Data Institute, which was founded by Tim Berners-Lee, for the explanation).
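To make the moderation idea concrete, here is a minimal, hypothetical sketch in Python. It is not Ofqual's actual model – the real one had many more steps and safeguards, and the `moderate` function, the quota logic and the student names below are all invented for illustration. It simply takes a teacher-supplied ranking of a cohort and distributes grades so that they match the school's historical grade distribution.

```python
from collections import Counter

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def moderate(teacher_ranking, historical_grades):
    """Assign grades to students ranked by their teachers (best first)
    so the cohort's grades follow the school's historical distribution.
    Purely illustrative; not Ofqual's real model."""
    n = len(teacher_ranking)
    hist = Counter(historical_grades)
    total = sum(hist.values())
    # Quota per grade for the current cohort, proportional to the
    # historical share of that grade at this school.
    quotas = {g: round(hist[g] / total * n) for g in GRADES}
    results, i = {}, 0
    for grade in GRADES:
        for _ in range(quotas.get(grade, 0)):
            if i < n:
                results[teacher_ranking[i]] = grade
                i += 1
    # Students left unassigned by rounding get the lowest grade.
    for student in teacher_ranking[i:]:
        results[student] = GRADES[-1]
    return results

# Example: a school whose past results were mostly Bs and Cs pulls
# this year's teacher-ranked cohort towards that same distribution.
cohort = ["Asha", "Ben", "Chloe", "Dev", "Ema", "Finn"]
history = ["A", "B", "B", "C", "C", "C", "D", "E"]
print(moderate(cohort, history))
```

Even this toy version shows where the unfairness creeps in: a strong student at a school with historically weak results cannot receive a grade the distribution does not "allow", no matter what their teacher predicted.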

Fast forward to August, when the final results came out: 35.6% of grades were downgraded by one grade, 3.3% by two and 0.2% by three. Moreover, the new system disproportionately affected pupils from disadvantaged and ethnic minority backgrounds – evidenced by the gap between public and private school performances.

Because of the algorithm, thousands of students were left without university places or promised internship opportunities. In the following days students began demonstrating, and parents threatened legal action. Shortly afterwards the government revoked the results and announced that grades would be based on teacher estimates rather than the algorithm. Protesting worked.

The bigger picture of algorithmic injustice

Rogue algorithms are commonplace. There are many instances – and by now an academic field dedicated to analysing the effects – of racial discrimination by medical, law enforcement and financial services algorithms.

Last year, AlgorithmWatch wrote about the racial health bias of algorithms towards Black patients in Switzerland. A cited 2009 recommendation by the American Heart Association gives “non-black” people three additional points when calculating the risk of heart failure. MIT documented many instances where false positives by police algorithms resulted in wrongful convictions.

There's an even bigger – and by now more mature and diverse – body of literature on content and advertising algorithms. These control what information users get and whom they interact with. Ultimately, they shape how users think and the discourse they engage in (or don't engage in, i.e. remaining in a bubble).

Students demonstrating against an algorithm is significant because it is a demand that these systems be more transparent and fair. Needless to say, algorithms don't have agency – if an injustice occurred, it was by design or acquiescence of their creators, and that is the real issue to address.

Realising digital sovereignty

The feeling that technology platforms – especially US and Chinese Big Tech – are impacting ever more vital parts of our digital lives has also pushed European decision makers to accelerate the pursuit of technological self-determination.

In addition to not being at the mercy of foreign tech firms (and their countries' intelligence services via backdoors), European digital sovereignty aims to achieve two other things: preserving the values of privacy, dignity and security online, and shaping digitisation in an economically beneficial way.

Although different European heads of state had voiced this ambition on several occasions over the years, 2020 was different.

In her 2019 political guidelines for the next five years, Dr. Ursula von der Leyen, the President of the European Commission, outlined making “Europe fit for the digital age” as one of the goals. The Digital Services Act (DSA) and the Digital Markets Act (DMA) are two of the instruments for realising this; both were proposed in 2020 and will take shape in 2021.

The DSA aims to bring more transparency to online content. Among its most notable features is opening up the algorithms of tech companies with more than 45 million users in the EU for review by researchers, while preserving their proprietary nature. The DMA, on the other hand, addresses market power beyond “traditional” antitrust measures.

It will define gatekeepers, limit how they use data across services and level the playing field of the data economy for other, homegrown services – ultimately creating favourable market conditions for innovation and competition. Non-compliance can result in fines of up to 6% of annual turnover under the DSA and up to 10% under the DMA.

GAIA-X is another project initiated to reclaim Europe's digital sovereignty. It strives to be a secure cloud and edge computing layer with industrial use cases, aligned with the European principles of data protection, trust and availability.

Announced in October 2019 by German and French ministers, GAIA-X aims to be a supranational open-data project and counts Deutsche Telekom, Siemens and Orange among its founding members. The current schedule puts a prototype sometime in 2021.

The shared understanding

… among the different actors above is that, for a long time, Big Tech's risks were socialised and its governance privatised. It is not surprising that individuals and members of civil society affected by and concerned about the technology are speaking out against it, and that state actors are doing the same via their own tools.

The milestones observed here are the culmination of events and research that took place before 2020 but manifested that year. As 2021 unfolds, I hope to get the chance to read about other developments that are just as important and influential.