A short history of predictive mathematics

For thousands of years, humans have attempted to tell the future by gazing into the stars, throwing bone dice or reading palm lines. Yet for all that speculating and betting on the future, humans have, for the most part, been unable to calculate it. In 1654, however, a series of letters exchanged between Blaise Pascal and Pierre de Fermat changed the world we live in by opening up the possibility of predicting the future by mathematically assessing probabilities. Today, we are used to shaping our lives by calculating risks: How likely am I to find a good job if I study a certain subject at university? How much should I invest in a pension fund in order to live comfortably when I am old? Before Pascal and Fermat, this was an alien way of thinking. It is all the more familiar now, when we shape our world by letting algorithms calculate these risks for us.
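The question that prompted those letters was the so-called problem of points: how should the stakes of an interrupted game of chance be divided fairly, given the score so far? The short sketch below follows the counting argument Fermat proposed, using the textbook case of a player who needs two more wins against an opponent who needs three (figures chosen for illustration, not taken from the letters themselves): enumerate every way the remaining rounds could fall and pay each player the share of continuations they would have won.

    from itertools import product

    def share_for_player_a(a_needs, b_needs, p=0.5):
        """Fair share of the pot for player A in an interrupted game,
        following Fermat's counting argument: enumerate every possible
        continuation and sum the probability of those in which A reaches
        the target first. Rounds are independent with win probability p."""
        remaining = a_needs + b_needs - 1   # at most this many rounds settle it
        a_share = 0.0
        for rounds in product("AB", repeat=remaining):
            prob = 1.0
            for r in rounds:
                prob *= p if r == "A" else (1 - p)
            if rounds.count("A") >= a_needs:   # A would have won this continuation
                a_share += prob
        return a_share

    # A needs 2 more wins, B needs 3: A's fair share is 11/16 of the pot.
    print(share_for_player_a(2, 3))   # 0.6875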

In their exchange of letters, Pascal and Fermat created the mathematical foundations needed for predictive analytics. Predictive analytics describes the practice of “extracting information from existing data sets in order to determine patterns and predict future outcomes and trends”. A number of techniques are used to conduct predictive analytics today, including data mining, statistical modelling and machine learning, all of which analyse historical information – that is, data gathered in the past – in order to make predictions about what is unknown and what is to come. Predictive analytics can assess what is likely to happen based on the input data, but it cannot predict what will happen, even though we often make the mistake of believing it does exactly that.
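As a minimal illustration of that distinction – a sketch using scikit-learn and invented toy data, not any particular system discussed here – a model fitted to historical records returns a probability of an outcome, which is not the same as knowing what will happen.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented historical data: weekly study hours and whether the student
    # graduated (1) or dropped out (0).
    hours = np.array([[2], [4], [5], [8], [10], [12], [15], [18]])
    graduated = np.array([0, 0, 0, 1, 0, 1, 1, 1])

    model = LogisticRegression().fit(hours, graduated)

    # The model assesses what is *likely* for a new student: a probability
    # between 0 and 1, learned from the past...
    print(model.predict_proba([[6]])[0][1])
    # ...while predict() merely thresholds that probability at 0.5. Neither
    # call is knowledge of what this particular student will actually do.
    print(model.predict([[6]])[0])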

Today, we live in a world driven by prediction through data and algorithms. Every day, we let algorithms shape our decisions about which movie to watch or which stocks to invest in. Algorithms predict which advertisements we’re most likely to react to and what choices self-driving cars should make. Our data is gathered with or without our consent and harvested by data scientists who use it to “guess the future”. This is not a negative development per se. There are many socially beneficial use cases, such as the predictive models built for John Jay College of Criminal Justice in New York City to help identify which students are at risk of dropping out and to support them through to graduation. In our increasingly complex and information-laden world, algorithms can be valuable tools that help us understand what matters in our environment.

Why is relying on big data prediction a bad idea?

However, AI-based decision making can also perpetuate existing biases and further entrench the surveillance economy. Numerous cases have been documented of predictive policing gone wrong or of racist jail sentences being handed down on the basis of biased data. These examples (and many more that have gone undocumented) demonstrate the dangers of relying on simplistic data models in sensitive social environments. Data determinism does not only affect our lives in extreme situations such as law enforcement, but on a daily basis – through sexist, racist, ageist and ableist search results, hiring practices and healthcare design and provision. Because these systems are so normalised, it’s easy to play them down, allowing Silicon Valley companies to impose their policies as rules on society and individuals. For example, AirBnB states that every booking “is scored for risk before it’s confirmed. We use predictive analytics and machine learning to instantly evaluate hundreds of signals that help us flag and investigate suspicious activity before it happens.” Because this policy effectively allows the company to crawl the web for information, multiple users have been penalised for being “associated” with fake social network profiles, or with keywords, images or videos that indicated involvement with drugs, alcohol or sex work. This included multiple sex workers whose accounts were erased, despite their having used AirBnB solely for private, touristic purposes. Of course, any of the large tech platforms could have served as an example of this kind of use of predictive analytics – it is in no way particular to AirBnB.
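To see how association-based scoring can misfire, consider the following purely hypothetical sketch – AirBnB has not published its model, and the blocklist and scoring rule here are invented solely for illustration. A scorer that counts matches against a list of “risky” terms scraped from the web penalises people for what they are associated with, regardless of how they actually use the service.

    # Entirely hypothetical: an invented blocklist and a crude keyword counter.
    FLAGGED_TERMS = {"escort", "cam", "dispensary"}

    def risk_score(profile_text: str) -> int:
        """Count crude keyword 'signals' found in text scraped from the open web."""
        return sum(1 for word in profile_text.lower().split() if word in FLAGGED_TERMS)

    # Someone who books purely for tourism but is linked online to sex work
    # scores higher than a profile that reveals nothing at all.
    print(risk_score("independent escort and cam model, loves hiking"))   # 2
    print(risk_score("no public information available"))                  # 0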

The data-driven realities and futures we are creating are based on the data of the past. To break free of the backward-looking, deterministic structures we are programming today, we need to complement this data. In her TEDxCambridge talk, ethnographer and data scientist Tricia Wang explains how having more data does not help us “make better decisions” when we leave out important, contextualising perspectives. Instead, she argues for the humanisation of data – what she calls “thick data”: big data that has been enriched with non-quantifiable, qualitative data gathered from an ethnographic perspective that “delivers depth of meaning”. Wang draws this conclusion from her own experience in China in 2009, when she predicted the triumph of the smartphone over the feature phone. At the time, her client Nokia was unwilling to listen to the stories she had collected behind the data, clinging instead to the belief that people would not be willing to invest so much of their income in such a fragile device.

Polls and other forms of prediction, such as forecasting, fail when data is read without attention to more nuanced shifts in political allegiance and voter mobilisation – as in President Trump’s election and the UK’s vote for Brexit. In order to use data effectively, we have to learn to see what the data does not show us. As Wang warns, “There is no greater risk than being blind to the unknown”.

The power of speculation and the political imagination in glimpsing the unknown

By moving away from data and opening our eyes to the possibilities outside of what’s measurable, we can begin to glimpse what’s currently unknown. In his talk ‘The Political Tragedy of Data-Driven Determinism’, Mushon Zer-Aviv describes the deskilling that comes with the integration of digital services into our everyday lives. Does it matter if we forget how to do simple mental arithmetic, memorise phone numbers or read a map? Perhaps, but it is not acceptable for us to lose our ability to imagine different futures. Zer-Aviv stresses the importance of maintaining and training “our ability for political imagination”, reminding us that the 20th century showed how one person’s utopia might be another person’s worst nightmare. This is why we need to think of the future not as linear and deterministic, but as plural. And because we tend to find it easier to formulate non-desirable futures in the form of dystopias, we require tools to help us develop desirable futures.

Speculation is one such tool. It is the “forming of a theory or conjecture without firm evidence”, or “the activity of guessing possible answers to a question without having enough information to be certain”. In today’s data-driven society, speculation can be liberating. As Anthony Dunne and Fiona Raby argue in their book ‘Speculative Everything’:

We believe that by speculating more, at all levels of society, and exploring alternative scenarios, reality will become more malleable and although the future cannot be predicted, we can help set in place… factors that will increase the probability of more desirable futures happening… equally, factors that may lead to undesirable futures can be spotted early on and addressed or at least limited.

Professionals from disciplines as diverse as design, business development, gaming and political philosophy can provide such approaches. In recent years, a great number of methodologies and tools have been developed – some of which are featured in this publication – that invite us to speculate, imagine and create, rather than simply calculate, analyse and assess.

In 1952, around three hundred years after Pascal and Fermat’s correspondence enabled people to calculate probability, Christopher Strachey created what has been called the first piece of digital literary art: a combinatory love letter algorithm for the Manchester Mark 1 computer. Today, generative AI is at the top of the agenda for digital policy makers, activists and theorists. As a society, we are grappling with questions such as which tasks AI should be allowed to perform, how to tell the difference between works created by AI and by humans, and how much decision-making power AI should have over our lives. Even as we teach our machines to create, let us retain our own creativity and agency to explore what can be achieved through them, rather than relying on them to predict our futures according to our pasts.
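As a closing illustration of how simple that combinatory approach is, here is a short sketch in its spirit – the word lists and templates are invented stand-ins, far smaller than the vocabulary Strachey actually drew on.

    import random

    ADJECTIVES = ["darling", "tender", "precious", "wistful"]
    NOUNS = ["longing", "affection", "devotion", "sympathy"]
    VERBS = ["treasures", "yearns for", "clings to", "adores"]

    def love_letter() -> str:
        """Build a letter by slotting randomly chosen words into fixed templates."""
        lines = [f"My {random.choice(ADJECTIVES)} {random.choice(NOUNS)},"]
        for _ in range(3):
            lines.append(
                f"My {random.choice(NOUNS)} {random.choice(VERBS)} "
                f"your {random.choice(ADJECTIVES)} {random.choice(NOUNS)}."
            )
        lines.append("Yours, M.U.C.")   # the originals were signed 'M.U.C.'
        return "\n".join(lines)

    print(love_letter())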
