From Lock-In to Liberation: Escaping the Algorithm’s Worldview
In Part 2, I claimed that optimization is never neutral; it encodes assumptions about what counts as ‘best.’ In Part 3, I want to look at how these assumptions get locked in over time, creating systems that narrow our choices and shape our sense of reality itself.
Performativity, through the mechanisms I outlined above, effectively “locks in” the model’s worldview. This can lead to a phenomenon called path dependence, which has been studied extensively in fields like political theory and the theory of institutional evolution. Broadly, path dependence says that initial conditions, often contingent on seemingly random circumstances, can shape a system’s later development so strongly that other options become invisible or impractical: standardization and coordination problems make deviation prohibitively costly. Even without malicious intent, narrowing “best” causes unintended consequences. An example I particularly like is the QWERTY keyboard layout most people are familiar with. It was designed with typewriters in mind, but once the layout stuck, it became a norm so ingrained in tech culture that it’s hard to imagine a world with actively competing keyboard layouts. Does this mean QWERTY is the “best” or most “optimized” layout? Absolutely not! A simple historical contingency produced a decision with an outsized impact on tech as a whole. So when path dependence compounds with performativity, it becomes very easy to suspect that the “best” result is just the product of historical, seemingly random, circumstances plus the limits of our data and our ability to make useful predictions from it. A perfect example might be a disease whose occurrence is so rare that we can’t gather enough samples to accurately predict a proper diagnosis.
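To make that last point concrete, here is a minimal sketch of why rarity defeats naive optimization (toy data and a deliberately lazy “model”, not anything from a real diagnostic system): a classifier that chases raw accuracy can ignore the disease entirely and still look nearly perfect.

```python
import numpy as np

# Toy setup: a disease affecting roughly 1 in 10,000 people.
rng = np.random.default_rng(0)
n = 100_000
has_disease = rng.random(n) < 1e-4  # ground-truth labels

# A "model" that optimizes raw accuracy can simply predict
# "healthy" for everyone...
predictions = np.zeros(n, dtype=bool)

accuracy = (predictions == has_disease).mean()
recall = predictions[has_disease].mean() if has_disease.any() else 0.0

print(f"accuracy: {accuracy:.4%}")               # ~99.99% -- looks great
print(f"recall on sick patients: {recall:.0%}")  # 0% -- clinically useless
```

The metric says the model is excellent; the patients it was supposed to help are invisible to it.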
The French writer and philosopher Jean Baudrillard argued that simulations, images, and media can come to precede, and reshape, our perception of reality. These algorithms and models do something similar once deployed: they set out to predict, but end up being prescriptive, because their outputs dictate what the new standard is. The optimized version of ‘reality’ is the one most shown or generated by the models, reinforcing the idea that these chosen options are the only ones, or at least the “best” ones, and that you need not think any further. This is where we can see shades of the philosopher Guy Debord: the constantly regenerated results of algorithms and models create a ‘spectacle’, the surface we see and interact with in society. In this view, optimization could also be seen as a subtle form of self-governance, structuring choices without overt coercion. That kind of self-enforcing control falls right in line with the French “historian of ideas” Michel Foucault, whose Discipline and Punish breaks human history into eras defined by their methods of control. Algorithms and machine learning models could be the information age’s next logical step in that lineage. If you want a good credit score, there are specific things you need to do; you can’t just raise it any way you like. The simulation becomes the only visible world, structured and enforced through subtle systems of governance.
To help address the issue of performativity, we have two great examples that demonstrate ways to stay aware of these blind spots when developing machine learning algorithms. Returning to our first article, the predictive policing paper, the authors effectively solved their problem by introducing an element in the model that minimizes the effect of arrest incidents on the crime estimate for a region. This makes sense in their case: if we send more officers to a region, we should naturally expect more arrests there, simply because more officers are present to make them, so that portion of the data was itself influenced by the model and needs to be discounted (a hedged sketch of the idea follows below).

Another model that recently made some waves, known as the Darwin Gödel Machine, reduces the feedback loop differently. Rather than greedily keeping only its current best self-modification, it archives every new iteration of the models it creates and keeps exploring all of them, leaving room for ‘surprise’ solutions; that breadth is exactly what helps it avoid path dependence (also sketched below).

Other alternatives are worth considering too. Multi-objective optimization balances competing values against each other to produce multiple candidate solutions, forcing you to compare potential outcomes and ask how and why you’re optimizing for your specific metric (see the Pareto-front sketch below). Deliberately introducing diversity and noise into a system can also keep it from overfitting to, and then enforcing, its own past outputs (the last sketch below). If Baudrillard warns us about simulation and Debord about spectacle, then introducing diversity and noise into models is one way to resist that narrowing of reality. Even if you account for all of this, real-world incentives will still push your model toward misalignment, such as business pressures toward optimizing narrow KPIs.
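Here is a minimal sketch of that reweighting idea. To be clear, this is my own hypothetical illustration, not the paper’s actual correction: I assume arrests scale roughly linearly with patrol presence and simply divide that effect back out.

```python
import numpy as np

def reweighted_crime_estimate(observed_arrests, patrol_intensity):
    """Discount arrests that are partly an artifact of where the
    model already sent officers (hypothetical helper; assumes a
    linear relationship between patrol presence and arrests)."""
    observed_arrests = np.asarray(observed_arrests, dtype=float)
    patrol_intensity = np.asarray(patrol_intensity, dtype=float)
    # Normalize so a region with average patrol coverage is unchanged.
    relative_patrol = patrol_intensity / patrol_intensity.mean()
    return observed_arrests / relative_patrol

# Region B got twice region A's patrols and logged twice the arrests;
# after discounting, their estimated underlying rates come out equal.
print(reweighted_crime_estimate([10, 20], [1.0, 2.0]))  # [15. 15.]
```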
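And a toy sketch of the archive idea behind the Darwin Gödel Machine. The real system evolves coding agents with a language model; here the `mutate` and `score` callables are simplified stand-ins, and the uniform sampling from the archive is my own assumption, chosen just to show the contrast with greedy hill climbing.

```python
import random

def archive_search(seed, mutate, score, steps=200, rng=None):
    """Keep *every* variant in an archive and let any of them be a
    parent, instead of discarding everything but the current best.
    Weak-looking candidates stay available, so 'surprise' solutions
    reachable only through them are never locked out."""
    rng = rng or random.Random(0)
    archive = [seed]
    for _ in range(steps):
        parent = rng.choice(archive)  # sample from the whole archive,
        child = mutate(parent, rng)   # not just the reigning champion
        archive.append(child)         # nothing is ever thrown away
    return max(archive, key=score), archive

# Tiny demo: hunt for the x that maximizes a simple peaked function.
best, archive = archive_search(
    seed=0.0,
    mutate=lambda x, rng: x + rng.gauss(0, 1),
    score=lambda x: -(x - 3.0) ** 2,
)
print(f"best x = {best:.2f} out of {len(archive)} archived variants")
```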
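For multi-objective optimization, the central object is the Pareto front: the set of candidates that no other candidate beats on every objective at once. A small self-contained sketch, with hypothetical (error, unfairness) scores where lower is better on both:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows when minimizing every column.
    A point is dominated if some other point is at least as good on
    every objective and strictly better on at least one."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

# Hypothetical model candidates scored as (error, unfairness):
candidates = [(0.10, 0.9), (0.12, 0.4), (0.20, 0.2), (0.25, 0.5)]
print(pareto_front(candidates))
# The last candidate is dominated; the other three are genuine
# trade-offs, and choosing among them is a value judgment, not math.
```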
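Finally, one concrete way to introduce diversity and noise is epsilon-greedy exploration, a standard bandit-style technique (my choice of mechanism, not something prescribed by the articles above): serve a random item a small fraction of the time so the system keeps getting feedback about the world outside its current favorite.

```python
import random

def recommend(scores, epsilon=0.1, rng=None):
    """With probability epsilon, serve a uniformly random item;
    otherwise serve the model's top pick. The noise keeps rarely
    shown options circulating instead of letting the feedback loop
    bury them. (Minimal sketch; production systems tend to use more
    principled exploration such as Thompson sampling.)"""
    rng = rng or random.Random()
    items = list(scores)
    if rng.random() < epsilon:
        return rng.choice(items)       # explore
    return max(items, key=scores.get)  # exploit

scores = {"article_a": 0.91, "article_b": 0.88, "article_c": 0.15}
picks = [recommend(scores, epsilon=0.2, rng=random.Random(i))
         for i in range(1000)]
print({item: picks.count(item) for item in scores})
# article_a still dominates, but b and c never disappear entirely.
```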
The key takeaway I’d like you to carry from this series is that machine learning and algorithmic optimization are performative; they shape the world to match their definition of “best”. This is not a reason to conclude that these models are unhelpful or should never be built. But it is a reminder that creating an unbiased model is a vast, complex, and very difficult undertaking. Coupled with the rapid adoption of these algorithms and models in sensitive areas of society, that difficulty can have grave, unforeseen consequences. We must be diligent in ensuring that our models make as few unexamined assumptions as possible. This is just another challenge for software developers and data scientists to overcome, and it can be overcome; we just have to be aware of the blind spots in our analysis. Across this series, I’ve tried to show that algorithms don’t just reflect the world: they shape it, lock it in, and govern it. The challenge now is to design systems that widen possibilities rather than close them off. If the algorithms tell us what is “best”, how long before we forget any other ways of being?


