In Part 1, we saw how algorithms produce the realities they claim to measure. Here, in Part 2, I want to dig into what we mean when we say an algorithm is ‘optimized’, and why optimization never means a neutral or universal ‘best’.

There is a branch of philosophy known as post-structuralism. It comes out of a long philosophical history, but, as the name suggests, it is a response to the structuralists. So let’s look at some of the central claims of structuralism first; then we can more easily understand the post-structuralist viewpoint. Structuralists claim that there is an underlying structure inside everything: there is a structure to the way languages operate, and, as Claude Lévi-Strauss, a central figure in structuralism, argued in his books, there is even a structure to how things like meals are constructed in specific cultures. He analyzed these underlying structures through binary oppositions that recur across contexts: cooked/raw, nature/culture. Binary opposition is a structural idea at its core.

When it comes to natural language processing, for example, machine learning models don’t understand words in the traditional sense. Instead, they analyze the statistical relations between words across a large amount of training data. To a structuralist, this is evidence of a latent structure of language. Sentiment analysis follows the same structural logic: words like ‘good’ and ‘bad’ are understood by their position in opposition to one another. It’s precisely this kind of structure, the oppositions, that the structuralist studies. Optimization, in this framing, means “discovering” the most efficient or probable solution that already exists inside the structure.
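To make the NLP point concrete, here is a minimal sketch of how embedding models encode relations between words as geometry. The vectors below are hand-picked toys, not the output of a trained model, so the numbers are purely illustrative:

```python
import numpy as np

# Toy word vectors. In a trained model these come from statistics over a
# large corpus; here they are hand-picked so that sentiment polarity
# lives on the first dimension, purely to make the structure visible.
vectors = {
    "good": np.array([0.9, 0.1, 0.3]),
    "great": np.array([0.8, 0.2, 0.4]),
    "bad": np.array([-0.9, 0.1, 0.2]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: the standard measure of
    how 'related' two embedded words are."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words used in similar ways end up pointing in similar directions...
print(cosine_similarity(vectors["good"], vectors["great"]))  # ~0.98
# ...while in these toy vectors the opposition good/bad shows up as
# geometry: they point in opposite directions along the polarity axis.
print(cosine_similarity(vectors["good"], vectors["bad"]))    # ~-0.84
```

To a structuralist, the geometry is the latent structure of language made visible; the model merely uncovers it.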
In contrast to this, the post-structuralists deny that optimization is neutral. They claim that every act of “optimization” constructs the very field it claims to reveal. It isn’t finding the optimal path through a stable system; it’s deciding what counts as optimal in the first place. Let’s apply these two viewpoints to a modern social media site like TikTok. The structuralist reading would say the algorithm discovers which videos are inherently engaging. The post-structuralist reading, by contrast, would say the algorithm produces the conditions for engagement by privileging specific content and formats. What counts as “engaging”, metrics like comment count or watch duration, is a construction of the algorithm’s designers, and creators reshape their content to fit that mold, to appear engaging to the algorithm. Categories like engagement, best, or optimized aren’t natural; they’re definitions chosen by developers or institutions. Optimized for whom? Optimized for what?
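To see how “engagement” is a chosen definition rather than a discovered fact, consider a hypothetical ranking score. The metric names and weights below are invented for illustration; no platform publishes its actual formula:

```python
# A hypothetical engagement score; the metrics and weights are
# assumptions for illustration, not any platform's real formula.
def engagement_score(watch_seconds: float, comments: int, shares: int) -> float:
    # Every weight here is a design decision. Doubling the weight on
    # watch_seconds would "discover" a different set of "best" videos.
    return 0.5 * watch_seconds + 2.0 * comments + 3.0 * shares

videos = {
    "short_loop": engagement_score(watch_seconds=15, comments=40, shares=5),
    "long_essay": engagement_score(watch_seconds=300, comments=5, shares=1),
}

# Which video is "optimal" depends entirely on the weights chosen above.
print(max(videos, key=videos.get))
```

Change the weights and a different video becomes “the best”. The score doesn’t discover engagement; it defines it.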
From a post-structuralist lens, this means optimization isn’t universal; it only ever exists in a context, and it is goal- and data-dependent. All of these models are built by specific groups of people, for their specific purposes. Following that logic, it’s fair to say that the “best” or “optimized” result is simply the one that meets their chosen metric within the limits of their data. This is not always nefarious; sometimes it’s the result of technical trade-offs made for the performance of the model. But those trade-offs are still decisions, and decisions embed a perspective: why these trade-offs, and in whose interest? We must ask ourselves again: optimized for whom? Optimized for what?
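The same point shows up in everyday model selection. A small sketch, with invented numbers rather than real benchmarks:

```python
# Two hypothetical models evaluated on the same held-out data; the
# numbers are invented to illustrate the point, not real benchmarks.
models = {
    "model_a": {"accuracy": 0.94, "minority_recall": 0.41, "latency_ms": 12.0},
    "model_b": {"accuracy": 0.89, "minority_recall": 0.78, "latency_ms": 85.0},
}

# "Best" by overall accuracy: model_a.
print(max(models, key=lambda m: models[m]["accuracy"]))
# "Best" by recall on an under-represented class: model_b.
print(max(models, key=lambda m: models[m]["minority_recall"]))
# "Best" by speed (lower latency wins): model_a.
print(min(models, key=lambda m: models[m]["latency_ms"]))

# There is no single winner: the "optimized" model is whichever one
# satisfies the metric somebody chose to care about.
```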
Let’s take a closer look at exactly how these algorithms’ results go from prediction to prescription, creating feedback loops. Going back to the predictive policing paper from Part 1, the authors describe systems that rely on what is known as “batch analysis”: the model trains on data, and later, once newer data has been collected, it is updated by training on that new data. Systems like this are susceptible to feedback loops, because the newest data you train on has already been influenced by how you redistributed resources after the model’s first results. Your model ends up training on data reinforced by its own earlier outputs. A more familiar example might be social media ranking that optimizes for clicks, even when that has adverse effects on the diversity of ideas. Posts or videos that get lots of views get pushed to get even more views, and the feedback loop has already kicked off. Effects like these push reality toward what the model can measure and reward. These feedback loops actively produce the reality they are supposed to reflect; they don’t just change outcomes, they change what we think is possible.
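A toy simulation can make the loop mechanics visible. Everything below, the update rule, the numbers, the noise, is an assumption for illustration, not a model of any real system:

```python
import random

random.seed(0)

# A toy simulation of a batch-retrained ranker. Two items start with
# nearly identical underlying appeal; the one scored higher gets more
# exposure, which generates more engagement data, which the next
# training batch then "confirms".
appeal = {"item_a": 0.50, "item_b": 0.51}  # true, nearly equal quality
score = dict(appeal)                       # the model's learned scores

for batch in range(25):
    # Exposure is allocated in proportion to the model's current scores.
    total_score = sum(score.values())
    exposure = {k: 1000 * v / total_score for k, v in score.items()}
    # Observed engagement = exposure x underlying appeal, plus noise.
    engagement = {
        k: exposure[k] * appeal[k] * random.uniform(0.9, 1.1) for k in appeal
    }
    # "Batch analysis": retrain on the engagement data the model itself
    # shaped, by normalizing it into the next round of scores.
    total_eng = sum(engagement.values())
    score = {k: engagement[k] / total_eng for k in engagement}

# A 2% difference in appeal has snowballed into a much larger score gap,
# and the gap keeps widening with every retraining batch.
print(score)
```

The model’s scores aren’t wrong, exactly; they’re self-fulfilling. In Part 3, I’ll explore how this narrowing of possibility connects to ideas from Baudrillard, Debord, and Foucault: optimization as simulation, spectacle, and even governance.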
