2025-2026 Programming Theme
Learn more about our 2025-2026 theme and research focus, Artificial Stupidity.
The rise of generative machine learning technologies has provoked an onslaught of discourse, much of it producing more heat than light. A series of breakthroughs has given many–in tech, in finance, in the press, in academia–the impression that these technologies will continue to improve indefinitely. In fact, large tech firms seem dead set on catastrophic energy and water consumption to realize improvements, with diminishing financial and technical returns. The promise of improvement has been accompanied by savvy efforts to market such technologies as "artificial intelligence." Machines that really think seem to be just around the corner–perhaps perpetually so.
We at the Centre for Culture and Technology aren't very impressed with the hype.
Our programming for the 2025-26 year will be dedicated to the theme "Artificial Stupidity."
We mean this term evocatively, not strictly; it names no specific thing or attitude.
Rather, we welcome art and scholarship, research and conversation, that put pressure on the construction of the machine learning revolution as "artificial intelligence."
How have logic, thinking, or reasoning machines relieved us of the burden of thinking for ourselves?
How have people had to impoverish their sense of human thinking to imagine that current technologies are (always almost) capable of such thinking?
How can we understand the present of "AI" technologies in light of past moments of boosterism, vapourware, and unfulfilled promises?
What might we learn from past moments of technological critique and activism–from the Luddites' smashing of machines to the Frankfurt School's critique of instrumental reason, to 1960s and 1970s critiques of computer culture?
What might we do with machine learning technologies that goes beyond–or heads in an entirely different direction from–Silicon Valley and Big Tech ideas about these technologies: what they are, might become, or ought to do?
What kinds of intelligence, labour, and creativity must we bolster in the face of LLMs and image GANs?
We are emphatically not interested in "AI ethics" or "AI alignment"; the goal is not harm reduction.
We are interested, instead, in the politics, aesthetics, and economics of machine learning.
How, that is, might we foster worlds–even small, evanescent ones–that host other, more convivial ways of doing things with our computational machines?
But there's another way in which we might parse artificial stupidity.
McLuhan argued in Understanding Media and elsewhere that then-new "electric" technologies like television had obviated "literate man."
Linear thinking was no match for an increasingly interconnected world.
Forms of thought focused on text and depth and reflection could not grasp the polyrhythmic changes of a new technologized political and aesthetic economy.
Like Walter Benjamin some 30 years before him, McLuhan theorized arts of attention and forms of intelligence capable of grasping the new world new media were bringing into being. And both Benjamin and McLuhan understood that such techniques might, in fact, appear as a new kind of barbarism (to borrow Benjamin's term): uncultured, distracted, gossipy, ignorant–in a word, stupid.
Perhaps, in an age where you might find an LLM embedded in your dishwasher, we ought to foster new, critical, even activist, forms of stupidity.
And so, what kinds of stupidity might be necessary in a world dedicated to "smart" everything?
2025-26 Programming
- One Artist in Residence
- Faculty Fellows
- Graduate Fellows
- "Computer Class", the third iteration of our critical, creative, and historical computing institute, and the inaugural run of its sequel, "Computer Class 2"
- Regular programming of Monday Night Seminars