The Algorithmic Turn: Emergent Processes and the Reformation of Knowledge

This is a meditation on the shifting agency of algorithms—once confined to calculation, they have emerged as active forces in the generation of knowledge. It reflects on how this transformation unsettles conventional ideas of authorship, intention, and understanding, inviting us to reconsider the delicate interplay between human thought and machine influence in shaping our reality. It continues my earlier post Abstracted Intelligence: AI, Intellectual Labour, and Berkeley’s Legacy in Public Policy. A reading list is below.

The algorithm has quietly evolved from a tool of calculation into a generative force shaping the very terrain of knowledge. No longer confined to precise computation, it now participates actively in structuring how we understand, interpret, and create. As Wendy Chun demonstrates, these systems do more than process inputs—they habituate us, embedding themselves deeply into our cognitive and social rhythms. This evolution signals a fundamental reconfiguration of knowledge itself: no longer solely the product of human cognition or systematic observation, knowledge emerges through recursive, machine-driven processes that entwine human and computational agency.

At the heart of the algorithm lies a set of rules designed to produce outcomes, but its function has expanded far beyond problem-solving. Luciana Parisi’s insight into algorithmic speculation captures how these processes generate novelty and reshape aesthetic and epistemic landscapes rather than simply calculate or represent. Algorithms now inhabit artistic, cultural, and social realms where they do not merely answer questions but frame the very logic through which questions arise. As Alexander Galloway emphasizes, the algorithm operates at the level of interface—a mediator where legibility is constructed and constrained, and where meaning becomes both possible and limited. This shift subtly relocates authority: from human hands to encoded processes, from fixed categories to contingent and often opaque patterns.

The consequences of this shift are profound. Tarleton Gillespie’s work reveals the infrastructural labour behind these systems, which govern visibility and legitimacy in ways frequently invisible to those governed by them. Algorithms do not simply replace human decisions; they reconfigure the conditions of decision-making itself, often beneath the surface. Their generative capacity introduces complexity and opacity, producing outcomes that exceed the understanding of their creators. These recursive patterns complicate verification and accountability, exposing a form of epistemic vulnerability that challenges traditional frameworks for knowledge and governance.

Expanding this perspective, Benjamin Bratton situates algorithms within a planetary computational architecture that transcends local or institutional boundaries, reconfiguring sovereignty, cognition, and identity at a global scale. This shift implicates knowledge production in a vast technical stack that governs infrastructures of power and information flow across geographies and societies. Kate Crawford grounds these theoretical insights in material realities, illustrating how AI and algorithmic systems are embedded in extractive economies, labour conditions, and environmental costs. What may appear as immaterial knowledge production is inseparable from physical and political infrastructures that shape and constrain the possibilities of computation.

Viewed through this lens, algorithmic processes resemble dynamic narratives unfolding through layers of input, context, and recombination. Like storytellers without fixed authorship, these systems orchestrate data flows and conditional operations to produce forms that exceed their components. The outputs are not passive reflections but active interventions that reorient our relationship with knowledge—from stable transmission toward real-time interpretation and negotiation. This dynamism signals both power and precariousness, demanding ongoing reassessment of assumptions and a willingness to confront the shifting locus of interpretive authority.

The visual arts offer a vivid example of this transformation. Generative algorithms produce imagery that moves beyond imitation to invention, collaborating with human creators while introducing unpredictability and chance. This interplay opens new aesthetic spaces but carries risks: the flattening of complexity, amplification of bias, and erosion of clear boundaries between authorship, intention, and effect. The algorithm becomes a co-creator and gatekeeper, shaping the field of possibility even as it expands it.

This transformation reflects a deeper epistemological turn. Knowledge no longer appears as fixed or discrete but emerges within dynamic, recursive systems that resist containment or full comprehension. Algorithms function as agents in the production of meaning, their agency demanding reflection on not only what they enable but also what they obscure or distort. In both artistic and intellectual practice, the tension between human intention and algorithmic variation generates new possibilities while compelling vigilance. When opacity deepens and systemic influences become normalized, the risks extend beyond creativity into the realm of knowledge itself.

This challenge recalls earlier philosophical critiques of abstraction and the limits of knowledge that I have discussed before. The eighteenth-century philosopher George Berkeley, for instance, challenged the legitimacy of abstract mathematical entities—infinitesimals—that lacked direct empirical manifestation. Such critiques resonate today as we grapple with algorithmic processes that often operate as “ghostly inferences,” producing outcomes whose internal workings and assumptions remain intangible or obscured. Like Berkeley’s warning against unmoored abstractions, this calls us to critically examine the epistemic foundations and consequences of the algorithmic turn. See my post on Berkeley for more.

Emerging from this shift is a new epistemic condition: knowledge as emergent, relational, and mediated through evolving systems. In this environment, we become not only interpreters but stewards—charged with critical engagement and ethical responsibility for the infrastructures of meaning that shape our world. This requires embracing process over product, contingency over fixity, and acknowledging the redistribution of agency from cognition to computation, from conscious intent to iterative dynamics. The challenge moving forward is to interrogate not only what these systems make possible but to ask persistently under what assumptions, for whose benefit, and at what cost.

A short reading list, drawn from sources I have read on this topic over the last few years.

Taken together, these six works form a conceptual constellation that reframes the algorithm not as a neutral instrument, but as an active participant in the production of knowledge, culture, and power. Wendy Chun foregrounds how algorithms habituate us, not just through interface but through repetition and memory, revealing the affective and social dimensions of computation. Luciana Parisi pushes further, showing that algorithms speculate—they generate rather than merely calculate—thus altering aesthetic and epistemic landscapes. Galloway’s analysis of the interface illuminates the algorithm as a mediator of meaning, a site where legibility is constructed and constrained. Tarleton Gillespie turns to the infrastructural labour behind algorithmic systems, exposing how platforms subtly police visibility and legitimacy under the guise of neutrality. Benjamin Bratton scales this transformation globally, mapping a planetary computational architecture that reconfigures sovereignty and cognition alike. And Kate Crawford grounds these abstractions in the material and political, revealing how AI and algorithmic systems are inseparable from extractive practices, labour exploitation, and environmental cost. As a group, these texts chart a shift in thought: from seeing algorithms as tools of control to understanding them as environments—generative, recursive, and contested—within which control, creativity, and understanding are continuously renegotiated.

Liminal Visibility: Migration, Data, and the Politics of Boundaries

The first reading of Canada’s Bill C-2 signals a significant expansion of digital surveillance and data collection powers within immigration enforcement, including enhanced capabilities for electronic monitoring, biometric data use, and information sharing across agencies. These provisions illustrate how the state increasingly relies on computational systems to govern migration, embedding control within data infrastructures that produce visibility and legibility on the state’s own terms. This legislative shift exemplifies the broader Data Turn—where algorithmic models and surveillance reshape who is recognized or excluded. Examining this through the lens of contemporary visual art reveals how artists expose and resist these mechanisms of control, offering critical counter-narratives that emphasize opacity, ambiguity, and the contested politics of representation in immigration regimes. This article stems from my reading of Canada’s Bill C-2, informed by Joy Rohde’s Armed with Expertise (which I just finished reading), connecting contemporary data-driven governance in immigration to its historical roots in Cold War expertise, and exploring how these dynamics shape the politics of visibility and liminality.

The Data Turn has reordered not just how states govern, but how they see. In systems of immigration control, policing, and security, governance now operates through data—through predictive models, biometric templates, and behavioral scores. These systems do not represent reality; they construct it, enacting a vision of the world in which subjects are rendered as variables and futures as risks. This logic, increasingly dominant across global institutions, marks a shift from rule by law to rule by model. And as it reconfigures power, it also reconfigures aesthetics.

This shift towards data-driven governance deeply affects how migratory subjects are categorized and controlled, often reducing complex human experiences to discrete data points subject to algorithmic prediction and intervention. The imposition of predictive models and biometric surveillance transforms migrants from individuals with agency into risks to be managed, their identities flattened into probabilistic profiles. This reordering not only reshapes bureaucratic practice but also redefines the conditions of visibility and invisibility, inclusion and exclusion. Those caught in liminal states—between legality and illegality, presence and absence—are particularly vulnerable to these regimes of measurement and control, which perpetuate uncertainty and precarity.

Visual artists have responded to this transformation not only by thematizing data regimes, but by dismantling the very mechanisms that render them invisible. They expose the apparatus behind the interface—the wires, scripts, ideologies—and stage counter-visions that assert opacity, indeterminacy, and refusal. In doing so, they challenge the way the Data Turn governs the liminal, especially those living in the suspended space of migration, statelessness, and bureaucratic indeterminacy.

This artistic intervention reframes vision itself—not as a neutral or purely descriptive act, but as a tool of power embedded within technological and bureaucratic systems. By peeling back layers of digital mediation, these artists reveal how contemporary surveillance and data infrastructures actively produce knowledge and enforce hierarchies. Their work highlights that visibility is not simply about being seen, but about how one is seen, categorized, and ultimately governed—a dynamic that is especially acute for those inhabiting the ambiguous spaces of migration and statelessness.

Artists like Trevor Paglen and Hito Steyerl foreground this shift from image to instrument. In their work, surveillance footage, facial recognition outputs, and satellite tracking systems are not just visual materials—they are operational weapons. Paglen’s images of classified military sites or undersea data cables reveal the landscape of surveillance that underpins contemporary geopolitics. Steyerl, in pieces like How Not to Be Seen: A Fucking Didactic Educational .MOV File, explores how machine vision abstracts, targets, and governs. In both cases, the act of seeing is no longer passive; it is a condition of being classified and controlled. The migrant, in such systems, is no longer a presence to be engaged but a deviation to be filtered—a datapoint, a heat signature, a probability.

Paglen and Steyerl’s work exposes the mechanisms through which visibility becomes a tool of control, transforming subjects into data points within vast systems of surveillance. Yet this logic of enforced legibility provokes a critical response: a turn toward opacity as a form of resistance. Where the state insists on clarity and categorization, artists embrace ambiguity and fragmentation, challenging the totalizing gaze and creating spaces where identity and presence refuse easy definition. This dialectic between exposure and concealment reflects the lived realities of migrants caught within regimes that demand transparency but offer exclusion.

If the state’s data infrastructures demand visibility and legibility, many artists respond with strategic opacity. Édouard Glissant’s philosophy of opacity—his insistence on the right not to be reduced—resonates powerfully here. In the works of Wangechi Mutu and Walid Raad, opacity takes material form: fragmentation, distortion, layering, and pseudofactuality unsettle any stable claim to truth or identity. These aesthetic strategies echo the experience of navigating migration regimes—systems that demand transparency from those who are systematically excluded from their protections. Opacity becomes a refusal of capture. It asserts a right to complexity in the face of an infrastructure that reduces lives to binary certainties.

I am guided here by the words of W.G. Sebald and the art of Gerhard Richter, whose dust and blur are integral to an understanding of history and memory, and by the age-old use of light and shadow in works of art and its relation to knowledge.

Building on this embrace of opacity, other artists turn their attention to archives—the sites where power not only records but also erases and shapes memory. By interrogating immigration documents, military footage, and bureaucratic data, these artists reveal how archives carry forward histories of violence and exclusion. Their work challenges the illusion of “raw” data, exposing it instead as deeply entangled with structures of power that continue to marginalize and render migrants invisible or precarious. In doing so, they create counter-archives that reclaim erased voices and insist on recognition beyond official narratives, mirroring the ongoing struggles of those living in legal and social liminality.

Other artists interrogate the archive: not just what is remembered, but how, by whom, and with what effects. The work of Forensic Architecture, Susan Schuppli, and Maria Thereza Alves reveals the afterlife of data—how immigration records, censuses, or military footage embed structural violence into bureaucratic memory. Their work testifies to how data is never “raw”: it is collected through regimes of power, and it carries that violence forward. These artists reanimate what official systems erase, constructing counter-archives that expose the silences, absences, and structural forgettings built into systems of documentation. This resonates deeply with the immigrant condition, in which legal presence is provisional and recognition is always deferred.

As archival artists uncover the hidden violences embedded in bureaucratic memory, another group of practitioners turns to the physical and infrastructural dimensions of data governance. By making visible the often-invisible hardware and networks that sustain digital control, these artists reveal how power operates not only through data but through material systems—servers, cables, and code—that shape everyday life. This exposure challenges the myth of a seamless digital realm, reminding us that governance is grounded in tangible, contested spaces where decisions about inclusion and exclusion are enacted.

Where the logic of governance is increasingly immaterial—hidden in code, servers, and proprietary systems—some artists work to make the infrastructure visible. James Bridle, in exploring what he terms the “New Aesthetic,” captures the eerie, semi-visible zone where machine perception intersects with urban life and planetary surveillance. Ingrid Burrington’s maps and guides to internet infrastructure render tangible the cables, server farms, and chokepoints that quietly govern digital existence. These works push back against the naturalization of the digital by showing it as a system of decisions, exclusions, and material constraints.

The “Data Turn” can be understood as a continuation of intellectual movements that critically examine the production and mediation of knowledge, much like the “Literary Turn” of the late twentieth century. The Literary Turn foregrounded language and narrative as active forces shaping historical meaning and subjectivity, challenging claims to objective or transparent truth. Similarly, the Data Turn interrogates the rise of data and computational systems as new epistemic tools that do not merely represent social realities but construct and govern them. This shift compels historians to reconsider the archives, sources, and methodologies that underpin their work, recognizing that data is embedded within power relations and ideological frameworks. Both turns reveal the contingency of knowledge and demand critical attention to the infrastructures through which it is produced and deployed.

By revealing the physical infrastructure behind digital governance, artists highlight how power operates through material systems that govern access and control. This focus on the tangible complements artistic engagements with the symbolic and bureaucratic forms that mediate migration. Together, these practices expose how both infrastructure and imagery function as aesthetic regimes—tools that shape and enforce legal and political inclusion, while also offering sites for creative rupture and alternative narratives.

Even the forms that mediate migration—passport photos, visa documents, biometric scans—are aesthetic regimes. They precede legal recognition; they shape it. Artists like Bouchra Khalili, in works like The Mapping Journey Project, appropriate these documentary forms not to affirm their authority, but to rupture them. Her work stages alternative cartographies of movement—ones based not on state control, but on narrative, memory, and resistance. In such works, the migrant is not a risk profile, but a storyteller.

By transforming state documentation into acts of storytelling and resistance, artists reclaim the migrant’s agency from reductive systems of classification. This reimagining challenges the prevailing logic of legibility, opening space for more nuanced understandings of identity and belonging beyond the constraints of bureaucratic control.

Across these practices, art offers not just critique but proposition. It creates space for reimagining how we understand legibility, personhood, and the infrastructures that shape both. In contrast to the Data Turn’s promise of seamless optimization, these works embrace what is incomplete, contradictory, and opaque. They remind us that data is not destiny, and that what cannot be captured might still be what matters most.

Together, these artistic interventions reveal that data regimes are not neutral frameworks but deeply embedded with values and power. By embracing ambiguity and incompleteness, they challenge dominant narratives of control and certainty, opening new possibilities for understanding identity and presence beyond bureaucratic constraints.

For scholars working at the intersection of immigration, data, and liminality, this aesthetic terrain is not peripheral—it is central. Art shows us that the Data Turn is not merely technical; it is philosophical. It carries assumptions about what kinds of life count, what futures are permissible, and how uncertainty should be managed. Visual practices, especially those rooted in the experience of liminality, offer a different grammar of visibility—one attuned not to classification, but to ambiguity; not to risk, but to relation.

Shared Shadows: Samurai and Scottish Kings

After seeing the Donmar Warehouse’s Macbeth starring David Tennant and Cush Jumbo, alongside Andor (see my other post here), a friend suggested I revisit Kurosawa’s Throne of Blood from 1957—a prompt that opened a corridor between seemingly distant worlds.

Across cultures and centuries, Macbeth has proven uniquely adaptable—not because its language is universal, but because its psychological architecture and ritual mechanics resonate beyond context. The play’s core is less about words than about the patterns of human ambition, the cyclical nature of power, and the haunting consequences of guilt. These elemental forces find expression through highly specific cultural forms, yet somehow the underlying emotional and metaphysical structures transcend linguistic and geographic boundaries. When we look at Akira Kurosawa’s Throne of Blood alongside the Donmar Warehouse’s modern staging, what emerges is not merely a contrast in style or medium, but a deep structural affinity. Both works articulate a shared grammar of ambition, guilt, and spectral dread, communicating a universal human crisis through distinct sensory and ritualistic vocabularies.

In Throne of Blood, the influence of traditional Japanese theatre, particularly Noh, shapes the film’s aesthetic and emotional tenor. The soft rustle of Lady Asaji’s kimono, for instance, is not incidental but a deliberate sonic signifier steeped in cultural meaning. In Japanese performance, such sounds evoke the ghostly restraint and suppressed violence characteristic of spirits and doomed aristocracy. This subtle auditory presence externalizes internal psychological turmoil in a way that is deeply evocative yet restrained—an elegiac whisper of fate’s inexorability. Likewise, the persistent motif of crows circling or calling in the background serves as an ominous refrain, a natural chorus underscoring the inevitability of doom. The bird’s symbolic weight crosses cultural boundaries, appearing in both Kurosawa’s film and the Donmar production as a harbinger of death and the uncanny.

Conversely, the Donmar Warehouse’s staging, while embedded in contemporary theatrical forms, draws on an equally potent ritual language of its own. The palpable tension, the fractured psychological states, and the ever-present sense of paranoia and surveillance resonate with modern anxieties but also echo timeless human fears. The crows’ calls punctuate the space, anchoring the narrative’s supernatural and fatalistic elements, while the intense physicality and raw vocal performances evoke a different kind of ritual—one rooted in Western dramatic tradition but suffused with a contemporary edge. This juxtaposition reveals how cultural codes operate not to isolate but to illuminate shared affective experiences. Both versions of Macbeth externalize inner collapse and moral disintegration through a rich interplay of sound, movement, and symbolic imagery, adapted to their cultural and historical contexts.

The fascination lies not in erasing these differences, but in tracing how seemingly distinct traditions converge in affective resonance. Shakespearean eschatology, with its linear progression toward an apocalyptic reckoning, contrasts with the cyclical time of East Asian fatalism, yet both frame ambition and guilt within inevitable cosmic orders. Similarly, courtly restraint as embodied by Lady Asaji’s measured silence finds an uneasy counterpart in the martial paranoia of the Donmar’s Macbeth, who is equally trapped by invisible forces and internal demons. These are not mere thematic overlaps but expressions of ontologies that shape how power, fate, and the self are understood and performed. The works do not speak to each other through direct translation but through the vibration of shared human experience refracted through culturally specific prisms.

In this light, Throne of Blood and the Donmar Macbeth are less adaptations of a text and more dialogues between worldviews, each exposing how ritual and narrative craft produce meaning. They remind us that theatre and film are not simply vehicles for storytelling but complex systems of sensory and symbolic mediation where time, space, and identity intersect. The rustling kimono, the haunting caw of crows, the measured silences, and the bursts of violent expression function as nodes in a network of affect, drawing spectators into a shared psychic landscape of dread and desire. By exploring these shared shadows—between samurai and Scottish kings, between East Asian fatalism and Western eschatology—we glimpse the universality of Macbeth’s tragic vision while appreciating the particularities that make each iteration compelling and distinct.

Cassian Andor and the Shakespearean Tragic: Macbeth in a Galaxy Far, Far Away

I just finished watching David Tennant and Cush Jumbo’s Macbeth and the experience lingered long after the final scene. There’s something about the way Shakespeare captures ambition’s darkness, the pull of fate, and the heavy weight of guilt that feels timeless. This production is one of the best that I have seen, and I watched it from the comfort of my living room. I have also been watching Andor, and suddenly Cassian Andor came into sharper focus in Andor and Rogue One—not as a simple space rebel, but as a tragic figure shaped by forces beyond his control, haunted by his own choices, and bound to a destiny that feels both cruel and inevitable.

Like Macbeth, Cassian is caught between his will and something larger—something mysterious and powerful. In Macbeth, it’s the witches. Their prophecy cuts through the air, twisting the future and planting seeds of ambition and doubt. They are strange, otherworldly figures—symbols of chaos, fate, and the unknown. In the Star Wars galaxy, that mysterious force takes shape as the Force itself, an invisible current that both guides and traps the characters who try to grasp it. It’s the spiritual undercurrent to Cassian’s rebellion, the unseen power that moves through everything and everyone.

Cassian isn’t driven by ambition like Macbeth—he doesn’t thirst for power or crowns. Instead, his fire burns for justice, freedom, survival. But the price he pays feels just as steep. Watching him, you feel the weight he carries: the betrayals, the violence, the endless paranoia. Like Macbeth’s hallucinations—ghosts and bloodied hands—Cassian’s scars are quieter but no less real. They live in his haunted eyes and his weary silence. Both men are trapped in a cruel dance with their consciences, a struggle that shakes them to their core.

Cassian sits in the shuttle, silent, his face carved in shadow. Jyn speaks beside him, unaware. He stares ahead, burdened—not just by his orders, but by the years that led him here. After Andor, the moment is heavy with history: this is a man unraveling quietly, long before the mission begins.

And yet, here the stories split. Macbeth’s path is a downward spiral—corruption, tyranny, death. Cassian’s is a slow-burning tragedy that ends in a sacrificial blaze. But beneath that sacrifice lies a quieter, deeper pain: the tragedy of a man caught between who he is, who others expect him to be, and who he fears he can never fully become. His death in Rogue One isn’t just an end; it’s a beginning. The bitter loss becomes the spark that lights a rebellion, a defiant hope born from sacrifice. Where Macbeth’s tragedy warns of ambition’s ruin, Cassian’s story whispers that even in loss, even in the failure to fully embody the heroic ideal imposed on him, there is power and meaning.

There’s also something communal in Cassian’s fate. He’s not alone—his sacrifice belongs to the many who fight alongside him, the countless unknown rebels who risk everything. And yet, in this collective struggle, Cassian’s personal fracture remains: the quiet anguish of feeling unable to be the perfect hero, the ideal symbol, or the saviour everyone demands. It’s a chorus of voices, a shared grief and courage that makes his story more than personal—yet his story is also the story of fractured identity, of the lonely burden carried behind the mask of rebellion. It is the collective heartbeat of resistance, shaped by the silent cracks in its most reluctant hero.

In the end, Cassian Andor stands as a tragic hero for our times—haunted and conflicted, caught in the relentless currents of unseen forces that shape his fate and fracture his identity. He wrestles endlessly between what the world demands of him and the limits of what he can give. The weight of sacrifice presses down not just on his actions but on who he is—or who he feels he is failing to be. Like Macbeth, Cassian’s story plunges into the shadows that live within us all: the fears, doubts, and moral ambiguities that make heroism feel at once noble and unbearably heavy. Yet where Macbeth’s descent ends in ruin and silence, Cassian’s darkness carries within it a fragile, flickering hope. His tragedy is not just about loss but about the quiet resilience of that spark—an ember that refuses to die even when the night seems endless. It reminds us that even in the deepest shadows of doubt and sacrifice, there is still light, still meaning, still a reason to keep fighting.

But what sets Cassian apart from the tragic heroes of the past—Macbeth, Oedipus, Hamlet—is the modern complexity of his identity and the fractured nature of his heroism. Classical tragedy often hinges on a fatal flaw—ambition, pride, hubris—that leads to a solitary downfall. Cassian’s tragedy, however, is rooted in a more nuanced tension: between the self he knows and the impossible ideals others impose on him; between the limits of his own being and the vast collective cause he must serve. He is not undone by hubris but burdened by the crushing weight of expectation and the sense that he can never fully embody the hero he is meant to be.

Unlike the solitary tragic figures of old, Cassian’s story emerges from within the murk of a collective struggle—where the self dissolves into the cause, where one life is both vital and disposable. His sacrifice is not singular but shared, echoing the quiet heroism of countless others lost to the margins of history. And yet, this solidarity does not spare him from isolation. If anything, it deepens it. He moves through the rebellion as a man hollowed by experience, forced to wear conviction like armour, even as uncertainty corrodes him from within. After Andor, we see that his courage isn’t blind—it’s bruised. That’s what makes it tragic. That’s what makes it real.

Moreover, Cassian’s tragedy is entwined with mystical and systemic forces—the Force, the Empire, the rebellion itself—which are not mere backdrops but active players shaping his destiny. His struggle is both personal and political, reflecting the modern anxieties of agency and meaning in a world dominated by overwhelming systems beyond individual control. In this way, Cassian Andor is a tragic hero for our fragmented, uncertain age—haunted by fate, fractured by identity, and defined by the delicate balance between resistance and sacrifice.

The Image Thinks: AI, Algorithms, and the Shifting Ground of Knowledge

There was a time when images were evidence. A medieval map was not just a representation but a claim to knowledge, an argument about how the world was structured. A Renaissance painting revealed divine order, a photograph proved that something was. Today, we face a new kind of image—one that does not record but generates, one whose authority does not come from witnessing reality but from statistical inference. AI-generated imagery does not document the world; it thinks the world.  

For centuries, knowledge was structured around categories. Aristotle, Linnaeus, and later the Encyclopédistes built systems to organize the world, classifying nature, history, and human thought into legible hierarchies. Even with the rise of empirical science, knowledge remained something accumulated, structured, and verified through observation.  

Midjourney image. Prompt: [coffee in St. Peter’s Square --ar 1:1]

The algorithm, however, does not organize knowledge in this way. It does not categorize the world from above but learns patterns from within. Unlike an 18th-century taxonomist, an AI system does not define a tiger by its stripes or its feline characteristics—it simply processes vast quantities of data, detecting statistical correlations that allow it to recognize a tiger without ever defining it.  
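To make that contrast concrete, here is a toy sketch in code. The features, thresholds, and example animals are invented for illustration and stand in for no actual system: the point is only that the taxonomist’s tiger is an explicit definition, while the learner’s tiger is nothing but proximity to past examples.

```python
import math

# The taxonomist's move: an explicit definition, imposed from above.
def taxonomist_is_tiger(animal: dict) -> bool:
    return (animal["family"] == "felidae"
            and animal["striped"]
            and animal["weight_kg"] > 90)

# The statistical move: no definition, only nearness to labelled examples.
# Feature vectors are (stripe_density, body_size, face_roundness); the
# numbers are invented purely for this sketch.
examples = [
    ((0.9, 0.8, 0.3), "tiger"),
    ((0.1, 0.9, 0.5), "lion"),
    ((0.8, 0.1, 0.2), "tabby cat"),
]

def nearest_label(query):
    return min(examples, key=lambda ex: math.dist(ex[0], query))[1]

print(nearest_label((0.85, 0.75, 0.3)))  # "tiger": recognized, never defined
```

Real systems replace this nearest-neighbour toy with millions of learned parameters, but the epistemic structure is the same: the category is never stated, only approximated.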

This is a profound shift. Knowledge, once built through observation and classification, is now generated by inference. The AI-generated image follows this logic. It does not capture a moment, as a photograph once did, nor does it interpret a subject, as a painting might. Instead, it predicts what an image should look like, based on probabilities. The result is something fundamentally different from representation: an image that emerges from a machine’s internal logic rather than from reality itself.  
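A deliberately crude caricature can show what “predicting an image” means in miniature. Assume a toy dataset of four-pixel black-and-white images, invented for this sketch; the generator learns per-pixel frequencies and then samples a new image from them.

```python
import random

# Four tiny "images", each a string of four binary pixels (invented data).
dataset = ["0110", "0111", "0010", "0110"]

# "Training": estimate, for each pixel position, the probability it is lit.
columns = list(zip(*dataset))
p_lit = [sum(px == "1" for px in col) / len(col) for col in columns]

# "Generation": sample each pixel from the learned distribution.
random.seed(7)
generated = "".join("1" if random.random() < p else "0" for p in p_lit)
print(generated)  # an image inferred from statistics, copied from nothing
```

Actual generative models work over vastly richer distributions, but the logic holds: the output is drawn from learned probabilities and need not match any image the system has ever seen.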

For centuries, images were linked to material constraints: pigments on a canvas, light on film, a chemical process that left behind a physical trace. Even digital images, while infinitely replicable, still maintained a relationship to a source—a photograph taken, a frame captured. AI-generated imagery untethers itself from this history. It is not a copy but an invention, synthesized from a dataset of other images, none of which serve as the original.  

This is not just a technological change; it is an epistemological one. If we once sought truth in the documentary image, where do we look now? If an AI can generate a face that has never existed, what happens to our belief in the evidentiary power of the portrait? And if an algorithm can create art indistinguishable from human creativity, what happens to the very idea of authorship?  

Midjourney image. Prompt: [idonthaveacoolname.com --ar 1:1]

We might think of AI as a historian of its own kind—one that does not preserve the past but extracts patterns from it. The great archives of human culture—museums, libraries, film reels—once functioned as repositories of collective memory. AI, trained on these vast datasets, does not remember but predicts. It does not curate the past; it recombines it.  

The implications of this shift extend beyond aesthetics. In medicine, AI does not diagnose based on fixed categories but on pattern recognition, seeing correlations that escape human detection. In law, AI systems sift through precedent not to enforce continuity but to optimize decisions. Across disciplines, knowledge is becoming less about interpretation and more about computation.  

Yet there is something unsettling in this. AI-generated imagery reminds us that knowledge, long thought to be something we built, structured, and controlled, may now be something we train—a vast statistical model that does not explain but predicts, does not reason but generates.  

Midjourney image. Prompt: [grid of a single leaf --ar 1:1]

If the image was once a window onto the world, AI has made it a hall of mirrors, endlessly reflecting a logic we do not fully understand. The question is no longer whether these images are real, but rather: whose reality do they belong to?

The Architects of Transience: Concrete, Infrared, and the Unraveling of Modernity

The paradox of concrete as both the symbol of modernity and its antithesis—destruction—is beautifully and vividly re-examined in Viktor Kossakovsky’s 2024 documentary Architecton. The film opens with the ravaged remains of concrete structures in Ukraine, setting the stage for an exploration not only of architecture’s relationship to materiality but of its role in the broader narrative of progress and decay. Through this lens, Kazimir Malevich’s geometrically pure forms gain new resonance, shifting from abstract utopian ideals to poignant metaphors for the tension between stability and fragility inherent in all human endeavours.

The ruins of Baalbek stand stark against the infrared sky, their massive columns diminished yet unwavering. Their presence in the landscape is both imposing and ghostly, a relic of human ambition that now exists in a state of suspension, neither fully intact nor wholly lost. In the documentary’s meditation on concrete and stone, Baalbek serves as a distant counterpoint—where modern concrete is cast and shaped to fit the needs of the present, these ancient stones, quarried and placed millennia ago, endure as both triumph and ruin, a reminder that all architecture, no matter how permanent it seems, is ultimately subject to time.

One striking quotation from the documentary reads: “After water, concrete is the most widely used substance on Earth.” This simple statement highlights concrete’s ubiquity and significance in shaping the modern world. Water—life’s most fundamental element—has long been the basis of human survival and connection with nature, while concrete, as the second most used material, represents humankind’s drive to dominate and define its surroundings. Yet, despite its ubiquity, concrete’s eventual decay exposes a different truth: the same forces that humans attempt to master—through architecture, engineering, and design—are ultimately beyond control. Concrete, while seemingly permanent, is just as vulnerable as the stone it mimics, subject to the ravages of time, war, and nature.

In one particularly striking image, a solitary man with a wheelbarrow is dwarfed by a massive block of stone, carved millennia ago and abandoned. This visual echoes the evocative imagery of Michelangelo’s Prisoner statues, housed in Florence’s Accademia Gallery. These figures, half-formed, trapped in their stone prison, seem to struggle towards liberation, embodying both the act of creation and the stasis of unfulfilled potential. The abandoned stone, much like these unfinished figures, occupies a space between being and non-being, between intention and entropy. The stone seems to call out for a form that has not yet been realized, just as the massive concrete structures in the documentary gesture toward what could have been—monuments of progress now succumbed to time and violence. In this way, both the material and its artistic potential exist in a state of suspended animation, caught between the historical force of its creation and the inevitable dissolution of all things.

Integral to this exploration is the use of infrared imagery, a technological choice that disrupts our traditional understanding of built structures. Infrared, often used to reveal hidden heat signatures, transforms concrete buildings into spectral forms. What was once solid, monumental, and permanent is reduced to an ethereal presence, a visual manifestation of the invisible energies and decay beneath the surface. It’s as though the material itself is attempting to communicate its vulnerability—an image of architecture that exposes itself not as a static entity, but as a system of energies, histories, and eventual dissolution.

A crucial scene in the film—an extended sequence of a massive rockslide—underscores the inherent power of stone, nature’s counterpoint to human architecture. As colossal boulders cascade down the mountainside, the camera lingers on the massive, unyielding force of the stone. This raw, natural destruction stands in stark contrast to the calculated, human-made beauty of classical architecture. The imagery here is a reminder that stone, while emblematic of permanence, is also vulnerable to the overwhelming forces of nature. This stark juxtaposition of classical ruins, once thought to be eternal, returning to the earth, punctuates the fragility of human ambition and the fleeting nature of monumental achievement.

The pairing of concrete and rock, two materials that symbolize permanence, with such violence and collapse speaks to their liminal nature. Both substances, when used for habitation or as symbols, straddle the boundary between human-made constructs and the natural world. They strain the traditional distinctions between subject and object, man and nature—two concepts that architecture has long worked to contain and define. Concrete, as both a building material and a symbol of modernity, offers the illusion of control over nature. Yet, it is precisely this illusion that makes it so susceptible to forces beyond our grasp. Rock, though an ancient and seemingly immutable material, can also become a harbinger of destruction when untethered from human will. These materials blur the boundaries of the architectural discourse, pointing to an inherent instability between humanity’s ambitions and the larger natural forces at play.

Concrete, though seemingly durable, is as much a material of transience as it is of permanence. The structures it creates can endure for centuries, but the very process of their construction—through human labour, environmental forces, and the inevitable decay—ensures their eventual dissolution. In this, concrete is emblematic of the human condition: the striving for permanence caught in the endless flux of change and decay.

Through this lens, the interaction between concrete and rock becomes a reflection of the tension between human intention and natural forces. These materials are not mere objects to be shaped or controlled but are agents in their own right, influencing the spaces they inhabit. When viewed through infrared, they reveal themselves not as passive backdrops but as active participants in the construction of meaning. Concrete’s malleability and rock’s permanence, when combined, create a tension that straddles the boundary between subject and object, a dialectic that architecture itself has long sought to transcend. If technologies shape our understanding of reality, then the use of infrared here forces us to confront the complex interplay between human creation and the natural world.

Malevich’s Architecton, in this context, becomes more than a study of abstract form. It serves as a blueprint for reconsidering the purpose and meaning of architecture in a time when the very materials that define our spaces are constantly in flux. If the built environment is constantly being reshaped by forces both seen and unseen, then architecture is not a static monument but an ongoing negotiation between humanity and the materials that constitute it. And in the suspended forms of stone and concrete, we find a reminder that art, too, lies at the intersection of creation and destruction—a space where form is constantly being struggled into existence, only to eventually fade back into the material world.

Mechanized Indeterminacy: AI, Asemic Writing, and the Fiction of the Text-Image Collapse

A friend sent me a link to this article: Operative ekphrasis: the collapse of the text/image distinction in multimodal AI by Hannes Bajohr.

The argument that multimodal AI collapses the text-image distinction is, at first glance, compelling. However, this claim relies on an implicit assumption that such a distinction was ever stable or clearly demarcated. A closer examination reveals that AI’s generative processes do not so much “collapse” the distinction as they do mechanize an already-existing instability—one that has long been explored through avant-garde literary and artistic practices, particularly in asemic writing. 

Throughout the 20th century, artists and writers repeatedly disrupted the supposed boundary between text and image. Dadaist collage, Surrealist automatic writing, and concrete poetry all foregrounded the materiality of language, demonstrating that text could function visually as much as linguistically. In Lettrism, pioneered by Isidore Isou in the 1940s, letters were untethered from conventional phonetic or semantic meaning, transformed into visual compositions. Henri Michaux’s asemic ink drawings similarly dissolved the distinction between writing and mark-making, demonstrating that the act of inscription need not resolve into legibility. These historical precedents complicate the article’s central claim: rather than producing an unprecedented collapse, AI merely accelerates and mechanizes a longstanding artistic impulse to question the division between reading and seeing. 

Asemic writing resists the tyranny of meaning, inviting the reader into an interpretative space where language dissolves into pure form.

If asemic writing operates through intentional illegibility, inviting interpretation while resisting definitive meaning, AI-generated text-image hybrids do not resist meaning so much as they produce an excess of it. The logic of machine learning generates outputs that are overdetermined by probabilistic associations rather than by authorial intent. Cy Twombly’s gestural inscriptions, for instance, suggest meaning without fully disclosing it; their power lies in their resistance to linguistic capture. By contrast, AI-generated multimodal outputs do not refuse meaning but generate an abundance of semiotic possibilities, saturating the interpretative field. The article does not fully account for this distinction, treating AI’s multimodal capabilities as a collapse rather than an overproduction, a shift from resistant ambiguity to computational fluency. 

What is most fundamentally altered by AI is not the existence of an intermediary space between text and image but the industrialization of indeterminacy itself. Asemic writing historically resists institutional legibility, positioning itself against systems of meaning-making that demand clear semiotic functions. AI, however, converts indeterminacy into a computational process, endlessly producing outputs that are neither fully readable nor wholly visual but are nevertheless monetized and instrumentalized. Where the illegibility of Chinese wild cursive calligraphy or Hanne Darboven’s sprawling numerical texts was once a site of aesthetic resistance, AI-driven multimodality turns this ambiguity into a product, systematizing what was once an act of refusal. 

By severing the link between signifier and signified, asemic writing exposes the visual unconscious of text, revealing writing as an act of mark-making rather than communication.

Rather than signaling the collapse of the text-image distinction, AI-driven multimodality reveals how this boundary has always been porous. The article’s central argument overlooks the long history of artistic and literary practices that have anticipated and complicated the very phenomenon it describes. A more nuanced approach would recognize that AI does not dissolve the distinction between text and image so much as it absorbs their instability into a system that operationalizes ambiguity at scale, transforming what was once a site of aesthetic and conceptual resistance into an automated process of production.

Abstracted Intelligence: AI, Intellectual Labour, and Berkeley’s Legacy in Public Policy

This was meant to be a review of Revolutionary Mathematics by Justin Joque, but it became an essay on one of his points. A friend sent me a great review—so I’m off the hook. Joque’s book examines the radical potential of mathematics to reshape society, critiquing conventional practice and positioning math as a tool for social change. He explores its intersections with culture and activism, urging us to rethink its role beyond traditional frameworks. For me, it sparked deeper questions about thinking itself—how knowledge, data epistemology, and human insight are fundamentally threatened by our growing reliance on the technology of ghostly inference, where intellectual labour is not merely automated but restructured, displacing those who once performed it while subtly embedding the very biases and inequalities it claims to transcend.

Joque’s reference to George Berkeley (March 1685 – January 1753) in his book piqued my curiosity, especially as Berkeley’s critique in The Analyst (1734) challenged the abstract nature of infinitesimals in calculus, an idea I had recently re-read in Wittgenstein. Infinitesimals are, essentially, like quarks or clouds—elusive and intangible. But unlike quarks, which we can at least observe through their effects, or clouds, which we can still see, infinitesimals remain purely abstract, with no direct manifestation. Berkeley argued that these unobservable entities lacked connection to the empirical world, undermining their validity. This critique feels remarkably relevant today, especially with the rise of Artificial Intelligence (AI; see note below). As machines increasingly make decisions based on data, the human dimension of intellectual labour risks being reduced to mere computational tasks. Just as Berkeley questioned mathematical abstractions, we must consider the implications of this abstraction on human intelligence in the AI era.
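The specific target of Berkeley’s complaint is easy to state. To find the fluxion (the derivative) of y = x², the calculus of his day formed a ratio over a small increment o and then discarded that increment:

```latex
\frac{(x+o)^2 - x^2}{o} \;=\; \frac{2xo + o^2}{o} \;=\; 2x + o \;\longrightarrow\; 2x
```

The increment o must be nonzero for the division to be legitimate, yet it is treated as zero the moment it becomes inconvenient. These vanished increments are what Berkeley famously derided as “the ghosts of departed quantities.”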

The rise of artificial intelligence (AI) has become one of the defining phenomena of the 21st century, promising to revolutionize intellectual and manual labour across sectors; however, this promise comes with an implicit threat: the displacement of human thought and expertise by computational models, transforming the nature of governance and intellectual work. The increasingly widespread belief in AI as an agent of efficiency and progress echoes earlier philosophical debates about the nature of knowledge, reality, and the human condition. From the critique of metaphysical abstraction in the Enlightenment to contemporary concerns about automation, the tension between human intellect and technological systems is palpable.

Artificial Intelligence in this essay refers to a broad range of technologies, including artificial intelligence proper, augmented intelligence, large language models (LLMs), and other related computational tools that enhance decision-making, learning, and data processing capabilities. These technologies encompass machine learning, deep learning, and natural language processing systems that assist or augment human intelligence using computer algorithms.

This philosophical concern is rooted in the intersection of metaphysics and epistemology, where Bayesian probability can offer a framework for assessing belief and knowledge. As machines take over decision-making, Bayesian inference could be used to model how human understanding is increasingly reduced to probabilistic reasoning, driven by data rather than lived experience. The concept of “infinitesimals” in Berkeley’s work, too small to observe directly, mirrors AI’s abstraction, with Bayesian probability similarly depending on unseen or abstract factors. Just as Berkeley questioned mathematical abstractions, we must scrutinize the abstraction of human intelligence through AI systems and their probabilistic reasoning.
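A worked example with deliberately invented numbers shows how thin such probabilistic “knowledge” can be. Suppose a screening system flags people as risks, with a 1% base rate of genuine risk, a 90% detection rate, and a 5% false-positive rate. Bayes’ rule then gives:

```latex
P(\text{risk} \mid \text{flag})
= \frac{P(\text{flag} \mid \text{risk})\,P(\text{risk})}
       {P(\text{flag} \mid \text{risk})\,P(\text{risk}) + P(\text{flag} \mid \neg\text{risk})\,P(\neg\text{risk})}
= \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99} \approx 0.15
```

Under these assumptions roughly 85% of flagged individuals are false positives, a figure the system reports with perfect fluency and no grasp of what being flagged does to a life.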

AI systems, particularly in governance, often prioritize efficiency over nuance, leading to challenges in addressing complex social issues. For example, AI-based predictive policing models aim to reduce crime by analyzing past data to forecast criminal activity. However, these systems can perpetuate biases by over-policing certain communities or misinterpreting patterns. In Canada, this is evident in the overrepresentation of Indigenous communities in crime statistics, where AI-driven policies may misdiagnose the root causes, such as historical trauma or systemic discrimination, instead of addressing the socio-cultural context that fuels these disparities.
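A toy simulation, with invented numbers and no connection to any real deployment, illustrates the feedback loop: two districts with identical underlying offence rates never converge in their recorded “crime,” because patrols follow past records and new records follow patrols.

```python
import random

# Invented setup: identical true offence rates, skewed historical records.
true_rate = {"A": 0.10, "B": 0.10}
records = {"A": 60, "B": 40}

random.seed(0)
for year in range(10):
    total = sum(records.values())
    # Patrols are allocated in proportion to past records...
    patrols = {d: int(100 * records[d] / total) for d in records}
    # ...and new records scale with where patrols go, not with actual crime.
    for d in records:
        records[d] += sum(random.random() < true_rate[d]
                          for _ in range(patrols[d]))

print(records)  # the 60/40 skew persists and the absolute gap keeps widening
```

Nothing in the loop can discover that the two districts are identical; the data the model consumes is the data its own allocations produced.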

The implementation of AI in public service delivery also poses risks of oversimplification, especially when addressing the needs of vulnerable groups. For instance, in Canada, Indigenous communities have historically faced barriers in accessing health care, education, and social services. AI systems may identify general patterns of need based on demographic data, but they often fail to recognize specific local and cultural factors that are critical in understanding these needs. By relying solely on data-driven models, policymakers risk overlooking essential aspects of accessibility, such as language, geography, or traditional knowledge systems, which are integral to Indigenous communities’ well-being. This could lead to recommendations that do not effectively support their unique requirements.

Furthermore, while AI can process vast amounts of data, its inability to understand cultural nuances means that these models often miss the lived realities of marginalized groups. For example, the challenges faced by immigrants and refugees in Canada are deeply rooted in socio-cultural factors that are not always captured in statistical datasets. AI systems designed to assess eligibility for settlement programs or integration services may overlook the role of social capital, support networks, or personal resilience—factors crucial for successful integration into Canadian society. As a result, AI can produce one-size-fits-all solutions that neglect the complexity of individual experiences, further deepening inequality.

These examples underscore the limitations of AI in governance. While AI systems can process vast amounts of data, they lack the cultural sensitivity and emotional intelligence required to address the intricacies of human experience. Human oversight remains crucial to ensure that AI-driven decisions do not ignore the lived realities of marginalized communities, particularly Indigenous peoples and immigrants in Canada. The challenge is not just technical, but ethical—ensuring that AI serves all citizens equitably, taking into account diverse cultural and social contexts. It is essential that AI is integrated thoughtfully into governance, with a focus on inclusivity and the preservation of human agency.

Berkeley argues that these “infinitesimal” quantities, which are too small to be perceived, cannot be validly used in reasoning, as they detach mathematics from tangible reality. For Berkeley, mathematical concepts must be rooted in empirical experience to be meaningful, and infinitesimals fail this test by being incapable of direct observation or sensory experience.

AI has begun to transform the landscape of intellectual labour, particularly in fields that heavily rely on data analysis. Where human analysts once crafted insights from raw data, AI systems now process and distill these findings at unprecedented speeds. However, the value of human expertise lies not only in the speed of calculation but in the depth of context that accompanies interpretation. While AI systems can detect patterns and correlations within data, they struggle to navigate the complexities of the lived experience—factors like historical context, cultural implications, or social nuances that often turn a dataset into meaningful knowledge.

Data analytics, now increasingly dependent on algorithmic models, also underscores this divide. Machine learning can spot trends and produce statistical conclusions, yet these models often fail to question underlying assumptions or identify gaps in the data. For instance, predictive analytics might flag trends in employment patterns, but it is the human analyst who can explore why certain trends occur, questioning what the numbers don’t tell us. AI is exceptional at delivering quick, accurate results, but without the reflective layer of human interpretation, it risks presenting a skewed or incomplete picture—particularly in the realm of social data, where lived experiences are often invisible to the machine.

As AI continues to infiltrate sectors like healthcare, immigration, criminal justice, and labour economics, it is increasingly tasked with decisions that once relied on human intellectual labour. However, these systems, built on historical data, often fail to account for the subtle shifts in context that data analysis demands. Machine learning systems may flag patterns of healthcare access based on prior records, but they might miss changes in societal attitudes, emerging public health challenges, or new patterns of inequality. These are the kinds of factors that require a human touch, bridging the gap between raw data and its true significance in real-world terms.

This shift is also reshaping the role of data analysts themselves. Once, data analysts were the interpreters, the voices that gave meaning to numbers. Today, many of these roles are becoming increasingly automated, leaving the human element more on the periphery. As AI systems dominate the decision-making process, intellectual labour becomes more about overseeing these systems than about active analysis. The danger here is the erasure of critical thinking and judgment, qualities that have historically been central to intellectual work. While AI excels at scaling decision-making processes, it lacks the ability to adapt its reasoning to new, unforeseen situations without human guidance.

As AI continues to evolve, its influence on governance and intellectual work deepens. The history of data-driven decision-making is marked by human interpretation, and any move toward a purely algorithmic approach challenges the very foundation of intellectual labour. The increasing reliance on AI-driven processes risks not only simplifying complex social issues but also marginalizing the nuanced understanding that human intellectual labour brings. This tension between machine efficiency and human insight is not merely a technological concern but a philosophical one—a challenge to the nature of work itself and the role of the intellectual in an age of automation.

This shift invites a reconsideration of the historical context in which intellectual labour has developed, a theme that is crucial in understanding the full implications of AI’s rise. The historical evolution of data analysis, governance, and intellectual work has always involved a negotiation between human cognition and technological advancement. As we look toward the future, we must ask: in an age increasingly dominated by machines, how will we ensure that human experience and judgment remain central in shaping the decisions that affect our societies? This question points toward an urgent need to ground AI in a historical context that recognizes its limitations while acknowledging its potential.

As AI becomes more central in shaping political and social policies, particularly regarding immigration, there are concerns about its ability to reflect the complex realities of diverse communities. The reliance on AI can lead to oversimplified assumptions about the needs and circumstances of immigrants, especially when addressing their integration into Canadian society. AI systems that analyze immigration data could misinterpret or fail to account for factors such as socio-economic status, cultural differences, or regional disparities, all of which are critical to creating inclusive policies.

This evolving landscape signals a deeper erosion of the social contract between Canadians and their governments. In immigration, for example, particularly in light of the 2023–2026 Data Strategy and the findings of CIMM – Responses to the OAG’s Report on Permanent Residents, ensuring human oversight becomes increasingly crucial. Without it, there is a risk of diminishing the personal, human elements that have historically been central to governance. The shift towards automated decision-making could alienate citizens and weaken trust in political institutions, as it overlooks the nuanced needs of individuals who are part of the democratic fabric.

AI’s increasing role in governance marks a shift toward the disembodiment of knowledge, where decisions are made by abstract systems detached from the lived experiences of citizens. As AI systems analyze vast amounts of data, they reduce complex human situations to numerical patterns or algorithmic outputs, effectively stripping away the context and nuance that are crucial for understanding individual and societal needs. In this framework, governance becomes a process of automating decisions based on predictive models, losing the human touch that has historically provided moral, ethical, and social considerations in policy formulation.

The consequences of this abstraction in governance are far-reaching. AI systems prioritize efficiency and scalability over qualitative, often subjective, factors that are integral to human decision-making. For example, immigration decisions influenced by AI tools may overlook the socio-political dynamics or personal histories that shape individuals’ lives. When policy decisions become driven by data points alone, the systems designed to serve citizens may end up alienating them, as the systems lack the empathy and contextual understanding needed to address the full complexity of human existence. This hollowing out of governance shifts power away from human oversight, eroding the ability of democratic institutions to remain responsive and accountable to the people they serve.

The COVID-19 pandemic served as a catalyst for the rapid integration of AI in governance and society. As governments and businesses shifted to remote work models, AI tools were leveraged to maintain productivity and ensure public health safety. Technologies like contact tracing, automated customer service bots, and AI-driven health analytics became critical in managing the crisis. This acceleration not only enhanced the role of AI in public sector decision-making but also pushed the boundaries of its application, embedding it deeper into the governance framework.

The pandemic also saw the domestication of AI through consumer devices, which became central to everyday life. With lockdowns and social distancing measures in place, reliance on digital tools grew, and AI-powered applications—like virtual assistants, fitness trackers, and personalized recommendation systems—found a more prominent place in households. These devices, which had once been seen as niche, became essential tools for managing work, health, and social connections. The widespread use of AI in homes highlighted the shift in governance, where decision-making and the management of societal norms increasingly came under the control of automated systems, marking a techno-political shift in how people interact with technology.

In revisiting Berkeley's critique of infinitesimals, we find philosophical parallels with the rise of AI. Berkeley questioned the very foundation of knowledge, suggesting that our perceptions of the material world were based on subjective experience, not objective truths. Similarly, AI operates in a realm where data is processed and interpreted through systems that may lack subjective human experience. AI doesn't "understand" the data in the same way humans do, yet it shapes decision-making processes that affect real-world outcomes, creating an abstraction that can be detached from human experience.

This disconnection between machine and human experience leads to the dehumanization of knowledge. AI systems operate on algorithms that prioritize efficiency and optimization, but in doing so, they strip away the nuanced, context-driven understanding that humans bring to complex issues. Knowledge, in this sense, becomes something disembodied, divorced from the lived experiences and emotions that give it meaning. As AI continues to play a central role in governance, the production of knowledge becomes more mechanized and impersonal, further eroding the human dimension of understanding and ethical decision-making. The philosophical concerns raised by Berkeley are mirrored in the ways AI reshapes how we conceptualize and act on knowledge in a tech-driven world.

The rapid integration of AI into intellectual labour and governance presents a profound shift in how decisions are made and knowledge is structured. While AI offers the promise of efficiency and precision, its growing role raises critical concerns about the erosion of human agency and the humanistic dimensions of governance. As AI systems replace human judgment with algorithmic processes, the risk arises that complex social, political, and ethical issues may be oversimplified or misunderstood. The hollowing out of governance, where decision-making is increasingly abstracted from lived experiences, mirrors the philosophical critiques of abstraction seen in Berkeley's work. The human element, rooted in experience, judgment, and empathy, remains crucial in the application of knowledge. Without mindful oversight, the adoption of AI in governance could result in a future where technology governs us, rather than serving us. To navigate these challenges, we must preserve human agency and ensure that AI tools are used as aids rather than replacements; this is essential to maintaining a just and ethical society.

Berkeley’s philosophy of “immaterial ghosts”, where the immaterial influences the material world, aligns with Richter’s cloud paintings at Ottawa’s National Gallery of Canada, which evoke a similar sense of intangible presence. Both focus on the unseen: Berkeley’s spirits are ideas that influence our perceptions, while Richter’s clouds, as abstract forms, suggest the unknowable and elusive. In this way, Berkeley’s invisible world and Richter’s cloudscapes both invite us to confront the limits of human understanding, where the unseen shapes the visible.

B&W photography and the benefits of looking up!

In black and white, architecture transforms into pure form—sharp lines and intricate textures stand out, while windows become portals to another world. The absence of colour forces the eye to focus on structure, light, and shadow, revealing the timeless beauty of built environments.

Looking up at the National Gallery of Canada in Ottawa, the stark contrasts of its glass and stone façade come to life in black and white. The sharp edges and sweeping curves of the architecture create a powerful dialogue between light and shadow, revealing the gallery's majestic presence.

Looking up at the Maman sculpture outside the National Gallery of Canada in Ottawa, its towering, spider-like form becomes an intense study in contrast. The black and white frame emphasizes the intricate details of its legs and body, casting dramatic shadows that evoke both awe and vulnerability.

Fables at the National Arts Centre

Really enjoyed this work by Virginie Brunelle the other night at the National Arts Centre. After a pretty awesome meal at 1Rideau, I sat down for a sensory explosion of sight and sound.

In Fables, Virginie Brunelle creates a visceral exploration of chaos and resilience, where contemporary feminine archetypes collide in a raw, primal dance. Drawing from her background in violin, Brunelle intricately weaves rhythm and movement, pushing the boundaries of traditional dance. The performers’ bodies, mostly naked and raw, amplified by their breath and cries, move through a sonic landscape composed by Philippe Brault and performed live by Laurier Rajotte on the piano, embodying a world in turmoil yet yearning for hope and humanity.

A particularly striking element is the immersive audio experience in the opening set, where a cast member swings a microphone close to the dancers, amplifying their physicality. This not only heightens the intimacy of the piece but also lets me feel the dancers' movements: every breath, every collision becomes a tactile experience. The live soundscape intertwines with the dancers' raw physicality, drawing the audience deeper into the emotional urgency of the piece. This fusion of sight and sound creates a profound connection, turning the stage into a space where chaos, music, and movement converge in a shared sensory reality.