Abstracted Intelligence: AI, Intellectual Labour, and Berkeley’s Legacy in Public Policy

This was meant to be a review of Revolutionary Mathematics by Justin Joque, but it became an essay on one of his points. A friend sent me a great review—so I’m off the hook. Joque’s book examines the radical potential of mathematics to reshape society, critiquing conventional practice and positioning math as a tool for social change. He explores its intersections with culture and activism, urging us to rethink its role beyond traditional frameworks. For me, it sparked deeper questions about thinking itself—how knowledge, data epistemology, and human insight are fundamentally threatened by our growing reliance on the technology of ghostly inference, where intellectual labour is not merely automated but restructured, displacing those who once performed it while subtly embedding the very biases and inequalities it claims to transcend.

Joque’s reference to George Berkeley (March 1685 – January 1753) in his book piqued my curiosity, especially as Berkeley’s critique in The Analyst (1734) challenged the abstract nature of infinitesimals in calculus, an idea I had just re-read in Wittgenstein. Infinitesimals are, essentially, like quarks or clouds: elusive and intangible. But unlike quarks, which we can at least observe through their effects, or clouds, which we can still see, infinitesimals remain purely abstract, with no direct manifestation. Berkeley argued that these unobservable entities lacked connection to the empirical world, undermining their validity. This critique feels remarkably relevant today, especially with the rise of Artificial Intelligence (AI: see note below). As machines increasingly make decisions based on data, the human dimension of intellectual labour risks being diminished to mere computational tasks. Just as Berkeley questioned mathematical abstractions, we must consider the implications of this abstraction on human intelligence in the AI era.

The rise of artificial intelligence (AI) has become one of the defining phenomena of the 21st century, promising to revolutionize intellectual and manual labour across sectors; however, this promise comes with an implicit threat: the displacement of human thought and expertise by computational models, transforming the nature of governance and intellectual work. The increasingly widespread belief in AI as an agent of efficiency and progress echoes earlier philosophical debates about the nature of knowledge, reality, and the human condition. From the critique of metaphysical abstraction in the Enlightenment to contemporary concerns about automation, the tension between human intellect and technological systems is palpable.

Artificial Intelligence in this essay refers to a broad range of technologies, including artificial intelligence (AI) proper, augmented intelligence, large language models (LLMs), and other related computational tools that enhance decision-making, learning, and data-processing capabilities. These technologies encompass machine learning, deep learning, and natural language processing systems that assist or augment human intelligence using computer algorithms.

This philosophical concern is rooted in the intersection of metaphysics and epistemology, where Bayesian probability can offer a framework for assessing belief and knowledge. As machines take over decision-making, Bayesian inference could be used to model how human understanding is increasingly reduced to probabilistic reasoning, driven by data rather than lived experience. The concept of “infinitesimals” in Berkeley’s work, too small to observe directly, mirrors AI’s abstraction, with Bayesian probability similarly depending on unseen or abstract factors. Just as Berkeley questioned mathematical abstractions, we must scrutinize the abstraction of human intelligence through AI systems and their probabilistic reasoning.
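To make the probabilistic framing concrete, the updating rule at issue is simply Bayes’ theorem, where belief in a hypothesis $H$ is revised in light of data $D$:

$$
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
$$

An AI system’s “belief” is only as good as the prior $P(H)$ and the likelihood model $P(D \mid H)$ it has been handed, and it is exactly in those inputs, assembled from historical data rather than lived experience, that the abstraction Berkeley worried about re-enters.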

AI systems, particularly in governance, often prioritize efficiency over nuance, leading to challenges in addressing complex social issues. For example, AI-based predictive policing models aim to reduce crime by analyzing past data to forecast criminal activity. However, these systems can perpetuate biases by over-policing certain communities or misinterpreting patterns. In Canada, this is evident in the overrepresentation of Indigenous communities in crime statistics, where AI-driven policies may misdiagnose the root causes, such as historical trauma or systemic discrimination, instead of addressing the socio-cultural context that fuels these disparities.
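To illustrate the feedback loop, here is a toy sketch of my own (not any deployed system): suppose two districts have identical true crime rates, but one starts with slightly more patrols. If recorded incidents scale with patrol presence, and next year’s patrols simply follow the records, the initial skew reproduces itself indefinitely:

```python
# A toy sketch of my own (not any deployed system) showing how a
# predictive-policing feedback loop can lock in an initial disparity:
# patrols go where past records show crime, but records only exist
# where patrols already were.

true_rate = {"district_a": 0.10, "district_b": 0.10}  # identical true crime rates
patrols = {"district_a": 0.60, "district_b": 0.40}    # slight initial over-policing of A

for year in range(3):
    # recorded incidents scale with patrol presence, not with true crime
    recorded = {d: true_rate[d] * patrols[d] for d in patrols}
    total = sum(recorded.values())
    # next year's allocation simply follows this year's records
    patrols = {d: recorded[d] / total for d in recorded}
    print(year, patrols)

# The 60/40 split reproduces itself every year: the model keeps
# "confirming" that district_a is more criminal even though the true
# rates never differed, and the skew never self-corrects.
```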

The implementation of AI in public service delivery also poses risks of oversimplification, especially when addressing the needs of vulnerable groups. For instance, in Canada, Indigenous communities have historically faced barriers in accessing health care, education, and social services. AI systems may identify general patterns of need based on demographic data, but they often fail to recognize specific local and cultural factors that are critical in understanding these needs. By relying solely on data-driven models, policymakers risk overlooking essential aspects of accessibility, such as language, geography, or traditional knowledge systems, which are integral to Indigenous communities’ well-being. This could lead to recommendations that do not effectively support their unique requirements.

Furthermore, while AI can process vast amounts of data, its inability to understand cultural nuances means that these models often miss the lived realities of marginalized groups. For example, the challenges faced by immigrants and refugees in Canada are deeply rooted in socio-cultural factors that are not always captured in statistical datasets. AI systems designed to assess eligibility for settlement programs or integration services may overlook the role of social capital, support networks, or personal resilience—factors crucial for successful integration into Canadian society. As a result, AI can produce one-size-fits-all solutions that neglect the complexity of individual experiences, further deepening inequality.

These examples underscore the limitations of AI in governance. While AI systems can process vast amounts of data, they lack the cultural sensitivity and emotional intelligence required to address the intricacies of human experience. Human oversight remains crucial to ensure that AI-driven decisions do not ignore the lived realities of marginalized communities, particularly Indigenous peoples and immigrants in Canada. The challenge is not just technical, but ethical—ensuring that AI serves all citizens equitably, taking into account diverse cultural and social contexts. It is essential that AI is integrated thoughtfully into governance, with a focus on inclusivity and the preservation of human agency.

Returning to The Analyst: Berkeley argues that "infinitesimal" quantities, which are too small to be perceived, cannot be validly used in reasoning, as they detach mathematics from tangible reality. For Berkeley, mathematical concepts must be rooted in empirical experience to be meaningful, and infinitesimals fail this test by being incapable of direct observation or sensory experience.

AI has begun to transform the landscape of intellectual labour, particularly in fields that heavily rely on data analysis. Where human analysts once crafted insights from raw data, AI systems now process and distill these findings at unprecedented speeds. However, the value of human expertise lies not only in the speed of calculation but in the depth of context that accompanies interpretation. While AI systems can detect patterns and correlations within data, they struggle to navigate the complexities of the lived experience—factors like historical context, cultural implications, or social nuances that often turn a dataset into meaningful knowledge.

Data analytics, now increasingly dependent on algorithmic models, also underscores this divide. Machine learning can spot trends and produce statistical conclusions, yet these models often fail to question underlying assumptions or identify gaps in the data. For instance, predictive analytics might flag trends in employment patterns, but it is the human analyst who can explore why certain trends occur, questioning what the numbers don’t tell us. AI is exceptional at delivering quick, accurate results, but without the reflective layer of human interpretation, it risks presenting a skewed or incomplete picture—particularly in the realm of social data, where lived experiences are often invisible to the machine.

As AI continues to infiltrate sectors like healthcare, immigration, criminal justice, and labour economics, it is increasingly tasked with decisions that once relied on human intellectual labour. However, these systems, built on historical data, often fail to account for the subtle shifts in context that data analysis demands. Machine learning systems may flag patterns of healthcare access based on prior records, but they might miss changes in societal attitudes, emerging public health challenges, or new patterns of inequality. These are the kinds of factors that require a human touch, bridging the gap between raw data and its true significance in real-world terms.

This shift is also reshaping the role of data analysts themselves. Once, data analysts were the interpreters, the voices that gave meaning to numbers. Today, many of these roles are becoming increasingly automated, leaving the human element more on the periphery. As AI systems dominate the decision-making process, intellectual labour becomes more about overseeing these systems than about active analysis. The danger here is the erasure of critical thinking and judgment, qualities that have historically been central to intellectual work. While AI excels at scaling decision-making processes, it lacks the ability to adapt its reasoning to new, unforeseen situations without human guidance.

As AI continues to evolve, its influence on governance and intellectual work deepens. The history of data-driven decision-making is marked by human interpretation, and any move toward a purely algorithmic approach challenges the very foundation of intellectual labour. The increasing reliance on AI-driven processes not only risks simplifying complex social issues but also leads to the marginalization of the nuanced understanding that human intellectual labour brings. This tension between machine efficiency and human insight is not merely a technological concern but a philosophical one—a challenge to the nature of work itself and the role of the intellectual in an age of automation.

This shift invites a reconsideration of the historical context in which intellectual labour has developed, a theme that is crucial in understanding the full implications of AI’s rise. The historical evolution of data analysis, governance, and intellectual work has always involved a negotiation between human cognition and technological advancement. As we look toward the future, we must ask: in an age increasingly dominated by machines, how will we ensure that human experience and judgment remain central in shaping the decisions that affect our societies? This question points toward an urgent need to ground AI in a historical context that recognizes its limitations while acknowledging its potential.

As AI becomes more central in shaping political and social policies, particularly regarding immigration, there are concerns about its ability to reflect the complex realities of diverse communities. The reliance on AI can lead to oversimplified assumptions about the needs and circumstances of immigrants, especially when addressing their integration into Canadian society. AI systems that analyze immigration data could misinterpret or fail to account for factors such as socio-economic status, cultural differences, or regional disparities, all of which are critical to creating inclusive policies.

This evolving landscape signals a deeper erosion of the social contract between Canadians and their governments. In immigration, for example, particularly in light of the 2023–2026 Data Strategy and the findings of CIMM – Responses to the OAG’s Report on Permanent Residents, ensuring human oversight becomes increasingly crucial. Without it, there is a risk of diminishing the personal, human elements that have historically been central to governance. The shift towards automated decision-making could alienate citizens and weaken trust in political institutions, as it overlooks the nuanced needs of individuals who are part of the democratic fabric.

AI’s increasing role in governance marks a shift toward the disembodiment of knowledge, where decisions are made by abstract systems detached from the lived experiences of citizens. As AI systems analyze vast amounts of data, they reduce complex human situations to numerical patterns or algorithmic outputs, effectively stripping away the context and nuance that are crucial for understanding individual and societal needs. In this framework, governance becomes a process of automating decisions based on predictive models, losing the human touch that has historically provided moral, ethical, and social considerations in policy formulation.

The consequences of this abstraction in governance are far-reaching. AI systems prioritize efficiency and scalability over qualitative, often subjective, factors that are integral to human decision-making. For example, immigration decisions influenced by AI tools may overlook the socio-political dynamics or personal histories that shape individuals’ lives. When policy decisions become driven by data points alone, the systems designed to serve citizens may end up alienating them, as the systems lack the empathy and contextual understanding needed to address the full complexity of human existence. This hollowing out of governance shifts power away from human oversight, eroding the ability of democratic institutions to remain responsive and accountable to the people they serve.

The COVID-19 pandemic served as a catalyst for the rapid integration of AI in governance and society. As governments and businesses shifted to remote work models, AI tools were leveraged to maintain productivity and ensure public health safety. Technologies like contact tracing, automated customer service bots, and AI-driven health analytics became critical in managing the crisis. This acceleration not only enhanced the role of AI in public sector decision-making but also pushed the boundaries of its application, embedding it deeper into the governance framework.

The pandemic also saw the domestication of AI through consumer devices, which became central to everyday life. With lockdowns and social distancing measures in place, reliance on digital tools grew, and AI-powered applications—like virtual assistants, fitness trackers, and personalized recommendation systems—found a more prominent place in households. These devices, which had once been seen as niche, became essential tools for managing work, health, and social connections. The widespread use of AI in homes highlighted the shift in governance, where decision-making and the management of societal norms increasingly came under the control of automated systems, marking a techno-political shift in how people interact with technology.

In revisiting Berkeley’s critique of infinitesimals, we find philosophical parallels with the rise of AI. Berkeley questioned the very foundation of knowledge, suggesting that our perceptions of the material world were based on subjective experience, not objective truths. Similarly, AI operates in a realm where data is processed and interpreted through systems that may lack subjective human experience. AI doesn’t “understand” the data in the same way humans do, yet it shapes decision-making processes that affect real-world outcomes, creating an abstraction that can be detached from human experience.

This disconnection between machine and human experience leads to the dehumanization of knowledge. AI systems operate on algorithms that prioritize efficiency and optimization, but in doing so, they strip away the nuanced, context-driven understanding that humans bring to complex issues. Knowledge, in this sense, becomes something disembodied, divorced from the lived experiences and emotions that give it meaning. As AI continues to play a central role in governance, the process of knowledge becomes more mechanized and impersonal, further eroding the human dimension of understanding and ethical decision-making. The philosophical concerns raised by Berkeley are mirrored in the ways AI reshapes how we conceptualize and act on knowledge in a tech-driven world.

The rapid integration of AI into intellectual labour and governance presents a profound shift in how decisions are made and knowledge is structured. While AI offers the promise of efficiency and precision, its growing role raises critical concerns about the erosion of human agency and the humanistic dimensions of governance. As AI systems replace human judgment with algorithmic processes, the risk arises that complex social, political, and ethical issues may be oversimplified or misunderstood. The hollowing out of governance, where decision-making is increasingly abstracted from lived experiences, mirrors the philosophical critiques of abstraction seen in Berkeley’s work. The human element, rooted in experience, judgment, and empathy, remains crucial in the application of knowledge. Without mindful oversight, the adoption of AI in governance could result in a future where technology governs us, rather than serving us. To navigate these challenges, preserving human agency and ensuring that AI tools are used as aids rather than replacements is essential to maintaining a just and ethical society.

Berkeley’s philosophy of “immaterial ghosts”, where the immaterial influences the material world, aligns with Richter’s cloud paintings at Ottawa’s National Gallery of Canada, which evoke a similar sense of intangible presence. Both focus on the unseen: Berkeley’s spirits are ideas that influence our perceptions, while Richter’s clouds, as abstract forms, suggest the unknowable and elusive. In this way, Berkeley’s invisible world and Richter’s cloudscapes both invite us to confront the limits of human understanding, where the unseen shapes the visible.

B&W photography and the benefits of looking up!

In black and white, architecture transforms into pure form—sharp lines and intricate textures stand out, while windows become portals to another world. The absence of colour forces the eye to focus on structure, light, and shadow, revealing the timeless beauty of built environments.

Looking up at the National Gallery of Canada in Ottawa, the stark contrasts of its glass and stone façade come to life in black and white. The sharp edges and sweeping curves of the architecture create a powerful dialogue between light and shadow, revealing the gallery’s majestic presence.

Looking up at the Maman statue outside the National Gallery of Canada, its towering, spider-like form becomes an intense study in contrast. The black and white frame emphasizes the intricate details of its legs and body, casting dramatic shadows that evoke both awe and vulnerability.

Fables at the National Arts Centre

Really enjoyed this work by Virginie Brunelle the other night at the National Arts Centre. After a pretty awesome meal at 1Rideau, I sat down for an explosion of both visual and audio sensations.

In Fables, Virginie Brunelle creates a visceral exploration of chaos and resilience, where contemporary feminine archetypes collide in a raw, primal dance. Drawing from her background in violin, Brunelle intricately weaves rhythm and movement, pushing the boundaries of traditional dance. The performers’ bodies, mostly naked and raw, amplified by their breath and cries, move through a sonic landscape composed by Philippe Brault and performed live by Laurier Rajotte on the piano, embodying a world in turmoil yet yearning for hope and humanity.

A particularly striking element is the immersive audio experience in the opening set, where a cast member swings a microphone close to the dancers, amplifying their physicality. This not only heightens the intimacy of the piece but also allows me to feel the dancers’ movements—every breath, every collision becomes a tactile experience. Very immersive. The live soundscape intertwines with the dancers’ raw physicality, drawing the audience deeper into the emotional urgency of the piece. This fusion of sight and sound creates a profound connection, turning the stage into a space where chaos, music, and movement converge in a shared sensory reality.

More ChatGPT: learning about how to learn about er, learning.

Or how I am spending too much time on ChatGPT. ChatGPT is now being incorporated into the Bing Search engine BTW.

And forget the hand-wringing about teachers being underpaid to read computer-generated text, or the oft-repeated end of the humanities; most dangerously, this will make it easier for people to write malware.

In terms of data literacy, literacy is literacy, right? So I asked ChatGPT to give me a simplified overview of what it does when a user types text into its interface. As an aside, I have been testing it with gibberish much akin to the Turkish thriller Hot Skull (see another post): communication (ergo civilization) depends on iterative translation of understandable context and content – the Chomskean reference in the first episode of the show, I assume.

As you can see below from ChatGPT’s response, it is straightforward semantic analysis, but this semiology is, to me, not the same as that for humans: machine semiotics ≠ human semiotics. But how it differs is what interests me most. Can machine dynamic semantics be trained solely using iterative utterance pairs? What can I learn about human learning from learning about machine learning trying to mimic human learning? Am I caught in a cyclical redundancy error?

When I receive a user's input, here is a more detailed explanation of the process I go through:
1. Tokenization: The first step is to split the input into individual words and punctuation marks, a process called tokenization. This allows me to analyze the input at the word level.
2. Part-of-speech tagging: Next, I assign a part of speech (such as noun, verb, adjective, etc.) to each word in the input, based on its definition and its role in the sentence.
3. Dependency parsing: I then analyze the relationships between the words in the input, taking into account their grammatical roles and the structure of the sentence. This allows me to understand the meaning of the sentence as a whole.
4. Sense disambiguation: If a word has multiple meanings, I use the context of the sentence and the rest of the input to determine which meaning is most appropriate.
5. Entity recognition: I also try to identify any named entities (proper nouns) in the input, such as people, places, and organizations.
6. Response generation: With all of this information, I can then use my knowledge and understanding of language to generate an appropriate response. This might involve looking up information, using my pre-defined responses to similar inputs, or using my understanding of the user's intentions to formulate a response.
In order to determine the role that a word is playing in a sentence (also known as its part of speech), I use my understanding of grammar and the relationships between words. For example, I know that verbs are often used to describe actions or states of being, while nouns are used to name people, places, or things. Adjectives are used to describe nouns, and adverbs are used to describe verbs, adjectives, and other adverbs. By analyzing the relationships between words in a sentence and their definitions, I can determine the part of speech of each word and understand the overall meaning of the sentence.
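Steps 1–3 and 5 correspond to a classical NLP pipeline that you can reproduce with off-the-shelf tooling. Here is a minimal sketch using the open-source spaCy library (an illustration of the textbook pipeline, not a claim about what ChatGPT actually runs internally):

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Alice visited the National Gallery of Canada in Ottawa.")

# Steps 1-3: tokenization, part-of-speech tagging, dependency parsing
for token in doc:
    print(token.text, token.pos_, token.dep_, "->", token.head.text)

# Step 5: named-entity recognition
for ent in doc.ents:
    print(ent.text, ent.label_)
```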

I followed up with a question about item 4, which seems to be a key step in how meaning gets assigned. How does it connect content with context?

Sense disambiguation is the process of identifying the correct meaning of a word in a given context. Words can often have multiple meanings, and it's important to choose the right one in order to understand the intended meaning of a sentence.
For example, the word "bass" can be a noun referring to a type of fish, or it can be a noun referring to a low-frequency sound or a range of notes in music. In order to disambiguate the sense of the word, it's necessary to consider the context in which it is used. If the word appears in a sentence about music, it is more likely to refer to a range of notes, while if it appears in a sentence about fishing, it is more likely to refer to a type of fish.
There are a number of techniques that I can use to disambiguate the sense of a word, including analyzing the surrounding words and phrases, considering the overall topic of the conversation, and using my general knowledge about the world. By using these techniques, I can choose the most appropriate meaning for a word and better understand the intended meaning of the sentence.
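The classical baseline for sense disambiguation is the Lesk algorithm, which picks the dictionary sense whose definition overlaps most with the surrounding words. A minimal sketch with NLTK’s implementation (again, an illustrative stand-in, not ChatGPT’s actual mechanism):

```python
# Assumes: pip install nltk, plus the wordnet and punkt downloads below.
from nltk import download, word_tokenize
from nltk.wsd import lesk

download("wordnet", quiet=True)
download("punkt", quiet=True)

for sent in ("He caught a bass while fishing in the river.",
             "Turn up the bass so we can feel the low notes."):
    sense = lesk(word_tokenize(sent), "bass", pos="n")
    # Lesk is a crude gloss-overlap heuristic, so its picks can surprise you
    print(sense.name(), "-", sense.definition())
```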

ChatGPT miscellany

I asked ChatGPT to come up with the plot for a cop buddy movie between a person (me) and an AI program. While it won’t win points for originality, I am really intrigued by the two “mishaps and misadventures”, i.e. the fire alarm and the vending machine.

I’m also interested that I didn’t provide the name or gender of the AI, but I wonder whether the word “sassy” would have been used if the chosen name weren’t a “feminine” one. This may be an example of learned bias being reinforced.

The movie follows the adventures of two unlikely partners: a bumbling police detective named Michael and a sassy artificial intelligence named Alice. When Michael is assigned to a high-profile case involving a string of robberies, he is paired with Alice, a newly developed AI designed to assist with investigations.
At first, Michael is skeptical of Alice's capabilities and unwilling to trust a machine to help with his work. But as they team up to track down the perpetrators, Michael starts to see that Alice is more than just a machine - she is a valuable partner with her own unique skills and insights.
As they work together to solve the case, Michael and Alice encounter a series of hilarious mishaps and misadventures, including accidentally setting off the building's fire alarm while trying to hack into a computer, and getting into a heated argument with a vending machine over the price of a snack.
Despite the challenges they face, Michael and Alice always manage to come out on top thanks to their combined wit and determination. In the end, they successfully crack the case and bring the perpetrators to justice, becoming the most unlikely crime-solving duo in the city. As they celebrate their victory, Michael realizes that he has learned to appreciate and trust Alice's unique abilities, and the two vow to continue working together to keep the city safe and laughing.

Pranayama and Meditation resources – One

Each time I practice I look to one or more of the following thoughts – what do I confront? What do I transform? What do I integrate or innovate? I always like to have a clearly stated intention when I begin.

I intend to assemble here and in subsequent posts, accessible objects that I use in my daily ritual.

I begin each day during brahma muhurta, one hour and thirty-six minutes before the rise of the sun. This is the penultimate muhurta, a period of 48 minutes – 1/30th of a day. Sunrise in Ottawa today was at 7:38.
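The arithmetic, for anyone who wants to set their own alarm (a quick sketch; the date is illustrative, built around today’s 7:38 sunrise):

```python
from datetime import datetime, timedelta

MUHURTA = timedelta(minutes=48)  # 1/30th of a 24-hour day

sunrise = datetime(2022, 12, 14, 7, 38)  # Ottawa's 7:38 sunrise; date illustrative

# Brahma muhurta is the penultimate muhurta of the night: it begins two
# muhurtas (1 h 36 min) before sunrise and ends one muhurta before it.
start = sunrise - 2 * MUHURTA
end = sunrise - MUHURTA
print(start.time(), "to", end.time())  # 06:02:00 to 06:50:00
```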

I will digress: this word muhurta in Sanskrit is made up of two sounds that refer to the immediate and to how this immediacy functions as an ordering – a perpetual now found in thinkers familiar in the Western tradition, from Heraclitus to Nietzsche to Latour.

I will order my bed and brush my teeth before I begin my wake-up routine of pranayama and meditation. I was told a long time ago that monks always start with ordering their space in preparation for practice which makes a lot of sense to me even though I am no monk. And while my meditation always begins during brahma muhurta, my pranayama may end after sunrise.

My go-to app in the morning for meditation is Oak. I don’t use it at any other time except for my morning routine. It is a habit. It’s free and does exactly what I needed it to do when I started this ritual while at home during the pandemic. It shows my progress (good for motivation, since I easily depreciate my accomplishments) and has both breathing and meditation timers, including box breathing. I don’t use it for Tummo sessions, which I will describe in another post.

Oak is pretty basic, but it has a nice GUI, and breathing shouldn’t be anything but basic anyway. iOS only. I did pay for the course, and it was worth the two bucks I paid, if for nothing else, as an offering to initiate my ritual.

My current streak. It’s on my old phone too, so I always bring it with me when I travel: even though the app is on my new devices, my streak record is only on this phone, and I cannot break the chain!*
*Breaking the chain refers to Jerry Seinfeld’s rule about being successful at things by using a visual indication – in his case, a calendar where he puts a big X each day he writes jokes. By having a visual cue, it was much easier for me to pay attention to creating rituals and habits. It is ritual, according to Oscar Wilde, that is the origin of religion and, for us moderns, the basis of Cognitive Behavioural Therapy too.

I was using pillows on the floor or a chair and decided to upgrade my meditation setup during the pandemic. Again, like breathing, this should be simple. I started on a chair and lying down. And even my zabuton from Half Moon, while comfortable, isn’t perfect, but all you really need to do is sit. Or lie down. Pretty simple. Don’t buy anything; all you need is your breath.

I still enjoy and watch this series: 7 Days to Mindfulness with Kirat Randhawa. I appreciate her calm demeanour in this series, along with her guidance on topics such as finding your centre and how clarity and curiosity work in the meditation process and in day-to-day life when I am not in meditation. I also watched Netflix’s Headspace Guide to Meditation, which was very accessible and enjoyable. It was a great beginner resource for me when I initially found it, a perpetual beginner.

I come back to these teachings once in a while and find new things in them, each time! Next up, my pranayama practice and books!

Links for the week of December 12, 2022

A lot of people are talking about the new algorithmic “chat bot” that will, according to some, herald the end of the humanities as we know them. And while I have found some super interesting things that the chat-bot can do with “simple” prompts, it is quite interesting to reverse engineer some things that you might take for granted. More to come on this as I tinker.

The link that I will suggest is that Stack Overflow has added a new, temporary policy on the use of ChatGPT, stating that, among other things, the requirement for citations and the community’s ability to judge how accurately an answer responds to the query posted are core to this action. It is this latter part that is most interesting to me: persons with subject matter expertise may interpret the initial question (stimulus) and suggest what the requestor really wanted to know with this line of inquiry. See Metaphoric and Metonymic knowledge in Roman Jakobson and his influence on Lacan and Barthes. Chomsky too, for a delicious connection with Hot Skull below.

And while I tend to avoid Canadian politics, I will suggest that the fact that Canadian business people actively sought and received public COVID aid and then didn’t spend it for its intended purpose of employment is pretty fucked up. I appreciate this type of reporting from Jacobin.

I finished two television series this week, both well worth the time. The first is Mike White’s White Lotus, season 2. Review on Jacobin here, which I found after reading the above-mentioned article about corruption. White Lotus is filled with lots of great characters, and the scenery in Sicily was awesome, with Mount Etna in the background as in my screenshot below. I am excited for season 3. And I hope to see more of Jack from Essex who, like the son in Season One, remains hopeful although broken. Will Greg make an appearance in season 3? Portia? I’m eagerly awaiting its arrival!

As I explained to a colleague, what I loved the most about White Lotus was that the initial episode (the beginning) was the end of the story, but I was constantly amazed at how things that I assumed were true as the story progressed only appeared that way because of how I initially interpreted key scenes from that first episode. Great storytelling that exposed many of my own biases and affectations, with various red herrings strewn about. It certainly kept me entertained.

And TIL that the writer, Mike White, was on the US reality tv show Survivor.

The White Lotus Season 2 – Mount Etna spewing lava

The second series is Hot Skull from Netflix Turkey. I loved the backdrop of Istanbul. A virus spread by speech that drives people into confusion and stupor? Certainly intriguing. They mention Noam Chomsky in the first or second episode, and pairing this pandemic with an ambitious leader exploiting the crisis for his own self-aggrandizement is a recipe for success.

Hot Skull

News from the Net – Last Week of November

I spend a lot of time online, as many do. Even this website is mostly just used for my own purposes; the WordPress front to this site is a vanitas project that I mostly used while I travelled, but who travels anymore? I still code in the background but have been struggling to make posting a habit.

To that end, perhaps I will add links to stories that I enjoyed reading. I told myself that this was a great way to collect data on what I am paying attention to. That was enough to motivate me. I think. I’ll start tagging posts too, which will help me create metrics for the data collection exercise.

First up, China. I have many unpublished writings on the geopolitics of China and Ukraine, mostly related to the historiography of Diplomatic History. My draft folder is replete with stories about the decline of empire but as I incorporate more and more history back into my “Cold War” reading list, I realize how important writers such as Odd Arne Westad are still changing my thinking about these issues as much as Melvyn Leffler did in grad school.

This Wired article about the protests in China is the most important piece from this week. COVID mobility restrictions are increasingly oppressive in many areas as officials struggle to immunize this vast population. Couple this with Xi’s inability to take full advantage of the transition of Hong Kong, and the death of Jiang Zemin, and it is no wonder that the ghosts of Tiananmen are on the mind. Funerals and earthquakes tend to be external events that motivate social change in China.

The vector to technology and the Great Firewall of China is important. The PRC has loosened mobility restrictions in Guangdong province, but doing so may fuel discontent in other regions as they learn of it. Can central authorities maintain message control in light of this “white paper” revolution?

This book on Basquiat is the book release of the week (I type this like it is a thing), but art history remains central to my reading and to how I spend (too much of) my time. I am working on an article on Philip Guston, and one thing that I am constantly reminded of in his late work is the street graffiti of New York. And while Basquiat is normally seen through the eyes of Warhol, Guston may be more appropriate in an asemic, object-oriented perspective. I’ll write more when I get this book. It will go on the list.

So two links for this week. More to come.

Zhuangzi

I settle my body like a tree stump.

I hold my arm like the branch of a withered tree; out of the vastness of heaven and earth, the multitude of the myriad things, I am only aware of cicada wings.

I don’t wander or waver, and I would not take all the myriad things in exchange for the wings of the cicada.

How could it be that I’d fail to succeed?