London 2024 – the Food!

Kiln, Palomar and Frog. I cannot really describe how good all the food that I ate was. From the fish and chips to the Thai street food, everything was so good. These are the highlights!

Kiln was so good. I was speaking with a couple out front before it opened and, on their recommendation, opted to sit kitchen side. Someone else said to eat the glass noodles, and they were so good. The ox heart was surprisingly tasty, and that kale disk had no right being as good as it was.

Palomar’s cuisine was also so good. Middle Eastern and the various dips, sauces, and vegetables were just so damn tasty. I savoured each and every bite. The kitchen was nothing but hot coals and clay pots. My mouth was on fire with the hot spices and I was there for it!

Frog was a spectacular experience! I was sitting kitchen side, so it was great being served by the chef. He explained each dish as he served it, and the storytelling was almost as good as the food. Almost. His philosophy of sustainability, and how he wants to redefine food “waste,” is intriguing to say the least. The whole experience was amazing, and the attention to detail was exceptional, right down to the takeaway box of sweets.

I was spoiled for choice in London and I cannot wait to get back and enjoy more.

London 2024 – Friday

Today was Westminster Abbey and La Boheme! Westminster Abbey is a beautiful space! There was a cool augmented reality exhibition about the reconstruction of Notre Dame in Paris. You walked around with an iPad and used QR codes to visit information packages about the fire and the history of the church. La Boheme was fantastic. All three shows at the Royal Opera House were just incredible. The first few photos are from the window of my hotel room with the Shard in the distance.

London 2024 – Thursday

I’m pretty sure that this was the first day that I was feeling less jet lagged. I was so happy with my hotel choice. The Clermont at Charing Cross was located so perfectly that I was able to access everything that I wanted. I think the furthest that I went was the Tate Modern, but even that was only 25 minutes or so away.

I visited the Roman Mithraeum, which I loved: an archaeological exhibit on the site of a Roman temple from two millennia ago. The cult of Mithras was imported to Rome, and its Zoroastrian heritage is still present. On the way I walked past St. Paul’s, which is quite an architectural specimen, as were many of the buildings in the area. From there I crossed the Millennium Bridge, where my seat mate from the ballet the night before had mentioned that I could find some “micro art” on the footbridge. Sure enough, there were hundreds! I walked the southern shoreline hoping for a good photo with the reflections in the water.

In the evening I went to a presentation at the University of London: a book launch for a new title, Legacies of Migration, where a few of the chapter authors discussed their topics. One of the chapters was about Van Gogh and the year he spent in London in his early 20s. The main thesis is, as you can imagine, that migration is a constant in London and that the city benefits from its multicultural past and present. I spent the afternoon at the British Museum, which is beside the building with the lecture hall.

London 2024 – Wednesday

Did I mention that I was tired? London is just such an exciting city. There is so much to see and explore! I went in search of the Noses of Soho. I ended up finding 4… or 3 and a nail, before I grabbed lunch and some sweets. Then the National Gallery, for a few hours. The National Gallery collection is like visiting an art history textbook. Room after room of amazing art. Masaccio, Rembrandt, della Francesca, and Titian around every corner.

Manon was exquisite! I’m certainly no expert on ballet but I loved it. It was a very moving performance and the entire experience at the Royal Opera House was such a treat.

London 2024 – Tuesday

I was tired by day 2. Excited and tired. I’m happy that breakfast was great to start the day. I probably had too much coffee, but I had a big day ahead of me! The genesis of my trip was the Philip Guston exhibition at the Tate Modern. It was pretty amazing entering the first room and seeing his paintings in real life. I had only seen the one in the archives downstairs at our own National Gallery, and in person their presence is felt in the texture of the brushstrokes and the size of the works. These are his murals, and I wandered back and forth, room to room, just awed by seeing this collection. I stopped at the Courtauld for the Frank Auerbach show of his charcoals. Auerbach’s work, which I also saw a few years back at the Tate Britain show, is architectural in the way that he applies form; perhaps sculptural is a better word? The Courtauld also has an amazing collection of the Impressionists, and I lingered in front of the Cézanne for too long, perhaps. And after a phenomenal dinner (I’ll make a separate post just for food photos), I wandered the Parliament district in search of some black and white photo opportunities. My camera loved London.

London 2024 Day by Day – Monday

I arrived early in the morning and was grateful for an early check-in at the hotel. I took a friend’s advice and went to the Churchill Museum and wandered the sunny streets of London. Tosca was fantastic! I saw it in Ottawa with a friend years ago and was looking forward to this production.

London 2024 – some iPhone images

Wow! What a trip. It was only 5 days but I tried to pack a lot into it. I have imported all of my camera photos but always start with my mobile images since they are fewer. The flight, hotel, food, culture and entertainment were fantastic. And the Royal Opera House is a beautiful venue! Tosca, Manon and La Boheme were so beautiful to experience in a live setting.

The Evolving Landscape of Language Models: Exploring Reasoning, Learning, and Future Horizons

Rumination on Q* and what it could potentially imply.

Q-learning and STaR are, I think, what OpenAI is talking about when it references Q*.

Language models’ capacity for nuanced reasoning has been a focal point of research. Enter the Self-Taught Reasoner (STaR), a groundbreaking technique that augments language models by integrating sparse rationale examples with vast datasets. This innovative approach fosters an iterative learning process, refining models to generate coherent chains of thought for diverse problem-solving tasks.

See STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning for more details.

The essence of STaR lies in its ability to fine-tune models based on the correctness of their generated rationales. This iterative refinement loop catapults language models not only to significant performance improvements but also to rivaling larger, more resource-intensive models on complex tasks like CommonsenseQA. Does this mean that the model has surpassed human results? It climbed from 56% on the original trials toward 89%, the human performance, or perhaps beyond?

STaR’s success embodies a pivotal shift—a leap forward in language models’ autonomous reasoning. It sets a precedent for future advancements in bridging the gap between artificial intelligence and human-like cognition, redefining the boundaries of what these models can achieve.
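As a rough illustration, the filter-then-fine-tune loop at STaR’s core can be sketched as follows. This is a minimal sketch, not the paper’s implementation: `generate_rationale` and `finetune` are hypothetical stand-ins for the language-model sampling and fine-tuning steps, and the real method adds a “rationalization” fallback with hints for problems the model gets wrong.

```python
# Minimal sketch of the STaR outer loop (illustrative, not the paper's code).
# `generate_rationale` and `finetune` are hypothetical callables standing in
# for LLM sampling and fine-tuning.

def star_iteration(model, problems, answers, generate_rationale, finetune):
    """One STaR round: keep only rationales whose final answer is correct,
    then fine-tune the model on those (problem, rationale, answer) triples."""
    kept = []
    for problem, gold in zip(problems, answers):
        rationale, predicted = generate_rationale(model, problem)
        if predicted == gold:          # filter by answer correctness
            kept.append((problem, rationale, gold))
    return finetune(model, kept)       # next model trains on its own hits

def star(model, problems, answers, generate_rationale, finetune, rounds=3):
    """Repeat the generate-filter-finetune cycle for several rounds."""
    for _ in range(rounds):
        model = star_iteration(model, problems, answers,
                               generate_rationale, finetune)
    return model
```

The key design point is that the model’s own correct rationales become its next training set, so each round can solve problems the previous round could not.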

Beyond STaR’s iterative prowess, insights gleaned from Q-learning and Markov chains provide critical guidance for scaling language models’ performance. Studies leveraging these concepts reveal a foreseeable decline in model performance as problem complexity increases.

Q-learning is a fundamental concept in reinforcement learning, a type of machine learning. It involves an algorithm that enables an agent to make decisions in an environment to achieve a specific goal. Through trial and error, Q-learning helps the agent learn the best action to take in a given state to maximize its cumulative reward. It does this by updating a Q-table, which stores the expected future rewards for each action in every possible state. Over time, the agent refines its actions based on the values in this table, gradually optimizing its decision-making process in complex environments without prior knowledge of the environment’s dynamics.
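To make the Q-table update concrete, here is a minimal, self-contained sketch on a toy task. The five-state “walk right to the goal” world, the `step` function, and the hyperparameters are all illustrative inventions, not from any particular library; the update rule itself is the standard tabular Q-learning one.

```python
import random

# Toy environment: states 0..4 on a line; reaching state 4 yields reward 1.
# Actions: 0 = left, 1 = right. Everything here is illustrative.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: Q[state][action]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

random.seed(0)
for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the table, occasionally explore.
        if random.random() < EPSILON:
            action = random.choice([0, 1])
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q toward reward + discounted best future value.
        target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

# After training, "right" should dominate in every non-goal state.
policy = ["right" if Q[s][1] >= Q[s][0] else "left" for s in range(GOAL)]
print(policy)
```

Note that the agent never sees the environment’s transition rules; the table alone, refined by trial and error, encodes the optimal behaviour.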

An aside – the implications of these insights underscore the necessity of strategically balancing computational resources during both training and testing phases. This balancing act becomes imperative for ensuring sustained model performance across a spectrum of intricate problem landscapes. The parallel nature of these once linear processes is where my interests lie*.

* For those asking for clarification, this has to do with Douglas Hofstadter’s Gödel, Escher, Bach, which discusses a cybernetic hierarchy: a “stack” of instructions that carry out functions. For Hofstadter, a program that rewrites itself violates this hierarchy.

Consider a scenario where language models seamlessly engage in real-time problem-solving during emergencies, prioritizing resource allocation akin to a human decision-making process. These insights lay the groundwork for future innovations, enabling language models to navigate diverse problem spaces with enhanced adaptability and efficacy. But how far off is that future? What defines the constantly shifting reward model? How does it allocate rewards?

Language models, once confined to simple word predictions and text generation, have undergone a paradigm shift. They now navigate intricate reasoning tasks, delve into problem-solving domains, and strive towards human-like cognitive capabilities.

The journey towards refining reasoning capabilities extends into the domain of mathematical problem-solving—a seemingly straightforward yet challenging realm for language models. The GSM8K dataset encapsulates this complexity, revealing the struggle even formidable transformer models face in navigating grade school math problems.

To overcome this hurdle, researchers advocate for training verifiers to scrutinize model-generated solutions. The success of these verification mechanisms showcases their potency in augmenting model performance, especially in handling diverse problem distributions. Verification essentially increases not only the frequency of rewards but also their distribution across the problem space, a clustering of rewards. That makes sense; it mirrors real-world learning.
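At inference time, the verifier idea reduces to “best-of-n” reranking: sample several candidate solutions and keep the one the verifier scores highest. A minimal sketch, where `sampler` and `verifier` are hypothetical stand-ins for the generator model and the trained verifier:

```python
# Best-of-n reranking sketch. `sampler` and `verifier` are placeholder
# callables: sampler(problem) -> candidate solution,
# verifier(problem, solution) -> score (higher means more trusted).

def best_of_n(problem, sampler, verifier, n=16):
    """Sample n candidate solutions; return the verifier's top pick."""
    candidates = [sampler(problem) for _ in range(n)]
    return max(candidates, key=lambda sol: verifier(problem, sol))
```

The design choice here is that the generator and the judge are decoupled: even a noisy generator becomes useful once a separate model can reliably rank its outputs.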

In the pursuit of refining reasoning capabilities, the exploration of supervision techniques emerges as a pivotal aspect. A comprehensive investigation into outcome and process supervision reveals the latter’s superiority in training models for intricate problem domains: checking each step of a process, and rewarding correct steps, reinforces accuracy.

Process supervision, with its meticulous feedback mechanism for intermediate reasoning steps, exhibits unparalleled reliability and precision. When coupled with active learning methodologies, exemplified by the release of PRM800K, this supervision approach propels related research endeavors, promising a robust foundation for future advancements.
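The contrast between the two supervision signals can be shown schematically. Representing a solution as a list of (step, is_correct) pairs is an assumption made purely for illustration, not the PRM800K data format:

```python
# Schematic contrast between outcome and process supervision.
# A "solution" here is a list of (step_text, is_correct) pairs; this
# representation is an illustrative assumption, not the real dataset format.

def outcome_reward(steps, final_correct):
    """Outcome supervision: one scalar for the whole chain, based only on
    whether the final answer is right."""
    return 1.0 if final_correct else 0.0

def process_reward(steps):
    """Process supervision: one signal per intermediate step, so a chain
    that goes wrong midway is penalized exactly where it goes wrong."""
    return [1.0 if ok else 0.0 for _, ok in steps]
```

The difference in feedback density is the point: outcome supervision cannot tell a lucky guess from a sound derivation, while process supervision localizes the error to a step.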

Consider a scenario where these models assist in personalized education, adapting to individual learning styles, or co-create narratives alongside authors, blurring the lines between artificial and human creativity. The potential for language models to revolutionize domains extends far beyond what we envision today.

Imagine language models not just deciphering language but engaging in philosophical discussions about complex moral dilemmas, or even participating in real-time collaborative problem-solving during crises. And I think that a lot of the “Crossing of the Rubicon” discussion in the miasma of the last week at OpenAI revolves around the fact that, now that the capability exists, the ethical “wrapper” is an afterthought, but an imperative one. Models’ ability to actively engage in profound ethical debates remains a nascent area.

Envision language models not just decoding textual content but understanding the depth and nuances of moral quandaries. Imagine a scenario where a language model is posed with a complex moral dilemma, such as the classic “trolley problem,” where decisions involve choosing between utilitarian principles and individual rights. The model, armed with extensive knowledge of ethical theories and moral reasoning, would not only parse the scenario but engage in a dialogue, weighing the pros and cons of different ethical frameworks and articulating its stance on the matter.

For instance, such a model could explore various ethical perspectives—utilitarianism, deontology, virtue ethics, or ethical relativism—articulating arguments, counterarguments, and the implications of each stance. It could draw from historical ethical debates, ethical principles, and even contemporary ethical dilemmas to contextualize its responses.

The implications of this extend far beyond theoretical discourse. Language models proficient in ethical reasoning could aid in decision-making processes across diverse fields. They could assist in ethical assessments in various industries, offer guidance in moral reasoning to individuals facing ethical quandaries, or serve as a tool for educators to facilitate discussions on ethics and morality.

However, such advancements raise profound questions and challenges. Ethical reasoning is inherently complex and often involves subjective considerations, societal norms, cultural context, and emotional intelligence—factors that are intricate for machines to grasp fully. The ethical development of such models would necessitate a deep understanding of not just logic but empathy, context, and the ability to comprehend the subjective nature of human ethical reasoning.

Moreover, the ethical implications of deploying such models into real-world decision-making contexts warrant careful consideration. How would we ensure the models’ reasoning aligns with societal values? How do we mitigate biases or unintended consequences in their ethical assessments?

Future innovations might unveil models that not only traverse language intricacies but also navigate philosophical landscapes, challenging societal norms, and catalyzing groundbreaking innovations across diverse domains. These reflections offer a glimpse into a future where language models not only emulate human-like reasoning but also shape the realms they interact with.

The landscape of language models has traversed a remarkable journey—from simple text generation to sophisticated reasoning and problem-solving. The advent of methodologies like STaR, insights from Q-learning and Markov chains, and the exploration of supervision techniques have thrust these models into realms once deemed unattainable.

As these advancements continue, the horizon of possibilities expands, offering a glimpse into a future where language models not only comprehend language intricacies but also engage in profound philosophical discourse, challenge societal norms, and catalyze innovative breakthroughs. The journey of language models is an ongoing exploration, promising exciting possibilities and transformative impact across various domains.

Naples, Palermo and Rome with my Holga pinhole 50mm

I decided to rummage through my photos for the ones that I took with my Holga lens. The grit of Naples and Palermo, let alone the Pantheon on a dark and rainy night in Rome, made for some nice black and white pictures.