Conceptual Blending and LLMs

I tripped over the concept of “Conceptual Blending” (thanks Kat). From Wikipedia: according to this theory, elements and vital relations from diverse scenarios are “blended” in a subconscious process, which is assumed to be ubiquitous to everyday thought and language. This again relates back to ensembles/multiple perspectives, and connects to the bag-of-analogies thinking from the other day. The best presentation of the idea is the book “The Way We Think: Conceptual Blending and The Mind’s Hidden Complexities” by Gilles Fauconnier and Mark Turner. ...

January 23, 2025 · 6 min · Jason Brownlee

Incomprehensible Artifacts From Our AIs

LLMs, or models like them, are going to start giving us artifacts that we cannot (easily) comprehend. We’ve been in this boat for a while, first with stochastic optimization algorithms (the classic NASA evolved antenna), and later with automated theorem proving. Algorithms optimize for an objective and we get a solution that looks like it solves the problem we want, but it’s opaque (strange, large, complex, etc.). This came to mind because of a tweet the other day (January 20, 2025) by George Hotz. ...

January 23, 2025 · 4 min · Jason Brownlee

LLM Prompt Optimization

LLM prompts matter. The quality and nature of the prompt influence the quality and nature of the response. There is a space of candidate input prompts and a corresponding map of LLM responses, some of which are better/worse for whatever problem we are working on. We can frame a black-box optimization problem that proposes and tunes candidate prompts for an LLM to optimize a target response. In effect, we would be finding good/better starting points in latent space from which to retrieve the desired output. ...
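To make the framing concrete, here is a minimal sketch of prompt search as stochastic hill climbing. It assumes hypothetical query_llm() and score_response() functions standing in for a real LLM API call and a task-specific quality metric (both stubbed here so the example runs), and a mutation step that just appends instruction phrases.

```python
import random

def query_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real LLM API call here.
    return f"(model response to: {prompt})"

def score_response(response: str) -> float:
    # Hypothetical placeholder objective, e.g. keyword coverage or a rubric score.
    return float(len(response))

def mutate(prompt: str, phrases: list[str]) -> str:
    # Propose a neighboring prompt by appending a random instruction phrase.
    return prompt + " " + random.choice(phrases)

def optimize_prompt(seed: str, phrases: list[str], iterations: int = 30) -> str:
    # Simple stochastic hill climbing over candidate prompts.
    best, best_score = seed, score_response(query_llm(seed))
    for _ in range(iterations):
        candidate = mutate(best, phrases)
        score = score_response(query_llm(candidate))
        if score > best_score:
            best, best_score = candidate, score
    return best

phrases = ["Think step by step.", "Answer concisely.", "Cite your sources."]
print(optimize_prompt("Summarize the quarterly report.", phrases))
```

Anything smarter (evolutionary search, Bayesian optimization, gradient-free tuning of soft prompts) slots into the same loop; the key pieces are a candidate generator and a response scorer.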

January 23, 2025 · 4 min · Jason Brownlee

Abstraction and Analogies with LLMs

I was listening to a recent episode of the Machine Learning Street Talk podcast (a very fine podcast!). Specifically, the episode “How Do AI Models Actually Think?” with Laura Ruis. Fantastic episode! Just great. I need a re-listen. Early in the conversation they touch on (I’m paraphrasing, probably wrongly) whether LLMs think/reason as Douglas Hofstadter suggests, abstracting via a collection of analogies. It’s agreed they do. The host touches on Hofstadter’s often-repeated quote: ...

January 22, 2025 · 9 min · Jason Brownlee

Gyms For All The Skills That LLMs Are Eating?

Use it or lose it. We used to do manual labor, which had the dual benefit of getting the things we needed (hunt->food, work->money, etc.) and keeping us in reasonable physical (and mental) condition. No longer for many of us, so our bodies atrophy. To fight the entropy, many of us go to the gym. We simulate the labor we used to do in order to keep our bodies in good condition and reap the rewards (energy, look/feel better, longer life, etc.). ...

January 22, 2025 · 4 min · Jason Brownlee

LLMs as Fitness Functions in Stochastic Optimization

The hard part of stochastic optimization is the evaluation function. You get whatever you’re optimizing for or toward, and it’s always a trade-off, even if you can’t see it at first. This got me thinking: there must be tons of problems that we cannot optimize easily because we don’t have good (cheap) fitness functions, and where an LLM could step in. I know I’ve read papers on something like this in the open-endedness literature. ...
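As a rough sketch of what “LLM as fitness function” could look like, here is a (1+1) evolutionary loop where the model scores each candidate. llm_rate() is a hypothetical stand-in for a call like “Rate this tagline 1-10 for clarity, reply with only a number”; it is stubbed with a toy heuristic so the example runs.

```python
import random
import string

def llm_rate(candidate: str) -> float:
    # Hypothetical stand-in for an LLM scoring call; a toy heuristic here.
    return -abs(len(candidate) - 20)

def mutate(candidate: str) -> str:
    # Flip, insert, or delete one character at random.
    chars = list(candidate)
    op = random.choice(["flip", "insert", "delete"]) if chars else "insert"
    i = random.randrange(len(chars)) if chars else 0
    alphabet = string.ascii_lowercase + " "
    if op == "flip":
        chars[i] = random.choice(alphabet)
    elif op == "insert":
        chars.insert(i, random.choice(alphabet))
    elif len(chars) > 1:
        del chars[i]
    return "".join(chars)

def evolve(seed: str, generations: int = 200) -> str:
    # (1+1) evolution strategy: keep the child only if it scores at least as well.
    parent, parent_fit = seed, llm_rate(seed)
    for _ in range(generations):
        child = mutate(parent)
        child_fit = llm_rate(child)
        if child_fit >= parent_fit:
            parent, parent_fit = child, child_fit
    return parent

print(evolve("a rough first tagline"))
```

The expensive part would be the LLM call per candidate, so caching scores and batching evaluations would matter in practice.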

January 22, 2025 · 5 min · Jason Brownlee

Mountain Climbers Collect Peaks

I was reading Perdurabo about Aleister Crowley. In his youth he was an accomplished mountain climber, and the book touches on a suite of his climbing achievements in the early chapters. This got me thinking. Mountain climbing is a challenging hobby where the climbers “collect” peaks. That is, they pick hard mountains to climb, which may or may not have been climbed by others, and climb them for fun. It is about personal accomplishment. ...

January 22, 2025 · 5 min · Jason Brownlee

That Scene From "The Fountainhead"

I’ve probably read Ayn Rand’s The Fountainhead about a dozen times over the years. I’m not really into the philosophy of Objectivism, but I’m a fan of the individualism in the story. Really, I’m a sucker for simple hero stories (think Ender’s Game, Dune, etc.). Not sure I’ll read the book again for a while. It’s comforting to re-read, but I’ve had enough for now. Anyway, there’s a scene that I think back on often. ...

January 22, 2025 · 2 min · Jason Brownlee

Misophonia

I have Misophonia. At least, I strongly suspect I do, self-diagnosed (!). Misophonia is a neurological condition where specific sounds trigger strong negative emotional and physiological reactions. Common trigger sounds include chewing, breathing, or repetitive noises. People with misophonia may experience intense anger, anxiety, or panic when hearing these triggers, often leading them to avoid situations where they might encounter these sounds. The main daily triggers for me are: chewing, breathing, tapping, loud walking (stomping/scraping/etc.), and lisps. Chewing/slurping/eating noises are the worst, though. Every meal is hard. ...

January 21, 2025 · 4 min · Jason Brownlee

LLM-Based Recommendation Engine

It occurs to me that we can use LLMs to give better recommendations. I want this for books. The Goodreads recommendations are crap. The many “new hot books” newsletters I subscribe to are crap. I want a daily email with highly specific recommendations made by an LLM based on: the books I am reading and have just read, the authors of those books, the genres of those books, the general areas of interest/study of those books, and the specific details in my background. I typically read on topics that interest me right now and I typically read a cluster of books on a topic. ...
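A minimal sketch of how that daily prompt might be assembled is below. query_llm() is a hypothetical placeholder for whatever LLM API is used, and the reading history would come from wherever it is logged (a notes file, a Goodreads export, etc.).

```python
def query_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real LLM API call here.
    return f"(recommendations for a prompt of {len(prompt)} characters)"

def build_prompt(recent_books, interests, background):
    # Assemble reading history and personal profile into one recommendation request.
    lines = [
        "Recommend three specific books I have not read,",
        "with one sentence each on why it fits.",
        "Recently read (title, author, genre):",
    ]
    lines += [f"- {title} by {author} ({genre})" for title, author, genre in recent_books]
    lines.append(f"Current areas of interest: {', '.join(interests)}")
    lines.append(f"Background: {background}")
    return "\n".join(lines)

recent = [("Perdurabo", "Richard Kaczynski", "biography")]
prompt = build_prompt(recent, ["conceptual blending", "open-endedness"], "ML practitioner")
print(query_llm(prompt))  # the response would become the body of the daily email
```

Scheduling it as a daily job and piping the response into an email sender is the easy part; the interesting work is keeping the reading history and interests up to date.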

January 20, 2025 · 1 min · Jason Brownlee