Learn Best By Programming

I’ve often thought/said/repeated that I “learn best by programming”. It seems true. The process is typically something like:

- pick a concept
- read enough about it to get the gist
- implement it in code
- it doesn’t work
- iterate: read other sources and update the code until it works
- share a tutorial on the concept, taught via a worked example in code (bonus!)

I did this during my PhD (60+ tech reports), when writing my first book (~50 optimization algorithms), with Machine Learning Mastery (1000+ tutorials, 20+ books on machine learning and deep learning), with Super Fast Python (500+ tutorials, 14+ books on Python concurrency), and on. ...

January 30, 2025 · 5 min · Jason Brownlee

Doomers, Gloomers, Zoomers, and Bloomers

I’m currently reading Reid Hoffman’s new book “Superagency”. In an early chapter, he condenses and summarizes the main ideological AI camps into 4 categories:

So far, at least four key constituencies have been informing the discourse: the Doomers, the Gloomers, the Zoomers, and the Bloomers

Here’s a definition of each in turn (summarized by gpt4o from a quote taken from the book):

- Doomers – People who believe AI poses an existential threat to humanity, especially if superintelligent, autonomous AI systems become misaligned with human values. They fear worst-case scenarios where AI could decide to destroy or enslave humanity. Their concerns often center on long-term risks, including AI-driven human extinction. ...

January 30, 2025 · 5 min · Jason Brownlee

Ideation Framework

I tripped over an ideation framework by John Rush shared in a tweet. Here is a local copy (tweets get deleted sometimes…): Here’s the tweet where it came from:

My ideation framework: (I used it to launch over 20 startups) pic.twitter.com/T0SwdBHLhL — John Rush (@johnrushx) January 23, 2025

It’s a great framework. Here’s a snippet of what it means, as described by our friend gpt4o:

- Start with pain points as the best source of high-quality ideas.
- Consider technology as a way to adapt existing solutions or create new applications.
- Cloning can be a valid strategy but typically lacks originality unless combined with significant differentiation.
- Thoughtfully evaluate ideas’ potential impact, innovativeness, and market fit.

Almost, gpt buddy, good try. ...

January 30, 2025 · 2 min · Jason Brownlee

Best Scores on Machine Learning Datasets

For about 20 years, I’ve been obsessed with the idea of getting the best score for a machine learning dataset. It started in postgrad when we would talk about learning algorithms and the problems and datasets we were all using to demonstrate that one algorithm was “better” than another. A good friend in our research group always pointed out Włodzisław Duch’s work. Back in the early 2000s, he had a website that listed a suite of the standard ML datasets and the algorithms + configs that achieved the best scores (as reported in published papers), and most importantly, the scores they achieved. ...

January 29, 2025 · 3 min · Jason Brownlee

Podcasts Make TV Better

I don’t watch a ton of TV anymore. And I’m not a consumer of YouTube. Never really liked it. Nevertheless, I like to watch one, and only one, hour of a TV series (or a contiguous block of a movie) as part of each night’s wind-down routine. I’m a huge consumer of podcasts. I’ve been consuming podcasts of movie reviews for a long time. Mostly movies I watched a long time ago. Often movies I won’t see and just want to get a vibe for the plot and quality. ...

January 29, 2025 · 3 min · Jason Brownlee

LLM Meta-Cognition and Exploring the Adjacent Possible

Andrej Karpathy has a wonderful tweet on what he calls learned “cognitive strategies” but I think is more generally referred to as “meta-cognition”. The piece I like is: …The models discover, in the process of trying to solve many diverse math/code/etc. problems, strategies that resemble the internal monologue of humans, which are very hard (/impossible) to directly program into the models. I call these “cognitive strategies” - things like approaching a problem from different angles, trying out different ideas, finding analogies, backtracking, re-examining, etc. Weird as it sounds, it’s plausible that LLMs can discover better ways of thinking, of solving problems, of connecting ideas across disciplines, and do so in a way we will find surprising, puzzling, but creative and brilliant in retrospect… ...

January 29, 2025 · 8 min · Jason Brownlee

How to Learn Machine Learning Algorithms (for Programmers)

I’ve written a ton of tutorials and books to help developers learn machine learning algorithms over the years. It’s not my area any longer, but if asked, my suggestion for a programmer (that learns via programming) is to code machine learning algorithms from scratch. Is this you? It is me. It’s how I learn best. Here, I really mean that we learn best by:

- reading about the thing
- writing code for the thing
- running the thing
- it doesn’t work (it never works first go)
- iterating until the implementation works (and you really-actually-deeply learn the thing)

This is how we programmers learned a ton of algorithms and data structures in our CS or SWE degree, or whatever. ...
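A from-scratch implementation in the spirit described above can be tiny. Here's a hypothetical sketch (not code from any of the books or tutorials mentioned) of a k-nearest-neighbors classifier in plain Python:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training points.

    `train` is a list of (features, label) pairs. A toy from-scratch
    sketch for learning purposes, not a production implementation.
    """
    # Euclidean distance from the query to every training point
    dists = [(math.dist(features, query), label) for features, label in train]
    dists.sort(key=lambda pair: pair[0])
    # Majority vote among the k closest labels
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Tiny worked example: two clusters, one query near cluster "a"
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b")]
print(knn_predict(train, (1.1, 0.9)))  # nearest neighbours are all "a"
```

The point of writing it yourself is that the inevitable first-attempt bugs (sorting by the wrong key, off-by-one on k) are exactly where the real learning happens.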

January 29, 2025 · 3 min · Jason Brownlee

Are LLMs Stuck In-Distribution?

Machine learning models have an IID assumption. That is, the data on which they are trained must be representative of the data on which they will later make predictions. The big question in AI is: are generative models capable of generating data out of distribution? Naively, I think no. But their data distribution is so vast that it’s hard to see at first. For example, an image generation model can interpolate within the space of almost all images on the net. ...
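The in-distribution vs. out-of-distribution point can be shown with a deliberately simple, hypothetical sketch: fit a straight line to y = x² over [0, 1], then query it both inside and outside that range. Interpolation error stays small; extrapolation error blows up:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# "Training distribution": x in [0, 1], target y = x^2
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]
m, b = fit_line(xs, ys)

def predict(x):
    return m * x + b

# In-distribution query: the line approximates x^2 reasonably well here
err_in = abs(predict(0.55) - 0.55 ** 2)
# Out-of-distribution query: the same line extrapolated far past the
# training range, so the error grows without bound
err_out = abs(predict(5.0) - 5.0 ** 2)
print(err_in, err_out)
```

The linear model is a stand-in for any learner: it captures the training region well enough, but it has no information about the function's behavior outside the data it saw.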

January 28, 2025 · 6 min · Jason Brownlee

AI Intuitive Physics

In the last two episodes of “The Cognitive Revolution” podcast, the host (Nathan Labenz) has mentioned AIs developing an intuition for the physics of a domain. Specifically: Material Progress: Developing AI’s Scientific Intuition, with Orbital Materials’ Jonathan & Tim Emergency Pod: Reinforcement Learning Works! Reflecting on Chinese Models DeepSeek-R1 and Kimi k1.5 By “physics”, he means the actual rules that limit physical domains, but we can generalize and say any domain. ...

January 28, 2025 · 4 min · Jason Brownlee

Selfish Software

I just read a new post by Edmar Ferreira titled: Selfish Software It’s his take on what we previously called “chat-driven programming”, but perhaps broader. I thought it would be user-focused, but his description is also engineer-focused, though more personal. His journey. Selfish software refers to writing code for yourself without any external customers in mind. I like the name, I guess. Perhaps the process of creating software this way we can call “chat-driven programming”, and the artifacts that result we can call “selfish software”, or what I have previously been calling “disposable software”. ...

January 28, 2025 · 2 min · Jason Brownlee