Mountain Climbers Collect Peaks

I was reading Perdurabo, a biography of Aleister Crowley. In his youth he was an accomplished mountain climber, and the book touches on a number of his climbing achievements in the early chapters. This got me thinking. Mountain climbing is a challenging hobby where climbers “collect” peaks. That is, they pick hard mountains to climb, that may or may not have been climbed by others, and climb them for fun. It is about personal accomplishment. ...

January 22, 2025 · 5 min · Jason Brownlee

That Scene From "The Fountainhead"

I’ve probably read Ayn Rand’s The Fountainhead about a dozen times over the years. I’m not really into the philosophy of Objectivism, but I’m a fan of the individualism in the story. Really, I’m a sucker for simple hero stories (think Ender’s Game, Dune, etc.). Not sure I’ll read the book again for a while. It’s comforting to re-read, but I’ve had enough for now. Anyway, there’s a scene that I think back on often. ...

January 22, 2025 · 2 min · Jason Brownlee

Misophonia

I have misophonia. At least, I strongly suspect I do, self-diagnosed (!). Misophonia is a neurological condition where specific sounds trigger strong negative emotional and physiological reactions. Common trigger sounds include chewing, breathing, or repetitive noises. People with misophonia may experience intense anger, anxiety, or panic when hearing these triggers, often leading them to avoid situations where they might encounter these sounds. The main daily triggers for me are: chewing, breathing, tapping, loud walking (stomping/scraping/etc.), and lisps. Chewing/slurping/eating noises are the worst, though. Every meal is hard. ...

January 21, 2025 · 4 min · Jason Brownlee

LLM-Based Recommendation Engine

It occurs to me that we can use LLMs to give better recommendations. I want this for books. The Goodreads recommendations are crap. The pile of “new hot books” newsletters I subscribe to is crap. I want a daily email with highly specific recommendations made by an LLM based on: the books I am reading and have just read, the authors of those books, the genres of those books, the general areas of interest/study of those books, and the specific details in my background. I typically read on topics that interest me right now, and I typically read a cluster of books on a topic. ...
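A minimal sketch of the idea, assuming the OpenAI Python client as one possible backend; the model name and the book list are placeholders, not a working product:

```python
# Sketch only: build a prompt from recent reading, ask an LLM for specific
# recommendations. Model name and book list are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

recent_books = [
    "Perdurabo by Richard Kaczynski",
    "Mind-Reach by Russell Targ and Harold Puthoff",
]

prompt = (
    "I recently read:\n- " + "\n- ".join(recent_books) + "\n\n"
    "Based on these books, their authors, their genres, and the areas of "
    "interest they imply, recommend 3 specific books I have not listed, "
    "with one sentence each on why it fits."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Run something like this from a daily cron job that emails the output and you have roughly the newsletter described.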

January 20, 2025 · 1 min · Jason Brownlee

Mind-Reach and PSI Debunking

I’m reading “Mind-Reach” by Russell Targ and Harold Puthoff on Kindle. It’s ostensibly on the topic of remote viewing as investigated by two physicists at SRI International in the 70s. My default, like most, is that it’s total bunk. The investigations are presented somewhat rigorously. They are trying. But each time they drop stats, it’s an odds estimate (e.g. this result is 1:1,000,000). My spidey sense tells me “p-hacking”, and: why aren’t you reporting the negative results as well? ...
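To make the selection-effect worry concrete, here is a toy simulation (mine, not from the book): run many null experiments, quote only the most extreme one, and the stand-alone odds look far more impressive than they are.

```python
# Toy illustration of selective reporting: no real effect anywhere, yet the
# best of many experiments quotes impressive "odds against chance".
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n_experiments = 1000   # experiments run (and mostly unreported)
n_trials = 20          # fair coin flips per experiment

heads = rng.binomial(n_trials, 0.5, size=n_experiments)

# Cherry-pick the most extreme run and quote its stand-alone odds.
best = int(heads.max())
p_one = binom.sf(best - 1, n_trials, 0.5)  # P(>= best heads) in one experiment
print(f"best run: {best}/{n_trials} heads, stand-alone odds ~ 1:{round(1 / p_one):,}")

# But across all 1000 experiments, at least one such run is expected by chance.
p_any = 1 - (1 - p_one) ** n_experiments
print(f"chance of seeing such a run somewhere: {p_any:.2f}")
```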

January 20, 2025 · 5 min · Jason Brownlee

Tech vs Spiritualism

I had an idea: it seems that with the rise of interest/hype in “tech” we see a similar rise in “spiritualism” (for lack of a better word). I recall that during the late 1990s, with the rise of the internet/dot-com boom, aliens/UFOs were all the rage and The X-Files was one of the biggest shows on TV. Hmmm. I see now, with the AI-boom hype/LLMs/ChatGPT, a rise in conspiracy theories, UAPs, the Telepathy Tapes, Graham Hancock, etc. ...

January 20, 2025 · 4 min · Jason Brownlee

AI/LLM Diminishing Returns

Tyler Cowen was interviewed again by Dwarkesh Patel, recently released (Jan 10): Tyler Cowen - The #1 Bottleneck to AI progress Is Humans. I listened to it on release and a number of points got me. One was Cowen’s comment on diminishing returns when it comes to AI. Here’s a cartoon of diminishing returns from Wikipedia, full credit. He may have made this point before/elsewhere, but it’s the first time I’ve tripped over it. ...

January 12, 2025 · 4 min · Jason Brownlee

AutoML Has A Marketing Problem

I think AutoML is great. I think most people should be using it, even data scientists and machine learning engineers. And not just to “get started”, but all the time, on all things. Why? It’s probably better than you are. I recall Nick Erickson (author of AutoGluon) commenting in one of his videos/interviews that AutoML/AutoGluon is as good as or better than the average data scientist. That the bar for AutoML to clear is low, much lower than most people think. I don’t have a quote at hand, sorry. ...
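For context, the AutoGluon workflow is about this short (a minimal sketch; the file names and the “target” label column are placeholders):

```python
# Minimal AutoGluon sketch; file paths and the "target" label are placeholders.
from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("train.csv")
predictor = TabularPredictor(label="target").fit(train)

test = TabularDataset("test.csv")
predictions = predictor.predict(test)
print(predictor.leaderboard(test))  # compare the models AutoGluon trained
```

That one `fit()` call trains and ensembles a suite of models that would take a person days to tune by hand, which is the whole point.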

January 12, 2025 · 4 min · Jason Brownlee

Quake2 Bot Archive?!

I’m a hobby Quake archivist. Over the last few years, I’ve been maintaining the Quake Bot Archive. It is/was a rewarding project, filled with nostalgia and with writing code to scrape+index+search the Internet Archive. I think I’ve taken the project pretty close to the edge. I emailed every single bot author I could find in all the old docs (think a massive spreadsheet of email addresses and current follow-up status, a client management system basically). I tracked down modern contact info for most bot authors and reached out. I carefully researched the history of most bots to ensure I knew exactly what files were released (e.g. quake bot essays, quake bot chronology, quake bot genealogy, and much more). I maintained wishlists of wanted files and wishlists of broken URLs where wanted files were known to exist at one time. I kept expanding the scope from bots, to mods that had bots, to proxy bots/aimbots/server-side bots, and on. I posted to the community many times, kindly asking the old timers to check their old backup CDs. I searched usenet archives, mail archives, internet archives, warez archives, shovelware archives, etc. I indexed all files on all old Quake addon CDs. I indexed the files on all of the old Quake webpages on the Internet Archive. And more… I used the same methods to build other archives, like the official Quake archive, which led me to many more helpful resources and generated many more ideas on how/where to search. ...
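The scraping piece is less exotic than it sounds. Here is a minimal sketch of the kind of code involved (my illustration, not the archive’s actual tooling), using the Wayback Machine’s CDX API; the URL pattern is a placeholder:

```python
# Sketch: enumerate archived snapshots via the Wayback Machine CDX API.
# The URL pattern is a placeholder; the endpoint and parameters are real.
import requests

def list_snapshots(url_pattern: str) -> list[list[str]]:
    params = {
        "url": url_pattern,
        "output": "json",
        "fl": "timestamp,original",
        "collapse": "urlkey",        # one row per unique URL
        "filter": "statuscode:200",  # skip redirects and errors
    }
    resp = requests.get("https://web.archive.org/cdx/search/cdx",
                        params=params, timeout=30)
    resp.raise_for_status()
    rows = resp.json()
    return rows[1:]  # first row is the field-name header

# Print replayable Wayback URLs for every captured page under a site.
for timestamp, original in list_snapshots("example.com/quake/*"):
    print(f"https://web.archive.org/web/{timestamp}/{original}")
```

From there it is a matter of downloading, indexing, and diffing the results against the wishlists.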

January 12, 2025 · 3 min · Jason Brownlee

Stacking Is Great

Stacking, or Stacked Generalization, is an ensemble machine learning algorithm. I’ve been obsessed with it since I discovered it as part of Weka in the late 1990s and read about it in the Weka “Data Mining” book at the same time. From the 2016 edition: Stacked generalization, or stacking for short, is a different way of combining multiple models. Although developed some years ago, it is less widely mentioned in the machine learning literature than bagging and boosting, partly because it is difficult to analyze theoretically and partly because there is no generally accepted best way of doing it—the basic idea can be applied in many different variations. ...
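For the flavor of it, here is a minimal stacking sketch in scikit-learn (my example; the base models and dataset are arbitrary illustrative choices):

```python
# Minimal stacking sketch: base learners feed a meta-model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Level-0 models: diverse base learners whose out-of-fold predictions
# become the training features for the level-1 meta-model.
estimators = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
]

# Level-1 meta-model learns how best to combine the base predictions.
stack = StackingClassifier(
    estimators=estimators,
    final_estimator=LogisticRegression(),
    cv=5,
)

print(cross_val_score(stack, X, y, cv=5).mean())
```

The key detail is that the meta-model is trained on out-of-fold predictions from the base models, which is what separates stacking from simple averaging.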

January 12, 2025 · 4 min · Jason Brownlee