Google’s LLM agent-based co-scientist looks interesting:
Early tests of Google’s new tool with experts from Stanford University, Imperial College London and Houston Methodist hospital found it was able to generate scientific hypotheses that showed promising results.
And:
We introduce AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries.
Nice.
But then there’s this:
A complex problem that took microbiologists a decade to get to the bottom of has been solved in just two days by a new artificial intelligence (AI) tool. Professor José R Penadés and his team at Imperial College London had spent years working out and proving why some superbugs are immune to antibiotics. He gave “co-scientist” - a tool made by Google - a short prompt asking it about the core problem he had been investigating and it reached the same conclusion in 48 hours.
Hmmm.
“I was shopping with somebody, I said, ‘please leave me alone for an hour, I need to digest this thing,’” he told the Today programme, on BBC Radio Four.
Ontological shock!!
I have had this a few times over the years. Read a thing or do a thing and then have to be left alone to recover from the implications. Go for a walk. “defrag the old hard disk”.
And:
“It’s not just that the top hypothesis they provide was the right one,” he said.
“It’s that they provide another four, and all of them made sense.
“And for one of them, we never thought about it, and we’re now working on that.”
Very cool story. Great clickbait!
Maybe real, maybe not.
I’d bet the hypothesis was already in the training data. (link)
Nod, probably in some version.
Paper, I think, found via lead author José Penadés on Twitter:
Here, we challenge the ability of a recently developed LLM-based platform, AI co-scientist, to generate high-level hypotheses by posing a question that took years to resolve experimentally but remained unpublished…
We can all see this, “idea sex”: novelty by recombining ideas across/within fields, given a larger (full!) corpus + working memory + automated reasoning.
Lots of low-hanging fruit to be picked in the next few years.
For example:
Sometimes science isn’t doing completely novel things, but combining ideas across different disciplines or areas. (link)
And:
Sometimes science isn’t doing completely novel things, but combining ideas across different disciplines or areas. AI has some potential here, because unlike a human, AI can be trained across all of it and has the opportunity to make connections a human, with more limited scope, might miss. (link)
And, importantly, the human must be in the loop at this stage:
What matters isn’t that an AI “make connections”, it’s that the AI generates some text that causes a human to make the connections. It doesn’t even matter if what the AI generates is true or not, if it leads the human to truth. (link)
We’ll be getting Incomprehensible Artifacts soon enough, I’m sure :)