I just read Gian Segato’s “Building AI Products In The Probabilistic Era”.

Good read. Good take.

But it makes sense. He's a data scientist, and we (as a community) have had to think this way for 10-15 years when working with narrow probabilistic models.

But the scope has changed.

Inputs and outputs are open-ended.

His examples around Replit are good, e.g. constraining the use case to code gen for websites would have precluded other use cases like code gen for games.

I'm finding it hard to generalize this to other domains and use cases, though.

Something like:

If you build deterministic guardrails too early, you risk shrinking the “surface area” where users can discover emergent value. The empirical, probabilistic approach is to release with looser constraints, observe how users actually bend the system, then segment and refine based on real-world “regions of use.”
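A toy sketch of what that "observe, then segment" loop could look like in practice. Everything here is hypothetical: the prompts, the region names, and the keyword matching (which stands in for real clustering, e.g. embeddings plus k-means, over actual usage logs):

```python
from collections import Counter

# Hypothetical usage logs; in practice these would come from telemetry.
prompts = [
    "build me a landing page for my bakery",
    "make a snake game in the browser",
    "scaffold a portfolio website",
    "write a tetris clone",
    "create a blog site with a contact form",
]

# Coarse, illustrative segmentation. Keyword lists are stand-ins for
# learned clusters discovered from real-world prompts.
REGIONS = {
    "websites": ("page", "website", "site", "blog", "portfolio"),
    "games": ("game", "snake", "tetris", "clone"),
}

def region_of(prompt: str) -> str:
    """Assign a prompt to a 'region of use' (first keyword match wins)."""
    for region, keywords in REGIONS.items():
        if any(k in prompt.lower() for k in keywords):
            return region
    return "unclassified"  # emergent uses you didn't anticipate

counts = Counter(region_of(p) for p in prompts)
# Guardrails come *after* observing which regions actually exist:
# had the product been hard-constrained to "websites" up front,
# the "games" region would never have shown up in the data.
```

The point isn't the classifier; it's the ordering. Segmentation and refinement happen downstream of observed behavior, not upstream of it.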

For me, the shift is to stop thinking of an LLM as a model, and instead think of it as inserted intelligence, managed the way you'd manage a person who can sometimes do or say dumb stuff.