I read Florian Ernotte’s “Writing with LLM is not a shame. An essay about transparency on AI use.”
Something like:
The demands for AI disclosure often represent "empty vigilance" and conformity rather than genuine ethics; ethical standards for such new technology are still being developed and shouldn't be rigidly enforced yet; and AI disclosure is only relevant when the content is good and valuable.
I was then reading HN comments on the post and read this:
… Its the attention economy, you’ve demanded people’s attention, and then you shove crap that even you didn’t spend time reading in their face. […] But perhaps you don’t care about your craft. And if that’s the case… why should anyone else care or waste their time on it?
Fine, whatever.
But sometimes I don't care about human authorship.
This must be true, because I'll happily read AI-generated material all day long if I'm interested.
Why would this not apply to AI material generated in collaboration with someone other than me? In fact, more so.
If a human has guided/prompted/filtered an AI and I'm interested in the topic, I'd happily read the material.
Does it have to be presented accurately? I think so. I think that matters, but I’m not entirely sure.
There’s still human effort in the process, just in a different part than normal.
Something like “content as value”:
If the text offers a new angle, sparks curiosity, or communicates something meaningful, it doesn’t matter whether it was typed by a person or co-constructed with a machine. In this sense, the origin is less important than the insight.
A prompt isn’t just a button push.
I built a bunch of ergodic literature books for my eldest (e.g. all our eyes). I wrote them with AI. They were fun to "build" and are fun (for me, at least) to read and re-read. A ton of work went into pushing, prompting, filtering, directing, etc. My eldest likes them okay. Existence proof?