Magical Tokens for LLMs
Prompts for LLMs matter, a lot. I’m no expert at prompting (see this), but there are additional input tokens that offer a lot of leverage in producing better output. For example:

think through this step by step

And:

As an expert in [this specific topic] explain [this thing I want to know]

And:

Output in bullet points

And:

Explain [this specific topic] for the audience [described here]

And so on. This might be an older idea (e.g. GPT-3.5/GPT-4, where my thinking is probably stuck); these sequences are probably baked in now. ...
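A minimal sketch of how I tend to think about these phrases: bolt them onto a raw question before it goes to whatever LLM client you use. The function and parameter names here are my own illustration, not anything from a particular library.

```python
def build_prompt(question: str,
                 expert_topic: str | None = None,
                 audience: str | None = None,
                 bullet_points: bool = False,
                 step_by_step: bool = True) -> str:
    """Wrap a raw question with optional 'magical token' phrases."""
    parts = []
    if expert_topic:
        # "As an expert in [this specific topic] explain [this thing I want to know]"
        parts.append(f"As an expert in {expert_topic}, answer the following.")
    if audience:
        # "Explain [this specific topic] for the audience [described here]"
        parts.append(f"Explain it for this audience: {audience}.")
    parts.append(question)
    if step_by_step:
        parts.append("Think through this step by step.")
    if bullet_points:
        parts.append("Output in bullet points.")
    return "\n".join(parts)


if __name__ == "__main__":
    print(build_prompt(
        "Why does my test suite pass locally but fail in CI?",
        expert_topic="continuous integration",
        audience="a junior developer",
        bullet_points=True,
    ))
```

Nothing clever going on; the point is just that the leverage comes from a few cheap extra tokens, so it costs almost nothing to make them a default.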