You can always have a back-and-forth conversation with an LLM; it retains a limited memory of what you’ve recently asked within a chat. This is especially useful if your first prompt yields an overly generic output. Even so, it’s usually better to provide a detailed prompt to begin with.

You’ll want to approach prompting differently from web search, where many people type a single phrase (perhaps just a topic or a few nouns) rather than a full sentence.

Rich Prompts

A good prompt for an LLM is usually longer than a single sentence; you want to include plenty of detail. Here are some elements to consider:

  • Persona – tell the LLM to role play as somebody
  • Context – tell the LLM what you need overall, and why
  • Task – tell the LLM what the output should be
  • Requirements – tell the LLM what the output should look like: format, length, level of sophistication, and tone

Here’s an example:

You are a college student researching medieval life (Persona). You need to learn about daily medieval life in Europe for an essay you will be writing (Context). Write 5 examples that explain how medieval life was not that different from modern America, including both gritty and mundane details as well as tools used in everyday life (Task). The output should be slightly playful in tone, organized in bullet points, no more than two pages long, and written at a level a middle schooler would understand (Requirements).

Starting New Chats

LLMs have a limited memory, and each new prompt is interpreted in light of the prompts that came before it, because the LLM assumes you want to continue the conversation. If you begin a session by asking when basketball became popular in Europe and then follow up by asking about parasites in birds, the tool will process the second request in the context of the first. Sometimes this has no effect at all, but some tools may produce odd cross-over answers, and the problem compounds if you ask ten consecutive unrelated questions. The more unrelated things you prompt for in a single chat, the more likely the model is to produce muddled or unhelpful outputs.

It’s actually best practice to click the “new chat” button before each unrelated inquiry. For some tools, the “new chat” button is just a + sign.

Prompting Across Tools

Different AI tools provide different results. Some LLMs refuse to generate essay-length outputs, but others will do so willingly. It can be rewarding to experiment with the same prompt across different LLMs, so you start to learn where each LLM shines.

One customization worth looking for is profile personalization. For example, ChatGPT+ lets you tell the model your name, occupation, and any other details about yourself you’d like it to know; you can also specify personality traits (witty, skeptical, etc.) for its outputs that will persist across conversations.

Best Practices When Prompting for Brainstormed Lists

Although brainstorming is one of the things LLMs are best at, there are still some things to be aware of:

  • Brainstorm as a human first. You may have heard that effective integration of AI into our working lives means “co-creating,” but the order in which you and the AI contribute matters. If you prompt an LLM for a list first, planning to add your own ideas after seeing the output, you may find it hard to come up with ideas not already on the list. This is only human nature: when we see a seemingly complete list, it’s easy to assume all relevant avenues are already represented. If you start with your own brainstorm instead, your creativity is more likely to lead to avenues the AI might not explore when it takes its turn.
  • Prompt for a longer list than you actually need. Most LLMs generate lists whose ideas range from poor to mediocre to good, with the occasional inspired one. If you need a “top ten” list, ask for the top 25 instead, and plan to trim the results to keep the best.
  • As advised above, repeat the brainstorming with more than one LLM. LLMs don’t actually invent anything new; they are essentially great at “remixing” knowledge. Because each LLM has different training and programming, the various tools will return different results.

Prompt Limitations

You may find it useful to provide explicit limitations in your prompt. Examples include:

  • “Do not include potentially hallucinated results. If you are uncertain, state so explicitly.”
  • “Do not access information or training data beyond what I will provide you.”
  • “Do not discuss politics or religion in this response.”
  • “Provide sources or references for any factual claims you make.”

Multimodal Prompting

Generative AI tools are constantly becoming more sophisticated. Many models (not all of them free) now allow multimodal prompting. You can do things such as:

  • Upload an image and ask for a text description (great for writing alt text for images to be displayed online)
  • Type a text description and receive a generated image (or, with the more expensive models, even a video) as the output.

One clever way to take advantage of these tools is to ask an LLM to write the prompt that you will then paste into an image-generating tool. You can describe how you want the image to resonate emotionally, and the LLM will offer concrete, detailed prompts that produce a better image than you might have written on your own.
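
For example, you might give the LLM a prompt along these lines (the wording here is just an illustration, not a template any particular tool requires):

I want an image of a cozy reading nook on a rainy autumn afternoon that feels nostalgic and calm. Ask me any clarifying questions you need, then write a detailed prompt I can paste into an image generator, covering the scene, lighting, color palette, composition, and mood.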

Chain of Density Prompting

This type of prompting asks the AI to summarize a longer reading, then to identify an important element from the reading that did NOT make it into the summary, then to rewrite the summary to include that element without increasing the word count. As this process is repeated four more times, the summary becomes richer and more dense with each pass.
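
A Chain of Density prompt might look something like this (adjust the word limit and number of rounds to suit your reading):

Summarize the attached article in about 80 words. Then identify one important detail from the article that is missing from the summary, and rewrite the summary to include it without exceeding 80 words. Repeat that identify-and-rewrite step four more times, showing each version of the summary.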