There are many methods for prompting a language model. Some examples are:
Zero-shot Prompting:
This type of input-output prompting (IOP) involves giving the model a task without providing any examples; the model relies solely on its pre-existing knowledge to generate a response. It is a highly intuitive approach and can be used for broad problems or situations where little data is available. Example: “Write a welcome message for guests checking into a hotel.”
Few-shot Prompting:
In this approach, the model receives several examples of the task before generating a response; it is also a form of input-output prompting (IOP). The examples help the model grasp the expected context and format, which is particularly useful for complex queries where specific ideas or data are available. For example, providing a few instances of a problem and its solutions before asking the model to tackle a new issue can be beneficial.
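As a sketch, a few-shot prompt can be assembled by prepending worked input/output pairs to the new task. The example pairs below are made up for illustration, and the resulting string would be sent to whatever model client is in use:

```python
# Hypothetical worked examples (input, output) shown to the model first.
EXAMPLES = [
    ("Translate to French: Good morning", "Bonjour"),
    ("Translate to French: Thank you", "Merci"),
]

def build_few_shot_prompt(task: str) -> str:
    """Prepend example pairs so the model can infer context and format."""
    lines = []
    for question, answer in EXAMPLES:
        lines.append(f"Q: {question}\nA: {answer}")
    # The new task follows the same Q/A pattern, with the answer left open.
    lines.append(f"Q: {task}\nA:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt("Translate to French: Good night")
print(prompt)
```

The model then completes the final "A:" in the same style as the examples.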
Chain-of-Thought Prompting (CoT):
This technique involves guiding the model to think step-by-step through a problem or task. It helps in breaking down complex tasks into manageable steps, improving the accuracy of the response. For example: “Take a deep breath, and tell me step-by-step how you would solve problem X.”
Self-Consistency Prompting (SC):
This method involves generating multiple responses to the same prompt and then selecting the most consistent or accurate one, which improves the reliability of the generated response. For example, generating multiple summaries of a text and choosing the one that best captures the main points. Another example: “Provide five step-by-step answers, discuss which one would be the best, and explain why.”
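The selection step can be sketched as a majority vote over several sampled answers. Here the model is replaced by a stub that cycles through canned outputs to mimic sampling with non-zero temperature; a real implementation would call an LLM several times:

```python
from collections import Counter
from itertools import cycle

# Stand-in for repeated stochastic model calls (hypothetical outputs).
SAMPLES = cycle(["42", "41", "42", "42", "40"])

def sample_answer(prompt: str) -> str:
    return next(SAMPLES)

def self_consistency(prompt: str, n_samples: int = 5) -> str:
    """Sample several answers and keep the most frequent (most consistent) one."""
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))
```

Even though individual samples disagree, the vote converges on the answer the model produces most often.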
Role-based Prompting / Role-Play / Expert prompting (EP):
In this form, the model is assigned a specific role or responsibility to guide its responses. This provides context and helps in generating responses that are appropriate in tone and style. For example, asking the model to explain a concept as if it were a teacher explaining to a student.
Automatic Prompt Engineer (APE):
This technique involves the AI automatically generating and optimizing prompts based on user input and task requirements. For example, an AI system might automatically create a prompt to gather guest preferences by asking, “What are your preferred room features and amenities?” without manual intervention.
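A minimal APE-style loop can be sketched as: generate candidate prompt phrasings, score each with an evaluation function, and keep the best. In a real system both the candidates and the score would come from model calls; here both are made-up stubs:

```python
# Hypothetical candidate prompts an APE system might generate.
CANDIDATES = [
    "List your preferences.",
    "What are your preferred room features and amenities?",
    "Tell me stuff you like.",
]

def score(prompt: str) -> float:
    # Stub metric: favor longer, question-form prompts. A real APE
    # system would score candidates by downstream task performance.
    return len(prompt) + (10 if prompt.endswith("?") else 0)

def best_prompt(candidates):
    """Select the highest-scoring candidate prompt."""
    return max(candidates, key=score)

print(best_prompt(CANDIDATES))
```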
Generated Knowledge Prompting (Gkn):
This technique involves the AI generating relevant knowledge or information before making a prediction or providing an answer. For example, if a guest asks about the best time to visit a local attraction, the AI might first generate information about the attraction’s peak and off-peak hours before responding with a recommendation.
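The two-stage structure (generate knowledge first, then answer with it) can be sketched as follows. The knowledge lookup is a stub dictionary standing in for a first model call; the combined string would be sent to the model as a second call:

```python
# Stub for the "knowledge generation" stage (hypothetical facts).
KNOWLEDGE = {
    "museum": "Peak hours are 11:00-15:00; quietest right at opening.",
}

def generate_knowledge(topic: str) -> str:
    return KNOWLEDGE.get(topic, "No background information available.")

def answer_with_knowledge(question: str, topic: str) -> str:
    """Fold generated knowledge into the final answer prompt."""
    facts = generate_knowledge(topic)
    return f"Context: {facts}\nQuestion: {question}\nAnswer based on the context."

print(answer_with_knowledge("When should I visit the museum?", "museum"))
```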
Tree-of-Thought Prompting (ToT):
This technique involves the AI exploring multiple lines of reasoning, thoughts, or perspectives to solve a complex problem or answer a question. For example, if a guest asks for dining recommendations, the AI might consider various factors such as cuisine type, dietary restrictions, and proximity to the hotel, and then provide a well-rounded recommendation based on these considerations. (Numbers 1-8 are from Walter, 2024)
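A toy version of the dining example can be sketched as a small tree: each decision factor branches, every root-to-leaf path is scored, and the best path becomes the recommendation. The factor values and weights below are invented, and a real ToT system would have a model generate and evaluate the branches:

```python
from itertools import product

# Hypothetical decision factors; each list is one level of the tree.
FACTORS = {
    "cuisine": ["italian", "thai"],
    "distance": ["walkable", "taxi"],
}
# Stub preference weights a model evaluator might assign.
WEIGHTS = {"italian": 2, "thai": 3, "walkable": 2, "taxi": 1}

def score_path(path):
    return sum(WEIGHTS[choice] for choice in path)

def tree_of_thought():
    """Enumerate all root-to-leaf paths and return the best-scoring one."""
    branches = list(product(*FACTORS.values()))
    return max(branches, key=score_path)

print(tree_of_thought())
```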
Rereading (Re2):
This prompting strategy instructs the AI tool to re-read your stated question or problem before answering, which enhances the accuracy of the answers. Simply add such an instruction, and the tool will process your prompt twice before giving you an answer. This re-reading might help the model better understand the nature of the question, and in turn produce a better answer than one based on a single, initial reading (Elliott, 2024).
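A minimal sketch of Re2 is simply to restate the question inside the prompt so the model encounters it twice; the instruction wording below is illustrative:

```python
def rereading_prompt(question: str) -> str:
    """Repeat the question so the model processes it twice (Re2 sketch)."""
    return f"{question}\nRead the question again: {question}"

print(rereading_prompt("How many rooms have a sea view?"))
```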
Chain-of-Verification Prompting (CoVe):
This prompting technique reduces hallucinations by using verification questions. It outperforms Zero-Shot, Few-Shot, and Chain-of-Thought (CoT) prompting in generating accurate responses. It involves four stages: producing a baseline response, planning verification questions, answering those questions, and refining the final output. CoVe reduces but does not fully eliminate hallucinations, especially in reasoning steps (Bhatt, 2024).
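The four CoVe stages can be sketched as a small pipeline. Every model call is replaced here by a stub with invented content (a pool-hours claim and its correction), purely to show how a contradicting verification answer triggers a revision:

```python
def baseline(question: str) -> str:
    # Stub for the initial draft response (hypothetical claim).
    return "The hotel pool is open 24 hours."

def plan_verifications(answer: str) -> list[str]:
    # Stub for the verification-planning stage.
    return ["Is the pool really open 24 hours?"]

def answer_verification(q: str) -> str:
    # Stub "fact check" that contradicts the baseline claim.
    return "No, the pool closes at 22:00."

def chain_of_verification(question: str) -> str:
    """Draft -> plan checks -> answer checks -> refine the final output."""
    draft = baseline(question)
    checks = {q: answer_verification(q) for q in plan_verifications(draft)}
    if any(a.startswith("No") for a in checks.values()):
        return "The hotel pool is open until 22:00."
    return draft

print(chain_of_verification("When is the pool open?"))
```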
This is not a complete list.