Prompt engineering checklist
Use this checklist to review a prompt before sending it to an LLM.
Check box | Check | Required or Optional | Notes |
---|---|---|---|
❑ | Do not include URLs in your prompt and expect the LLM to fetch them. The LLM cannot scrape the web, so a URL on its own gives it no information to extract. | Required | |
❑ | Ensure all context provided in your prompt is useful to the LLM, especially when leveraging training data. | Required | Boilerplate such as "Only answer if you know. If you don't know, respond with 'I don't know.'" does not by itself produce more accurate results. |
❑ | Be specific about the desired output instead of issuing a one-word command such as Summarize, Generate, or Convert. | Required | If you want the LLM to summarize a body of text, spell out which terms or concepts it should prioritize. |
❑ | Trim your API payloads to the information the task requires, omitting everything else. | Required | |
❑ | Define the LLM's role in completing a task. | Required | For example, the instruction "You are a text extraction agent" guides the LLM toward more meaningful results. |
❑ | Break complex prompts into smaller sub-tasks. | Required | Smaller intermediate outputs that build toward the final answer support the LLM's chain of thought. |
❑ | Ensure that token limits will not cut off your LLM response. | Required | Large prompts that consume a high number of tokens can cause the LLM's response to come back incomplete. |
❑ | Document all related Tools within a single Toolkit. | Required | Toolkits group multiple Tools and help coordinate Tool invocation for prompts that require more than one Tool. |
❑ | Do not rely on the LLM to perform mathematical calculations. | Required | Depending on the model, even simple calculations can produce incorrect results or introduce confusion. |
❑ | Review API documentation within a Tool for clear formatting. | Optional | Utilize the Tool Instructions for API documentation purposes. |
❑ | Utilize the temperature parameter to tailor the LLM response to your intended goal. | Optional | Lower temperature will result in concise and factual responses, while higher temperature will produce creative and diverse results. |
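The payload-trimming and role-definition items above can be sketched together. This is a minimal illustration, not a specific product's API: the `messages`/`role`/`content` shape assumes the common chat-completion schema, and `build_payload` and `"example-model"` are hypothetical names.

```python
def build_payload(task_text: str, role: str, model: str = "example-model") -> dict:
    """Build a minimal chat payload containing only what the task requires.

    The system message defines the LLM's role up front; no optional
    fields are included. The field names follow the common chat-API
    schema and are an assumption, not any vendor's exact contract.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": role},       # the LLM's role
            {"role": "user", "content": task_text},    # the actual task
        ],
    }

payload = build_payload(
    "Extract every email address from the text below.",
    role="You are a text extraction agent.",
)
```

Keeping the payload to these two keys makes it easy to audit exactly what context the model receives.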
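Breaking a complex prompt into sub-tasks can be chained as below. `run_subtasks` and `stub_llm` are hypothetical names for illustration; the stub stands in for a real model call and simply echoes the instruction it received, so the chain's flow is visible without a live API.

```python
def run_subtasks(document: str, subtasks: list, llm) -> str:
    """Run a chain of smaller prompts, feeding each result forward.

    Each sub-task produces an intermediate output that becomes the
    input to the next step, supporting the model's chain of thought.
    """
    result = document
    for instruction in subtasks:
        result = llm(f"{instruction}\n\n{result}")
    return result

def stub_llm(prompt: str) -> str:
    """Placeholder for a real LLM call: returns the first line (the instruction)."""
    return prompt.splitlines()[0]

final = run_subtasks(
    "Quarterly report text...",
    [
        "List the key figures.",
        "Group them by product line.",
        "Summarize in two sentences.",
    ],
    stub_llm,
)
```

With a real model, each intermediate result would be the model's answer to that sub-task rather than an echo of the instruction.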
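A pre-flight token check for the limit item above might look like this. The four-characters-per-token figure is a rough English-text heuristic, not a real tokenizer; actual counts vary by model, so treat `estimate_tokens` as a hypothetical sanity check only.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token in English).

    Real tokenizers differ per model; use this only as a sanity check.
    """
    return max(1, len(text) // 4)

def fits_budget(prompt: str, context_window: int, reserved_for_response: int) -> bool:
    """Check that the prompt leaves headroom so the response is not cut off."""
    return estimate_tokens(prompt) + reserved_for_response <= context_window
```

Reserving explicit response headroom is the point: a prompt that "fits" the context window with no room left still produces a truncated answer.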
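Choosing a temperature per task type can be made explicit in code. The numeric ranges here are a common rule of thumb (low for factual or extraction work, high for creative text), not fixed API semantics, and `with_temperature` is a hypothetical helper.

```python
def with_temperature(payload: dict, task_kind: str) -> dict:
    """Return a copy of the payload with a temperature suited to the task.

    Low temperatures favor concise, factual output; higher ones favor
    creative, diverse output. The exact values are a rule of thumb.
    """
    temps = {"extraction": 0.0, "summary": 0.3, "creative": 0.9}
    out = dict(payload)
    out["temperature"] = temps.get(task_kind, 0.3)  # default to a moderate value
    return out
```

Centralizing the choice keeps temperature deliberate instead of left at whatever the API defaults to.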