Prompt settings

The setting descriptions below are for OpenAI's ChatGPT and were inspired by https://www.unite.ai/prompt-engineering-in-chatgpt/.

Temperature

The temperature parameter controls the randomness of the model's output by rescaling the token probability distribution. A higher temperature, such as 1.0, flattens the distribution, producing more varied but potentially off-topic responses. A lower temperature, such as 0.2, sharpens it toward the most likely tokens, which is useful for focused and specific outputs but may lack variety.
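
As a minimal sketch of where this setting lives, the snippet below calls the Chat Completions endpoint with the openai Python SDK and compares a low and a high temperature; the model name and prompt are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Compare a focused, near-deterministic setting with a more creative one
for temperature in (0.2, 1.0):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
        temperature=temperature,
    )
    print(temperature, response.choices[0].message.content)
```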

Maximum Length

This parameter caps the number of tokens the model may generate in a response. Note that the token budget is shared between the prompt and the completion: input and output tokens together must fit within the model's context window. A higher limit allows for longer responses, while a lower limit keeps the output short and concise, possibly truncating it mid-sentence.
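
A rough sketch of the corresponding max_tokens parameter, under the same assumed client setup; finish_reason reports whether the cap was hit.

```python
from openai import OpenAI

client = OpenAI()

# Cap the completion at 50 tokens; longer answers are cut off mid-sentence
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain what a token is."}],
    max_tokens=50,
)
print(response.choices[0].message.content)
print(response.choices[0].finish_reason)  # "length" when the cap was hit
```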

Stop Sequences

Stop sequences are specific strings of text; when the model generates one of them, it stops producing further output. This feature is useful for bounding the length of the output or instructing the model to stop at logical endpoints.
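
For instance, a stop sequence can cut off a numbered list after three items. A hedged sketch, with an illustrative prompt:

```python
from openai import OpenAI

client = OpenAI()

# The leading "1." nudges the model into list format; generation halts
# as soon as it would emit "4.", so at most three items come back
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "List reasons to learn Python:\n1."}],
    stop=["4."],
)
print(response.choices[0].message.content)
```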

Top P

The 'Top P' parameter, also known as nucleus sampling, restricts sampling at each step to the smallest set of tokens whose cumulative probability exceeds P, so the size of the candidate pool adapts dynamically to the model's confidence. A lower value, like 0.5, leads to safer, more focused outputs. A higher value, like 0.9, admits a broader selection of tokens, leading to more diverse outputs.
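
A sketch of the top_p parameter in the same assumed setup:

```python
from openai import OpenAI

client = OpenAI()

# Sample only from the smallest token set whose cumulative probability
# exceeds 0.5, keeping the output conservative
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a tagline for a bakery."}],
    top_p=0.5,
)
print(response.choices[0].message.content)
```

Note that OpenAI's API reference generally recommends adjusting either top_p or temperature, but not both at once.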

Frequency Penalty

The frequency penalty discourages repetition by penalizing tokens in proportion to how often they have already appeared in the generated text. A higher positive value (up to 2.0 in the OpenAI API) makes the model less likely to repeat the same words and phrases verbatim, while a negative value (down to -2.0) makes repetition more likely.
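
A sketch of how frequency_penalty might be raised for a repetition-prone task; the prompt is illustrative:

```python
from openai import OpenAI

client = OpenAI()

# A strong positive penalty discourages the model from reusing words
# it has already produced, reducing verbatim repetition
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short poem about the sea."}],
    frequency_penalty=1.5,
)
print(response.choices[0].message.content)
```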

Presence Penalty

The presence penalty applies a one-time penalty to any token that has already appeared in the text, regardless of how often. Higher positive values (up to 2.0) encourage the model to introduce new words and topics, while lower or negative values (down to -2.0) make it more likely to stick to topics already mentioned.
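
And a matching sketch for presence_penalty, here used to broaden a brainstorming prompt:

```python
from openai import OpenAI

client = OpenAI()

# A positive presence penalty nudges the model toward words and topics
# it has not mentioned yet, widening the range of ideas
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Brainstorm ideas for a weekend project."}],
    presence_penalty=1.5,
)
print(response.choices[0].message.content)
```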