Set the prompt settings. The model name and max token length are the most important ones.
Important step - set the default integrations and default model so you don't have to set them for every prompt. Use the "chat/" prefix on the model name when using an Azure instance.
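For illustration, a minimal sketch of what such defaults might look like; the field names and values here are assumptions, not the tool's actual schema:

```python
# Hypothetical default prompt settings; "model" and "max_tokens" are
# assumed field names, not the tool's real schema.
DEFAULT_PROMPT_SETTINGS = {
    # Prefix the model name with "chat/" when targeting an Azure instance.
    "model": "chat/gpt-4",
    # Cap the response length so long inputs don't exhaust the context window.
    "max_tokens": 512,
}
```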
Quick tip - click outside the prompt text box to save.
Stash the items to run the prompt against.
Different ways to include stashed data in prompts (see the sketch after this list):
{{ sourceConversation }} - includes the source conversation (to verify: does the source conversation stay the same across runs?)
{{ conversation }} - includes the entire conversation
{{ text }} - includes the individual utterances
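As a rough sketch of how these placeholders expand, here is the same idea using Jinja2 as a stand-in for the tool's actual template engine (an assumption):

```python
# Minimal sketch of {{ ... }} placeholder expansion, using Jinja2 as a
# stand-in for the tool's real template engine (an assumption).
from jinja2 import Template

prompt = Template(
    "Summarize this exchange:\n{{ conversation }}\n\nFocus on: {{ text }}"
)

rendered = prompt.render(
    conversation="Agent: Hello!\nCustomer: My order is late.",  # entire conversation
    text="My order is late.",  # an individual utterance
)
print(rendered)
```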
Test the pinned prompt against the stashed items.
The output of every prompt run can be accessed using the following identifiers (see the sketch after this list):
promptId
generationRunId
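A hypothetical sketch of fetching that output; the endpoint path and response shape are assumptions, not the tool's documented API:

```python
# Hypothetical sketch: fetch a prompt run's output by promptId and
# generationRunId. The endpoint path is an assumption.
import requests

def get_prompt_run_output(base_url: str, prompt_id: str, generation_run_id: str) -> dict:
    # Both identifiers are needed to pin down one specific run of one prompt.
    resp = requests.get(
        f"{base_url}/prompts/{prompt_id}/runs/{generation_run_id}",
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```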
If you are not happy with the results, modify the prompt until it produces the required output (prompt tuning).
Once happy with the results, set up your pipeline.
The pipeline name, the selected prompt, and the number of items to process can be edited on the pinned pipeline itself, whereas the input data, filters, NLU engine, and sort order must be set on the data tab after clicking the edit button.
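To make the split concrete, a hypothetical pipeline definition with the fields grouped by where they are edited (all field names are assumptions):

```python
# Hypothetical pipeline definition; field names are assumptions, not the
# tool's real schema. Grouped by where each field is edited in the UI.
pipeline = {
    # Editable directly on the pinned pipeline:
    "name": "order-status-summaries",
    "prompt_id": "prompt-123",
    "item_count": 100,
    # Must be set on the data tab (via the edit button):
    "input_data": "stashed-conversations",
    "filters": {"intent": "order_status"},
    "nlu_engine": "default",
    "sort_by": "timestamp",
}
```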
Save and run the pipeline. The prompt is then run against the specified number of items when the pipeline executes.
The output of every pipeline run can be accessed using the following identifiers (see the sketch after this list):
pipelineId
pipelineStepId
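Analogous to the prompt-run sketch above, a hypothetical fetch by these two identifiers (the endpoint path is again an assumption):

```python
# Hypothetical sketch: fetch a pipeline run's output by pipelineId and
# pipelineStepId. The endpoint path is an assumption.
import requests

def get_pipeline_step_output(base_url: str, pipeline_id: str, pipeline_step_id: str) -> dict:
    resp = requests.get(
        f"{base_url}/pipelines/{pipeline_id}/steps/{pipeline_step_id}/output",
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```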
Pipeline cache - the pipeline reuses the same output if no modifications are made to the prompt, prompt settings, or pipeline settings.
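Conceptually, this behaves like keying the cache on everything that can change the output. A sketch of such a key; the actual hashing scheme used by the tool is an assumption:

```python
# Conceptual sketch of the cache behaviour: identical prompt, prompt
# settings, and pipeline settings produce the same key, so the stored
# output is reused. The hashing scheme is an assumption.
import hashlib
import json

def cache_key(prompt: str, prompt_settings: dict, pipeline_settings: dict) -> str:
    payload = json.dumps(
        {
            "prompt": prompt,
            "prompt_settings": prompt_settings,
            "pipeline_settings": pipeline_settings,
        },
        sort_keys=True,  # stable key ordering so identical inputs hash identically
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```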
Error handling - if the model throws an error, the output shows the original utterance with error metadata attached.
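A hypothetical example of what such an errored item might look like; the exact field names are assumptions:

```python
# Hypothetical shape of an errored output item: the original utterance is
# passed through, with error metadata attached. Field names are assumptions.
errored_item = {
    "text": "My order is late.",  # original utterance, unchanged
    "error": {
        "type": "ModelError",
        "message": "Rate limit exceeded",
    },
}
```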