playbooks Playbook[]
  namespace string: Namespace of the workspace.
  id string: Unique identifier of the workspace, generated by the server.
  name string: User-visible name of the workspace.
  conversation_sets ScopedReference[]: Optional list of conversation set ids to be included in the workspace's conversations.
    namespace string: Namespace of the referenced object.
    id string: Unique identifier of the referenced object.
  color string: Color of the playbook in the UI. Any CSS color format is supported (ex: #FF0000, red).
  base_language string: Base language of the workspace. If empty, 'en' is assumed. Two-letter ISO 639-1 or BCP 47 locale format (ex: 'en-US'). Deprecated: use flags.languages.default_language.
  active_languages string[]: Languages that can be used in the workspace on top of the base language. Two-letter ISO 639-1 or BCP 47 locale format (ex: 'en-US'). Deprecated: use flags.languages.enabled_languages.
  nlu NluSettings
    id string: Unique identifier of the NLU engine in the workspace.
    name string: (Optional) User-defined name of the NLU engine. Since a user can have multiple NLU engines of the same type, this name identifies the engine in the UI.
    engine_version string: Version of the specified NLU engine. Since multiple deployments are feasible, this specifies the exact image used when running an external NLU engine. It has no effect on the internal engine.
    is_default boolean: Internally managed flag indicating that this is the default engine of the workspace. It should not be modified via the API, as it is enforced by the backend, except when set in CreatePlaybookNluEngine or UpdatePlaybookNluEngine to declare an engine as the default.
    seq_id uint32: Internally managed non-zero unique sequential number assigned to the engine. It should not be modified via the API, as it is enforced by the backend.
    on_demand_train boolean: Only allow training the NLU engine when it is explicitly triggered. Useful to prevent expensive NLU engines (ex: Dialogflow) from being trained automatically.
    on_demand_infer boolean: Only allow using the NLU engine for unlabelled-data inference when it is explicitly triggered to run.
    max_retry UInt32Value: Wrapper message for uint32. The JSON representation of UInt32Value is a JSON number.
    integration_id string: (Optional) Unique identifier of the integration if the NLU engine is linked to an external integration.
    training_tag_predicate TagPredicate: Restricts which tagged objects are used for training (see the matching-rule sketch after this list).
      require_ids string[]: Only include objects with ALL of the given tag ids.
      include_ids string[]: Only include objects with ANY of the given tag ids.
      exclude_ids string[]: Exclude objects with ANY of the given tag ids.
    intent_tag_predicate TagPredicate: Same TagPredicate fields as training_tag_predicate.
    hierarchical_remap_score BoolValue: Wrapper message for bool. The JSON representation of BoolValue is JSON true or false.
    internal NluEngineInternal
      latent_space_key string: (Optional) Latent space to use when training this engine.
    rasa NluEngineRasa
      pipeline_config string: Contents of the config.yml to be used for training.
    dialogflow_cx NluEngineDialogflowCx
      project_id string: GCP project of the agent. If empty, the default project of the integration is used.
      location string: GCP location of the agent (ex: northamerica-northeast1). If empty, the default location of the integration is used, otherwise global.
      credential_id string
      model_type enum
    huggingface NluEngineHuggingFace (see the engine sketch after other_nlus below)
      base_model string: The base model to start from; see https://huggingface.co/models. The model needs to use a supported architecture and (currently) support TensorFlow, e.g. bert-base-uncased.
      config_json string: (Optional) A JSON configuration to be merged with the base model's default configuration.
      training_args_json string: (Optional) A JSON object containing training (hyper-)parameters.
    custom NluEngineCustom
    auto_train boolean: If true, training and inference of this NLU engine are triggered automatically when the playbook is saved, regardless of the on_demand_train and on_demand_infer flags.
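The three TagPredicate lists combine as AND (require_ids), OR (include_ids), and NOT (exclude_ids) filters. A minimal sketch of that matching rule in Python; the function and the example tag ids are illustrative, not part of the API:

```python
def matches(tag_ids, predicate):
    """Illustrative TagPredicate semantics: ALL of require_ids,
    ANY of include_ids, NONE of exclude_ids."""
    tags = set(tag_ids)
    if not set(predicate.get("require_ids", [])) <= tags:
        return False  # missing a required tag
    include = set(predicate.get("include_ids", []))
    if include and not (include & tags):
        return False  # has none of the included tags
    if set(predicate.get("exclude_ids", [])) & tags:
        return False  # carries an excluded tag
    return True

# Keep objects tagged "reviewed" unless they are also tagged "draft".
predicate = {"require_ids": ["tag-reviewed"], "exclude_ids": ["tag-draft"]}
print(matches(["tag-reviewed", "tag-en"], predicate))     # True
print(matches(["tag-reviewed", "tag-draft"], predicate))  # False
```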
  other_nlus NluSettings[]: Settings of other NLU engines that the workspace can use. Each entry carries the same NluSettings fields as nlu above (id, name, engine_version, is_default, seq_id, on_demand_train, on_demand_infer, max_retry, integration_id, training_tag_predicate, intent_tag_predicate, hierarchical_remap_score, internal, rasa, dialogflow_cx, huggingface, custom, auto_train).
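Because config_json and training_args_json are JSON documents carried as strings, they need to be serialized before being placed in the NluSettings payload. A sketch in Python; the engine name and the nested configuration keys are illustrative assumptions:

```python
import json

# Hypothetical NluSettings entry for a Hugging Face engine. The field names
# follow this reference; the configuration values are illustrative.
engine = {
    "name": "hf-bert-engine",
    "on_demand_train": True,  # do not auto-train this (expensive) engine
    "huggingface": {
        "base_model": "bert-base-uncased",
        "config_json": json.dumps({"hidden_dropout_prob": 0.2}),
        "training_args_json": json.dumps({"num_train_epochs": 3}),
    },
}
print(json.dumps(engine, indent=2))
```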
  evaluation EvaluationSettings
    default_parameters EvaluationParameters (see the sketch after this block)
      intent_tag_predicate TagPredicate: Same TagPredicate fields as above (require_ids, include_ids, exclude_ids).
      k_fold KFold
        num_folds uint32
        phrase_tag_predicate TagPredicate: Same TagPredicate fields as above.
      test_set TestSet
        phrase_tag_predicate TagPredicate: Same TagPredicate fields as above.
      nlu_id string: Optional unique identifier of the NLU engine to use in the workspace. If none is specified, the workspace's default configured NLU engine is used. See zia.ai.pipeline.v1alpha1.NluSettings.id.
      evaluation_preset_id string: If specified, the evaluation parameters are overridden by the parameters of the given preset id, discarding any current values.
      auto boolean: If true, signals that the evaluation is an automatic run.
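An EvaluationParameters value thus selects a split strategy (k_fold or test_set), optional tag filters, and an NLU engine. A sketch of a 5-fold configuration as a Python dict; treating this dict as the JSON request body is an assumption, as are the ids used:

```python
import json

# Hypothetical EvaluationParameters: 5-fold cross-validation over phrases
# tagged "golden", run against a specific (non-default) NLU engine.
parameters = {
    "nlu_id": "nlu-123",  # omit to fall back to the workspace default engine
    "k_fold": {
        "num_folds": 5,
        "phrase_tag_predicate": {"require_ids": ["tag-golden"]},
    },
}
print(json.dumps(parameters, indent=2))
```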
  creation_time RFC3339: Time at which the playbook was created. Added in Feb 2021, so playbooks created before then won't have the field populated.
  phrase_uniqueness_level enum: Determines the level of phrase uniqueness enforced in the workspace.
    0 = PHRASE_UNIQUENESS_LEVEL_INTENT: a phrase can only exist once within the intent it is associated with.
    1 = PHRASE_UNIQUENESS_LEVEL_WORKSPACE: a phrase can only exist once in the entire workspace.
    2 = PHRASE_UNIQUENESS_LEVEL_NONE: a phrase can exist more than once, anywhere in the entire workspace.
  presets Preset[]: Stored settings that make various workspace behaviours easier to repeat (see the export/import example after this list).
    id string
    seq_id uint32: Internally managed non-zero unique sequential number assigned to the preset. It should not be modified via the API, as it is enforced by the backend.
    name string
    description string: Description of the preset.
    evaluation Evaluation: Settings for running evaluations via zia.ai.playbook.v1alpha1.RunEvaluation. See zia.ai.evaluation.v1alpha1.RunEvaluationRequest for the matching fields that this connects to.
      parameters EvaluationParameters: Same EvaluationParameters fields as evaluation.default_parameters above (intent_tag_predicate, k_fold, test_set, nlu_id, evaluation_preset_id, auto).
    auto_evaluate boolean: Allow the preset to be evaluated automatically on a periodic basis.
    intents_export IntentsExport: Settings for exporting intents via zia.ai.playbook.data.v1alpha1.ExportIntents. See zia.ai.playbook.data.v1alpha1.ExportIntentsRequest for the matching fields that this connects to.
      format enum: Format of the exported data.
      format_options IntentsDataOptions
        hierarchical_intent_name_disabled boolean: Disables encoding the intent hierarchy in intent names. Ex: 'Parent / Sub-parent / Intent'.
        hierarchical_delimiter string: Overrides the default delimiter used for the intent hierarchy. The default is '--' for Botpress and Dialogflow, '+' for Rasa, and '/' for CSV.
        zip_encoding boolean: Indicates that the intents are zipped and may be split across multiple files.
        gzip_encoding boolean: Indicates that the intent file is gzipped.
        hierarchical_follow_up boolean: For use with Dialogflow, to express the intent hierarchy through intent follow-ups.
        include_negative_phrases boolean: Export negative phrases as well.
        intent_tag_predicate TagPredicate: Same TagPredicate fields as above.
        phrase_tag_predicate TagPredicate: Same TagPredicate fields as above.
        skip_empty_intents boolean: Skip all intents that do not contain phrases.
      intent_ids string[]: (Optional) Limit the export to the given intents.
    intents_import IntentsImport: Settings for importing intents via zia.ai.playbook.data.v1alpha1.ImportIntents. See zia.ai.evaluation.params.v1alpha1.ImportIntentsRequest for the matching fields that this connects to.
      format enum: Format of the imported file.
      format_options IntentsDataOptions: Same IntentsDataOptions fields as intents_export.format_options above.
      import_options ImportOptions
        clear_intents boolean: Clears workspace intents before importing.
        clear_entities boolean: Clears workspace entities before importing.
        clear_tags boolean: Clears workspace tags before importing. Note: should not be used in combination with extra_intent_tags or extra_phrase_tags, since potentially referenced tags would be cleared.
        merge_intents boolean: Tries to merge imported intents into existing ones if they can be found in the workspace.
        merge_entities boolean: Tries to merge imported entities into existing ones if they can be found in the workspace.
        merge_tags boolean: Tries to merge imported tags into existing ones if they can be found in the workspace.
        extra_intent_tags TagReference[]: Add extra tags to imported intents.
          id string: Unique identifier of the tag.
          name string: (Optional) Only used when importing data whose tag ids are not yet defined. This is not filled when requesting tagged objects.
          protected boolean: For internal use. There is no guarantee that this will be properly filled.
        extra_phrase_tags TagReference[]: Add extra tags to imported phrases. Same TagReference fields as extra_intent_tags.
        override_metadata boolean: Overrides the description, color, and metadata of the workspace with the values from the imported file if the received format supports them. Supported formats: INTENTS_FORMAT_HF_JSON.
        override_name boolean: Overrides the name of the workspace with the value from the imported file if the received format supports it. Supported formats: INTENTS_FORMAT_HF_JSON.
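Putting the preset pieces together: an export in one format plus a merge-style import. A sketch in Python; apart from INTENTS_FORMAT_HF_JSON, which this reference names, the enum spelling INTENTS_FORMAT_CSV and all ids are assumptions:

```python
import json

# Hypothetical Preset combining export and import settings.
preset = {
    "name": "nightly-sync",
    "intents_export": {
        "format": "INTENTS_FORMAT_CSV",  # assumed enum spelling
        "format_options": {
            "hierarchical_delimiter": "/",
            "skip_empty_intents": True,
        },
    },
    "intents_import": {
        "format": "INTENTS_FORMAT_HF_JSON",
        "import_options": {
            "merge_intents": True,  # merge instead of clear_intents
            "merge_tags": True,
            # clear_tags is deliberately left unset: combining it with
            # extra_*_tags could clear tags those fields reference
            "extra_phrase_tags": [{"name": "imported"}],
        },
    },
}
print(json.dumps(preset, indent=2))
```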
  metadata MetadataEntry
  nlg NlgSettings
    integration_id string: The id of the integration to use for the NLG prompt completions.
    conversation_set_id string: The conversation set in which NLG prompt completions are stored. The first conversation source in the conversation set is expected to be a user-upload source.
    generation_id uint32: An incrementing id associated with each prompt-completion attempt in recommendations.
    prompt_template string: The template into which an intent's specific prompt is injected. $INTENT_PROMPT and $EXAMPLE_TEXT are interpolated within the prompt based on the provided data (see the sketch after this list).
    intent_prompt_metadata_key string: The metadata key from which an intent's prompt is extracted. If empty, the fallback key "hint" is used.
    model_name string: The LLM model to use for OpenAI prompt completions. Deprecated in favor of model_parameters.model_name; kept in sync with it, taking its value. If updated, the model parameters are updated as well.
    temperature float: Temperature setting for prompt completions. Deprecated in favor of model_parameters.temperature; kept in sync with it, taking its value. If updated, the model parameters are updated as well.
    max_tokens int32: Maximum number of tokens allowed between the prompt and the completion. Deprecated in favor of model_parameters.max_tokens; kept in sync with it, taking its value. If updated, the model parameters are updated as well.
    top_p float: Top-p setting for prompt completions; should be a value between 0 and 1. Deprecated in favor of model_parameters.top_p; kept in sync with it, taking its value. If updated, the model parameters are updated as well.
    frequency_penalty float: Frequency penalty setting for prompt completions. Deprecated in favor of model_parameters.frequency_penalty; kept in sync with it, taking its value. If updated, the model parameters are updated as well.
    presence_penalty float: Presence penalty setting for prompt completions. Deprecated in favor of model_parameters.presence_penalty; kept in sync with it, taking its value. If updated, the model parameters are updated as well.
    stop_sequences string[]: Stop sequences telling the LLM when to stop generating text in the completion. For OpenAI, only up to the first 4 stop sequences are used. Deprecated in favor of model_parameters.stop_sequences; kept in sync with it, taking its value. If updated, the model parameters are updated as well.
    model_parameters NlgModelParameters
      model_name string: The LLM model to use for OpenAI prompt completions.
      temperature float: Temperature setting for prompt completions.
      max_tokens int32: Maximum number of tokens allowed between the prompt and the completion.
      top_p float: Top-p setting for prompt completions; should be a value between 0 and 1.
      frequency_penalty float: Frequency penalty setting for prompt completions.
      presence_penalty float: Presence penalty setting for prompt completions.
      stop_sequences string[]: Stop sequences telling the LLM when to stop generating text in the completion. For OpenAI, only up to the first 4 stop sequences are used.
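The two placeholders in prompt_template imply plain string substitution. A minimal sketch in Python; only the placeholder names come from this reference, the template wording and the example data are illustrative:

```python
# Hypothetical template; $INTENT_PROMPT and $EXAMPLE_TEXT are the
# placeholders this reference defines.
prompt_template = (
    "Write three short user utterances. $INTENT_PROMPT\n"
    "Existing examples:\n$EXAMPLE_TEXT"
)

# e.g. read from the intent's metadata under intent_prompt_metadata_key
# (falling back to the "hint" key when unset)
intent_prompt = "The user wants to cancel their subscription."
example_text = "- cancel my plan\n- stop my subscription"

prompt = (prompt_template
          .replace("$INTENT_PROMPT", intent_prompt)
          .replace("$EXAMPLE_TEXT", example_text))
print(prompt)
```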
  pipelines Pipeline[]: Pipelines associated with this workspace.
    id string: The unique identifier of the pipeline.
    seq_id uint32: Internally managed non-zero unique sequential number assigned to the pipeline. It should not be modified via the API, as it is enforced by the backend.
    name string: The name of the pipeline.
    steps PipelineStep[]: The steps of the pipeline.
      id string: The unique identifier of the step.
      name string
      data_query DataQuery
        queries Query[]: The queries to be used by the step (see the sketch at the end of this section).
          query Any: Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. The protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type.

Example 1: Pack and unpack a message in C++.

```cpp
Foo foo = ...;
Any any;
any.PackFrom(foo);
...
if (any.UnpackTo(&foo)) {
  ...
}
```

Example 2: Pack and unpack a message in Java.

```java
Foo foo = ...;
Any any = Any.pack(foo);
...
if (any.is(Foo.class)) {
  foo = any.unpack(Foo.class);
}
// or ...
if (any.isSameTypeAs(Foo.getDefaultInstance())) {
  foo = any.unpack(Foo.getDefaultInstance());
}
```

Example 3: Pack and unpack a message in Python.

```python
foo = Foo(...)
any = Any()
any.Pack(foo)
...
if any.Is(Foo.DESCRIPTOR):
  any.Unpack(foo)
  ...
```

Example 4: Pack and unpack a message in Go.

```go
foo := &pb.Foo{...}
any, err := anypb.New(foo)
if err != nil {
  ...
}
...
foo := &pb.Foo{}
if err := any.UnmarshalTo(foo); err != nil {
  ...
}
```

The pack methods provided by the protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL, and the unpack methods only use the fully qualified type name after the last '/' in the type URL; for example, "foo.bar.com/x/y.z" will yield type name "y.z".

JSON: The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example:

```proto
package google.profile;
message Person {
  string first_name = 1;
  string last_name = 2;
}
```

```json
{
  "@type": "type.googleapis.com/google.profile.Person",
  "firstName": <string>,
  "lastName": <string>
}
```

If the embedded message type is well known and has a custom JSON representation, that representation will be embedded, adding a field value which holds the custom JSON in addition to the @type field. Example (for message google.protobuf.Duration):

```json
{
  "@type": "type.googleapis.com/google.protobuf.Duration",
  "value": "1.212s"
}
```

            type_url string: A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one "/" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration). The name should be in a canonical form (e.g., a leading "." is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http, https, or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows:
              - If no scheme is provided, https is assumed.
              - An HTTP GET on the URL must yield a google.protobuf.Type value in binary format, or produce an error.
              - Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.)
              Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. Schemes other than http and https (or the empty scheme) might be used with implementation-specific semantics.
            value bytes: Must be a valid serialized protocol buffer of the type specified above.
        extra_client_data string: Extra data to be saved by the client for use at its discretion.
        max_processed_items uint32: The maximum number of items to be processed from the query.
      program Program
        prompt_transform PromptTransform
          prompt_id string: The unique identifier of the prompt to be used.
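Since each query is an Any, its JSON form carries an @type field alongside the query's own fields, as described above. A sketch of a pipeline step in Python; the query type URL and its fields are hypothetical, since this reference does not define the concrete query messages:

```python
import json

# Hypothetical PipelineStep. Only the field layout (data_query, queries,
# program, prompt_transform) comes from this reference.
step = {
    "name": "fetch-conversations",
    "data_query": {
        "queries": [
            {
                "query": {
                    # hypothetical type URL and fields
                    "@type": "type.googleapis.com/zia.ai.example.ConversationQuery",
                    "text": "refund",
                }
            }
        ],
        "max_processed_items": 500,  # cap on items processed from the query
    },
    "program": {
        "prompt_transform": {"prompt_id": "prompt-123"},  # hypothetical id
    },
}
print(json.dumps(step, indent=2))
```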