GetWorkspace

Returns a single workspace.

Path Parameters
namespace string - REQUIRED

Namespace in which the workspace resides.

id string - REQUIRED

Unique identifier of the workspace.
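A hedged sketch of calling this endpoint. This reference only specifies the two required path parameters, so the host and URL pattern below are assumptions; adjust them to your deployment:

```python
# Hypothetical request sketch: the base URL and path template are assumptions,
# not part of this reference. GetWorkspace takes two required path parameters.
def workspace_path(namespace: str, workspace_id: str) -> str:
    """Build the GetWorkspace request path from its two path parameters."""
    return f"/v1alpha1/workspaces/{namespace}/{workspace_id}"

print(workspace_path("my-namespace", "ws-123"))
# A real call would then look something like (authentication omitted):
# requests.get(f"https://api.example.com{workspace_path(ns, wid)}", headers=...)
```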

Response Body
namespace string

Namespace of the workspace.

id string

Unique identifier of the workspace generated by the server.

name string

User-visible name of the workspace.

conversation_sets ScopedReference[]

Optional list of conversation set ids to include in the workspace's conversations.

namespace string

Namespace of the referenced object.

id string

Unique identifier of the referenced object.

color string

Color of the playbook in the UI. Any CSS color format is supported (ex: #FF0000, red).

base_language string

Base language of the workspace. If empty, 'en' is assumed. Two-letter ISO 639-1 code or BCP 47 locale format (ex: 'en-US'). Deprecated: use flags.languages.default_language instead.

active_languages string[]

Languages that can be used in the workspace in addition to the base language. Two-letter ISO 639-1 code or BCP 47 locale format (ex: 'en-US'). Deprecated: use flags.languages.enabled_languages instead.

nlu NluSettings
id string

Unique identifier of the NLU engine in the workspace.

name string

(Optional) User-defined name of the NLU engine. Since a user can have multiple NLU engines of the same type, this name identifies the engine in the UI.

engine_version string

Version of the specified NLU engine. Since multiple deployments are feasible, this specifies the exact image which will be used when using an external NLU engine. This parameter has no impact for the internal engine.

is_default boolean

Internally managed flag to indicate that this is the default engine of the workspace. This should not be modified via the API as it is enforced by the backend, unless set when calling CreatePlaybookNluEngine or UpdatePlaybookNluEngine to declare an engine as default.

seq_id uint32

Internally managed non-zero unique sequential number assigned to the engine. This should not be modified via the API as it is enforced by the backend.

on_demand_train boolean

Only allow training the NLU engine when it is explicitly triggered. Useful to prevent expensive NLU engines (ex: Dialogflow) from being triggered automatically.

on_demand_infer boolean

Only allow using the NLU engine for unlabelled data inference when it is explicitly triggered to run.

max_retry UInt32Value

Wrapper message for uint32. The JSON representation for UInt32Value is JSON number.

value uint32

The uint32 value.
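As the description notes, the well-known wrapper types have a compact JSON form: a UInt32Value field appears as a plain JSON number (and a BoolValue, used later in this reference, as plain true/false) rather than as a nested {"value": ...} object. A sketch of what a client sees, using hypothetical field values:

```python
import json

# JSON as the API would emit it: max_retry is a UInt32Value and
# hierarchical_remap_score is a BoolValue, but their JSON representations
# are a bare number and a bare boolean, not wrapper objects.
payload = json.loads('{"max_retry": 3, "hierarchical_remap_score": true}')
print(payload["max_retry"])                  # plain int after parsing
print(payload["hierarchical_remap_score"])   # plain bool after parsing
```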

integration_id string

(Optional) Unique identifier of the integration if the NLU engine is linked to an external integration.

training_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.
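The three id lists of a TagPredicate combine as AND / OR / NOT over an object's tags. An illustrative sketch of the selection logic (not the server implementation):

```python
def matches(object_tag_ids, predicate):
    """Return True if an object's tags satisfy a TagPredicate.

    require_ids: ALL must be present. include_ids: ANY must be present
    (ignored if empty). exclude_ids: NONE may be present.
    """
    tags = set(object_tag_ids)
    require = set(predicate.get("require_ids", []))
    include = set(predicate.get("include_ids", []))
    exclude = set(predicate.get("exclude_ids", []))
    if not require <= tags:          # missing a required tag
        return False
    if include and not (include & tags):  # none of the included tags
        return False
    if exclude & tags:               # carries an excluded tag
        return False
    return True

print(matches({"t1", "t2"}, {"require_ids": ["t1"], "exclude_ids": ["t3"]}))  # True
```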

intent_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

hierarchical_remap_score BoolValue

Wrapper message for bool. The JSON representation for BoolValue is JSON true and false.

value boolean

The bool value.

internal NluEngineInternal
latent_space_key string

(Optional) Specify the latent space to use when training this engine

rasa NluEngineRasa
pipeline_config string

Contents of the config.yml to be used for training

dialogflow_cx NluEngineDialogflowCx
project_id string

GCP project of the agent. If empty, default project in the integration will be used.

location string

GCP location of the agent (ex: northamerica-northeast1). If empty, the default location in the integration will be used, otherwise 'global' is used.

credential_id string

The id of the GCP credential to use. Deprecated: replaced by NluSettings.integration_id.

model_type enum
huggingface NluEngineHuggingFace
base_model string

The base model to start from (see https://huggingface.co/models). The model needs to use a supported architecture and (currently) support TensorFlow, e.g. bert-base-uncased.

config_json string

(Optional) A JSON configuration to be merged with the base model's default configuration.

training_args_json string

(Optional) A JSON object containing training (hyper-)parameters.

custom NluEngineCustom
auto_train boolean

If true, training and inference of this NLU engine will be triggered automatically when the playbook is saved. The engine will run training and inference regardless of the on_demand_train and on_demand_infer flags.

other_nlus NluSettings[]

Settings of other NLU engines that the workspace can use.

id string

Unique identifier of the NLU engine in the workspace.

name string

(Optional) User-defined name of the NLU engine. Since a user can have multiple NLU engines of the same type, this name identifies the engine in the UI.

engine_version string

Version of the specified NLU engine. Since multiple deployments are feasible, this specifies the exact image which will be used when using an external NLU engine. This parameter has no impact for the internal engine.

is_default boolean

Internally managed flag to indicate that this is the default engine of the workspace. This should not be modified via the API as it is enforced by the backend, unless set when calling CreatePlaybookNluEngine or UpdatePlaybookNluEngine to declare an engine as default.

seq_id uint32

Internally managed non-zero unique sequential number assigned to the engine. This should not be modified via the API as it is enforced by the backend.

on_demand_train boolean

Only allow training the NLU engine when it is explicitly triggered. Useful to prevent expensive NLU engines (ex: Dialogflow) from being triggered automatically.

on_demand_infer boolean

Only allow using the NLU engine for unlabelled data inference when it is explicitly triggered to run.

max_retry UInt32Value

Wrapper message for uint32. The JSON representation for UInt32Value is JSON number.

value uint32

The uint32 value.

integration_id string

(Optional) Unique identifier of the integration if the NLU engine is linked to an external integration.

training_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

intent_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

hierarchical_remap_score BoolValue

Wrapper message for bool. The JSON representation for BoolValue is JSON true and false.

value boolean

The bool value.

internal NluEngineInternal
latent_space_key string

(Optional) Specify the latent space to use when training this engine

rasa NluEngineRasa
pipeline_config string

Contents of the config.yml to be used for training

dialogflow_cx NluEngineDialogflowCx
project_id string

GCP project of the agent. If empty, default project in the integration will be used.

location string

GCP location of the agent (ex: northamerica-northeast1). If empty, the default location in the integration will be used, otherwise 'global' is used.

credential_id string

The id of the GCP credential to use. Deprecated: replaced by NluSettings.integration_id.

model_type enum
huggingface NluEngineHuggingFace
base_model string

The base model to start from (see https://huggingface.co/models). The model needs to use a supported architecture and (currently) support TensorFlow, e.g. bert-base-uncased.

config_json string

(Optional) A JSON configuration to be merged with the base model's default configuration.

training_args_json string

(Optional) A JSON object containing training (hyper-)parameters.

custom NluEngineCustom
auto_train boolean

If true, training and inference of this NLU engine will be triggered automatically when the playbook is saved. The engine will run training and inference regardless of the on_demand_train and on_demand_infer flags.

evaluation EvaluationSettings
default_parameters EvaluationParameters
intent_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

k_fold KFold
num_folds uint32

Number of folds

phrase_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

test_set TestSet
phrase_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

nlu_id string

Optional unique identifier of the NLU engine to use in the workspace. If none is specified, the workspace's default configured NLU engine will be used. See zia.ai.pipeline.v1alpha1.NluSettings.id

evaluation_preset_id string

If specified, the evaluation parameters will be overridden by the parameters of the given preset id, discarding any current values.

auto boolean

If true, signals that the evaluation is an automatic run.

creation_time RFC3339

Time at which the playbook was created. Added in Feb 2021, so playbooks created before then won't have this field populated.

phrase_uniqueness_level enum

Determines the level of phrase uniqueness enforced in the workspace. 0 = intent level (PHRASE_UNIQUENESS_LEVEL_INTENT): a phrase can only exist once within the intent it's associated with. 1 = workspace level (PHRASE_UNIQUENESS_LEVEL_WORKSPACE): a phrase can only exist once in the entire workspace. 2 = none (PHRASE_UNIQUENESS_LEVEL_NONE): a phrase can exist more than once, anywhere in the workspace.
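The numeric values map to the enum names described above; a sketch (the wire format may carry either the number or the name depending on the client):

```python
from enum import IntEnum

class PhraseUniquenessLevel(IntEnum):
    # Values and names as documented for phrase_uniqueness_level.
    PHRASE_UNIQUENESS_LEVEL_INTENT = 0     # unique within an intent
    PHRASE_UNIQUENESS_LEVEL_WORKSPACE = 1  # unique across the workspace
    PHRASE_UNIQUENESS_LEVEL_NONE = 2       # duplicates allowed anywhere

print(PhraseUniquenessLevel(1).name)  # PHRASE_UNIQUENESS_LEVEL_WORKSPACE
```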

presets Preset[]

Stored settings that make various workspace behaviours easier to repeat.

id string

Id of the preset.

seq_id uint32

Internally managed non-zero unique sequential number assigned to the preset. This should not be modified via the API as it is enforced by the backend.

name string

Name of the preset.

description string

Description of the preset.

evaluation Evaluation

Contains settings for running evaluations via zia.ai.playbook.v1alpha1.RunEvaluation. See zia.ai.evaluation.v1alpha1.RunEvaluationRequest for the matching fields that this connects to.

parameters EvaluationParameters
intent_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

k_fold KFold
num_folds uint32

Number of folds

phrase_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

test_set TestSet
phrase_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

nlu_id string

Optional unique identifier of the NLU engine to use in the workspace. If none specified, the workspace's default configured NLU engine will be used. See zia.ai.pipeline.v1alpha1.NluSettings.id

evaluation_preset_id string

If specified, the evaluation parameters will be overridden by the parameters of the given preset id, discarding any current values.

auto boolean

If true, signals that the evaluation is an automatic run.

auto_evaluate boolean

Allow preset to be periodically evaluated automatically.

intents_export IntentsExport

Contains settings for exporting intents via zia.ai.playbook.data.v1alpha1.ExportIntents. See zia.ai.playbook.data.v1alpha1.ExportIntentsRequest for the matching fields that this connects to.

format enum

Format of the exported data.

format_options IntentsDataOptions
hierarchical_intent_name_disabled boolean

Disables encoding the intent hierarchy in intent names (ex: 'Parent / Sub-parent / Intent').

hierarchical_delimiter string

Overrides the default delimiter used for intent hierarchy. Default is '--' for Botpress and Dialogflow, '+' for Rasa, '/' for CSV
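When hierarchy encoding is enabled, an intent's ancestry is joined into a single exported name using the format's delimiter. An illustrative sketch using the per-format defaults listed above (actual exports may pad the delimiter with spaces, as in the 'Parent / Sub-parent / Intent' example):

```python
# Default delimiters as documented for hierarchical_delimiter.
DEFAULT_DELIMITERS = {"botpress": "--", "dialogflow": "--", "rasa": "+", "csv": "/"}

def encode_intent_name(path, fmt, delimiter=None):
    """Join an intent path like ['Parent', 'Sub-parent', 'Intent'] into one name."""
    d = delimiter or DEFAULT_DELIMITERS[fmt]
    return d.join(path)

print(encode_intent_name(["Parent", "Sub-parent", "Intent"], "csv"))
print(encode_intent_name(["Billing", "Refunds"], "rasa"))
```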

zip_encoding boolean

Indicates that the intents are zipped and may be split into different files.

gzip_encoding boolean

Indicates that the intent file is gzipped.

hierarchical_follow_up boolean

To be used with Dialogflow to encode the intent hierarchy using intent follow-ups.

include_negative_phrases boolean

Export negative phrases as well.

intent_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

phrase_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

skip_empty_intents boolean

Skip all intents that do not contain phrases.

intent_ids string[]

(Optional) Limit export to these given intents.

intents_import IntentsImport

Contains settings for importing intents via zia.ai.playbook.data.v1alpha1.ImportIntents. See zia.ai.evaluation.params.v1alpha1.ImportIntentsRequest for the matching fields that this connects to.

format enum

Format of the imported file.

format_options IntentsDataOptions
hierarchical_intent_name_disabled boolean

Disables encoding the intent hierarchy in intent names (ex: 'Parent / Sub-parent / Intent').

hierarchical_delimiter string

Overrides the default delimiter used for intent hierarchy. Default is '--' for Botpress and Dialogflow, '+' for Rasa, '/' for CSV

zip_encoding boolean

Indicates that the intents are zipped and may be split into different files.

gzip_encoding boolean

Indicates that the intent file is gzipped.

hierarchical_follow_up boolean

To be used with Dialogflow to encode the intent hierarchy using intent follow-ups.

include_negative_phrases boolean

Import negative phrases as well.

intent_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

phrase_tag_predicate TagPredicate
require_ids string[]

Only include objects with ALL of the given tag ids.

include_ids string[]

Only include objects with ANY of the given tag ids.

exclude_ids string[]

Exclude objects with ANY of the given tag ids.

skip_empty_intents boolean

Skip all intents that do not contain phrases.

import_options ImportOptions
clear_intents boolean

Clears workspace intents before importing.

clear_entities boolean

Clears workspace entities before importing.

clear_tags boolean

Clears workspace tags before importing. Note: should not be used in combination with extra_intent_tags or extra_phrase_tags since we will clear potentially referenced tags.

merge_intents boolean

Tries to merge intents into existing ones if they can be found in the workspace.

merge_entities boolean

Tries to merge entities into existing ones if they can be found in the workspace.

merge_tags boolean

Tries to merge tags into existing ones if they can be found in the workspace.

extra_intent_tags TagReference[]

Add extra tags to imported intents.

id string

Unique identifier of the tag.

name string

(Optional) Only used when importing data whose tag IDs are not yet defined. This will not be filled when requesting tagged objects.

protected boolean

For internal use. There is no guarantee that this will be properly filled.

extra_phrase_tags TagReference[]

Add extra tags to imported phrases.

id string

Unique identifier of the tag.

name string

(Optional) Only used when importing data whose tag IDs are not yet defined. This will not be filled when requesting tagged objects.

protected boolean

For internal use. There is no guarantee that this will be properly filled.

override_metadata boolean

Overrides the description, color, and metadata of the workspace with the values of the imported file if they are supported in the received format. Supported formats: INTENTS_FORMAT_HF_JSON

override_name boolean

Overrides the name of the workspace with the value of the imported file if they are supported in the received format. Supported formats: INTENTS_FORMAT_HF_JSON

metadata MetadataEntry
key string
value string
nlg NlgSettings
integration_id string

The id of the integration to use for the NLG prompt completions.

conversation_set_id string

The conversation set to use to store NLG prompt completions. It is expected that the first conversation source in the conversation set is a user upload source.

generation_id uint32

An incrementing id that is associated with each prompt completion attempt in recommendations.

prompt_template string

The template to inject an intent's specific prompt into. $INTENT_PROMPT and $EXAMPLE_TEXT will be interpolated within the template based on the provided data.

intent_prompt_metadata_key string

The metadata key used to extract an intent's prompt. If this is empty, the fallback key "hint" will be used.

model_name string

The LLM model to use for OpenAI prompt completions. Deprecated in favor of model_parameters.model_name. Is in sync with model_parameters.model_name, taking its value. If updated, the model parameters will be updated as well.

temperature float

Temperature setting for prompt completions. Deprecated in favor of model_parameters.temperature. Is in sync with model_parameters.temperature, taking its value. If updated, the model parameters will be updated as well.

max_tokens int32

Max number of tokens allowed between the prompt and completion. Deprecated in favor of model_parameters.max_tokens. Is in sync with model_parameters.max_tokens, taking its value. If updated, the model parameters will be updated as well.

top_p float

Top p setting for prompt completions. Should be a value between 0 and 1. Deprecated in favor of model_parameters.top_p. Is in sync with model_parameters.top_p, taking its value. If updated, the model parameters will be updated as well.

frequency_penalty float

Frequency penalty setting for prompt completions. Deprecated in favor of model_parameters.frequency_penalty. Is in sync with model_parameters.frequency_penalty, taking its value. If updated, the model parameters will be updated as well.

presence_penalty float

Presence penalty setting for prompt completions. Deprecated in favor of model_parameters.presence_penalty. Is in sync with model_parameters.presence_penalty, taking its value. If updated, the model parameters will be updated as well.

stop_sequences string[]

Configured stop sequences to tell the LLM when to stop generating text in the completion. Deprecated in favor of model_parameters.stop_sequences. For OpenAI, only up to the first 4 stop sequences will be used. Is in sync with model_parameters.stop_sequences, taking its value. If updated, the model parameters will be updated as well.

model_parameters NlgModelParameters
model_name string

The LLM model to use for OpenAI prompt completions.

temperature float

Temperature setting for prompt completions.

max_tokens int32

Max number of tokens allowed between the prompt and completion.

top_p float

Top p setting for prompt completions. Should be a value between 0 and 1.

frequency_penalty float

Frequency penalty setting for prompt completions.

presence_penalty float

Presence penalty setting for prompt completions.

stop_sequences string[]

Configured stop sequences to tell the LLM when to stop generating text in the completion. For OpenAI, only up to the first 4 stop sequences will be used.
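As noted above, OpenAI only honours up to four stop sequences, so a client mirroring this behaviour would truncate the configured list before sending it. A minimal sketch (the `provider` parameter is illustrative, not part of this API):

```python
def effective_stop_sequences(stop_sequences, provider="openai"):
    """Return the stop sequences actually sent; OpenAI caps the list at 4."""
    if provider == "openai":
        return stop_sequences[:4]
    return stop_sequences

print(effective_stop_sequences(["END", "STOP", "\n\n", "###", "DONE"]))
```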

pipelines Pipeline[]

Pipelines associated with this workspace.

id string

The unique identifier of the pipeline.

seq_id uint32

Internally managed non-zero unique sequential number assigned to the pipeline. This should not be modified via the API as it is enforced by the backend.

name string

The name of the pipeline.

steps PipelineStep[]

The steps of the pipeline.

id string

The unique identifier of the step.

name string

The name of the step.

data_query DataQuery
queries Query[]

The queries to be used by the step.

query Any

Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. The protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type.

Example 1: Pack and unpack a message in C++.

    Foo foo = ...;
    Any any;
    any.PackFrom(foo);
    ...
    if (any.UnpackTo(&foo)) {
      ...
    }

Example 2: Pack and unpack a message in Java.

    Foo foo = ...;
    Any any = Any.pack(foo);
    ...
    if (any.is(Foo.class)) {
      foo = any.unpack(Foo.class);
    }
    // or ...
    if (any.isSameTypeAs(Foo.getDefaultInstance())) {
      foo = any.unpack(Foo.getDefaultInstance());
    }

Example 3: Pack and unpack a message in Python.

    foo = Foo(...)
    any = Any()
    any.Pack(foo)
    ...
    if any.Is(Foo.DESCRIPTOR):
      any.Unpack(foo)
      ...

Example 4: Pack and unpack a message in Go.

    foo := &pb.Foo{...}
    any, err := anypb.New(foo)
    if err != nil {
      ...
    }
    ...
    foo := &pb.Foo{}
    if err := any.UnmarshalTo(foo); err != nil {
      ...
    }

The pack methods provided by the protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL, and the unpack methods only use the fully qualified type name after the last '/' in the type URL; for example, "foo.bar.com/x/y.z" will yield type name "y.z".

JSON

The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example:

    package google.profile;
    message Person {
      string first_name = 1;
      string last_name = 2;
    }

    {
      "@type": "type.googleapis.com/google.profile.Person",
      "firstName": <string>,
      "lastName": <string>
    }

If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded, adding a field value which holds the custom JSON in addition to the @type field. Example (for message google.protobuf.Duration):

    {
      "@type": "type.googleapis.com/google.protobuf.Duration",
      "value": "1.212s"
    }

type_url string

A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one "/" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration). The name should be in a canonical form (e.g., leading "." is not accepted).

In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http, https, or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows:

* If no scheme is provided, https is assumed.
* An HTTP GET on the URL must yield a google.protobuf.Type value in binary format, or produce an error.
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.)

Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. Schemes other than http, https (or the empty scheme) might be used with implementation-specific semantics.

value bytes

Must be a valid serialized protocol buffer of the above specified type.

extra_client_data string

Extra data to be saved by the client for use at its discretion.

max_processed_items uint32

The maximum number of items to be processed from the query.

program Program
prompt_transform PromptTransform
prompt_id string

The unique identifier of the prompt to be used.