NLU engines
Train on demand
By default, NLU engines will be trained automatically. Enable this option to train only on demand.
Training builds an NLU engine from the intents, labeled utterances, and entities defined in the workspace. This improves the quality of intent predictions.
Training an external NLU engine, such as one from an integration, can incur fees outside of HumanFirst.
Limit intent training by tag
You can limit the intents used when training your NLU engine to the ones meeting these tag match criteria.
Limit utterance training by tag
You can limit the labeled utterances used when training your NLU engine to the ones meeting these tag match criteria.
Infer on demand
By default, NLU engines will be used for inference automatically. Enable this option to infer only on demand (recommended for external NLU providers). Inference uses your trained NLU model(s) to analyze the workspace's unlabeled data. This provides distribution metrics on your data such as uncertainty, entropy, and margin scores.
Inferring data with an external NLU engine, such as one from an integration, can incur fees outside of HumanFirst.
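To make the distribution metrics concrete, here is an illustrative sketch of how uncertainty, entropy, and margin can be computed from one utterance's intent-probability distribution. The `distribution_metrics` helper is hypothetical, not HumanFirst's implementation; it only shows the standard definitions of these scores.

```python
import math

def distribution_metrics(probs):
    """Hypothetical helper: compute uncertainty, entropy, and margin
    scores from a list of intent probabilities for one utterance."""
    ranked = sorted(probs, reverse=True)
    uncertainty = 1.0 - ranked[0]            # 1 minus the top confidence
    margin = ranked[0] - ranked[1]           # gap between top two intents
    # Shannon entropy over the whole distribution (higher = more spread out)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return {"uncertainty": uncertainty, "margin": margin, "entropy": entropy}

# A confident prediction: low uncertainty and entropy, high margin.
print(distribution_metrics([0.9, 0.07, 0.03]))
```

Utterances with high uncertainty or entropy, or a small margin, are good candidates for labeling, since the model is least sure about them.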
Include parent intents in predictions
HumanFirst supports intent hierarchies. This allows HF NLU to provide relevant fallback intents when a match is not strong enough. For example, if you have 3 intents:
- Has a problem
- Has an authentication problem
- Has a billing problem
NLU predictions will try to match inputs to intents. But if the NLU engine is struggling to decide between sibling intents ("Has an authentication problem" & "Has a billing problem" in our example), it can be configured to automatically fall back to the parent intent of those siblings ("Has a problem" in our example). This is what the include parent intents in predictions feature enables.
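The fallback behavior described above can be sketched as follows. This is an illustrative example, not HumanFirst's actual algorithm: the `predict_with_parent_fallback` function, the `margin` threshold, and the score values are all assumptions made for the sake of the example.

```python
def predict_with_parent_fallback(scores, parent_of, margin=0.1):
    """Illustrative sketch: if the top two predicted intents are siblings
    and their scores are too close to call, return their shared parent."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (top, top_score), (second, second_score) = ranked[0], ranked[1]
    parent = parent_of.get(top)
    if top_score - second_score < margin and parent is not None \
            and parent == parent_of.get(second):
        return parent                # siblings nearly tied: fall back
    return top                       # otherwise keep the top prediction

# Hypothetical hierarchy matching the example intents above.
parents = {
    "Has an authentication problem": "Has a problem",
    "Has a billing problem": "Has a problem",
}
# The two siblings are nearly tied, so the shared parent is returned.
print(predict_with_parent_fallback(
    {"Has an authentication problem": 0.46, "Has a billing problem": 0.44},
    parents))  # → Has a problem
```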
Because many external NLU engines do not have a notion of intent hierarchies, this feature should be turned off for them.
Latent space
HF NLU trains a thin model on top of a latent space in order to provide good results with short training times.
If you are using a custom latent space, or are using embeddings from one of our integrations, you can override it here.
Rasa 3
Steps to add a Rasa 3 engine
- Navigate to the Workspace Settings page by clicking on the Settings button in the top navigation bar of your workspace.
- Under the Workspace settings options, click the + Add button under the NLU engines heading.
- Select the Rasa 3 engine's button in the Adding NLU Engine modal.
- Configure the engine with your preferences.
- Add a Rasa configuration yaml file in the Rasa config (yaml) field.
- Click the Save button at the bottom of the modal.
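A minimal Rasa 3 configuration for the Rasa config (yaml) field might look like the following. The specific pipeline components and the `epochs` value are illustrative assumptions; any valid Rasa 3 NLU pipeline should work.

```yaml
recipe: default.v1
language: en
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  - name: DIETClassifier
    epochs: 100        # illustrative value; tune for your data
```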
Duckling support in Rasa 3
Using the Duckling service in your Rasa 3 NLU engine will help with extracting common entities (e.g., distances, times, and phone numbers).
Steps to add Duckling to a Rasa 3 engine
- Navigate to the Workspace Settings page by clicking on the Settings button in the top navigation bar of your workspace.
- Under the Workspace settings options, find the previously added Rasa 3 NLU engine. Hover the mouse cursor over the name and click the pencil icon that appears to access the engine's configuration.
- In the Rasa config (yaml) text field, add the Duckling section to the pipeline definition after the classifier definition (e.g., DIETClassifier) and specify the dimensions to extract (e.g., "phoneNumber").
- Click on the Save button at the bottom of the modal.
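The Duckling section added in step 3 might look like this. `DucklingHTTPExtractor` is Rasa's Duckling component; the `url` shown assumes a Duckling server running locally on port 8000, and the dimensions listed are examples — adjust both to your deployment.

```yaml
pipeline:
  # ... tokenizers, featurizers ...
  - name: DIETClassifier
    epochs: 100
  - name: DucklingHTTPExtractor
    url: http://localhost:8000           # assumed local Duckling server
    dimensions: ["phoneNumber", "time", "distance"]
```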