
v1.8.0

@kmaphoenix released this 11 Sep 19:30 · 106 commits to main since this release

New Features

Offline Agent Parsing 📵

We're excited to announce a new feature we're calling "Offline Agent Parsing"!
So what does it do? 🤔

TLDR

The Offline Agent Parsing feature pulls your entire agent file offline and returns a Python object ✨jam-packed✨ with all of the key Agent Resources and the most commonly sought-after bits of information from your agent.
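To give you a feel for it, here's a minimal usage sketch. The module path, class name, method name, and attribute names below are our best-guess illustrations, not the authoritative API; check the library docs for the exact imports and signatures.

```python
# Hypothetical usage sketch -- import path, class name, and method signature
# are assumptions for illustration; see the library docs for the exact API.
from dfcx_scrapi.agent_extract.agents import Agents  # assumed module path

agent_id = "projects/<PROJECT_ID>/locations/<LOCATION>/agents/<AGENT_ID>"
gcs_bucket_uri = "gs://<YOUR_BUCKET>/agent_export"  # staging spot for the export

extractor = Agents(creds_path="<PATH_TO_SERVICE_ACCOUNT_KEY>.json")

# One offline pass over the exported agent file returns an AgentData object.
agent_data = extractor.process_agent(agent_id, gcs_bucket_uri)

# Everything below runs locally -- no further API calls required.
print(len(agent_data.flows))    # assumed attribute: list of Flows
print(len(agent_data.intents))  # assumed attribute: list of Intents
```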

It's Fast! ⚡️

We can process an agent of any size, from 1 Flow to 100 Flows, blazingly fast! 🔥

It's Cheap! (On the API!) 💰

We make all the magic happen in just 2 API calls, no matter the agent size! 🪄

It's Convenient! 🙏🏼

We already know you need that map and DataFrame. We gotcha! 😎

Some examples of what you'll get when you use the feature:

  • All of the major agent resources in List format (i.e. Intents, Entity Types, Flows, Pages, etc.)
  • All of the common resource maps (i.e. intents_map, flows_map, pages_map, etc.)
  • A full graph representation of your agent, for additional downstream parsing
  • Handy DataFrames for identifying gaps in your agent design
  • Total Counts of all common resources (i.e. Total Intents, Total Training Phrases, etc.)
  • ...and much more!

For a detailed list of the available outputs, see the AgentData class.
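To make the "maps and DataFrames" point concrete, here's a rough sketch of consuming the returned object for gap analysis. All attribute names below are assumptions for illustration; the AgentData class is the authoritative source for the real field names.

```python
import pandas as pd

# Continuing from the sketch above, `agent_data` is the object returned by the
# offline parse. Attribute names here are assumed; see AgentData for the real ones.

# Resource maps: resource ID <-> display name lookups.
intents_map = agent_data.intents_map  # assumed, e.g. {"<intent_id>": "Display Name"}
flows_map = agent_data.flows_map      # assumed

# Example gap check: flag intents with very few training phrases, assuming a
# DataFrame with one row per (intent, training phrase).
tp_df: pd.DataFrame = agent_data.training_phrases_df  # assumed attribute
phrase_counts = tp_df.groupby("intent")["training_phrase"].count()
print(phrase_counts[phrase_counts < 5].sort_values())  # underspecified intents

# High-level counts for quick reporting.
print(f"Total Flows: {agent_data.total_flows}")      # assumed attribute
print(f"Total Intents: {agent_data.total_intents}")  # assumed attribute
```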

Motivation

The motivation behind this class comes from many of our customers who are building really, REALLY big Dialogflow CX agents!

Using the standard APIs can become a costly hindrance in terms of the time it takes to retrieve data from your agent online.
If you have 50 Flows and you want to get all of the Pages in your agent, that's a minimum of 51 API calls!
Additionally, you have to beware of API Quota limits.
Run your script too fast, and you'll hit a 429 Quota Exhausted error!

With Offline Agent Parsing, we take all of the heavy lifting offline, and we do it all with just a couple of API calls.

Batch NLU Evaluation 📊

Another great feature that we're releasing is the NLU Evaluation framework!

This framework allows you to batch / bulk test your Dialogflow CX NLU model against your evaluation datasets.
This is especially helpful for teams that do a lot of NLU tuning in Dialogflow CX.

See the new Sample Eval Notebook for some pointers on getting started!
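If you want a feel for the workflow before opening the notebook, here's a rough sketch of what a batch evaluation loop can look like. The eval-set schema (utterance plus expected intent) and the `predict_intent` helper are stand-ins for illustration, not the framework's actual API; wire the helper up to whatever detect-intent call you use (for example, via SCRAPI's Sessions class).

```python
import pandas as pd

# Hypothetical stand-in for the prediction step: call Dialogflow CX
# detect-intent for one utterance and return the matched intent's display name.
def predict_intent(agent_id: str, utterance: str) -> str:
    raise NotImplementedError("wire this up to your detect-intent call")

def run_eval(agent_id: str, eval_set: pd.DataFrame) -> pd.DataFrame:
    """Scores an eval set with columns: utterance, expected_intent."""
    results = eval_set.copy()
    results["predicted_intent"] = [
        predict_intent(agent_id, text) for text in results["utterance"]
    ]
    results["match"] = results["expected_intent"] == results["predicted_intent"]
    return results

# Example usage:
# eval_set = pd.read_csv("evals.csv")  # columns: utterance, expected_intent
# results = run_eval(agent_id, eval_set)
# print(f"Accuracy: {results['match'].mean():.2%}")
# print(results[~results["match"]])  # misclassified utterances to review
```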

Bug Fixes

  • Fixed a bug where location_id was reverted to location in agents.py

Misc

  • Fixed some docs in various places
  • Fixed pylint deprecation error for builtins
  • Added optimization for __convert_tr_target_page in CopyUtil

What's Changed

Full Changelog: 1.7.0...1.8.0