Release 2.7.0 #574

Merged
merged 83 commits on Jun 12, 2023
Commits
219d7d4
Refactor `fixedWidthInput` class
JosephMarinier Mar 23, 2023
d0dddc0
Create reusable handlers
JosephMarinier Mar 25, 2023
6fc8efa
Update `partialConfig` based on the previous state
JosephMarinier Mar 27, 2023
c3f4c62
Remove `displayPipelineStringField()`
JosephMarinier Mar 25, 2023
75dd778
Fix disabling field for `postprocessor.class_name` when postprocessin…
JosephMarinier Mar 25, 2023
2816297
Remove `displayPostprocessorNumberField()`
JosephMarinier Mar 25, 2023
c268dbb
Have `StringField` support the value `null`
JosephMarinier Mar 27, 2023
e79d6fe
Remove `displayStringField()`
JosephMarinier Mar 25, 2023
d7531ce
Make `disabled` uniform
JosephMarinier Mar 25, 2023
b1dd6f2
Have `StringField` support select options
JosephMarinier Mar 25, 2023
b0630d7
Parse `ArtifactsConfig` only `if load_config_history`
JosephMarinier Mar 28, 2023
bd41b31
Move field components to separate files
JosephMarinier Mar 28, 2023
db7fd63
Make CI run on all push
gabegma Apr 3, 2023
63311d4
Minor fixes to comments and logs (#533)
gabegma Apr 4, 2023
d704c36
Clean up `links` from Docker Compose (#536)
JosephMarinier Apr 10, 2023
c77f7fa
Update python to 3.9 (#534)
gabegma Apr 11, 2023
3b5089b
Rename variable in tests (#535)
gabegma Apr 11, 2023
f54c511
Validate behavioral testing number fields (#537)
JosephMarinier Apr 11, 2023
32c58cd
Show error messages from backend on API failure (#524)
nandhinibsn Apr 12, 2023
8a30f8c
Fix config modal closing unexpectedly
JosephMarinier Apr 12, 2023
8827947
Log changes
JosephMarinier Apr 12, 2023
6138ac5
Fix get_tags() method
gabegma Apr 12, 2023
8c5c499
Fix breadcrumbs (#541)
JosephMarinier Apr 12, 2023
652628c
Fix get_tags() method (#544)
gabegma Apr 12, 2023
3bab4d2
Fix config modal closing unexpectedly on unsuccessful config update (…
JosephMarinier Apr 12, 2023
bb347e1
Merge branch 'main' into joseph/cleanup-config
JosephMarinier Apr 12, 2023
e0f76e2
Clean up config (#523)
JosephMarinier Apr 13, 2023
2bd25d7
Cleanup tasks in memory
gabegma Apr 13, 2023
fb76aa5
Cleanup tasks in memory (#543)
gabegma Apr 14, 2023
6eb80b1
Refactor validation modules (#542)
gabegma Apr 14, 2023
1c7d707
Avoid checking `.done()` twice
JosephMarinier Apr 14, 2023
41c081d
Check for startup status a lot more frequently
JosephMarinier Apr 14, 2023
0f4cc88
Improve speed of outcome count per threshold
gabegma Apr 12, 2023
f546770
Improve speed of outcome count per threshold (#545)
gabegma Apr 14, 2023
817524f
Log progress in a separate thread
JosephMarinier Apr 14, 2023
6d242fb
Always log statuses in the same (logical) order
JosephMarinier Apr 17, 2023
2c6f608
Fix if dependencies for empty lists
gabegma Apr 12, 2023
ce2f2c7
Modify order of startup so dependencies work well
gabegma Apr 12, 2023
9a11c13
Rename startup tasks so they are not a substring of another
gabegma Apr 12, 2023
a9e900f
Adapt based on comments
gabegma Apr 17, 2023
a89c318
Avoid restarting task when pending
gabegma Apr 17, 2023
54519e8
Avoid restarting task when pending (#539)
gabegma Apr 17, 2023
54f4731
Fix startup dependencies (#538)
gabegma Apr 17, 2023
4c0eba4
Clean up
JosephMarinier Apr 17, 2023
19968da
Make `thread_log_progress` a `daemon` thread, to make sure it doesn't…
JosephMarinier Apr 17, 2023
f17e012
Use constants for HTTP status codes
JosephMarinier Apr 13, 2023
39c91a6
Handle PATCH "/config" with `handle_validation_error()`
JosephMarinier Apr 13, 2023
a5c834a
Make validation errors less verbose
JosephMarinier Apr 13, 2023
66734bb
Fix Exploration space crashing with invalid dataset split
JosephMarinier Apr 13, 2023
01015c6
Log changes
JosephMarinier Apr 13, 2023
7887517
Workaround CI error
JosephMarinier Apr 17, 2023
4eea598
Don't block execution when waiting for startup tasks (#547)
JosephMarinier Apr 17, 2023
6f98295
Refactor DatasetSplitManager to reload latest cache. (#527)
gabegma Apr 18, 2023
9ba2f9b
Refactor ArtifactManager and restart TaskManager (#529)
gabegma Apr 18, 2023
add5b58
Assign tasks to workers (#525)
gabegma Apr 19, 2023
3c384a4
Move handlers to `utils/exception_handlers.py`
JosephMarinier Apr 20, 2023
d1bfb1c
Merge branch 'main' into joseph/validation-errors
JosephMarinier Apr 20, 2023
120915d
Make validation errors less verbose (#546)
JosephMarinier Apr 20, 2023
f366d54
Merge branch 'main' into ggm/release-v2.6.1
JosephMarinier Apr 20, 2023
21b3d15
Merge release v2.6.1 on main (#553)
JosephMarinier Apr 20, 2023
8494747
Clean up change log after release
JosephMarinier Apr 20, 2023
7fbd36f
Open utterance details in a modal (#551)
JosephMarinier Apr 25, 2023
e1cefea
Remove nonfunctional 500 handler (#556)
JosephMarinier Apr 26, 2023
7d543ef
Refactor `AzimuthBaseSettings` outside of `config.py` (#557)
JosephMarinier Apr 27, 2023
6115918
Align `isort`'s max line length with `flake8` and `black` (#559)
JosephMarinier May 2, 2023
215a758
Hide registering task logs (#562)
gabegma May 3, 2023
b364bee
Create an API route to get the config history
JosephMarinier May 2, 2023
a1d2e9b
Make validators private methods
JosephMarinier May 2, 2023
943f3e8
Create an API route to get the config history (#560)
JosephMarinier May 4, 2023
1090093
Clean up aliases (#563)
JosephMarinier May 4, 2023
1f87749
Support null dataset (#558)
JosephMarinier May 5, 2023
797ba66
Support adding/removing pipelines/post-processors (#561)
nandhinibsn May 25, 2023
13e46b6
Bump pymdown-extensions from 9.3 to 10.0 (#565)
dependabot[bot] May 30, 2023
12aa4a3
Support custom metrics (#569)
JosephMarinier Jun 1, 2023
45eb531
Remove deprecation warning from pytest output (#570)
Dref360 Jun 9, 2023
20a8d14
Add a make target to serve docs
JosephMarinier Jun 9, 2023
d0109a8
Small fixes and enhancements
JosephMarinier Jun 12, 2023
f3af6b3
Document launching Azimuth with or without a config file
JosephMarinier Jun 12, 2023
32d153d
Release 2.6.2
JosephMarinier Jun 12, 2023
8221460
Try `setup-poetry@v8`
JosephMarinier Jun 12, 2023
de3f326
Fix exporting `CFG_PATH`
JosephMarinier Jun 12, 2023
4b94404
Update documentation for 2.6.2 (#571)
JosephMarinier Jun 12, 2023
e888bbf
Bump minor version
JosephMarinier Jun 12, 2023
2 changes: 1 addition & 1 deletion .github/workflows/pythonci.yml
@@ -18,7 +18,7 @@ jobs:
- uses: actions/setup-python@v1
with:
python-version: 3.9
- uses: Gr1N/setup-poetry@v7
- uses: Gr1N/setup-poetry@v8
- uses: actions/cache@v2
with:
path: ~/.cache/pypoetry/virtualenvs
1 change: 0 additions & 1 deletion .pre-commit-config.yaml
@@ -56,7 +56,6 @@ repos:
hooks:
- id: isort
name: isort
args: [ "--profile", "black", "--skip", "__init__.py", "--filter-files" ]
- repo: local
hooks:
- id: mypy
4 changes: 4 additions & 0 deletions Makefile
@@ -61,3 +61,7 @@ launch:
push:
docker push $(REGISTRY)/$(IMAGE):$(TAG)_$(DEVICE)$(TAG_EXT)
docker push $(REGISTRY)/$(IMAGE)-app:$(TAG)$(TAG_EXT)

.PHONY: docs_serve
docs_serve:
cd docs && mkdocs serve
41 changes: 7 additions & 34 deletions azimuth/app.py
@@ -9,17 +9,14 @@
from distributed import SpecCluster
from fastapi import APIRouter, Depends, FastAPI, HTTPException
from fastapi.exceptions import RequestValidationError
from pydantic import BaseModel
from pydantic import BaseModel, ValidationError
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.status import (
HTTP_400_BAD_REQUEST,
HTTP_401_UNAUTHORIZED,
HTTP_403_FORBIDDEN,
HTTP_404_NOT_FOUND,
HTTP_422_UNPROCESSABLE_ENTITY,
HTTP_500_INTERNAL_SERVER_ERROR,
HTTP_503_SERVICE_UNAVAILABLE,
)

@@ -31,6 +28,7 @@
from azimuth.types import DatasetSplitName, ModuleOptions, SupportedModule
from azimuth.utils.cluster import default_cluster
from azimuth.utils.conversion import JSONResponseIgnoreNan
from azimuth.utils.exception_handlers import handle_validation_error
from azimuth.utils.logs import set_logger_config
from azimuth.utils.validation import assert_not_none

@@ -51,7 +49,6 @@
# conventional HTTP codes.
# This overwrites the default ValidationError response for 422 in the OpenAPI spec.
HTTP_422_UNPROCESSABLE_ENTITY,
HTTP_500_INTERNAL_SERVER_ERROR,
HTTP_503_SERVICE_UNAVAILABLE,
)

@@ -60,23 +57,6 @@ class HTTPExceptionModel(BaseModel):
detail: str


async def handle_validation_error(request: Request, exception: RequestValidationError):
return JSONResponse(
status_code=HTTP_404_NOT_FOUND # for errors in paths, e.g., /dataset_splits/potato
if "path" in (error["loc"][0] for error in exception.errors())
else HTTP_400_BAD_REQUEST, # for other errors like in query params, e.g., pipeline_index=-1
content={"detail": str(exception)},
)


async def handle_internal_error(request: Request, exception: Exception):
# Don't expose this unexpected internal error as that could expose a security vulnerability.
return JSONResponse(
status_code=HTTP_500_INTERNAL_SERVER_ERROR,
content={"detail": "Internal server error"},
)


def get_dataset_split_manager_mapping() -> Dict[DatasetSplitName, Optional[DatasetSplitManager]]:
return _dataset_split_managers

@@ -144,8 +124,6 @@ def start_app(config_path: Optional[str], load_config_history: bool, debug: bool
log.info("🔭 Azimuth starting 🔭")

azimuth_config = load_azimuth_config(config_path, load_config_history)
if azimuth_config.dataset is None:
raise ValueError("No dataset has been specified in the config.")

local_cluster = default_cluster(large=azimuth_config.large_dask_cluster)

@@ -184,8 +162,9 @@ def create_app() -> FastAPI:
default_response_class=JSONResponseIgnoreNan,
responses={code: {"model": HTTPExceptionModel} for code in COMMON_HTTP_ERROR_CODES},
exception_handlers={
ValidationError: handle_validation_error, # for PATCH "/config",
# where we call old_config.copy(update=partial_config, deep=True) ourselves.
RequestValidationError: handle_validation_error,
HTTP_500_INTERNAL_SERVER_ERROR: handle_internal_error,
},
root_path=".", # Tells Swagger UI and ReDoc to fetch the OpenAPI spec from ./openapi.json
# (relative) so it works through the front-end proxy.
@@ -201,16 +180,10 @@ def create_app() -> FastAPI:
from azimuth.routers.model_performance.confidence_histogram import (
router as confidence_histogram_router,
)
from azimuth.routers.model_performance.confusion_matrix import (
router as confusion_matrix_router,
)
from azimuth.routers.model_performance.confusion_matrix import router as confusion_matrix_router
from azimuth.routers.model_performance.metrics import router as metrics_router
from azimuth.routers.model_performance.outcome_count import (
router as outcome_count_router,
)
from azimuth.routers.model_performance.utterance_count import (
router as utterance_count_router,
)
from azimuth.routers.model_performance.outcome_count import router as outcome_count_router
from azimuth.routers.model_performance.utterance_count import router as utterance_count_router
from azimuth.routers.top_words import router as top_words_router
from azimuth.routers.utterances import router as utterances_router
from azimuth.utils.routers import require_application_ready, require_available_model
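This PR relocates the inline validation handler into `azimuth/utils/exception_handlers.py` and registers it for both `RequestValidationError` and pydantic's `ValidationError`. The status-code choice visible in the removed lines can be sketched as a standalone function; this is a plain-Python illustration of that decision only, since the real handler also wraps the result in a `JSONResponse`:

```python
# Sketch of the status-code decision in the (relocated) handle_validation_error:
# pydantic reports each error with a "loc" tuple whose first element names
# where the bad value came from ("path", "query", "body", ...).

HTTP_400_BAD_REQUEST = 400
HTTP_404_NOT_FOUND = 404


def status_for_validation_errors(errors):
    """Return 404 for errors in path parameters (e.g. /dataset_splits/potato)
    and 400 for other locations (e.g. the query parameter pipeline_index=-1)."""
    if "path" in (error["loc"][0] for error in errors):
        return HTTP_404_NOT_FOUND
    return HTTP_400_BAD_REQUEST
```

In the application this runs inside a FastAPI exception handler, which serializes `str(exception)` into the response body.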
51 changes: 31 additions & 20 deletions azimuth/config.py
@@ -11,12 +11,12 @@

import structlog
from jsonlines import jsonlines
from pydantic import BaseSettings, Extra, Field, root_validator, validator
from pydantic import Extra, Field, ValidationError, root_validator, validator

from azimuth.types import AliasModel, DatasetColumn, SupportedModelContract
from azimuth.utils.conversion import md5_hash
from azimuth.utils.exclude_fields_from_cache import exclude_fields_from_cache
from azimuth.utils.openapi import fix_union_types, make_all_properties_required
from azimuth.utils.openapi import AzimuthBaseSettings

log = structlog.get_logger(__file__)
T = TypeVar("T", bound="ProjectConfig")
@@ -87,14 +87,6 @@ class AzimuthValidationError(Exception):
pass


class AzimuthBaseSettings(BaseSettings):
class Config:
@staticmethod
def schema_extra(schema):
fix_union_types(schema)
make_all_properties_required(schema)


class CustomObject(AzimuthBaseSettings):
class_name: str = Field(
...,
@@ -128,8 +120,8 @@ class TemperatureScaling(CustomObject):
] = "azimuth.utils.ml.postprocessing.TemperatureScaling"
temperature: float = Field(1, ge=0, env="TEMP")

@root_validator()
def check_temps(cls, values):
@root_validator
def _check_temps(cls, values):
kwargs = values.get("kwargs", {})
if "temperature" not in kwargs:
kwargs["temperature"] = values.get("temperature", 1)
@@ -144,8 +136,8 @@ class ThresholdConfig(CustomObject):
] = "azimuth.utils.ml.postprocessing.Thresholding"
threshold: float = Field(0.5, ge=0, le=1, env="TH")

@root_validator()
def check_threshold(cls, values):
@root_validator
def _check_threshold(cls, values):
kwargs = values.get("kwargs", {})
if "threshold" not in kwargs:
kwargs["threshold"] = values.get("threshold", 0.5)
@@ -267,7 +259,7 @@ class ProjectConfig(AzimuthBaseSettings):
# Name of the current project.
name: str = Field("New project", exclude_from_cache=True)
# Dataset object definition.
dataset: Optional[CustomObject] = None
dataset: Optional[CustomObject] = Field(None, nullable=True)
# Column names config in dataset
columns: ColumnConfiguration = ColumnConfiguration()
# Name of the rejection class.
@@ -286,7 +278,6 @@ def get_project_hash(self):
self.dict(
include=ProjectConfig.__fields__.keys(),
exclude=exclude_fields_from_cache(self),
by_alias=True,
)
)

@@ -341,7 +332,7 @@ class ModelContractConfig(CommonFieldsConfig):
saliency_layer: Optional[str] = Field(None, nullable=True)

@validator("pipelines", pre=True)
def check_pipeline_names(cls, pipeline_definitions):
def _check_pipeline_names(cls, pipeline_definitions):
# We support both [] and None (null in JSON), and we standardize it to None.
if not pipeline_definitions:
return None
@@ -364,7 +355,6 @@ def get_model_contract_hash(self):
self.dict(
include=ModelContractConfig.__fields__.keys()
- CommonFieldsConfig.__fields__.keys(),
by_alias=True,
)
)

@@ -441,8 +431,8 @@ class AzimuthConfig(
# Reminder: If a module depends on an attribute in AzimuthConfig, the module will be forced to
# include all other configs in its scope.

@root_validator()
def dynamic_language_config_values(cls, values):
@root_validator
def _dynamic_language_config_values(cls, values):
defaults = config_defaults_per_language[values["language"]]
if behavioral_testing := values.get("behavioral_testing"):
neutral_token = behavioral_testing.neutral_token
@@ -482,6 +472,19 @@ def load_last_from_config_history(cls, config_history_path: str) -> Optional["Az
else:
return AzimuthConfigHistory.parse_obj(last_config).config

def get_config_history(self) -> List["AzimuthConfigHistoryWithHash"]:
config_history = []
try:
with jsonlines.open(self.get_config_history_path(), mode="r") as reader:
for item in reader:
try:
config_history.append(AzimuthConfigHistoryWithHash.parse_obj(item))
except ValidationError:
pass
except (FileNotFoundError, ValueError):
pass
return config_history

def log_info(self):
log.info(f"Config loaded for {self.name} with {self.model_contract} as a model contract.")

@@ -527,6 +530,14 @@ class AzimuthConfigHistory(AzimuthBaseSettings):
created_on: str = Field(default_factory=lambda: str(datetime.now(timezone.utc)))


class AzimuthConfigHistoryWithHash(AzimuthConfigHistory):
hash: str = ""

@root_validator(skip_on_failure=True)
def _set_hash(cls, values):
return {**values, "hash": md5_hash(values["config"].dict())}


def load_azimuth_config(config_path: Optional[str], load_config_history: bool) -> AzimuthConfig:
log.info("-------------Loading Config--------------")
cfg = AzimuthConfig.load(config_path, load_config_history)
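The new `get_config_history` reads one JSON object per line and silently skips entries that fail validation, so a single corrupt line does not lose the whole history. A rough stdlib equivalent, with plain `json` standing in for the `jsonlines` reader and a required `config` key standing in for `AzimuthConfigHistoryWithHash.parse_obj`:

```python
import json


def read_config_history(lines):
    """Stdlib sketch of get_config_history's tolerant parse: each line is one
    JSON object; lines that fail to parse, or entries missing the required
    "config" key (a stand-in for pydantic validation), are skipped."""
    history = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            item = json.loads(line)
        except ValueError:
            continue  # mirrors the `except ValidationError: pass` in the diff
        if isinstance(item, dict) and "config" in item:
            history.append(item)
    return history
```

The real method additionally tolerates a missing history file by catching `FileNotFoundError` around the open.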
2 changes: 1 addition & 1 deletion azimuth/modules/base_classes/caching.py
@@ -108,7 +108,7 @@ def _store_data_in_cache(self, result: List[ModuleResponse], indices: List[int])

# Save all that affects caching so it can be used for identification and debugging.
with open(os.path.join(self._cache_effective_arguments), "w") as f:
json.dump(self.get_effective_arguments().no_alias_dict(), f, indent=2)
json.dump(self.get_effective_arguments().dict(), f, indent=2)

@retry(stop_max_attempt_number=5, wait_fixed=0.5)
def _check_cache_internal(self, indices: Optional[List[int]]):
2 changes: 1 addition & 1 deletion azimuth/modules/base_classes/dask_module.py
@@ -99,7 +99,7 @@ def _get_config_scope(self, config) -> ConfigScope:
scoped_config = config.__class__
else:
scoped_config = base.__args__[0]
return cast(ConfigScope, scoped_config.parse_obj(config.dict(by_alias=True)))
return cast(ConfigScope, scoped_config.parse_obj(config.dict()))

def start_task_on_dataset_split(
self, client: Client, dependencies: List["DaskModule"] = None
2 changes: 1 addition & 1 deletion azimuth/modules/base_classes/module.py
@@ -33,7 +33,7 @@ def __init__(
):
mod_options = mod_options or ModuleOptions()
self.mod_options = mod_options
defined_mod_options = set(self.mod_options.no_alias_dict(exclude_defaults=True).keys())
defined_mod_options = set(self.mod_options.dict(exclude_defaults=True).keys())
if not self.required_mod_options.issubset(defined_mod_options):
raise ValueError(f"{self.__class__.__name__} requires {self.required_mod_options}.")
if diff := (defined_mod_options - self.required_mod_options - self.optional_mod_options):
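The `Module.__init__` hunk above validates `mod_options` with set arithmetic: the defined option names must cover `required_mod_options`, and must not stray outside `required_mod_options | optional_mod_options`. A standalone sketch of both checks; the second error message is a guess, since the diff truncates that branch:

```python
def validate_mod_options(defined, required, optional, name="Module"):
    """Mirror of the two checks in Module.__init__: every required option
    must be defined, and anything beyond required/optional is rejected."""
    if not required.issubset(defined):
        raise ValueError(f"{name} requires {required}.")
    if diff := (defined - required - optional):
        # The original raise is cut off in the diff; this message is illustrative.
        raise ValueError(f"{name} received unexpected options: {diff}.")
```

Note the walrus operator (`:=`) requires Python 3.8+, consistent with the PR's move to Python 3.9.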
10 changes: 2 additions & 8 deletions azimuth/modules/model_performance/confidence_binning.py
@@ -11,15 +11,9 @@
from azimuth.dataset_split_manager import DatasetSplitManager
from azimuth.modules.base_classes import DatasetResultModule, FilterableModule
from azimuth.types import DatasetColumn
from azimuth.types.model_performance import (
ConfidenceBinDetails,
ConfidenceHistogramResponse,
)
from azimuth.types.model_performance import ConfidenceBinDetails, ConfidenceHistogramResponse
from azimuth.types.outcomes import ALL_OUTCOMES, OutcomeName
from azimuth.utils.dataset_operations import (
get_confidences_from_ds,
get_outcomes_from_ds,
)
from azimuth.utils.dataset_operations import get_confidences_from_ds, get_outcomes_from_ds
from azimuth.utils.validation import assert_not_none

CONFIDENCE_BINS_COUNT = 20
4 changes: 1 addition & 3 deletions azimuth/modules/model_performance/metrics.py
@@ -14,9 +14,7 @@

from azimuth.config import MetricsConfig, MetricsPerFilterConfig
from azimuth.modules.base_classes import AggregationModule, FilterableModule
from azimuth.modules.model_performance.confidence_binning import (
ConfidenceHistogramModule,
)
from azimuth.modules.model_performance.confidence_binning import ConfidenceHistogramModule
from azimuth.plots.ece import make_ece_figure
from azimuth.types import DatasetColumn, DatasetFilters
from azimuth.types.model_performance import (
6 changes: 1 addition & 5 deletions azimuth/modules/model_performance/outcome_count.py
@@ -21,11 +21,7 @@
OutcomeCountPerThresholdValue,
)
from azimuth.types.outcomes import ALL_OUTCOMES, OutcomeName
from azimuth.types.tag import (
ALL_DATA_ACTION_FILTERS,
SMART_TAGS_FAMILY_MAPPING,
SmartTag,
)
from azimuth.types.tag import ALL_DATA_ACTION_FILTERS, SMART_TAGS_FAMILY_MAPPING, SmartTag
from azimuth.utils.dataset_operations import get_outcomes_from_ds
from azimuth.utils.ml.model_performance import (
sorted_by_utterance_count,
7 changes: 1 addition & 6 deletions azimuth/modules/perturbation_testing/perturbation_testing.py
@@ -12,12 +12,7 @@
from azimuth.modules.base_classes import DatasetResultModule
from azimuth.modules.base_classes.dask_module import Worker
from azimuth.modules.model_contract_task_mapping import model_contract_task_mapping
from azimuth.types import (
DatasetColumn,
DatasetSplitName,
ModuleOptions,
SupportedMethod,
)
from azimuth.types import DatasetColumn, DatasetSplitName, ModuleOptions, SupportedMethod
from azimuth.types.perturbation_testing import (
PRETTY_PERTURBATION_TYPES,
PerturbationTestClass,
13 changes: 4 additions & 9 deletions azimuth/routers/app.py
@@ -24,6 +24,7 @@
DatasetInfoResponse,
PerturbationTestingSummary,
StatusResponse,
UtteranceCountPerDatasetSplit,
)
from azimuth.types.perturbation_testing import (
PerturbationTestingMergedResponse,
@@ -44,7 +45,6 @@
require_available_model,
require_pipeline_index,
)
from azimuth.utils.validation import assert_not_none

router = APIRouter()

@@ -88,26 +88,21 @@ def get_dataset_info(
):
eval_dm = dataset_split_managers.get(DatasetSplitName.eval)
training_dm = dataset_split_managers.get(DatasetSplitName.train)
dm = assert_not_none(eval_dm or training_dm)

return DatasetInfoResponse(
project_name=config.name,
class_names=dm.get_class_names(),
data_actions=ALL_DATA_ACTION_FILTERS,
smart_tags=ALL_SMART_TAG_FILTERS,
eval_class_distribution=eval_dm.class_distribution().tolist()
if eval_dm is not None
else [],
train_class_distribution=training_dm.class_distribution().tolist()
if training_dm is not None
else [],
startup_tasks={k: v.status() for k, v in startup_tasks.items()},
model_contract=config.model_contract,
prediction_available=predictions_available(config),
perturbation_testing_available=perturbation_testing_available(config),
available_dataset_splits=AvailableDatasetSplits(
eval=eval_dm is not None, train=training_dm is not None
),
utterance_count_per_dataset_split=UtteranceCountPerDatasetSplit(
eval=eval_dm and eval_dm.num_rows, train=training_dm and training_dm.num_rows
),
similarity_available=similarity_available(config),
postprocessing_editable=None
if config.pipelines is None
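The new `utterance_count_per_dataset_split` relies on the `x and x.num_rows` idiom: Python's `and` returns its first falsy operand, so a missing split manager (`None`) propagates as `None` instead of raising `AttributeError`. A minimal illustration with a stand-in class (`FakeSplitManager` is not from the codebase):

```python
class FakeSplitManager:
    """Stand-in for DatasetSplitManager, exposing only num_rows."""

    def __init__(self, num_rows):
        self.num_rows = num_rows


eval_dm = FakeSplitManager(3)  # evaluation split is loaded
training_dm = None             # training split is absent

counts = {
    "eval": eval_dm and eval_dm.num_rows,           # truthy manager -> 3
    "train": training_dm and training_dm.num_rows,  # None short-circuits -> None
}
```

This keeps the response shape stable whether or not each split is configured, matching the PR's broader support for a null dataset.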