This repository has been archived by the owner on Jan 2, 2024. It is now read-only.

Problem of version compatibility even with Develop mode on #743

Closed · FlorianJacta opened this issue Aug 30, 2023 · 1 comment
@FlorianJacta
Member

Description

Changing the configuration can break Taipy Core even in Develop mode: renaming the function used by a task raises an error on the next run.

How to reproduce

  • Run the following code:
from taipy import Config, Scope, Frequency

import datetime as dt

import taipy as tp
import pandas as pd

PATH_CSV = 'dataset.csv'
PATH_PARQUET = 'dataset.parquet'

def transform(csv, parquet, pickle):
    print("     Cleaning data")
    return csv, 5, "hello", dt.datetime.now()

## Input data nodes
csv_cfg = Config.configure_data_node(id="csv", storage_type="csv", path=PATH_CSV)  # scope=Scope.GLOBAL,
parquet_cfg = Config.configure_data_node(id="parquet", scope=Scope.CYCLE, storage_type="parquet", path=PATH_PARQUET)
pickle_cfg = Config.configure_data_node(id="pickle")


## Remaining data nodes
data_out_cfg = Config.configure_data_node(id="data_out")
int_cfg = Config.configure_data_node(id="int")
string_cfg = Config.configure_data_node(id="string")
date_cfg = Config.configure_data_node(id="date")

# Task config objects
transform_task_cfg = Config.configure_task(id="transform",
                                            function=transform,
                                            input=[csv_cfg,parquet_cfg,pickle_cfg],
                                            output=[data_out_cfg, int_cfg, string_cfg, date_cfg],
                                            skippable=True)


# Configure our scenario config.
scenario_cfg = Config.configure_scenario(id="scenario", task_configs=[transform_task_cfg], frequency=Frequency.MONTHLY)


if __name__ == "__main__":
    tp.Core().run()

    scenario = tp.create_scenario(config=scenario_cfg)
    data_pickle = pd.DataFrame({"Hello": [1, 2, 3], "World": [4, 5, 6]})
    scenario.pickle.write(data_pickle)
    data_csv = pd.DataFrame({"Hi": ["red", "step", 'true'], "World": [None, 5, 6]})
    data_csv.to_csv(PATH_CSV)
    data_parquet = pd.DataFrame({"Date":[dt.datetime(2021, 1, 1), dt.datetime(2021, 1, 2), dt.datetime(2021, 1, 3)], "Value": [1, 2, 3]})
    data_parquet.to_parquet(PATH_PARQUET)

    print(scenario.pickle.read())
  • Rename the function and run the script again. The only changes relative to the first script are:
def other_function(csv, parquet, pickle):
    print("     Cleaning data")
    return csv, 5, "hello", dt.datetime.now()

# Task config objects
transform_task_cfg = Config.configure_task(id="transform",
                                           function=other_function,
                                           input=[csv_cfg, parquet_cfg, pickle_cfg],
                                           output=[data_out_cfg, int_cfg, string_cfg, date_cfg],
                                           skippable=True)
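Why renaming the function trips version management at all: the task configuration references its function by module and name, so a rename makes the stored configuration and the current one differ. The following is a conceptual, stdlib-only sketch of that mechanism (a hypothetical `config_fingerprint` helper, not Taipy's actual implementation):

```python
# Hypothetical sketch: detect a config change by fingerprinting the
# configuration, including each task's function (module-qualified name).
import hashlib
import json

def config_fingerprint(task_config: dict) -> str:
    """Hash the JSON-serialized config; any change (such as a renamed
    function) yields a different fingerprint."""
    payload = json.dumps(task_config, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

old_cfg = {"id": "transform", "function": "__main__.transform"}
new_cfg = {"id": "transform", "function": "__main__.other_function"}

# Renaming the function changes the fingerprint, so the stored version
# and the current config no longer match.
assert config_fingerprint(old_cfg) != config_fingerprint(new_cfg)
```

This is only meant to illustrate why the rename is visible to version management; the bug is that development mode is supposed to tolerate exactly this kind of change.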

Expected behavior
This shouldn't raise any error, since Taipy Core is running in development mode, which is meant to discard the previous run's version and entities.
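The expected development-mode behavior can be sketched with a hypothetical in-memory version store (not Taipy's API): on each run, the previous development version is dropped before the new configuration is registered, so a renamed function never conflicts with stale state.

```python
# Hedged sketch of the EXPECTED development-mode behavior, using a
# hypothetical DevVersionStore class (not part of Taipy).
class DevVersionStore:
    def __init__(self):
        self.versions = {}  # version_id -> config dict

    def run(self, config: dict, version_id: str = "development"):
        # Development mode: drop the old development version entirely...
        self.versions.pop(version_id, None)
        # ...then register the current config as the new dev version.
        self.versions[version_id] = config
        return self.versions[version_id]

store = DevVersionStore()
store.run({"task": "transform", "function": "transform"})
cfg = store.run({"task": "transform", "function": "other_function"})
assert cfg["function"] == "other_function"  # no compatibility error
```

Under this model, the second run with `other_function` simply replaces the first, which is what the reporter expects from Taipy Core's development mode.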


Runtime environment
Taipy 3.0 develop

@jrobinAV
Member

Thank you @FlorianJacta. We will have a look at it. Probably the next sprint.

trgiangdo added a commit referencing this issue on Sep 13, 2023 (branch …ot-load):
Fix/#743 - Clean entities in development mode should not load all entities

trgiangdo added a commit referencing this issue on Sep 13, 2023 (branch …-manage-version):
fix: the Core._orchestrator is now only built at run(), after managing the version
3 participants