
lib refector and add lifelong learning #72

Merged
merged 5 commits into from
Jun 4, 2021

Conversation

JoeyHwong-gk
Contributor

@JoeyHwong-gk JoeyHwong-gk commented May 31, 2021

Issue #46 by JoeyHwong

  • Lib structural decoupling
  • Interface unification
  • Add global log and fileops
  • Independent algorithm module
  • Add lifelong learning

@kubeedge-bot kubeedge-bot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label May 31, 2021
COPY ./lib/requirements.txt /home
RUN pip install -r /home/requirements.txt
RUN pip install git+https://github.com/joeyhwong-gk/uvicorn

Why are we using a personal pip package?

Contributor Author

Federated learning usually sends large weight data to the aggregation service, but uvicorn does not support changing the size limit. A patch has been added.

Contributor Author

We have contacted upstream to fix the problem, and I will switch back to the official version later.


Good. For now, can we apply the patch with a command instead of a personal fork link?
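
One command-free possibility, as a hedged sketch only: assuming the limit in question is uvicorn's default websocket message size (the stock ws_max_size option, 16 MB), the aggregation service could raise it when starting the server instead of installing a patched fork. The app and port values below are placeholders.

import uvicorn
from fastapi import FastAPI

app = FastAPI()  # stand-in for the aggregation service app

if __name__ == "__main__":
    # ws_max_size is a standard uvicorn option; 500 MB is an arbitrary example
    # large enough for typical federated-learning weight payloads.
    uvicorn.run(app, host="0.0.0.0", port=8000, ws_max_size=500 * 1024 * 1024)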

@@ -11,21 +11,38 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os

Better to add a new empty line after the copyright declaration.

Contributor Author

fix

# use original dataset url,
# see https://github.com/kubeedge/sedna/issues/35
original_dataset_url = Context.get_parameters('original_dataset_url')
return original_dataset_url + os.path.sep + dataset_url

Why does the example code need to know about this path handling?


@llhuii llhuii Jun 2, 2021

I mean this original_dataset_url should not appear in user code (here, the example code).

# use original dataset url,
# see https://github.com/kubeedge/sedna/issues/35
original_dataset_url = Context.get_parameters('original_dataset_url')
return original_dataset_url + os.path.sep + dataset_url

Same question here: why does the example code need to know this original_dataset_url?

@@ -76,7 +76,7 @@ EOF
Note the setting of the following parameters, which have to same as the script [little_model.py](/examples/joint_inference/helmet_detection_inference/little_model/little_model.py):
- hardExampleMining: set hard example algorithm from {IBT, CrossEntropy} for inferring in edge side.
- video_url: set the url for video streaming.
- all_examples_inference_output: set your output path for the inference results.
- all_examples_inference_output: set your output path for the inference results, and note that the root path has to be /home/data.

Why does it have to be /home/data?

Contributor Author

fix

self.weights = None

@abc.abstractmethod
def aggregate(self, weights, size=0):

What does the size parameter of aggregate mean?

Contributor Author

sample number of each loop


What does "sample number of each loop" mean? Does size equal sizeof(weights)?
Please add a comment about this.

Contributor Author

The FedAvg algorithm averages new weights based on the number of samples in the batch and the weights produced by training.
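
A minimal sketch of this weighted averaging, assuming size is the per-client sample count and weights is a list of per-layer arrays (illustrative only, not the PR's final implementation):

import numpy as np

def fed_avg_update(global_weights, client_weights, size, total_samples):
    """Blend one client's weights into the global weights, scaled by its sample share."""
    updates = []
    for idx, client_layer in enumerate(client_weights):
        old_layer = np.array(global_weights[idx])
        # Move the global layer toward the client layer in proportion to size / total_samples.
        new_layer = (np.array(client_layer) - old_layer) * (size / total_samples) + old_layer
        updates.append(new_layer)
    return updates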

"""


@ClassFactory.register(ClassType.FLAGG)

Better to rename FLAGG to FL_AGG; FLAGG is easy to misread as FLAG plus G.

Contributor Author

fix

for inx, weight in enumerate(weights):
old_weight = self.weights[inx]
row_weight = (np.array(weight) - old_weight) * (size / total_sample) + old_weight
self.weights = deepcopy(updates)

deepcopy here?

Contributor Author

Prevents developer-defined post-processing from modifying the original weights.


deepcopy is expensive, especially for the weights.
In what case would post-processing modify them? I think that is a rare case; even so, leave the copy to the developer.

Contributor Author

fix

updates = []
for inx, weight in enumerate(weights):
old_weight = self.weights[inx]
row_weight = (np.array(weight) - old_weight) * (size / total_sample) + old_weight

row_weight is not used anywhere.


updates is always empty.

Contributor Author

fix

# See the License for the specific language governing permissions and
# limitations under the License.

"""Hard Example Mining Algorithms for Cloud Migration Judgment"""

what does Cloud Migration mean here?

Contributor Author

fix

except Exception as err:
self.log.error(err)
with open(FileOps.join_path(self.config.model_url, "lc_msg.json"), 'w') as f_out:
json.dump(message, f_out)

Just logging it is enough; no need to dump it into the model URL.

Contributor Author

fix

}
)
else:
ckpt_model_url = FileOps.remove_path_prefix(self.config.model_url,

ckpt/pb are TensorFlow-backend specific.

Contributor Author

Adapting to CRD


Then they should appear in the backend directory, not in core/base.py.

s.connect(("8.8.8.8", 80))
ip = s.getsockname()[0]
except:
pass

When an exception occurs, s.close() still needs to be called.

Contributor Author

fix
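
A minimal sketch of the fix, closing the socket in a finally block so it is released even when connect fails (the function name and fallback value are illustrative):

import socket

def get_host_ip():
    """Best-effort local IP lookup by opening a UDP socket toward a public address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ip = "127.0.0.1"  # fallback when the lookup fails
    try:
        s.connect(("8.8.8.8", 80))
        ip = s.getsockname()[0]
    except OSError:
        pass
    finally:
        s.close()  # always release the socket
    return ip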


log_config = {
'DEBUG': {
'level': 10,

We could just use Python's built-in logging level numbers.

Contributor Author

fix
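
For reference, the standard library already defines these numbers, so a custom level table may be unnecessary (a sketch, not the PR's final code; the logger name is illustrative):

import logging

# logging maps names to numbers out of the box:
# DEBUG=10, INFO=20, WARNING=30, ERROR=40, CRITICAL=50.
level = logging.getLevelName("INFO")        # -> 20
logging.getLogger("sedna").setLevel(level)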


self.logger.addHandler(self.handler)
self.logLevel = 'DEBUG'
self.logger.setLevel(logging.DEBUG)

The default logLevel should be INFO or WARN, not DEBUG.

Contributor Author

fix

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid

Import order for clarity:

standard packages
empty new line
third-party packages
empty new line
project-local packages

Contributor Author

fix
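
A small illustration of the suggested grouping (module names are only examples):

# standard library
import os
import uuid

# third-party packages
import numpy as np

# project-local packages
from sedna.common.config import Context  # assumed project import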

json_data.update({"data": x})
_url = "{}/{}".format(self.endpoint, "predict")
try:
res = http_request(url=_url, method="POST", json=json_data)

just

return http_request(url=_url, method="POST", json=json_data)

Contributor Author

fix


def inference(self, x, **kwargs):
"""Use the remote big model server to inference."""
json_data = deepcopy(kwargs)

same question: why deepcopy?

Contributor Author

It will be modified below; json_data.update would otherwise mutate the caller's kwargs.

"""Use the remote big model server to inference."""
json_data = deepcopy(kwargs)
json_data.update({"data": x})
_url = "{}/{}".format(self.endpoint, "predict")

This _url is fixed; it can be initialized once.

Contributor Author

fix
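
A sketch of the suggestion, building the URL once in __init__ (class and attribute names follow the snippet above; http_request is the lib helper used there, its import is omitted):

from copy import deepcopy

class ModelClient:  # illustrative wrapper around the remote big-model service
    def __init__(self, endpoint):
        self.endpoint = endpoint
        # The predict URL never changes, so build it once here.
        self._predict_url = "{}/{}".format(self.endpoint, "predict")

    def inference(self, x, **kwargs):
        json_data = deepcopy(kwargs)  # avoid mutating the caller's kwargs
        json_data.update({"data": x})
        return http_request(url=self._predict_url, method="POST", json=json_data)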


def check_server_status(self):
try:
http_request(url=self.endpoint, method="GET", json={})

json={} can be omitted

Contributor Author

fix

@llhuii

llhuii commented Jun 1, 2021

Please fix CI first.

@JoeyHwong-gk JoeyHwong-gk changed the title from lib refector to lib refector and add lifelong learning Jun 1, 2021
@JoeyHwong-gk JoeyHwong-gk marked this pull request as draft June 2, 2021 01:27
@kubeedge-bot kubeedge-bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Jun 2, 2021
## Motivation


The edge-cloud collaborative lifelong learning combines the attributes of multi-task learning and incremental learning at the edge to deal with the problem of heterogeneous data and small samples in complex scenarios. Rely on the memory of the knowledge base to adapt to new situations, thus effectively solving the small sample problem of traditional machine learning.


Motivation is important and thus careful polishing would be needed. My suggestion is to modify the writing a little bit like "At present, edge-cloud synergy machine learning is confronted with the challenge of heterogeneous data distributions in complex scenarios and small samples on the edge. The edge-cloud synergy lifelong learning is accordingly proposed: 1) In order to learn with shared knowledge between historical scenarios, the scheme is essentially the combination of another two learning schemes, i.e., multi-task learning and incremental learning; 2) The cloud knowledge base in lifelong learning empowers the scheme with memory ability, which helps to adapt historical knowledge to new and unseen situations on the edge. Joining the forces of multi-task learning, incremental learning and the knowledge base, the lifelong learning scheme seeks to fundamentally overcome the above challenges of edge-cloud synergy machine learning."

Contributor Author

fix

class ClassFactory(object):
"""
A Factory Class to manage all class need to register with config.
通过装饰器的方式来注册模块--只需要在上面加上一行ClassFactory.register(<class type>)即可完成注册。

Please fix these Chinese comments.

Contributor Author

fix

self.logger.log(log_level, msg, *args, **kwargs)


sednaLogger = Logger().logger
@llhuii llhuii Jun 3, 2021

Rename sednaLogger to LOGGER or logger, without the sedna word.

And since only Logger.logger is used here, the other functions of Logger look useless.

Contributor Author

fix

log_sum += class_data * math.log(class_data)
confidence_score = 1 + 1.0 * log_sum / math.log(
len(infer_result))
return confidence_score >= self.threshold_cross_entropy

Fix it as done by #68.

Contributor Author

fix
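
For readers of the snippet above: with $p_i$ the per-class scores and $N$ the number of classes, the score being computed is the normalized-entropy confidence

\text{confidence} = 1 + \frac{1}{\ln N}\sum_{i=1}^{N} p_i \ln p_i = 1 - \frac{H(p)}{\ln N}, \qquad H(p) = -\sum_{i=1}^{N} p_i \ln p_i,

which is then compared against threshold_cross_entropy.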

|Lifelong-learning-job Reported State Updated | The controller appends the reported status of the job by LC in the cloud. |

### Details of api between GM(cloud) and LC(edge)
1. GM(downstream controller) syncs the job info to LC:
@llhuii llhuii Jun 3, 2021

This section is the same as the one for the incremental learning job; please use a link instead of a copy.

task_mining=None,
task_remodeling=None,
inference_integrate=None,

Member

Why is there an empty line here?

Contributor Author

fix

res = model_obj.train(train_data=task.samples, **kwargs)
if callback:
res = callback(model_obj, res)
model_path = model_obj.save(model_name=f"{task.entry}.pkl")
Member

Why does the model name end with the fixed string "pkl"? Other model formats should be considered.

Contributor Author

fix

task.result = pred
task.model = m
tasks.append(task)
res = self.inference_integrate(tasks)
Member

This name does not make the function easy to understand.

metrics_param = {"average": "micro"}

data.x['pred'] = result
data.x['y'] = data.y
Member

It may be easier to understand if you use keys named "pred_y" and "real_y".

Contributor Author

fix

Comment on lines 246 to 285
if metrics:
if callable(metrics):
m_name = getattr(metrics, '__name__', "mtl_eval")
m_dict = {
m_name: metrics
}
elif isinstance(metrics, (set, list)):
for inx, m in enumerate(metrics):
m_name = getattr(m, '__name__', f"mtl_eval_{inx}")
if isinstance(m, str):
m = getattr(sk_metrics, m)
if not callable(m):
continue
m_dict[m_name] = m
elif isinstance(metrics, str):
m_dict = {
metrics: getattr(sk_metrics, metrics, sk_metrics.log_loss)
}
elif isinstance(metrics, dict):
for k, v in metrics.items():
if isinstance(v, str):
v = getattr(sk_metrics, v)
if not callable(v):
continue
m_dict[k] = v
Member

There are so many "if else" branches that it is not intuitive what these lines do. You should add some comments or make the code clearer.

Contributor Author

fix
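
A sketch of one way to make the intent clearer: factor the branches into a helper that normalizes any metric spec into a dict of name -> callable. Names and the sk_metrics fallback follow the snippet above; this is not the PR's final code.

from sklearn import metrics as sk_metrics

def _resolve(m, default=sk_metrics.log_loss):
    """Turn a metric given by name into an sklearn.metrics callable; None if impossible."""
    if isinstance(m, str):
        m = getattr(sk_metrics, m, default)
    return m if callable(m) else None

def normalize_metrics(metrics):
    """Normalize callable / str / list / set / dict metric specs into {name: callable}."""
    if callable(metrics):
        return {getattr(metrics, "__name__", "mtl_eval"): metrics}
    if isinstance(metrics, str):
        return {metrics: _resolve(metrics)}
    if isinstance(metrics, (set, list)):
        named = ((getattr(m, "__name__", f"mtl_eval_{inx}"), _resolve(m))
                 for inx, m in enumerate(metrics))
        return {name: m for name, m in named if m}
    if isinstance(metrics, dict):
        resolved = {k: _resolve(v) for k, v in metrics.items()}
        return {k: v for k, v in resolved.items() if v}
    return {}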

return {
name: metric(data.y, result, **metrics_param)
for name, metric in m_dict.items()
}, tasks_detail
Member

In this long function, either add comments for each segment or break it down into smaller functions.

Contributor Author

fix

samples = BaseDataSource(data_type=d_type)
samples.x = x_data
samples.y = y_data
return tasks, task_index, samples
Member

It's best to write down the data structures of these objects in comments.

Contributor Author

fix

Comment on lines 48 to 56
allocations = meta_attr.apply(
lambda x: self.task_extractor.get(
"_".join(
map(lambda y: str(x[y]).replace("_", "-").replace(" ", ""),
self.attr_filed)
),
0),
axis=1).values.tolist()
Member

These lines look strange; why use such hard-coded "replace" calls to do something we cannot tell from the code?

Contributor Author

Avoid errors caused by special characters in some metadata.
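
Roughly, that expression builds a lookup key from the task-defining attribute values and strips characters that would clash with the "_" separator used in the key. A hedged restatement (attribute names and values are made up):

def task_key(row, attr_fields):
    """Join task-defining attribute values into one key; '_' and spaces are
    replaced/removed so they cannot collide with the '_' separator."""
    parts = [str(row[field]).replace("_", "-").replace(" ", "") for field in attr_fields]
    return "_".join(parts)

# Example with hypothetical metadata:
print(task_key({"season": "Cool season", "building": "office_A"},
               ["season", "building"]))  # -> "Coolseason_office-A"

The extracted key is then looked up in the task extractor, with 0 as the default allocation.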

@llhuii

llhuii commented Jun 3, 2021

Please squash/rebase these commits into a clean state.

Signed-off-by: JoeyHwong <joeyhwong@gknow.cn>
Signed-off-by: JoeyHwong <joeyhwong@gknow.cn>
Signed-off-by: JoeyHwong <joeyhwong@gknow.cn>
Signed-off-by: JoeyHwong <joeyhwong@gknow.cn>
spec:
nodeName: "edge-node"
containers:
- image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.1.0

tag => v0.3.0

Contributor Author

fix

trainSpec:
template:
spec:
nodeName: "edge-node"

Please use $WORKER_NODE instead of edge-node.

Contributor Author

fix

spec:
nodeName: "edge-node"
containers:
- image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.1.0

ditto, tag v0.1. => v0.3.0

Contributor Author

fix

evalSpec:
template:
spec:
nodeName: "edge-node"

ditto, fix nodeName



### Effect Display
in this example, false and failed detections occur at stage of inference before lifelong learning, after lifelong learning,

Suggested change
in this example, false and failed detections occur at stage of inference before lifelong learning, after lifelong learning,
In this example, **false** and **failed** detections occur at stage of inference before lifelong learning.

Contributor Author

fix

Contributor Author

fix


### Effect Display
in this example, false and failed detections occur at stage of inference before lifelong learning, after lifelong learning,
Greatly improve the precision and accuracy of the dataset.

fill the xx%

Suggested change
Greatly improve the precision and accuracy of the dataset.
After lifelong learning, the precision and accuracy of the dataset have been improved by xx%.

Contributor Author

fix

Contributor Author

fix

) and len(task_info_res):
task_info_res = list(task_info_res)[0]
final_res.append(task_info_res)
else:

This else branch can be merged into the if branch.

Contributor Author

fix

valid_data=None,
post_process=None,
**kwargs):
self.log.warn("Joint inference Experiment should offer model file")

  1. Why does joint-inference have train?
  2. What does Experiment mean?
  3. Why warn at the very start of train?


class JointInference(JobBase):
"""
Joint inference Experiment

ditto, fix Experiment word

Contributor Author

fix

self.cloud = ModelClient(service_name=self.job_name,
host=self.remote_ip, port=self.port)

def train(self, train_data,

Much duplicated code with TSBigModelService.train.

Contributor Author

fix

@JoeyHwong-gk JoeyHwong-gk force-pushed the lib-refactor branch 2 times, most recently from fc1f7d8 to 1edef5f Compare June 4, 2021 03:54
os.environ['BACKEND_TYPE'] = 'KERAS'


class Estimator:

Please add a comment to describe the function of Estimator

Contributor Author

Estimator, as an input parameter of the sedna core, represents a functional object that exposes a training callback.
These parameters should be described in the README of the lib, not in the example.


OK, so would it be better to call it XXXEstimator?

Contributor Author

This is a great piece of advice and I hope to fix it in the next version.
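
For context, a minimal sketch of the Estimator contract described above: a plain object exposing training/inference callbacks that the sedna core drives. The method names are assumptions based on the example code, not a formal spec.

class Estimator:
    """Hypothetical user-supplied wrapper around an ML model."""

    def __init__(self, **hyperparams):
        self.model = None
        self.hyperparams = hyperparams

    def train(self, train_data, valid_data=None, **kwargs):
        """Fit the underlying model; called back by the sedna core."""
        ...

    def predict(self, data, **kwargs):
        """Run inference with the trained model."""
        ...

    def save(self, model_path):
        """Persist the model so the framework can register/upload it."""
        ...

    def load(self, model_path):
        """Restore a previously saved model."""
        ...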



if __name__ == '__main__':
KBServer(host="127.0.0.1").start()

Is 127.0.0.1 right? Listening on 127.0.0.1 will prevent the train worker from accessing the KB server.

Contributor Author

fix
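
A one-line sketch of the fix, binding to all interfaces so train workers on other nodes can reach the KB server (the class name follows the snippet above; 0.0.0.0 is the usual choice, or make the host configurable):

if __name__ == '__main__':
    # Listen on all interfaces instead of loopback only.
    KBServer(host="0.0.0.0").start()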

@llhuii

llhuii commented Jun 4, 2021

/lgtm

Let's improve this code by addressing the following (but not limited to these) issues in the next one or two months:

  1. The KB server needs its own standalone code directory instead of being placed in the lib directory
  2. The KB API needs to be reviewed again (such as /file/download, /file/upload)
  3. The multi-task learning lib code needs docs and code comments
  4. Remove the scikit-learn requirement from the federated-learning surface-defect-detection example
  5. Reduce the lifelong example image size
  6. Code bugs, doc issues

@kubeedge-bot kubeedge-bot added the lgtm Indicates that a PR is ready to be merged. label Jun 4, 2021
Follow the [Sedna installation document](/docs/setup/install.md) to install Sedna.

### Prepare Dataset
In this example, you can use [ASHRAE Global Thermal Comfort Database II](https://datadryad.org/stash/dataset/doi:10.6078/D1F671) to initial lifelong learning Knowledgebase .

Suggested change
In this example, you can use [ASHRAE Global Thermal Comfort Database II](https://datadryad.org/stash/dataset/doi:10.6078/D1F671) to initial lifelong learning Knowledgebase .
In this example, you can use [ASHRAE Global Thermal Comfort Database II](https://datadryad.org/stash/dataset/doi:10.6078/D1F671) to initial lifelong learning job.

Contributor Author

fix

Signed-off-by: JoeyHwong <joeyhwong@gknow.cn>
@kubeedge-bot kubeedge-bot removed the lgtm Indicates that a PR is ready to be merged. label Jun 4, 2021
@llhuii

llhuii commented Jun 4, 2021

/lgtm
/approve

@kubeedge-bot kubeedge-bot added the lgtm Indicates that a PR is ready to be merged. label Jun 4, 2021
@kubeedge-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: llhuii

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubeedge-bot kubeedge-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 4, 2021
@kubeedge-bot kubeedge-bot merged commit 9411a2c into kubeedge:main Jun 4, 2021