lib refactor and add lifelong learning #72
Conversation
COPY ./lib/requirements.txt /home
RUN pip install -r /home/requirements.txt
RUN pip install git+https://github.com/joeyhwong-gk/uvicorn
Why use a personal pip package?
Federated learning usually sends large weight payloads to the aggregation service, but uvicorn does not support changing the request size limit. A patch has been added.
We have asked upstream to fix the problem, and I will switch back to the official version later.
Good. For now, can we apply the patch with a command instead of depending on a personal fork?
@@ -11,21 +11,38 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
Better to add a new empty line after the copyright declaration.
fix
# use original dataset url,
# see https://github.com/kubeedge/sedna/issues/35
original_dataset_url = Context.get_parameters('original_dataset_url')
return original_dataset_url + os.path.sep + dataset_url
Why does example code need to know this path handling?
I mean this original_dataset_url should not appear in user code (here, example code).
# use original dataset url,
# see https://github.com/kubeedge/sedna/issues/35
original_dataset_url = Context.get_parameters('original_dataset_url')
return original_dataset_url + os.path.sep + dataset_url
Same question here: why does the example code need to know this original_dataset_url?
@@ -76,7 +76,7 @@ EOF
Note the setting of the following parameters, which have to be the same as the script [little_model.py](/examples/joint_inference/helmet_detection_inference/little_model/little_model.py):
- hardExampleMining: set hard example algorithm from {IBT, CrossEntropy} for inferring in edge side.
- video_url: set the url for video streaming.
- all_examples_inference_output: set your output path for the inference results.
- all_examples_inference_output: set your output path for the inference results, and note that the root path has to be /home/data.
Why does it have to be /home/data?
fix
self.weights = None

@abc.abstractmethod
def aggregate(self, weights, size=0):
What does the size parameter of aggregate mean?
The sample number of each loop.
What does "sample number of each loop" mean? Does size equal sizeof(weights)? Please add a comment about this.
The FedAvg algorithm averages new weights based on the number of samples in the batch and the weights generated by training.
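The weighted-average update described above can be sketched as follows. This is a minimal illustration, not the Sedna implementation; the names `total_sample`, `size`, and `aggregate` follow the snippet under review, and it also shows the collected `updates` list that the reviewed code left empty.

```python
def aggregate(global_weights, client_weights, size, total_sample):
    # FedAvg-style update: shift each global weight toward the client's
    # weight in proportion to the client's share of all samples.
    # Each computed layer must be collected in `updates`; the reviewed
    # snippet computed row_weight but never appended it.
    updates = []
    for inx, weight in enumerate(client_weights):
        old_weight = global_weights[inx]
        row_weight = old_weight + (weight - old_weight) * (size / total_sample)
        updates.append(row_weight)
    return updates

# usage: a client holding half of all samples moves each weight halfway
new_w = aggregate([0.0, 2.0], [1.0, 4.0], size=5, total_sample=10)
# -> [0.5, 3.0]
```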
"""


@ClassFactory.register(ClassType.FLAGG)
Better to rename FLAGG to FL_AGG; FLAGG is easy to misread as FLAG plus G.
fix
for inx, weight in enumerate(weights):
    old_weight = self.weights[inx]
    row_weight = (np.array(weight) - old_weight) * (size / total_sample) + old_weight
self.weights = deepcopy(updates)
Why deepcopy here?
It prevents developer-defined post-processing from modifying the original weights.
deepcopy is expensive, especially for weights. In what case would post-processing modify it? I think that is a rare case; even so, leave the copy to the developer.
fix
updates = []
for inx, weight in enumerate(weights):
    old_weight = self.weights[inx]
    row_weight = (np.array(weight) - old_weight) * (size / total_sample) + old_weight
row_weight is not used anywhere.
updates is always empty.
fix
# See the License for the specific language governing permissions and
# limitations under the License.

"""Hard Example Mining Algorithms for Cloud Migration Judgment"""
What does "Cloud Migration" mean here?
fix
lib/sedna/core/base.py
Outdated
except Exception as err:
    self.log.error(err)
with open(FileOps.join_path(self.config.model_url, "lc_msg.json"), 'w') as f_out:
    json.dump(message, f_out)
Just logging is enough; no need to dump into the model url.
fix
lib/sedna/core/base.py
Outdated
    }
)
else:
    ckpt_model_url = FileOps.remove_path_prefix(self.config.model_url,
ckpt/pb are TensorFlow-backend specific.
This is for adapting to the CRD.
Then they should appear in the backend directory, not in core/base.py.
    s.connect(("8.8.8.8", 80))
    ip = s.getsockname()[0]
except:
    pass
When an exception occurs, s.close() needs to be called.
fix
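A sketch of the suggested fix: a with-block guarantees the socket is closed even when connect() raises, which is the leak in the original except/pass version. The fallback value is an assumption for illustration.

```python
import socket

def get_local_ip():
    # Context manager calls s.close() in all cases, including when
    # connect() raises; the original except/pass leaked the socket.
    ip = "127.0.0.1"  # assumed fallback when no route is available
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            # UDP connect performs only a route lookup; no packets are sent.
            s.connect(("8.8.8.8", 80))
            ip = s.getsockname()[0]
    except OSError:
        pass
    return ip
```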
lib/sedna/common/log.py
Outdated

log_config = {
    'DEBUG': {
        'level': 10,
We could just use the Python logging level numbers.
fix
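The stdlib already defines these numbers, so a hand-written level table duplicates it; a minimal sketch:

```python
import logging

# logging already maps names to numbers, so a custom
# {'DEBUG': {'level': 10}, ...} table is redundant.
# getLevelName() resolves a standard name to its number.
level = logging.getLevelName('DEBUG')  # 10 for the standard names

logger = logging.getLogger("sedna")
logger.setLevel(level)
```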
lib/sedna/common/log.py
Outdated

self.logger.addHandler(self.handler)
self.logLevel = 'DEBUG'
self.logger.setLevel(logging.DEBUG)
The default log level should be INFO or WARN, not DEBUG.
fix
lib/sedna/service/server/base.py
Outdated
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
Import order for clarity: standard packages, then an empty line, then third-party packages, then an empty line, then self (project) packages.
fix
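The requested grouping, as a generic sketch; the third-party and project imports are illustrative (commented out so the sketch is self-contained):

```python
# standard library
import os
import uuid

# third-party packages (illustrative)
# import uvicorn

# self (project) packages (illustrative)
# from sedna.common.config import Context
```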
lib/sedna/service/client.py
Outdated
json_data.update({"data": x})
_url = "{}/{}".format(self.endpoint, "predict")
try:
    res = http_request(url=_url, method="POST", json=json_data)
Just: return http_request(url=_url, method="POST", json=json_data)
fix

def inference(self, x, **kwargs):
    """Use the remote big model server to inference."""
    json_data = deepcopy(kwargs)
Same question: why deepcopy?
Because it will be changed below.
lib/sedna/service/client.py
Outdated
"""Use the remote big model server to inference."""
json_data = deepcopy(kwargs)
json_data.update({"data": x})
_url = "{}/{}".format(self.endpoint, "predict")
This _url is fixed; it can be initialized once.
fix
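A minimal sketch of the suggested change, assuming a client class shaped like the one under review (the real http_request call is stubbed out as a comment):

```python
class ModelClient:
    def __init__(self, endpoint):
        self.endpoint = endpoint
        # The predict URL never changes for a given endpoint, so build
        # it once here instead of reformatting it on every call.
        self.predict_url = "{}/{}".format(endpoint, "predict")

    def inference(self, x, **kwargs):
        json_data = dict(kwargs)  # shallow copy is enough here
        json_data["data"] = x
        # real code: return http_request(url=self.predict_url,
        #                                method="POST", json=json_data)
        return self.predict_url, json_data

client = ModelClient("http://big-model:5000")
url, payload = client.inference([1, 2, 3], threshold=0.5)
# url -> "http://big-model:5000/predict"
```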
lib/sedna/service/client.py
Outdated

def check_server_status(self):
    try:
        http_request(url=self.endpoint, method="GET", json={})
json={} can be omitted.
fix
Please fix CI first.
docs/proposals/lifelong-learning.md
Outdated
## Motivation

The edge-cloud collaborative lifelong learning combines the attributes of multi-task learning and incremental learning at the edge to deal with the problem of heterogeneous data and small samples in complex scenarios. Rely on the memory of the knowledge base to adapt to new situations, thus effectively solving the small sample problem of traditional machine learning.
Motivation is important and thus careful polishing would be needed. My suggestion is to modify the writing a little bit like "At present, edge-cloud synergy machine learning is confronted with the challenge of heterogeneous data distributions in complex scenarios and small samples on the edge. The edge-cloud synergy lifelong learning is accordingly proposed: 1) In order to learn with shared knowledge between historical scenarios, the scheme is essentially the combination of another two learning schemes, i.e., multi-task learning and incremental learning; 2) The cloud knowledge base in lifelong learning empowers the scheme with memory ability, which helps to adapt historical knowledge to new and unseen situations on the edge. Joining the forces of multi-task learning, incremental learning and the knowledge base, the lifelong learning scheme seeks to fundamentally overcome the above challenges of edge-cloud synergy machine learning."
fix
lib/sedna/common/class_factory.py
Outdated
class ClassFactory(object):
    """
    A Factory Class to manage all class need to register with config.
    通过装饰器的方式来注册模块--只需要在上面加上一行ClassFactory.register(<class type>)即可完成注册。
Please fix these Chinese comments.
fix
lib/sedna/common/log.py
Outdated
self.logger.log(log_level, msg, *args, **kwargs)


sednaLogger = Logger().logger
Rename sednaLogger to LOGGER or logger, without the sedna word. Also, only Logger.logger is used here, so the other functions of Logger look useless.
fix
log_sum += class_data * math.log(class_data)
confidence_score = 1 + 1.0 * log_sum / math.log(
    len(infer_result))
return confidence_score >= self.threshold_cross_entropy
A fix for this was done by #68.
fix
docs/proposals/lifelong-learning.md
Outdated
|Lifelong-learning-job Reported State Updated | The controller appends the reported status of the job by LC in the cloud. |

### Details of api between GM(cloud) and LC(edge)
1. GM(downstream controller) syncs the job info to LC:
This section is the same as the section for the incremental learning job; please use a link instead of a copy.
task_mining=None,
task_remodeling=None,
inference_integrate=None,

Why is there an empty line here?
fix
res = model_obj.train(train_data=task.samples, **kwargs)
if callback:
    res = callback(model_obj, res)
model_path = model_obj.save(model_name=f"{task.entry}.pkl")
Why does the model end with the fixed string "pkl"? Other model formats should be considered.
fix
task.result = pred
task.model = m
tasks.append(task)
res = self.inference_integrate(tasks)
inference_integrate is not a good name for understanding what the function does.
metrics_param = {"average": "micro"}

data.x['pred'] = result
data.x['y'] = data.y
It may be better understood if you use keys named "pred_y" and "real_y".
fix
if metrics:
    if callable(metrics):
        m_name = getattr(metrics, '__name__', "mtl_eval")
        m_dict = {
            m_name: metrics
        }
    elif isinstance(metrics, (set, list)):
        for inx, m in enumerate(metrics):
            m_name = getattr(m, '__name__', f"mtl_eval_{inx}")
            if isinstance(m, str):
                m = getattr(sk_metrics, m)
            if not callable(m):
                continue
            m_dict[m_name] = m
    elif isinstance(metrics, str):
        m_dict = {
            metrics: getattr(sk_metrics, metrics, sk_metrics.log_loss)
        }
    elif isinstance(metrics, dict):
        for k, v in metrics.items():
            if isinstance(v, str):
                v = getattr(sk_metrics, v)
            if not callable(v):
                continue
            m_dict[k] = v
There are so many if/else branches, and it is not intuitive what these lines do. You should add some comments or make the code clearer.
fix
return {
    name: metric(data.y, result, **metrics_param)
    for name, metric in m_dict.items()
}, tasks_detail
In this long function, either add comments in segments or break it down into smaller functions.
fix
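One way to address both comments: factor the branching into a small helper that normalizes every accepted metrics spec into a name-to-callable dict. This is a sketch; the sk_metrics attribute lookup is replaced with a plain dict registry so the example is self-contained.

```python
def normalize_metrics(metrics, registry):
    """Normalize a metrics spec (callable | str | list/set | dict) into a
    {name: callable} dict, resolving strings via `registry` and silently
    dropping entries that do not resolve to callables."""
    m_dict = {}
    if callable(metrics):
        m_dict[getattr(metrics, '__name__', "mtl_eval")] = metrics
    elif isinstance(metrics, str):
        m_dict[metrics] = registry.get(metrics)
    elif isinstance(metrics, (set, list)):
        for inx, m in enumerate(metrics):
            name = getattr(m, '__name__', f"mtl_eval_{inx}")
            if isinstance(m, str):
                name, m = m, registry.get(m)
            m_dict[name] = m
    elif isinstance(metrics, dict):
        for k, v in metrics.items():
            if isinstance(v, str):
                v = registry.get(v)
            m_dict[k] = v
    # one filter at the end replaces the per-branch callable checks
    return {k: v for k, v in m_dict.items() if callable(v)}

registry = {"accuracy": lambda y, p: sum(a == b for a, b in zip(y, p)) / len(y)}
m = normalize_metrics("accuracy", registry)
```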
samples = BaseDataSource(data_type=d_type)
samples.x = x_data
samples.y = y_data
return tasks, task_index, samples
It's best to write down the data structures of these objects in comments.
fix
allocations = meta_attr.apply(
    lambda x: self.task_extractor.get(
        "_".join(
            map(lambda y: str(x[y]).replace("_", "-").replace(" ", ""),
                self.attr_filed)
        ),
        0),
    axis=1).values.tolist()
These lines look strange; why do you use such hard-coded "replace" calls to do something we don't know?
It avoids errors caused by special characters in some metadata.
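The intent, spelled out as a hypothetical helper (the attribute names are illustrative): the attribute values are joined with "_" to form a task lookup key, so "_" and spaces inside each value are normalized first to keep the composed key unambiguous.

```python
def build_task_key(row, attr_fields):
    # "_" is the join separator, so replace it (and spaces) inside each
    # value; otherwise metadata containing "_" would produce keys that
    # collide with, or fail to match, the task_extractor table.
    return "_".join(
        str(row[field]).replace("_", "-").replace(" ", "")
        for field in attr_fields
    )

key = build_task_key({"city": "new york", "season": "dry_hot"},
                     ["city", "season"])
# -> "newyork_dry-hot"
```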
Please squash/rebase these commits into a clean state.
Signed-off-by: JoeyHwong <joeyhwong@gknow.cn>
Force-pushed 122bae8 to 8a33c29
spec:
  nodeName: "edge-node"
  containers:
    - image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.1.0
Tag => v0.3.0.
fix
trainSpec:
  template:
    spec:
      nodeName: "edge-node"
Please use $WORKER_NODE instead of edge-node.
fix
spec:
  nodeName: "edge-node"
  containers:
    - image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.1.0
Ditto, tag v0.1.0 => v0.3.0.
fix
evalSpec:
  template:
    spec:
      nodeName: "edge-node"
ditto, fix nodeName

### Effect Display
in this example, false and failed detections occur at stage of inference before lifelong learning, after lifelong learning,
Suggested change:
In this example, **false** and **failed** detections occur at stage of inference before lifelong learning.
fix
fix

### Effect Display
in this example, false and failed detections occur at stage of inference before lifelong learning, after lifelong learning,
Greatly improve the precision and accuracy of the dataset.
Fill in the xx%.
Suggested change:
After lifelong learning, the precision and accuracy of the dataset have been improved by xx%.
fix
fix
Force-pushed 4547dfd to 75509a9
) and len(task_info_res):
    task_info_res = list(task_info_res)[0]
    final_res.append(task_info_res)
else:
This else branch can be merged into the if branch.
fix
          valid_data=None,
          post_process=None,
          **kwargs):
    self.log.warn("Joint inference Experiment should offer model file")
- Why does joint inference have train?
- What does Experiment mean?
- Why warn at the very first line of train?

class JointInference(JobBase):
    """
    Joint inference Experiment
Ditto, fix the Experiment word.
fix
self.cloud = ModelClient(service_name=self.job_name,
                         host=self.remote_ip, port=self.port)

def train(self, train_data,
There is a lot of code duplicated from TSBigModelService.train.
fix
Force-pushed fc1f7d8 to 1edef5f
os.environ['BACKEND_TYPE'] = 'KERAS'


class Estimator:
Please add a comment to describe the function of Estimator.
Estimator: as an input parameter of the core of Sedna, it represents a functional object that has a training callback function. These parameters should be described in the readme of the lib, not in the example.
OK, so would it be better to call it XXXEstimator?
That is a great piece of advice, and I hope to fix it in the next version.
if __name__ == '__main__':
    KBServer(host="127.0.0.1").start()
Is 127.0.0.1 right? Listening on 127.0.0.1 will forbid the train worker from accessing the KB server.
fix
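A sketch of the suggested fix: default the bind address to all interfaces so remote train workers can reach the server, with an environment override. The variable name KB_SERVER_BIND_HOST is an assumption for illustration, not an existing Sedna setting.

```python
import os

# Binding to 127.0.0.1 makes the KB server unreachable from train
# workers on other nodes; 0.0.0.0 listens on all interfaces.
# KB_SERVER_BIND_HOST is a hypothetical override knob.
host = os.environ.get("KB_SERVER_BIND_HOST", "0.0.0.0")
# KBServer(host=host).start()
```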
Force-pushed 1edef5f to a3440eb
/lgtm Let's improve this code by addressing the following (non-exhaustive) issues in the next one or two months:
Follow the [Sedna installation document](/docs/setup/install.md) to install Sedna.

### Prepare Dataset
In this example, you can use [ASHRAE Global Thermal Comfort Database II](https://datadryad.org/stash/dataset/doi:10.6078/D1F671) to initial lifelong learning Knowledgebase .
Suggested change:
In this example, you can use [ASHRAE Global Thermal Comfort Database II](https://datadryad.org/stash/dataset/doi:10.6078/D1F671) to initialize the lifelong learning job.
fix
Signed-off-by: JoeyHwong <joeyhwong@gknow.cn>
Force-pushed a3440eb to 97a77d0
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: llhuii. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Issue #46 by JoeyHwong