
q_weight= 0.1 #15

Closed
zly987 opened this issue Jun 10, 2022 · 2 comments

Comments


zly987 commented Jun 10, 2022

Is q_weight = 0.1 set so that the power-loss term has the same order of magnitude as the voltage-deviation term? Can it be understood that voltage deviation and power loss each effectively carry a weight of 0.5?


zly987 commented Jun 10, 2022

(1) With voltage_weight = 1.0 and q_weight = 0.1: if these are defined as weights, why do they not sum to 1?
(2) The reinforcement learning method is multi-objective, while the traditional method is single-objective. Can their results be compared? If the traditional method were made multi-objective, it would involve weighting voltage deviation against line loss. Thank you very much for your answer.

@hsvgbkhgbv
Member


(1) From the optimization point of view, q_weight simply adjusts the balance of importance between the two objectives; the weights do not need to sum to 1.
(2) We would argue that the objective function is part of the traditional method. Besides, OPF actually uses more information than RL, e.g., the network topology. Droop control, meanwhile, does not consider power loss, and its relationship between voltage and q is manually tuned. Therefore, we think the comparison in our experiments is fair.
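The point about weights not needing to sum to 1 can be sketched as a scalarised reward. The function below is a hypothetical illustration, not the repository's actual code: the names voltage_deviation, power_loss, voltage_weight, and q_weight are assumptions mirroring this thread, and only the ratio of the two weights affects which action is preferred.

```python
def reward(voltage_deviation, power_loss, voltage_weight=1.0, q_weight=0.1):
    """Combine two objectives into one scalar reward (hypothetical sketch).

    The weights need not sum to 1: only their ratio matters for ranking
    actions, so q_weight = 0.1 just scales the power-loss term to a
    magnitude comparable to the voltage-deviation term.
    """
    return -(voltage_weight * voltage_deviation + q_weight * power_loss)

# Multiplying both weights by the same positive constant rescales every
# reward equally, so the induced preference over actions is unchanged:
a = reward(0.05, 0.3)                    # weights (1.0, 0.1)
b = reward(0.02, 0.9)
scaled_a = reward(0.05, 0.3, 0.5, 0.05)  # both weights halved
scaled_b = reward(0.02, 0.9, 0.5, 0.05)
assert (a > b) == (scaled_a > scaled_b)
```

This is why renormalising (1.0, 0.1) to sum to 1 would change nothing: dividing both weights by 1.1 preserves the ratio and hence the optimum.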
