Setting q_weight = 0.1 — is the purpose to bring the power-loss term to the same order of magnitude as the voltage-deviation term? Can it be understood that voltage deviation and power loss each effectively carry a weight of 0.5?
(1) With voltage_weight = 1.0 and q_weight = 0.1, if these are meant to be weights, why do they not add up to 1?
(2) The reinforcement learning method is multi-objective, while the traditional method is single-objective. Can their results be fairly compared? If the traditional method were also made multi-objective, it would involve choosing weights for voltage deviation and line loss. Thank you very much for your answer.
(1) From the optimization point of view, q_weight only adjusts the relative importance of the two objectives, so only the ratio between the weights matters. Scaling both weights by 1/1.1 would give normalized weights of roughly 0.91 and 0.09 (which do sum to 1) without changing the optimal solution.
(2) We would argue that the objective function is part of the traditional method itself. Besides, OPF actually uses more information than RL, e.g., the network topology, while droop control does not consider power loss at all and its voltage–reactive-power relationship is manually tuned. Therefore, we think the comparison in our experiments is fair.
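For illustration, here is a minimal sketch of how such a weighted two-term reward could be combined. The function and variable names (`weighted_reward`, `voltage_deviation`, `q_penalty`, etc.) are hypothetical and not taken from the repository's actual code; it only demonstrates why the absolute weight values need not sum to 1:

```python
import numpy as np

def weighted_reward(v, q_injections, v_ref=1.0,
                    voltage_weight=1.0, q_weight=0.1):
    """Combine a voltage-deviation penalty and a reactive-power penalty.

    v            : array of bus voltage magnitudes (p.u.)
    q_injections : array of reactive power injections of the controlled units
    Only the ratio voltage_weight : q_weight matters for the optimum;
    rescaling both weights by the same factor leaves the argmax unchanged.
    """
    voltage_deviation = np.mean((v - v_ref) ** 2)   # voltage term
    q_penalty = np.mean(q_injections ** 2)          # proxy for the loss term
    return -(voltage_weight * voltage_deviation + q_weight * q_penalty)

# Weights (1.0, 0.1) and their normalized counterparts (1.0/1.1, 0.1/1.1)
# rank any two candidate actions identically, since one reward is just a
# positive multiple of the other.
v = np.array([1.02, 0.98, 1.05])
q = np.array([0.3, -0.1, 0.2])
print(weighted_reward(v, q))
print(weighted_reward(v, q, voltage_weight=1.0 / 1.1, q_weight=0.1 / 1.1))
```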