Adjusting (where appropriate) for maximization #2

Open
JedStephens opened this issue Jun 9, 2020 · 4 comments

Comments

@JedStephens commented Jun 9, 2020

@gialmisi Would you help me get to the right starting point for some of these problems?

When a _ScalarObjective has maximize=True, the solve_pareto_front_representation function fails.

Code to reproduce:

import numpy as np
import matplotlib.pyplot as plt
from desdeo_problem.Problem import MOProblem
from desdeo_problem.Variable import variable_builder
from desdeo_problem.Objective import _ScalarObjective

# f_1 is profitability. We want to maximize...
def f_1(xs: np.ndarray):
    xs = np.atleast_2d(xs)
    return np.sum(100*xs[:,1] + 25*xs[:,1]*xs[:,0] + 50*xs[:,0])

# f_2 is environment. We want to maximize...
def f_2(xs: np.ndarray):
    xs = np.atleast_2d(xs)
    return np.sum(1000 - 5*(xs[:,1]-1)**3 - 1.25*(xs[:,0]-5) - ((xs[:,1]-1)*(xs[:,0]-5))**0.5)

varsl = variable_builder(
    ["fertilizer_kg", "irrigation_days"],
    initial_values=[5, 1],
    lower_bounds=[5, 1],
    upper_bounds=[60, 7],
)

f1 = _ScalarObjective(name="f1", evaluator=f_1, maximize = True)
f2 = _ScalarObjective(name="f2", evaluator=f_2, maximize = True)

# Lower and upper bounds consistent with a maximisation problem.
p_lower_bound = f_1([[5,1]])
p_upper_bound = f_1([[60,7]])
e_lower_bound = f_2([[60,7]])
e_upper_bound = f_2([[5,1]])

# Ideal and nadir values chosen consistent with a maximisation problem.
problem = MOProblem(
    variables=varsl,
    objectives=[f1, f2],
    ideal=np.array([p_upper_bound, e_upper_bound]),
    nadir=np.array([p_lower_bound, e_lower_bound]),
)

# Determine the Pareto front.
from desdeo_mcdm.utilities.solvers import solve_pareto_front_representation

p_front = solve_pareto_front_representation(problem, step=2.0)[1]

plt.scatter(p_front[:, 0], p_front[:, 1], label="Pareto front")
plt.scatter(problem.ideal[0], problem.ideal[1], label="Ideal")
plt.scatter(problem.nadir[0], problem.nadir[1], label="Nadir")
plt.xlabel("f1")
plt.ylabel("f2")
plt.title("Approximate Pareto front function")
plt.legend()
plt.show()

The above process can be restated as a minimization problem. To do this, I negate all the objective values, swap the upper and lower bounds, and adjust the nadir and ideal values accordingly.
(That's a lot to remember!)
With those changes, the code runs.

Code to reproduce:

# DESDEO expects minimization.
import numpy as np
import matplotlib.pyplot as plt
from desdeo_problem.Problem import MOProblem
from desdeo_problem.Variable import variable_builder
from desdeo_problem.Objective import _ScalarObjective

# f_1 is profitability -- ultimately we want to maximise.
# Note the return value is negated (so as to minimise).
def f_1(xs: np.ndarray):
    xs = np.atleast_2d(xs)
    return -np.sum(100*xs[:,1] + 25*xs[:,1]*xs[:,0] + 50*xs[:,0])

# f_2 is environment -- ultimately we want to maximise.
# Note the return value is negated (so as to minimise).
def f_2(xs: np.ndarray):
    xs = np.atleast_2d(xs)
    return -np.sum(1000 - 5*(xs[:,1]-1)**3 - 1.25*(xs[:,0]-5) - ((xs[:,1]-1)*(xs[:,0]-5))**0.5)

varsl = variable_builder(
    ["fertilizer_kg", "irrigation_days"],
    initial_values=[5, 1],
    lower_bounds=[5, 1],
    upper_bounds=[60, 7],
)

f1 = _ScalarObjective(name="f1", evaluator=f_1, maximize=False)
f2 = _ScalarObjective(name="f2", evaluator=f_2, maximize=False)

# Lower and upper bounds of the negated objectives, consistent with a minimisation problem.
p_upper_bound = f_1([[5,1]])
p_lower_bound = f_1([[60,7]])
e_upper_bound = f_2([[60,7]])
e_lower_bound = f_2([[5,1]])

# Ideal and nadir values chosen consistent with a minimisation problem.
problem = MOProblem(
    variables=varsl,
    objectives=[f1, f2],
    nadir=np.array([p_upper_bound, e_upper_bound]),
    ideal=np.array([p_lower_bound, e_lower_bound]),
)

# Determine the Pareto front.
from desdeo_mcdm.utilities.solvers import solve_pareto_front_representation

p_front = solve_pareto_front_representation(problem, step=2.0)[1]

plt.scatter(p_front[:, 0], p_front[:, 1], label="Pareto front")
plt.scatter(problem.ideal[0], problem.ideal[1], label="Ideal")
plt.scatter(problem.nadir[0], problem.nadir[1], label="Nadir")
plt.xlabel("f1")
plt.ylabel("f2")
plt.title("Approximate Pareto front function")
plt.legend()
plt.show()
@JedStephens (Author)

Any thoughts on why solve_pareto_front_representation fails when a _ScalarObjective has maximize=True?

@gialmisi (Contributor)

Hi Jed. I will look into this later today. I have my own suspicions as to why this problem is emerging.

@gialmisi (Contributor)

@JedStephens I was able to reproduce the same results with both snippets of code you shared. The issue was that MOProblem expects its nadir and ideal points to be expressed as if each of the objectives were to be minimized. Also, when computing slices:

# bounds to be used to compute slices

It was previously assumed that the values in the ideal point were always less than the values in the nadir point. In other words, the assumption of everything being minimized was screwing things up.
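
To illustrate the kind of assumption that breaks (a hypothetical sketch with made-up numbers, not the actual solver code): if the ideal values end up larger than the nadir values, as happens when a maximization problem's bounds are passed in without negation, stepping from ideal towards nadir with a positive step produces empty slices.

import numpy as np

# Hypothetical ideal/nadir where ideal > nadir in each objective
# (maximization values passed in without negation).
ideal = np.array([100.0, 50.0])
nadir = np.array([10.0, 5.0])
step = 2.0

# Stepping from ideal towards nadir with a positive step yields empty ranges.
slices = [np.arange(start, stop, step) for start, stop in zip(ideal, nadir)]
print([s.size for s in slices])  # [0, 0] -- no points between ideal and nadir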

Also, when defining an MOProblem, the nadir and ideal should be supplied as if each of the objectives were to be minimized, regardless of whether a _ScalarObjective is actually to be minimized or maximized. So if you are maximizing each objective, you should have something like this:

# Notice the minus signs, we assume maximization in each objective here.
problem = MOProblem(
    variables=varsl,
    objectives=[f1, f2],
    ideal=-np.array([p_upper_bound, e_upper_bound]),
    nadir=-np.array([p_lower_bound, e_lower_bound]),
)

I think this is confusing and we should address this in the future.

solve_pareto_front_representation should now return objective values with the correct sign when maximizing an objective. The latest version of desdeo-mcdm on PyPI is up to date with the current master branch.
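
As a quick check, here is a minimal sketch of the maximization setup after the fix. It assumes f_1, f_2, varsl and the p_/e_ bound values from the first snippet above are already in scope; maximize=True is kept and the ideal/nadir are negated as described.

# Minimal sketch; reuses f_1, f_2, varsl and the bound values from the first snippet above.
from desdeo_mcdm.utilities.solvers import solve_pareto_front_representation

f1 = _ScalarObjective(name="f1", evaluator=f_1, maximize=True)
f2 = _ScalarObjective(name="f2", evaluator=f_2, maximize=True)

# ideal/nadir negated: MOProblem expects them as if every objective were minimized.
problem = MOProblem(
    variables=varsl,
    objectives=[f1, f2],
    ideal=-np.array([p_upper_bound, e_upper_bound]),
    nadir=-np.array([p_lower_bound, e_lower_bound]),
)

# With the fix, the returned objective values should carry the correct
# (positive) sign for the maximized objectives.
p_front = solve_pareto_front_representation(problem, step=2.0)[1]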

@JedStephens (Author)

@gialmisi much appreciated as usual.
Let me see how we go now for a bit!
