
Error on complex values in NLP #1978

Merged
merged 2 commits into from May 29, 2019

Conversation

odow (Member) commented May 26, 2019

Closes #1973

Related:

```julia
model = Model()
@variable(model, x >= 1im)
# ERROR: InexactError: Float64(0 + 1im)
```

Probably in lots of other places as well. What do we want to do about this?

codecov bot commented May 26, 2019

Codecov Report

Merging #1978 into master will increase coverage by 0.02%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff            @@
##           master   #1978      +/-   ##
=========================================
+ Coverage   88.77%   88.8%   +0.02%     
=========================================
  Files          33      33              
  Lines        4260    4260              
=========================================
+ Hits         3782    3783       +1     
+ Misses        478     477       -1
Impacted Files Coverage Δ
src/parse_nlp.jl 90.13% <ø> (+0.65%) ⬆️

Continue to review full report at Codecov.

Powered by Codecov. Last update 2201c60...4b6f121. Read the comment docs.

Review comment on src/parse_nlp.jl (outdated, resolved)
mlubin (Member) commented May 27, 2019

> Probably in lots of other places as well. What do we want to do about this?

Check all values that come into a macro, and throw an error if they match a condition that we know will produce an unhelpful internal error message (e.g., not being convertible to Float64). However, doing this well requires a bit of refactoring of the macros, so it is perhaps not a high priority for now.
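A minimal sketch of the kind of up-front check described above (the helper name and error text are hypothetical; this is not JuMP's actual implementation):

```julia
# Hypothetical validation helper: convert a coefficient to Float64 up front,
# replacing the opaque `InexactError` with a readable message.
function check_numeric(value)
    if value isa Real && isfinite(value)
        return Float64(value)
    end
    error("Coefficient $(value)::$(typeof(value)) is not convertible to " *
          "Float64. Complex and non-finite values are not supported.")
end

check_numeric(3)  # returns 3.0
# check_numeric(1im) throws the descriptive error instead of InexactError
```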

odow merged commit 45235ca into master May 29, 2019
odow deleted the od/complex branch May 29, 2019 15:00
@Bondan000

Hi, should this merge solve the problem with complex numbers? I am using JuMP v0.20.1 and Julia v1.2.0 and still see this error:

```julia
julia> mdl = Model()
julia> @variable(mdl, x >= 1 * im)
ERROR: InexactError: Float64(0 + 1im)
```

Is there any workaround for using complex numbers when defining expressions?

odow (Member, Author) commented Nov 13, 2019

> should this merge solve the problem with the complex numbers

No. JuMP does not support complex numbers.

> is there any work around to use complex numbers while defining expressions?

Define two variables, one for the real part and one for the imaginary part:

```julia
model = Model()
@variable(model, x_real >= 0)
@variable(model, x_imag >= 1)
```

See https://discourse.julialang.org/t/how-define-a-complex-jump-variable/14362
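Building on this workaround, arithmetic on the "complex" value can be expanded by hand into its real and imaginary components. A sketch (the variable and expression names are illustrative, not from the thread):

```julia
using JuMP

model = Model()
@variable(model, x_real)
@variable(model, x_imag)
@variable(model, y_real)
@variable(model, y_imag)

# z = x * y, expanded with (a + bi)(c + di) = (ac - bd) + (ad + bc)i:
@expression(model, z_real, x_real * y_real - x_imag * y_imag)
@expression(model, z_imag, x_real * y_imag + x_imag * y_real)

# A bound on the modulus, |x|^2 <= 1, becomes a constraint on the parts:
@constraint(model, x_real^2 + x_imag^2 <= 1)
```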

@Bondan000

Thank you for the quick reply! I will go through the Discourse thread and try to solve my specific problem!


Successfully merging this pull request may close these issues.

InexactError: Float64 when problem gets big
3 participants