Conversation

@odow
Member

@odow odow commented May 8, 2022

Closes #2973

TODO

  • Document
  • Consider doing same to @constraint
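
A minimal sketch of the usage this PR adds, in the style of the benchmarks further down (the set_string_name keyword name is taken from this PR; treat the exact syntax as illustrative rather than final):

julia> using JuMP

julia> function build_model(n)
           model = Model()
           # Skip building the "x[i]" name strings; the Julia binding `x` and the
           # model[:x] lookup still work as usual.
           @variable(model, x[1:n], set_string_name = false)
           return model
       end
build_model (generic function with 1 method)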

@codecov

codecov bot commented May 8, 2022

Codecov Report

Merging #2978 (4f3f38a) into master (249d969) will increase coverage by 0.01%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master    #2978      +/-   ##
==========================================
+ Coverage   95.42%   95.44%   +0.01%     
==========================================
  Files          43       43              
  Lines        5794     5819      +25     
==========================================
+ Hits         5529     5554      +25     
  Misses        265      265              
Impacted Files        Coverage Δ
--------------------  ------------------------------
src/JuMP.jl           95.66% <100.00%> (+0.08%) ⬆️
src/macros.jl         96.41% <100.00%> (+0.06%) ⬆️
src/file_formats.jl   100.00% <0.00%> (ø)
src/variables.jl      98.71% <0.00%> (+<0.01%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@nickrobinson251
Contributor

Thanks! This looks great, and is a much simpler option than the user code changes I mentioned having to make in #2973 :)

I think that to close #2973 (from my perspective), this same keyword would need to be available in @constraint, so that the code change is the same for @variable and @constraint, and this option would need to be mentioned on the Performance Tips page specifically. Having this info in the relevant section of the docs is also very helpful, but personally I think this can make a big enough performance difference to be worth mentioning on the dedicated Performance Tips page too.
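
For concreteness, the requested @constraint usage might look like the sketch below, assuming the constraint macro reuses the keyword name from @variable:

julia> function build_model(n)
           model = Model()
           @variable(model, x[1:n], set_string_name = false)
           # Same opt-out on the constraint macro, so no "con[i]" name strings
           # are built for the n constraints either.
           @constraint(model, con[i = 1:n], i * x[i] >= i, set_string_name = false)
           return model
       end
build_model (generic function with 1 method)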

@odow
Member Author

odow commented May 15, 2022

Seems like jump-dev/MutableArithmetics.jl#153 broke our tests.

@odow
Member Author

odow commented May 15, 2022

Fixing the tests here: jump-dev/MutableArithmetics.jl#156

Needs JuliaRegistries/General#60308

@odow odow closed this May 16, 2022
@odow odow reopened this May 16, 2022
@odow
Member Author

odow commented May 16, 2022

@nickrobinson251 how's this now?

@nickrobinson251
Contributor

Yeah, this looks good.

I've updated the model-building code mentioned in #2973. I don't see the same performance gains using this new syntax, but it gets most of the way there, so it's almost certainly human error on my part (I'm looking into it). Just in case: is there any way that using this keyword, like @constraint(model, con[...], x[...] <= y[...], set_string_name = false), wouldn't save as much work as using model[:con] = @constraint(model, [...], x[...] <= y[...])?

One thought does come to mind when updating all that code:
in that use case, "set_string_names" is something we actually want to set at the Model level (not per @variable/@constraint call). That is, what would be optimal for that code is if we could write, e.g., model = Model(; set_string_names = false) and then, for that model, @variable(model, x) would be equivalent to @variable(model, x, set_string_name = false). This avoids adding set_string_name = false to tens of @variable/@constraint calls in the code base (and reduces the chance that someone forgets to add it when adding a new @variable/@constraint).

so the macro logic would be something like

new_name_code = if isempty(set_string_name_kw_args)
-        name_code   
+        model.set_string_names ? name_code : ""
    else
        Expr(:if, esc(set_string_name_kw_args[1].args[2]), name_code, "")
    end

(assuming model (somehow) stores a set_string_names::Bool)

@odow
Member Author

odow commented May 16, 2022

Hmm. Let me take a look. Admittedly, I didn't benchmark this.

And yes, as I was writing this PR I considered a model-level flag. But we haven't added any others, so this would be precedent-setting. I don't think we need both the model- and macro-level flags.

@odow
Member Author

odow commented May 16, 2022

Benchmarking seems to work for me?

julia> using JuMP

julia> using BenchmarkTools

julia> function foo(flag::Bool)
           model = Model()
           @variable(model, x[1:1_000_000], set_string_name = flag)
           return model
       end
foo (generic function with 1 method)

julia> @benchmark foo(true)
BenchmarkTools.Trial: 8 samples with 1 evaluation.
 Range (min … max):  555.151 ms … 813.241 ms  ┊ GC (min … max): 25.19% … 47.18%
 Time  (median):     688.693 ms               ┊ GC (median):    27.57%
 Time  (mean ± σ):   683.801 ms ±  99.854 ms  ┊ GC (mean ± σ):  35.44% ±  9.90%

  █       █ █       █                         ██        █     █  
  █▁▁▁▁▁▁▁█▁█▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██▁▁▁▁▁▁▁▁█▁▁▁▁▁█ ▁
  555 ms           Histogram: frequency by time          813 ms <

 Memory estimate: 351.84 MiB, allocs estimate: 7000197.

julia> @benchmark foo(false)
BenchmarkTools.Trial: 38 samples with 1 evaluation.
 Range (min … max):  127.087 ms … 140.850 ms  ┊ GC (min … max): 15.42% … 16.45%
 Time  (median):     135.225 ms               ┊ GC (median):    14.96%
 Time  (mean ± σ):   134.352 ms ±   4.170 ms  ┊ GC (mean ± σ):  14.28% ±  2.31%

           █▃       ▃                             ▃ ▃ ▃ ▃        
  ▇▇▁▁▁▁▁▁▇██▁▁▁▇▇▇▇█▁▇▇▁▁▁▇▁▁▁▁▁▁▁▇▇▁▁▇▇▇▁▁▇▇▇▁▁▇█▇█▁█▇█▁▁▇▁▁▇ ▁
  127 ms           Histogram: frequency by time          141 ms <

 Memory estimate: 143.08 MiB, allocs estimate: 2000147.

julia> function bar(flag::Bool)
           model = Model()
           @variable(model, x[1:1_000_000], set_string_name = false)
           @constraint(model, con[i=1:1_000_000], i * x[i] >= i, set_string_name = flag)
           return model
       end
bar (generic function with 1 method)

julia> @benchmark bar(true)
BenchmarkTools.Trial: 2 samples with 1 evaluation.
 Range (min … max):  2.734 s …   2.826 s  ┊ GC (min … max): 39.93% … 44.46%
 Time  (median):     2.780 s              ┊ GC (median):    42.23%
 Time  (mean ± σ):   2.780 s ± 64.850 ms  ┊ GC (mean ± σ):  42.23% ±  3.21%

  █                                                       █  
  █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  2.73 s         Histogram: frequency by time        2.83 s <

 Memory estimate: 1.29 GiB, allocs estimate: 22000227.

julia> @benchmark bar(false)
BenchmarkTools.Trial: 4 samples with 1 evaluation.
 Range (min … max):  1.268 s …    1.518 s  ┊ GC (min … max): 38.18% … 48.50%
 Time  (median):     1.406 s               ┊ GC (median):    45.42%
 Time  (mean ± σ):   1.399 s ± 110.421 ms  ┊ GC (mean ± σ):  44.62% ±  4.63%

  █                   █                      █             █  
  █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  1.27 s         Histogram: frequency by time         1.52 s <

 Memory estimate: 1.07 GiB, allocs estimate: 16000177.

@nickrobinson251
Contributor

nickrobinson251 commented May 17, 2022

Benchmarking seems to work for me?

Thanks for checking. The kind of comparison I was talking about is between writing the same thing in these two different ways:

julia> function f_old()
           model = Model()
           model[:x] = x = @variable(model, [1:1_000_000])
           model[:con] = @constraint(model, [i=1:1_000_000], i * x[i] >= i)
           return model
       end
f_old (generic function with 1 method)

julia> function f_new()
           model = Model()
           @variable(model, x[1:1_000_000], set_string_name = false)
           @constraint(model, con[i=1:1_000_000], i * x[i] >= i, set_string_name = false)
           return model
       end
f_new (generic function with 1 method)

And indeed these do benchmark exactly the same:

julia> @benchmark f_old()
BenchmarkTools.Trial: 4 samples with 1 evaluation.
 Range (min … max):  1.198 s …    1.582 s  ┊ GC (min … max): 32.44% … 55.68%
 Time  (median):     1.309 s               ┊ GC (median):    40.02%
 Time  (mean ± σ):   1.349 s ± 165.000 ms  ┊ GC (mean ± σ):  42.93% ± 10.23%

  █           █       █                                    █
  █▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  1.2 s          Histogram: frequency by time         1.58 s <

 Memory estimate: 1.07 GiB, allocs estimate: 16000153.

julia> @benchmark f_new()
BenchmarkTools.Trial: 4 samples with 1 evaluation.
 Range (min … max):  1.127 s …    1.598 s  ┊ GC (min … max): 33.56% … 55.72%
 Time  (median):     1.252 s               ┊ GC (median):    41.24%
 Time  (mean ± σ):   1.307 s ± 211.168 ms  ┊ GC (mean ± σ):  44.01% ±  9.60%

  █     █                 █                                █
  █▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  1.13 s         Histogram: frequency by time          1.6 s <

 Memory estimate: 1.07 GiB, allocs estimate: 16000153.

But the behaviour I am seeing on my real example (unfortunately private) is that rewriting it to the "new"-style version, using set_string_name, allocates more. This micro-benchmark makes me even more confident that it's just my human error somewhere, but I'm struggling to find the source. The "new"-style version of my model doesn't have any names, so I don't think it's just that I forgot a set_string_name = false somewhere; I checked using:

julia> function has_names(model)
           for container in values(object_dictionary(model))
               for item in container
                   isempty(JuMP.name(item)) || return true
               end
           end
           return false
       end
has_names (generic function with 1 method)

julia> has_names(model)
false

More generally, I think this looks great. Is the Model-level flag worth considering a little more? It'd probably help out with the situation I'm in above 😂 But set_string_name is also, I think, the first keyword added to the macros that could make sense as a Model-level flag (arguably container could make sense too, but very often the best option there is going to be variable/constraint-dependent). Personally, as a user, I'm leaning towards thinking it's a good idea.

Thanks so much for working on this!

@odow
Member Author

odow commented May 17, 2022

How's this?

@nickrobinson251
Contributor

How's this?

This is great!

I've updated my code, and it's a much nicer solution than either of the other options.

One tiny wrinkle that came up is that you cannot create a Model with model.set_string_names == false in a single call.
For example, to change Models to not set names, you end up with changes like this:

function foo(args...)
-    return foo(Model(), args...)
+    model = Model()
+    set_string_names(model, false)
+    return foo(model, args...)
 end

Would being able to set this at Model creation time via a keyword be a good option?
E.g., the diff would then look like:

function foo(args...)
-    return foo(Model(), args...)
+    return foo(Model(set_string_names=false), args...)
 end

This last tiny question aside, this looks perfect. It is already a really great addition for our use case of building up huge models.
Thanks so much, @odow !

@odow
Member Author

odow commented May 18, 2022

@nickrobinson251 I think we'd prefer to keep the function approach. It matches other JuMP settings like set_silent and set_time_limit_sec.
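
For reference, the merged API is function-based rather than a Model keyword. A minimal usage sketch, assuming the function keeps the name from the final PR title (set_string_names_on_creation):

julia> function build_model(n)
           model = Model()
           # Turn off string-name generation for everything added to this model
           # from here on, analogous to set_silent(model).
           set_string_names_on_creation(model, false)
           @variable(model, x[1:n])
           @constraint(model, con[i = 1:n], i * x[i] >= i)
           return model
       end
build_model (generic function with 1 method)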

@odow odow changed the title Add set_string_name argument to variable macro Add set_string_names_on_creation May 24, 2022
@odow odow merged commit 890b430 into master May 24, 2022
@odow odow deleted the od/set-string-name branch May 24, 2022 00:59