Commits on Aug 14, 2019
  1. Assorted memory-related fixes for HostManager, EE, RecSys (#3411)

    jfix71 authored and facebook-github-bot committed Aug 14, 2019
    Summary:
    4 commits here:
    - When calling `ExecutionEngine::clear()`, reset the Module
    - Add device memory to the EE constructor so we don't need to reset
    - Use `HostManager::removeNetwork()` from `HostManager::clearHost()`
    - Refactor RecSys to make sure EEs are cleared once done, and that we save only the result tensor to compare against.
    Pull Request resolved: #3411
    
    Differential Revision: D16796999
    
    Pulled By: jfix71
    
    fbshipit-source-id: 7d701bed2d610d151cff4702c35a35bd09abaf5c
Commits on Aug 13, 2019
  1. Add whitelisting option for FP16 conversion (#3386)

    jfix71 authored and facebook-github-bot committed Aug 13, 2019
    Summary:
    Add option to whitelist node kinds for conversion instead of the default blacklist. This makes testing of networks easier.
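
    A hedged sketch of how such a whitelist might be expressed through the precision configuration; the field names here (in particular `useSetAsWhitelist`) are illustrative assumptions, not confirmed API:

    ```
    // Illustrative only: treat the kind set as a whitelist of node kinds to
    // convert to FP16, instead of the default blacklist of kinds to skip.
    PrecisionConfiguration precConfig;
    precConfig.convertToFP16 = true;
    precConfig.precisionModeKindSet.insert(Kinded::Kind::FullyConnectedNodeKind);
    precConfig.useSetAsWhitelist = true; // assumed flag name
    ```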
    Pull Request resolved: #3386
    
    Test Plan: Added unit test.
    
    Differential Revision: D16685611
    
    Pulled By: jfix71
    
    fbshipit-source-id: 17ec8c1a5a2ef691d927401eaab5c2c40bb90124
Commits on Aug 12, 2019
  1. Allow for command line specification of intermediate dimensions of MLP FCs, and min/max values for lengths (#3405)

    jfix71 authored and facebook-github-bot committed Aug 12, 2019
    Summary:
    I also renamed things to clarify what the number of layers means for the top/bottom MLPs (it counts hidden layers, not the total number of layers).
    Pull Request resolved: #3405
    
    Differential Revision: D16748437
    
    Pulled By: jfix71
    
    fbshipit-source-id: 30ebc057ad8fee7c1a830a91de5d921a3a193050
Commits on Aug 8, 2019
  1. Add FP16 versions of tests (#3399)

    jfix71 authored and facebook-github-bot committed Aug 8, 2019
    Summary:
    Add FP16 versions of many of the RecSys tests. This includes versions that use either FP16 or FP32 accumulation.
    
    Also added a `Function::createConvertTo()` that takes an ElemKind, which is more intuitive/easier to use.
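
    A minimal usage sketch of the ElemKind-based overload, assuming a Module `mod` and Function `F` in scope; the shape and names are illustrative:

    ```
    // Cast a float input down to FP16 and save it.
    auto *in = mod.createPlaceholder(ElemKind::FloatTy, {8, 64}, "in",
                                     /*isTrainable=*/false);
    // New overload: pass the target ElemKind directly instead of a full TypeRef.
    auto *toFP16 = F->createConvertTo("convert_to_fp16", in, ElemKind::Float16Ty);
    F->createSave("save", toFP16);
    ```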
    Pull Request resolved: #3399
    
    Differential Revision: D16705642
    
    Pulled By: jfix71
    
    fbshipit-source-id: af3bb1b41d8cd1dca6174af1d94896e6bcc68dbe
Commits on Aug 7, 2019
  1. Add FP16 SLS/SLWS tests (#3398)

    jfix71 authored and facebook-github-bot committed Aug 7, 2019
    Summary:
    These were already supported by the interpreter but untested.
    Pull Request resolved: #3398
    
    Differential Revision: D16698522
    
    Pulled By: jfix71
    
    fbshipit-source-id: c0cd29118512cc0a056e382ada3cde6178c8d054
  2. Add SLWS sweep tests; add a few more test cases for FC. (#3392)

    jfix71 authored and facebook-github-bot committed Aug 7, 2019
    Summary:
    Add SLWS parameter sweep tests. This tests normal SLWS, Fused/unfused versions, and FP16 versions with and without FP16 accumulation.
    
    Note that all of these compare against the Interpreter. Because only the Interpreter supports FP16, the FP16 versions have no open-source backend enabled for them.
    
    I also added a couple extra parameters to sweep over for FC on the lower end for Z and B.
    Pull Request resolved: #3392
    
    Differential Revision: D16685607
    
    Pulled By: jfix71
    
    fbshipit-source-id: 7a3fa494a3ded517795ff17fee06ace844561fb0
Commits on Aug 5, 2019
  1. Add FP16 accumulation option to (Fused)-RWQ-SLWS/SLS (#3356)

    jfix71 authored and facebook-github-bot committed Aug 5, 2019
    Summary:
    As above.
    Pull Request resolved: #3356
    
    Test Plan: Added new FP16 accumulation versions of tests for the already FP16 versions that exist.
    
    Differential Revision: D16607934
    
    Pulled By: jfix71
    
    fbshipit-source-id: 68a40c823a475f1902de6c12e85841720088f6b8
  2. Rename ConvertFrom tests to use ElemKind (#3357)

    jfix71 authored and facebook-github-bot committed Aug 5, 2019
    Summary:
    I prefer this naming -- it's clearer what exactly we're testing, since from the Glow perspective these conversions happen on ElemKinds and not on native data types, and multiple ElemKinds can map to the same native data type (e.g. `Int32QTy` and `Int32ITy` both map to `int32_t`). E.g.:
    
    ```
    old: OperatorTest/OperatorTest.ConvertFrom_float_To_int32_t/1
    new: OperatorTest/OperatorTest.ConvertFrom_FloatTy_To_Int32ITy/1
    ```
    Pull Request resolved: #3357
    
    Differential Revision: D16603401
    
    Pulled By: jfix71
    
    fbshipit-source-id: 5dd012f1cc9a4f90a11414680acf167898163446
Commits on Aug 1, 2019
  1. Initialize filter and bias to prevent flakiness (#3343)

    jfix71 authored and facebook-github-bot committed Aug 1, 2019
    Summary:
    Another flaky test fix.
    Pull Request resolved: #3343
    
    Differential Revision: D16592248
    
    Pulled By: jfix71
    
    fbshipit-source-id: 39be2e9d34de3abfa4c25112dc35f9fc2a2a6043
  2. Only convert PHs from provided F (#3340)

    jfix71 authored and facebook-github-bot committed Aug 1, 2019
    Summary:
    Only consider PHs used by the passed in Function for conversion.
    
    This was discovered due to a cascading series of small issues:
    1. Two Functions are in a single Module, `F1` and `F2`
    2. `F1` has already allocated the Tensor `T1` (whose values are uninitialized) for its Save `S1`'s output PH `PH1`
    3. `convertPlaceholdersToConstants()` is called with `F2`, creating a Constant `C1` for `PH1` using `T1`. However, it does not replace `PH1` with `C1`, because `replaceAllUsesOfWith()` skips the use in `S1`, which is not in `F2`
    4. `compile(F2)` is called, which runs the `ConstantDeduplication` pass on the Module; this finds `C1` and inspects the values in `T1`, which were uninitialized and thus may contain NaNs
    5. This caused the assertion mentioned in #3328 to fire every ~100 runs or so
    Pull Request resolved: #3340
    
    Test Plan: No longer fails: `./tests/OperatorTest --gtest_filter=OperatorTest/OperatorTest.NonSquarePaddingConvolution/2 --gtest_repeat=1000`
    
    Differential Revision: D16589453
    
    Pulled By: jfix71
    
    fbshipit-source-id: a0e210755da71ebd5132a54f2540a0464f0acb8f
  3. Add string->string map backendSpecificOpts to BackendOptions (#3336)

    jfix71 authored and facebook-github-bot committed Aug 1, 2019
    Summary:
    Allow for specifying options specific to a backend via a string to string map in the BackendOptions.
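
    A hedged sketch of setting such options, assuming a `CompilationContext` whose `backendOpts` member is the `BackendOptions` described here and an ExecutionEngine `EE`; the keys are made up for illustration:

    ```
    CompilationContext cctx;
    // Arbitrary example keys; a backend interprets (or ignores) these strings.
    cctx.backendOpts.backendSpecificOpts["example-tile-size"] = "64";
    cctx.backendOpts.backendSpecificOpts["example-enable-fusion"] = "true";
    EE.compile(cctx); // the backend can read the map during compilation
    ```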
    Pull Request resolved: #3336
    
    Test Plan: Added test for deserializing the string string map for the backend opts yaml.
    
    Differential Revision: D16584717
    
    Pulled By: jfix71
    
    fbshipit-source-id: 637861ac4c6917b8c18cc8b65fbc53ada3283cb7
  1. Compare against Interpreter. Use Xavier initialization based on tensor size. (#3312)

    jfix71 authored and facebook-github-bot committed Aug 1, 2019
    Summary:
    A couple improvements to RecSys:
    - If we're executing on a non-Interpreter backend, then run on the Interpreter and make sure the results are equal.
    - Randomly initialize floating point tensors via Xavier initialization, using twice the size of the tensor as the filter size (see the sketch below). Hopefully this will reduce overflows as the test uses different sizes/configurations.
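
    A hedged sketch of that initialization scheme, assuming Glow's `Tensor::init` with `InitKind::Xavier` and a Module `mod` for its PRNG; the shape variables are illustrative:

    ```
    Tensor data(ElemKind::FloatTy, {miniBatch, embeddingDim});
    // Xavier-style init, using twice the tensor's size as the "filter size"
    // to keep the random values small and reduce overflow.
    data.init(Tensor::InitKind::Xavier, 2 * data.size(), mod.getPRNG());
    ```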
    Pull Request resolved: #3312
    
    Test Plan: All tests pass.
    
    Differential Revision: D16529200
    
    Pulled By: jfix71
    
    fbshipit-source-id: 652be557ce65e10caafd3f9d0267574ead441a49
Commits on Jul 31, 2019
  1. Initialize Tensor for Storage to fix flakiness (#3341)

    jfix71 authored and facebook-github-bot committed Jul 31, 2019
    Summary:
    We weren't initializing the Constant. This was causing issues again for constant deduplication with NaNs.
    Pull Request resolved: #3341
    
    Test Plan: No longer breaks: `./tests/GraphOptzTest --gtest_filter=*GraphOptz.ReshapeConstantOneUse* --gtest_repeat=100000`
    
    Differential Revision: D16590730
    
    Pulled By: jfix71
    
    fbshipit-source-id: 85549efc837535cedc566a87dfc1794b4640ba20
  2. Disable constant folding (#3338)

    jfix71 authored and facebook-github-bot committed Jul 31, 2019
    Summary:
    The model runner runs models with all constant inputs. Constant folding was essentially shrinking the entire graph down to `Constant -> Save`.
    Pull Request resolved: #3338
    
    Test Plan: Verified the graph is no longer constant folded away.
    
    Differential Revision: D16589293
    
    Pulled By: jfix71
    
    fbshipit-source-id: cc88dedfd49b09050687f2811aecda4acd9c42ce
Commits on Jul 26, 2019
  1. Add FP16 versions of RWQ-SLWS (#3265)

    jfix71 authored and facebook-github-bot committed Jul 26, 2019
    Summary:
    Add support for FP16 RWQ-SLWS, fused and unfused. In this version of the op, the per-row scale/offset are FP16, the weights are FP16, and the output is FP16. In the Interpreter kernels, each value from the embedding is dequantized to FP16, accumulation occurs in FP32, and then the result is cast back down to FP16.
    
    For the Fused case, I added in a new `ElemKind::UInt8FusedFP16QTy`. This type has 4 extra bytes per row instead of 8, to save FP16 scale/offset.
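
    A hedged sketch of that accumulation scheme (not the actual Interpreter kernel); `float16_t` stands in for Glow's half type, and the row layout is simplified:

    ```
    // One output row: dequantize uint8 data with FP16 scale/offset,
    // accumulate in FP32, then cast the result back down to FP16.
    float16_t accumulateRow(const uint8_t *rowData, size_t rowLength,
                            float16_t scale, float16_t offset, float16_t weight) {
      float sum = 0.0f; // FP32 accumulator
      for (size_t j = 0; j < rowLength; ++j) {
        float16_t v = scale * float16_t(rowData[j]) + offset; // FP16 dequantize
        sum += float(v) * float(weight);
      }
      return float16_t(sum); // cast down to FP16 for the result
    }
    ```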
    
    Documentation: Updated.
    Pull Request resolved: #3265
    
    Test Plan: Added FP16 versions of all OperatorTests for RWQ-SLWS/SLS, fused and unfused.
    
    Differential Revision: D16478496
    
    Pulled By: jfix71
    
    fbshipit-source-id: 9c46e4fea92ff8491093e9f341bd8b897b88164e
Commits on Jul 25, 2019
  1. Allow for passing paths to init_net + predict_net (#3297)

    jfix71 authored and facebook-github-bot committed Jul 25, 2019
    Summary:
    Specifying paths to each of the protos (e.g. one for `predict_net.pbtxt` and one for `init_net.pb`) was broken. This PR fixes it.
    Pull Request resolved: #3297
    
    Test Plan: Added case for this in `run.sh`.
    
    Differential Revision: D16471322
    
    Pulled By: jfix71
    
    fbshipit-source-id: 0d858a80193bf78ebe3861a90aba68ce45fe7335
Commits on Jul 17, 2019
  1. Add PassManager and FunctionPassPipelines (#3185)

    jfix71 authored and facebook-github-bot committed Jul 17, 2019
    Summary:
    Add PassManager and FunctionPassPipelines. I added OptimizationOptions from #3161 with slight modifications.
    
    One open question right now is the best way to allow the backend to have at least some control over the passes.
    
    - Currently in the PR I added `OptimizationOptions::funPassesToSkip` which is a set of passes to skip during optimizations, and then pass `OptimizationOptions` into the Backend at the start of `optimizeFunction()` to allow it to modify it. This is a bit lacking in functionality, but it could be used during both calls to `optimize()` and `fold()`.
    - Alternatively I considered adding the entire `FunctionPassPipeline` for optimizations to `OptimizationOptions`. This means it can be modified however the backend wants (adding in passes we may want to keep out of the default set; rearranging opts; etc.). But this feels a bit weird because there are different pipelines, e.g. the one for Fold, and the one for pre-lowering (though this one is supposed to be independent of the backend anyway and so the backend shouldn't really have any say here...). Maybe we could add in `foldPipeline`, `optPipeline`, etc. to the `OptimizationOptions` instead of having a single one.
    
    Documentation: Added.
    
    Fixes #1641
    Pull Request resolved: #3185
    
    Test Plan: Tests still pass.
    
    Reviewed By: beicy
    
    Differential Revision: D16262299
    
    Pulled By: jfix71
    
    fbshipit-source-id: 5617880c444df70b1de6fac9997cf41ffe30517c
Commits on Jul 13, 2019
  1. Fix uses of MaxPool without NodeValue (#3232)

    jfix71 authored and facebook-github-bot committed Jul 13, 2019
    Summary:
    This was broken as a result of #3146
    Pull Request resolved: #3232
    
    Test Plan:
    Ran mnist and it works again 🙂
    
    Reported in [PyTorch forums here](https://discuss.pytorch.org/t/glow-example-run-error/50461).
    
    Differential Revision: D16232983
    
    Pulled By: jfix71
    
    fbshipit-source-id: f0d636c0a01a4bb1f673b1d1f47f4392a468da6d
Commits on Jul 11, 2019
  1. Check that there are no external inputs (#3222)

    jfix71 authored and facebook-github-bot committed Jul 11, 2019
    Summary:
    `ModelRunner` is not supposed to be used with models that have external inputs. It's a simple test tool to get started with. Check that no external inputs are found after loading the model.
    
    Related to #3221
    Pull Request resolved: #3222
    
    Differential Revision: D16206959
    
    Pulled By: jfix71
    
    fbshipit-source-id: d00d7e1274a220a21ce3460730b0fcb00ce68adf
Commits on Jul 4, 2019
  1. A couple small updates related to rowwise quantized FC symmetric (#3203)

    jfix71 authored and facebook-github-bot committed Jul 4, 2019
    Summary:
    - Slightly increase allowed error threshold for rowwiseQuantizedFCTestSymmetric test for a private backend's benefit
    - Add option for symmetric quantization for RecSys
    Pull Request resolved: #3203
    
    Differential Revision: D16109624
    
    Pulled By: jfix71
    
    fbshipit-source-id: af4ee89d993a6cad8e30e6395c357395b23603fc
Commits on Jun 26, 2019
  1. Fix lint (references, curly braces, shadow variables, etc.) (#3178)

    jfix71 authored and facebook-github-bot committed Jun 26, 2019
    Summary:
    Fix lint found in Optimizer. Mostly NFC.
    Pull Request resolved: #3178
    
    Test Plan: Tests still pass.
    
    Differential Revision: D16007570
    
    Pulled By: jfix71
    
    fbshipit-source-id: b94bc65ffbc4f872c94ad4ffa369a0975cb97f10
  2. Refactor Optimizer lib into {Graph,IR}Optimizer libs (#3165)

    jfix71 authored and facebook-github-bot committed Jun 26, 2019
    Summary:
    Allow for more precise dependencies on libs. Most places rely on either the Graph or IR Optimizer, not both. Related to #3161.
    Pull Request resolved: #3165
    
    Test Plan: All tests pass.
    
    Reviewed By: bertmaher
    
    Differential Revision: D15979435
    
    Pulled By: jfix71
    
    fbshipit-source-id: 79b96cc791c25c06305c66e1bb0e52aa64ebff3e
Commits on Jun 21, 2019
  1. Introduce FunctionPass (#3147)

    jfix71 authored and facebook-github-bot committed Jun 21, 2019
    Summary:
    Introduce the `FunctionPass`. In future PRs I would like to integrate these FunctionPasses into a PassManager, which will run some pre/post pass logic. It will also query a backend to determine what passes to run/disable, as well as load specific pass pipelines defined by the backend.
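
    A hedged sketch of what such an interface could look like (illustrative; the actual class may differ in names and signature):

    ```
    class FunctionPass {
    public:
      virtual ~FunctionPass() = default;
      /// Run the pass over \p F. Returns true iff the Function was modified,
      /// so a PassManager can decide whether to iterate or re-verify afterwards.
      virtual bool run(Function *F, const CompilationContext &cctx) = 0;
      /// Name used for logging and for backend-driven pipeline configuration.
      virtual llvm::StringRef getName() const = 0;
    };
    ```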
    
    Related to #1641
    
    Documentation: Will add once I also add in PassManager to interface with it.
    Pull Request resolved: #3147
    
    Test Plan: Passes all tests.
    
    Reviewed By: arunm-git
    
    Differential Revision: D15931749
    
    Pulled By: jfix71
    
    fbshipit-source-id: e40aabc37b428a0571041b4c655d2591008fd1ee
Commits on Jun 19, 2019
  1. Make parameters tunable from the command line. Rename vars to lowerCamelCase.

    jfix71 authored and facebook-github-bot committed Jun 19, 2019
    Summary: Pull Request resolved: #3139
    
    Differential Revision: D15907043
    
    Pulled By: jfix71
    
    fbshipit-source-id: a1035488b9f384da3a3c94ace3277747f84728da
Commits on Jun 14, 2019
  1. Add Concat sweep tests (#3096)

    jfix71 authored and facebook-github-bot committed Jun 14, 2019
    Summary:
    Add sweep tests across various Concat parameters. I tried to select input counts, sizes, and numbers of dimensions representative of a wide range of workloads. On my Macbook using 8 cores and gtest-parallel this takes about 110 seconds in Release. Hence, this also moves the ParameterSweepTest to `stress/`.
    
    Pull Request resolved: #3096
    
    Reviewed By: rdzhabarov
    
    Differential Revision: D15809093
    
    Pulled By: jfix71
    
    fbshipit-source-id: 0d3770be0f2765a24d95350ac98a158d3212d698
Commits on Jun 12, 2019
  1. Move RELU into create*MLP(); Cleanup uses of explicit indices in dims()

    jfix71 authored and facebook-github-bot committed Jun 12, 2019
    Summary: Pull Request resolved: #3086
    
    Differential Revision: D15781491
    
    Pulled By: jfix71
    
    fbshipit-source-id: eb60b062e5d11fbd074dad54150ee9ba03f54567
Commits on Jun 11, 2019
  1. Fix incorrect check from GE to GT (#3072)

    jfix71 authored and facebook-github-bot committed Jun 11, 2019
    Summary:
    The test was supposed to check that the number of users is `> 1`, not `>= 1`.
    Pull Request resolved: #3072
    
    Differential Revision: D15754029
    
    Pulled By: jfix71
    
    fbshipit-source-id: 67239f4a7f2b0c97939178f28fc3111330b77c76
Commits on Jun 7, 2019
  1. Add verbose mode to isEqual check for easier debugging. (#3054)

    jfix71 authored and facebook-github-bot committed Jun 7, 2019
    Summary:
    Add a verbose mode to isEqual so that when there is a mismatch during our unit tests we log info about the error. I disabled this in the two places where we aren't using it in unit tests and instead want to exit early when Tensors aren't equal, i.e. GraphOptimizer and PlaceholderBindings.
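
    A hedged usage sketch (the exact parameter order and defaults are assumptions):

    ```
    // In a unit test: on mismatch, isEqual logs details about the failing
    // elements instead of just returning false.
    EXPECT_TRUE(result.isEqual(expected, /*allowedError=*/1e-4, /*verbose=*/true));
    ```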
    Pull Request resolved: #3054
    
    Differential Revision: D15702217
    
    Pulled By: jfix71
    
    fbshipit-source-id: 2d2d419b84b7445e60955bbce2a9e96a8edbdaa0
Commits on Jun 6, 2019
  1. Move Fixes above Test Plan

    jfix71 authored and facebook-github-bot committed Jun 6, 2019
    Summary: Pull Request resolved: #3059
    
    Differential Revision: D15703977
    
    Pulled By: jfix71
    
    fbshipit-source-id: fd60ff22d163ee5244e81c474be136665c2ff538
Commits on Jun 4, 2019
  1. Add FC sweep test

    jfix71 authored and facebook-github-bot committed Jun 4, 2019
    Summary: Pull Request resolved: #3035
    
    Differential Revision: D15605326
    
    Pulled By: jfix71
    
    fbshipit-source-id: d3b7a2e461936c5660fe974cafd0363d267be8aa
Commits on May 31, 2019
  1. Move ConvTest to ParameterSweepTest, and add MatMul to it (#3021)

    jfix71 authored and facebook-github-bot committed May 31, 2019
    Summary:
    Rename the ConvTest to ParameterSweepTest, and then add MatMul sweep tests to it. I somewhat arbitrarily picked the dimensions for MatMul sweeping. FYI on my macbook it took ~80 seconds to run these tests sequentially, and ~26 seconds in parallel w/ gtest-parallel.
    Pull Request resolved: #3021
    
    Reviewed By: rdzhabarov
    
    Differential Revision: D15574721
    
    Pulled By: jfix71
    
    fbshipit-source-id: 627e759bac4642e2b813f3b9d4d970a00ca44ce2
  2. Use compareAgainstInterpreter(); add more precisions (#3018)

    jfix71 authored and facebook-github-bot committed May 31, 2019
    Summary:
    Move ConvTest to be similar to all other tests comparing against backends. This also allows for easy testing of varying precisions; I added Int8 and FP16 tests.
    Pull Request resolved: #3018
    
    Reviewed By: rdzhabarov
    
    Differential Revision: D15571651
    
    Pulled By: jfix71
    
    fbshipit-source-id: cda5e4d4ed8cfed33642dc8a674be1d3273ad7f5
Commits on May 30, 2019
  1. Add option to clone a FunctionTensorPair repeatedly inside itself (#2967)

    jfix71 authored and facebook-github-bot committed May 30, 2019
    Summary:
    Take a Function with a single SaveNode and copy it repeatedly inside itself. This is to enable stress testing an architecture with many embarrassingly parallel graphs.
    Pull Request resolved: #2967
    
    Differential Revision: D15531889
    
    Pulled By: jfix71
    
    fbshipit-source-id: 3ca45467b38dbd52fb1a81eed3fefc2899f84c14
Commits on May 24, 2019
  1. Move all precision transformation into optimizeFunction() (#2700)

    jfix71 authored and facebook-github-bot committed May 24, 2019
    Summary:
    Move quantization/profiling/fp16 conversion into `optimizeFunction()`. This means we can call lower just once in a normal way, instead of forcing it to occur twice. Also cleans up callers to `compile()` so they do not need to worry about when/how to quantize/profile/convert to fp16.
    
    Documentation: Added.
    Pull Request resolved: #2700
    
    Reviewed By: bertmaher
    
    Differential Revision: D15303198
    
    Pulled By: jfix71
    
    fbshipit-source-id: e7d31460615c6c55077e149a0e781cb6a1df3283
Commits on May 23, 2019
  1. Change command line memory options to kilobytes (#2956)

    jfix71 authored and facebook-github-bot committed May 23, 2019
    Summary:
    CI is broken because the `en2gr_cpu_partition_test` tries to set the CPU memory on the command line, but it seems that command line parameters don't work with `uint64_t` on some platforms. I've hit this issue in the past, and I'm not sure why it's the case.
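
    A hedged sketch of the workaround, using `llvm::cl::opt`; the option name and scaling are illustrative:

    ```
    #include "llvm/Support/CommandLine.h"

    // Parse the memory size in kilobytes as a plain unsigned, which parses
    // portably on the command line, then scale to bytes internally.
    llvm::cl::opt<unsigned> cpuMemoryKb(
        "cpu-memory", llvm::cl::desc("CPU device memory available, in kilobytes"),
        llvm::cl::init(0));

    uint64_t cpuMemoryBytes = uint64_t(cpuMemoryKb) * 1024;
    ```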
    Pull Request resolved: #2956
    
    Reviewed By: bertmaher
    
    Differential Revision: D15458152
    
    Pulled By: jfix71
    
    fbshipit-source-id: de91dd71ba7182d54fd7219f52091f009f51d720