Upgrade all dependencies to the latest versions #761

Merged
merged 21 commits into e-mission:master from upgrade_versions on Aug 21, 2020

Conversation

shankari (Contributor)

  • Upgraded to the most recent version of everything, with the following caveats:
    • miniconda is 4.8.3 instead of 4.8.4 since there doesn't appear to be a
      4.8.4 labelled binary yet and I don't want to use latest
    • remove versioned python since it is installed automatically
    • specifying python also causes major dependency incompatibilities, since
      not all packages have been upgraded to their latest versions
  • Remove the manual installs since we are upgrading to the most recent version

Testing done:

  • Setup works locally
  • Running tests on CI; might run locally with persistent failures

to environment.yml.
But only add the direct dependencies since the indirect dependencies take care of themselves
MongoDB has deprecated `count()` on cursors and has no plans of ever restoring it
https://jira.mongodb.org/browse/PYTHON-1724

We used `Collection.count()` and `Cursor.count()` extensively.
This commit replaces all of those with the new API calls.

Concretely, the find and replace regex was:
- Collection.`find\((.+)\)\.count\(\)` ➡️  `count_documents(\1)`
- Collection.`find\(\)\.count\(\)` ➡️  `estimated_document_count()`
- Collection.`count\(\)` ➡️  `estimated_document_count()`
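For illustration, a minimal before/after sketch of the resulting calls (the collection and filter names here are examples, not the actual code):

```python
import pymongo

client = pymongo.MongoClient()
coll = client.Stage_database.Stage_analysis_timeseries   # example collection name
query = {"metadata.key": "analysis/recreated_location"}  # example filter

# Old API (removed from recent pymongo versions):
#   n_matching = coll.find(query).count()
#   n_total = coll.count()
# New API:
n_matching = coll.count_documents(query)    # exact count for a filter
n_total = coll.estimated_document_count()   # fast, metadata-based total count
```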

In a couple of files, we returned a cursor from a function, checked the count
and then used values from the cursor. I had to replace these with a separate
call to get the count.

These were in:
- `emission/storage/timeseries/builtin_timeseries.py`
- `emission/storage/decorations/section_queries.py`
- `emission/storage/decorations/stop_queries.py`

I also had to change `$where` ➡️  `$expr` in
`emission/analysis/modelling/tour_model/prior_unused/exploratory_scripts/explore_smoothing_trajectories.py`
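As a hedged illustration of that rewrite (the field names below are made up; the actual query in that script differs):

```python
# $where runs JavaScript per document and is deprecated; $expr (MongoDB 3.6+)
# expresses the same comparison with aggregation operators.
old_query = {"$where": "this.data.ts < this.data.exit_ts"}     # hypothetical fields
new_query = {"$expr": {"$lt": ["$data.ts", "$data.exit_ts"]}}  # equivalent comparison
```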
`as_matrix` has been removed from pandas

From [0.22 docs](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.as_matrix.html)

> This method is provided for backwards compatibility. Generally, it is recommended to use ‘.values’.

However, in the [most recent docs](https://pandas.pydata.org/pandas-docs/version/1.1.0/reference/api/pandas.DataFrame.values.html)

> We recommend using DataFrame.to_numpy() instead.

Fortunately, that doesn't appear to be [deprecated (yet)](https://pandas.pydata.org/pandas-docs/version/1.1.0/reference/api/pandas.DataFrame.to_numpy.html#pandas.DataFrame.to_numpy)

Replace all `as_matrix` by `to_numpy`
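A minimal illustration of the replacement (the DataFrame contents are arbitrary):

```python
import pandas as pd

df = pd.DataFrame({"latitude": [37.875767], "longitude": [-122.258413]})

# Old (removed in pandas 1.0):
#   arr = df.as_matrix()
# New:
arr = df.to_numpy()
```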
Due to changes in the way that pandas interacts with numpy, `numpy.nonzero`
does not work properly with pandas series (see example below).

```
$ ./e-mission-py.bash
Python 3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 02:37:09)
[Clang 10.0.1 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> import numpy as np
>>> s = pd.Series([True, False, True, False, False, True])
>>> np.nonzero(s)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<__array_function__ internals>", line 6, in nonzero
  File "/Users/kshankar/miniconda-4.8.3/envs/emission/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 1908, in nonzero
    return _wrapfunc(a, 'nonzero')
  File "/Users/kshankar/miniconda-4.8.3/envs/emission/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 55, in _wrapfunc
    return _wrapit(obj, method, *args, **kwds)
  File "/Users/kshankar/miniconda-4.8.3/envs/emission/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 48, in _wrapit
    result = wrap(result)
  File "/Users/kshankar/miniconda-4.8.3/envs/emission/lib/python3.7/site-packages/pandas/core/generic.py", line 1787, in __array_wrap__
    return self._constructor(result, **d).__finalize__(
  File "/Users/kshankar/miniconda-4.8.3/envs/emission/lib/python3.7/site-packages/pandas/core/series.py", line 314, in __init__
    f"Length of passed values is {len(data)}, "
ValueError: Length of passed values is 1, index implies 6.
>>> np.nonzero(s.to_numpy())
(array([0, 2, 5]),)
```

We fix this by changing

`np.nonzero(s)` ➡️  `np.nonzero(s.to_numpy())` iff `s` is a pandas series

This change was only in one file
- `emission//analysis/intake/segmentation/section_segmentation_methods/smoothed_high_confidence_motion.py`: fixed

The others were already numpy arrays and did not need conversion

- `emission//analysis/classification/inference/mode/seed/pipeline.py`: the `featureMatrix` is a numpy array
- `emission//analysis/classification/inference/Classifier.ipynb`: Ditto
- `emission//analysis/modelling/tour_model/prior_unused/exploratory_scripts/plot_error_types.py`: the array is already a numpy array
- `emission//analysis/intake/cleaning/cleaning_methods/jump_smoothing.py`: `inlier_mask_` seems to be a normal array. Tested in `emission//tests/analysisTests/intakeTests/TestLocationSmoothing.py`
- `emission//incomplete_tests/TestGpsSmoothing.py`: ditto
@shankari (Contributor Author)

While testing this locally, I ran into a regression caused by an assertion error in place squishing

The mismatch is in one of the initial speed computations

  File "/Users/kshankar/e-mission/e-mission-server/emission/analysis/intake/cleaning/clean_and_resample.py", l
ine 969, in link_trip_start
    ts, cleaned_trip_data, cleaned_start_place_data)
  File "/Users/kshankar/e-mission/e-mission-server/emission/analysis/intake/cleaning/clean_and_resample.py", l
ine 1152, in _fix_squished_place_mismatch
    assert False

The change is small, so depending on the results of the investigation, we could just fix the "ground truth" and move on. But let's try to understand how it happened first.

[0.0 1.3918799677570426

[0.0 1.391207493734843

@shankari (Contributor Author)

So the original speeds were almost identical, but after inserting, there was a mismatch.

1  5f34bd7e6fdd4851b0124abd  1.452702e+09  2016-01-13T08:22:27.391337-08:00
1  37.875767 -122.258413   41.756399  1.391880           1.391880
1  5f34bcc6be14470893941331  1.452702e+09  ...  1.391880           1.391207

The change happens while inserting

fix_squished_place: before inserting, speeds = [0.0, 0.31120032155672667, 5.279835474516015, 5.590681152758044, 5.590709355970388, 5.590737558966657, 5.422935527879857, 5.413431856340051, 6.160750357248137, 3.013363168584424]
fix_squished_place: after inserting, speeds = [0.0, 1.3918799677570426, 0.31120032155672667, 5.279835474516015, 5.590681152758044, 5.590709355970388, 5.590737558966657, 5.422935527879857, 5.413431856340051, 6.160750357248137]
fix_squished_place: before inserting, speeds = [0.0, 0.3111992781051712, 5.279608473667694, 5.590681151577752, 5.590709354830196, 5.590737557826475, 5.422942143907315, 5.413431855536246, 6.160681173619549, 3.013497148854115]
fix_squished_place: after inserting, speeds = [0.0, 1.391207493734843, 0.3111992781051712, 5.279608473667694, 5.590681151577752, 5.590709354830196, 5.590737557826475, 5.422942143907315, 5.413431855536246, 6.160681173619549]

Looking at the code, we insert and then fix a bunch of values. The insert check depends on the distance_delta, which is also slightly different

  distance_delta = 41.75639903271128 < 100, continuing with squish
  distance_delta = 41.73622481204529 < 100, continuing with squish

And the reset values are identical, except for some rounding.

squishing mismatch: resetting trip start_loc [-122.2584129, 37.8757669] to cleaned_start_place.location [-122.2587421, 37.8754958]
squishing mismatch: resetting trip start_loc [-122.258413, 37.875767] to cleaned_start_place.location [-122.258742, 37.875496]

@shankari (Contributor Author) commented Aug 13, 2020

Let's see where we got these values from and how we rounded them. We got those
values from the raw_start_place, which is again consistent bar rounding.

Adding distance 8.619875014824052 to original 3111.61930762 to extend section start from [-122.2584253, 37.8758438] to [-122.2584129, 37.8757669]
After subtracting time 1.566663247456032 from original 539.978000164 to cover additional distance 8.619875014824052 at speed 5.502059889910047, new_start_ts = 1452702147.39

Raw place is 5f34bcc1be144708939411ad and corresponding location is 56967ed55771abda98
Adding distance 8.626548757749633 to original 3111.6193076220175 to extend section start from [-122.258425, 37.875844] to [-122.258413, 37.875767]
After subtracting time 1.5678762009787335 from original 539.978000164032 to cover additional distance 8.626548757749633 at speed 5.502059889910047, new_start_ts = 1452702147.3901238

But back in section segmentation, the related place seems to be accurate

with iloc
 'fmt_time': '2016-01-13T08:24:59.147000-08:00'
 'coordinates': [-122.2657953 37.8742511]}
 'ts': 1452702299.1470001

 'fmt_time': '2016-01-13T08:29:28.817000-08:00'
 'coordinates': [-122.2819714 37.8702897]}
 'ts': 1452702568.8169999

with iloc
 'coordinates': [-122.2657953 37.8742511]}
 'fmt_time': '2016-01-13T08:24:59.147000-08:00'
 'ts': 1452702299.147

 'coordinates': [-122.2819714 37.8702897]}
 'fmt_time': '2016-01-13T08:29:28.817000-08:00'
 'ts': 1452702568.817

@shankari (Contributor Author)

Ok so looking further, the [-122.258413, 37.875767] is a recreated location,
and the timestamps are slightly different.

 'metadata': {'key': 'analysis/recreated_location'
 'data': Recreatedlocation({'fmt_time': '2016-01-13T08:22:27.391337-08:00'
 'loc': {'type': 'Point' 'coordinates': [-122.2584129 37.8757669]}
 'ts': 1452702147.3913367
 'distance': 41.75639903271128
 'speed': 1.3918799677570426

 'metadata': {'key': 'analysis/recreated_location'
 'data': Recreatedlocation({'fmt_time': '2016-01-13T08:22:27.390124-08:00'
 'loc': {'type': 'Point' 'coordinates': [-122.258413 37.875767]}
 'ts': 1452702147.3901238
 'distance': 41.73622481204529
 'speed': 1.391207493734843

Which is because the raw trip point is different

 'metadata': {'key': 'segmentation/raw_trip'
 'start_ts': 1452701818.444
 'start_fmt_time': '2016-01-13T08:16:58.444000-08:00'
 'start_loc': {'type': 'Point' 'coordinates': [-122.2583791 37.8757955]}
 'end_ts': 1452701938.572
 'end_fmt_time': '2016-01-13T08:18:58.572000-08:00'
 'end_loc': {'type': 'Point' 'coordinates': [-122.2584129 37.8757669]}
 'duration': 120.12800002098083
 'distance': 4.349090147122709}})

 'metadata': {'key': 'segmentation/raw_trip'
 'start_ts': 1452701818.444
 'start_fmt_time': '2016-01-13T08:16:58.444000-08:00'
 'start_loc': {'type': 'Point' 'coordinates': [-122.258379 37.875796]}
 'end_ts': 1452701938.572
 'end_fmt_time': '2016-01-13T08:18:58.572000-08:00'
 'end_loc': {'type': 'Point' 'coordinates': [-122.258413 37.875767]}
 'duration': 120.12800002098083
 'distance': 4.393622836921463}})

@shankari (Contributor Author)

Not sure why the change would trigger an assert, though. While this may be different from the ground truth, it should be internally consistent.

@shankari (Contributor Author)

So the mismatch while inserting the entry is caused by the distance mismatch

distance_delta = 41.75639903271128 < 100, continuing with squish
distance_delta = 41.73622481204529 < 100, continuing with squish
>>> 41.75639903271128 / 30
1.3918799677570426
>>> 41.73622481204529 / 30
1.391207493734843
fix_squished_place: after inserting, speeds = [0.0, 1.3918799677570426]
fix_squished_place: after inserting, speeds = [0.0, 1.391207493734843]

But why is it different while recalculating?

The new reconstructed location is the same

fix_squished_place: added new reconstructed location Recreatedlocation({'loc': {'type': 'Point', 'coordinates': [-122.2587421, 37.8754958]}, 'fmt_time': '2016-01-13T08:21:57.391337-08:00' to match new start point

fix_squished_place: added new reconstructed location Recreatedlocation({'loc': {'type': 'Point', 'coordinates': [-122.2587421, 37.8754958]}, 'fmt_time': '2016-01-13T08:21:57.390124-08:00' to match new start point

So the newly added point is the same, and the second point is the same, so the recalculated speeds are the same. But the distance was different so the non-validated insertion was different. What is different between the two distance calculations?

@shankari (Contributor Author)

Ah, so I think I have the root cause! It looks like the coordinates in the loc field are rounded from the original lat, lon values.

fix_squished_place: after updating, old first location data = {'key': 'analysis/recreated_location', 'data': Recreatedlocation({'latitude': 37.8757669, 'longitude': -122.2584129, 'loc': {'type': 'Point', 'coordinates': [-122.2584129, 37.8757669]}})

fix_squished_place: after updating, old first location data = {'key': 'analysis/recreated_location', 'data': Recreatedlocation({'latitude': 37.8757669, 'longitude': -122.2584129, 'loc': {'type': 'Point', 'coordinates': [-122.258413, 37.875767]}})

When we compute the delta distance, we use the coordinates, which are rounded in this version

    distance_delta = ecc.calDistance(cleaned_trip_data.start_loc.coordinates,
                                     cleaned_start_place_data.location.coordinates)
squishing mismatch: resetting trip start_loc [-122.2584129, 37.8757669] to cleaned_start_place.location [-122.2587421, 37.8754958]
squishing mismatch: resetting trip start_loc [-122.258413, 37.875767] to cleaned_start_place.location [-122.258742, 37.875496]

But when we recompute the speeds and distances, we use the latitude and longitude fields.

    point_list = [ad.AttrDict(row) for row in points_df.to_dict('records')]
    zipped_points_list = list(zip(point_list, point_list[1:]))

    distances = [pf.calDistance(p1, p2) for (p1, p2) in zipped_points_list]
    distances.insert(0, 0)
def calDistance(point1, point2):
    return ec.calDistance([point1.longitude, point1.latitude], [point2.longitude, point2.latitude])

A simple fix would be to use a consistent value (either coordinates or lat/lon fields) throughout.
A different fix would be to see why the coordinates are rounded and fix that.

In this case, since we are upgrading, probably fixing the rounding is the safer fix.

@shankari (Contributor Author)

Aha! from https://pypi.org/project/geojson/

> GeoJSON Object-based classes in this package have an additional precision attribute which rounds off coordinates to 6 decimal places (roughly 0.1 meters) by default and can be customized per object instance.
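A quick way to see this behavior, assuming the `precision` keyword that geojson 2.5.0 added to its geometry constructors:

```python
import geojson

# Default precision (6 decimal places) rounds the coordinates
p = geojson.Point((-122.2584129, 37.8757669))
print(p["coordinates"])       # [-122.258413, 37.875767]

# A higher per-object precision preserves the original values
p_full = geojson.Point((-122.2584129, 37.8757669), precision=7)
print(p_full["coordinates"])  # [-122.2584129, 37.8757669]
```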

So our options are:

  1. fix this particular assert by using the pf.calDistance method for both calculations. Note that this will retain precision differences between the two data sources and is sub-optimal.
  2. fix this completely by specifying a custom precision at all places where we create geojson objects
  3. fix this completely by rounding all lat/lon points to 6 decimal places. This is probably the right long-term solution, since the extra precision gives us a false measure of accuracy (see also) but is likely to be a fairly heavy lift code-wise
  4. punt this by reverting to an older version of geojson

Given that this change is already long and complex, I vote for (4).

@shankari (Contributor Author)

From https://github.com/jazzband/geojson/blob/master/CHANGELOG.rst, this was introduced in version 2.5.0.
Reverting to 2.4.2 should fix it.

@shankari (Contributor Author)

That fixed the assertion error but resulted in a ground truth regression. Let's try reverting back to 2.3.0, and if that doesn't work, we have to debug further.

======================================================================
FAIL: testZeroDurationPlaceInterpolationSingleSync (__main__.TestPipelineRealData)
----------------------------------------------------------------------
Traceback (most recent call last):
AssertionError: 80961.11998618588 != 80856.14036388611 within 2 places (104.97962229976838 difference)

@shankari (Contributor Author)

Reverting back to 2.3.0 didn't fix it. However, looking at the full list of regressions, we have an issue with some of the modules (e.g. location smoothing). If we are not smoothing out the correct values, that could be a reason for the difference in the distances. Let's fix that first.

In 0bb8e59, we determined that the
`inlier_mask_` was a numpy array.  This is true for `SmoothBoundary` and
`SmoothPosdap`, but NOT for `SmoothZigzag`.

`SmoothZigzag` does use a pandas Series

```
        self.inlier_mask_ = pd.Series([True] * with_speeds_df.shape[0])
```

So we need to use `to_numpy()` while accessing it
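A minimal sketch of the access-site change (standalone here; in the code the Series is `self.inlier_mask_`):

```python
import numpy as np
import pandas as pd

# SmoothZigzag stores the mask as a pandas Series...
inlier_mask = pd.Series([True, False, True, False, False, True])
# ...so convert to a numpy array before calling np.nonzero on it
outlier_indices = np.nonzero(np.logical_not(inlier_mask.to_numpy()))[0]
print(outlier_indices)  # [1 3 4]
```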
@shankari (Contributor Author) commented Aug 13, 2020

The filtering error is

======================================================================
FAIL: testFilterSection (__main__.TestLocationSmoothing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "emission/tests/analysisTests/intakeTests/TestLocationSmoothing.py", line 240, in testFilterSection
    self.assertEqual(len(filtered_points_entry.data.deleted_points), 12)
AssertionError: 8 != 12

And indeed we deleted 8 points instead of the original 12

bad_segments = [Segment(25, 26, 0.0), Segment(64, 65, 0.0), Segment(114, 121, 0.0), Segment(123, 124, 0.0), Segment(126, 127, 0.0), Segment(130, 131, 0.0)]
bad_segments = [Segment(25, 26, 0.0), Segment(64, 65, 0.0), Segment(113, 114, 0.0), Segment(121, 123, 7.372715972019115), Segment(124, 125, 0.0), Segment(126, 127, 0.0), Segment(130, 131, 0.0)]
after setting values, outlier_mask = [ 25  64 114 115 116 117 118 119 120 123 126 130]
after setting values, outlier_mask = [ 25  64 113 121 122 124 126 130]
after filtering ignored points, 12 -> 12
deleted 12 points

after filtering ignored points, 8 -> 8
deleted 8 points

@shankari (Contributor Author)

The main difference seems to start here:

shortest_non_cluster_segment = 9
shortest_non_cluster_segment = 5

Checking the segment list, we have (in both cases):

0. For cluster 0 - 25, distance = 8627.966220167396, is_cluster = False
1. For cluster 25 - 26, distance = 0.0, is_cluster = True
2. For cluster 26 - 64, distance = 14053.184009991968, is_cluster = False
3. For cluster 64 - 65, distance = 0.0, is_cluster = True
4. For cluster 65 - 114, distance = 15577.108477310048, is_cluster = False
5. For cluster 114 - 121, distance = 0.0, is_cluster = True
6. For cluster 121 - 124, distance = 7304.23306547181, is_cluster = False
7. For cluster 124 - 126, distance = 771.1159696538559, is_cluster = False
8. For cluster 126 - 127, distance = 0.0, is_cluster = True
9. For cluster 127 - 130, distance = 121.51096760524027, is_cluster = False
10. For cluster 130 - 138, distance = 1696.5583526264854, is_cluster = False

Segment 9 is clearly the shortest non-cluster segment.
Segment 5 is shorter, but is a cluster segment.

Why is the code picking it?

In pandas, the behavior of `argmin` has changed. It used to return the **label** of the minimum. It now returns the **position** of the minimum.

This was foreshadowed in

https://pandas-docs.github.io/pandas-docs-travis/reference/api/pandas.Series.argmin.html

> Deprecated since version 0.21.0.

> The current behaviour of ‘Series.argmin’ is deprecated, use ‘idxmin’ instead. The behavior of ‘argmin’ will be corrected to return the positional minimum in the future. For now, use ‘series.values.argmin’ or ‘np.argmin(np.array(values))’ to get the position of the minimum row.

And verified through experimentation.

```
$ ./e-mission-py.bash
Python 3.6.1 | packaged by conda-forge | (default, May 11 2017, 18:00:28)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> s = pd.Series([8.6, 14, 15, 7, 0.7, 0.1, 1.6], index=[0,2,4,6,7,9,10])
>>> s.argmin()
9
>>> s.idxmin()
9
>>>
```

```
$ ./e-mission-py.bash
Python 3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 02:37:09)
[Clang 10.0.1 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> s = pd.Series([8.6, 14, 15, 7, 0.7, 0.1, 1.6], index=[0,2,4,6,7,9,10])
>>> s.argmin()
5
>>> s.idxmin()
9
>>>
```

Let's fix this by replacing `argmin` ➡️  `idxmin` everywhere
It turns out that we were also passing cursors around in the pipeline.
Let's fix it the same way as f122aa2
by returning both the cursor and the count. This now works properly.

The `TestPipelineSeed` tests now pass completely
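The pattern, as a hypothetical sketch (the function and variable names are illustrative, not the actual pipeline code):

```python
def find_entries_with_count(collection, query):
    # Cursor.count() no longer exists, so compute the count up front and
    # return it alongside the cursor instead of letting callers count it.
    count = collection.count_documents(query)
    cursor = collection.find(query)
    return cursor, count

# Callers unpack both values:
#   entries, entry_count = find_entries_with_count(ts_coll, query)
```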
Removes a bunch of User Warnings of the form

```
e-mission-server/emission/analysis/intake/cleaning/location_smoothing.py:55: UserWarning: DataFrame columns are not unique, some columns will be omitted.
```
Wasn't caught earlier because it was on multiple lines.
Follow on to 2517fe0
and f122aa2
To avoid format warnings

```
/home/runner/miniconda-4.8.3/envs/emissiontest/lib/python3.7/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator RandomForestClassifier from version 0.18.1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
```
This is a follow-on fix to 3032f79
Without this fix, there were failures similar to

```
File "/home/runner/work/e-mission-server/e-mission-server/emission/analysis/intake/cleaning/clean_and_resample.py", line 119, in save_cleaned_segments_for_timeline
    filtered_trip = get_filtered_trip(ts, trip)
...
KeyError: "['heading'] not found in axis"
```
Due to warning

```
e-mission-server/e-mission-server/emission/tests/common.py:41: DeprecationWarning: collection_names is deprecated. Use list_collection_names instead.
  collections = db.collection_names()
```
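The fix is the direct rename that the warning suggests (database name below is an example):

```python
import pymongo

db = pymongo.MongoClient().Stage_database  # example database name

# Old (deprecated):
#   collections = db.collection_names()
# New:
collections = db.list_collection_names()
```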
Due to warning

```
e-mission-server/emission/tests/coreTests/TestBase.py:105: DeprecationWarning: Please use assertRaisesRegex instead.
  with self.assertRaisesRegexp(AttributeError, ".*not defined.*"):
```
Now that all the software dependencies are done
Before this change, we were returning the values in the database order

```
    def getKeyListForType(self, message_type):
        return self.db.find({"user_id": self.user_id, "metadata.type": message_type}).distinct("metadata.key")
```

The database order changed between 3.4.0 and 4.4.0, leading to a regression when we tested with
c71712f

```
======================================================================
FAIL: testDeleteObsoleteEntries (__main__.TestBuiltinUserCacheHandlerOutput)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "emission/tests/netTests/TestBuiltinUserCacheHandlerOutput.py", line 121, in testDeleteObsoleteEntries
    self.assertEqual(uc.getDocumentKeyList(), ["2015-12-30", "2015-12-29"])
AssertionError: Lists differ: ['2015-12-29', '2015-12-30'] != ['2015-12-30', '2015-12-29']

First differing element 0:
'2015-12-29'
'2015-12-30'

- ['2015-12-29', '2015-12-30']
+ ['2015-12-30', '2015-12-29']
```

We now force an order and update the checks to verify that order
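One way to force such an order, sketched with a hypothetical sort key (the actual fix may differ):

```python
def getKeyListForType(self, message_type):
    # Sort explicitly instead of relying on database order (which changed
    # between MongoDB 3.4 and 4.4), then deduplicate while preserving order.
    query = {"user_id": self.user_id, "metadata.type": message_type}
    cursor = self.db.find(query).sort("metadata.write_ts", -1)
    key_list = []
    for entry in cursor:
        key = entry["metadata"]["key"]
        if key not in key_list:
            key_list.append(key)
    return key_list
```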
If we use the e-mission server base image instead, it has the conda version
built in, so we can't test with a changed conda version.

Also dockerhub does not currently have an image for the version of conda that
we are using (4.8.3). The most recent version on dockerhub is 4.8.2 published 5
months ago.

So let's just start with a blank ubuntu image and install everything from scratch.
That will allow us to test with the changed conda version before we merge
With this fix, `docker-compose` seems to work locally
Make corresponding changes to all the scripts in `bin` as well.
Concretely:
- change all instances of `Cursor.count()`, similar to
e-mission@f122aa2

None of the other changes identified apply to these fairly simple scripts

```
$ grep -r as_matrix bin/
$ grep -r nonzero bin/
$ grep -r argmin bin/
$
```
shankari merged commit c832bed into e-mission:master Aug 21, 2020
shankari deleted the upgrade_versions branch August 21, 2020 23:54
shankari added a commit to e-mission/e-mission-docker that referenced this pull request Sep 3, 2020
This makes it consistent with 57c83732591742f4967e41978b7047b14c47dbb0
and 98882c79c86d613024cb4dc85070185e89ed5634
and with
e-mission/e-mission-server#761 in general

Other example updates TBD
jf87 pushed a commit to jf87/e-mission-server that referenced this pull request Jun 21, 2021: Upgrade all dependencies to the latest versions