ENH: Convert read_gbq() function to use google-cloud-python #25

Merged
merged 43 commits into from Dec 20, 2017

Conversation

8 participants
@jasonqng
Contributor

jasonqng commented Apr 8, 2017

Description

I've rewritten the current read_gbq() function using google-cloud-python, which handles the naming of structs and arrays out of the box. For more discussion, see #23.

However, because google-cloud-python potentially uses different authentication flows and may break existing behavior, I've left the existing read_gbq() function in place and named this new function from_gbq(). If in the future we are able to reconcile the authentication flows and/or decide to deprecate flows that are not supported in google-cloud-python, we can rename this to read_gbq().

UPDATE: As requested in a comment by @jreback (https://github.com/pydata/pandas-gbq/pull/25/files/a763cf071813c836b7e00ae40ccf14e93e8fd72b#r110518161), I deleted the old read_gbq(), named my new function read_gbq(), and removed all legacy functions and code.

I added a few lines to the requirements file, but I'll leave it to you @jreback to deal with the conda dependency issues you mentioned in #23.

Let me know if you have any questions or if any tests need to be written. You can confirm that it works by running the following:

q = """
select ROW_NUMBER() over () row_num, struct(a,b) col, c, d, c*d c_times_d, e
from
(select * from
    (SELECT 1 a, 2 b, null c, 0 d, 100 e)
    UNION ALL
    (SELECT 5 a, 6 b, 0 c, null d, 200 e)
    UNION ALL
    (SELECT 8 a, 9 b, 10.0 c, 10 d, 300 e)
)
"""
df = gbq.read_gbq(q, dialect='standard')
df
row_num                 col     c     d  c_times_d    e
      2  {u'a': 5, u'b': 6}   0.0   NaN        NaN  200
      1  {u'a': 1, u'b': 2}   NaN   0.0        NaN  100
      3  {u'a': 8, u'b': 9}  10.0  10.0      100.0  300
q = """
select array_agg(a) mylist
from
(select "1" a UNION ALL select "2" a)
"""
df = gbq.read_gbq(q, dialect='standard')
df
mylist
[1, 2]
q = """
select array_agg(struct(a,b)) col, f
from
(select * from
    (SELECT 1 a, 2 b, null c, 0 d, 100 e, "hello" f)
    UNION ALL
    (SELECT 5 a, 6 b, 0 c, null d, 200 e, "ok" f)
    UNION ALL
    (SELECT 8 a, 9 b, 10.0 c, 10 d, 300 e, "ok" f)
)
group by f
"""
df = gbq.read_gbq(q, dialect='standard')
df
col f
[{u'a': 5, u'b': 6}, {u'a': 8, u'b': 9}] ok
[{u'a': 1, u'b': 2}] hello

Confirmed that col_order and index_col still work (feel free to pull that out into a separate function, since there's now redundant code shared with read_gbq()). I also removed the type-conversion lines, which appear to be unnecessary: google-cloud-python and/or pandas does the necessary type conversion automatically, even when there are nulls, as you can confirm by examining the dtypes of the resulting dataframes.
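A minimal sketch of exercising both parameters, reusing the first query q above; the particular index and column ordering here are illustrative only:

# Illustrative sketch, not code from this PR.
df = gbq.read_gbq(q, dialect='standard',
                  index_col='row_num',
                  col_order=['e', 'col', 'c', 'd', 'c_times_d'])
df.dtypes  # the dtypes confirm the automatic type conversion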

@codecov-io

codecov-io commented Apr 8, 2017

Codecov Report

Merging #25 into master will decrease coverage by 43.58%.
The diff coverage is 7.84%.

Impacted file tree graph

@@             Coverage Diff             @@
##           master      #25       +/-   ##
===========================================
- Coverage   72.56%   28.97%   -43.59%     
===========================================
  Files           4        4               
  Lines        1578     1491       -87     
===========================================
- Hits         1145      432      -713     
- Misses        433     1059      +626
Impacted Files Coverage Δ
pandas_gbq/tests/test_gbq.py 27.98% <30%> (-54.36%) ⬇️
pandas_gbq/gbq.py 20.55% <6.29%> (-53.6%) ⬇️
pandas_gbq/_version.py 44.4% <0%> (+1.8%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cd76dde...26d6431. Read the comment docs.

@jreback
Contributor
jreback commented Apr 9, 2017

This needs to pass all of the original tests; it's simply an implementation change.

@jreback

Can you show the output of running the test suite?

@jreback
Contributor
jreback commented Apr 9, 2017

@jasonqng we have to be careful about back-compat here.

@jreback
Contributor
jreback commented Apr 9, 2017

You need to update the ci/requirements-*.pip files, removing the old requirements and adding the new ones.

@jasonqng jasonqng changed the title from Add new from_gbq() function using google-cloud-python to Convert read_gbq() function to use google-cloud-python Apr 10, 2017

@jasonqng
Contributor
jasonqng commented Apr 10, 2017

@jreback Yeah, the back-compatibility issues with authentication are partly why I suggested writing it as a new function, but hopefully we can replicate a form of the pop-up authentication -> refresh token flow with the new API (https://googlecloudplatform.github.io/google-cloud-python/stable/google-cloud-auth.html#user-accounts-3-legged-oauth-2-0-with-a-refresh-token). I might need some help with that if others are more familiar with it. Almost everything else should carry over, so I'm not too concerned about compatibility otherwise.
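For illustration only, a rough sketch of that 3-legged OAuth flow with a refresh token, assuming google-auth-oauthlib; client_config and the project name are placeholders, not code from this PR:

from google_auth_oauthlib.flow import InstalledAppFlow
from google.cloud import bigquery

# client_config is a hypothetical OAuth client configuration dict.
app_flow = InstalledAppFlow.from_client_config(
    client_config,
    scopes=['https://www.googleapis.com/auth/bigquery'])
credentials = app_flow.run_local_server()  # opens the browser pop-up

# The returned credentials include a refresh token and can be reused
# by the google-cloud-python client.
client = bigquery.Client(project='my-project', credentials=credentials)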

@tswast
Collaborator
tswast commented Apr 21, 2017

I recommend closing this PR.

The google-cloud-bigquery library is not yet 1.0. Breaking changes are very likely as we (Google) get the libraries into good shape for a 1.0 release (no specific timeline on that yet). There are also a few things yet to be implemented in google-cloud-bigquery, such as retry logic, which are in the "API client library".

We should revisit this after the google-cloud-bigquery library goes 1.0.

@jasonqng
Contributor
jasonqng commented Apr 21, 2017

@jreback In light of @tswast's comment, should we just close this, or should we go back to building this as a separate function (e.g. from_gbq()) as I had it before? We could clearly mark it as experimental. For my own selfish sake, I'd vote for the latter just so I can start using it in the meantime, before the google-cloud API goes 1.0, but I obviously defer to others here.

@jreback
Contributor
jreback commented Apr 21, 2017

We can easily just pin to a specific version if API stability is a concern, but in general I don't see this as a big deal.

No reason to wait for a 1.0.

Pandas itself is not even 1.0, and lots of people use / depend on it.

@parthea parthea added this to the 0.2.0 milestone Apr 22, 2017

@tswast
Collaborator
tswast commented May 23, 2017

@jasonqng You'll probably want to rebase this change after #39 gets merged. The google-auth library is used by both google-api-python-client and google-cloud-python, so you should be able to reuse my changes for user and service account authentication.

@parthea parthea changed the title from Convert read_gbq() function to use google-cloud-python to ENH: Convert read_gbq() function to use google-cloud-python Jun 13, 2017

@jreback jreback removed this from the 0.2.0 milestone Jul 7, 2017

@jreback
Contributor
jreback commented Jul 7, 2017

Is this PR relevant after #39?

@tswast

@parthea
Collaborator
parthea commented Jul 7, 2017

We still have #23 as an open issue. It would be great to move this forward to address that. I recently added a conda recipe for google-cloud-bigquery.
https://github.com/conda-forge/google-cloud-bigquery-feedstock

@jreback
Contributor
jreback commented Jul 7, 2017

@parthea OK, is it worth waiting on this for 0.2.0 to avoid even more changes?

@parthea
Collaborator
parthea commented Jul 7, 2017

My initial thought is that the milestone for this PR should be 0.3.0, as thorough testing is required. I think it ultimately depends on how soon we can begin testing this PR and whether we are in a hurry to release 0.2.0.

@jasonqng Could you please rebase?

@tswast
Collaborator
tswast commented Jul 8, 2017

Yeah, this PR is still relevant. #39 moves pandas-gbq to use google-auth but still uses the Google API client libraries. This PR moves it to the Google Cloud client libraries (which incidentally also use google-auth, so rebasing would be really helpful in completing the work required to get this working properly).

@jasonqng
Contributor
jasonqng commented Jul 10, 2017

@tswast @parthea @jreback Sorry, I've been swamped these past few months. I hope to scratch out some time this week to incorporate all comments (and also get this working with queries with large results, which it currently fails on). Just checking, is there any particular reason to rebase vs. merge? Happy to do the former, I just haven't done a rebase on any collaborative projects, so this would be a first. (Haha, worst-case scenario, I mess up my branch and just rewrite and open a new PR branched off the new master.)

@parthea
Collaborator
parthea commented Jul 11, 2017

@jasonqng Thanks for taking care of this! A rebase is preferred because it will allow you to add commits on top of the latest master, which is much nicer to look at during code review.

@GregCT

GregCT commented Aug 9, 2017

Hi,
Just wondering if @jasonqng has already made any progress rebasing this?
If not, I'd be keen to help finish it off and get the changes reviewed and merged.

@jasonqng
Contributor
jasonqng commented Aug 12, 2017

@GregCT Thanks for offering! Been meaning to wrap this up, but been swamped. I'd love some help finishing this. I just added you as a collaborator so you can push directly to the fork. Look forward to working together on this if you're able!

@tswast tswast referenced this pull request Nov 30, 2017

Merged

Update pandas.read_gbq docs to point to pandas-gbq #18548

@tswast
Collaborator
tswast commented Nov 30, 2017

@jreback Could you take a look at the docs changes I made? I've documented both the dependencies for 0.3.0 and what they were before (with notes on how they've changed).

@tswast

tswast approved these changes Dec 5, 2017

This change LGTM, but since I made some contributions to this one I'd like one of the other maintainers to also review before we merge it.

@@ -781,6 +656,14 @@ def verify_schema(self, dataset_id, table_id, schema):
key=lambda x: x['name'])
fields_local = sorted(schema['fields'], key=lambda x: x['name'])
# Ignore mode when comparing schemas.
for field in fields_local:
if 'mode' in field:

@max-sixty
Collaborator
max-sixty commented Dec 6, 2017

Not worth changing, but this could be marginally simpler as field.pop('mode', None).
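A minimal sketch of the suggested simplification (illustrative only, not the code as merged):

# Equivalent to checking 'mode' in field before deleting it:
for field in fields_local:
    field.pop('mode', None)  # drop the key if present, no-op otherwise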

tableId=table_id,
body=body).execute()
except HttpError as ex:
self.client.load_table_from_file(

@max-sixty
Collaborator
max-sixty commented Dec 6, 2017

This is so much better than the existing method.

@tswast
Collaborator
tswast commented Dec 8, 2017

Yeah, it's technically a change in behavior (it kicks off a load job instead of using the streaming API), but I think the change is small enough to be worth it. Load jobs should be much more reliable for the use case of this library.
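For illustration, a rough sketch of the load-job pattern, assuming a recent google-cloud-bigquery client; body and table_ref are placeholder names, not the variables used in this PR:

from google.cloud import bigquery

client = bigquery.Client(project='my-project')
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON)
# body is a file-like object holding newline-delimited JSON rows.
load_job = client.load_table_from_file(body, table_ref, job_config=job_config)
load_job.result()  # block until the load job completes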

rows.append(row_dict)
row_json = row.to_json(
force_ascii=False, date_unit='s', date_format='iso')
rows.append(row_json)

@max-sixty
Collaborator
max-sixty commented Dec 6, 2017

This isn't worse than the last version, but it would be much faster if .to_json were called on the whole table rather than on each row, iterating in Python.

CSV might be even faster given the reduced space (and pandas can't use nesting or structs anyway), but potentially wait until Parquet is GA to make that jump.
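As an illustrative sketch only (not the merged code), serializing the whole frame at once with pandas' newline-delimited JSON output:

import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})
body = df.to_json(orient='records', lines=True,
                  force_ascii=False, date_unit='s', date_format='iso')
# body is a single newline-delimited JSON string, suitable for a load job.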

@tswast
Collaborator
tswast commented Dec 8, 2017

I'd prefer to keep the current behavior for now and do a subsequent PR for any changes like this for performance improvements. I've filed #96 to track the work for speeding up encoding for the to_gbq() method.

@max-sixty
Collaborator
max-sixty commented Dec 8, 2017

100% re doing that separately.

field_type)
for row_num, entries in enumerate(rows):
for col_num in range(len(col_types)):
field_value = entries[col_num]

@max-sixty
Collaborator
max-sixty commented Dec 6, 2017

I don't think I'd realized we were looping over all the values in Python before. This explains a lot of why exporting a query to a file on GCS and then reading from that file is an order of magnitude faster.

If we could pass rows directly into DataFrame, that would be much faster, but I'm not sure whether that's possible.
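A hypothetical sketch of that idea, assuming google-cloud-bigquery's row iterator; the client, project, and table names are placeholders, not code from this PR:

import pandas as pd
from google.cloud import bigquery

client = bigquery.Client(project='my-project')
rows = client.list_rows(table)  # RowIterator; `table` is a placeholder
columns = [field.name for field in rows.schema]
# Each Row behaves like a mapping, so the frame can be built directly.
df = pd.DataFrame(data=(dict(row) for row in rows), columns=columns)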

@tswast
Collaborator
tswast commented Dec 8, 2017

I've filed #97 to track improving the performance in the read case.

field_type)
for row_num, entries in enumerate(rows):
for col_num in range(len(col_types)):
field_value = entries[col_num]

@max-sixty
Collaborator
max-sixty commented Dec 6, 2017

Because this is being called so many times, you may even get a small speed-up from eliminating the assignment to field_value.

(But these are all things that are either the same as or better than the existing version.)

self._print('Standard price: ${:,.2f} USD\n'.format(
bytes_processed * self.query_price_for_TB))
bytes_billed * self.query_price_for_TB))
self._print('Retrieving results...')

@max-sixty
Collaborator
max-sixty commented Dec 6, 2017

Presumably this is never going to be relevant because the prior part is blocking?

@tswast
Collaborator
tswast commented Dec 8, 2017

Yes and no. I think it is indeed less relevant, but actually fetching the rows and constructing the dataframe will take non-zero time, especially for larger result sets.

@max-sixty
Collaborator
max-sixty commented Dec 6, 2017

I took a proper read through, though this needs someone like @jreback to approve.

I think this strictly dominates the existing version. There are a couple of extremely small tweaks that we can do in a follow-up if not now.

There are also some areas for huge speed-ups; IIUC the code is currently running through each value in Python at the moment.

In line with that: we've built a function for exporting to a file on GCS and loading that in, which works much better for more than 1-2 million rows. We can do a PR for that if people are interested, in addition to speeding up the current path.
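Not the function referenced above, just an illustrative sketch of the export-to-GCS-and-read-back pattern, assuming a recent google-cloud-bigquery and gcsfs; the project, bucket, table, and file names are placeholders:

import pandas as pd
from google.cloud import bigquery

client = bigquery.Client(project='my-project')
destination_uri = 'gs://my-bucket/results-*.csv'
extract_job = client.extract_table('my-project.my_dataset.my_table',
                                   destination_uri)
extract_job.result()  # wait for the export to finish

# Read one exported shard back into pandas (gcsfs handles the gs:// URL).
df = pd.read_csv('gs://my-bucket/results-000000000000.csv')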

@max-sixty
Collaborator
max-sixty commented Dec 12, 2017

@jreback I know there's lots going on in pandas, but it would be super if you could take a glance at this. A few follow-ups are dependent on this merging.

Thanks v much

@jreback
Contributor
jreback commented Dec 12, 2017

Sure, will look.

@jreback

LGTM. Some small doc comments that I would do; better to be over-explanatory in the whatsnew.

@@ -181,14 +158,6 @@ class QueryTimeout(ValueError):
pass
class StreamingInsertError(ValueError):

@jreback
Contributor
jreback commented Dec 14, 2017

Mention that this is eliminated in the whatsnew.

@tswast tswast merged commit afdbcfa into pydata:master Dec 20, 2017

2 of 3 checks passed

codecov/patch: 7.84% of diff hit (target 50%)
codecov/project: 28.97% (target 0%)
continuous-integration/travis-ci/pr: The Travis CI build passed
@max-sixty
Collaborator
max-sixty commented Dec 20, 2017

Congrats @jasonqng & @tswast !

@tswast
Collaborator
tswast commented Dec 20, 2017

Thanks! There are a couple of things to clean up before we make a release. I'd like to add a couple of tests for some of the other issues we think this PR might address.

Plus, I'm not sure it makes sense to do a release right before the holidays.

This was referenced Dec 21, 2017

@jasonqng
Contributor
jasonqng commented Dec 21, 2017

TY @tswast for carrying this across the finish line!!! I learned so much watching you and @jreback mold this. Can't wait to stop having to rely on my janky fork and install the new version!
