PICKLE format broken / seishub client broken (unpickling of new nanosecond based utcdatetime) #1664

Merged
merged 9 commits into master from fix_seishub_unpickle_old_utcdatetime on Mar 8, 2017

Conversation

@megies
Member

megies commented Feb 10, 2017

I think #1325 broke clients.seishub, because data is sent as pickled streams from the server, and the data coming from the server is then lacking the _ns attribute...

I can't see a way to fix this right now, as we can't make sure the seishub server and the client are on the same page UTCDateTime-wise..

(master) ~$ py.test ~/git/obspy-master/obspy/clients/seishub -k test_get_waveform_with_metadata
====================================================== test session starts =======================================================
platform linux2 -- Python 2.7.12, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /home/megies, inifile: pytest.ini
plugins: random-0.2
collected 16 items 

git/obspy-master/obspy/clients/seishub/tests/test_client.py F

============================================================ FAILURES ============================================================
_________________________________________ ClientTestCase.test_get_waveform_with_metadata _________________________________________

self = <obspy.clients.seishub.tests.test_client.ClientTestCase testMethod=test_get_waveform_with_metadata>

    def test_get_waveform_with_metadata(self):
        # metadata change during t1 -> t2 !
        t1 = UTCDateTime("2010-05-03T23:59:30")
        t2 = UTCDateTime("2010-05-04T00:00:30")
        client = self.client
        self.assertRaises(Exception, client.waveform.get_waveforms, "BW",
                          "UH1", "", "EH*", t1, t2, get_paz=True,
                          get_coordinates=True)
        st = client.waveform.get_waveforms("BW", "UH1", "", "EH*", t1, t2,
                                           get_paz=True, get_coordinates=True,
>                                          metadata_timecheck=False)

git/obspy-master/obspy/clients/seishub/tests/test_client.py:235: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
git/obspy-master/obspy/clients/seishub/client.py:505: in get_waveforms
    stream = _unpickle(data)
git/obspy-master/obspy/clients/seishub/client.py:49: in _unpickle
    obj = pickle.loads(data)
anaconda/envs/master/lib/python2.7/pickle.py:1388: in loads
    return Unpickler(file).load()
anaconda/envs/master/lib/python2.7/pickle.py:864: in load
    dispatch[key](self)
anaconda/envs/master/lib/python2.7/pickle.py:1223: in load_build
    setstate(state)
git/obspy-master/obspy/core/util/attribdict.py:115: in __setstate__
    self.update(adict)
git/obspy-master/obspy/core/util/attribdict.py:142: in update
    self.__setitem__(key, value)
git/obspy-master/obspy/core/trace.py:165: in __setitem__
    value = UTCDateTime(value)
git/obspy-master/obspy/core/utcdatetime.py:231: in __init__
    self._ns = value._ns
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[AttributeError("'UTCDateTime' object has no attribute '_UTCDateTime__ns'") raised in repr()] SafeRepr object at 0x7fdcb2143710>

    def _get_ns(self):
>       return self.__ns
E       AttributeError: 'UTCDateTime' object has no attribute '_UTCDateTime__ns'

git/obspy-master/obspy/core/utcdatetime.py:360: AttributeError
=================================== 15 tests deselected by '-ktest_get_waveform_with_metadata' ===================================
============================================ 1 failed, 15 deselected in 1.65 seconds =============================================
@barsch
Member

barsch commented Feb 10, 2017

hmm, that's really bad - I guess updating the SeisHub server itself could be a solution - but again it should be backward compatible

unfortunately I can't debug this as I don't have access to a running SeisHub atm - how about you try to change obspy/core/utcdatetime.py:231: in __init__ into

    try:
        self._ns = value._ns
    except AttributeError:
        self._ns = int(round(value.timestamp * 10**9))

or something similar ?

@megies
Member

megies commented Feb 10, 2017

Good idea. In principle that workaround is pretty ugly of course, but I can't think of a better approach myself..

Using that approach, the seishub tests pass on my machine.

But doing another smallish test, it somehow seems to shift the unpickled timestamp by one second..??

from obspy import UTCDateTime
from obspy.clients.seishub import Client

t = UTCDateTime(2017, 2, 1, 11)
print t
client = Client()
st = client.waveform.get_waveforms('BW', 'MGSBH', '', 'HHZ', t, t+10)
print st

on maintenance_1.0.x:

2017-02-01T11:00:00.000000Z
1 Trace(s) in Stream:
BW.MGSBH..HHZ | 2017-02-01T11:00:00.000000Z - 2017-02-01T11:00:10.000000Z | 200.0 Hz, 2001 samples

on this branch:

2017-02-01T11:00:00.000000Z
1 Trace(s) in Stream:
BW.MGSBH..HHZ | 2017-02-01T10:59:59.000000Z - 2017-02-01T11:00:09.000000Z | 200.0 Hz, 2001 samples
@barsch
Member

barsch commented Feb 10, 2017

maybe an issue with the int(round()) construct - can you debug it using a breakpoint?

@barsch

barsch approved these changes Feb 11, 2017

@megies
Member

megies commented Feb 11, 2017

Let's wait a bit before merging this, I need to do more checks..

@megies megies self-assigned this Feb 11, 2017

@megies
Member

megies commented Feb 11, 2017

Ermmh.. today doctests get executed again for me.. (no idea why they weren't yesterday).

There really seems to be a problem with a one second offset with this PR:
http://tests.obspy.org/69881/

@barsch
Member

barsch commented Feb 13, 2017

self._from_timestamp also uses int(round(value * 10**9)) - could it be a rounding issue - unfortunately I can't really debug it ...

@megies
Member

megies commented Feb 13, 2017

self._from_timestamp also uses int(round(value * 10**9))

yeah, I saw that, I just used that one to avoid duplicating code.. need to debug it when I get some time..

@megies megies changed the title from seishub client broken to utcdatetime rounding issue (when printing?) Feb 13, 2017

@megies megies changed the title from utcdatetime rounding issue (when printing?) to seishub client broken (unpickling of new nanosecond based utcdatetime) / utcdatetime rounding issue (when printing?) Feb 13, 2017

@megies
Member

megies commented Feb 13, 2017

Ok, so after fixing the unpickling, there is a completely separate issue. I'm a bit confused right now; it seems like a rounding issue, and I can reproduce it using Trace.trim()..

import numpy as np
from obspy import UTCDateTime, Trace

t = UTCDateTime()
try:
    t._from_timestamp(1484308799.9749999046)
    label = 'new'
except:
    t = UTCDateTime(1484308799.9749999046)
    label = 'old'

x = np.arange(15, dtype=np.int32)
x[::5] = 0
tr = Trace(x)
tr.stats.sampling_rate = 200
tr.stats.starttime = t

t2 = UTCDateTime(2017, 1, 13, 12)

print tr

tr.trim(starttime=t2)

print t2
print tr
print '{:.9f}'.format(tr.stats.starttime.timestamp)

tr.stats.station = label
tr.write('/tmp/utcdatetime_trim_bug_{}.mseed'.format(label), 'MSEED')

On maintenance_1.0.x:

... | 2017-01-13T11:59:59.975000Z - 2017-01-13T12:00:00.045000Z | 200.0 Hz, 15 samples
2017-01-13T12:00:00.000000Z
... | 2017-01-13T12:00:00.000000Z - 2017-01-13T12:00:00.045000Z | 200.0 Hz, 10 samples
1484308800.000000000

On master:

... | 2017-01-13T11:59:59.975000Z - 2017-01-13T12:00:00.045000Z | 200.0 Hz, 15 samples
2017-01-13T12:00:00.000000Z
... | 2017-01-13T11:59:59.000000Z - 2017-01-13T12:00:00.045000Z | 200.0 Hz, 10 samples
1484308799.999999762

The timestamps that are output seem to differ by only 0.3 microseconds, but when printed they differ by a full second? Writing to MiniSEED (which stores timestamps with microsecond accuracy IIRC) and plotting, the traces look the same (which would be kind of expected for a sub-microsecond difference in the time of the first sample).

[figure_1: plot comparing the two written traces]

@megies megies requested a review from krischer Feb 13, 2017

@barsch
Member

barsch commented Feb 13, 2017

master:

>>> UTCDateTime(2017, 1, 13, 11, 59, 59, 975000) + 0.005
UTCDateTime(2017, 1, 13, 11, 59, 59, 980000)
>>> (UTCDateTime(2017, 1, 13, 11, 59, 59, 975000) + 0.005)._ns
1484308799980000000

>>> UTCDateTime(1484308799.975) + 0.005
UTCDateTime(2017, 1, 13, 11, 59, 59, 979999)
>>> (UTCDateTime(1484308799.975) + 0.005)._ns
1484308799979999808

but it goes back to:

>>> int(float(1484308799.975 * 10**9))
1484308799974999808L

so never trust timestamps ...

also very interesting:

>>> int(float(1484308799.975 * 10**6 * 10**3))
1484308799975000064L
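
To put a number on it: near 1.48e9 seconds the spacing between adjacent doubles is already about 0.24 microseconds, so anything below roughly a quarter of a microsecond in a float timestamp is rounding noise before any multiplication by 10**9 even happens. A small plain-Python illustration of that (no ObsPy involved):

>>> '{:.9f}'.format(1484308799.975)   # what the double actually stores
'1484308799.974999905'
>>> '{:.9f}'.format(2.0**-22)         # spacing of adjacent doubles at this magnitude
'0.000000238'
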
@barsch
Member

barsch commented Feb 13, 2017

we should use self._from_datetime as a fallback instead of self._from_timestamp if the _ns attribute is missing - it should then work at least in this case

generally, usage of UTCDateTime.timestamp should be discouraged as it's not accurate enough ...

@megies
Member

megies commented Feb 13, 2017

we should use self._from_datetime as a fallback instead of self._from_timestamp if the _ns attribute is missing - it should then work at least in this case

uhm.. old utcdatetime objects derive everything they serve from the timestamp.. the timestamp is the only attribute that exists when an old utcdatetime object is unpickled

@barsch
Member

barsch commented Feb 13, 2017

self._from_datetime(datetime.datetime.fromtimestamp(value)) works in this case - however it won't work for timestamps < 1970 and > 2038 again ...

but as a fallback it should be fine?
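
For illustration, this is why the datetime route recovers the intended value where the direct nanosecond conversion does not: fromtimestamp()/utcfromtimestamp() round the fractional seconds to whole microseconds, which discards exactly the sub-microsecond noise shown above (utcfromtimestamp is used here only to keep the output timezone-independent):

>>> import datetime
>>> int(round(1484308799.975 * 10**9))   # keeps the sub-microsecond noise
1484308799974999808
>>> datetime.datetime.utcfromtimestamp(1484308799.975)
datetime.datetime(2017, 1, 13, 11, 59, 59, 975000)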

@megies
Member

megies commented Feb 14, 2017

however it won't work for timestamps < 1970 and > 2038 again ...

but as a fallback it should be fine?

I guess that's fine, because that supposedly didn't work before either..

@megies
Member

megies commented Feb 15, 2017

seishub tests are failing: http://tests.obspy.org/70224/

(I added a missing test for fetching waveforms today)

megies added some commits Feb 10, 2017

utcdatetime: try to fix unpickling streams that were pickled with old timestamp-based UTCDateTime on Obspy using new ns based timestamps

@megies megies changed the title from seishub client broken (unpickling of new nanosecond based utcdatetime) / utcdatetime rounding issue (when printing?) to PICKLE format broken / seishub client broken (unpickling of new nanosecond based utcdatetime) / utcdatetime rounding issue (when printing?) Mar 5, 2017

@megies
Member

megies commented Mar 6, 2017

OK, so.. the timing problem is fixed, but it turns out we cannot read a pickle on Py3 when it was written on Py2. (The other way around works.)

Actually, I am not sure we were ever capable of that, as we only had a round-trip write-read test for the PICKLE format.

Maybe we just raise an exception explicitly when trying to read a Py2 pickle on Py3?

@krischer
Member

krischer commented Mar 6, 2017

Regarding py2/py3: https://docs.python.org/3/library/pickle.html#pickle.load

The load method has a couple of options that might enable that.

Pickling is AFAIK guaranteed to be backwards compatible, but I'm not sure if that holds across Python 2 and 3.
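
For reference, the Python 3 signature is pickle.load(file, *, fix_imports=True, encoding="ASCII", errors="strict"); fix_imports remaps Python 2 module names, and encoding/errors control how Python 2 str instances are decoded. A hypothetical call just to show the shape (the file path is made up, and whether any combination of these options is enough here is exactly what the next comments explore):

import pickle

# hypothetical path; encoding='latin1' or encoding='bytes' are the settings
# usually suggested for Py2 pickles carrying binary (e.g. numpy) data,
# though the next comment shows they did not simply work here
with open('/tmp/py2_stream.pickle', 'rb') as fp:
    obj = pickle.load(fp, fix_imports=True, encoding='bytes')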

@barsch
Member

barsch commented Mar 6, 2017

EDIT: order of traceback corrected

@krischer: I already played with those - I don't have all the tracebacks anymore - but here are some excerpts from the chat with Tobi

with pickle.load(filename, errors="ignore") I got:

Traceback (most recent call last):
  File "D:\Workspace\obspy\obspy\core\tests\test_stream.py", line 1335, in test_read_pickle
    st = read(pickle_file, format='PICKLE')
  File "<decorator-gen-32>", line 2, in read
  File "D:\Workspace\obspy\obspy\core\util\decorator.py", line 299, in _map_example_filename
    return func(*args, **kwargs)
  File "D:\Workspace\obspy\obspy\core\stream.py", line 231, in read
    st.extend(_read(file, format, headonly, **kwargs).traces)
  File "<decorator-gen-33>", line 2, in _read
  File "D:\Workspace\obspy\obspy\core\util\decorator.py", line 209, in uncompress_file
    result = func(filename, *args, **kwargs)
  File "D:\Workspace\obspy\obspy\core\stream.py", line 273, in _read
    headonly=headonly, **kwargs)
  File "D:\Workspace\obspy\obspy\core\util\base.py", line 463, in _read_from_plugin
    list_obj = read_format(filename, **kwargs)
  File "D:\Workspace\obspy\obspy\core\stream.py", line 3148, in _read_pickle
    return pickle.load(fp, errors="ignore")
ValueError: buffer size does not match array size

encoding='utf-8' or encoding='latin1' did not work - with pickle.load(filename, encoding="raw-unicode-escape") -> max recursion error

File "D:\Python\virtualenv\obspy\64\Python34\lib\site-packages\future\types\newint.py", line 126, in __mul__
    return long(self) * other
  File "D:\Python\virtualenv\obspy\64\Python34\lib\site-packages\future\types\newint.py", line 126, in __mul__
    return long(self) * other
  File "D:\Python\virtualenv\obspy\64\Python34\lib\site-packages\future\types\newint.py", line 126, in __mul__
    return long(self) * other
  File "D:\Python\virtualenv\obspy\64\Python34\lib\site-packages\future\types\newint.py", line 126, in __mul__
    return long(self) * other
  File "D:\Python\virtualenv\obspy\64\Python34\lib\site-packages\future\types\newint.py", line 126, in __mul__
    return long(self) * other

and

DeprecationWarning: __int__ returned non-int (type newint).  The ability to return an instance of a strict subclass of int is deprecated, and may be removed in a future version of Python.
  value = int(value)
@barsch
Member

barsch commented Mar 6, 2017

while working on the vcr stuff I had pretty much the same experience - py2 pickles couldn't be loaded into py3, but the other way around worked - no idea why - that's the reason the "vcr recording" part is restricted to py3 only

@krischer
Member

krischer commented Mar 6, 2017

Hmm.. really strange to be honest, but I don't have any experience with those things, so who knows.

So the solution is to port SeisHub to Python 3? ;-)

@megies
Member

megies commented Mar 6, 2017

So the solution is to port SeisHub to Python 3? ;-)

;-) I think it's OK if we explicitly raise an exception when trying to fetch data from a seishub server, stating that it only works on Py2.
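
A minimal sketch of such an explicit guard (names and wording made up, not necessarily what ends up in the client - note that the commits below remove these explicit seishub exceptions again once Py2 pickles become readable on Py3):

import sys

def _assert_python2():
    # hypothetical helper: the SeisHub server responds with Python 2 pickled
    # Stream objects, which (at this point in the discussion) cannot be
    # unpickled on Python 3
    if sys.version_info[0] > 2:
        raise NotImplementedError(
            'The SeisHub client currently only works on Python 2, because '
            'the server sends Python 2 pickles.')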

@barsch
Member

barsch commented Mar 6, 2017

SeisHub should rather be replaced with https://github.com/krischer/jane ;)

megies added some commits Mar 7, 2017

enable unpickling Py2 pickles on Py3
+ test for reading both py2 and py3 pickles
+ remove explicitly raising seishub exceptions again

minor tweak to one seishub test case, sort stream received from server.. somehow trace order on py2 and py3 is different after unpickling it seems..
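
The sort mentioned in the last commit message is presumably just ObsPy's Stream.sort(), which orders traces by a fixed key list and makes the comparison independent of whatever order the traces come out of the pickle (a sketch using ObsPy's bundled example stream, not the literal test code):

from obspy import read

st = read()   # ObsPy's example stream; in the test the stream comes from the SeisHub client
st.sort()     # default keys: network, station, location, channel, starttime, endtime
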
@megies
Member

megies commented Mar 7, 2017

seishub and pickle tests are running fine now locally, on py27 and py36.

Looks like this is resolved now.. :-)

@megies megies changed the title from PICKLE format broken / seishub client broken (unpickling of new nanosecond based utcdatetime) / utcdatetime rounding issue (when printing?) to PICKLE format broken / seishub client broken (unpickling of new nanosecond based utcdatetime) Mar 7, 2017

@megies
Member

megies commented Mar 8, 2017

Huhh.. I don't understand that recursion error on Python 3, and it's not happening (locally) on Python 3.6..

@krischer
Member

krischer commented Mar 8, 2017

Huii...looks like a future error. Do you have the same future version installed?

@megies
Member

megies commented Mar 8, 2017

I also thought it was future, but I ended up trying future 0.16.0 on Py3.5 and also had that failure..

@megies
Member

megies commented Mar 8, 2017

OK, this fixes it for me..

unpickling of old UTCDateTime: work around floating point accuracy/rounding issue on Python 3.3
@megies
Member

megies commented Mar 8, 2017

OK, so hopefully this is the last fix needed here.. (the Python 3.3 Travis build shows some floating point accuracy/rounding issue when converting old pickled UTCDateTime objects to the new integer nanosecond ones..)
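
One way such a conversion can be made robust against these last-bit differences is to go through whole microseconds first, since a timestamp coming from an old UTCDateTime never carried more than microsecond information anyway; a sketch under that assumption (not necessarily the commit's exact code):

def _old_timestamp_to_ns(timestamp):
    # round the float to whole microseconds, then widen to nanoseconds with
    # exact integer arithmetic; this avoids multiplying the full value by
    # 10**9 in floating point, which is where the last bits get lost
    return int(round(timestamp * 10**6)) * 10**3

print(_old_timestamp_to_ns(1484308799.975))   # 1484308799975000000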

@barsch
Member

barsch commented Mar 8, 2017

looking good!

@barsch

barsch approved these changes Mar 8, 2017

@krischer krischer merged commit 8628d5a into master Mar 8, 2017

7 of 8 checks passed

codecov/changes 10 files have unexpected coverage changes not visible in diff.
ci/circleci Your tests passed on CircleCI!
codecov/patch 100% of diff hit (target 90%)
codecov/project 87.57% (+1.54%) compared to 1e2e014
continuous-integration/appveyor/branch AppVeyor build succeeded
continuous-integration/appveyor/pr AppVeyor build succeeded
continuous-integration/travis-ci/pr The Travis CI build passed
docker-testbot Docker tests succeeded

@krischer krischer deleted the fix_seishub_unpickle_old_utcdatetime branch Mar 8, 2017
