UnboundLocalError: local variable 'dotAB' referenced before assignment #20
Comments
Hi,
Your inputs are vectors, while they should be at least 2D arrays:
When using
data = np.array([[185, 206, 163]])
Rw = np.array([[256, 264, 202]])
the code runs fine.
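A minimal sketch of the same fix, using np.atleast_2d to promote the 1D vectors to single-row 2D arrays before calling the model (any equivalent reshape should work as well):
import numpy as np
import luxpy as lx
# np.atleast_2d turns shape (3,) into shape (1, 3), the layout the model expects
data = np.atleast_2d(np.array([185, 206, 163]))
Rw = np.atleast_2d(np.array([256, 264, 202]))
camout = lx.cam.zcam(data, Rw)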
regards,
Kevin
On Wed 13 Oct 2021 at 02:14, Alexander Forsythe <***@***.***> wrote:
… Forgive me if I'm doing something stupid ...
The following code produces the following error in basics.py dot23
import luxpy as lx
import numpy as np
data = np.array([185, 206, 163])
Rw = np.array([256, 264, 202])
camout = lx.cam.zcam(data, Rw)
UnboundLocalError: local variable 'dotAB' referenced before assignment
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-4-599aff8f528b> in <module>
2 Rw = np.array([256, 264, 202])
3
----> 4 camout = lx.cam.zcam(data, Rw)
/usr/local/Caskroom/miniconda/base/lib/python3.8/site-packages/luxpy/color/cam/zcam.py in run(data, xyzw, outin, cieobs, conditions, forward, mcat, **kwargs)
377 #--------------------------------------------
378 # Apply CAT to white point:
--> 379 xyzwc = cat.apply_vonkries1(xyzw, xyzw1 = xyzw, xyzw2 = xyzw_d65,
380 D = D, mcat = mcat, invmcat = invmcat,
381 use_Yw = True)
/usr/local/Caskroom/miniconda/base/lib/python3.8/site-packages/luxpy/color/cat/chromaticadaptation.py in apply_vonkries1(xyz, xyzw1, xyzw2, D, mcat, invmcat, in_, out_, use_Yw)
671 # transform from xyz to cat sensor space:
672 if in_ == 'xyz':
--> 673 rgb = math.dot23(mcat, xyz.T)
674 rgbw1 = math.dot23(mcat, xyzw1.T)
675 rgbw2 = math.dot23(mcat, xyzw2.T)
/usr/local/Caskroom/miniconda/base/lib/python3.8/site-packages/luxpy/math/basics.py in dot23(A, B, keepdims)
235 dotAB = np.expand_dims(dotAB,axis=1)
236
--> 237 return dotAB
238
239 #------------------------------------------------------------------------------
UnboundLocalError: local variable 'dotAB' referenced before assignment
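For illustration only, a rough sketch (not luxpy's actual implementation) of how a dot23-style helper that only assigns dotAB in its 2D and 3D branches falls through on a 1D vector, which is the failure mode this traceback shows:
import numpy as np

def dot23_sketch(A, B):
    # Hypothetical helper: dotAB is only assigned for 2D or 3D inputs,
    # so a 1D vector reaches the return without dotAB ever being bound.
    if B.ndim == 2:
        dotAB = A @ B
    elif B.ndim == 3:
        dotAB = np.einsum('ij,kjl->kil', A, B)
    return dotAB  # UnboundLocalError when B.ndim == 1

mcat = np.eye(3)
xyz_1d = np.array([185, 206, 163])
# dot23_sketch(mcat, xyz_1d.T)        # raises UnboundLocalError (a 1D array stays 1D after .T)
xyz_2d = np.atleast_2d(xyz_1d)
print(dot23_sketch(mcat, xyz_2d.T))   # works: (3, 3) @ (3, 1) -> (3, 1)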
Hi,
Thanks for the quick reply. Sorry for the confusion. Not really a Python guy. Just a follow up ... When running zcam.py I seem to be unable to match the reference values found in the supplemental document <https://doi.org/10.6084/m9.figshare.13640927>.
For Sample 1 I'm doing:
import numpy as np
import luxpy as lx
np.set_printoptions(formatter={'float_kind':"{:.2f}".format})
data = np.array([[185, 206, 163]])
Rw = np.array([[256, 264, 202]])
cond = {'La': 264, 'Yb': 100, 'D': None, 'surround': 'avg', 'Dtype': None}
outin = "h,Q,J,aM,bM,M,C,Sz,Vz,Kz,Wz"
camout = lx.cam.zcam(data, Rw, cieobs='1931_2', conditions=cond, outin=outin)
print(camout)
I get
[[196.34 321.35 91.46 -10.04 -2.94 10.46 2.98 19.07 33.91 26.52 90.95]]
Where I would expect
[[196.3524 321.3464 92.25 -0.0165 -0.0048 10.53 3.0216 19.1314 34.7022 25.2994 91.6837]]
Some of the values match (within rounding), some are sort of close, and some seem significantly off. I suspect maybe I'm misusing the function. Again, apologies if there's something basic here I'm missing.
The aM,bM are not the same as the az, bz reported in the supplemental
material.
Apart from that, I had noted earlier that there are some differences between
what is calculated and the values in the supplementary material, which I
tracked down to the CAT. I had coded an earlier version of Z-CAM, but
the authors changed it a lot. Later, they got back to me with matlab code
for a newer version, but I found some errors in that. All this was
pre-publication. Since publication, I've contacted the authors for an
update of their matlab code for an older version of ZCAM (prepublication),
but haven't heard back yet. I therefore adjusted their old matlab code (for
a previous version, but which includes the specifics on how they deal with
chromatic adaptation) to follow the publication and now the luxpy and
matlab code agree. They also agree better with the published values (only
check the first column) than before. However, to get this better agreement, no
CAT must be applied to the white point (which is used in later
calculations). That doesn't really make sense: one would also be
adapted to this stimulus, so relative perceptual attributes should really
be calculated with respect to the adapted white point, in my opinion, and
not to the unadapted input white point. In addition, the authors used as
XYZ for the D65 white point the following values: 95.0429, 100, 108.89,
which are slightly different from the 'true' values: 95.047, 100.000,
108.883. Not sure where they got those values from (probably from
calculating XYZ from a 380-780 nm D65 spectrum with 5 nm spacing, instead
of the CIE recommended 360-830 nm range with 1 nm spacing). In addition,
they changed the way Iz is calculated from their previous publication on
the Iz, az,bz color space.
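A small sketch of the kind of check behind that guess. The arrays wl, d65, xbar, ybar, zbar are hypothetical placeholders for a D65 spectrum and the CIE 1931 2° colour-matching functions on a common integer-nanometre grid; summing over the full 360-830 nm range at 1 nm should land near 95.047, 100.000, 108.883, while restricting the sum to 380-780 nm at 5 nm steps gives slightly different numbers of the kind quoted above:
import numpy as np

def white_point_xyz(wl, spd, xbar, ybar, zbar, wl_min, wl_max, step):
    # Restrict the summation to the chosen range and spacing, then
    # normalise so that the Y of the white point equals 100.
    sel = np.isin(wl, np.arange(wl_min, wl_max + 1, step))
    k = 100.0 / np.sum(spd[sel] * ybar[sel])
    return k * np.array([np.sum(spd[sel] * xbar[sel]),
                         np.sum(spd[sel] * ybar[sel]),
                         np.sum(spd[sel] * zbar[sel])])

# wl, d65, xbar, ybar, zbar = ...  (hypothetical data, 360-830 nm at 1 nm, wl in integer nm)
# print(white_point_xyz(wl, d65, xbar, ybar, zbar, 360, 830, 1))  # expected ~ 95.047, 100.000, 108.883
# print(white_point_xyz(wl, d65, xbar, ybar, zbar, 380, 780, 5))  # a slightly different white point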
Btw, updated zcam code is only on github for now. It'll be added to the
next pypi and conda release.
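Until that release, one way to pick up the updated code would be to install straight from the GitHub repository, assuming a standard pip + git setup (shown as a sketch, not verified against a specific tag or branch):
pip install --upgrade git+https://github.com/ksmet1977/luxpy.git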
Referenced in the zcam-js project (commit "zcam.js: adjust ZCAM white point to fractional digits as used by the authors", Signed-off-by: Tim Janik <timj@gnu.org>): "In ksmet1977/luxpy#20 (comment), @ksmet1977 mentions the exact D65 white point coefficient rounding digits that the ZCAM authors used for the creation of the transform. Also, ensure that we use the CAT02 matrix for chromatic adaptation. This increases precision significantly."