I installed pylut 1.4.9 via a PyCharm venv and am running it under Python 3.6.5.
I have converted the old print statements and xrange calls to their Python 3 equivalents.
It seems that all of the From and To methods now fail with improper indices: the indices should be floored, but are left as floats. Since numpy no longer supports float indices, wrapping them in int() should solve the problem.
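A minimal sketch of the failure and the int() fix (the lattice shape and the fractional indices here are illustrative, not taken from pylut's source):

```python
import numpy as np

# A tiny 3D LUT lattice: cubeSize x cubeSize x cubeSize RGB triples.
cube_size = 17
lattice = np.zeros((cube_size, cube_size, cube_size, 3))

# Index arithmetic on colors tends to produce floats, and recent numpy
# raises "only integers ... are valid indices" for them. Wrapping each
# index in int() (equivalent to floor for non-negative values) restores
# the intended lookup:
r, g, b = 2.9, 0.0, 5.5
value = lattice[int(r), int(g), int(b)]  # reads lattice[2, 0, 5]
```

Note that int() truncates toward zero, which matches flooring only because lattice indices are never negative.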
I was also trying this module with original Photoshop LUTs. Some Nuke 3DL files contain entries that go up to 12-bit color depth, but the first row ends with 1023, which the FromNuke3DLFile() method interprets as 10-bit depth, producing a lattice four times brighter than the file originally intended.
I think the correct outputDepth should actually be +2, but I am not sure how common this problem is.
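One way to guard against this mismatch, sketched below with a hypothetical helper that is not part of pylut, is to infer the output depth from the largest code value actually found in the lattice entries rather than from the shaper row:

```python
import math

def infer_output_depth(lut_values):
    """Infer the output bit depth of a 3DL from its largest code value.

    Hypothetical helper: the shaper row topping out at 1023 only implies
    a 10-bit *input* scale, so scanning the actual entries avoids reading
    12-bit data as 10-bit (and thus scaling it 4x too bright).
    """
    max_value = max(lut_values)
    # Smallest depth d such that max_value <= 2**d - 1.
    return max(1, math.ceil(math.log2(max_value + 1)))

print(infer_output_depth([0, 512, 1023]))        # 10
print(infer_output_depth([0, 1023, 4095]))       # 12
```

This sidesteps hard-coding the +2 offset, at the cost of assuming the file actually exercises values near the top of its range.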
The integer problem was fixed in #7 but I no longer maintain pylut and have not updated the pip version since the fix. I have encountered this 3DL problem when designing Lattice and had to make some assumptions based on max values found in the LUT. Not all manufacturers respect 3DLs the same way, and honestly no one should be using them anymore anyway.