Compile issues with Visual Studio 2019 #20
Just commenting to confirm that when I remove PBRT_OPTIX7_PATH, the code compiles fine. |
I get the errors on VS2019 as well when CUDA and OptiX are turned on:
|
Commenting out those lines (79-85) in CMakeLists.txt fixed this error for me
|
I am admittedly not very good with Windows and (embarrassingly) don't have a Windows system with a GPU at hand at the moment. I opened the very first issue, #1, for exactly that reason. If we can together figure out how to get the Windows+GPU build working, that'd be fantastic. Note that if you don't define PBRT_OPTIX7_PATH, then you don't get GPU support. And what fun is that? :-) I expect that the sys/syscall.h thing can be fixed by putting those #includes inside #ifndef PBRT_IS_WINDOWS checks. I can preemptively do that tomorrow (but can't confirm the fix). It makes sense that commenting out those lines helps, @wuyakuma; I can definitely see that the CUDA compiler isn't going to think those make sense. I can also try to fix that on my side, to pass those flags to MSVC but not to NVCC. With that fix, does it build and run on the GPU for you? |
Removing the warning ignores got me farther but I'm still seeing some hard errors in the GPU code. I'll do another build tomorrow and update this thread |
Sounds good! I'm happy to consult on figuring those out if you get stuck.
|
When I'm compiling the GPU version of pbrt on Windows with Visual Studio 2019, I get this: error #349: no operator "=" matches these operands |
Here's what I'm seeing:
First error: C:/cygwin64/home/goodin/pbrt-v4/src\pbrt/util/spectrum.cpp(255): error #42: operand types are incompatible ("pbrt::RGB" and "COLORREF")
Here's the code: RGBSpectrum::RGBSpectrum(const RGBColorSpace &cs, const RGB &rgb)
This fixes the compile but I'm not sure it is correct:
Second error: C:\cygwin64\home\goodin\pbrt-v4\src\pbrt/util/pstd.h(86): error : incomplete type is not allowed
Here's the code (not sure what it doesn't like): bxdfs.cpp pstd.h:
private:
Third error: C:/cygwin64/home/goodin/pbrt-v4/src/pbrt/gpu/pathintegrator.cpp(276): error : identifier "syscall" is undefined
The code:
Fourth error: C:/cygwin64/home/goodin/pbrt-v4/src/pbrt/gpu/film.cpp(22): error : identifier "isnan" is undefined in device code
The code: |
@mmp After commenting out those warnings, I got
so I changed line 142 in CMakeLists.txt to
and BUILD_SHARED_LIBS also needs to be set to ON,
otherwise the IlmImf project builds as a static lib, which causes link errors after this change,
and
I believe that's because MSVC doesn't support zero-length arrays, which are OK for clang. Also, I added some #ifndef PBRT_IS_WINDOWS guards like you said. As for the RGB error, I added
at the front of spectrum.cpp, and after all of these it compiles, but with a lot of link errors.
I am not familiar with CUDA, so I'm not sure what to do next... The attached file is a patch, in case anyone needs it |
@mmp |
I've just compiled with your latest commit from scratch. The path where the zlib static library is found appears to be off. The library is built in build\src\ext\zlib\Debug\zlibstatic.lib. It is referenced as:
------ Build started: Project: wtest, Configuration: Debug x64 ------
I don't know enough about where the project is building to figure out the difference. Here's another error (I'm also getting a lot of the "host function" warnings):
C:\cygwin64\home\goodin\pbrt-v4\src\pbrt/util/sampling.h(732): warning : calling a host function from a host device function is not allowed
C:\cygwin64\home\goodin\pbrt-v4\src\pbrt/util/pstd.h(86): error : incomplete type is not allowed
I'm not seeing any other explicit errors. I'm getting 13 failures total. Most seem to be related to not finding zlib. |
Mostly to make MSVC happy, but it's got a point... Relates to issue #20.
(Status: I think that ToT includes all of the fixes that @wuyakuma listed, except for the shared libraries one.)
From a little googling, it looks like switching to shared libs is the cause of those missing __cudaRegisterLinkedBinary* symbols. In general, I've found it's best to avoid shared libs for pbrt, just because it's one more thing that can confuse students who are trying to use the system. In this case, when BUILD_SHARED_LIBS is set to ON, it also looks like the OpenEXR DLLs are going into a different directory than pbrt.exe, which makes it even more confusing. (So, my preference would be to figure out a non-shared lib approach, if that is possible.)
zlib and OpenEXR on Windows have been a long-standing headache with the pbrt build. OpenEXR requires a zlib install, but we try to build it for the user if it isn't installed, just to make pbrt self-contained. An alternative could be to just require that people install zlib themselves; I'm wondering if that would take care of that zlibstatic issue you're seeing @richardmgoodin.
I'll take a look at those host/device function call warnings. There are a handful of them still on Linux, but they're all innocuous there. Is there any more context on that "incomplete type" error? |
I'm good with the non-shared library approach. I can't believe that anyone running pbrt would have problems with a larger executable. I'm also OK with installing zlib separately. I already have to install CUDA and OptiX, so one more wouldn't be an issue for the GPU version. |
Unfortunately I just did a recursive pull and got a new non-building version of OpenEXR. This seems to be a consistent problem with their ToT, in my very limited experience. That failure appears to be masking the others, as I'm not seeing any other errors. Is there any way in Git I can pull a specific version of a recursively cloned library? |
A third option could be using vcpkg (or Conan): it works on Windows, Linux and macOS, integrates with CMake, and now has a manifest file (if you enable that feature on your computer, it will automatically download the listed dependencies before the rest of CMakeLists.txt runs), but it does not have versioning support yet (that might actually come in a month or two), so you end up with the latest packaged version (which could be enough for pbrt).
If you |
@richardmgoodin ext/openexr should be at: commit 5cfb5dab6dfada731586b0281bdb15ee75e26782 (HEAD -> zlibstatic-export-workaround, origin/zlibstatic-export-workaround) (That was actually me forking ToT OpenEXR a week or two ago to fix a Windows build break related to zlib, so maybe your zlibstatic issues are due to having done a sync to a newer version?) @pierremoreau ah, interesting--good pointer. I'll look at those more closely. On one hand, I'm hoping that we're almost there and that it's just another small fix or two and everything will work, so I'd rather not make big changes to how the ext/ stuff is handled unless necessary. On the other hand, that might be the best long-term option. |
Yes, that's the version I have, so I had been assuming that it was changes to OpenEXR that were breaking the build. I have also backed up to OpenEXR 2.3.5, which gives the same errors. That was a long and twisty divergence. I just deleted and pulled a new version. Here is what I'm seeing: it looks like I'm getting two blocks of errors, one linking and one having to do with RGB. Here's the linking problem; it looks like the function call parameters are getting munged:
34>libpbrt_d.lib(pbrt.obj) : error LNK2019: unresolved external symbol __cudaRegisterLinkedBinary_39_tmpxft_00000a40_00000000_8_pbrt_cpp1_ii_27c0afcc referenced in function "void __cdecl __nv_cudaEntityRegisterCallback(void * *)" (?__nv_cudaEntityRegisterCallback@@YAXPEAPEAX@Z)
And the following (looks like RGB again):
32>C:\cygwin64\home\goodin\pbrt-v4\src\pbrt\cmd\imgtool.cpp(1042,50): error C2440: 'initializing': cannot convert from 'initializer list' to 'std::vector<pbrt::RGB,std::allocator<pbrt::RGB>>' |
Fixes warnings that were noted in issue #20.
Just pushed a fix for the imgtool one--thanks. (I have no idea where RGB is getting defined and why it's only on the Windows+NVCC build...) Are those |
Those were with a clean ToT |
Ok, could you try adding:
Down around line 661 of the top-level CMakeLists.txt and rebuilding? (Via https://stackoverflow.com/a/51566919, which reports that this is a Windows-only cmake bug, and it sure looks like the same symptoms...) |
I'm still seeing the zlib static missing problem. This is on a clean tree. It also looks like you have a typo in your imgtool fix: line 58 says "indef". If we can get zlib to link, I think we are there. Do you want me to install zlib separately? |
38>Generating Code... It looks like it is looking in the src tree instead of the build directory |
Also when I look in the build tree I see zlibstatic.lib not zlibstatic_d.lib |
If you wouldn't mind installing zlib, I'd be interested to hear if that makes a difference. (I'm not sure what's going on with those issues about looking for the wrong thing in the wrong place!) |
I tried installing zlib, but apparently CMake didn't find it: the build ran but still failed to find zlib. The log was identical to the previous run. I'm going to try to download the source and build it that way. Where can I get the source? There appear to be multiple hits for zlib online. |
Here's a verbose log of the failure. Looks like it is failing at the same place as before. |
Ah, good find on those missing synchronizations--sorry I missed those. Added now! I was surprised that I only saw a ~7% slowdown when I added all those synchronizations to test their overhead on Linux. (So the San Miguel scene went from ~70s to ~75s to render. While that was on a 2080 GPU with RTX cores, most of the work isn't ray intersections, so I'd expect a GV100 to have really good performance--in the same ballpark.) I suspect the issue gets back to lots of data being copied back and forth between CPU and GPU between kernel launches on Windows: the CPU accesses something that lives on the GPU, it gets copied over, then a kernel is launched, and it has to be copied back (even though it wasn't modified). |
My Windows machine is actually dual boot with Fedora. I was thinking tomorrow of installing a compatible Linux so I might have some numbers for you of running the GV100 under Linux. I was thinking Ubuntu 20.04 because NVIDIA explicitly supports it.
|
@mmp This is indeed what I was getting at, but I should have explained it better rather than just quoting the programming guide. I will try the updated version later today, and then do a comparison with Linux, as I have a dual boot with Arch. You might want to add something like the following to src/pbrt/gpu/init.cpp:
int hasUnifiedAddressing;
CUDA_CHECK(
cudaDeviceGetAttribute(&hasUnifiedAddressing, cudaDevAttrUnifiedAddressing, device));
if (!hasUnifiedAddressing) {
LOG_FATAL("The selected device (%d) does not support unified addressing.", device);
}
// On Windows we perform additional synchronisation to work around the lack of
// concurrent managed access as this is a platform-wide issue and even occurs for
// hardware that does support it.
#if !defined(PBRT_IS_WINDOWS)
int hasConcurrentManagedAccess;
CUDA_CHECK(cudaDeviceGetAttribute(&hasConcurrentManagedAccess,
cudaDevAttrConcurrentManagedAccess,
device));
if (!hasConcurrentManagedAccess) {
LOG_FATAL("The selected device (%d) does not support concurrent managed access.",
device);
}
#endif |
@mmp I knew I was forgetting to reply to one of your comments… Regarding OpenEXR, I will open an issue on their side and ask if they can change their behaviour, but I don’t know what could be done in PBRT. Maybe delete the cache variable after dealing with OpenEXR? But on the other hand I did not run into any issues regarding it and zlib. |
I did run into the problem early on but haven't seen it lately with fresh builds either Release or Debug. |
…GPU is running kernels. This should allow the Windows GPU build to run without synchronizing after each kernel launch. However, it currently causes a ~15-25% slowdown on Linux. (If this does in fact work well on Windows, then we'll see about fixing that before merging it into master...) Issue #20...
I believe I've figured out how to rewrite things without too much pain so that the CPU isn't accessing unified memory during rendering. I just pushed that in a branch, windows-gpu-rework, since it currently causes a 15-25% slowdown on Linux. However, if it works, performance on Windows should be much better. (I'll dig into that slowdown now to see what's going on.) |
I'm running a Release build of the windows-gpu-rework branch on a GTX 1070, and GPU utilization is always under 2%. Is there something wrong with what I'm doing? |
Utilisation rate is a lot lower for me today (though I did go through a complete re-install of Windows from scratch, so it's hard to say what is different). Rendering killeroo-gold.pbrt with the default settings on the GPU (RTX 2080 Ti) is taking about 2h, and if Task Manager is to be trusted, GPU usage is at about 0.2% (though it is also saying it is currently using my other GPU). I need to compare against Linux later, and try the new branch. |
Interesting... on Linux killeroo-gold renders for me in 13.8s on a RTX 2080. 13.8s / .002 = 6900s ~= 2 hours, so it looks like utilization is likely the entire problem here. I don't have any good theories for what the cause might be though. Hmm. |
I'll fire up Nsight Systems to have a look there; maybe perf should be tracked in a separate issue though. |
Hi Guys,
I’ve almost got pbrt-v4 running under ubuntu. What environment variables do I need set for it to find CUDA?
Rich
|
@richardmgoodin If you specify CUDACXX=path/to/nvcc as an environment variable before running CMake, does it find it? Otherwise try the solution from #23. |
I installed CUDA as in #23. I’ve set CUDACXX=/usr/local/cuda/bin/nvcc. CMake still doesn’t find it.
Rich
|
Installed Nvidia-cuda-toolkit and it found CUDA. Now I’m stuck with:
nvcc fatal : Value 'c++17' is not defined for option 'std'
CMake Error at cuda_compile_ptx_1_generated_optix.cu.ptx.Release.cmake:212 (message):
Error generating
/home/goodin/pbrt-v4/build/cuda_compile_ptx_1_generated_optix.cu.ptx
make[2]: *** [CMakeFiles/pbrt_embedded_ptx_lib.dir/build.make:69: cuda_compile_ptx_1_generated_optix.cu.ptx] Error 1
make[1]: *** [CMakeFiles/Makefile2:771: CMakeFiles/pbrt_embedded_ptx_lib.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
|
If you look at the output of nvcc --help for the nvcc you specified, does it include c++17 under the std option? Is it properly using a CUDA 11.0 install and not an earlier version? |
Yes:
--std {c++03|c++11|c++14|c++17} (-std)
Select a particular C++ dialect. Note that this flag also turns on the corresponding
dialect flag for the host compiler.
Allowed values for this option: 'c++03','c++11','c++14','c++17’.
|
@richardmgoodin You should update the PATH environment variable so nvcc v11 will be found before any other version:
export PATH=/usr/local/cuda/bin:$PATH # assuming /usr/local/cuda is where v11 is installed
which nvcc # should output /usr/local/cuda/bin/nvcc
nvcc --version # should be v11
Then, in the same terminal, run cmake (e.g. running cmake-gui in the same terminal should show you that all CUDA-related variables are automatically set to the right values on the first run). Now, you should be able to compile it. At least, this is what I see on my end. Oleg |
Two things:
1) installing the CUDA toolkit was setting up a separate v10 of everything, and
2) - DUH - not running cmake-gui from bash meant it wasn't picking up my environment.
I'm building now. One thing I've noticed is that the builds appear to be clean under Linux. Is this because the warnings are only logged? The Windows build throws tons of warnings.
Rich
|
Definitely something evil going on with Windows. sanmiguel ran for me in 166.6s under Linux. |
There are a couple of warnings under Linux as well. Different compilers could use different settings for this case, e.g. see the gcc warning options. I guess it needs more investigation, e.g. whether warnings are generated for the |
So I'm back on Windows after discovering that my GV100 wasn't really as slow as it was looking. I've built clean with the windows-gpu-rework branch and it looks faster. I use an app called "OpenHardwareMonitor" and it shows usage as follows: If I were to guess what could be going on, it's that the Windows driver is emulating shared memory by copying large blocks of memory up and down very frequently and thrashing the system. I'm not familiar with Nsight. Would it show this if it was happening? Just historically, even with the old Release w/sync code I was seeing GPU core usage around 70-80%. I don't know what @jiangwei007 is seeing; I have never seen utilization that low. What SM level is the 1070? San Miguel now runs in 1802.8s, which is about 15% faster. |
I just fired up Nsight Compute and am getting about 3% SM utilization. I don't know if I have run long enough to get past building the BVH structure. |
(Just for info, I opened #24 to look into the performance on Windows.) |
Closing this one out as well; AFAIK we are now all good on this front. |
I apologize in advance, as I'm new to Windows and CMake. I'm getting two classes of compile errors. The first is from nvcc.
Here's my command line:
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\nvcc.exe" -gencode=arch=compute_70,code="sm_70,compute_70" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64" -x cu -rdc=true -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.1.0\include" -I"C:\cygwin64\home\goodin\pbrt-v4\src" -I"C:\cygwin64\home\goodin\pbrt-v4\build" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\stb" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\openexr\IlmBase\Imath" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\openexr\IlmBase\Half" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\openexr\IlmBase\Iex" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\openexr\OpenEXR\IlmImf" -I"C:\cygwin64\home\goodin\pbrt-v4\build\src\ext\openexr\IlmBase\config" -I"C:\cygwin64\home\goodin\pbrt-v4\build\src\ext\openexr\OpenEXR\config" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\zlib" -I"C:\cygwin64\home\goodin\pbrt-v4\build\src\ext\zlib" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\filesystem" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\ptex\src\ptex" -I"C:\cygwin64\home\goodin\pbrt-v4\src\ext\double-conversion" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include" -G --keep-dir x64\Debug -maxrregcount=0 --machine 64 --compile -cudart static -Xcudafe --diag_suppress=partial_override -Xcudafe --diag_suppress=virtual_function_decl_hidden -Xcudafe --diag_suppress=integer_sign_change -Xcudafe --diag_suppress=declared_but_not_referenced -Xcudafe --diag_suppress=implicit_return_from_non_void_function --expt-relaxed-constexpr --extended-lambda -Xnvlink -suppress-stack-size-warning --std=c++17 /wd4305 /wd4244 /wd4843 /wd4267 /wd4838 /wd26495 /wd26451 -Xcompiler="/EHsc -Zi -Ob0" -g -use_fast_math -D_WINDOWS -D_CRT_SECURE_NO_WARNINGS -DPBRT_IS_MSVC -DPBRT_BUILD_GPU_RENDERER -DNVTX -DPBRT_HAS_INTRIN_H -DPBRT_IS_WINDOWS -DNOMINMAX
-D"PBRT_NOINLINE=__declspec(noinline)" -DPBRT_HAVE__ALIGNED_MALLOC -DPTEX_STATIC -D"CMAKE_INTDIR="Debug"" -DWIN32 -D_WINDOWS -D_CRT_SECURE_NO_WARNINGS -DPBRT_IS_MSVC -DPBRT_BUILD_GPU_RENDERER -DNVTX -DPBRT_HAS_INTRIN_H -DPBRT_IS_WINDOWS -DNOMINMAX -D"PBRT_NOINLINE=__declspec(noinline)" -DPBRT_HAVE__ALIGNED_MALLOC -DPTEX_STATIC -D"CMAKE_INTDIR="Debug"" -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Fdpbrt_lib.dir\Debug\pbrt_lib.pdb /FS /Zi /RTC1 /MDd /GR" -o pbrt_lib.dir\Debug\cameras.obj "C:\cygwin64\home\goodin\pbrt-v4\src\pbrt\cameras.cpp"
I'm getting the following error:
nvcc fatal : A single input file is required for a non-link phase when an outputfile is specified
When I run the command standalone it still fails. When I remove the output file it thinks the "/wd4305" is a file.
The second is an error:
C:/cygwin64/home/goodin/pbrt-v4/src\pbrt/util/image.h(317): warning #3059-D: calling a host function from a host device function is not allowed
C:/cygwin64/home/goodin/pbrt-v4/src\pbrt/textures.h(682): error #349: no operator "=" matches these operands
operand types are: pbrt::RGB = COLORREF
C:/cygwin64/home/goodin/pbrt-v4/src\pbrt/textures.h(685): error #349: no operator "=" matches these operands
operand types are: pbrt::RGB = float
C:/cygwin64/home/goodin/pbrt-v4/src\pbrt/util/spectrum.cpp(268): error #42: operand types are incompatible ("pbrt::RGB" and "COLORREF") |