Update from upstream repo Microsoft/CNTK@master #3

Merged Feb 26, 2018 (186 commits)
Commits (186)
7c08325
Adding ResNet101_ImageNet_CNTK/ResNet152_ImageNet_CNTK in
MingyuJoo Nov 28, 2017
348773e
add missing ops (logical, reduceL1, reduceL2, etc.) with ONNX support
liqunfu Dec 31, 2017
0e069fa
fix bugs in crosstalkcaffe
yuxiaoguo Jan 2, 2018
3889d0a
Update current_iteration.md
ebarsoumMS Jan 2, 2018
14105a2
Update current_iteration.md
ebarsoumMS Jan 2, 2018
5214e2e
Integrate v-yuxgu/crosstalkcaffe-bugfix into master
Jan 3, 2018
6477d5b
gather over axis
n17s Jan 3, 2018
8bb6864
Expose StopGradient node in BS
vmazalov Jan 3, 2018
6ee82cc
transforms with composite source work correctly.
n17s Jan 3, 2018
9bfe525
Bump up model version to 29
vmazalov Jan 5, 2018
13ab006
Integrate vadimma/StopGradientBS into master
Jan 5, 2018
ae2678f
Integrate nikosk/synced-xforms3 into master
Jan 6, 2018
5424fa6
Grammar fix.
tsbertalan Jan 7, 2018
21c4d20
Update current_iteration.md
n17s Jan 8, 2018
a04c1c9
Adding error message for Convolution layer when groups > 1.
Jan 9, 2018
7142995
Allow cntk.times operator over tensors: [shape, free_axis] x [free_ax…
tangyuq Jan 9, 2018
8b261b8
Integrate sptiwari/group_conv_layer_bugfix into master
Jan 10, 2018
959f1c8
Fixes #2704 (#2805)
nglee Jan 12, 2018
0b76ebc
Introduce lattice reader
vmazalov Nov 23, 2017
2758a8c
Lattice reader, cont
vmazalov Nov 25, 2017
e1f1101
Lattice reader, cont
vmazalov Nov 28, 2017
31c8c1e
Lattice deserializer - index
vmazalov Nov 29, 2017
e43e23f
Lattice reader, add lattice index
vmazalov Dec 1, 2017
f9539f8
make the initial version compile
vmazalov Dec 2, 2017
b88cac1
Refine lattice reader changes
vmazalov Dec 3, 2017
4731e40
Update SGD to not consume lattice for new serializer
vmazalov Dec 5, 2017
5a01480
Update the lattice reader sample size
vmazalov Dec 6, 2017
1b2a256
Add SequenceWithLatticeNode
vmazalov Dec 7, 2017
4a96f60
Ensure lattice consumption in MB
vmazalov Dec 8, 2017
5d7e995
Add parameters for the lattice symlist
vmazalov Dec 9, 2017
b2edf0c
Rename lattice file parameters
vmazalov Dec 9, 2017
fe31b52
Ensure that lattice metadata is passed inside the node
vmazalov Dec 9, 2017
b209dd0
Refactor lattice methods
vmazalov Dec 10, 2017
8aa9342
Add symlistpath as a parameter to the SE node
vmazalov Dec 10, 2017
589aa30
Adding binary packing
eldakms Dec 11, 2017
113cdf0
Initial work on deserialization of the lattice
vmazalov Dec 12, 2017
30bf710
Make the new lattice node inherit from the legacy one
vmazalov Dec 13, 2017
6e485ad
Further work to enable lattice deserializer in Sequence Node
vmazalov Dec 13, 2017
98594d5
Enabled more SE parameters
vmazalov Dec 14, 2017
71880a9
Able to run up to lattice consumption in the SE node
vmazalov Dec 14, 2017
c36448c
Able to run for single sequence
vmazalov Dec 17, 2017
8c8560b
Initial support of multiple lattice sequences
vmazalov Dec 17, 2017
3dba770
Testing parallel sequences
vmazalov Dec 18, 2017
a01da0c
Fix lattice index for last sequence
vmazalov Dec 18, 2017
f5edf14
Some cleanup of the SE with lattice node
vmazalov Dec 18, 2017
cf28412
update preferred device for lattice
vmazalov Dec 19, 2017
815e3fc
Fixing definesMbSize for HTK deserializer
eldakms Dec 19, 2017
8ab9d94
Fix indexing of lattice sequences
vmazalov Dec 19, 2017
6fc83ca
Ensure lattice sequences match the label sequences
vmazalov Dec 20, 2017
9041489
Fixing non existing sequence
eldakms Dec 20, 2017
36315e7
Some asserts and prints
vmazalov Dec 21, 2017
0750688
some debugging statements
vmazalov Dec 22, 2017
60c2996
Some optimization of the binary reader
vmazalov Dec 25, 2017
d38c060
Add new lattice reader test
vmazalov Dec 25, 2017
9565deb
Remove tabs
vmazalov Dec 26, 2017
554610e
Some gcc fixes
vmazalov Dec 26, 2017
4b975a0
Explicitly reference SequenceWithSoftmax node methods
vmazalov Dec 26, 2017
13206a4
Add lattice index builder to makefile
vmazalov Dec 27, 2017
646cf60
Revert pre-compute break statement
vmazalov Dec 27, 2017
c848303
Update E2E test for lattice reader
vmazalov Dec 29, 2017
bd783aa
Debug statement for the Linux test
vmazalov Dec 30, 2017
4135324
Add check if lattice index file exists
vmazalov Dec 30, 2017
fcb3a2f
Add some logs for the lattice reader
vmazalov Dec 31, 2017
8912cbf
Some debugging of failed linux test
vmazalov Dec 31, 2017
a5b78ee
Some more debugging
vmazalov Dec 31, 2017
0adc7be
Update e2e test baselines
vmazalov Dec 31, 2017
d4e71b8
Return the SequenceTraining baselines
vmazalov Jan 1, 2018
0ea1eaf
Remove current dir logging
vmazalov Jan 1, 2018
6e39b34
Clean up boundary code
vmazalov Jan 3, 2018
733a4e7
Fixing mbsize
eldakms Jan 3, 2018
1d04199
Forcibly moving allocation of lattices to CPU
eldakms Jan 3, 2018
8c60dca
Ensure the new reader output is identical to the old reader
vmazalov Jan 5, 2018
e230130
Some cleanup of the lattice reader change
vmazalov Jan 5, 2018
6f9185f
Revert changes of the HTKMLFreader
vmazalov Jan 5, 2018
75f3e43
Ensure lattice sequence owns the buffer
vmazalov Jan 6, 2018
ce3ff8c
Remove baselines for update
vmazalov Jan 6, 2018
aaaa939
Expose latticedeserializer in python
vmazalov Jan 8, 2018
9f1d5e2
Update old lattice reader test to 1 utterance
vmazalov Jan 8, 2018
5489a4e
Change the old lattice reader config back
vmazalov Jan 8, 2018
e57bac1
Further lattice reader cleanup
vmazalov Jan 8, 2018
73f2315
Ensure the clean up builds
vmazalov Jan 8, 2018
25d79f1
Update the SE with new reader test
vmazalov Jan 8, 2018
bf673b8
Disable parallel lattice construction
vmazalov Jan 9, 2018
697793b
Ensure the e2e tests between old and new readers match
vmazalov Jan 9, 2018
5bb82fa
Expose lattice reader to python
vmazalov Jan 10, 2018
9c70a43
Python lattice reader test
vmazalov Jan 10, 2018
0a6a429
Update baselines to ensure they match
vmazalov Jan 10, 2018
eb88155
Include the python test and bump up the model version
vmazalov Jan 10, 2018
44f21e7
Update python test
vmazalov Jan 11, 2018
2cac661
Update the python test
vmazalov Jan 11, 2018
f3e9993
Remove main from the lattice test
vmazalov Jan 12, 2018
bcbf95b
Some fixes after rebase
vmazalov Jan 12, 2018
01c22ad
Change the docstring
vmazalov Jan 13, 2018
15e705d
add batch matmul
Jan 8, 2018
7d67daf
Enable max seq size config in HTK reader
vmazalov Jan 14, 2018
5962f71
Introduce the maxSequenceSize config to the HTK feature reader
vmazalov Jan 14, 2018
ea6834b
Restrict the size of lattice
vmazalov Jan 14, 2018
f79324c
Ensure the lattice serializer handles several chunks in a file
vmazalov Jan 15, 2018
1fe17f8
Add some debugging statements
vmazalov Jan 15, 2018
d26f547
debug large lattice entries
vmazalov Jan 15, 2018
1a81d41
Fix batch matmul test failures
Jan 15, 2018
d929ecd
Add logs for lattice index builder
vmazalov Jan 15, 2018
77c95a4
Trim lattice TOC line entries
vmazalov Jan 15, 2018
01c1509
Fixed typos in tutorials.
Jan 1, 2018
e6dc1a4
Refactor lattice sequence initialization
vmazalov Jan 15, 2018
05cbd0a
Remove too much logging
vmazalov Jan 16, 2018
8b6af80
Update the max_sequence_length
vmazalov Jan 16, 2018
10d7130
Integrate vadimma/LatticeSerializer into master
Jan 16, 2018
5bdaed7
Integrate yuqtang/TimesOnFreeAxes into master
Jan 16, 2018
c3d01e4
Enabling evaluation on GPU
eldakms Jan 3, 2018
8066621
Fixing BPTT for case when the minibatch size changes
eldakms Jan 3, 2018
f07a98d
Adding find_by_uid to logging API.
Jan 17, 2018
4836687
Fixing bug in ONNX pooling node serialization.
Jan 17, 2018
f36c73c
Integrate sptiwari/find_by_uid4 into master
Jan 19, 2018
997d3a1
Replace CntkCsAssemblyVersion by CntkComponentVersion on VS projects
Jan 17, 2018
3765da9
Support CNTK installation in VirtualEnv
Jan 20, 2018
eeb0645
Removing find_by_uid API.
Jan 22, 2018
3cf3af5
CNTK support for CUDA 9
Jan 23, 2018
e883520
Remove redefinition of CNTK_COMPONENT_VERSION
Jan 23, 2018
a40285c
Rename default Anaconda python environment
Jan 22, 2018
a087490
Integrate sptiwari/remove_find_by_uid into master
Jan 23, 2018
9f6cced
Fix Dockerfiles under Tools/docker
Jan 23, 2018
387725d
Merge branch 'kedeng/fixDocker2'
Jan 24, 2018
851ea5d
Fix linux binary drop script for nccl
Jan 24, 2018
a7a52d7
Adding halide based binary convolution operators and their dependencies
jaliyae Jan 23, 2018
af26e4f
Fix TensorBoard on_write_test_summary method (#2878)
rnrneverdies Jan 25, 2018
32c4e04
Update CNTK_204_Sequence_To_Sequence.ipynb
zingdle Jan 25, 2018
ffc7507
Integrate jaliyaek/halide_squash2 into master
Jan 25, 2018
343f383
Remove Python 3.4 support
Jan 26, 2018
dd47e21
Making Halide default in Jenkin builds
jaliyae Jan 27, 2018
950ac47
Integrate jaliyaek/config into master
Jan 28, 2018
ba9c2e7
Fix build if optional MKLDNN is not present
Jan 28, 2018
be931c8
Update post-build.cmd
mhamilton723 Jan 25, 2018
08cc45c
Edit makefile
mhamilton723 Jan 25, 2018
6db8b6e
Add image writer SOs and zlib dlls
mhamilton723 Jan 26, 2018
166c8bc
add debug dll support
mhamilton723 Jan 29, 2018
8569868
fix variable
mhamilton723 Jan 29, 2018
6e846aa
fix makefile
mhamilton723 Jan 29, 2018
6ea2c67
Break Nuget packages
Jan 29, 2018
00158ff
Merge pull request #2887 from Microsoft/marhamil-fix-jar
mhamilton723 Jan 30, 2018
2a764a7
add ImageScaler, fix ConvTranspose
liqunfu Jan 30, 2018
f18bcda
Updating release notes with ONNX and other feature work (e.g. group c…
spandantiwari Jan 30, 2018
e91df39
Pooling missing code fix
liqunfu Jan 30, 2018
22cea7c
broadcast of scalar
liqunfu Jan 31, 2018
4bb3031
Making axis attribute optional in Concat.
Jan 31, 2018
b09ff2b
Merge branch 'master' into liqun/onnx17Stage2
Jan 31, 2018
688b96e
Update current_iteration.md
jaliyae Jan 31, 2018
abfc3a8
Update current_iteration.md
jaliyae Jan 31, 2018
7877e3f
add hierarchical softmax
Jan 31, 2018
e129bcf
MakeBinaryDrop: Change Cuda libs to 9.0
Jan 31, 2018
42a47a3
Integrate yacheo/h-softmax into master
Feb 1, 2018
fbbbaa9
Merge release/2.4
Feb 1, 2018
3a19d2f
Adding find_by_uid to logging API.
Feb 1, 2018
3660b7a
Node timing and profile details format in chrome://tracing.
Feb 2, 2018
977e402
Backward compat bug fix in LeakyReLu reverting alpha to type double.
Feb 5, 2018
6556e1f
Passing device ID in DistComm for NCCL initialization
junjieqian Jan 26, 2018
d615602
Integrate sptiwari/onnx_leakyrelu_fix into master
Feb 6, 2018
56e1f3b
Fix typo
k-ujihara Feb 7, 2018
c5abe7f
Add KERAS_BACKEND=cntk into CNTK Dockerfiles
Feb 7, 2018
aa1aaf8
Bug fix in ONNX broadcasting for scalar.
Feb 8, 2018
1142563
Integrate sptiwari/onnx_scalar_constant into master
Feb 8, 2018
08ba214
Update README.md
mx-iao Feb 9, 2018
062ce4d
Merge pull request #2822 from tsbertalan/master
mx-iao Feb 9, 2018
06ee7b8
Merge pull request #2940 from kazuyaujihara/patch/csexampletypo
mx-iao Feb 9, 2018
2b94e7d
Merge pull request #2880 from rnrneverdies/master
mx-iao Feb 9, 2018
8b71ff0
Merge pull request #2881 from zingdle/patch-2
mx-iao Feb 9, 2018
bd121f6
Merge pull request #2843 from kaiidams/kaiidams/typotut206
mx-iao Feb 9, 2018
08bcbdc
Merge pull request #2690 from MingyuJoo/CNTK-2-PretrainedModels
mx-iao Feb 9, 2018
19719a6
Fast tensor ops using MKL
Feb 9, 2018
aa7447e
First round of changes for ONNX FreeDimension support.
Feb 13, 2018
6da3408
Fix for ONNX ConvTranspose node loading issue.
Feb 14, 2018
206db8c
Fix Tutorial 201B for convergence issue.
Feb 14, 2018
1a1e08c
Make the lattice deserialization parallel
vmazalov Feb 2, 2018
c3b6f1f
Introduce lattice reader verbosity
vmazalov Feb 7, 2018
838a433
Some refactoring
vmazalov Feb 8, 2018
54ba085
Integrate kedeng/fix201B2 into master
Feb 14, 2018
2237dd0
Add support for FreeDimension in Pooling/Unpooling
Feb 14, 2018
e971570
Introduce latticeConfigPath to SE node
vmazalov Feb 13, 2018
e186ddd
Integrate vadimma/latpar into master
Feb 15, 2018
5f1d710
Add nightly build badges
Feb 15, 2018
4017a16
Adding mean_variance_normalization CNTK and ONNX op, and LayerNormali…
Feb 16, 2018
461b82d
Move hard-coded CNTK version to a common place
Feb 20, 2018
9032174
Disable MKL-DNN for now before we pick up the fix for AMD cache size …
Feb 21, 2018
1c78107
Handle malformed lattice
vmazalov Feb 17, 2018
523af50
Minor refactoring
vmazalov Feb 21, 2018
695bdf7
Fix crash in CBF when crossing sweep boundary
Feb 23, 2018
1 change: 1 addition & 0 deletions .gitattributes
@@ -65,6 +65,7 @@ Makefile text
*.asax text

*.h text
*.hpp text
*.cpp text
*.cc text
*.cu text
4 changes: 3 additions & 1 deletion .gitignore
@@ -224,6 +224,7 @@ bindings/python/cntk/cntk_py.py
bindings/python/cntk/libs/
bindings/python/cntk/cntk_py_wrap.cpp
bindings/python/cntk/cntk_py_wrap.h
bindings/python/cntk/VERSION
bindings/python/dist/
bindings/python/doc/cntk.*.rst
bindings/python/doc/cntk.rst
@@ -332,7 +333,8 @@ Manual/.ipynb_checkpoints
Examples/Text/LightRNN/LightRNN/*.so

# other
/packages
packages/
/CNTK.VC.db
/CNTK.VC.VC.opendb
/Local
.vs/
29 changes: 28 additions & 1 deletion CNTK.Common.props
@@ -30,7 +30,34 @@
<HasJava>false</HasJava>
<HasJava Condition="Exists('$(JAVA_HOME)\bin\javac.exe')">true</HasJava>

<CntkComponentVersion>2.3.1</CntkComponentVersion>
<!-- Set CNTK version related properties -->

<!-- CntkVersion:
CNTK version which should be used where CNTK version is required. Ex: print version or tag CNTK binaries. Default value is the last released version of CNTK. -->
<!-- NOTE: Modify both CntkVersion and PublicBuild during MAJOR RELEASE -->
<CntkVersion>2.4</CntkVersion>

<!-- PublicBuild:
True if built binaries are meant to be shared publicly with the CNTK community. -->
<!-- NOTE: Modify both CntkVersion and PublicBuild during MAJOR RELEASE -->
<PublicBuild>false</PublicBuild>

<!-- CntkVersionProvidedExternally:
Hard-coded CntkVersion can be overridden if property BUILD_CNTK_VERSION is present. -->
<CntkVersionProvidedExternally>false</CntkVersionProvidedExternally>
<CntkVersionProvidedExternally Condition=" '$(BUILD_CNTK_VERSION)' != '' ">true</CntkVersionProvidedExternally>

<CntkVersion Condition="$(CntkVersionProvidedExternally)">$(BUILD_CNTK_VERSION)</CntkVersion>
<PublicBuild Condition="$(CntkVersionProvidedExternally)">true</PublicBuild>

<!-- CntkVersionBanner:
Cntk Version banner is printed wherever CntkVersion should be printed. ex: python -c 'import cntk;cntk.__version__'. -->
<CntkVersionBanner>$(CntkVersion)</CntkVersionBanner>
<CntkVersionBanner Condition="!$(PublicBuild)">$(CntkVersionBanner)+</CntkVersionBanner>

<!-- CntkComponentVersion:
Cntk binaries (generated by build) are appended with CntkComponentVersion. Ex: Cntk.Core-$(CntkComponentVersion).dll -->
<CntkComponentVersion>$(CntkVersion)</CntkComponentVersion>
<CntkComponentVersion Condition="$(DebugBuild)">$(CntkComponentVersion)d</CntkComponentVersion>
</PropertyGroup>
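The version-resolution logic in this property group can be sketched in Python. This is a hypothetical model of how MSBuild evaluates the properties above, not part of the build; the function name and signature are illustrative only.

```python
def resolve_versions(build_cntk_version=None, debug_build=False):
    """Sketch of the CNTK.Common.props version logic (hypothetical model)."""
    cntk_version = "2.4"          # hard-coded default: the last released CNTK version
    public_build = False
    if build_cntk_version:        # BUILD_CNTK_VERSION supplied externally overrides both
        cntk_version = build_cntk_version
        public_build = True
    # Non-public builds print a "+" suffix in the version banner.
    banner = cntk_version if public_build else cntk_version + "+"
    # Binaries are named like Cntk.Core-<component>.dll; debug builds append "d".
    component = cntk_version + ("d" if debug_build else "")
    return cntk_version, public_build, banner, component
```

For example, a default internal build yields the banner "2.4+" and component version "2.4", while a debug internal build produces the component version "2.4d".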

64 changes: 30 additions & 34 deletions CNTK.Cpp.props
@@ -3,19 +3,10 @@
<Import Project="$(SolutionDir)\CNTK.Common.props" />
<PropertyGroup>
<CudaVersion />
<CudaVersion Condition="Exists('$(CUDA_PATH_V8_0)') And '$(CudaVersion)' == ''">8.0</CudaVersion>
<CudaVersion Condition="Exists('$(CUDA_PATH_V7_5)') And '$(CudaVersion)' == ''">7.5</CudaVersion>

<NvmlInclude />
<NvmlInclude Condition="'$(CudaVersion)' == '7.5'">"c:\Program Files\NVIDIA Corporation\GDK\gdk_win7_amd64_release\nvml\include"</NvmlInclude>
<NvmlInclude Condition="'$(CudaVersion)' == '8.0'" />

<NvmlLibPath />
<NvmlLibPath Condition="'$(CudaVersion)' == '7.5'">"c:\Program Files\NVIDIA Corporation\GDK\gdk_win7_amd64_release\nvml\lib"</NvmlLibPath>
<NvmlLibPath Condition="'$(CudaVersion)' == '8.0'" />
<CudaVersion Condition="Exists('$(CUDA_PATH_V9_0)') And '$(CudaVersion)' == ''">9.0</CudaVersion>

<NvmlDll>%ProgramW6432%\NVIDIA Corporation\NVSMI\nvml.dll</NvmlDll>
<NvmlDll Condition="Exists('c:\local\bindrop\NVSMI\nvml.dll')">c:\local\bindrop\NVSMI\nvml.dll</NvmlDll>
<NvmlDll Condition="Exists('c:\local\nvsmi9\NVSMI\nvml.dll')">c:\local\nvsmi9\NVSMI\nvml.dll</NvmlDll>

<HasOpenCv>false</HasOpenCv>
<HasOpenCv Condition="Exists('$(OPENCV_PATH)') Or Exists('$(OPENCV_PATH_V31)')">true</HasOpenCv>
@@ -65,16 +56,22 @@
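The NvmlDll selection in the hunk above relies on MSBuild's in-order property evaluation: a default value is set first, and each later `Condition="Exists(...)"` element overwrites it, so the last existing override wins. A minimal Python sketch of that rule (the function name is illustrative; `path_exists` is injected so the logic can be checked without those paths present):

```python
def pick_nvml_dll(path_exists):
    """Last-writer-wins selection of nvml.dll, as in CNTK.Cpp.props (a sketch)."""
    nvml = r"%ProgramW6432%\NVIDIA Corporation\NVSMI\nvml.dll"  # default location
    # Later <NvmlDll Condition="Exists(...)"> elements override the default in order.
    for override in (r"c:\local\bindrop\NVSMI\nvml.dll",
                     r"c:\local\nvsmi9\NVSMI\nvml.dll"):
        if path_exists(override):
            nvml = override
    return nvml
```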

<PropertyGroup Condition="!$(IsUWP)">
<MathLibrary>MKL</MathLibrary>
<MathIncludePath>$(MKLML_PATH)\include</MathIncludePath>
<MathIncludePath>$(MKL_PATH)\include</MathIncludePath>
<MathDefine>USE_MKL</MathDefine>
<!-- Only non-UWP configurations consume PerformanceProfiler -->
<ReaderLibs>Cntk.PerformanceProfiler-$(CntkComponentVersion).lib;$(ReaderLibs)</ReaderLibs>
<MathLibraryName>MKL-ML Library</MathLibraryName>
<MathLibraryPath>$(MKLML_PATH)\lib</MathLibraryPath>
<MathLibraryName>MKL Library</MathLibraryName>
<MathLibraryPath>$(MKL_PATH)\lib</MathLibraryPath>
<MathLinkLibrary>mklml.lib</MathLinkLibrary>
<MathDelayLoad>mklml.dll</MathDelayLoad>
<MathPostBuildCopyPattern>$(MathLibraryPath)\*.dll</MathPostBuildCopyPattern>
<UnitTestDlls>$(OutDir)mklml.lib;$(OutDir)libiomp5md.dll;</UnitTestDlls>
<HasMklDnn>false</HasMklDnn>
<!-- disable MKL-DNN until we pick up the fix for AMD cache size https://github.com/intel/mkl-dnn/commit/ccfbf83ab489b42f7452b6701498b07c28cdb502
<HasMklDnn Condition="Exists('$(MKL_PATH)\include\mkldnn.h')">true</HasMklDnn>
<MathDefine Condition="$(HasMklDnn)">$(MathDefine);USE_MKLDNN</MathDefine>
-->
<MathLinkLibrary Condition="$(HasMklDnn)">$(MathLinkLibrary);mkldnn.lib</MathLinkLibrary>
<MathDelayLoad Condition="$(HasMklDnn)">$(MathDelayLoad);mkldnn.dll</MathDelayLoad>
</PropertyGroup>
<PropertyGroup Condition="$(UseZip)">
<ZipInclude>$(ZLIB_PATH)\include;$(ZLIB_PATH)\lib\libzip\include;</ZipInclude>
@@ -109,31 +106,19 @@
<ProtobufLib Condition="$(DebugBuild)">libprotobufd.lib</ProtobufLib>
</PropertyGroup>

<PropertyGroup Condition="'$(CudaVersion)' == '8.0'">
<CudaPath>$(CUDA_PATH_V8_0)</CudaPath>
<CudaRuntimeDll>cudart64_80.dll</CudaRuntimeDll>
<CudaDlls>cublas64_80.dll;cusparse64_80.dll;curand64_80.dll;$(CudaRuntimeDll)</CudaDlls>
<PropertyGroup Condition="'$(CudaVersion)' == '9.0'">
<CudaPath>$(CUDA_PATH_V9_0)</CudaPath>
<CudaRuntimeDll>cudart64_90.dll</CudaRuntimeDll>
<CudaDlls>cublas64_90.dll;cusparse64_90.dll;curand64_90.dll;$(CudaRuntimeDll)</CudaDlls>

<!-- Use NvidiaCompute to define nvcc target architectures (will generate code to support them all, i.e. fat-binary, in release mode)
In debug mode we only include cubin/PTX for 30 and rely on PTX / JIT to generate the required native cubin format
http://docs.nvidia.com/cuda/pascal-compatibility-guide/index.html#building-applications-with-pascal-support -->
<NvidiaCompute Condition="$(DebugBuild)">$(CNTK_CUDA_CODEGEN_DEBUG)</NvidiaCompute>
<NvidiaCompute Condition="$(DebugBuild) And '$(NvidiaCompute)'==''">compute_30,sm_30</NvidiaCompute>

<NvidiaCompute Condition="$(ReleaseBuild)">$(CNTK_CUDA_CODEGEN_RELEASE)</NvidiaCompute>
<NvidiaCompute Condition="$(ReleaseBuild) And '$(NvidiaCompute)'==''">compute_30,sm_30;compute_35,sm_35;compute_50,sm_50;compute_60,sm_60;compute_61,sm_61</NvidiaCompute>
</PropertyGroup>

<PropertyGroup Condition="'$(CudaVersion)' == '7.5'">
<CudaPath>$(CUDA_PATH_V7_5)</CudaPath>
<CudaRuntimeDll>cudart64_75.dll</CudaRuntimeDll>
<CudaDlls>cublas64_75.dll;cusparse64_75.dll;curand64_75.dll;$(CudaRuntimeDll)</CudaDlls>

<NvidiaCompute Condition="$(DebugBuild)">$(CNTK_CUDA_CODEGEN_DEBUG)</NvidiaCompute>
<NvidiaCompute Condition="$(DebugBuild) And '$(NvidiaCompute)'==''">compute_30,sm_30</NvidiaCompute>

<NvidiaCompute Condition="$(ReleaseBuild)">$(CNTK_CUDA_CODEGEN_RELEASE)</NvidiaCompute>
<NvidiaCompute Condition="$(ReleaseBuild) And '$(NvidiaCompute)'==''">compute_30,sm_30;compute_35,sm_35;compute_50,sm_50</NvidiaCompute>
<NvidiaCompute Condition="$(ReleaseBuild) And '$(NvidiaCompute)'==''">compute_30,sm_30;compute_35,sm_35;compute_50,sm_50;compute_60,sm_60;compute_61,sm_61;compute_70,sm_70</NvidiaCompute>
</PropertyGroup>
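The NvidiaCompute selection above can be modeled as follows. This is a sketch, not the build itself: `env_debug` and `env_release` stand in for the CNTK_CUDA_CODEGEN_DEBUG and CNTK_CUDA_CODEGEN_RELEASE environment variables, and the function name is hypothetical.

```python
def nvidia_compute(debug_build, env_debug="", env_release=""):
    """Sketch of NvidiaCompute selection for CUDA 9.0 builds (hypothetical model)."""
    if debug_build:
        # Debug: emit cubin/PTX only for sm_30; other GPUs rely on PTX JIT.
        return env_debug or "compute_30,sm_30"
    # Release: fat binary covering sm_30 through sm_70.
    return env_release or ("compute_30,sm_30;compute_35,sm_35;compute_50,sm_50;"
                           "compute_60,sm_60;compute_61,sm_61;compute_70,sm_70")
```

Setting the corresponding environment variable replaces the default list entirely, which lets a developer trade fat-binary coverage for faster local builds.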

<PropertyGroup>
@@ -144,21 +129,32 @@
<CudaMsbuildPath Condition="'$(CudaMsbuildPath)' == ''">$(VCTargetsPath)\BuildCustomizations</CudaMsbuildPath>
</PropertyGroup>

<PropertyGroup>
<PlatformToolset>v141</PlatformToolset>
</PropertyGroup>

<PropertyGroup Condition="Exists('$(HALIDE_PATH)')">
<HalidePath>$(HALIDE_PATH)</HalidePath>
<HalideInclude>$(HALIDE_PATH)\include;</HalideInclude>
<HalideLibPath>$(HALIDE_PATH)\Release;</HalideLibPath>
<HalideLib>halide.lib</HalideLib>
</PropertyGroup>

<!-- TODO warn if ConfigurationType not (yet) defined -->

<PropertyGroup Condition="'$(ConfigurationType)' == 'StaticLibrary'">
<UseDebugLibraries>$(DebugBuild)</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
<WholeProgramOptimization>$(ReleaseBuild)</WholeProgramOptimization>
<LinkIncremental>$(DebugBuild)</LinkIncremental>
</PropertyGroup>

<ItemDefinitionGroup>
<ClCompile>
<PreprocessorDefinitions>CNTK_COMPONENT_VERSION="$(CntkComponentVersion)"</PreprocessorDefinitions>
<PreprocessorDefinitions>CNTK_VERSION="$(CntkVersion)";CNTK_VERSION_BANNER="$(CntkVersionBanner)";CNTK_COMPONENT_VERSION="$(CntkComponentVersion)"</PreprocessorDefinitions>
<!-- UWP does not use MPI -->
<PreprocessorDefinitions Condition="!$(IsUWP)">%(PreprocessorDefinitions);HAS_MPI=1</PreprocessorDefinitions>
<PreprocessorDefinitions Condition="'$(CudaVersion)' == '9.0'">%(PreprocessorDefinitions);CUDA_NO_HALF;__CUDA_NO_HALF_OPERATORS__</PreprocessorDefinitions>
</ClCompile>
</ItemDefinitionGroup>
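The preprocessor-definition string assembled by the ClCompile group above can be sketched like this (a hypothetical model; the default version values assume the 2.4 non-public build from CNTK.Common.props):

```python
def preprocessor_definitions(is_uwp, cuda_version, version="2.4",
                             banner="2.4+", component="2.4"):
    """Sketch of the ClCompile PreprocessorDefinitions (hypothetical model)."""
    defs = ['CNTK_VERSION="%s"' % version,
            'CNTK_VERSION_BANNER="%s"' % banner,
            'CNTK_COMPONENT_VERSION="%s"' % component]
    if not is_uwp:
        defs.append("HAS_MPI=1")  # UWP builds do not use MPI
    if cuda_version == "9.0":
        # Work around half-precision operator changes introduced with CUDA 9.
        defs += ["CUDA_NO_HALF", "__CUDA_NO_HALF_OPERATORS__"]
    return ";".join(defs)
```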
