
OSX: nim cpp -r torch.nim fails #7

Closed
timotheecour opened this issue Oct 8, 2018 · 11 comments
Labels
macOS question Further information is requested

Comments

@timotheecour
Contributor

after building ATEN via #5 (comment):

ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/lib nim cpp -r torch.nim
error: unknown type name 'constexpr'
ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output nim cpp -r --passC:-std=c++11 torch.nim
/Users/timothee/.cache/nim/torch_d/torch_tensors.cpp:206:14: error: no matching constructor for initialization of 'at::IntList' (aka 'ArrayRef<long long>')
        at::IntList temp(((long*) ((&self[((NI) 0)]))), selfLen_0);
                    ^    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/include/ATen/core/ArrayRef.h:67:13: note: candidate constructor not viable: no known conversion from 'long *' to 'const long long *' for 1st argument
  constexpr ArrayRef(const T* data, size_t length)

@sinkingsugar
Owner

I suspect you are not using the nightly branch.

OSX needs {.passC: "-std=c++14".}, which is already in there (on nightly).

@sinkingsugar sinkingsugar added the question Further information is requested label Oct 8, 2018
@sinkingsugar sinkingsugar changed the title nim cpp -r torch.nim fails OSX: nim cpp -r torch.nim fails Oct 8, 2018
@sinkingsugar
Owner

btw you can now also try:

conda create -n nimtorch -c fragcolor nimtorch
source activate nimtorch
nim cpp -r ~/miniconda3/envs/nimtorch/dist/nimtorch/torch.nim

Replace ~/miniconda3/ with your conda location if different

@timotheecour
Contributor Author

timotheecour commented Oct 9, 2018

thanks!

after brew cask install anaconda etc:

nim cpp -r ~/miniconda3/envs/nimtorch/dist/nimtorch/torch.nim

nim cpp -r /Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/torch.nim
Hint: used config file '/Users/timothee/homebrew/anaconda3/envs/nimtorch/config/nim.cfg' [Conf]
Hint: used config file '/Users/timothee/.config/nim/nim.cfg' [Conf]
Hint: used config file '/Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/nim.cfg' [Conf]
Hint: used config file '/Users/timothee/.config/nim/config.nims' [Conf]
D20180822T214914: in top level config.nims;
/Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/torch/native/convolutions.nim(60, 6) Hint: 'tensor_ops.use_miopen(self: ConvParams, input: Tensor)[declared in torch/native/convolutions.nim(60, 5)]' is declared but not used [XDeclaredButNotUsed]
  proc use_miopen(self: ConvParams; input: Tensor): bool =
       ^
/Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/torch/native/convolutions.nim(28, 6) Hint: 'tensor_ops.is_output_padding_big(self: ConvParams)[declared in torch/native/convolutions.nim(28, 5)]' is declared but not used [XDeclaredButNotUsed]
  proc is_output_padding_big(self: ConvParams): bool =
       ^
/Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/torch/native/convolutions.nim(22, 6) Hint: 'tensor_ops.is_padded(self: ConvParams)[declared in torch/native/convolutions.nim(22, 5)]' is declared but not used [XDeclaredButNotUsed]
  proc is_padded(self: ConvParams): bool =
       ^
/Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/torch/native/convolutions.nim(16, 6) Hint: 'tensor_ops.is_strided(self: ConvParams)[declared in torch/native/convolutions.nim(16, 5)]' is declared but not used [XDeclaredButNotUsed]
  proc is_strided(self: ConvParams): bool =
       ^
Hint:  [Link]
ld: warning: directory not found for option '-L/Users/timothee/homebrew/anaconda3/envs/nimtorch/lib64'
ld: can't open output file for writing: /Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/torch, errno=21 for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Error: execution of an external program failed: 'clang++   -o /Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/torch  /Users/timothee/.cache/nim/torch_d/stdlib_system.cpp.o /Users/timothee/.cache/nim/torch_d/torch_torch.cpp.o /Users/timothee/.cache/nim/torch_d/fragments_cpp.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_macros.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_tables.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_hashes.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_math.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_strutils.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_parseutils.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_bitops.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_algorithm.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_unicode.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_ospaths.cpp.o /Users/timothee/.cache/nim/torch_d/torch_torch_cpp.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_os.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_times.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_options.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_typetraits.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_strformat.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_posix.cpp.o /Users/timothee/.cache/nim/torch_d/torch_tensors.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_sequtils.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_sets.cpp.o /Users/timothee/.cache/nim/torch_d/torch_tensor_ops.cpp.o /Users/timothee/.cache/nim/torch_d/torch_autograd_macro.cpp.o /Users/timothee/.cache/nim/torch_d/torch_autograd_backward.cpp.o /Users/timothee/.cache/nim/torch_d/torch_functional.cpp.o  -lm -L/Users/timothee/homebrew/anaconda3/envs/nimtorch/lib -L/Users/timothee/homebrew/anaconda3/envs/nimtorch/lib64 -lcaffe2 -lcpuinfo -lsleef -pthread -lc10 -Wl,-rpath,'/Users/timothee/homebrew/anaconda3/envs/nimtorch/lib' -Wl,-rpath,'$ORIGIN'   -ldl'

this worked:

nim cpp -o:/tmp/d03/app -r /Users/timothee/homebrew/anaconda3/envs/nimtorch/dist/nimtorch/torch.nim

@sinkingsugar
Owner

Oh yeah true... -o is now needed 👍

@timotheecour
Contributor Author

What's the simplest way to use a custom Nim? I like to use Nim devel or a fork, whereas this uses the latest stable (0.19).

@sinkingsugar
Owner

https://github.com/fragcolor-xyz/nimtorch#semi-manual-way

This way you just use the ATen binaries, nothing else (you bring your own Nim, etc.)

@timotheecour
Contributor Author

timotheecour commented Oct 9, 2018

after updating to e9f110b

ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output nim cpp -r -f torch.nim

Error: execution of an external compiler program 'clang++ -c  -w -I/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/include -std=c++14  -I/Users/timothee/git_clone/nim/Nim/lib -o /Users/timothee/.cache/nim/torch_d/stdlib_strutils.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_strutils.cpp' failed with exit code: 1

/Users/timothee/.cache/nim/torch_d/torch_tensors.cpp:338:2: error: no matching function for call to 'copyMem_E1xtACub5WcDa3vbrIXbwgsystem'
        copyMem_E1xtACub5WcDa3vbrIXbwgsystem(((void*) ((&result->data[((NI) 0)]))), &((*self)[((NI) 0)]), ((NI)chckRange((NI)(TM_ezgizBtjhxIusTTGW3j9bgA_5), ((NI) 0), ((NI) IL64(9223372036854775807)))));
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/timothee/.cache/nim/torch_d/torch_tensors.cpp:319:23: note: candidate function not viable: no known conversion from 'const long long *' to 'void *' for 2nd argument
static N_INLINE(void, copyMem_E1xtACub5WcDa3vbrIXbwgsystem)(void* dest, void* source, NI size) {
                      ^

also tried via conda (cf. the semi-manual way):

export PATH=/Users/timothee/homebrew/anaconda3/bin:"$PATH"
conda create -n aten -c fragcolor aten=2018.10.09
source activate aten
nim cpp -o:test -r torch.nim

Error: execution of an external compiler program 'clang++ -c  -w -I/Users/timothee/homebrew/anaconda3/envs/aten/include -std=c++14  -I/Users/timothee/git_clone/nim/Nim/lib -o /Users/timothee/.cache/nim/torch_d/stdlib_math.cpp.o /Users/timothee/.cache/nim/torch_d/stdlib_math.cpp' failed with exit code: 1

/Users/timothee/.cache/nim/torch_d/torch_tensors.cpp:338:2: error: no matching function for call to 'copyMem_E1xtACub5WcDa3vbrIXbwgsystem'
        copyMem_E1xtACub5WcDa3vbrIXbwgsystem(((void*) ((&result->data[((NI) 0)]))), &((*self)[((NI) 0)]), ((NI)chckRange((NI)(TM_ezgizBtjhxIusTTGW3j9bgA_5), ((NI) 0), ((NI) IL64(9223372036854775807)))));
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/timothee/.cache/nim/torch_d/torch_tensors.cpp:319:23: note: candidate function not viable: no known conversion from 'const long long *' to 'void *' for 2nd argument
static N_INLINE(void, copyMem_E1xtACub5WcDa3vbrIXbwgsystem)(void* dest, void* source, NI size) {
                      ^
/Users/timothee/git_clone/nim/Nim/lib/nimbase.h:95:50: note: expanded from macro 'N_INLINE'
#  define N_INLINE(rettype, name) inline rettype name
                                                 ^

@sinkingsugar
Owner

Nice catch! Our tests were not covering clang; gonna add that too 😄

@sinkingsugar
Owner

sinkingsugar commented Oct 10, 2018

Should be fixed now, could you try to force-update your aten package? (it has the same version)

edit: actually this does not seem to work... let me see to bump version

@sinkingsugar
Owner

Released 2018.10.10 now. Tested locally, seems fixed!

@timotheecour
Contributor Author

timotheecour commented Oct 11, 2018

ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output nim cpp -r -f -o:/tmp/z01 torch.nim now works, thanks!
(note: all I did was update nimtorch; I didn't even rebuild aten, but it seemed to work anyway)
