
Neovim is stuck at 100% CPU #77

Closed
Clonkk opened this issue Apr 7, 2021 · 12 comments

Comments


Clonkk commented Apr 7, 2021

When opening a Nim file with Neovim, it gets stuck at 100% CPU a while after the file is opened. Some actions trigger the bug instantly (inserting "{" will do it, for instance).

The bug happens when opening the file https://github.com/SciNim/flambeau/blob/master/flambeau/raw_bindings/tensors.nim
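
For context, the server is registered through coc.nvim (the clientInfo in the log below shows coc.nvim 0.0.80). The setup is roughly the following coc-settings.json sketch, assuming nimlsp is on PATH; the exact options on my machine may differ:

{
  // coc-settings.json (JSONC): register nimlsp as a custom language server for Nim files
  "languageserver": {
    "nim": {
      "command": "nimlsp",
      "filetypes": ["nim"],
      "rootPatterns": ["*.nimble", ".git/"],
      "trace.server": "verbose"
    }
  }
}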

Here is the log when running nimlsp in debug mode:

Version: 0.3.1

explicitSourcePath: /home/rcaillaud/.choosenim/toolchains/nim-1.4.4

Trying to read frame

Got frame:
{"jsonrpc":"2.0","id":0,"method":"initialize","params":{"processId":29947,"rootPath":"/home/rcaillaud/Workspace/localws/ForkedProjects/flambeau","rootUri":"file:///home/rcaillaud/Workspace/localws/ForkedProjects/flambeau","capabilities":{"workspace":{"applyEdit":true,"workspaceEdit":{"documentChanges":true,"resourceOperations":["create","rename","delete"],"failureHandling":"textOnlyTransactional"},"didChangeConfiguration":{"dynamicRegistration":true},"didChangeWatchedFiles":{"dynamicRegistration":true},"symbol":{"dynamicRegistration":true,"symbolKind":{"valueSet":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]},"tagSupport":{"valueSet":[1]}},"executeCommand":{"dynamicRegistration":true},"configuration":true,"workspaceFolders":true},"textDocument":{"publishDiagnostics":{"relatedInformation":true,"versionSupport":false,"tagSupport":{"valueSet":[1,2]}},"synchronization":{"dynamicRegistration":true,"willSave":true,"willSaveWaitUntil":true,"didSave":true},"completion":{"dynamicRegistration":true,"contextSupport":true,"completionItem":{"snippetSupport":true,"commitCharactersSupport":true,"documentationFormat":["markdown","plaintext"],"deprecatedSupport":true,"preselectSupport":true,"tagSupport":{"valueSet":[1]}},"completionItemKind":{"valueSet":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]}},"hover":{"dynamicRegistration":true,"contentFormat":["markdown","plaintext"]},"signatureHelp":{"dynamicRegistration":true,"contextSupport":true,"signatureInformation":{"documentationFormat":["markdown","plaintext"],"activeParameterSupport":true,"parameterInformation":{"labelOffsetSupport":true}}},"definition":{"dynamicRegistration":true},"references":{"dynamicRegistration":true},"documentHighlight":{"dynamicRegistration":true},"documentSymbol":{"dynamicRegistration":true,"symbolKind":{"valueSet":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]},"hierarchicalDocumentSymbolSupport":true,"tagSupport":{"valueSet":[1]}},"codeAction":{"dynamicRegistration":true,"isPreferredSupport":true,"codeActionLiteralSupport":{"codeActionKind":{"valueSet":["","quickfix","refactor","refactor.extract","refactor.inline","refactor.rewrite","source","source.organizeImports"]}}},"codeLens":{"dynamicRegistration":true},"formatting":{"dynamicRegistration":true},"rangeFormatting":{"dynamicRegistration":true},"onTypeFormatting":{"dynamicRegistration":true},"rename":{"dynamicRegistration":true,"prepareSupport":true},"documentLink":{"dynamicRegistration":true,"tooltipSupport":true},"typeDefinition":{"dynamicRegistration":true},"implementation":{"dynamicRegistration":true},"declaration":{"dynamicRegistration":true},"colorProvider":{"dynamicRegistration":true},"foldingRange":{"dynamicRegistration":true,"rangeLimit":5000,"lineFoldingOnly":true},"selectionRange":{"dynamicRegistration":true}},"window":{"workDoneProgress":true}},"initializationOptions":{},"trace":"verbose","workspaceFolders":[{"uri":"file:///home/rcaillaud/Workspace/localws/ForkedProjects/flambeau","name":"flambeau"}],"clientInfo":{"name":"coc.nvim","version":"0.0.80"},"workDoneToken":"9e4e18fa-b735-42e9-8cdb-c52a1c42fa00"}}

Got valid Request message of type initialize

Got initialize request, answering

Trying to read frame

Got frame:
{"jsonrpc":"2.0","method":"initialized","params":{}}

Unable to parse data as RequestMessage

Got valid Notification message of type initialized

Properly initialized

Trying to read frame

Got frame:
{"jsonrpc":"2.0","method":"textDocument/didOpen","params":{"textDocument":{"uri":"file:///home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim","languageId":"nim","version":1,"text":"# Flambeau\n# Copyright (c) 2020 Mamy André-Ratsimbazafy\n# Licensed and distributed under either of\n#   * MIT license (license terms in the root directory or at http://opensource.org/licenses/MIT).\n#   * Apache v2 license (license terms in the root directory or at http://www.apache.org/licenses/LICENSE-2.0).\n# at your option. This file may not be copied, modified, or distributed except according to those terms.\n\nimport\n  # Standard library\n  std/complex,\n  # Internal\n  ../cpp/std_cpp,\n  ../libtorch,\n  ./c10\n\n# (Almost) raw bindings to PyTorch Tensors\n# -----------------------------------------------------------------------\n#\n# This provides almost raw bindings to PyTorch tensors.\n#\n# \"Nimification\" (camelCase), ergonomic indexing and interoperability with Nim types is left to the \"high-level\" bindings.\n# This should ease searching PyTorch and libtorch documentation,\n# and make C++ tutorials easily applicable.\n#\n# Nonetheless some slight modifications were given to the raw bindings:\n# - `&=`, `|=` and `^=` have been renamed bitand, bitor, bitxor\n# - `[]` and `[]=` are not exported as index and index_put are more flexible\n#   and we want to leave those symbols available for Numpy-like ergonomic indexing.\n# - Nim's `index_fill_mut` and `masked_fill_mut` are mapped to the in-place\n#   C++ `index_fill_` and `masked_fill_`.\n#   The original out-of-place versions are doing clone+in-place mutation\n\n# C++ interop\n# -----------------------------------------------------------------------\n\n{.push cdecl.}\n{.push header: torchHeader.}\n\n# #######################################################################\n#\n#                         Context\n#\n# #######################################################################\n\ntype Torch* = object\n\n# Random Number Generation\n# -----------------------------------------------------------------------\n\nproc manual_seed*(_: type Torch, seed: uint64) {.sideeffect, importcpp: \"torch::manual_seed(@)\".}\n  ## Set torch random number generator seed\n\n# Backends\n# -----------------------------------------------------------------------\n\nproc hasCuda*(_: type Torch): bool{.sideeffect, importcpp: \"torch::hasCuda()\".}\n  ## Returns true if libtorch was compiled with CUDA support\nproc cuda_is_available*(_: type Torch): bool{.sideeffect, importcpp: \"torch::cuda::is_available()\".}\n  ## Returns true if libtorch was compiled with CUDA support\n  ## and at least one CUDA device is available\nproc cudnn_is_available*(_: type Torch): bool {.sideeffect, importcpp: \"torch::cuda::cudnn_is_available()\".}\n  ## Returns true if libtorch was compiled with CUDA and CuDNN support\n  ## and at least one CUDA device is available\n\n# #######################################################################\n#\n#                         Tensor Metadata\n#\n# #######################################################################\n\n# Backend Device\n# -----------------------------------------------------------------------\n# libtorch/include/c10/core/DeviceType.h\n# libtorch/include/c10/core/Device.h\n\ntype\n  DeviceIndex = int16\n\n  DeviceKind* {.importc: \"c10::DeviceType\",\n                size: sizeof(int16).} = enum\n    kCPU = 0\n    kCUDA = 1\n    kMKLDNN = 2\n    kOpenGL = 3\n    kOpenCL = 4\n    kIDEEP = 
5\n    kHIP = 6\n    kFPGA = 7\n    kMSNPU = 8\n    kXLA = 9\n    kVulkan = 10\n\n  Device* {.importc: \"c10::Device\", bycopy.} = object\n    kind: DeviceKind\n    index: DeviceIndex\n\nfunc init*(T: type Device, kind: DeviceKind): T {.constructor, importcpp: \"torch::Device(#)\".}\n\n# Datatypes\n# -----------------------------------------------------------------------\n# libtorch/include/torch/csrc/api/include/torch/types.h\n# libtorch/include/c10/core/ScalarType.h\n\ntype\n  ScalarKind* {.importc: \"torch::ScalarType\",\n                size: sizeof(int8).} = enum\n    kUint8 = 0       # kByte\n    kInt8 = 1        # kChar\n    kInt16 = 2       # kShort\n    kInt32 = 3       # kInt\n    kInt64 = 4       # kLong\n    kFloat16 = 5     # kHalf\n    kFloat32 = 6     # kFloat\n    kFloat64 = 7     # kDouble\n    kComplexF16 = 8  # kComplexHalf\n    kComplexF32 = 9  # kComplexFloat\n    kComplexF64 = 10 # kComplexDouble\n    kBool = 11\n    kQint8 = 12      # Quantized int8\n    kQuint8 = 13     # Quantized uint8\n    kQint32 = 14     # Quantized int32\n    kBfloat16 = 15   # Brain float16\n\n\n  SomeTorchType* = uint8|byte or SomeSignedInt or\n                   SomeFloat or Complex[float32] or Complex[float64]\n  ## Torch Tensor type mapped to Nim type\n\n# TensorOptions\n# -----------------------------------------------------------------------\n# libtorch/include/c10/core/TensorOptions.h\n\ntype\n  TensorOptions* {.importcpp: \"torch::TensorOptions\", bycopy.} = object\n\nfunc init*(T: type TensorOptions): TensorOptions {.constructor, importcpp: \"torch::TensorOptions\".}\n\n# Scalars\n# -----------------------------------------------------------------------\n# Scalars are defined in libtorch/include/c10/core/Scalar.h\n# as tagged unions of double, int64, complex\n# And C++ types are implicitly convertible to Scalar\n#\n# Hence in Nim we don't need to care about Scalar or defined converters\n# (except maybe for complex)\ntype Scalar* = SomeNumber or bool\n\n# TensorAccessors\n# -----------------------------------------------------------------------\n# libtorch/include/ATen/core/TensorAccessors.h\n#\n# Tensor accessors gives \"medium-level\" access to a Tensor raw-data\n# - Compared to low-level \"data_ptr\" they take care of striding and shape\n# - Compared to high-level functions they don't provide any parallelism.\n\n# #######################################################################\n#\n#                            Tensors\n#\n# #######################################################################\n\n# Tensors\n# -----------------------------------------------------------------------\n\ntype\n  Tensor* {.importcpp: \"torch::Tensor\", bycopy.} = object\n\n# Strings & Debugging\n# -----------------------------------------------------------------------\n\nproc print*(self: Tensor) {.sideeffect, importcpp: \"torch::print(@)\".}\n\n# Metadata\n# -----------------------------------------------------------------------\n\nfunc dim*(self: Tensor): int64 {.importcpp: \"#.dim()\".}\n  ## Number of dimensions\nfunc reset*(self: var Tensor) {.importcpp: \"#.reset()\".}\nfunc is_same*(self, other: Tensor): bool {.importcpp: \"#.is_same(#)\".}\n  ## Reference equality\n  ## Do the tensors use the same memory.\n\nfunc sizes*(self: Tensor): IntArrayRef {.importcpp: \"#.sizes()\".}\n  ## This is Arraymancer and Numpy \"shape\"\n\nfunc strides*(self: Tensor): IntArrayRef {.importcpp: \"#.strides()\".}\n\nfunc ndimension*(self: Tensor): int64 {.importcpp: \"#.ndimension()\".}\n  ## This is 
Arraymancer rank\nfunc nbytes*(self: Tensor): uint {.importcpp: \"#.nbytes()\".}\n  ## Bytes-size of the Tensor\nfunc numel*(self: Tensor): int64 {.importcpp: \"#.numel()\".}\n  ## This is Arraymancer and Numpy \"size\"\n\nfunc size*(self: Tensor, axis: int64): int64 {.importcpp: \"#.size(#)\".}\nfunc itemsize*(self: Tensor): uint {.importcpp: \"#.itemsize()\".}\nfunc element_size*(self: Tensor): int64 {.importcpp: \"#.element_size()\".}\n\n# Accessors\n# -----------------------------------------------------------------------\n\nfunc data_ptr*(self: Tensor, T: typedesc[SomeTorchType]): ptr UncheckedArray[T] {.importcpp: \"#.data_ptr<'2>(#)\".}\n  ## Gives raw access to a tensor data of type T.\n  ##\n  ## This is a very low-level procedure. You need to take care\n  ## of the tensor shape and strides yourself.\n  ##\n  ## It is recommended to use this only on contiguous tensors\n  ## (freshly created or freshly cloned) and to avoid\n  ## sliced tensors.\n\n# Backend\n# -----------------------------------------------------------------------\n\nfunc has_storage*(self: Tensor): bool {.importcpp: \"#.has_storage()\".}\nfunc get_device*(self: Tensor): int64 {.importcpp: \"#.get_device()\".}\nfunc is_cuda*(self: Tensor): bool {.importcpp: \"#.is_cuda()\".}\nfunc is_hip*(self: Tensor): bool {.importcpp: \"#.is_hip()\".}\nfunc is_sparse*(self: Tensor): bool {.importcpp: \"#.is_sparse()\".}\nfunc is_mkldnn*(self: Tensor): bool {.importcpp: \"#.is_mkldnn()\".}\nfunc is_vulkan*(self: Tensor): bool {.importcpp: \"#.is_vulkan()\".}\nfunc is_quantized*(self: Tensor): bool {.importcpp: \"#.is_quantized()\".}\nfunc is_meta*(self: Tensor): bool {.importcpp: \"#.is_meta()\".}\n\nfunc cpu*(self: Tensor): Tensor {.importcpp: \"#.cpu()\".}\nfunc cuda*(self: Tensor): Tensor {.importcpp: \"#.cuda()\".}\nfunc hip*(self: Tensor): Tensor {.importcpp: \"#.hip()\".}\nfunc vulkan*(self: Tensor): Tensor {.importcpp: \"#.vulkan()\".}\nfunc to*(self: Tensor, device: DeviceKind): Tensor {.importcpp: \"#.to(#)\".}\nfunc to*(self: Tensor, device: Device): Tensor {.importcpp: \"#.to(#)\".}\n\n# dtype\n# -----------------------------------------------------------------------\n\nfunc to*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.to(#)\".}\nfunc scalarType*(self: Tensor): ScalarKind {.importcpp: \"#.scalar_type()\".}\n\n# Constructors\n# -----------------------------------------------------------------------\n\n# DeviceType and ScalarType are auto-convertible to TensorOptions\n\nfunc init*(T: type Tensor): Tensor {.constructor, importcpp: \"torch::Tensor\".}\n\nfunc from_blob*(data: pointer, sizes: IntArrayRef, options: TensorOptions): Tensor {.importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes: IntArrayRef, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes: IntArrayRef, device: DeviceKind): Tensor {.importcpp: \"torch::from_blob(@)\".}\n\nfunc from_blob*(data: pointer, sizes: int64, options: TensorOptions): Tensor {.importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes: int64, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes: int64, device: DeviceKind): Tensor {.importcpp: \"torch::from_blob(@)\".}\n\nfunc from_blob*(data: pointer, sizes, strides: IntArrayRef, options: TensorOptions): Tensor {.\n    importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes, strides: IntArrayRef, scalarKind: ScalarKind): Tensor {.\n    importcpp: 
\"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes, strides: IntArrayRef, device: DeviceKind): Tensor {.\n    importcpp: \"torch::from_blob(@)\".}\n\nfunc empty*(size: IntArrayRef, options: TensorOptions): Tensor {.importcpp: \"torch::empty(@)\".}\n  ## Create an uninitialized tensor of shape `size`\n  ## The tensor data must be filled manually\n  ##\n  ## The output tensor will be row major (C contiguous)\nfunc empty*(size: IntArrayRef, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::empty(@)\".}\nfunc empty*(size: IntArrayRef, device: DeviceKind): Tensor {.importcpp: \"torch::empty(@)\".}\n  ## Create an uninitialized tensor of shape `size`\n  ## The tensor data must be filled manually.\n  ##\n  ## If device is NOT on CPU make sure to use specialized\n  ## copy operations. For example to update on Cuda devices\n  ## use cudaMemcpy not a.data[i] = 123\n  ##\n  ## The output tensor will be row major (C contiguous)\n\nfunc clone*(self: Tensor): Tensor {.importcpp: \"#.clone()\".}\n\n# Random sampling\n# -----------------------------------------------------------------------\n\nfunc random_mut*(self: var Tensor, start, stopEx: int64) {.importcpp: \"#.random_(@)\".}\nfunc randint*(start, stopEx: int64): Tensor {.varargs, importcpp: \"torch::randint(#, #, {@})\".}\nfunc randint*(start, stopEx: int64, size: IntArrayRef): Tensor {.importcpp: \"torch::randint(@)\".}\n\nfunc rand_like*(self: Tensor, options: TensorOptions): Tensor {.importcpp: \"torch::rand_like(@)\".}\nfunc rand_like*(self: Tensor, options: ScalarKind): Tensor {.importcpp: \"torch::rand_like(@)\".}\nfunc rand_like*(self: Tensor, options: DeviceKind): Tensor {.importcpp: \"torch::rand_like(@)\".}\nfunc rand_like*(self: Tensor, options: Device): Tensor {.importcpp: \"torch::rand_like(@)\".}\nfunc rand_like*(self: Tensor): Tensor {.importcpp: \"torch::rand_like(@)\".}\n\n\n# func rand*(size: IntArrayRef, options: TensorOptions): Tensor {.importcpp: \"torch::rand(@)\"}\nfunc rand*(size: IntArrayRef, options: ScalarKind): Tensor {.importcpp: \"torch::rand(@)\".}\n# func rand*(size: IntArrayRef, options: DeviceKind): Tensor {.importcpp: \"torch::rand(@)\"}\n# func rand*(size: IntArrayRef, options: Device): Tensor {.importcpp: \"torch::rand(@)\"}\nfunc rand*(size: IntArrayRef): Tensor {.importcpp: \"torch::rand(@)\".}\n\n# Indexing\n# -----------------------------------------------------------------------\n# TODO throw IndexDefect when bounds checking is active\n# libtorch/include/ATen/TensorIndexing.h\n# and https://pytorch.org/cppdocs/notes/tensor_indexing.html\n\nfunc item*(self: Tensor, T: typedesc): T {.importcpp: \"#.item<'0>()\".}\n  ## Extract the scalar from a 0-dimensional tensor\n\n# Unsure what those corresponds to in Python\n# func `[]`*(self: Tensor, index: Scalar): Tensor {.importcpp: \"#[#]\".}\n# func `[]`*(self: Tensor, index: Tensor): Tensor {.importcpp: \"#[#]\".}\n# func `[]`*(self: Tensor, index: int64): Tensor {.importcpp: \"#[#]\".}\n\nfunc index*(self: Tensor): Tensor {.varargs, importcpp: \"#.index({@})\".}\n  ## Tensor indexing. 
It is recommended\n  ## to Nimify this in a high-level wrapper.\n  ## `tensor.index(indexers)`\n\n# We can't use the construct `#.index_put_({@}, #)`\n# so hardcode sizes,\n# 6d seems reasonable, that would be a batch of 3D videos (videoID/batchID, Time, Color Channel, Height, Width, Depth)\n# If you need more you likely aren't indexing individual values.\n\nfunc index_put*(self: var Tensor, i0: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1, i2: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1, i2, i3: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #, #, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1, i2, i3, i4: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #, #, #, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1, i2, i3, i4, i5: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #, #, #, #, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\n\n# Fancy Indexing\n# -----------------------------------------------------------------------\n\nfunc index_select*(self: Tensor, axis: int64, indices: Tensor): Tensor {.importcpp: \"#.index_select(@)\".}\nfunc masked_select*(self: Tensor, mask: Tensor): Tensor {.importcpp: \"#.masked_select(@)\".}\n\n# PyTorch exposes in-place `index_fill_` and `masked_fill_`\n# and out-of-place `index_fill` and `masked_fill`\n# that does in-place + clone\n# we only exposes the in-place version.\n\nfunc index_fill_mut*(self: var Tensor, mask: Tensor, value: Scalar or Tensor) {.importcpp: \"#.index_fill_(@)\".}\nfunc masked_fill_mut*(self: var Tensor, mask: Tensor, value: Scalar or Tensor) {.importcpp: \"#.masked_fill_(@)\".}\n\n# Shapeshifting\n# -----------------------------------------------------------------------\n\nfunc reshape*(self: Tensor): Tensor {.varargs, importcpp: \"#.reshape({@})\".}\nfunc view*(self: Tensor): Tensor {.varargs, importcpp: \"#.reshape({@})\".}\n\n# Automatic Differentiation\n# -----------------------------------------------------------------------\n\nfunc backward*(self: var Tensor){.importcpp: \"#.backward()\".}\n\n# Low-level slicing API\n# -----------------------------------------------------------------------\n\ntype\n  TorchSlice* {.importcpp: \"torch::indexing::Slice\", bycopy.} = object\n  # libtorch/include/ATen/TensorIndexing.h\n\n  TensorIndexType*{.size: sizeof(cint), bycopy, importcpp: \"torch::indexing::TensorIndexType\".} = enum\n    ## This is passed to torchSlice functions\n    IndexNone = 0\n    IndexEllipsis = 1\n    IndexInteger = 2\n    IndexBoolean = 3\n    IndexSlice = 4\n    IndexTensor = 5\n\n  SomeSlicer* = TensorIndexType or SomeSignedInt\n\nproc SliceSpan*(): TorchSlice {.importcpp: \"at::indexing::Slice()\".}\n    ## This is passed to the \"index\" function\n    ## This is Python \":\", span 
/ whole dimension\n\nfunc torchSlice*(){.importcpp: \"torch::indexing::Slice(@)\", constructor.}\nfunc torchSlice*(start: SomeSlicer): TorchSlice {.importcpp: \"torch::indexing::Slice(@)\", constructor.}\nfunc torchSlice*(start: SomeSlicer, stop: SomeSlicer): TorchSlice {.importcpp: \"torch::indexing::Slice(@)\", constructor.}\nfunc torchSlice*(start: SomeSlicer, stop: SomeSlicer, step: SomeSlicer): TorchSlice {.importcpp: \"torch::indexing::Slice(@)\", constructor.}\nfunc start*(s: TorchSlice): int64 {.importcpp: \"#.start()\".}\nfunc stop*(s: TorchSlice): int64 {.importcpp: \"#.stop()\".}\nfunc step*(s: TorchSlice): int64 {.importcpp: \"#.step()\".}\n\n# Operators\n# -----------------------------------------------------------------------\n\nfunc `not`*(self: Tensor): Tensor {.importcpp: \"(~#)\".}\nfunc `-`*(self: Tensor): Tensor {.importcpp: \"(-#)\".}\n\nfunc `+`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"(# + #)\".}\nfunc `-`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"(# - #)\".}\nfunc `*`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"(# * #)\".}\n\nfunc `*`*(a: cfloat or cdouble, b: Tensor): Tensor {.importcpp: \"(# * #)\".}\nfunc `*`*(self: Tensor, b: cfloat or cdouble): Tensor {.importcpp: \"(# * #)\".}\n\nfunc `+=`*(self: var Tensor, b: Tensor) {.importcpp: \"(# += #)\".}\nfunc `+=`*(self: var Tensor, s: Scalar) {.importcpp: \"(# += #)\".}\nfunc `-=`*(self: var Tensor, b: Tensor) {.importcpp: \"(# -= #)\".}\nfunc `-=`*(self: var Tensor, s: Scalar) {.importcpp: \"(# -= #)\".}\nfunc `*=`*(self: var Tensor, b: Tensor) {.importcpp: \"(# *= #)\".}\nfunc `*=`*(self: var Tensor, s: Scalar) {.importcpp: \"(# *= #)\".}\nfunc `/=`*(self: var Tensor, b: Tensor) {.importcpp: \"(# /= #)\".}\nfunc `/=`*(self: var Tensor, s: Scalar) {.importcpp: \"(# /= #)\".}\n\nfunc `and`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"#.bitwise_and(#)\".}\n  ## bitwise `and`.\nfunc `or`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"#.bitwise_or(#)\".}\n  ## bitwise `or`.\nfunc `xor`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"#.bitwise_xor(#)\".}\n  ## bitwise `xor`.\n\nfunc bitand_mut*(self: var Tensor, s: Tensor) {.importcpp: \"#.bitwise_and_(#)\".}\n  ## In-place bitwise `and`.\nfunc bitor_mut*(self: var Tensor, s: Tensor) {.importcpp: \"#.bitwise_or_(#)\".}\n  ## In-place bitwise `or`.\nfunc bitxor_mut*(self: var Tensor, s: Tensor) {.importcpp: \"#.bitwise_xor_(#)\".}\n  ## In-place bitwise `xor`.\n\nfunc eq*(a, b: Tensor): Tensor {.importcpp: \"#.eq(#)\".}\n  ## Equality of each tensor values\nfunc equal*(a, b: Tensor): bool {.importcpp: \"#.equal(#)\".}\ntemplate `==`*(a, b: Tensor): bool =\n  a.equal(b)\n\n# Functions.h\n# -----------------------------------------------------------------------\n\nfunc toType*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.toType(@)\".}\nfunc toSparse*(self: Tensor): Tensor {.importcpp: \"#.to_sparse()\".}\nfunc toSparse*(self: Tensor, sparseDim: int64): Tensor {.importcpp: \"#.to_sparse(@)\".}\n\nfunc eye*(n: int64): Tensor {.importcpp: \"torch::eye(@)\".}\nfunc eye*(n: int64, options: TensorOptions): Tensor {.importcpp: \"torch::eye(@)\".}\nfunc eye*(n: int64, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::eye(@)\".}\nfunc eye*(n: int64, device: DeviceKind): Tensor {.importcpp: \"torch::eye(@)\".}\n\nfunc zeros*(dim: int64): Tensor {.importcpp: \"torch::zeros(@)\".}\nfunc zeros*(dim: IntArrayRef): Tensor {.importcpp: \"torch::zeros(@)\".}\nfunc zeros*(dim: IntArrayRef, options: TensorOptions): Tensor {.importcpp: 
\"torch::zeros(@)\".}\nfunc zeros*(dim: IntArrayRef, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::zeros(@)\".}\nfunc zeros*(dim: IntArrayRef, device: DeviceKind): Tensor {.importcpp: \"torch::zeros(@)\".}\n\nfunc linspace*(start, stop: Scalar, steps: int64, options: TensorOptions) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar, steps: int64, options: ScalarKind) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar, steps: int64, options: DeviceKind) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar, steps: int64, options: Device) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar, steps: int64) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar) : Tensor {.importcpp: \"torch::linspace(@)\".}\n\nfunc logspace*(start, stop: Scalar, steps, base: int64, options: TensorOptions) : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps, base: int64, options: ScalarKind) : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps, base: int64, options: DeviceKind) {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps, base: int64, options: Device)  : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps, base: int64) : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps: int64)  : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar)  : Tensor {.importcpp: \"torch::logspace(@)\".}\n\nfunc arange*(stop: Scalar, options: TensorOptions) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(stop: Scalar, options: ScalarKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(stop: Scalar, options: DeviceKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(stop: Scalar, options: Device) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(stop: Scalar) : Tensor  {.importcpp: \"torch::arange(@)\".}\n\nfunc arange*(start, stop: Scalar, options: TensorOptions) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop: Scalar, options: ScalarKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop: Scalar, options: DeviceKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop: Scalar, options: Device) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop: Scalar) : Tensor  {.importcpp: \"torch::arange(@)\".}\n\nfunc arange*(start, stop, step: Scalar, options: TensorOptions) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop, step: Scalar, options: ScalarKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop, step: Scalar, options: DeviceKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop, step: Scalar, options: Device) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop, step: Scalar) : Tensor  {.importcpp: \"torch::arange(@)\".}\n\n# Operations\n# -----------------------------------------------------------------------\nfunc add*(self: Tensor, other: Tensor, alpha: Scalar = 1): Tensor {.importcpp: \"#.add(@)\".}\nfunc add*(self: Tensor, other: Scalar, alpha: Scalar = 1): Tensor {.importcpp: \"#.add(@)\".}\nfunc addmv*(self: Tensor, mat: Tensor, vec: Tensor, beta: Scalar = 1, alpha: Scalar = 1): Tensor {.importcpp: \"#.addmv(@)\".}\nfunc addmm*(t, mat1, 
mat2: Tensor, beta: Scalar = 1, alpha: Scalar = 1): Tensor {.importcpp: \"#.addmm(@)\".}\nfunc mm*(t, other: Tensor): Tensor {.importcpp: \"#.mm(@)\".}\nfunc matmul*(t, other: Tensor): Tensor {.importcpp: \"#.matmul(@)\".}\nfunc bmm*(t, other: Tensor): Tensor {.importcpp: \"#.bmm(@)\".}\n\nfunc luSolve*(t, data, pivots: Tensor): Tensor {.importcpp: \"#.lu_solve(@)\".}\n\nfunc qr*(self: Tensor, some: bool = true): CppTuple2[Tensor, Tensor] {.importcpp: \"#.qr(@)\".}\n  ## Returns a tuple:\n  ## - Q of shape (∗,m,k)\n  ## - R of shape (∗,k,n)\n  ## with k=min(m,n) if some is true otherwise k=m\n  ##\n  ## The QR decomposition is batched over dimension(s) *\n  ## t = QR\n\n# addr?\nfunc all*(self: Tensor, axis: int64): Tensor {.importcpp: \"#.all(@)\".}\nfunc all*(self: Tensor, axis: int64, keepdim: bool): Tensor {.importcpp: \"#.all(@)\".}\nfunc allClose*(t, other: Tensor, rtol: float64 = 1e-5, abstol: float64 = 1e-8, equalNan: bool = false): bool {.importcpp: \"#.allclose(@)\".}\nfunc any*(self: Tensor, axis: int64): Tensor {.importcpp: \"#.any(@)\".}\nfunc any*(self: Tensor, axis: int64, keepdim: bool): Tensor {.importcpp: \"#.any(@)\".}\nfunc argmax*(self: Tensor): Tensor {.importcpp: \"#.argmax()\".}\nfunc argmax*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.argmax(@)\".}\nfunc argmin*(self: Tensor): Tensor {.importcpp: \"#.argmin()\".}\nfunc argmin*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.argmin(@)\".}\n\n# aggregate\n# -----------------------------------------------------------------------\n\n# sum needs wrapper procs/templates to allow for using nim arrays and single axis.\nfunc sum*(self: Tensor): Tensor {.importcpp: \"#.sum()\".}\nfunc sum*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.sum(@)\".}\nfunc sum*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.sum(@)\".}\nfunc sum*(self: Tensor, axis: int64, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.sum(@)\".}\nfunc sum*(self: Tensor, axis: IntArrayRef, keepdim: bool = false): Tensor {.importcpp: \"#.sum(@)\".}\nfunc sum*(self: Tensor, axis: IntArrayRef, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.sum(@)\".}\n\n# mean as well\nfunc mean*(self: Tensor): Tensor {.importcpp: \"#.mean()\".}\nfunc mean*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.mean(@)\".}\nfunc mean*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.mean(@)\".}\nfunc mean*(self: Tensor, axis: int64, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.mean(@)\".}\nfunc mean*(self: Tensor, axis: IntArrayRef, keepdim: bool = false): Tensor {.importcpp: \"#.mean(@)\".}\nfunc mean*(self: Tensor, axis: IntArrayRef, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.mean(@)\".}\n\n# median requires std::tuple\n\nfunc prod*(self: Tensor): Tensor {.importcpp: \"#.prod()\".}\nfunc prod*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.prod(@)\".}\nfunc prod*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.prod(@)\".}\nfunc prod*(self: Tensor, axis: int64, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.prod(@)\".}\n\nfunc min*(self: Tensor): Tensor {.importcpp: \"#.min()\".}\nfunc min*(self: Tensor, axis: int64, keepdim: bool = false): CppTuple2[Tensor, Tensor] {.importcpp: \"torch::min(@)\".}\n  ## Returns a tuple (values, indices) of type (TensorT, TensorInt64)\n  ## of the minimum values and their index in 
the specified axis\n\nfunc max*(self: Tensor): Tensor {.importcpp: \"#.max()\".}\nfunc max*(self: Tensor, axis: int64, keepdim: bool = false): CppTuple2[Tensor, Tensor] {.importcpp: \"torch::max(@)\".}\n  ## Returns a tuple (values, indices) of type (TensorT, TensorInt64)\n  ## of the maximum values and their index in the specified axis\n\nfunc variance*(self: Tensor, unbiased: bool = true): Tensor {.importcpp: \"#.var(@)\".} # can't use `var` because of keyword.\nfunc variance*(self: Tensor, axis: int64, unbiased: bool = true, keepdim: bool = false): Tensor {.importcpp: \"#.var(@)\".}\nfunc variance*(self: Tensor, axis: IntArrayRef, unbiased: bool = true, keepdim: bool = false): Tensor {.importcpp: \"#.var(@)\".}\n\nfunc stddev*(self: Tensor, unbiased: bool = true): Tensor {.importcpp: \"#.std(@)\".}\nfunc stddev*(self: Tensor, axis: int64, unbiased: bool = true, keepdim: bool = false): Tensor {.importcpp: \"#.std(@)\".}\nfunc stddev*(self: Tensor, axis: IntArrayRef, unbiased: bool = true, keepdim: bool = false): Tensor {.importcpp: \"#.std(@)\".}\n\n# algorithms:\n# -----------------------------------------------------------------------\n\nfunc sort*(self: Tensor, axis: int64 = -1, descending: bool = false): CppTuple2[Tensor, Tensor] {.importcpp: \"#.sort(@)\".}\n  ## Sorts the elements of the input tensor along a given dimension in ascending order by value.\n  ## If dim is not given, the last dimension of the input is chosen (dim=-1).\n  ## Returns (values, originalIndices) or type (TensorT, TensorInt64)\n  ## where originalIndices is the original index of each values (before sorting)\nfunc argsort*(self: Tensor, axis: int64 = -1, descending: bool = false): Tensor {.importcpp: \"#.argsort(@)\".}\n\n# math\n# -----------------------------------------------------------------------\nfunc abs*(self: Tensor): Tensor {.importcpp: \"#.abs()\".}\nfunc absolute*(self: Tensor): Tensor {.importcpp: \"#.absolute()\".}\nfunc angle*(self: Tensor): Tensor {.importcpp: \"#.angle()\".}\nfunc sgn*(self: Tensor): Tensor {.importcpp: \"#.sgn()\".}\nfunc conj*(self: Tensor): Tensor {.importcpp: \"#.conj()\".}\nfunc acos*(self: Tensor): Tensor {.importcpp: \"#.acos()\".}\nfunc arccos*(self: Tensor): Tensor {.importcpp: \"#.arccos()\".}\nfunc acosh*(self: Tensor): Tensor {.importcpp: \"#.acosh()\".}\nfunc arccosh*(self: Tensor): Tensor {.importcpp: \"#.arccosh()\".}\nfunc asinh*(self: Tensor): Tensor {.importcpp: \"#.asinh()\".}\nfunc arcsinh*(self: Tensor): Tensor {.importcpp: \"#.arcsinh()\".}\nfunc atanh*(self: Tensor): Tensor {.importcpp: \"#.atanh()\".}\nfunc arctanh*(self: Tensor): Tensor {.importcpp: \"#.arctanh()\".}\nfunc asin*(self: Tensor): Tensor {.importcpp: \"#.asin()\".}\nfunc arcsin*(self: Tensor): Tensor {.importcpp: \"#.arcsin()\".}\nfunc atan*(self: Tensor): Tensor {.importcpp: \"#.atan()\".}\nfunc arctan*(self: Tensor): Tensor {.importcpp: \"#.arctan()\".}\nfunc cos*(self: Tensor): Tensor {.importcpp: \"#.cos()\".}\nfunc sin*(self: Tensor): Tensor {.importcpp: \"#.sin()\".}\nfunc tan*(self: Tensor): Tensor {.importcpp: \"#.tan()\".}\nfunc exp*(self: Tensor): Tensor {.importcpp: \"#.exp()\".}\nfunc exp2*(self: Tensor): Tensor {.importcpp: \"#.exp2()\".}\nfunc erf*(self: Tensor): Tensor {.importcpp: \"#.erf()\".}\nfunc erfc*(self: Tensor): Tensor {.importcpp: \"#.erfc()\".}\nfunc reciprocal*(self: Tensor): Tensor {.importcpp: \"#.reciprocal()\".}\nfunc neg*(self: Tensor): Tensor {.importcpp: \"#.neg()\".}\nfunc clamp*(self: Tensor, min, max: Scalar): Tensor {.importcpp: 
\"#.clamp(@)\".}\nfunc clampMin*(self: Tensor, min: Scalar): Tensor {.importcpp: \"#.clamp_min(@)\".}\nfunc clampMax*(self: Tensor, max: Scalar): Tensor {.importcpp: \"#.clamp_max(@)\".}\n\nfunc dot*(self: Tensor, other: Tensor): Tensor {.importcpp: \"#.dot(@)\".}\n\nfunc squeeze*(self: Tensor): Tensor {.importcpp: \"#.squeeze()\".}\nfunc squeeze*(self: Tensor, axis: int64): Tensor {.importcpp: \"#.squeeze(@)\".}\nfunc unsqueeze*(self: Tensor, axis: int64): Tensor {.importcpp: \"#.unsqueeze(@)\".}\n\n{.pop.}\n\nfunc inv*(self: Tensor) : Tensor {.importcpp: \"#.linalg_inv(@)\", header: \"linalg.h\".}\n\n# FFT\n# -----------------------------------------------------------------------\n{.push header: \"fft.h\".}\nfunc fftshift*(self: Tensor): Tensor {.importcpp: \"torch::fft_fftshift(@)\".}\nfunc fftshift*(self: Tensor, dim: IntArrayRef): Tensor {.importcpp: \"torch::fft_ifftshift(@)\".}\nfunc ifftshift*(self: Tensor): Tensor {.importcpp: \"torch::fft_fftshift(@)\".}\nfunc ifftshift*(self: Tensor, dim: IntArrayRef): Tensor {.importcpp: \"torch::fft_ifftshift(@)\".}\n\nlet defaultNorm : CppString = initCppString(\"backward\")\n\nfunc fft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_fft(@)\".}\n## Compute the 1-D Fourier transform\n## ``n`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" - normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" - normalize by 1/sqrt(n)\nfunc fft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_fft(@)\".}\n## Compute the 1-D Fourier transform\n\nfunc ifft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::ifft_ifft(@)\".}\n## Compute the 1-D Fourier transform\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\nfunc ifft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::ifft_ifft(@)\".}\n## Compute the 1-D Fourier transform\n\nfunc fft2*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.\n    importcpp: \"torch::fft_fft2(@)\".}\n## Compute the 2-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" - normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" - normalize by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc fft2*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_fft2(@)\".}\n## Compute the 2-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc fft2*(self: Tensor): Tensor {.importcpp: \"torch::fft_fft2(@)\".}\n## Compute the 2-D Fourier transform\n\nfunc ifft2*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.\n    importcpp: \"torch::fft_ifft2(@)\".}\n## Compute the 2-D Inverse Fourier transform\n## ``s`` represents signal size. 
If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc ifft2*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_ifft2(@)\".}\n## Compute the 2-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc ifft2*(self: Tensor): Tensor {.importcpp: \"torch::fft_ifft2(@)\".}\n## Compute the 2-D Inverse Fourier transform\n\nfunc fftn*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_fftn(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" normalize by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc fftn*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_fftn(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc fftn*(self: Tensor): Tensor {.importcpp: \"torch::fft_fftn(@)\".}\n## Compute the N-D Fourier transform\n\nfunc ifftn*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_ifftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc ifftn*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_ifftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. 
If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc ifftn*(self: Tensor): Tensor {.importcpp: \"torch::fft_ifftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n\nfunc rfft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_rfft\".}\n## Computes the one dimensional Fourier transform of real-valued input.\nfunc rfft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_rfft\".}\n## Computes the one dimensional Fourier transform of real-valued input.\n\nfunc irfft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_irfft\".}\n## Computes the one dimensional Fourier transform of real-valued input.\nfunc irfft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_irfft\".}\n## Computes the one dimensional Fourier transform of real-valued input.\n\nfunc rfft2*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_rfft2(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" - normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" - normalize by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc rfft2*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_rfft2(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc rfft2*(self: Tensor): Tensor {.importcpp: \"torch::fft_rfft2(@)\".}\n## Compute the N-D Fourier transform\n\nfunc irfft2*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.\n    importcpp: \"torch::fft_irfft2(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc irfft2*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_irfft2(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc irfft2*(self: Tensor): Tensor {.importcpp: \"torch::fft_irfft2(@)\".}\n## Compute the N-D Inverse Fourier transform\n\n\nfunc rfftn*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_rfftn(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. 
If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" - normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" - normalize by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc rfftn*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_rfftn(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc rfftn*(self: Tensor): Tensor {.importcpp: \"torch::fft_rfftn(@)\".}\n## Compute the N-D Fourier transform\n\nfunc irfftn*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.\n    importcpp: \"torch::fft_irfftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc irfftn*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_irfftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc irfftn*(self: Tensor): Tensor {.importcpp: \"torch::fft_irfftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n\nfunc hfft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm) : Tensor {.importcpp: \"torch::hfft\".}\n## Computes the 1 dimensional FFT of a onesided Hermitian signal.\nfunc hfft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm) : Tensor {.importcpp: \"torch::hfft\".}\n## Computes the 1 dimensional FFT of a onesided Hermitian signal.\nfunc ihfft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm) : Tensor {.importcpp: \"torch::ihfft\".}\n## Computes the inverse FFT of a real-valued Fourier domain signal.\nfunc ihfft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm) : Tensor {.importcpp: \"torch::ihfft\".}\n## Computes the inverse FFT of a real-valued Fourier domain signal.\n\n#func convolution*(self: Tensor, weight: Tensor, bias: Tensor, stride, padding, dilation: int64, transposed: bool, outputPadding: int64, groups: int64): Tensor {.importcpp: \"torch::convolution(@)\".}\n"}}}

Unable to parse data as RequestMessage

Got valid Notification message of type textDocument/didOpen

New document opened for URI: file:///home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim saving to /tmp/nimlsp/00000000FF0711BA.nim

Initialising project with /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau.nim:/home/rcaillaud/.choosenim/toolchains/nim-1.4.4

Trying to read frame

Got frame:
{"jsonrpc":"2.0","method":"textDocument/didChange","params":{"textDocument":{"uri":"file:///home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim","version":2},"contentChanges":[{"text":"# Flambeau\n# Copyright (c) 2020 Mamy André-Ratsimbazafy\n# Licensed and distributed under either of\n#   * MIT license (license terms in the root directory or at http://opensource.org/licenses/MIT).\n#   * Apache v2 license (license terms in the root directory or at http://www.apache.org/licenses/LICENSE-2.0).\n# at your option. This file may not be copied, modified, or distributed except according to those terms.\n\nimport\n  # Standard library\n  std/complex,\n  # Internal\n  ../cpp/std_cpp,\n  ../libtorch,\n  ./c10\n\n# (Almost) raw bindings to PyTorch Tensors\n# -----------------------------------------------------------------------\n#\n# This provides almost raw bindings to PyTorch tensors.\n#\n# \"Nimification\" (camelCase), ergonomic indexing and interoperability with Nim types is left to the \"high-level\" bindings.\n# This should ease searching PyTorch and libtorch documentation,\n# and make C++ tutorials easily applicable.\n#\n# Nonetheless some slight modifications were given to the raw bindings:\n# - `&=`, `|=` and `^=` have been renamed bitand, bitor, bitxor\n# - `[]` and `[]=` are not exported as index and index_put are more flexible\n#   and we want to leave those symbols available for Numpy-like ergonomic indexing.\n# - Nim's `index_fill_mut` and `masked_fill_mut` are mapped to the in-place\n#   C++ `index_fill_` and `masked_fill_`.\n#   The original out-of-place versions are doing clone+in-place mutation\n\n# C++ interop\n# -----------------------------------------------------------------------\n\n{.push cdecl.}\n{.push header: torchHeader.}\n\n# #######################################################################\n#\n#                         Context\n#\n# #######################################################################\n\ntype Torch* = object\n\n# Random Number Generation\n# -----------------------------------------------------------------------\n\nproc manual_seed*(_: type Torch, seed: uint64) {.sideeffect, importcpp: \"torch::manual_seed(@)\".}\n  ## Set torch random number generator seed\n\n# Backends\n# -----------------------------------------------------------------------\n\nproc hasCuda*(_: type Torch): bool{.sideeffect, importcpp: \"torch::hasCuda()\".}\n  ## Returns true if libtorch was compiled with CUDA support\nproc cuda_is_available*(_: type Torch): bool{.sideeffect, importcpp: \"torch::cuda::is_available()\".}\n  ## Returns true if libtorch was compiled with CUDA support\n  ## and at least one CUDA device is available\nproc cudnn_is_available*(_: type Torch): bool {.sideeffect, importcpp: \"torch::cuda::cudnn_is_available()\".}\n  ## Returns true if libtorch was compiled with CUDA and CuDNN support\n  ## and at least one CUDA device is available\n\n# #######################################################################\n#\n#                         Tensor Metadata\n#\n# #######################################################################\n\n# Backend Device\n# -----------------------------------------------------------------------\n# libtorch/include/c10/core/DeviceType.h\n# libtorch/include/c10/core/Device.h\n\ntype\n  DeviceIndex = int16\n\n  DeviceKind* {.importc: \"c10::DeviceType\",\n                size: sizeof(int16).} = enum\n    kCPU = 0\n    kCUDA = 1\n    kMKLDNN = 2\n    kOpenGL = 3\n    kOpenCL = 4\n    
kIDEEP = 5\n    kHIP = 6\n    kFPGA = 7\n    kMSNPU = 8\n    kXLA = 9\n    kVulkan = 10\n\n  Device* {.importc: \"c10::Device\", bycopy.} = object\n    kind: DeviceKind\n    index: DeviceIndex\n\nfunc init*(T: type Device, kind: DeviceKind): T {.constructor, importcpp: \"torch::Device(#)\".}\n\n# Datatypes\n# -----------------------------------------------------------------------\n# libtorch/include/torch/csrc/api/include/torch/types.h\n# libtorch/include/c10/core/ScalarType.h\n\ntype\n  ScalarKind* {.importc: \"torch::ScalarType\",\n                size: sizeof(int8).} = enum\n    kUint8 = 0       # kByte\n    kInt8 = 1        # kChar\n    kInt16 = 2       # kShort\n    kInt32 = 3       # kInt\n    kInt64 = 4       # kLong\n    kFloat16 = 5     # kHalf\n    kFloat32 = 6     # kFloat\n    kFloat64 = 7     # kDouble\n    kComplexF16 = 8  # kComplexHalf\n    kComplexF32 = 9  # kComplexFloat\n    kComplexF64 = 10 # kComplexDouble\n    kBool = 11\n    kQint8 = 12      # Quantized int8\n    kQuint8 = 13     # Quantized uint8\n    kQint32 = 14     # Quantized int32\n    kBfloat16 = 15   # Brain float16\n\n\n  SomeTorchType* = uint8|byte or SomeSignedInt or\n                   SomeFloat or Complex[float32] or Complex[float64]\n  ## Torch Tensor type mapped to Nim type\n\n# TensorOptions\n# -----------------------------------------------------------------------\n# libtorch/include/c10/core/TensorOptions.h\n\ntype\n  TensorOptions* {.importcpp: \"torch::TensorOptions\", bycopy.} = object\n\nfunc init*(T: type TensorOptions): TensorOptions {.constructor, importcpp: \"torch::TensorOptions\".}\n\n# Scalars\n# -----------------------------------------------------------------------\n# Scalars are defined in libtorch/include/c10/core/Scalar.h\n# as tagged unions of double, int64, complex\n# And C++ types are implicitly convertible to Scalar\n#\n# Hence in Nim we don't need to care about Scalar or defined converters\n# (except maybe for complex)\ntype Scalar* = SomeNumber or bool\n\n# TensorAccessors\n# -----------------------------------------------------------------------\n# libtorch/include/ATen/core/TensorAccessors.h\n#\n# Tensor accessors gives \"medium-level\" access to a Tensor raw-data\n# - Compared to low-level \"data_ptr\" they take care of striding and shape\n# - Compared to high-level functions they don't provide any parallelism.\n\n# #######################################################################\n#\n#                            Tensors\n#\n# #######################################################################\n\n# Tensors\n# -----------------------------------------------------------------------\n\ntype\n  Tensor* {.importcpp: \"torch::Tensor\", bycopy.} = object\n\n# Strings & Debugging\n# -----------------------------------------------------------------------\n\nproc print*(self: Tensor) {.sideeffect, importcpp: \"torch::print(@)\".}\n\n# Metadata\n# -----------------------------------------------------------------------\n\nfunc dim*(self: Tensor): int64 {.importcpp: \"#.dim()\".}\n  ## Number of dimensions\nfunc reset*(self: var Tensor) {.importcpp: \"#.reset()\".}\nfunc is_same*(self, other: Tensor): bool {.importcpp: \"#.is_same(#)\".}\n  ## Reference equality\n  ## Do the tensors use the same memory.\n\nfunc sizes*(self: Tensor): IntArrayRef {.importcpp: \"#.sizes()\".}\n  ## This is Arraymancer and Numpy \"shape\"\n\nfunc strides*(self: Tensor): IntArrayRef {.importcpp: \"#.strides()\".}\n\nfunc ndimension*(self: Tensor): int64 {.importcpp: \"#.ndimension()\".}\n  ## This 
is Arraymancer rank\nfunc nbytes*(self: Tensor): uint {.importcpp: \"#.nbytes()\".}\n  ## Bytes-size of the Tensor\nfunc numel*(self: Tensor): int64 {.importcpp: \"#.numel()\".}\n  ## This is Arraymancer and Numpy \"size\"\n\nfunc size*(self: Tensor, axis: int64): int64 {.importcpp: \"#.size(#)\".}\nfunc itemsize*(self: Tensor): uint {.importcpp: \"#.itemsize()\".}\nfunc element_size*(self: Tensor): int64 {.importcpp: \"#.element_size()\".}\n\n# Accessors\n# -----------------------------------------------------------------------\n\nfunc data_ptr*(self: Tensor, T: typedesc[SomeTorchType]): ptr UncheckedArray[T] {.importcpp: \"#.data_ptr<'2>(#)\".}\n  ## Gives raw access to a tensor data of type T.\n  ##\n  ## This is a very low-level procedure. You need to take care\n  ## of the tensor shape and strides yourself.\n  ##\n  ## It is recommended to use this only on contiguous tensors\n  ## (freshly created or freshly cloned) and to avoid\n  ## sliced tensors.\n\n# Backend\n# -----------------------------------------------------------------------\n\nfunc has_storage*(self: Tensor): bool {.importcpp: \"#.has_storage()\".}\nfunc get_device*(self: Tensor): int64 {.importcpp: \"#.get_device()\".}\nfunc is_cuda*(self: Tensor): bool {.importcpp: \"#.is_cuda()\".}\nfunc is_hip*(self: Tensor): bool {.importcpp: \"#.is_hip()\".}\nfunc is_sparse*(self: Tensor): bool {.importcpp: \"#.is_sparse()\".}\nfunc is_mkldnn*(self: Tensor): bool {.importcpp: \"#.is_mkldnn()\".}\nfunc is_vulkan*(self: Tensor): bool {.importcpp: \"#.is_vulkan()\".}\nfunc is_quantized*(self: Tensor): bool {.importcpp: \"#.is_quantized()\".}\nfunc is_meta*(self: Tensor): bool {.importcpp: \"#.is_meta()\".}\n\nfunc cpu*(self: Tensor): Tensor {.importcpp: \"#.cpu()\".}\nfunc cuda*(self: Tensor): Tensor {.importcpp: \"#.cuda()\".}\nfunc hip*(self: Tensor): Tensor {.importcpp: \"#.hip()\".}\nfunc vulkan*(self: Tensor): Tensor {.importcpp: \"#.vulkan()\".}\nfunc to*(self: Tensor, device: DeviceKind): Tensor {.importcpp: \"#.to(#)\".}\nfunc to*(self: Tensor, device: Device): Tensor {.importcpp: \"#.to(#)\".}\n\n# dtype\n# -----------------------------------------------------------------------\n\nfunc to*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.to(#)\".}\nfunc scalarType*(self: Tensor): ScalarKind {.importcpp: \"#.scalar_type()\".}\n\n# Constructors\n# -----------------------------------------------------------------------\n\n# DeviceType and ScalarType are auto-convertible to TensorOptions\n\nfunc init*(T: type Tensor): Tensor {.constructor, importcpp: \"torch::Tensor\".}\n\nfunc from_blob*(data: pointer, sizes: IntArrayRef, options: TensorOptions): Tensor {.importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes: IntArrayRef, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes: IntArrayRef, device: DeviceKind): Tensor {.importcpp: \"torch::from_blob(@)\".}\n\nfunc from_blob*(data: pointer, sizes: int64, options: TensorOptions): Tensor {.importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes: int64, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes: int64, device: DeviceKind): Tensor {.importcpp: \"torch::from_blob(@)\".}\n\nfunc from_blob*(data: pointer, sizes, strides: IntArrayRef, options: TensorOptions): Tensor {.\n    importcpp: \"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes, strides: IntArrayRef, scalarKind: ScalarKind): Tensor {.\n    importcpp: 
\"torch::from_blob(@)\".}\nfunc from_blob*(data: pointer, sizes, strides: IntArrayRef, device: DeviceKind): Tensor {.\n    importcpp: \"torch::from_blob(@)\".}\n\nfunc empty*(size: IntArrayRef, options: TensorOptions): Tensor {.importcpp: \"torch::empty(@)\".}\n  ## Create an uninitialized tensor of shape `size`\n  ## The tensor data must be filled manually\n  ##\n  ## The output tensor will be row major (C contiguous)\nfunc empty*(size: IntArrayRef, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::empty(@)\".}\nfunc empty*(size: IntArrayRef, device: DeviceKind): Tensor {.importcpp: \"torch::empty(@)\".}\n  ## Create an uninitialized tensor of shape `size`\n  ## The tensor data must be filled manually.\n  ##\n  ## If device is NOT on CPU make sure to use specialized\n  ## copy operations. For example to update on Cuda devices\n  ## use cudaMemcpy not a.data[i] = 123\n  ##\n  ## The output tensor will be row major (C contiguous)\n\nfunc clone*(self: Tensor): Tensor {.importcpp: \"#.clone()\".}\n\n# Random sampling\n# -----------------------------------------------------------------------\n\nfunc random_mut*(self: var Tensor, start, stopEx: int64) {.importcpp: \"#.random_(@)\".}\nfunc randint*(start, stopEx: int64): Tensor {.varargs, importcpp: \"torch::randint(#, #, {@})\".}\nfunc randint*(start, stopEx: int64, size: IntArrayRef): Tensor {.importcpp: \"torch::randint(@)\".}\n\nfunc rand_like*(self: Tensor, options: TensorOptions): Tensor {.importcpp: \"torch::rand_like(@)\".}\nfunc rand_like*(self: Tensor, options: ScalarKind): Tensor {.importcpp: \"torch::rand_like(@)\".}\nfunc rand_like*(self: Tensor, options: DeviceKind): Tensor {.importcpp: \"torch::rand_like(@)\".}\nfunc rand_like*(self: Tensor, options: Device): Tensor {.importcpp: \"torch::rand_like(@)\".}\nfunc rand_like*(self: Tensor): Tensor {.importcpp: \"torch::rand_like(@)\".}\n\n\n# func rand*(size: IntArrayRef, options: TensorOptions): Tensor {.importcpp: \"torch::rand(@)\"}\nfunc rand*(size: IntArrayRef, options: ScalarKind): Tensor {.importcpp: \"torch::rand(@)\".}\n# func rand*(size: IntArrayRef, options: DeviceKind): Tensor {.importcpp: \"torch::rand(@)\"}\n# func rand*(size: IntArrayRef, options: Device): Tensor {.importcpp: \"torch::rand(@)\"}\nfunc rand*(size: IntArrayRef): Tensor {.importcpp: \"torch::rand(@)\".}\n\n# Indexing\n# -----------------------------------------------------------------------\n# TODO throw IndexDefect when bounds checking is active\n# libtorch/include/ATen/TensorIndexing.h\n# and https://pytorch.org/cppdocs/notes/tensor_indexing.html\n\nfunc item*(self: Tensor, T: typedesc): T {.importcpp: \"#.item<'0>()\".}\n  ## Extract the scalar from a 0-dimensional tensor\n\n# Unsure what those corresponds to in Python\n# func `[]`*(self: Tensor, index: Scalar): Tensor {.importcpp: \"#[#]\".}\n# func `[]`*(self: Tensor, index: Tensor): Tensor {.importcpp: \"#[#]\".}\n# func `[]`*(self: Tensor, index: int64): Tensor {.importcpp: \"#[#]\".}\n\nfunc index*(self: Tensor): Tensor {.varargs, importcpp: \"#.index({@})\".}\n  ## Tensor indexing. 
It is recommended\n  ## to Nimify this in a high-level wrapper.\n  ## `tensor.index(indexers)`\n\n# We can't use the construct `#.index_put_({@}, #)`\n# so hardcode sizes,\n# 6d seems reasonable, that would be a batch of 3D videos (videoID/batchID, Time, Color Channel, Height, Width, Depth)\n# If you need more you likely aren't indexing individual values.\n\nfunc index_put*(self: var Tensor, i0: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1, i2: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1, i2, i3: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #, #, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1, i2, i3, i4: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #, #, #, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\nfunc index_put*(self: var Tensor, i0, i1, i2, i3, i4, i5: auto, val: Scalar or Tensor) {.importcpp: \"#.index_put_({#, #, #, #, #, #}, #)\".}\n  ## Tensor mutation at index. It is recommended\n  ## to Nimify this in a high-level wrapper.\n\n# Fancy Indexing\n# -----------------------------------------------------------------------\n\nfunc index_select*(self: Tensor, axis: int64, indices: Tensor): Tensor {.importcpp: \"#.index_select(@)\".}\nfunc masked_select*(self: Tensor, mask: Tensor): Tensor {.importcpp: \"#.masked_select(@)\".}\n\n# PyTorch exposes in-place `index_fill_` and `masked_fill_`\n# and out-of-place `index_fill` and `masked_fill`\n# that does in-place + clone\n# we only exposes the in-place version.\n\nfunc index_fill_mut*(self: var Tensor, mask: Tensor, value: Scalar or Tensor) {.importcpp: \"#.index_fill_(@)\".}\nfunc masked_fill_mut*(self: var Tensor, mask: Tensor, value: Scalar or Tensor) {.importcpp: \"#.masked_fill_(@)\".}\n\n# Shapeshifting\n# -----------------------------------------------------------------------\n\nfunc reshape*(self: Tensor): Tensor {.varargs, importcpp: \"#.reshape({@})\".}\nfunc view*(self: Tensor): Tensor {.varargs, importcpp: \"#.reshape({@})\".}\n\n# Automatic Differentiation\n# -----------------------------------------------------------------------\n\nfunc backward*(self: var Tensor){.importcpp: \"#.backward()\".}\n\n# Low-level slicing API\n# -----------------------------------------------------------------------\n\ntype\n  TorchSlice* {.importcpp: \"torch::indexing::Slice\", bycopy.} = object\n  # libtorch/include/ATen/TensorIndexing.h\n\n  TensorIndexType*{.size: sizeof(cint), bycopy, importcpp: \"torch::indexing::TensorIndexType\".} = enum\n    ## This is passed to torchSlice functions\n    IndexNone = 0\n    IndexEllipsis = 1\n    IndexInteger = 2\n    IndexBoolean = 3\n    IndexSlice = 4\n    IndexTensor = 5\n\n  SomeSlicer* = TensorIndexType or SomeSignedInt\n\nproc SliceSpan*(): TorchSlice {.importcpp: \"at::indexing::Slice()\".}\n    ## This is passed to the \"index\" function\n    ## This is Python \":\", span 
/ whole dimension\n\nfunc torchSlice*(){.importcpp: \"torch::indexing::Slice(@)\", constructor.}\nfunc torchSlice*(start: SomeSlicer): TorchSlice {.importcpp: \"torch::indexing::Slice(@)\", constructor.}\nfunc torchSlice*(start: SomeSlicer, stop: SomeSlicer): TorchSlice {.importcpp: \"torch::indexing::Slice(@)\", constructor.}\nfunc torchSlice*(start: SomeSlicer, stop: SomeSlicer, step: SomeSlicer): TorchSlice {.importcpp: \"torch::indexing::Slice(@)\", constructor.}\nfunc start*(s: TorchSlice): int64 {.importcpp: \"#.start()\".}\nfunc stop*(s: TorchSlice): int64 {.importcpp: \"#.stop()\".}\nfunc step*(s: TorchSlice): int64 {.importcpp: \"#.step()\".}\n\n# Operators\n# -----------------------------------------------------------------------\n\nfunc `not`*(self: Tensor): Tensor {.importcpp: \"(~#)\".}\nfunc `-`*(self: Tensor): Tensor {.importcpp: \"(-#)\".}\n\nfunc `+`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"(# + #)\".}\nfunc `-`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"(# - #)\".}\nfunc `*`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"(# * #)\".}\n\nfunc `*`*(a: cfloat or cdouble, b: Tensor): Tensor {.importcpp: \"(# * #)\".}\nfunc `*`*(self: Tensor, b: cfloat or cdouble): Tensor {.importcpp: \"(# * #)\".}\n\nfunc `+=`*(self: var Tensor, b: Tensor) {.importcpp: \"(# += #)\".}\nfunc `+=`*(self: var Tensor, s: Scalar) {.importcpp: \"(# += #)\".}\nfunc `-=`*(self: var Tensor, b: Tensor) {.importcpp: \"(# -= #)\".}\nfunc `-=`*(self: var Tensor, s: Scalar) {.importcpp: \"(# -= #)\".}\nfunc `*=`*(self: var Tensor, b: Tensor) {.importcpp: \"(# *= #)\".}\nfunc `*=`*(self: var Tensor, s: Scalar) {.importcpp: \"(# *= #)\".}\nfunc `/=`*(self: var Tensor, b: Tensor) {.importcpp: \"(# /= #)\".}\nfunc `/=`*(self: var Tensor, s: Scalar) {.importcpp: \"(# /= #)\".}\n\nfunc `and`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"#.bitwise_and(#)\".}\n  ## bitwise `and`.\nfunc `or`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"#.bitwise_or(#)\".}\n  ## bitwise `or`.\nfunc `xor`*(self: Tensor, b: Tensor): Tensor {.importcpp: \"#.bitwise_xor(#)\".}\n  ## bitwise `xor`.\n\nfunc bitand_mut*(self: var Tensor, s: Tensor) {.importcpp: \"#.bitwise_and_(#)\".}\n  ## In-place bitwise `and`.\nfunc bitor_mut*(self: var Tensor, s: Tensor) {.importcpp: \"#.bitwise_or_(#)\".}\n  ## In-place bitwise `or`.\nfunc bitxor_mut*(self: var Tensor, s: Tensor) {.importcpp: \"#.bitwise_xor_(#)\".}\n  ## In-place bitwise `xor`.\n\nfunc eq*(a, b: Tensor): Tensor {.importcpp: \"#.eq(#)\".}\n  ## Equality of each tensor values\nfunc equal*(a, b: Tensor): bool {.importcpp: \"#.equal(#)\".}\ntemplate `==`*(a, b: Tensor): bool =\n  a.equal(b)\n\n# Functions.h\n# -----------------------------------------------------------------------\n\nfunc toType*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.toType(@)\".}\nfunc toSparse*(self: Tensor): Tensor {.importcpp: \"#.to_sparse()\".}\nfunc toSparse*(self: Tensor, sparseDim: int64): Tensor {.importcpp: \"#.to_sparse(@)\".}\n\nfunc eye*(n: int64): Tensor {.importcpp: \"torch::eye(@)\".}\nfunc eye*(n: int64, options: TensorOptions): Tensor {.importcpp: \"torch::eye(@)\".}\nfunc eye*(n: int64, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::eye(@)\".}\nfunc eye*(n: int64, device: DeviceKind): Tensor {.importcpp: \"torch::eye(@)\".}\n\nfunc zeros*(dim: int64): Tensor {.importcpp: \"torch::zeros(@)\".}\nfunc zeros*(dim: IntArrayRef): Tensor {.importcpp: \"torch::zeros(@)\".}\nfunc zeros*(dim: IntArrayRef, options: TensorOptions): Tensor {.importcpp: 
\"torch::zeros(@)\".}\nfunc zeros*(dim: IntArrayRef, scalarKind: ScalarKind): Tensor {.importcpp: \"torch::zeros(@)\".}\nfunc zeros*(dim: IntArrayRef, device: DeviceKind): Tensor {.importcpp: \"torch::zeros(@)\".}\n\nfunc linspace*(start, stop: Scalar, steps: int64, options: TensorOptions) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar, steps: int64, options: ScalarKind) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar, steps: int64, options: DeviceKind) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar, steps: int64, options: Device) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar, steps: int64) : Tensor {.importcpp: \"torch::linspace(@)\".}\nfunc linspace*(start, stop: Scalar) : Tensor {.importcpp: \"torch::linspace(@)\".}\n\nfunc logspace*(start, stop: Scalar, steps, base: int64, options: TensorOptions) : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps, base: int64, options: ScalarKind) : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps, base: int64, options: DeviceKind) {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps, base: int64, options: Device)  : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps, base: int64) : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar, steps: int64)  : Tensor {.importcpp: \"torch::logspace(@)\".}\nfunc logspace*(start, stop: Scalar)  : Tensor {.importcpp: \"torch::logspace(@)\".}\n\nfunc arange*(stop: Scalar, options: TensorOptions) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(stop: Scalar, options: ScalarKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(stop: Scalar, options: DeviceKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(stop: Scalar, options: Device) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(stop: Scalar) : Tensor  {.importcpp: \"torch::arange(@)\".}\n\nfunc arange*(start, stop: Scalar, options: TensorOptions) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop: Scalar, options: ScalarKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop: Scalar, options: DeviceKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop: Scalar, options: Device) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop: Scalar) : Tensor  {.importcpp: \"torch::arange(@)\".}\n\nfunc arange*(start, stop, step: Scalar, options: TensorOptions) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop, step: Scalar, options: ScalarKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop, step: Scalar, options: DeviceKind) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop, step: Scalar, options: Device) : Tensor  {.importcpp: \"torch::arange(@)\".}\nfunc arange*(start, stop, step: Scalar) : Tensor  {.importcpp: \"torch::arange(@)\".}\n\n# Operations\n# -----------------------------------------------------------------------\nfunc add*(self: Tensor, other: Tensor, alpha: Scalar = 1): Tensor {.importcpp: \"#.add(@)\".}\nfunc add*(self: Tensor, other: Scalar, alpha: Scalar = 1): Tensor {.importcpp: \"#.add(@)\".}\nfunc addmv*(self: Tensor, mat: Tensor, vec: Tensor, beta: Scalar = 1, alpha: Scalar = 1): Tensor {.importcpp: \"#.addmv(@)\".}\nfunc addmm*(t, mat1, 
mat2: Tensor, beta: Scalar = 1, alpha: Scalar = 1): Tensor {.importcpp: \"#.addmm(@)\".}\nfunc mm*(t, other: Tensor): Tensor {.importcpp: \"#.mm(@)\".}\nfunc matmul*(t, other: Tensor): Tensor {.importcpp: \"#.matmul(@)\".}\nfunc bmm*(t, other: Tensor): Tensor {.importcpp: \"#.bmm(@)\".}\n\nfunc luSolve*(t, data, pivots: Tensor): Tensor {.importcpp: \"#.lu_solve(@)\".}\n\nfunc qr*(self: Tensor, some: bool = true): CppTuple2[Tensor, Tensor] {.importcpp: \"#.qr(@)\".}\n  ## Returns a tuple:\n  ## - Q of shape (∗,m,k)\n  ## - R of shape (∗,k,n)\n  ## with k=min(m,n) if some is true otherwise k=m\n  ##\n  ## The QR decomposition is batched over dimension(s) *\n  ## t = QR\n\n# addr?\nfunc all*(self: Tensor, axis: int64): Tensor {.importcpp: \"#.all(@)\".}\nfunc all*(self: Tensor, axis: int64, keepdim: bool): Tensor {.importcpp: \"#.all(@)\".}\nfunc allClose*(t, other: Tensor, rtol: float64 = 1e-5, abstol: float64 = 1e-8, equalNan: bool = false): bool {.importcpp: \"#.allclose(@)\".}\nfunc any*(self: Tensor, axis: int64): Tensor {.importcpp: \"#.any(@)\".}\nfunc any*(self: Tensor, axis: int64, keepdim: bool): Tensor {.importcpp: \"#.any(@)\".}\nfunc argmax*(self: Tensor): Tensor {.importcpp: \"#.argmax()\".}\nfunc argmax*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.argmax(@)\".}\nfunc argmin*(self: Tensor): Tensor {.importcpp: \"#.argmin()\".}\nfunc argmin*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.argmin(@)\".}\n\n# aggregate\n# -----------------------------------------------------------------------\n\n# sum needs wrapper procs/templates to allow for using nim arrays and single axis.\nfunc sum*(self: Tensor): Tensor {.importcpp: \"#.sum()\".}\nfunc sum*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.sum(@)\".}\nfunc sum*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.sum(@)\".}\nfunc sum*(self: Tensor, axis: int64, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.sum(@)\".}\nfunc sum*(self: Tensor, axis: IntArrayRef, keepdim: bool = false): Tensor {.importcpp: \"#.sum(@)\".}\nfunc sum*(self: Tensor, axis: IntArrayRef, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.sum(@)\".}\n\n# mean as well\nfunc mean*(self: Tensor): Tensor {.importcpp: \"#.mean()\".}\nfunc mean*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.mean(@)\".}\nfunc mean*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.mean(@)\".}\nfunc mean*(self: Tensor, axis: int64, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.mean(@)\".}\nfunc mean*(self: Tensor, axis: IntArrayRef, keepdim: bool = false): Tensor {.importcpp: \"#.mean(@)\".}\nfunc mean*(self: Tensor, axis: IntArrayRef, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.mean(@)\".}\n\n# median requires std::tuple\n\nfunc prod*(self: Tensor): Tensor {.importcpp: \"#.prod()\".}\nfunc prod*(self: Tensor, dtype: ScalarKind): Tensor {.importcpp: \"#.prod(@)\".}\nfunc prod*(self: Tensor, axis: int64, keepdim: bool = false): Tensor {.importcpp: \"#.prod(@)\".}\nfunc prod*(self: Tensor, axis: int64, keepdim: bool = false, dtype: ScalarKind): Tensor {.importcpp: \"#.prod(@)\".}\n\nfunc min*(self: Tensor): Tensor {.importcpp: \"#.min()\".}\nfunc min*(self: Tensor, axis: int64, keepdim: bool = false): CppTuple2[Tensor, Tensor] {.importcpp: \"torch::min(@)\".}\n  ## Returns a tuple (values, indices) of type (TensorT, TensorInt64)\n  ## of the minimum values and their index in 
the specified axis\n\nfunc max*(self: Tensor): Tensor {.importcpp: \"#.max()\".}\nfunc max*(self: Tensor, axis: int64, keepdim: bool = false): CppTuple2[Tensor, Tensor] {.importcpp: \"torch::max(@)\".}\n  ## Returns a tuple (values, indices) of type (TensorT, TensorInt64)\n  ## of the maximum values and their index in the specified axis\n\nfunc variance*(self: Tensor, unbiased: bool = true): Tensor {.importcpp: \"#.var(@)\".} # can't use `var` because of keyword.\nfunc variance*(self: Tensor, axis: int64, unbiased: bool = true, keepdim: bool = false): Tensor {.importcpp: \"#.var(@)\".}\nfunc variance*(self: Tensor, axis: IntArrayRef, unbiased: bool = true, keepdim: bool = false): Tensor {.importcpp: \"#.var(@)\".}\n\nfunc stddev*(self: Tensor, unbiased: bool = true): Tensor {.importcpp: \"#.std(@)\".}\nfunc stddev*(self: Tensor, axis: int64, unbiased: bool = true, keepdim: bool = false): Tensor {.importcpp: \"#.std(@)\".}\nfunc stddev*(self: Tensor, axis: IntArrayRef, unbiased: bool = true, keepdim: bool = false): Tensor {.importcpp: \"#.std(@)\".}\n\n# algorithms:\n# -----------------------------------------------------------------------\n\nfunc sort*(self: Tensor, axis: int64 = -1, descending: bool = false): CppTuple2[Tensor, Tensor] {.importcpp: \"#.sort(@)\".}\n  ## Sorts the elements of the input tensor along a given dimension in ascending order by value.\n  ## If dim is not given, the last dimension of the input is chosen (dim=-1).\n  ## Returns (values, originalIndices) or type (TensorT, TensorInt64)\n  ## where originalIndices is the original index of each values (before sorting)\nfunc argsort*(self: Tensor, axis: int64 = -1, descending: bool = false): Tensor {.importcpp: \"#.argsort(@)\".}\n\n# math\n# -----------------------------------------------------------------------\nfunc abs*(self: Tensor): Tensor {.importcpp: \"#.abs()\".}\nfunc absolute*(self: Tensor): Tensor {.importcpp: \"#.absolute()\".}\nfunc angle*(self: Tensor): Tensor {.importcpp: \"#.angle()\".}\nfunc sgn*(self: Tensor): Tensor {.importcpp: \"#.sgn()\".}\nfunc conj*(self: Tensor): Tensor {.importcpp: \"#.conj()\".}\nfunc acos*(self: Tensor): Tensor {.importcpp: \"#.acos()\".}\nfunc arccos*(self: Tensor): Tensor {.importcpp: \"#.arccos()\".}\nfunc acosh*(self: Tensor): Tensor {.importcpp: \"#.acosh()\".}\nfunc arccosh*(self: Tensor): Tensor {.importcpp: \"#.arccosh()\".}\nfunc asinh*(self: Tensor): Tensor {.importcpp: \"#.asinh()\".}\nfunc arcsinh*(self: Tensor): Tensor {.importcpp: \"#.arcsinh()\".}\nfunc atanh*(self: Tensor): Tensor {.importcpp: \"#.atanh()\".}\nfunc arctanh*(self: Tensor): Tensor {.importcpp: \"#.arctanh()\".}\nfunc asin*(self: Tensor): Tensor {.importcpp: \"#.asin()\".}\nfunc arcsin*(self: Tensor): Tensor {.importcpp: \"#.arcsin()\".}\nfunc atan*(self: Tensor): Tensor {.importcpp: \"#.atan()\".}\nfunc arctan*(self: Tensor): Tensor {.importcpp: \"#.arctan()\".}\nfunc cos*(self: Tensor): Tensor {.importcpp: \"#.cos()\".}\nfunc sin*(self: Tensor): Tensor {.importcpp: \"#.sin()\".}\nfunc tan*(self: Tensor): Tensor {.importcpp: \"#.tan()\".}\nfunc exp*(self: Tensor): Tensor {.importcpp: \"#.exp()\".}\nfunc exp2*(self: Tensor): Tensor {.importcpp: \"#.exp2()\".}\nfunc erf*(self: Tensor): Tensor {.importcpp: \"#.erf()\".}\nfunc erfc*(self: Tensor): Tensor {.importcpp: \"#.erfc()\".}\nfunc reciprocal*(self: Tensor): Tensor {.importcpp: \"#.reciprocal()\".}\nfunc neg*(self: Tensor): Tensor {.importcpp: \"#.neg()\".}\nfunc clamp*(self: Tensor, min, max: Scalar): Tensor {.importcpp: 
\"#.clamp(@)\".}\nfunc clampMin*(self: Tensor, min: Scalar): Tensor {.importcpp: \"#.clamp_min(@)\".}\nfunc clampMax*(self: Tensor, max: Scalar): Tensor {.importcpp: \"#.clamp_max(@)\".}\n\nfunc dot*(self: Tensor, other: Tensor): Tensor {.importcpp: \"#.dot(@)\".}\n{\nfunc squeeze*(self: Tensor): Tensor {.importcpp: \"#.squeeze()\".}\nfunc squeeze*(self: Tensor, axis: int64): Tensor {.importcpp: \"#.squeeze(@)\".}\nfunc unsqueeze*(self: Tensor, axis: int64): Tensor {.importcpp: \"#.unsqueeze(@)\".}\n\n{.pop.}\n\nfunc inv*(self: Tensor) : Tensor {.importcpp: \"#.linalg_inv(@)\", header: \"linalg.h\".}\n\n# FFT\n# -----------------------------------------------------------------------\n{.push header: \"fft.h\".}\nfunc fftshift*(self: Tensor): Tensor {.importcpp: \"torch::fft_fftshift(@)\".}\nfunc fftshift*(self: Tensor, dim: IntArrayRef): Tensor {.importcpp: \"torch::fft_ifftshift(@)\".}\nfunc ifftshift*(self: Tensor): Tensor {.importcpp: \"torch::fft_fftshift(@)\".}\nfunc ifftshift*(self: Tensor, dim: IntArrayRef): Tensor {.importcpp: \"torch::fft_ifftshift(@)\".}\n\nlet defaultNorm : CppString = initCppString(\"backward\")\n\nfunc fft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_fft(@)\".}\n## Compute the 1-D Fourier transform\n## ``n`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" - normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" - normalize by 1/sqrt(n)\nfunc fft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_fft(@)\".}\n## Compute the 1-D Fourier transform\n\nfunc ifft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::ifft_ifft(@)\".}\n## Compute the 1-D Fourier transform\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\nfunc ifft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::ifft_ifft(@)\".}\n## Compute the 1-D Fourier transform\n\nfunc fft2*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.\n    importcpp: \"torch::fft_fft2(@)\".}\n## Compute the 2-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" - normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" - normalize by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc fft2*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_fft2(@)\".}\n## Compute the 2-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc fft2*(self: Tensor): Tensor {.importcpp: \"torch::fft_fft2(@)\".}\n## Compute the 2-D Fourier transform\n\nfunc ifft2*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.\n    importcpp: \"torch::fft_ifft2(@)\".}\n## Compute the 2-D Inverse Fourier transform\n## ``s`` represents signal size. 
If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc ifft2*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_ifft2(@)\".}\n## Compute the 2-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc ifft2*(self: Tensor): Tensor {.importcpp: \"torch::fft_ifft2(@)\".}\n## Compute the 2-D Inverse Fourier transform\n\nfunc fftn*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_fftn(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" normalize by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc fftn*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_fftn(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc fftn*(self: Tensor): Tensor {.importcpp: \"torch::fft_fftn(@)\".}\n## Compute the N-D Fourier transform\n\nfunc ifftn*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_ifftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc ifftn*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_ifftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. 
If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc ifftn*(self: Tensor): Tensor {.importcpp: \"torch::fft_ifftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n\nfunc rfft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_rfft\".}\n## Computes the one dimensional Fourier transform of real-valued input.\nfunc rfft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_rfft\".}\n## Computes the one dimensional Fourier transform of real-valued input.\n\nfunc irfft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_irfft\".}\n## Computes the one dimensional Fourier transform of real-valued input.\nfunc irfft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_irfft\".}\n## Computes the one dimensional Fourier transform of real-valued input.\n\nfunc rfft2*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_rfft2(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" - normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" - normalize by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc rfft2*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_rfft2(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc rfft2*(self: Tensor): Tensor {.importcpp: \"torch::fft_rfft2(@)\".}\n## Compute the N-D Fourier transform\n\nfunc irfft2*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.\n    importcpp: \"torch::fft_irfft2(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc irfft2*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_irfft2(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc irfft2*(self: Tensor): Tensor {.importcpp: \"torch::fft_irfft2(@)\".}\n## Compute the N-D Inverse Fourier transform\n\n\nfunc rfftn*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.importcpp: \"torch::fft_rfftn(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. 
If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##    * \"forward\" - normalize by 1/n\n##    * \"backward\" - no normalization\n##    * \"ortho\" - normalize by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc rfftn*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_rfftn(@)\".}\n## Compute the N-D Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc rfftn*(self: Tensor): Tensor {.importcpp: \"torch::fft_rfftn(@)\".}\n## Compute the N-D Fourier transform\n\nfunc irfftn*(self: Tensor, s: IntArrayRef, dim: IntArrayRef, norm: CppString = defaultNorm): Tensor {.\n    importcpp: \"torch::fft_irfftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\n## ``norm`` can be :\n##   * \"forward\" - no normalization\n##   * \"backward\" - normalization by 1/n\n##   * \"ortho\" - normalization by 1/sqrt(n)\n## With n the logical FFT size: ``n = prod(s)``.\nfunc irfftn*(self: Tensor, s: IntArrayRef): Tensor {.importcpp: \"torch::fft_irfftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n## ``s`` represents signal size. If given, each dimension dim[i] will either be zero padded or trimmed to the length s[i] before computing the FFT.\nfunc irfftn*(self: Tensor): Tensor {.importcpp: \"torch::fft_irfftn(@)\".}\n## Compute the N-D Inverse Fourier transform\n\nfunc hfft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm) : Tensor {.importcpp: \"torch::hfft\".}\n## Computes the 1 dimensional FFT of a onesided Hermitian signal.\nfunc hfft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm) : Tensor {.importcpp: \"torch::hfft\".}\n## Computes the 1 dimensional FFT of a onesided Hermitian signal.\nfunc ihfft*(self: Tensor, n: int64, dim: int64 = -1, norm: CppString = defaultNorm) : Tensor {.importcpp: \"torch::ihfft\".}\n## Computes the inverse FFT of a real-valued Fourier domain signal.\nfunc ihfft*(self: Tensor, dim: int64 = -1, norm: CppString = defaultNorm) : Tensor {.importcpp: \"torch::ihfft\".}\n## Computes the inverse FFT of a real-valued Fourier domain signal.\n\n#func convolution*(self: Tensor, weight: Tensor, bias: Tensor, stride, padding, dilation: int64, transposed: bool, outputPadding: int64, groups: int64): Tensor {.importcpp: \"torch::convolution(@)\".}\n"}]}}

Unable to parse data as RequestMessage

Got valid Notification message of type textDocument/didChange

Got document change for URI: file:///home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim saving to /tmp/nimlsp/00000000FF0711BA.nim

Got diagnostics: @[(section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Hint, filePath: ???, line: 0, column: -1, doc: tensors [Processing], quality: 0, line: 0, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Hint, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 50, column: 47, doc: template/generic instantiation from here, quality: 0, line: 50, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Error, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 37, column: 13, doc: string literal expected, quality: 0, line: 37, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Hint, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 56, column: 34, doc: template/generic instantiation from here, quality: 0, line: 56, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Error, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 37, column: 13, doc: string literal expected, quality: 0, line: 37, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Hint, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 58, column: 44, doc: template/generic instantiation from here, quality: 0, line: 58, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Error, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 37, column: 13, doc: string literal expected, quality: 0, line: 37, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Hint, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 61, column: 46, doc: template/generic instantiation from here, quality: 0, line: 61, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Error, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 37, column: 13, doc: string literal expected, quality: 0, line: 37, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Hint, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 79, column: 14, doc: template/generic instantiation from here, quality: 0, line: 79, prefix: None), (section: ideChk, symKind: skUnknown, qualifiedPath: , forth: Error, filePath: /home/rcaillaud/Workspace/localws/ForkedProjects/flambeau/flambeau/raw_bindings/tensors.nim, line: 37, column: 13, doc: string literal expected, quality: 0, line: 37, prefix: None)] and 460 more

Trying to read frame

Once the bug occurs, I can only kill neovim with SIGINT.

@PMunch
Copy link
Owner

PMunch commented Apr 7, 2021

Hmm, strange. NimLSP just appears to be waiting for more requests. Do you see NimLSP consuming a lot of CPU when this happens? It might be a Neovim issue, or a Neovim/NimLSP interaction, that causes it to hang somehow.

@Clonkk
Copy link
Author

Clonkk commented Apr 7, 2021

When the bug occurs, it's the nvim process that gets stuck at 100% CPU; nimlsp is fine.

I tried 2 different LSP clients (neoclide/coc and prabirshrestha/vim-lsp) and 2 different Neovim versions (latest devel and 0.4.3), with the same result.

@zetashift
Copy link

zetashift commented Apr 7, 2021

Running neovim-nightly on Linux x64 with nvim-lspconfig, and I can't reproduce. How long do I have to wait for it to spike to 100%?

@Clonkk
Copy link
Author

Clonkk commented Apr 7, 2021

Here is what I do to reproduce:

  • Open tensors.nim
  • Wait around 30 seconds
  • Press "o" (newline + insert mode)
  • Write "{" character

Sometimes I have to repeat the last 2 steps once before triggering it.
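
For reference, a scripted approximation of these steps (just a sketch: it assumes nvim is started from the flambeau repository root, and since :normal leaves insert mode right away it may not reproduce the bug exactly like typing by hand):

" hypothetical repro script, e.g. run inside Neovim with :source repro.vim
edit flambeau/raw_bindings/tensors.nim
sleep 30
normal! o{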

This is on NVIM v0.5.0-dev+1176-g94381310c2. I'll update to the new nightly NVIM v0.5.0-dev+1227-gb518b9076 and test again, but I had the same result with v0.4.3 and v0.4.4.

I'm using openSUSE Leap 15.2 and the nvim.appimage build.

Opening non-Nim files (I tried .jl, .cpp, and .py files) does not cause any issue.
Opening other Nim files triggers the issue.

Edit with new nightly: NVIM v0.5.0-dev+1227-gb518b9076

  • Same results, except it took a bit longer (I'd say around a minute) before the bug appeared

@zetashift
Copy link

Ah yes, I can reproduce: after writing the { character, neovim spikes all the way up...

@zetashift
Copy link

zetashift commented Apr 7, 2021

It might be syntax highlighting: I used :set syntax=off and it doesn't spike anymore.
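
If it really is the highlighting, a possible stopgap (only a sketch, assuming you want to keep highlighting for other filetypes) is to apply that same setting automatically for Nim buffers:

" workaround sketch: turn syntax highlighting off for Nim buffers only
augroup NimSyntaxOff
  autocmd!
  autocmd FileType nim setlocal syntax=off
augroup END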

@Clonkk
Copy link
Author

Clonkk commented Apr 7, 2021

Could be, as writing # { does not trigger the spike either.

@zetashift
Copy link

I'm using https://github.com/alaviss/nim.nvim for syntax highlighting; if I use vim-polyglot instead, I cannot reproduce this.
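
For anyone who wants to try the same swap, this is roughly what my setup looks like (assuming vim-plug as the plugin manager; adapt to whatever you use):

call plug#begin()
" Commenting this out avoids the spike for me:
" Plug 'alaviss/nim.nvim'
" Generic highlighting instead, cannot reproduce with this:
Plug 'sheerun/vim-polyglot'
call plug#end()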

@PMunch
Copy link
Owner

PMunch commented Apr 7, 2021

I use zah/nim.vim for highlighting with normal Vim (VIM - Vi IMproved 8.2 (2019 Dec 12, compiled Feb 09 2021 23:51:55)) and I can't reproduce.

@Clonkk
Copy link
Author

Clonkk commented Apr 7, 2021

Same result as @zetashift: I use alaviss/nim.nvim as well. If I stop using the plugin or use another one, it seems to work.

@zetashift
Copy link

That probably makes this an alaviss/nim.nvim issue, no?

@Clonkk
Copy link
Author

Clonkk commented Apr 7, 2021

I created alaviss/nim.nvim#41

@Clonkk Clonkk closed this as completed Apr 7, 2021