1.4.0

@nsthorat released this 11 Dec 20:38
da343c1

High-level additions

  • Kernel modularization has begun! This doesn't affect any code today, but in
    the future it will let us drop code for kernels that your model doesn't use.
  • The WebAssembly backend is now on NPM in alpha. You can find details in the
    README [here](https://github.com/tensorflow/tfjs/tree/master/tfjs-backend-wasm).
    We're not listing all the WASM PRs here yet; that will happen with the first
    non-alpha release.
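
As a quick sketch of how the alpha WASM backend is wired in (assuming the `@tensorflow/tfjs-backend-wasm` package is installed; the API may change before the first non-alpha release):

```typescript
import * as tf from '@tensorflow/tfjs';
// Importing the package registers the 'wasm' backend with tfjs.
import '@tensorflow/tfjs-backend-wasm';

async function main() {
  // Switch from the default backend to WASM before running any ops.
  await tf.setBackend('wasm');
  console.log(tf.getBackend()); // 'wasm'
}
main();
```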

Core (1.3.0 ==> 1.4.0)

Features

  • [WASM] Add resizeBilinear kernel. (#2436).
  • [WASM] Add ArgMax kernel (#2433).
  • [WASM] Add relu, relu6 kernels. (#2432).
  • [core, wasm] Modularize FromPixels and make it work for the w… (#2429).
  • [WASM] Fuse relu, relu6, prelu activations for conv2d. Add fuse… (#2424).
  • [WASM] Add FloorDiv used by PoseNet (#2426).
  • tf.linalg.bandPart (#2226). Thanks, @DirkToewe.
  • tf.broadcastTo (#2238). Thanks, @DirkToewe.
  • [WASM] Add NonMaxSuppressionV3 (#2414).
  • [WASM] Add AvgPool kernel. (#2411).
  • [core] Do not fail if half float extension not found and we are… (#2410).
  • [WASM] Add Addn used by MobileNet (#2408).
  • [WASM] Add maxPool kernel. (#2396).
  • [WASM] Add ClipByValue (#2405).
  • [WASM] Add PadV2 kernel (#2404).
  • [WASM] Add DepthwiseConv2dNative (#2374).
  • added metadata to the model artifacts on the loaders (#2392).
  • [WASM] Add FusedConv2D which only supports fusing bias. (#2356).
  • [WASM] Add CropAndResize kernel. (#2307).
  • [core] Add divNoNan op (#2320).
  • [WASM] Add Conv2D that calls into xnn pack. (#2283).
  • [WASM] Add Concat and Transpose kernels (#2303).
  • [WASM] Add min / max kernels. (#2289).
  • [WASM] Add batchnorm kernel. (#2264).
  • [WASM] Add Slice and Square kernels (#2269).
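
Of the new ops above, `tf.broadcastTo` tiles a tensor out to a larger, broadcast-compatible shape. A plain-JS sketch of the simplest case (broadcasting a shape-`[3]` row to `[2, 3]`; the helper name here is illustrative, not part of the API):

```javascript
// Illustrative sketch: broadcast a 1-D row vector to shape [numRows, row.length],
// mirroring what tf.broadcastTo(tf.tensor1d([1, 2, 3]), [2, 3]) produces.
function broadcastRow(row, numRows) {
  return Array.from({ length: numRows }, () => row.slice());
}

console.log(broadcastRow([1, 2, 3], 2)); // [ [ 1, 2, 3 ], [ 1, 2, 3 ] ]
```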

Bug fixes

  • [WASM] Support integer inputs for bilinear resizing and crop an… (#2480).
  • [core] Relax fusing logic. (#2422).
  • [Core] Fix bug with printing complex64 numbers. (#2347).
  • [WASM] Transpose the filter of the convolution before calling x… (#2344).
  • [Core] Fix bug with WEBGL_PACK=false and uploading packed tensor (#2291).
  • [Core] Fix bug with pointwise conv2d with packing (#2290).

Development

  • [core] Rename fused ops for internal scopeName tracking. (#2417).
  • fix padding_test.ts & 'for .. in' to 'for .. of' (#2388). Thanks, @karikera.
  • Revert "[Core] Update to TypeScript 3.6.3. " (#2376) (#2355).
  • [Core] Update to TypeScript 3.6.3. (#2355).
  • [WASM] Add a hierarchical test spec. (#2305).
  • [Core] Test WEBGL_PACK=false on CI (#2294).

Misc

  • Add cloud func to sync RN test app to browserStack
  • remove SavedModelTensorInfo, use ModelTensorInfo (#2379).
  • [tfjs-core/tfjs-react-native] run core unit tests in react-native

Data (1.3.0 ==> 1.4.0)

Layers (1.3.0 ==> 1.4.0)

Features

  • [layers] Add masking support to tf.layers.bidirectional() (#2371).
  • [layers] Add SpatialDropout1D Layer (#2369).

Bug fixes

  • [layers] Properly support channelsFirst dataFormat in Flatten layer (#2346).

Development

  • fix padding_test.ts & 'for .. in' to 'for .. of' (#2388). Thanks, @karikera.

Misc

  • Update layers integration tests to 1.3.0 (#2282).

Converter (1.3.0 ==> 1.4.0)

Features

  • support fused matmul (#2462).
  • support FusedDepthwiseConv2dNative op (#2446).
  • add support to fused depthwise conv2d (#2439).
  • Support GraphModel execution with SignatureDef keys (#2393).
  • [converter] Add support for op DivNoNan (#2343).
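
DivNoNan divides element-wise but returns 0 wherever the divisor is 0, instead of `Infinity` or `NaN`. A plain-JS sketch of those semantics on flat arrays (the helper is illustrative):

```javascript
// Element-wise semantics of divNoNan: a / b, except 0 wherever b is 0.
function divNoNan(a, b) {
  return a.map((x, i) => (b[i] === 0 ? 0 : x / b[i]));
}

console.log(divNoNan([4, 2, 1], [2, 0, 4])); // [ 2, 0, 0.25 ]
```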

Breaking changes

  • the tensorflow cpu pip fails on mac (#2389).

Bug fixes

  • [converter] Fix mobilenet conversion problem (#2468).
  • enable quantization for tfhub modules (#2466).
  • add output format auto fill for frozen model and required param check (#2434).
  • the tensorflow cpu pip fails on mac (#2389).
  • do not quantize int32 tensors (#2368).
  • add output for noop and tensorArrayExit; this allows a child node with a control dep to proceed (#2312).
  • fix the versions missing from the model topology json (#2306).
  • clean the unused control flow input nodes from the graph (#2287).

Performance

  • support fused matmul (#2462).
  • [converter] Fold conv/depthwiseconv + add + batchnorm (#2463).

Misc

  • move signature def to userDefinedMetadata section (#2381).
  • fix the Concat and ConcatV2 ops when the input tensor list length mismatches the attribute param N (#2352).
  • add signature def to model.json (#2326).
  • added support for the prelu op (#2333).
  • remove the nodes that are skipped during batch norm folding for control node inputs (#2337).
  • rely on only the tensorflow cpu pip (#2325).
  • Fix g3 errors (#2311).
  • fix g3 errors (#2310).
  • sync g3 change to github (#2308).
  • replace the weights node name with scaled weights node (#2284).
  • move the prelu fusing logic after grappler optimization (#2234).

Node (1.3.0 ==> 1.4.0)

Features

  • [node] Add node n-api version 5 for new node releases (#2438).
  • [node] Add SavedModel execution (#2336).
  • [tfjs-node] Add SavedModel signatureDef loading and deleting API (#2217).

Bug fixes

  • [node] Fix typo when parsing string in c bindings (#2435).
  • Fix C++ char array creation syntax. (#2406).

Development

  • [node] Add node n-api version 5 for new node releases (#2438).
  • [node] Add SavedModel execution (#2336).
  • [node] Use ModelTensorInfo when reading SavedModel (#2353).
  • [node] Add divnonan in tfjs-node (#2351).
  • [node] Add prep-gpu for windows system (#2334).
  • simplify install script, remove dependency to yarn (#2340). Thanks, @valette.

Documentation

  • add raspberry-pi doc (#2444).

Misc

  • [node] Depend on latest core (#2496).