Connectionist Temporal Classification example #32
Comments
Thank you for inquiring. We are working on an implementation. It will be released if/when we are happy with its performance, API, and documentation.
Here's another request for a CTC layer / objective. I'm working on speech modeling, and CTC is pretty much essential to it.
Hear, hear — I'm working on speech modeling too. A CTC implementation would help my effort greatly.
Just an update on timelines: we're getting closer, but it won't happen before sometime in January at the earliest.
Baidu has just released a fast, open-source implementation of CTC for CPUs and GPUs. It has a very simple C interface, and it should hopefully be fairly easy to create TensorFlow bindings (we're happy to help). The release includes Torch bindings.
China is out-Silicon-Valleying Silicon Valley!
What's the timeline now, @ebrevdo?
We're closer. Still cleaning up code for release.
Hi @ebrevdo, a CTC implementation would be really useful to me for some genome analysis tasks that are handled poorly by HMM approaches. — Matthew
@ebrevdo any progress? :)
Also highly interested in this, and I hope that you can release your implementation soon! @ebrevdo
Coming very soon (I hope).
The CTC loss and two decoders (greedy & beam search for CTC) should be in the next push.
(It'll be accessible via tf.contrib.ctc.ctc_loss, etc.)
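For anyone waiting on the decoders, the greedy (best-path) variant is simple enough to sketch in plain Python: take the argmax label at each frame, collapse consecutive repeats, then drop blanks. This is only an illustration of the idea, not TensorFlow's implementation; the blank index of 0 is an assumption for this sketch (TensorFlow's CTC ops reserve the last class for blank).

```python
def ctc_greedy_decode(logits, blank=0):
    """Greedy (best-path) CTC decoding sketch.

    logits: list of per-frame score lists (T frames x C classes).
    blank:  index of the blank symbol (assumed 0 here for illustration).
    """
    # Argmax label at each time step.
    best = [max(range(len(frame)), key=frame.__getitem__) for frame in logits]
    out = []
    prev = None
    for b in best:
        # Collapse consecutive repeats, then remove blanks.
        if b != prev and b != blank:
            out.append(b)
        prev = b
    return out

# e.g. frames whose argmaxes are [1, 1, 0, 1] decode to [1, 1]:
# the repeated 1s collapse, the blank separates the two emissions.
```

Beam-search decoding follows the same collapse rules but keeps several candidate prefixes per frame instead of a single best path.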
Awesome!
Very awesome.
Very, very awesome!
ctc_loss and friends are now part of the contrib directory — once they mature and we fix any issues, we'll add them to the core.
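For readers unfamiliar with what the new op computes: the CTC loss is the negative log-probability of the target label sequence, summed over all frame-level alignments, and it can be computed with the forward (alpha) recursion from Graves et al. The sketch below is a plain-Python illustration of that recursion, not the TensorFlow kernel; using index 0 for blank is an assumption here (TensorFlow's ctc_loss reserves the *last* class for blank).

```python
import math

def ctc_forward_loss(probs, labels, blank=0):
    """CTC negative log-likelihood via the forward (alpha) recursion.

    probs:  T x C list of per-frame, already-normalized label probabilities.
    labels: target label sequence (no blanks).
    blank:  blank symbol index (assumed 0 for this sketch).
    """
    # Extended sequence with blanks between and around the labels:
    # [b, l1, b, l2, b, ...]
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S, T = len(ext), len(probs)

    alpha = [[0.0] * S for _ in range(T)]
    alpha[0][0] = probs[0][blank]       # start in initial blank...
    if S > 1:
        alpha[0][1] = probs[0][ext[1]]  # ...or in the first label

    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]                 # stay on the same symbol
            if s >= 1:
                a += alpha[t - 1][s - 1]        # advance by one
            # Skipping a blank is allowed only between distinct labels.
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1][s - 2]
            alpha[t][s] = a * probs[t][ext[s]]

    # Valid paths end on the final label or the final blank.
    total = alpha[T - 1][S - 1] + (alpha[T - 1][S - 2] if S > 1 else 0.0)
    return -math.log(total)
```

As a worked example, with two frames of uniform probabilities over {blank, 1} and target [1], the valid alignments are (blank,1), (1,blank), and (1,1), each with probability 0.25, so the loss is -log(0.75).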
Is there any implementation of the CTC cost function in TensorFlow, or any example concerning CTC?