
Check the reason for using double precision in topic models #1576

Open
@menshikh-iv

Description


Our topic models (TMs) return vectors as double-precision float64. This looks suspicious, because float32 is sufficient for all of them. We need to check what causes this behavior and which concrete methods are affected.
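To make the check concrete, here is a minimal sketch of the kind of inspection meant here: train a tiny LdaModel and look at the dtype of the arrays it returns. The toy corpus is my own illustration, not from the issue, and I assume gensim's public `get_topics()` and `get_document_topics()` accessors.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Tiny toy corpus, just enough to train a model and inspect dtypes.
texts = [
    ["human", "interface", "computer"],
    ["survey", "user", "computer", "system", "response", "time"],
    ["graph", "trees", "minors"],
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

model = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2)

# Topic-word matrix: per this issue, the dtype comes back as float64.
print(model.get_topics().dtype)

# Per-document topic probabilities are numpy scalars; check their dtype too.
topic_id, prob = model.get_document_topics(corpus[0])[0]
print(np.asarray(prob).dtype)
```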

The first step: look at this line in the test; after that, collect all TMs that depend on this test and check where and why float64 appears.

Expected result: a detailed description (where and why), and a fix for this behavior after discussion (if needed).
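As a side note, a quick illustration (mine, not part of the issue) of why float32 should be sufficient for topic-probability vectors: it halves memory, and the rounding error it introduces is far below the noise level of the stochastic inference itself.

```python
import numpy as np

rng = np.random.default_rng(0)
probs64 = rng.dirichlet(np.ones(100))  # a float64 topic distribution
probs32 = probs64.astype(np.float32)   # the same vector in single precision

print(probs64.nbytes, probs32.nbytes)   # 800 vs 400 bytes
print(np.abs(probs64 - probs32).max())  # well under 1e-7, negligible here
```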

Labels

- Hacktoberfest: issues marked for Hacktoberfest
- bug: issue describes a bug
- difficulty easy: easy issue, requires a small fix
- good first issue: issue for new contributors (does not require gensim understanding; very simple)
- performance: issue related to performance (in the HW sense)
