Fix for bug 690, where fpmax returned incorrect support values (#692)
* Fix for bug 690, where fpmax returned incorrect support values

* add entry to changelog

Co-authored-by: rasbt <mail@sebastianraschka.com>
harenbergsd and rasbt committed May 20, 2020
1 parent d250289 commit 56944fc
Showing 3 changed files with 18 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/sources/CHANGELOG.md
@@ -30,6 +30,7 @@ The CHANGELOG for the current development version is available at

 - Fixes axis DeprecationWarning in matplotlib v3.1.0 and newer. ([#673](https://github.com/rasbt/mlxtend/pull/673))
 - Fixes an issue with using `meshgrid` in `no_information_rate` function used by the `bootstrap_point632_score` function for the .632+ estimate. ([#688](https://github.com/rasbt/mlxtend/pull/688))
+- Fixes an issue in `fpmax` that could lead to incorrect support values. ([#692](https://github.com/rasbt/mlxtend/pull/692) via [Steve Harenberg](https://github.com/harenbergsd))

 ### Version 0.17.2 (02-24-2020)
2 changes: 2 additions & 0 deletions mlxtend/frequent_patterns/fpmax.py
@@ -100,6 +100,8 @@ def fpmax_step(tree, minsup, mfit, colnames, max_len, verbose):
             mfit.insert_itemset(largest_set)
             if max_len is None or len(largest_set) <= max_len:
                 support = tree.root.count
+                if len(items) > 0:
+                    support = min([tree.nodes[i][0].count for i in items])
                 yield support, largest_set

     if verbose:
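The two added lines change how the yielded support is computed: `tree.root.count` is the count of the conditional prefix alone, so when the maximal itemset also extends into items remaining in the conditional tree, the correct support is bounded by the least-frequent of those items. The definition being enforced can be sketched in plain Python (an illustrative check, not mlxtend's FP-tree code; `support` is a hypothetical helper):

```python
# Support of an itemset = fraction of transactions containing all its items.
transactions = [{'a', 'b'}, {'a', 'c'}, {'c'}]

def support(itemset, transactions):
    """Fraction of transactions that contain every item of `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

# The joint support of {'a', 'b'} is bounded by its least-frequent member,
# so reporting the count of the prefix 'a' alone would overstate it:
print(support({'a'}, transactions))       # prefix alone: 2/3
print(support({'a', 'b'}, transactions))  # full itemset: 1/3
```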
15 changes: 15 additions & 0 deletions mlxtend/frequent_patterns/tests/test_fpmax.py
@@ -74,3 +74,18 @@ def test_output(self):
 class TestEx3(unittest.TestCase, FPTestEx3All):
     def setUp(self):
         FPTestEx3All.setUp(self, fpmax)
+
+
+class TestEx4(unittest.TestCase):
+    def setUp(self):
+        self.df = pd.DataFrame(
+            [[1, 1, 0], [1, 0, 1], [0, 0, 1]], columns=['a', 'b', 'c'])
+        self.fpalgo = fpmax
+
+    def test_output(self):
+        res_df = self.fpalgo(self.df, min_support=0.01, use_colnames=True)
+        expect = pd.DataFrame([[0.3333333333333333, frozenset(['a', 'b'])],
+                               [0.3333333333333333, frozenset(['a', 'c'])]],
+                              columns=['support', 'itemsets'])
+
+        compare_dataframes(res_df, expect)
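The expected values in the new `TestEx4` can be cross-checked by brute force over the same three transactions: enumerate every candidate itemset, keep the frequent ones, and retain only those with no frequent proper superset. A small illustrative script (not part of the test suite; all names are local, not mlxtend APIs):

```python
from itertools import combinations

# Transactions encoded by the one-hot rows [[1, 1, 0], [1, 0, 1], [0, 0, 1]].
transactions = [{'a', 'b'}, {'a', 'c'}, {'c'}]
items = sorted(set().union(*transactions))
min_support = 0.01

def support(itemset):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

# All frequent itemsets, then the maximal ones (no frequent proper superset).
frequent = [frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)
            if support(frozenset(c)) >= min_support]
maximal = [s for s in frequent if not any(s < t for t in frequent)]

for s in sorted(maximal, key=sorted):
    print(sorted(s), support(s))  # {'a','b'} and {'a','c'}, each with support 1/3
```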
