Commit
feat(custom normalizer): allow custom control of normalization (#172)
* feat(custom normalizer): allow custom control of normalization

  Adds an optional `normalizer` option to query functions; this is a transformation function run over candidate match text after `trim` or `collapseWhitespace` has been applied, but before any matching text/function/regexp is tested against it. The use case is tidying up DOM text (which may contain, for instance, invisible Unicode control characters) before running matching logic, keeping the matching logic and the normalization logic separate.

* Expand acronyms out

* Add `getDefaultNormalizer()` and move existing options

  This commit moves the implementation of `trim` + `collapseWhitespace`, making them just another normalizer. It also exposes a `getDefaultNormalizer()` function which provides the default normalization and allows for its configuration. Removed `matches` and `fuzzyMatches` from being publicly exposed. Updated tests, added new documentation for `normalizer` and `getDefaultNormalizer`, and removed documentation for the previous top-level `trim` and `collapseWhitespace` options.

* Apply normalizer treatment to queryAllByDisplayValue
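The commit message above describes `getDefaultNormalizer()` as a configurable factory that bundles the old `trim` and `collapseWhitespace` behaviour. The sketch below is an illustrative guess at that behaviour, not the library's exact implementation; `stripInvisible` is a hypothetical custom normalizer for the invisible-character use case the message mentions.

```javascript
// Sketch of a configurable default normalizer, assuming the trim +
// collapseWhitespace semantics described in the commit message.
function getDefaultNormalizer({trim = true, collapseWhitespace = true} = {}) {
  return text => {
    let normalizedText = text
    if (trim) normalizedText = normalizedText.trim()
    if (collapseWhitespace) normalizedText = normalizedText.replace(/\s+/g, ' ')
    return normalizedText
  }
}

// Hypothetical custom normalizer: strip zero-width characters before
// applying the default treatment, keeping the two concerns separate.
const stripInvisible = text =>
  getDefaultNormalizer()(text.replace(/[\u200B-\u200D\uFEFF]/g, ''))

console.log(getDefaultNormalizer()('  Hello   world  ')) // "Hello world"
console.log(stripInvisible('Hel\u200Blo')) // "Hello"
```

A query would then receive such a function via its `normalizer` option instead of the removed top-level `trim`/`collapseWhitespace` options.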
Showing 9 changed files with 294 additions and 87 deletions.
@@ -1,25 +1,28 @@
-import {fuzzyMatches, matches} from '../'
+import {fuzzyMatches, matches} from '../matches'
 
 // unit tests for text match utils
 
 const node = null
+const normalizer = str => str
 
 test('matchers accept strings', () => {
-  expect(matches('ABC', node, 'ABC')).toBe(true)
-  expect(fuzzyMatches('ABC', node, 'ABC')).toBe(true)
+  expect(matches('ABC', node, 'ABC', normalizer)).toBe(true)
+  expect(fuzzyMatches('ABC', node, 'ABC', normalizer)).toBe(true)
 })
 
 test('matchers accept regex', () => {
-  expect(matches('ABC', node, /ABC/)).toBe(true)
-  expect(fuzzyMatches('ABC', node, /ABC/)).toBe(true)
+  expect(matches('ABC', node, /ABC/, normalizer)).toBe(true)
+  expect(fuzzyMatches('ABC', node, /ABC/, normalizer)).toBe(true)
 })
 
 test('matchers accept functions', () => {
-  expect(matches('ABC', node, text => text === 'ABC')).toBe(true)
-  expect(fuzzyMatches('ABC', node, text => text === 'ABC')).toBe(true)
+  expect(matches('ABC', node, text => text === 'ABC', normalizer)).toBe(true)
+  expect(fuzzyMatches('ABC', node, text => text === 'ABC', normalizer)).toBe(
+    true,
+  )
 })
 
 test('matchers return false if text to match is not a string', () => {
-  expect(matches(null, node, 'ABC')).toBe(false)
-  expect(fuzzyMatches(null, node, 'ABC')).toBe(false)
+  expect(matches(null, node, 'ABC', normalizer)).toBe(false)
+  expect(fuzzyMatches(null, node, 'ABC', normalizer)).toBe(false)
 })