
compromise
modest natural language processing
npm install compromise

compromise tries its best.

Welcome to v12! - Release Notes here 👍

.match():

compromise makes it simple to interpret and match text:

let doc = nlp(entireNovel)

doc.if('the #Adjective of times').text()
// "it was the blurst of times??"
if (doc.has('simon says #Verb')) {
  return true
}

.verbs():

conjugate and negate verbs in any tense:

let doc = nlp('she sells seashells by the seashore.')
doc.verbs().toPastTense()
doc.text()
// 'she sold seashells by the seashore.'

.nouns():

transform nouns to plural and possessive forms:

let doc = nlp('the purple dinosaur')
doc.nouns().toPlural()
doc.text()
// 'the purple dinosaurs'

.numbers():

interpret plaintext numbers:

nlp.extend(require('compromise-numbers'))

let doc = nlp('ninety five thousand and fifty two')
doc.numbers().add(2)
doc.text()
// 'ninety five thousand and fifty four'

.topics():

grab subjects in a text:

let doc = nlp(buddyHolly)
doc
  .people()
  .if('mary')
  .json()
// [{text:'Mary Tyler Moore'}]

doc = nlp(freshPrince)
doc
  .places()
  .first()
  .text()
// 'West Philadelphia'

doc = nlp('the opera about richard nixon visiting china')
doc.topics().json()
// [
//   { text: 'richard nixon' },
//   { text: 'china' }
// ]

.contractions():

work with contracted and implicit words:

let doc = nlp("we're not gonna take it, no we ain't gonna take it.")

// match an implicit term
doc.has('going') // true

// transform
doc.contractions().expand()
doc.text()
// 'we are not going to take it, no we are not going to take it.'

Use it on the client-side:

<script src="https://unpkg.com/compromise"></script>
<script src="https://unpkg.com/compromise-numbers"></script>
<script>
  nlp.extend(compromiseNumbers)

  var doc = nlp('two bottles of beer')
  doc.numbers().minus(1)
  document.body.innerHTML = doc.text()
  // 'one bottle of beer'
</script>

or as an es-module:

import nlp from 'compromise'

var doc = nlp('London is calling')
doc.verbs().toNegative()
// 'London is not calling'

or if you don't care about POS-tagging, you can use the tokenize-only build: (90kb!)

<script src="https://unpkg.com/compromise/builds/compromise-tokenize.js"></script>
<script>
  var doc = nlp('No, my son is also named Bort.')
  //you can see the text has no tags
  console.log(doc.has('#Noun')) //false
  //but the whole api still works
  console.log(doc.has('my .* is .? named /^b[oa]rt/')) //true
</script>

compromise is 170kb (minified). it's pretty fast, and can run on keypress.

it works mainly by conjugating many forms of a basic word list. The final lexicon is ~14,000 words.
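
For a rough illustration (our own example; exact tags may vary), conjugated forms are recognized even when only a base form is listed:

nlp('she spoke').has('#PastTense') // true
nlp('speaking').has('#Gerund') // true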

you can read more about how it works, here.

.extend():

set a custom interpretation of your own words:

let myWords = {
  kermit: 'FirstName',
  fozzie: 'FirstName',
}
let doc = nlp(muppetText, myWords)

or make more changes with a compromise-plugin.

const nlp = require('compromise')

nlp.extend((Doc, world) => {
  // add new tags
  world.addTags({
    Character: {
      isA: 'Person',
      notA: 'Adjective',
    },
  })

  // add or change words in the lexicon
  world.addWords({
    kermit: 'Character',
    gonzo: 'Character',
  })

  // add methods to run after the tagger
  world.postProcess(doc => {
    doc.match('light the lights').tag('#Verb . #Plural')
  })

  // add a whole new method
  Doc.prototype.kermitVoice = function() {
    this.sentences().prepend('well,')
    this.match('i [(am|was)]').prepend('um,')
    return this
  }
})

API:

Constructor

(these methods are on the nlp object)

Utils
  • .all() - return the whole original document ('zoom out')
  • .found [getter] - is this document empty?
  • .parent() - return the previous result
  • .parents() - return all of the previous results
  • .tagger() - (re-)run the part-of-speech tagger on this document
  • .wordCount() - count the # of terms in the document
  • .length [getter] - count the # of characters in the document (string length)
  • .clone() - deep-copy the document, so that no references remain
  • .cache({}) - freeze the current state of the document, for speed-purposes
  • .uncache() - un-freezes the current state of the document, so it may be transformed
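
a quick sketch of a few of these (the example text is our own):

let doc = nlp('toronto is in canada. so is montreal.')
doc.wordCount() // 7
let m = doc.match('montreal')
m.found // true
m.all().text() // zoom back out to the whole document
let copy = doc.clone() // deep-copy, so later changes don't touch `doc`
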
Accessors
Match

(all match methods use the match-syntax.)

  • .match('') - return a new Doc, with this one as a parent
  • .not('') - return all results except for this
  • .matchOne('') - return only the first match
  • .if('') - return each current phrase, only if it contains this match ('only')
  • .ifNo('') - Filter-out any current phrases that have this match ('notIf')
  • .has('') - Return a boolean if this match exists
  • .lookBehind('') - search through earlier terms, in the sentence
  • .lookAhead('') - search through following terms, in the sentence
  • .before('') - return all terms before a match, in each phrase
  • .after('') - return all terms after a match, in each phrase
  • .lookup([]) - quick find for an array of string matches
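
a small sketch of a few of these (example sentence is our own):

let doc = nlp('one two three four five')
doc.match('two three').text() // 'two three'
doc.has('four') // true
doc.before('four').text() // 'one two three'
doc.after('two').text() // 'three four five'
doc.not('three').text() // everything except 'three'
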
Case
Whitespace
  • .pre('') - add this punctuation or whitespace before each match
  • .post('') - add this punctuation or whitespace after each match
  • .trim() - remove start and end whitespace
  • .hyphenate() - connect words with hyphen, and remove whitespace
  • .dehyphenate() - remove hyphens between words, and set whitespace
  • .toQuotations() - add quotation marks around these matches
  • .toParentheses() - add brackets around these matches
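
a rough sketch (example text is our own):

let doc = nlp('ice cream is good')
doc.match('ice cream').hyphenate()
doc.text() // 'ice-cream is good'
doc.match('good').toQuotations()
doc.text() // quotation marks are added around 'good'
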
Tag
  • .tag('') - Give all terms the given tag
  • .tagSafe('') - Only apply tag to terms if it is consistent with current tags
  • .unTag('') - Remove this tag from the given terms
  • .canBe('') - return only the terms that can be this tag
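
a loose sketch, using a made-up '#MovieTitle' tag:

let doc = nlp('i watched swim fans yesterday')
doc.match('swim fans').tag('#MovieTitle')
doc.match('#MovieTitle').text() // 'swim fans'
doc.match('swim fans').unTag('#MovieTitle') // remove the tag again
doc.canBe('#Verb').text() // only the terms that could be a verb
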
Loops
  • .map(fn) - run each phrase through a function, and create a new document
  • .forEach(fn) - run a function on each phrase, as an individual document
  • .filter(fn) - return only the phrases that return true
  • .find(fn) - return a document with only the first phrase that matches
  • .some(fn) - return true or false if there is one matching phrase
  • .random(fn) - sample a subset of the results
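
a small sketch (example text is our own):

let doc = nlp('houston we have a problem. this is fine.')
doc.forEach(phrase => {
  console.log(phrase.text()) // each sentence, as its own document
})
doc.filter(p => p.has('problem')).text() // 'houston we have a problem.'
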
Insert
Transform
Output
Selections
Subsets

Plugins:

These are some helpful extensions:

Adjectives

npm install compromise-adjectives

Dates

npm install compromise-dates
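
a hedged sketch, assuming the plugin adds a .dates() method and is registered like compromise-numbers above:

nlp.extend(require('compromise-dates'))

let doc = nlp('we will meet on the 5th of june')
doc.dates().json()
// the parsed date-phrases, as json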

Numbers

npm install compromise-numbers

Export

npm install compromise-export

  • .export() - store a parsed document for later use
  • nlp.load() - re-generate a Doc object from .export() results
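
a minimal sketch of the round-trip, based on the two methods above:

nlp.extend(require('compromise-export'))

let doc = nlp('London is calling')
let data = doc.export() // a serializable snapshot of the parsed document
// ...later, skip re-parsing:
nlp.load(data).text() // 'London is calling'
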
Html

npm install compromise-html

  • .html({}) - generate sanitized html from the document
Hash

npm install compromise-hash

  • .hash() - generate an md5 hash from the document+tags
  • .isEqual(doc) - compare the hash of two documents for semantic-equality
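
a small sketch of these two methods:

nlp.extend(require('compromise-hash'))

let a = nlp('the rain in spain')
let b = nlp('the rain in spain')
a.hash() // an md5 string for the document + its tags
a.isEqual(b) // true
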
Keypress

npm install compromise-keypress

Ngrams

npm install compromise-ngrams
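
a hedged sketch, re-using the .ngrams() call shown in the Typescript section below:

nlp.extend(require('compromise-ngrams'))

let doc = nlp('to be or not to be')
doc.ngrams({ min: 1 })
// word-sequences found in the text, with their counts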

Paragraphs

npm install compromise-paragraphs

this plugin creates a wrapper around the default sentence objects.

Sentences

npm install compromise-sentences

Syllables

npm install compromise-syllables

  • .syllables() - split each term by its typical pronunciation
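
a loose sketch (the exact output shape is an assumption):

nlp.extend(require('compromise-syllables'))

nlp('calgary').terms().syllables()
// something like [{ text: 'calgary', syllables: ['cal', 'ga', 'ry'] }]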

Typescript

Typescript support is still a work in progress. So far, plugin support is mostly complete, and can be used to extend nlp in a type-safe way.

import nlp from 'compromise'
import ngrams from 'compromise-ngrams'
import numbers from 'compromise-numbers'

// .extend() can be chained
const nlpEx  = nlp.extend(ngrams).extend(numbers)

nlpEx('This is type safe!').ngrams({ min: 1 })
nlpEx('This is type safe!').numbers()

Type-safe Plugins

The .extend() function returns an nlp type with updated Document and World types (Phrase, Term and Pool are not currently supported). While the global nlp also receives the plugin at runtime, its type will not be updated; this is a limitation of Typescript.

Typesafe plugins can be created by using the nlp.Plugin type:

interface myExtendedDoc {
  sayHello(): string
}

interface myExtendedWorld {
  hello: string
}

const myPlugin: nlp.Plugin<myExtendedDoc, myExtendedWorld> = (Doc, world) => {
  world.hello = 'Hello world!'

  Doc.prototype.sayHello = () => world.hello
}

const _nlp = nlp.extend(myPlugin)
const doc = _nlp('This is safe!')
doc.sayHello()
doc.world.hello = "Hello again!"

Known Issues

  • compromise_1.default is not a function - This is a problem with your tsconfig.json; it can be solved by adding "esModuleInterop": true. Make sure to run tsc --init when starting a new Typescript project.

Docs:

Tutorials:
3rd party:
Talks:
Some fun Applications:

Limitations:

  • slash-support: We currently split slashes up as different words, like we do for hyphens. so things like this don't work: nlp('the koala eats/shoots/leaves').has('koala leaves') //false

  • inter-sentence match: By default, sentences are the top-level abstraction. Inter-sentence, or multi-sentence matches aren't supported: nlp("that's it. Back to Winnipeg!").has('it back')//false

  • nested match syntax: the dangerous beauty of regex is that you can recurse indefinitely. Our match syntax is much weaker. Things like this are not (yet) possible: doc.match('(modern (major|minor))? general'). Complex matches must be achieved with successive .match() statements.

  • dependency parsing: Proper sentence transformation requires understanding the syntax tree of a sentence, which we don't currently do. We should! Help wanted with this.

FAQ

    ☂️ Isn't javascript too...

      yeah it is!
      it wasn't built to compete with NLTK, and may not fit every project.
      string processing is synchronous too, and parallelizing node processes is weird.
      See here for information about speed & performance, and here for project motivations

    💃 Can it run on my arduino-watch?

      Only if it's water-proof!
      Read quick start for running compromise in workers, mobile apps, and all sorts of funny environments.

    🌎 Compromise in other Languages?

      we've got work-in-progress forks for German and French, in the same philosophy.
      and need some help.

    ✨ Partial builds?

      we do offer a compromise-tokenize build (./builds/compromise-tokenize.js), which has the POS-tagger pulled out.
      but otherwise, compromise isn't easily tree-shaken.
      the tagging methods are competitive, and greedy, so it's not recommended to pull things out.
      Note that without full POS-tagging, the contraction-parser won't work perfectly. ((spencer's cool) vs. (spencer's house))
      It's recommended to run the library fully.

See Also:

  •   naturalNode - fancier statistical nlp in javascript
  •   superScript - clever conversation engine in js
  •   nodeBox linguistics - conjugation, inflection in javascript
  •   reText - very impressive text utilities in javascript
  •   jsPos - javascript build of the time-tested Brill-tagger
  •   spaCy - speedy, multilingual tagger in C/python
  •   Prose - quick tagger in Go by Joseph Kato

MIT
