The "Get tokens from text analysis" documentation forgets to mention the "tokenizer" body property.
There are parameters for character filters ("char_filter") and token filters ("filter"), but no way to specify a tokenizer?
The "tokenizer" property is in fact supported by the _analyze
REST endpoint:
POST /{index}/_analyze
{
  "text": "Hello there!",
  "tokenizer": "keyword"
}
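
For context, "tokenizer" can also be combined with the documented "char_filter" and "filter" parameters to build an ad-hoc analysis chain in the same request. The sketch below assumes built-in components ("html_strip", "standard", "lowercase") purely for illustration; any index name stands in via {index} as above:

POST /{index}/_analyze
{
  "char_filter": ["html_strip"],
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": "<p>Hello THERE!</p>"
}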