
sujaysreedharg/Tokenization-using-python-for-nlp


Tokenization is a fundamental step in Natural Language Processing. This exercise covers three tasks: tokenize all the words in german_text using word_tokenize() and print the result; tokenize only the capital words in german_text by first writing a pattern called capital_words that matches them and then passing it to regexp_tokenize(); and tokenize only the emoji in german_text.
