A quick exercise in processing a short text from Wikipedia with pre-trained English and Thai language models: first breaking the text into individual sentences and words (tokenization), then using spaCy to perform part-of-speech (POS) tagging and syntactic dependency parsing, and visualizing the results. Named entity recognition (NER) is also applied, with the recognized entities labelled.
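A minimal sketch of this pipeline in spaCy, assuming an illustrative English snippet rather than the original Wikipedia text. The runnable part uses a blank English pipeline with a rule-based sentencizer, since the trained `en_core_web_sm` model (and any Thai support, e.g. via `pythainlp`) requires a separate download; the POS, dependency, and NER steps are shown in comments under that assumption.

```python
import spacy

# Hypothetical example text standing in for the Wikipedia passage.
text = "SpaCy is an open-source library. It was developed by Explosion."

# A blank English pipeline provides tokenization only (no model download).
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")  # rule-based sentence boundary detection
doc = nlp(text)

sentences = [sent.text for sent in doc.sents]  # sentence tokenization
tokens = [token.text for token in doc]         # word tokenization

# POS tagging, dependency parsing, and NER need a trained pipeline,
# e.g. after `python -m spacy download en_core_web_sm`:
#   nlp = spacy.load("en_core_web_sm")
#   doc = nlp(text)
#   for token in doc:
#       print(token.text, token.pos_, token.dep_)   # POS tag + dependency
#   for ent in doc.ents:
#       print(ent.text, ent.label_)                 # labelled named entities
# Visualization:
#   spacy.displacy.render(doc, style="dep")   # dependency tree
#   spacy.displacy.render(doc, style="ent")   # highlighted entities

print(sentences)
print(tokens)
```

For Thai, a similar sketch would use `spacy.blank("th")`, which delegates word segmentation to `pythainlp` when it is installed.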