Add a `get_added_tokens_decoder` method, as well as `token_to_id` and `id_to_token`.
Example Python usage:

```python
tokenizer.token_to_id("hello")  # 31373
tokenizer.id_to_token(31373)  # "hello"
tokenizer.get_added_tokens_decoder()  # {50256: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), ...}
```
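For reference, a minimal sketch of the mapping semantics these two lookups expose, using a hypothetical toy vocab (a real tokenizer holds the full model vocabulary; the names `vocab` and `ids_to_tokens` here are illustrative, not part of the API):

```python
# Toy vocab standing in for a tokenizer's real vocabulary (assumption).
vocab = {"hello": 31373, "<|endoftext|>": 50256}
ids_to_tokens = {i: t for t, i in vocab.items()}

def token_to_id(token):
    # Map a token string to its integer id; None if the token is unknown.
    return vocab.get(token)

def id_to_token(index):
    # Map an integer id back to its token string; None if out of range.
    return ids_to_tokens.get(index)

print(token_to_id("hello"))  # 31373
print(id_to_token(50256))  # <|endoftext|>
```

The two methods are inverses of each other over the known vocabulary, and both return `None` rather than raising on an unknown token or id.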