Decoding of AddedTokens containing Latin Extended-A characters is wrong #1915

@andrewivan123

Description

I added Vietnamese words as AddedTokens to a pretrained tokenizer. An example is "đem". However, when I decode the token ID of that added token, the result is wrong. This is caused by the character đ, which is mapped to the byte \x11. Here is my code:

from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file('/home/ec2-user/efs/OLMo/olmo_data/tokenizers/allenai_dolma2.json')
tokenizer.add_tokens(["đem"])
tokenizer.decode([tokenizer.token_to_id("đem")])

and here is the result

'\x11em'
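This looks like the ByteLevel decoder treating the characters of the added token as byte-level aliases rather than as literal text. A minimal sketch, assuming the tokenizer uses a GPT-2-style byte-to-unicode table: in that table, "đ" (U+0111) is the stand-in character for the raw byte 0x11, so decoding "đem" through the inverse table yields exactly b"\x11em".

```python
def bytes_to_unicode():
    # GPT-2-style byte-to-unicode table: printable bytes map to themselves,
    # every other byte b is given the stand-in character chr(256 + n).
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))

# Invert the table, as a ByteLevel decoder would when mapping characters
# back to raw bytes.
unicode_to_bytes = {c: b for b, c in bytes_to_unicode().items()}

# "đ" is the alias for byte 0x11, while "e" and "m" map to themselves,
# which reproduces the wrong decode of the added token "đem".
print(unicode_to_bytes["đ"])                       # 17 == 0x11
print(bytes(unicode_to_bytes[ch] for ch in "đem"))  # b'\x11em'
```

This suggests the added token's content is being passed through the byte-level decode step instead of being emitted verbatim.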
