Loading a GPT-2 tokenizer from a saved fine-tuning checkpoint directory fails with the following error:
---> 13 tokenizer_ru = GPT2Tokenizer.from_pretrained('./all/unfreeze_all/m_checkpoint-3364613/')

1 frames
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
   1012                 ", ".join(s3_models),
   1013                 pretrained_model_name_or_path,
-> 1014                 list(cls.vocab_files_names.values()),
   1015             )
   1016         )

OSError: Model name './all/unfreeze_all/m_checkpoint-3364613/' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed './all/unfreeze_all/m_checkpoint-3364613/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
The checkpoint directory holds the model weights, but GPT2Tokenizer.from_pretrained also needs the tokenizer's vocabulary files, ['vocab.json', 'merges.txt'], in that same directory. Saving the model weights alone does not write those files; the tokenizer has to be saved there explicitly, or loaded from wherever its vocabulary actually lives.
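A minimal sketch of the two usual fixes, assuming the model was fine-tuned from a stock GPT-2 checkpoint (the 'gpt2' base identifier is an assumption; if you trained a custom tokenizer, copy its vocab.json and merges.txt into the checkpoint directory instead):

from transformers import GPT2Tokenizer

checkpoint_dir = './all/unfreeze_all/m_checkpoint-3364613/'

# Fix 1: fine-tuning the weights does not change the vocabulary, so the
# tokenizer can be loaded from the base model identifier instead of the
# checkpoint path ('gpt2' is an assumption -- use your actual base model).
tokenizer_ru = GPT2Tokenizer.from_pretrained('gpt2')

# Fix 2: write vocab.json and merges.txt into the checkpoint directory
# once; after that, from_pretrained(checkpoint_dir) finds them and works.
tokenizer_ru.save_pretrained(checkpoint_dir)
tokenizer_ru = GPT2Tokenizer.from_pretrained(checkpoint_dir)

Either way, from_pretrained stops falling through to the built-in model name list, which is what produced the OSError above.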