Last night's update to the tokenizer files broke my tool

#143
by devenv571 - opened

Hi, I create a tokenizer in the following way:

model_name = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer_name = "mistralai/Mistral-7B-Instruct-v0.2"

model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)

All worked great until this morning. Now I get the following error:

transformers/tokenization_utils_fast.py", line 111, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

Any idea how to mitigate this?

Thank you!

Mistral AI_ org

Hi deven, have you updated your transformers? Everything seems to be working fine here; if the issue continues, could you provide more details on your setup?
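
For reference, a quick way to share your setup is to print the versions you actually have installed, for example:

import transformers, tokenizers
print("transformers:", transformers.__version__)
print("tokenizers:", tokenizers.__version__)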

Hi, I also have this error with transformers==4.38.2. Yesterday everything worked fine; this morning all the code stopped working.
And I get an error like this:
data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

Mistral AI_ org

4.38.2 indeed does not work; transformers==4.41.2 and the most recent version, 4.42.3, work fine. Consider updating your transformers, there were a few changes related to the tokenizers in general 👍

Be sure to properly uninstall and clean out the previous version first, and it should work fine!
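
For example, after a clean reinstall of transformers (and its tokenizers dependency), a quick sanity check is to reload the tokenizer and confirm that the new tokenizer.json parses; this sketch assumes network access to the Hub:

# quick check after upgrading: this should no longer raise the PyPreTokenizerTypeWrapper error
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
print(type(tok).__name__, tok("Hello world"))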

Hello!
I have the same issue.
My tool is simply unusable at the moment.

I don't understand how I'm supposed to update the transformers package, considering that it ships inside the Docker image of the Mistral model that is served to me by TGI. So my hands are a bit tied at this point.

Are you expecting to fix this tokenizer issue today by EOD?

Thanks in advance!

4.38.2 indeed does not work; transformers==4.41.2 and the most recent version, 4.42.3, work fine. Consider updating your transformers, there were a few changes related to the tokenizers in general 👍

Be sure to properly uninstall and clean out the previous version first, and it should work fine!

Thank you very much for your help, everything worked well

Mistral AI_ org
•
edited Jul 4

Hello!
I have the same issue.
My tool is simply unusable at the moment.

I don't understand how I'm supposed to update the transformers package, considering that it ships inside the Docker image of the Mistral model that is served to me by TGI. So my hands are a bit tied at this point.

Are you expecting to fix this tokenizer issue today by EOD?

Thanks in advance!

I see, it might be because of the transformers version in the config file; we will look into what version would be best to update to 👍

Update: The issue seems to be related to the Docker file, I will mention something here once it's solved
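
In the meantime, one possible stopgap (not an official fix) is to pin the model to the commit from before the tokenizer update, if your deployment lets you set a revision (the TGI launcher has a --revision flag). A small sketch, assuming a recent huggingface_hub, to find the commit id to pin:

# list recent commits of the repo so you can pick the one before the tokenizer update,
# then start TGI with --revision <commit_id> instead of the default main
from huggingface_hub import list_repo_commits

for commit in list_repo_commits("mistralai/Mistral-7B-Instruct-v0.2"):
    print(commit.commit_id, commit.created_at, commit.title)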

Hello!
I have the same issue.
My tool is simply unusable at the moment.

I don't understand how I'm supposed to update the transformers package, considering that it ships inside the Docker image of the Mistral model that is served to me by TGI. So my hands are a bit tied at this point.

Are you expecting to fix this tokenizer issue today by EOD?

Thanks in advance!

I see, it might be because of the transformers version in the config file; we will look into what version would be best to update to 👍

Update: The issue seems to be related to the Docker file, I will mention something here once it's solved

I just went to test the latest version of the TGI image and it's back to working normally, so thank you, community, for solving this on such short notice!

We encountered the same issue with the latest commit of this model when deploying the NVIDIA Triton server with vLLM, using the following image:

repository: nvcr.io/nvidia/tritonserver
tag: "24.02-vllm-python-py3"

This issue has been resolved in the latest image. It appears that the transformers library has been updated in the newer version:

repository: nvcr.io/nvidia/tritonserver
tag: "24.06-vllm-python-py3"

Issue Details

I0705 18:38:14.261557 1 python_be.cc:2545] TRITONBACKEND_ModelInstanceFinalize: delete instance state
E0705 18:38:14.261999 1 backend_model.cc:691] ERROR: Failed to create instance: Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

At:
  /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py(111): __init__
  /usr/local/lib/python3.10/dist-packages/transformers/models/llama/tokenization_llama_fast.py(133): __init__
  /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py(2288): _from_pretrained
  /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py(2048): from_pretrained
  /usr/local/lib/python3.10/dist-packages/transformers/models/auto/tokenization_auto.py(825): from_pretrained
  /usr/local/lib/python3.10/dist-packages/vllm/transformers_utils/tokenizer.py(50): get_tokenizer
  /usr/local/lib/python3.10/dist-packages/vllm/transformers_utils/tokenizer.py(100): __init__
  /usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py(163): _init_tokenizer
  /usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py(100): __init__
  /usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py(364): _init_engine
  /usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py(319): __init__
  /usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py(623): from_engine_args
  /tmp/foldergSveTX/1/model.py(59): initialize

I0705 18:38:14.262071 1 python_be.cc:2360] TRITONBACKEND_ModelFinalize: delete model state
E0705 18:38:14.262510 1 model_lifecycle.cc:638] failed to load 'mistral7b' version 1: Internal: Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

I also have the same issue when trying to set up a model with SageMaker, using the following image:
763104351884.dkr.ecr.eu-central-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.1.1-tgi1.4.5-gpu-py310-cu121-ubuntu22.04
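
If you can redeploy, a newer Hugging Face LLM (TGI) container should bundle a recent enough transformers. A minimal sketch with the sagemaker SDK, assuming its get_huggingface_llm_image_uri helper and a SageMaker execution role (the instance type is just an example):

# deploy on the latest available Hugging Face TGI container instead of the old tgi1.4.5 image
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

image_uri = get_huggingface_llm_image_uri("huggingface")  # resolves the latest supported TGI image
model = HuggingFaceModel(
    role=sagemaker.get_execution_role(),
    image_uri=image_uri,
    env={"HF_MODEL_ID": "mistralai/Mistral-7B-Instruct-v0.2"},
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")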

When I deployed PrivateGPT locally on Windows 10, an error occurred. I tried updating the transformers version as suggested above, but that did not solve the problem. I am not using a graphics card locally; could this be related to NVIDIA?

poetry run python -m private_gpt
10:24:25.303 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'ollama']
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
--- Logging error ---
Traceback (most recent call last):
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init_.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.ui.ui.PrivateGptUi'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init_.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.server.ingest.ingest_service.IngestService'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init_.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.components.llm.llm_component.LLMComponent'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "F:\private-gpt-0.5.0\private-gpt-main\private_gpt\components\llm\llm_component.py", line 30, in init
AutoTokenizer.from_pretrained(
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 825, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2048, in from_pretrained
return cls._from_pretrained(
^^^^^^^^^^^^^^^^^^^^^
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2287, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\transformers\models\llama\tokenization_llama_fast.py", line 133, in init
super().init(
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\transformers\tokenization_utils_fast.py", line 111, in init
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Python311\Lib\logging_init.py", line 1110, in emit
msg = self.format(record)
^^^^^^^^^^^^^^^^^^^
File "D:\Python311\Lib\logging_init
.py", line 953, in format
return fmt.format(record)
^^^^^^^^^^^^^^^^^^
File "D:\Python311\Lib\logging_init.py", line 687, in format
record.message = record.getMessage()
^^^^^^^^^^^^^^^^^^^
File "D:\Python311\Lib\logging_init
.py", line 377, in getMessage
msg = msg % self.args
~~~~^~~~~~~~~~~
TypeError: not all arguments converted during string formatting
Call stack:
File "", line 198, in run_module_as_main
File "", line 88, in run_code
File "F:\private-gpt-0.5.0\private-gpt-main\private_gpt_main
.py", line 5, in
from private_gpt.main import app
File "", line 1176, in find_and_load
File "", line 1147, in find_and_load_unlocked
File "", line 690, in load_unlocked
File "", line 940, in exec_module
File "", line 241, in call_with_frames_removed
File "F:\private-gpt-0.5.0\private-gpt-main\private_gpt\main.py", line 6, in
app = create_app(global_injector)
File "F:\private-gpt-0.5.0\private-gpt-main\private_gpt\launcher.py", line 63, in create_app
ui = root_injector.get(PrivateGptUi)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 91, in wrapper
return function(*args, **kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 974, in get
provider_instance = scope_instance.get(interface, binding.provider)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 91, in wrapper
return function(*args, **kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 800, in get
instance = self.get_instance(key, provider, self.injector)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 811, in get_instance
return provider.get(injector)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 264, in get
return injector.create_object(self.cls)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 998, in create_object
self.call_with_injection(init, self
=instance, kwargs=additional_kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init.py", line 1031, in call_with_injection
dependencies = self.args_to_inject(
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 91, in wrapper
return function(*args, **kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init.py", line 1079, in args_to_inject
instance: Any = self.get(interface)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 91, in wrapper
return function(*args, **kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init.py", line 974, in get
provider_instance = scope_instance.get(interface, binding.provider)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 91, in wrapper
return function(*args, **kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init.py", line 800, in get
instance = self.get_instance(key, provider, self.injector)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 811, in get_instance
return provider.get(injector)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 264, in get
return injector.create_object(self.cls)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 998, in create_object
self.call_with_injection(init, self
=instance, kwargs=additional_kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init.py", line 1031, in call_with_injection
dependencies = self.args_to_inject(
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 91, in wrapper
return function(*args, **kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init.py", line 1079, in args_to_inject
instance: Any = self.get(interface)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 91, in wrapper
return function(*args, **kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init.py", line 974, in get
provider_instance = scope_instance.get(interface, binding.provider)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 91, in wrapper
return function(*args, **kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init.py", line 800, in get
instance = self.get_instance(key, provider, self.injector)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 811, in get_instance
return provider.get(injector)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 264, in get
return injector.create_object(self.cls)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init
.py", line 998, in create_object
self.call_with_injection(init, self
=instance, kwargs=additional_kwargs)
File "F:\private-gpt-0.5.0\private-gpt-main.venv\Lib\site-packages\injector_init_.py", line 1040, in call_with_injection
return callable(*full_args, **dependencies)
File "F:\private-gpt-0.5.0\private-gpt-main\private_gpt\components\llm\llm_component.py", line 37, in init
logger.warning(
Message: 'Failed to download tokenizer %s. Falling back to default tokenizer.'
Arguments: ('mistralai/Mistral-7B-Instruct-v0.2', Exception('data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3'))
10:24:28.748 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
10:24:29.225 [INFO ] private_gpt.components.embedding.embedding_component - Initializing the embedding model in mode=ollama
10:24:29.227 [INFO ] llama_index.core.indices.loading - Loading all indices.
10:24:29.513 [INFO ] private_gpt.ui.ui - Mounting the gradio UI, at path=/
10:24:29.631 [INFO ] uvicorn.error - Started server process [20140]
10:24:29.631 [INFO ] uvicorn.error - Waiting for application startup.
10:24:29.632 [INFO ] uvicorn.error - Application startup complete.
10:24:29.634 [INFO ] uvicorn.error - Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)

4.38.2 indeed does not work; transformers==4.41.2 and the most recent version, 4.42.3, work fine. Consider updating your transformers, there were a few changes related to the tokenizers in general 👍

Be sure to properly uninstall and clean out the previous version first, and it should work fine!

Hi, sorry for bringing this back up, but how do I properly uninstall the previous version?

I'm using the latest version of transformers but getting the same error!

Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

from mistral_inference.model import Transformer
from mistral_inference.generate import generate

model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)

result = tokenizer.decode(out_tokens[0])

print(result)

I want to run this code, so I tried to install the mistral_inference library, but it doesn't work.
This is the error message: ImportError: cannot import name 'Transformer' from 'mistral_inference.model'

Does anyone have a solution for this? The suggestions above to downgrade or upgrade packages did not help; I keep getting the same error.

Same here, still getting errors :(

Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

Same here, updating the transformers didn't help.

Mistral AI_ org

from mistral_inference.model import Transformer
from mistral_inference.generate import generate

model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)

result = tokenizer.decode(out_tokens[0])

print(result)

I want to run this code, so I tried to install the mistral_inference library, but it doesn't work.
This is the error message: ImportError: cannot import name 'Transformer' from 'mistral_inference.model'

Hi there, there was an update since the release of Codestral Mamba so that the library also works with the Mamba architecture, and the import is now from mistral_inference.transformer import Transformer. I've just updated the README!
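
So the snippet above becomes something like this (a sketch: the tokenizer setup and file name are filled in the way the current README does it, so adjust them to whatever your download actually contains):

from mistral_inference.transformer import Transformer  # new import path
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

mistral_models_path = "path/to/mistral_models"  # wherever the weights were downloaded

# build the tokenizer from the tokenizer file shipped with the download
# (e.g. tokenizer.model or tokenizer.model.v3 depending on the model version)
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

request = ChatCompletionRequest(messages=[UserMessage(content="Hello, how are you?")])
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0,
                         eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
print(result)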
