ZeroDivisionError and ModelProto error #20

@anjugopinath

Description

I ran `sh scripts_asmv2/eval/psg_eval.sh OpenGVLab/ASMv2` and got two errors: `RuntimeError: Internal: could not parse ModelProto from OpenGVLab/ASMv2/tokenizer.model` and `ZeroDivisionError: division by zero`.
I am not running inside Docker.

```
(/s/red/a/nobackup/vision/anju/allseeing/cvenv) carnap:/s/red/a/nobackup/vision/anju/allseeing/all-seeing-main/all-seeing-v2$ sh scripts_asmv2/eval/psg_eval.sh OpenGVLab/ASMv2
Traceback (most recent call last):
  File "/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/s/red/a/nobackup/vision/anju/allseeing/all-seeing-main/all-seeing-v2/llava/eval/model_vqa_loader.py", line 143, in <module>
    eval_model(args)
  File "/s/red/a/nobackup/vision/anju/allseeing/all-seeing-main/all-seeing-v2/llava/eval/model_vqa_loader.py", line 79, in eval_model
    tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
  File "/s/red/a/nobackup/vision/anju/allseeing/all-seeing-main/all-seeing-v2/llava/model/builder.py", line 105, in load_pretrained_model
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
  File "/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 702, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1841, in from_pretrained
    return cls._from_pretrained(
  File "/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2004, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/site-packages/transformers/models/llama/tokenization_llama.py", line 144, in __init__
    self.sp_model.Load(vocab_file)
  File "/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/site-packages/sentencepiece/__init__.py", line 961, in Load
    return self.LoadFromFile(model_file)
  File "/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: could not parse ModelProto from OpenGVLab/ASMv2/tokenizer.model
/s/red/a/nobackup/vision/anju/allseeing/cvenv/lib/python3.8/site-packages/torch/__init__.py:747: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:431.)
  _C._set_default_tensor_type(t)
Traceback (most recent call last):
  File "llava/eval/eval_psg.py", line 252, in <module>
    eval_psg(
  File "llava/eval/eval_psg.py", line 226, in eval_psg
    print(f'Recall: {sum(recall) / len(recall) * 100:.2f}')
ZeroDivisionError: division by zero
```
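A hedged diagnostic sketch, not from the repo: when SentencePiece reports `could not parse ModelProto`, a common cause is that `tokenizer.model` in the local model directory is a Git LFS pointer stub (a short text file) rather than the real binary model, e.g. because the repo was cloned without `git lfs pull`. The `looks_like_lfs_pointer` helper below is hypothetical, written only to illustrate the check. The later `ZeroDivisionError` is plausibly a downstream symptom: if the tokenizer never loads, no predictions are written, so `recall` is empty when `eval_psg.py` computes `sum(recall) / len(recall)`.

```python
def looks_like_lfs_pointer(head: bytes) -> bool:
    """True if the first bytes match a Git LFS pointer stub.

    Real SentencePiece models are binary protobufs, typically hundreds
    of KB; an LFS stub is a ~130-byte text file starting with a fixed
    version line.
    """
    return head.startswith(b"version https://git-lfs.github.com/spec/v1")

# To check the suspect file, read its first bytes, e.g.:
#   with open("OpenGVLab/ASMv2/tokenizer.model", "rb") as f:
#       print(looks_like_lfs_pointer(f.read(64)))

# Illustration with sample contents (oid/size values are placeholders):
stub = b"version https://git-lfs.github.com/spec/v1\noid sha256:...\nsize 499723\n"
print(looks_like_lfs_pointer(stub))                    # True -> run `git lfs pull`
print(looks_like_lfs_pointer(b"\x0a\x09real model"))   # False -> file is binary data
```

If the check reports a stub, re-fetching the weights (`git lfs pull`, or downloading `tokenizer.model` directly from the Hugging Face repo page) and rerunning the script should clear both errors.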
