Dataset Viewer issue

#1
by nanaxnana - opened

The dataset viewer is not working.

Error details:

Exception:    SplitsNotFoundError
Message:      The split names could not be parsed from the dataset config.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 160, in compute
                  compute_split_names_from_info_response(
                File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 132, in compute_split_names_from_info_response
                  config_info_response = get_previous_step_or_raise(kind="config-info", dataset=dataset, config=config)
                File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 539, in get_previous_step_or_raise
                  raise CachedArtifactError(
              libcommon.simple_cache.CachedArtifactError: The previous step failed.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 499, in get_dataset_config_info
                  for split_generator in builder._split_generators(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 72, in _split_generators
                  first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 33, in _get_pipeline_from_tar
                  current_example[field_name.lower()] = f.read()
                File "/usr/local/lib/python3.9/tarfile.py", line 690, in read
                  raise ReadError("unexpected end of data")
              tarfile.ReadError: unexpected end of data
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 76, in compute_split_names_from_streaming_response
                  for split in get_dataset_split_names(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 572, in get_dataset_split_names
                  info = get_dataset_config_info(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 504, in get_dataset_config_info
                  raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err
              datasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.
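
The tarfile.ReadError("unexpected end of data") at the bottom of the trace usually means one of the TAR archives in the repo is truncated or was only partially uploaded. A minimal sketch for checking the archives in a local checkout; the "**/*.tar" glob is an assumption, so adjust it to the actual repo layout:

```python
import glob
import tarfile

# Try to read every member of every TAR archive to the end; a truncated
# archive raises tarfile.ReadError partway through, mirroring the Viewer error.
for path in sorted(glob.glob("**/*.tar", recursive=True)):
    try:
        with tarfile.open(path) as tar:
            for member in tar:
                if member.isfile():
                    tar.extractfile(member).read()
        print(f"OK      {path}")
    except tarfile.ReadError as err:
        print(f"BROKEN  {path}: {err}")
```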

cc @albertvillanova @lhoestq @severo.

As far as I'm aware, the dataset viewer doesn't support data of the complexity we have in this repo, but I'm happy to make any config changes that would support it.

Indeed, I don't think it's possible to have both audio and images in the same dataset, even in two different configs. Can you confirm, @lhoestq @albertvillanova? Anyway, you might be interested in configuring the README.md to show at least part of the data: https://huggingface.co/docs/hub/datasets-data-files-configuration
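
For reference, a minimal sketch of what that configuration can look like in the README.md YAML header. The config names and glob patterns below are invented, so adapt them to the actual file layout (full syntax in the linked docs):

```yaml
---
configs:
  - config_name: audio          # invented name: one config per modality
    data_files: "audio/*.tar"   # invented pattern: point at the audio shards
  - config_name: images
    data_files: "images/*.tar"
---
```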

Having both images and audio is supported ;)

But the dataset needs to be in a format the Viewer can parse, e.g. WebDataset for TAR archives.
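
To illustrate what the Viewer expects from a WebDataset TAR: files that share a basename are grouped into one example, and each file extension becomes a field. A minimal sketch of building such a shard; the file names, keys, and payloads are invented for illustration:

```python
import io
import json
import tarfile

def add_bytes(tar, name, data):
    # Write an in-memory payload into the archive under the given name.
    info = tarfile.TarInfo(name=name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Files sharing the basename "0001" become a single example whose fields
# are named after the extensions (here: "flac" and "json").
with tarfile.open("shard-000000.tar", "w") as tar:
    for key in ("0001", "0002"):
        add_bytes(tar, f"{key}.flac", b"<audio bytes>")  # placeholder payload
        add_bytes(tar, f"{key}.json", json.dumps({"video_id": "vid_A"}).encode())
```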

I see - that's great! That being said, this dataset consists of audio clips paired with a video per clip, along with images associated at the video (not clip) level. While I think the dataset could be adapted to fit the viewer (maybe by splitting the character-images portion off into a separate dataset), it's rather challenging to flatten the tree structure of the data into a nice row-wise dataset for WebDataset. I'll think about this, but it seems like a fairly large up-front time investment right now.
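
To make the flattening concrete, a rough sketch of the kind of denormalization being weighed: one row per clip, with the video-level assets repeated (or referenced by id) on each row. All field names and paths are invented for illustration:

```python
# Invented tree-structured input: videos contain clips plus video-level images.
videos = {
    "vid_A": {
        "character_images": ["char_1.png", "char_2.png"],
        "clips": [
            {"id": "vid_A_000", "audio": "vid_A_000.flac", "video": "vid_A_000.mp4"},
            {"id": "vid_A_001", "audio": "vid_A_001.flac", "video": "vid_A_001.mp4"},
        ],
    },
}

# Flatten to one row per clip, repeating the video-level assets on each row.
rows = [
    {
        "__key__": clip["id"],  # WebDataset example key
        "audio": clip["audio"],
        "video": clip["video"],
        "video_id": video_id,
        "character_images": video["character_images"],
    }
    for video_id, video in videos.items()
    for clip in video["clips"]
]
```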
