
Update dependency datasets to v4 #4460

Open
renovate-bot wants to merge 1 commit into GoogleCloudPlatform:main from renovate-bot:renovate/datasets-4.x

Conversation

Contributor

@renovate-bot renovate-bot commented Mar 5, 2026

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

| Package | Change |
| --- | --- |
| datasets | `==2.18.0` -> `==4.8.4` |

Release Notes

huggingface/datasets (datasets)

v4.8.4

Compare Source

What's Changed

Full Changelog: huggingface/datasets@4.8.3...4.8.4

v4.8.3

Compare Source

What's Changed

Full Changelog: huggingface/datasets@4.8.2...4.8.3

v4.8.2

Compare Source

What's Changed

Full Changelog: huggingface/datasets@4.8.1...4.8.2

v4.8.1

Compare Source

What's Changed

Full Changelog: huggingface/datasets@4.8.0...4.8.1

v4.8.0

Compare Source

Dataset Features

  • Read (and write) from HF Storage Buckets: load raw data, process and save to Dataset Repos by @​lhoestq in #​8064

    from datasets import load_dataset
    # load raw data from a Storage Bucket on HF
    ds = load_dataset("buckets/username/data-bucket", data_files=["*.jsonl"])
    # or manually, using hf:// paths
    ds = load_dataset("json", data_files=["hf://buckets/username/data-bucket/*.jsonl"])
    # process, filter
    ds = ds.map(...).filter(...)
    # publish the AI-ready dataset
    ds.push_to_hub("username/my-dataset-ready-for-training")

    This also fixes a segfault in multiprocessed push_to_hub on macOS (it now uses spawn instead of fork),
    and bumps the dill and multiprocess versions to support Python 3.14.

  • Datasets streaming and iterable package improvements and fixes by @​Michael-RDev in #​8068

    • added max_shard_size to IterableDataset.push_to_hub (but this requires iterating over the dataset twice to know its full size; improvements are welcome)
    • more arrow-native iterable operations for IterableDataset
    • better support for glob patterns in archives, e.g. zip://*.jsonl::hf://datasets/username/dataset-name/data.zip (see the sketch below)
    • fixes for to_pandas, videofolder, and load_dataset_builder kwargs
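
    A minimal sketch of the archive glob pattern and the new max_shard_size argument; the repo names (username/dataset-name, username/my-dataset) are placeholders:

    from datasets import load_dataset

    # Stream JSON Lines files matched inside a zip archive on the Hub;
    # the zip://...::hf://... chain tells fsspec to open the archive remotely.
    ds = load_dataset(
        "json",
        data_files=["zip://*.jsonl::hf://datasets/username/dataset-name/data.zip"],
        split="train",
        streaming=True,
    )

    # Push the resulting IterableDataset in bounded shards; per the note above,
    # this iterates over the dataset twice to size the shards.
    ds.push_to_hub("username/my-dataset", max_shard_size="500MB")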

What's Changed

New Contributors

Full Changelog: huggingface/datasets@4.7.0...4.8.0

v4.7.0

Compare Source

Dataset Features
  • Add Json() type by @​lhoestq in #​8027
    • JSON Lines files that contain arbitrary JSON objects, such as tool calling datasets, are now supported. When a field or subfield contains mixed types (e.g. a mix of str/int/float/dict/list, or dictionaries with arbitrary keys), the Json() type is used to store data that would normally not be supported in Arrow/Parquet
    • Use the Json() type in Features() for any dataset; it is supported by any function that accepts features=, such as load_dataset(), .map(), .cast(), .from_dict(), .from_list()
    • Use on_mixed_types="use_json" to automatically set the Json() type on mixed types in .from_dict(), .from_list() and .map()

Examples:

You can use on_mixed_types="use_json" or specify features= with a Json() type:

>>> ds = Dataset.from_dict({"a": [0, "foo", {"subfield": "bar"}]})
Traceback (most recent call last):
  ...
  File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert 'foo' with type str: tried to convert to int64

>>> features = Features({"a": Json()})
>>> ds = Dataset.from_dict({"a": [0, "foo", {"subfield": "bar"}]}, features=features)
>>> ds.features
{'a': Json()}
>>> list(ds["a"])
[0, "foo", {"subfield": "bar"}]

This is also useful for lists of dictionaries with arbitrary keys and values, to avoid filling missing fields with None:

>>> ds = Dataset.from_dict({"a": [[{"b": 0}, {"c": 0}]]})
>>> ds.features
{'a': List({'b': Value('int64'), 'c': Value('int64')})}
>>> list(ds["a"])
[[{'b': 0, 'c': None}, {'b': None, 'c': 0}]]  # missing fields are filled with None

>>> features = Features({"a": List(Json())})
>>> ds = Dataset.from_dict({"a": [[{"b": 0}, {"c": 0}]]}, features=features)
>>> ds.features
{'a': List(Json())}
>>> list(ds["a"])
[[{'b': 0}, {'c': 0}]]  # OK

Another example with tool calling data and the on_mixed_types="use_json" argument (useful to avoid specifying features= manually):

>>> messages = [
...     {"role": "user", "content": "Turn on the living room lights and play my electronic music playlist."},
...     {"role": "assistant", "tool_calls": [
...         {"type": "function", "function": {
...             "name": "control_light",
...             "arguments": {"room": "living room", "state": "on"}
...         }},
...         {"type": "function", "function": {
...             "name": "play_music",
...             "arguments": {"playlist": "electronic"}  # mixed-type here since keys ["playlist"] and ["room", "state"] are different
...         }}]
...     },
...     {"role": "tool", "name": "control_light", "content": "The lights in the living room are now on."},
...     {"role": "tool", "name": "play_music", "content": "The music is now playing."},
...     {"role": "assistant", "content": "Done!"}
... ]
>>> ds = Dataset.from_dict({"messages": [messages]}, on_mixed_types="use_json")
>>> ds.features
{'messages': List({'role': Value('string'), 'content': Value('string'), 'tool_calls': List(Json()), 'name': Value('string')})}
>>> ds[0]["messages"][1]["tool_calls"][0]["function"]["arguments"]
{"room": "living room", "state": "on"}
What's Changed
New Contributors

Full Changelog: huggingface/datasets@4.6.1...4.7.0

v4.6.1

Compare Source

Bug fix

Full Changelog: huggingface/datasets@4.6.0...4.6.1

v4.6.0

Compare Source

Dataset Features
  • Support Image, Video and Audio types in Lance datasets

    >>> from datasets import load_dataset
    >>> ds = load_dataset("lance-format/Openvid-1M", streaming=True, split="train")
    >>> ds.features
    {'video_blob': Video(),
     'video_path': Value('string'),
     'caption': Value('string'),
     'aesthetic_score': Value('float64'),
     'motion_score': Value('float64'),
     'temporal_consistency_score': Value('float64'),
     'camera_motion': Value('string'),
     'frame': Value('int64'),
     'fps': Value('float64'),
     'seconds': Value('float64'),
     'embedding': List(Value('float32'), length=1024)}
  • Push to hub now supports Video types

    >>> from datasets import Dataset, Video
    >>> ds = Dataset.from_dict({"video": ["path/to/video.mp4"]})
    >>> ds = ds.cast_column("video", Video())
    >>> ds.push_to_hub("username/my-video-dataset")
  • Write image/audio/video blobs as-is in Parquet (PLAIN encoding) in push_to_hub() by @​lhoestq in #​7976

    • this enables cross-format Xet deduplication for image/audio/video, e.g. deduplicating videos across Lance, WebDataset, Parquet files and plain video files, which makes downloads from and uploads to Hugging Face faster
    • e.g. if you convert a Lance video dataset to a Parquet video dataset on Hugging Face, the upload is much faster since the videos don't need to be reuploaded: under the hood, the Xet storage reuses the binary chunks of the Lance-format videos for the Parquet-format videos (see the sketch below)
    • See more info here: https://huggingface.co/docs/hub/en/xet/deduplication
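
    A sketch of the Lance-to-Parquet conversion described above; the target repo name is hypothetical:

    from datasets import load_dataset

    # Stream the Lance-format video dataset and re-publish it as Parquet.
    # Xet deduplication reuses the video binary chunks, so the blobs
    # themselves are not re-uploaded.
    ds = load_dataset("lance-format/Openvid-1M", split="train", streaming=True)
    ds.push_to_hub("username/openvid-parquet")  # hypothetical target repo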


  • Add IterableDataset.reshard() by @​lhoestq in #​7992

    Reshard the dataset if possible, i.e. split the current shards further into more shards.
    This increases the number of shards and the resulting dataset has num_shards >= previous_num_shards.
    Equality may happen if no shard can be split further.

    The resharding mechanism depends on the dataset file format:

    • Parquet: shard per row group instead of per file
    • Other: not implemented yet (contributions are welcome!)
    >>> from datasets import load_dataset
    >>> ds = load_dataset("fancyzhx/amazon_polarity", split="train", streaming=True)
    >>> ds
    IterableDataset({
        features: ['label', 'title', 'content'],
        num_shards: 4
    })
    >>> ds.reshard()
    IterableDataset({
        features: ['label', 'title', 'content'],
        num_shards: 3600
    })
What's Changed
New Contributors

Full Changelog: huggingface/datasets@4.5.0...4.6.0

v4.5.0

Compare Source

Dataset Features

  • Add lance format support by @​eddyxu in #​7913

    • Support for both Lance dataset (including metadata / manifests) and standalone .lance files
    • e.g. with lance-format/fineweb-edu
    from datasets import load_dataset
    
    ds = load_dataset("lance-format/fineweb-edu", streaming=True)
    for example in ds["train"]:
        ...

What's Changed

New Contributors

Full Changelog: huggingface/datasets@4.4.2...4.5.0

v4.4.2

Compare Source

Bug fixes

Minor additions

New Contributors

Full Changelog: huggingface/datasets@4.4.1...4.4.2

v4.4.1

Compare Source

Bug fixes and improvements

Full Changelog: huggingface/datasets@4.4.0...4.4.1

v4.4.0

Compare Source

Dataset Features

  • Add nifti support by @​CloseChoice in #​7815

    • Load medical imaging datasets from Hugging Face:
    ds = load_dataset("username/my_nifti_dataset")
    ds["train"][0]  # {"nifti": <nibabel.nifti1.Nifti1Image>}
    • Load medical imaging datasets from your disk:
    files = ["/path/to/scan_001.nii.gz", "/path/to/scan_002.nii.gz"]
    ds = Dataset.from_dict({"nifti": files}).cast_column("nifti", Nifti())
    ds["train"][0]  # {"nifti": <nibabel.nifti1.Nifti1Image>}
  • Add num channels to audio by @​CloseChoice in #​7840

# samples have shape (num_channels, num_samples)
ds = ds.cast_column("audio", Audio())  # default, use all channels
ds = ds.cast_column("audio", Audio(num_channels=2))  # use stereo
ds = ds.cast_column("audio", Audio(num_channels=1))  # use mono

What's Changed

New Contributors

Full Changelog: huggingface/datasets@4.3.0...4.4.0

v4.3.0

Compare Source

Dataset Features

Enable large-scale distributed dataset streaming (see the sketch below):

These improvements require huggingface_hub>=1.1.0 to take full effect
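
A minimal sketch of distributed streaming with split_dataset_by_node (an existing datasets utility); the rank and world_size values are placeholders that would normally come from your launcher (e.g. torchrun environment variables):

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Each node streams only its own subset of shards; per the note above,
# huggingface_hub>=1.1.0 is required for the full effect.
ds = load_dataset("fancyzhx/amazon_polarity", split="train", streaming=True)
ds_rank = split_dataset_by_node(ds, rank=0, world_size=8)
for example in ds_rank:
    ...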

What's Changed

New Contributors

Full Changelog: huggingface/datasets@4.2.0...4.3.0

v4.2.0

Compare Source

Dataset Features

  • Sample without replacement option when interleaving datasets by @​radulescupetru in #​7786

    ds = interleave_datasets(datasets, stopping_strategy="all_exhausted_without_replacement")
  • Parquet: add on_bad_files argument to error/warn/skip bad files by @​lhoestq in #​7806

    ds = load_dataset(parquet_dataset_id, on_bad_files="warn")
  • Add parquet scan options and docs by @​lhoestq in #​7801

    • docs to select columns and filter data efficiently
    ds = load_dataset(parquet_dataset_id, columns=["col_0", "col_1"])
    ds = load_dataset(parquet_dataset_id, filters=[("col_0", "==", 0)])
    • new argument to control buffering and caching when streaming
    fragment_scan_options = pyarrow.dataset.ParquetFragmentScanOptions(cache_options=pyarrow.CacheOptions(prefetch_limit=1, range_size_limit=128 << 20))
    ds = load_dataset(parquet_dataset_id, streaming=True, fragment_scan_options=fragment_scan_options)

What's Changed

New Contributors

Full Changelog: huggingface/datasets@4.1.1...4.2.0

v4.1.1

Compare Source

What's Changed

New Contributors

Full Changelog: huggingface/datasets@4.1.0...4.1.1

v4.1.0

Compare Source

Dataset Features

  • feat: use content defined chunking by @​kszucs in #​7589

    • internally uses use_content_defined_chunking=True when writing Parquet files
    • this enables fast deduped uploads to Hugging Face!
    # Now faster thanks to content defined chunking
    ds.push_to_hub("username/dataset_name")
    • this optimizes Parquet for Xet, the dedupe-based storage backend of Hugging Face. It avoids re-uploading data that already exists somewhere on HF (in another file or version, for example). Parquet content-defined chunking sets Parquet page boundaries based on the content of the data, so duplicate data is easy to detect.
    • with this change, the new default row group size for Parquet is 100MB
  • Concurrent push_to_hub by @​lhoestq in #​7708

  • Concurrent IterableDataset push_to_hub by @​lhoestq in #​7710

  • HDF5 support by @​klamike in #​7690

    • load HDF5 datasets in one line of code
    ds = load_dataset("username/dataset-with-hdf5-files")
    • each (possibly nested) field in the HDF5 file is parsed as a column, with the first dimension used for rows (see the sketch below)
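
    A sketch of writing and loading a local HDF5 file; the builder name "hdf5" and the file/field names are assumptions, not confirmed by the notes:

    import h5py
    import numpy as np
    from datasets import load_dataset

    # Write a small HDF5 file; each dataset inside it becomes a column,
    # with the first dimension used for rows.
    with h5py.File("data.h5", "w") as f:
        f.create_dataset("x", data=np.arange(10))
        f.create_dataset("y", data=np.random.rand(10, 3))

    ds = load_dataset("hdf5", data_files="data.h5", split="train")  # assumed builder name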

Other improvements and bug fixes

New Contributors

Full Changelog: huggingface/datasets@4.0.0...4.1.0

v4.0.0

Compare Source

New Features

  • Add IterableDataset.push_to_hub() by @​lhoestq in #​7595

    # Build streaming data pipelines in a few lines of code!
    from datasets import load_dataset
    
    ds = load_dataset(..., streaming=True)
    ds = ds.map(...).filter(...)
    ds.push_to_hub(...)
  • Add num_proc= to .push_to_hub() (Dataset and IterableDataset) by @​lhoestq in #​7606

    # Faster push to Hub ! Available for both Dataset and IterableDataset
    ds.push_to_hub(..., num_proc=8)
  • New Column object

    # Syntax:
    ds["column_name"]  # datasets.Column([...]) or datasets.IterableColumn(...)
    
    # Iterate on a column:
    for text in ds["text"]:
        ...
    
    # Load one cell without bringing the full column in memory
    first_text = ds["text"][0]  # equivalent to ds[0]["text"]
  • Torchcodec decoding by @​TyTodd in #​7616

    • Enables streaming only the ranges you need!
    # Don't download full audios/videos when it's not necessary
    # Now with torchcodec it only streams the required ranges/frames:
    from datasets import load_dataset
    
    ds = load_dataset(..., streaming=True)
    for example in ds:
        video = example["video"]
        frames = video.get_frames_in_range(start=0, stop=6, step=1)  # only stream certain frames
    • Requires torch>=2.7.0 and FFmpeg >= 4
    • Not available for Windows yet but it is coming soon - in the meantime please use datasets<4.0
    • Load audio data with AudioDecoder:
    audio = dataset[0]["audio"]  # <datasets.features._torchcodec.AudioDecoder object at 0x11642b6a0>
    samples = audio.get_all_samples()  # or use get_samples_played_in_range(...)
    samples.data  # tensor([[ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  2.3447e-06, -1.9127e-04, -5.3330e-05]])
    samples.sample_rate  # 16000
    
    # old syntax is still supported
    array, sr = audio["array"], audio["sampling_rate"]
    • Load video data with VideoDecoder:
    video = dataset[0]["video"]  # <torchcodec.decoders._video_decoder.VideoDecoder object at 0x14a61d5a0>
    first_frame = video.get_frame_at(0)
    first_frame.data.shape  # (3, 240, 320)
    first_frame.pts_seconds  # 0.0
    frames = video.get_frames_in_range(0, 6, 1)
    frames.data.shape  # torch.Size([5, 3, 240, 320])

Breaking changes

  • Remove scripts altogether by @​lhoestq in #​7592

    • trust_remote_code is no longer supported
  • Torchcodec decoding by @​TyTodd in #​7616

    • torchcodec replaces soundfile for audio decoding
    • torchcodec replaces decord for video decoding
  • Replace Sequence by List by @​lhoestq in #​7634

    • Introduction of the List type
    from datasets import Features, List, Value
    
    features = Features({
        "texts": List(Value("string")),
        "four_paragraphs": List(Value("string"), length=4)
    })
    • Sequence was a legacy type from TensorFlow Datasets that converted lists of dicts to dicts of lists. It is no longer a type; it is now a utility that returns a List or a dict depending on the subfeature
    from datasets import Sequence
    
    Sequence(Value("string"))  # List(Value("string"))
    Sequence({"texts": Value("string")})  # {"texts": List(Value("string"))}

Other improvements and bug fixes

New Contributors


Configuration

📅 Schedule: (UTC)

  • Branch creation
    • At any time (no schedule defined)
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever the PR becomes conflicted, or when you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate-bot renovate-bot requested a review from a team as a code owner March 5, 2026 19:24
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the datasets Python library to its latest stable version, bringing a wealth of new capabilities and performance enhancements. The upgrade focuses on improving data handling, especially for large-scale and multimedia datasets, by introducing more efficient streaming, parallel processing, and support for modern data formats. This ensures the project can leverage the most recent advancements in data loading and manipulation provided by the huggingface/datasets ecosystem.

Highlights

  • Dependency Update: The datasets library has been updated from version 2.18.0 to 4.6.1, incorporating numerous new features, improvements, and bug fixes across multiple minor and major releases.
  • Enhanced Data Streaming and Uploads: Significant improvements have been made to data streaming and push_to_hub functionality, including IterableDataset.push_to_hub(), concurrent uploads with num_proc, and content-defined chunking for faster, deduped Parquet uploads.
  • Advanced Media Handling: The library now features torchcodec for efficient streaming and decoding of audio and video data, allowing access to specific ranges/frames without downloading entire files. Nifti support for medical imaging datasets has also been added.
  • New Data Structures and Formats: A new Column object enables lazy column access and iteration. Support for Lance and HDF5 dataset formats has been introduced, alongside IterableDataset.reshard() for dynamic shard splitting.
  • Breaking Changes: Key breaking changes include the removal of trust_remote_code (scripts altogether) and the replacement of the Sequence type with the more explicit List type.
Changelog
  • skills/vertex-tuning/references/requirements.txt
    • Updated datasets dependency from 2.18.0 to 4.6.1.
Activity
  • The pull request was automatically generated by renovate-bot to update the datasets dependency.
  • Automatic merging is disabled, requiring manual review and merge.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request updates the datasets dependency to version 4.6.1. This is a significant major version bump from 2.18.0, and the release notes for datasets v4.0.0 indicate several breaking changes. It is crucial to verify that the existing code, particularly in skills/vertex-tuning/scripts/prepare_dataset.py, remains fully compatible with the new library version to prevent runtime errors and ensure continued functionality.

Note: Security Review has been skipped due to the limited scope of the PR.

 numpy==2.4.2
 pandas==3.0.1
-datasets==2.18.0
+datasets==4.6.1

Severity: high

The datasets library has been updated to a new major version (v4.x). The release notes for datasets v4.0.0 mention several breaking changes, including the removal of scripts, changes in audio/video decoding, and Sequence being replaced by List. Please verify that the existing code in skills/vertex-tuning/scripts/prepare_dataset.py is fully compatible with these changes. Specifically, ensure that functionalities like load_dataset, map, filter, train_test_split, and to_json work as expected with datasets==4.6.1.
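
For example, a quick smoke test of those calls under the new version (the data here is made up; the real verification should exercise prepare_dataset.py itself):

from datasets import Dataset

# Minimal 4.x compatibility check covering the calls named above.
ds = Dataset.from_dict({"text": ["a", "bb", "ccc", "dddd"], "label": [0, 1, 0, 1]})
ds = ds.map(lambda ex: {"text_len": len(ex["text"])})
ds = ds.filter(lambda ex: ex["label"] == 0)
splits = ds.train_test_split(test_size=0.5, seed=0)
splits["train"].to_json("train.jsonl")  # to_json writes JSON Lines by default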

@renovate-bot renovate-bot force-pushed the renovate/datasets-4.x branch 2 times, most recently from 265f805 to 0bead61 on March 11, 2026 at 19:47
@renovate-bot renovate-bot force-pushed the renovate/datasets-4.x branch 4 times, most recently from ebf0a09 to 936fcb7 on March 23, 2026 at 15:31
@renovate-bot renovate-bot force-pushed the renovate/datasets-4.x branch from 936fcb7 to 717c324 on March 26, 2026 at 16:44
@renovate-bot renovate-bot changed the title from "chore(deps): update dependency datasets to v4" to "Update dependency datasets to v4" on Apr 14, 2026