Datasets Server automatically converts and publishes datasets on the Hub as Parquet files. Parquet files are column-based and they shine when you're working with big data. There are several ways you can work with Parquet files, and this guide will show you how to query them with Polars, Pandas, and DuckDB.
Polars is a fast DataFrame library written in Rust with Arrow as its foundation.
💡 Learn more about how to get the dataset URLs in the List Parquet files guide.
Let's start by grabbing the URLs to the train split of the blog_authorship_corpus dataset from Datasets Server:
import requests

r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=blog_authorship_corpus")
j = r.json()
urls = [f['url'] for f in j['parquet_files'] if f['split'] == 'train']
urls
['https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00000-of-00002.parquet',
'https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00001-of-00002.parquet']
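The response also records which configuration and split each file belongs to. If a dataset has more than one configuration, you can filter on that field as well; a minimal sketch, using the hypothetical configuration name "default":

urls = [
    f["url"]
    for f in j["parquet_files"]
    if f["split"] == "train" and f["config"] == "default"  # "default" is a hypothetical config name
]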
To read from a single Parquet file, use the read_parquet function to read it into a DataFrame and then execute your query:
import polars as pl
df = (
pl.read_parquet("https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00000-of-00002.parquet")
.groupby("horoscope")
.agg(
[
pl.count(),
pl.col("text").str.n_chars().mean().alias("avg_blog_length")
]
)
.sort("avg_blog_length", descending=True)
.limit(5)
)
print(df)
shape: (5, 3)
┌───────────┬───────┬─────────────────┐
│ horoscope ┆ count ┆ avg_blog_length │
│ ---       ┆ ---   ┆ ---             │
│ str       ┆ u32   ┆ f64             │
╞═══════════╪═══════╪═════════════════╡
│ Aquarius  ┆ 34062 ┆ 1129.218836     │
│ Cancer    ┆ 41509 ┆ 1098.366812     │
│ Capricorn ┆ 33961 ┆ 1073.2002       │
│ Libra     ┆ 40302 ┆ 1072.071833     │
│ Leo       ┆ 40587 ┆ 1064.053687     │
└───────────┴───────┴─────────────────┘
To read multiple Parquet files - for example, if the dataset is sharded - you'll need to use the concat function to concatenate the files into a single DataFrame:
import polars as pl
df = (
pl.concat([pl.read_parquet(url) for url in urls])
.groupby("horoscope")
.agg(
[
pl.count(),
pl.col("text").str.n_chars().mean().alias("avg_blog_length")
]
)
.sort("avg_blog_length", descending=True)
.limit(5)
)
print(df)
shape: (5, 3)
┌─────────────┬───────┬─────────────────┐
│ horoscope   ┆ count ┆ avg_blog_length │
│ ---         ┆ ---   ┆ ---             │
│ str         ┆ u32   ┆ f64             │
╞═════════════╪═══════╪═════════════════╡
│ Aquarius    ┆ 49568 ┆ 1125.830677     │
│ Cancer      ┆ 63512 ┆ 1097.956087     │
│ Libra       ┆ 60304 ┆ 1060.611054     │
│ Capricorn   ┆ 49402 ┆ 1059.555261     │
│ Sagittarius ┆ 50431 ┆ 1057.458984     │
└─────────────┴───────┴─────────────────┘
Polars offers a lazy API that is more performant and memory-efficient for large Parquet files. The LazyFrame API keeps track of what you want to do, and it'll only execute the entire query when you're ready. This way, the lazy API doesn't load everything into RAM beforehand, and it allows you to work with datasets larger than your available RAM.
To lazily read a Parquet file, use the scan_parquet function instead. Then, execute the entire query with the collect function:
import polars as pl
q = (
pl.scan_parquet("https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00000-of-00002.parquet")
.groupby("horoscope")
.agg(
[
pl.count(),
pl.col("text").str.n_chars().mean().alias("avg_blog_length")
]
)
.sort("avg_blog_length", descending=True)
.limit(5)
)
df = q.collect()
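The lazy API also lets Polars push column selections and row filters down into the Parquet scan, so it can avoid reading data the query doesn't need. A minimal sketch, assuming the same shard URL and the horoscope and text columns shown above:

import polars as pl

url = "https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00000-of-00002.parquet"

q = (
    pl.scan_parquet(url)
    .select(["horoscope", "text"])           # projection: only these columns are read
    .filter(pl.col("horoscope") == "Leo")    # predicate: only keep matching rows
    .with_columns(pl.col("text").str.n_chars().alias("blog_length"))
)

# Nothing is read until collect() executes the optimized plan
df = q.collect()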
You can also use the popular Pandas DataFrame library to read Parquet files.
To read from a single Parquet file, use the read_parquet function to read it into a DataFrame:
import pandas as pd
df = (
pd.read_parquet("https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00000-of-00002.parquet")
.groupby('horoscope')['text']
.apply(lambda x: x.str.len().mean())
.sort_values(ascending=False)
.head(5)
)
To read multiple Parquet files - for example, if the dataset is sharded - you'll need to use the concat function to concatenate the files into a single DataFrame:
df = (
pd.concat([pd.read_parquet(url) for url in urls])
.groupby('horoscope')['text']
.apply(lambda x: x.str.len().mean())
.sort_values(ascending=False)
.head(5)
)
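Because Parquet is columnar, you can also avoid loading columns you don't need. As a minimal sketch (assuming the same urls list and column names as above), pandas' read_parquet accepts a columns argument that restricts which columns are loaded into the DataFrame, which can reduce memory usage:

import pandas as pd

# Only load the two columns the query actually uses from each shard
df = pd.concat(
    [pd.read_parquet(url, columns=["horoscope", "text"]) for url in urls]
)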
DuckDB is an in-process analytical database that can query Parquet files very efficiently. Begin by creating a connection to DuckDB, and then install and load the httpfs extension to read and write remote files:
import duckdb
url = "https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00000-of-00002.parquet"
con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")
Now you can write and execute your SQL query on the Parquet file:
con.sql(f"SELECT horoscope, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM '{url}' GROUP BY horoscope ORDER BY avg_blog_length DESC LIMIT(5)")
┌───────────┬──────────────┬────────────────────┐
│ horoscope │ count_star() │  avg_blog_length   │
│  varchar  │    int64     │       double       │
├───────────┼──────────────┼────────────────────┤
│ Aquarius  │        34062 │  1129.218836239798 │
│ Cancer    │        41509 │  1098.366812016671 │
│ Capricorn │        33961 │ 1073.2002002296751 │
│ Libra     │        40302 │ 1072.0718326633914 │
│ Leo       │        40587 │ 1064.0536871412028 │
└───────────┴──────────────┴────────────────────┘
To query multiple files - for example, if the dataset is sharded:
con.sql(f"SELECT horoscope, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM read_parquet({urls[:2]}) GROUP BY horoscope ORDER BY avg_blog_length DESC LIMIT(5)")
┌─────────────┬──────────────┬────────────────────┐
│  horoscope  │ count_star() │  avg_blog_length   │
│   varchar   │    int64     │       double       │
├─────────────┼──────────────┼────────────────────┤
│ Aquarius    │        49568 │ 1125.8306770497095 │
│ Cancer      │        63512 │   1097.95608703867 │
│ Libra       │        60304 │ 1060.6110539931017 │
│ Capricorn   │        49402 │ 1059.5552609206104 │
│ Sagittarius │        50431 │ 1057.4589835616982 │
└─────────────┴──────────────┴────────────────────┘
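If you want to keep processing the result in Python, the relation returned by con.sql() can be materialized, for example as a pandas DataFrame. A minimal sketch, assuming the same connection and urls list as above:

# Run the sharded query and materialize the result as a pandas DataFrame
df = con.sql(
    f"SELECT horoscope, count(*) AS count, AVG(LENGTH(text)) AS avg_blog_length "
    f"FROM read_parquet({urls[:2]}) GROUP BY horoscope ORDER BY avg_blog_length DESC LIMIT 5"
).df()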
DuckDB-Wasm, a package powered by WebAssembly, is also available for running DuckDB in a browser. This could be useful, for instance, if you want to create a web app to query Parquet files from the browser!