How to Split a Large CSV File into Smaller Files
3 methods — browser tool, Python, and command line
Large CSV files cause problems: they crash Excel, time out on imports, and exceed API file size limits. Splitting by row count is the most reliable fix — here's how to do it in three different ways.
Method 1: Using Tabular (browser, outputs a ZIP)
Upload once, get multiple files back in a ZIP — no code, no software.
1. Go to the CSV File Splitter tool on Tabular.
2. Upload your CSV file.
3. Set the number of rows per chunk (e.g. 1,000, 5,000, or 10,000).
4. Click Run — Tabular splits the file and packages all chunks into a ZIP.
5. Download the ZIP and extract it to get individual CSV files.
The header row is automatically included in every chunk, so each file is a valid standalone CSV.
Method 2: Using Python (pandas)
Best for very large files or automated workflows.
1. Install pandas: pip install pandas
2. Run the script below, adjusting chunk_size as needed.
```python
import math

import pandas as pd

input_file = "large_file.csv"
chunk_size = 10000  # rows per output file

df = pd.read_csv(input_file)
total_chunks = math.ceil(len(df) / chunk_size)

for i, chunk_start in enumerate(range(0, len(df), chunk_size)):
    chunk = df.iloc[chunk_start:chunk_start + chunk_size]
    chunk.to_csv(f"output_part_{i + 1:03d}.csv", index=False)

print(f"Split into {total_chunks} files")
```

For very large files (1 GB+), use chunksize in pd.read_csv() to avoid loading the whole file into memory at once.
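The chunksize approach mentioned above can be sketched as a small helper. This is an illustration, not code from the article: the function name and output filename pattern are my own choices.

```python
import pandas as pd

def split_csv_streaming(input_file: str, chunk_size: int,
                        prefix: str = "output_part") -> int:
    """Split input_file into CSVs of at most chunk_size rows each,
    reading one chunk at a time instead of the whole file.
    Returns the number of files written."""
    count = 0
    # chunksize makes read_csv return an iterator of DataFrames,
    # so only one chunk is ever held in memory
    for i, chunk in enumerate(pd.read_csv(input_file, chunksize=chunk_size)):
        # to_csv writes the header by default, so every output
        # file is a valid standalone CSV
        chunk.to_csv(f"{prefix}_{i + 1:03d}.csv", index=False)
        count = i + 1
    return count
```

Because only one chunk is in memory at a time, this version handles files far larger than available RAM.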
Method 3: Using the command line (Unix/macOS/Linux)
The fastest option if you're comfortable with the terminal. No dependencies needed.
1. Open your terminal.
2. Run the commands below, replacing 1000 with your desired number of data rows per file (the header is handled separately, so no adjustment is needed).
3. This produces files named output_aa_with_header.csv, output_ab_with_header.csv, etc. The -l flag sets the number of lines per file.
```bash
# Split into chunks of 1000 data rows each

# First, save the header
head -1 large_file.csv > header.csv

# Split the data (skip the header row)
tail -n +2 large_file.csv | split -l 1000 - output_

# Add the header back to each chunk
for file in output_*; do
    cat header.csv "$file" > "${file}_with_header.csv"
    rm "$file"
done
rm header.csv
```

On Windows, use WSL (Windows Subsystem for Linux) to run bash commands, or use the Python method instead.
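If WSL isn't an option, the same header-preserving split can also be done with nothing but Python's standard library. This is a sketch under my own naming (the function name and output filename pattern are illustrative, not from the article):

```python
import csv

def split_with_header(input_file: str, rows_per_file: int,
                      prefix: str = "output") -> int:
    """Split a CSV into chunks of at most rows_per_file data rows,
    repeating the header row in each output file.
    Returns the number of files written."""
    with open(input_file, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)  # save the header row
        part, rows, writer, out = 0, 0, None, None
        for row in reader:
            if rows % rows_per_file == 0:
                # start a new output file for this chunk
                if out:
                    out.close()
                part += 1
                out = open(f"{prefix}_{part:03d}.csv", "w", newline="")
                writer = csv.writer(out)
                writer.writerow(header)  # re-add the header to each chunk
            writer.writerow(row)
            rows += 1
        if out:
            out.close()
    return part
```

Like the bash script, this streams one row at a time, so it works on files of any size and runs unchanged on Windows.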
Frequently asked questions
How many rows per chunk should I use?
It depends on your destination. For Excel: keep chunks under 100,000 rows (Excel's limit is 1,048,576 but large files are slow). For most email marketing tools: 5,000-10,000 rows. For CRM imports: check the platform's documented limit — HubSpot supports up to 100,000 rows per import.
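To see how many files a given chunk size will produce, divide the row count by the chunk size and round up (the numbers below are illustrative):

```python
import math

total_rows = 1_234_567  # illustrative: rows in your CSV
chunk_size = 100_000    # e.g. a CRM's per-import limit

# round up so the final partial chunk still gets its own file
num_files = math.ceil(total_rows / chunk_size)
print(num_files)  # 13
```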
Does each chunk include the header row?
With Tabular and the Python method, yes — the header row is automatically added to every chunk so each file is a valid standalone CSV. With the raw split command, you need to add the header back manually (the bash script above does this for you).
Can I split by file size instead of row count?
Row count is more reliable because rows vary in size. Splitting by file size can produce chunks where the last row is cut in the middle. If you need a target file size, estimate your rows per chunk by dividing your target size by the average row size in bytes.
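That estimate can be computed directly from the file's size on disk. A minimal sketch, with the helper name being my own:

```python
import os

def estimate_rows_per_chunk(input_file: str, total_rows: int,
                            target_bytes: int) -> int:
    """Estimate how many rows per chunk will approximate a target
    file size, based on the average row size of the source file."""
    avg_row_bytes = os.path.getsize(input_file) / total_rows
    # never return fewer than one row per chunk
    return max(1, int(target_bytes / avg_row_bytes))
```

For example, a 1 GB file with 5 million rows averages about 200 bytes per row, so a 50 MB target works out to roughly 250,000 rows per chunk.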
What if my CSV is too large to upload to Tabular?
Tabular's Pro plan supports files up to 50 MB. For larger files, use the Python or command line methods, which have no file size limits and run entirely on your machine.
Ready to try the fastest method?
Split a large CSV into smaller files by row count. Download all parts as a ZIP archive.