# SeaweedFS Has a Free API: Distributed Object Storage for Billions of Files
In this post:

- What Is SeaweedFS?
- Quick Start
- SeaweedFS API: Upload and Retrieve Files
- S3 API Compatibility
- Filer API: Directory-Based Access

## What Is SeaweedFS?

SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lakes. It implements an object store with O(1) disk seeks and transparent cloud integration, handling billions of files efficiently.

SeaweedFS started as a distributed file system inspired by Facebook's Haystack paper. It has since evolved into a full-featured distributed storage system with S3 API compatibility, FUSE mounting, Hadoop integration, and WebDAV support.
## Quick Start

```shell
# Install SeaweedFS
wget https://github.com/seaweedfs/seaweedfs/releases/download/3.71/linux_amd64.tar.gz
tar xzf linux_amd64.tar.gz

# Start master server
./weed master -mdir=/tmp/mdata -port=9333 &

# Start volume server
./weed volume -dir=/tmp/vdata -max=5 -mserver=localhost:9333 -port=8080 &

# Start filer (optional, for directory structure)
./weed filer -master=localhost:9333 -port=8888 &
```
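Once the servers are up, you can sanity-check the cluster from Python before uploading anything. This is a minimal sketch: the helper name is mine, and the exact shape of the master's `/dir/status` JSON response (a `Topology` object with a `Free` slot count) is an assumption based on current releases.

```python
import requests

MASTER = "http://localhost:9333"

def free_volume_slots(status: dict) -> int:
    """Pull the free volume-slot count out of a /dir/status response.

    The response shape ({"Topology": {"Free": ..., "Max": ...}}) is an
    assumption; verify it against your SeaweedFS version.
    """
    return status.get("Topology", {}).get("Free", 0)

if __name__ == "__main__":
    # Query the running master and report spare capacity
    status = requests.get(f"{MASTER}/dir/status").json()
    print(f"Free volume slots: {free_volume_slots(status)}")
```

If this prints a positive number, the master sees volume servers with room to assign new files.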
## SeaweedFS API: Upload and Retrieve Files

```python
import requests

MASTER = "http://localhost:9333"
FILER = "http://localhost:8888"

# Upload via master (volume-level)
# Step 1: Get a file ID
assign = requests.get(f"{MASTER}/dir/assign").json()
fid = assign["fid"]
url = assign["url"]
print(f"Assigned: fid={fid}, url={url}")

# Step 2: Upload the file
with open("photo.jpg", "rb") as f:
    response = requests.post(f"http://{url}/{fid}", files={"file": f})
print(f"Uploaded: {response.json()}")

# Step 3: Read it back
data = requests.get(f"http://{url}/{fid}")
with open("downloaded.jpg", "wb") as f:
    f.write(data.content)
```
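A fid like `3,01637037d6` encodes the volume id before the comma, and any client can ask the master which volume servers currently hold that volume via the `/dir/lookup` endpoint. A sketch; the helper names are mine:

```python
import requests

MASTER = "http://localhost:9333"

def volume_id_of(fid: str) -> str:
    """A fid is "<volumeId>,<fileKey+cookie>"; the volume id is the part before the comma."""
    return fid.split(",", 1)[0]

def lookup_locations(fid: str) -> list:
    """Ask the master which volume servers hold this fid's volume."""
    resp = requests.get(
        f"{MASTER}/dir/lookup",
        params={"volumeId": volume_id_of(fid)},
    ).json()
    return [loc["url"] for loc in resp.get("locations", [])]

if __name__ == "__main__":
    print(lookup_locations("3,01637037d6"))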
## S3 API Compatibility

```python
import boto3

# Connect to SeaweedFS S3 gateway
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8333",
    aws_access_key_id="any",
    aws_secret_access_key="any",
)

# Create bucket
s3.create_bucket(Bucket="my-data")

# Upload file
s3.upload_file("report.pdf", "my-data", "reports/2026/q1.pdf")

# List objects
objects = s3.list_objects_v2(Bucket="my-data", Prefix="reports/")
for obj in objects.get("Contents", []):
    print(f"{obj['Key']}: {obj['Size']} bytes")
```
## Filer API: Directory-Based Access

```shell
# Upload via filer (preserves directory structure)
curl -F "file=@data.csv" http://localhost:8888/datasets/2026/

# List directory
curl "http://localhost:8888/datasets/2026/?pretty=y"

# Download
curl http://localhost:8888/datasets/2026/data.csv -o local.csv
```

Key features:

- O(1) disk seek for file access
- S3 API compatible
- FUSE mount support
- Automatic data replication
- Erasure coding for storage efficiency
- Built-in tiering to cloud storage
- WebDAV and HDFS support

Links:

- SeaweedFS GitHub — 24K+ stars
- SeaweedFS Wiki
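The filer's curl examples above translate directly to Python, since the filer is plain HTTP. A sketch: the `filer_path` helper is mine for illustration, and the JSON directory listing via an `Accept: application/json` header follows documented filer behavior (verify the response fields against your version).

```python
import requests

FILER = "http://localhost:8888"

def filer_path(directory: str, filename: str) -> str:
    """Join a filer directory and file name into a full URL (illustrative helper)."""
    return f"{FILER}/{directory.strip('/')}/{filename}"

if __name__ == "__main__":
    # Upload: multipart POST to the directory path, like `curl -F`
    with open("data.csv", "rb") as f:
        requests.post(f"{FILER}/datasets/2026/", files={"file": f})

    # List the directory as JSON instead of HTML
    listing = requests.get(
        f"{FILER}/datasets/2026/",
        headers={"Accept": "application/json"},
    ).json()
    print(listing)
```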