File Upload API vs AWS S3: Which Is Simpler for Your Project?
You need to store files and serve them over a URL. The obvious choice for many developers is AWS S3: it's the industry standard, it scales effectively without limit, and it costs fractions of a cent per gigabyte. But obvious doesn't always mean practical.
If you've ever spent an afternoon debugging IAM policies, fighting CORS headers, or configuring CloudFront distributions just to let users upload a profile picture, you already know the problem. AWS S3 is an incredibly powerful service built for an incredibly wide range of use cases. When all you need is "accept a file, give me a URL," that power becomes overhead.
This post compares two approaches side by side: using AWS S3 directly versus using a dedicated file upload API like FilePost. We'll look at real setup code, realistic cost scenarios, and the trade-offs so you can decide which fits your project.
The Problem with AWS S3 for Simple File Hosting
AWS S3 is a storage primitive. It stores objects in buckets. Everything else (public access, CDN delivery, upload authentication, CORS configuration) is something you layer on top. Here's what a typical "accept file uploads and serve them publicly" setup requires:
IAM Configuration
You need an IAM user or role with a policy granting s3:PutObject, s3:GetObject, and possibly s3:DeleteObject permissions. You'll also want to scope this to a specific bucket and prefix. That's a JSON policy document you'll need to write and test, and you'll need to securely store the resulting access key and secret.
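For reference, a minimal policy of the kind described might look like this (the bucket name and prefix are placeholders; adjust them to your own resources):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-app-uploads/uploads/*"
    }
  ]
}
```

Note that the `Resource` ARN scopes the permissions to a single prefix, which is the pattern you want for an uploads folder.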
Bucket Policies and Public Access
By default, S3 buckets block all public access (this is a good default). To serve files publicly, you need to either create a bucket policy allowing s3:GetObject on your prefix or generate presigned URLs for every file. If you choose the bucket policy route, you also need to disable the bucket-level "Block Public Access" settings, which AWS deliberately makes scary because misconfiguring them is a real security risk.
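A bucket policy granting public reads on a single prefix might look like this sketch (bucket name and prefix are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForUploads",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-app-uploads/uploads/*"
    }
  ]
}
```

This takes effect only after the bucket-level "Block Public Access" settings have been relaxed.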
CORS Configuration
If users upload from a browser, you need a CORS configuration on the bucket. This means writing another JSON document specifying allowed origins, methods, and headers. Get it wrong and your frontend silently fails with an opaque error.
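A typical CORS document for browser uploads might look like the following sketch (S3 expects a JSON array of rules; the origin here is a placeholder for your frontend's domain):

```json
[
  {
    "AllowedOrigins": ["https://example.com"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```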
CloudFront (Optional but Recommended)
S3 serves files from a single region. For global performance, you need CloudFront in front of it. That's another service to configure: create a distribution, set up an origin access identity, configure cache behaviors, and wait for deployment (which can take 15+ minutes per change).
Presigned URLs (If Private)
If you want uploads to go directly from the browser to S3 (to avoid routing large files through your server), you need to implement presigned URL generation on your backend. That's a server endpoint, a signing step, and client-side logic to use the presigned URL with the correct headers.
None of this is impossible. It's all well-documented. But for a project where file hosting isn't the core feature, this setup can easily consume a day or more of development time.
What a Dedicated File Upload API Gives You
A dedicated file upload API collapses all of the above into a single HTTP call. You send a file, you get a CDN URL back. There's no bucket to configure, no IAM policy to write, no CORS to debug, and no CloudFront distribution to deploy.
With FilePost, the entire model is:
- Sign up with your email and receive an API key
- Send a POST request with the file and your API key
- Get back a permanent, CDN-backed public URL
That's it. No infrastructure. No configuration. No permissions to manage. Storage is unlimited, bandwidth is unlimited, and files live forever.
Setup Comparison: S3 vs FilePost
Let's compare what it takes to upload a file and get a public URL with each approach.
AWS S3 Setup (Python with boto3)
First, install the SDK and configure credentials:
pip install boto3
Then configure your environment variables or ~/.aws/credentials file:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
Now the upload code:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
bucket = 'my-app-uploads'
key = 'uploads/profile-pic.png'

# Upload the file
try:
    s3.upload_file(
        'profile-pic.png',
        bucket,
        key,
        ExtraArgs={'ContentType': 'image/png'}
    )
except ClientError as e:
    print(f"Upload failed: {e}")
    raise

# Build the public URL (assumes the bucket policy allows public reads)
url = f"https://{bucket}.s3.amazonaws.com/{key}"
# Or, if using CloudFront:
# url = f"https://d1234abcdef8.cloudfront.net/{key}"
print(url)
This assumes you've already created the bucket, configured its public access policy, set up CORS, and optionally configured CloudFront. That's the hidden work.
FilePost Setup (Python with requests)
import requests

with open("profile-pic.png", "rb") as f:
    response = requests.post(
        "https://filepost.dev/v1/upload",
        headers={"X-API-Key": "your-api-key"},
        files={"file": f}
    )

data = response.json()
print(data["url"])
# https://cdn.filepost.dev/file/filepost/uploads/a1/a1b2c3.png
No SDK to install beyond requests (which you probably already have). No credentials file. No bucket creation. No policy configuration. The URL you get back is already served from a CDN.
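In production you'll likely want a timeout and retries around that call. Here's a minimal sketch: the endpoint and header come from the example above, while the function name, retry policy, and backoff values are our own illustrative choices.

```python
import time
import requests


def upload_file(path: str, api_key: str, retries: int = 3) -> str:
    """Upload a file to FilePost and return its CDN URL, retrying transient failures."""
    for attempt in range(retries):
        try:
            with open(path, "rb") as f:
                resp = requests.post(
                    "https://filepost.dev/v1/upload",
                    headers={"X-API-Key": api_key},
                    files={"file": f},
                    timeout=30,
                )
            if resp.status_code < 500:    # don't retry client errors
                resp.raise_for_status()   # raise on 4xx
                return resp.json()["url"]
        except requests.ConnectionError:
            pass                          # network blip: fall through and retry
        time.sleep(2 ** attempt)          # simple exponential backoff
    raise RuntimeError(f"upload failed after {retries} attempts")
```

A 4xx response raises immediately (retrying a bad API key won't help), while 5xx responses and connection errors are retried with increasing delays.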
The Same Comparison in cURL
S3 with presigned URLs requires generating a signature on your server first. With FilePost, it's one command:
curl -X POST https://filepost.dev/v1/upload \
-H "X-API-Key: your-api-key" \
-F "file=@profile-pic.png"
Response:
{
"url": "https://cdn.filepost.dev/file/filepost/uploads/a1/a1b2c3.png",
"file_id": "a1b2c3d4e5f6",
"size": 84210
}
Cost Comparison
AWS S3 pricing is notoriously complex. You pay for storage, PUT requests, GET requests, and data transfer separately. Let's compare realistic scenarios assuming average file sizes of 500 KB.
| Scenario | AWS S3 (estimated) | FilePost |
|---|---|---|
| 300 uploads/month | ~$0.05 (storage) + $0.01 (requests) + CloudFront costs | $0 (Free plan) |
| 1,000 uploads/month | ~$0.50 + $0.05 + data transfer + your engineering time | $9/mo (Starter plan) |
| 10,000 uploads/month | ~$5.00 + $0.50 + $1-5 data transfer + CloudFront | $29/mo (Pro plan) |
At first glance, S3 looks cheaper at higher volumes. But this comparison is misleading for two reasons:
- Engineering time is not free. The hours you spend configuring S3, writing IAM policies, debugging CORS, and setting up CloudFront have a cost. If you bill your time at $50-150/hour and the setup takes 4-8 hours, that's $200 to $1,200 before you've stored a single file. FilePost's paid plans pay for themselves in saved engineering time within the first month.
- S3 costs grow in non-obvious ways. Data transfer (egress) charges are where AWS bills add up. Serving a popular file thousands of times can cost significantly more than the storage. FilePost includes unlimited bandwidth on all plans.
For projects that upload millions of files or need complex access patterns, S3's raw pricing advantage is real. For projects under 25,000 uploads per month, the total cost of ownership (including developer time) almost always favors a managed API.
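As a sanity check, the back-of-envelope model behind estimates like these can be written down. All the rates below are illustrative placeholders, not quoted AWS prices; plug in current pricing for your region before relying on the numbers.

```python
def s3_monthly_estimate(uploads, avg_mb=0.5, reads_per_file=10,
                        storage_per_gb=0.023, put_per_1k=0.005,
                        get_per_1k=0.0004, egress_per_gb=0.09):
    """Rough monthly S3 cost: storage + PUT requests + GET requests + egress."""
    stored_gb = uploads * avg_mb / 1024
    egress_gb = uploads * reads_per_file * avg_mb / 1024
    return (stored_gb * storage_per_gb
            + uploads / 1000 * put_per_1k
            + uploads * reads_per_file / 1000 * get_per_1k
            + egress_gb * egress_per_gb)
```

Playing with `reads_per_file` makes the egress point above concrete: a file read a thousand times costs far more in data transfer than it ever does in storage.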
Skip the S3 Configuration
Get a file upload API working in under a minute. Free plan includes 300 uploads/month with CDN delivery.
Get Your API Key
When to Use AWS S3
S3 is the right choice when your requirements go beyond simple file hosting:
- Large enterprise with existing AWS infrastructure. If your team already manages IAM, CloudFormation, and CloudFront, adding an S3 bucket is incremental effort. You're already paying the complexity tax.
- Complex access patterns. If you need fine-grained permissions per file (some public, some private, some shared with specific users), S3's policy system gives you the control you need.
- Data processing pipelines. If uploaded files trigger Lambda functions, feed into AWS Glue, or flow into other AWS services, S3 is the natural starting point. The AWS ecosystem integration is unmatched.
- Regulatory compliance. If you need specific data residency (files must be stored in eu-west-1), encryption at rest with customer-managed keys, or access logging for audits, S3 provides these controls.
- Very high volume. If you're storing millions of files and serving them billions of times, S3's per-request pricing will be cheaper than any managed API at that scale.
When to Use a File Upload API
A dedicated file upload API is the better choice when simplicity and speed matter more than raw infrastructure control:
- MVPs and prototypes. You need file uploads working today, not after a day of AWS configuration. Ship the feature in 5 minutes and move on to what makes your product unique.
- Automation workflows. If you're connecting tools in Zapier, Make, or n8n and need to host files from form submissions or email attachments, a single API call is far simpler than integrating with S3.
- Small teams without DevOps. If nobody on your team manages AWS infrastructure day-to-day, the overhead of learning and maintaining S3 configurations is a recurring cost. A managed API removes that entirely.
- Static sites and JAMstack. If your app runs on Vercel, Netlify, or GitHub Pages, you don't have a backend server to handle S3 SDK calls. A file upload API gives you the functionality with a simple HTTP request.
- Client projects and freelance work. When you're billing for deliverables, spending hours on AWS setup for a simple file hosting need is time you're not spending on the actual project.
Making the Switch
If you're currently using S3 and considering a simpler alternative for part of your file hosting, the migration path is straightforward:
- Sign up for FilePost by sending a POST request to https://filepost.dev/v1/signup with your email. You'll receive an API key immediately.
- Replace your upload code. Swap the boto3 upload_file call for a simple HTTP POST to https://filepost.dev/v1/upload. The response includes the CDN URL directly, so there's no need to construct it yourself.
- Update your URL references. New files will have cdn.filepost.dev URLs. Existing S3 files can stay where they are; you don't need to migrate everything at once.
- Remove the AWS infrastructure once you're confident in the switch. Delete the IAM user, bucket policy, and CloudFront distribution you no longer need.
You can also run both in parallel. Use FilePost for new, simple uploads and keep S3 for anything that needs AWS-specific features like Lambda triggers or fine-grained permissions.
The decision isn't really S3 vs. not-S3. It's about matching your tool to your actual requirement. If you need a file storage primitive with infinite configurability, use S3. If you need to accept a file and get a URL back, use an API built for exactly that.