File Upload API vs AWS S3: Which Is Simpler for Your Project?

March 31, 2026 · 8 min read

You need to store files and serve them over a URL. The obvious choice for many developers is AWS S3: it's the industry standard, it scales effectively without limit, and it costs fractions of a cent per gigabyte. But obvious doesn't always mean practical.

If you've ever spent an afternoon debugging IAM policies, fighting CORS headers, or configuring CloudFront distributions just to let users upload a profile picture, you already know the problem. AWS S3 is an incredibly powerful service built for an incredibly wide range of use cases. When all you need is "accept a file, give me a URL," that power becomes overhead.

This post compares two approaches side by side: using AWS S3 directly versus using a dedicated file upload API like FilePost. We'll look at real setup code, realistic cost scenarios, and the trade-offs so you can decide which fits your project.

The Problem with AWS S3 for Simple File Hosting

AWS S3 is a storage primitive. It stores objects in buckets. Everything else (public access, CDN delivery, upload authentication, CORS configuration) is something you layer on top. Here's what a typical "accept file uploads and serve them publicly" setup requires:

IAM Configuration

You need an IAM user or role with a policy granting s3:PutObject, s3:GetObject, and possibly s3:DeleteObject permissions. You'll also want to scope this to a specific bucket and prefix. That's a JSON policy document you'll need to write and test, and you'll need to securely store the resulting access key and secret.
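
A minimal upload-scoped policy might look like the sketch below. The bucket name and prefix are illustrative; adjust them to your own setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadAppAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-app-uploads/uploads/*"
    }
  ]
}
```

Note that this grants object-level actions only on one prefix of one bucket; a broader `Resource` like `*` works but defeats the point of scoping.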

Bucket Policies and Public Access

By default, S3 buckets block all public access (this is a good default). To serve files publicly, you need to either create a bucket policy allowing s3:GetObject on your prefix, or generate presigned URLs for every file. If you choose the bucket policy route, you also need to disable the "Block Public Access" settings at the bucket level, a setting AWS deliberately makes scary because misconfiguring it is a real security risk.
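
If you go the bucket policy route, the public-read policy is another JSON document along these lines (again with an illustrative bucket name and prefix):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadUploads",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-app-uploads/uploads/*"
    }
  ]
}
```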

CORS Configuration

If users upload from a browser, you need a CORS configuration on the bucket. This means writing another JSON document specifying allowed origins, methods, and headers. Get it wrong and your frontend silently fails with an opaque error.
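
A typical browser-upload CORS configuration, as a sketch (the origin is a placeholder for your frontend's domain):

```json
[
  {
    "AllowedOrigins": ["https://myapp.example.com"],
    "AllowedMethods": ["PUT", "POST", "GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```

If the browser sends a header you didn't allow, or the origin doesn't match exactly (scheme and port included), the preflight fails and the upload never starts.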

CloudFront (Optional but Recommended)

S3 serves files from a single region. For global performance, you need CloudFront in front of it. That's another service to configure: create a distribution, set up an origin access identity, configure cache behaviors, and wait for deployment (which can take 15+ minutes per change).

Presigned URLs (If Private)

If you want uploads to go directly from the browser to S3 (to avoid routing large files through your server), you need to implement presigned URL generation on your backend. That's a server endpoint, a signing step, and client-side logic to use the presigned URL with the correct headers.

None of this is impossible. It's all well-documented. But for a project where file hosting isn't the core feature, this setup can easily consume a day or more of development time.

What a Dedicated File Upload API Gives You

A dedicated file upload API collapses all of the above into a single HTTP call. You send a file, you get a CDN URL back. There's no bucket to configure, no IAM policy to write, no CORS to debug, and no CloudFront distribution to deploy.

With FilePost, the entire model is:

  1. Sign up with your email and receive an API key
  2. Send a POST request with the file and your API key
  3. Get back a permanent, CDN-backed public URL

That's it. No infrastructure. No configuration. No permissions to manage. Storage is unlimited, bandwidth is unlimited, and files live forever.

Setup Comparison: S3 vs FilePost

Let's compare what it takes to upload a file and get a public URL with each approach.

AWS S3 Setup (Python with boto3)

First, install the SDK and configure credentials:

pip install boto3

Then configure your environment variables or ~/.aws/credentials file:

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1

Now the upload code:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
bucket = 'my-app-uploads'
key = 'uploads/profile-pic.png'

# Upload the file
try:
    s3.upload_file(
        'profile-pic.png',
        bucket,
        key,
        ExtraArgs={'ContentType': 'image/png'}
    )
except ClientError as e:
    print(f"Upload failed: {e}")
    raise

# Build the public URL (assumes bucket policy allows public reads)
url = f"https://{bucket}.s3.amazonaws.com/{key}"
# Or if using CloudFront:
# url = f"https://d1234abcdef8.cloudfront.net/{key}"
print(url)

This assumes you've already created the bucket, configured its public access policy, set up CORS, and optionally configured CloudFront. That's the hidden work.

FilePost Setup (Python with requests)

import requests

# Open the file in a context manager so the handle is closed after the upload
with open("profile-pic.png", "rb") as f:
    response = requests.post(
        "https://filepost.dev/v1/upload",
        headers={"X-API-Key": "your-api-key"},
        files={"file": f},
    )

response.raise_for_status()  # surface upload errors instead of failing silently
data = response.json()
print(data["url"])
# https://cdn.filepost.dev/file/filepost/uploads/a1/a1b2c3.png

No SDK to install beyond requests (which you probably already have). No credentials file. No bucket creation. No policy configuration. The URL you get back is already served from a CDN.

The Same Comparison in cURL

S3 with presigned URLs requires generating a signature on your server first. With FilePost, it's one command:

curl -X POST https://filepost.dev/v1/upload \
  -H "X-API-Key: your-api-key" \
  -F "file=@profile-pic.png"

Response:

{
  "url": "https://cdn.filepost.dev/file/filepost/uploads/a1/a1b2c3.png",
  "file_id": "a1b2c3d4e5f6",
  "size": 84210
}

Cost Comparison

AWS S3 pricing is notoriously complex. You pay for storage, PUT requests, GET requests, and data transfer separately. Let's compare realistic scenarios assuming average file sizes of 500 KB.

Scenario               AWS S3 (estimated)                                                      FilePost
300 uploads/month      ~$0.05 storage + $0.01 requests + CloudFront costs                      $0 (Free plan)
1,000 uploads/month    ~$0.50 storage + $0.05 requests + data transfer + engineering time      $9/mo (Starter plan)
10,000 uploads/month   ~$5.00 storage + $0.50 requests + $1-5 data transfer + CloudFront       $29/mo (Pro plan)

At first glance, S3 looks cheaper at higher volumes. But this comparison is misleading for two reasons: the S3 figures leave out the engineering time spent on setup and maintenance, and they grow over time, since storage accumulates and data transfer is billed per gigabyte, while the FilePost plans are flat-rate with unlimited bandwidth.

For projects that upload millions of files or need complex access patterns, S3's raw pricing advantage is real. For projects under 25,000 uploads per month, the total cost of ownership (including developer time) almost always favors a managed API.
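
The S3 side of the table can be sanity-checked with a back-of-envelope model. The prices below are assumptions (rough list prices at the time of writing; check current AWS pricing), and the real driver is that stored gigabytes and transfer accumulate month over month:

```python
# Assumed prices: $0.023/GB-month storage, $0.005 per 1,000 PUTs,
# $0.09/GB data transfer out. Verify against current AWS pricing.
STORAGE_PER_GB_MONTH = 0.023
PUT_PER_1000 = 0.005
TRANSFER_PER_GB = 0.09

def s3_monthly_cost(uploads_per_month, avg_file_mb=0.5,
                    stored_gb=0.0, transfer_gb=0.0):
    """Estimate one month's S3 bill. stored_gb is storage already
    accumulated from earlier months; it only grows if files live forever."""
    new_gb = uploads_per_month * avg_file_mb / 1024
    storage = (stored_gb + new_gb) * STORAGE_PER_GB_MONTH
    put_requests = uploads_per_month / 1000 * PUT_PER_1000
    transfer = transfer_gb * TRANSFER_PER_GB
    return round(storage + put_requests + transfer, 2)

# 10,000 uploads/month at 500 KB each, after ~a year of accumulation:
print(s3_monthly_cost(10_000, stored_gb=60, transfer_gb=20))
```

The point isn't precision; it's that the S3 line in the table is a moving target that drifts upward as storage and traffic accumulate, while a flat plan doesn't.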

Skip the S3 Configuration

Get a file upload API working in under a minute. Free plan includes 300 uploads/month with CDN delivery.

Get Your API Key

When to Use AWS S3

S3 is the right choice when your requirements go beyond simple file hosting:

  1. You need fine-grained, per-user access control through IAM policies
  2. You need event-driven processing, such as Lambda triggers on upload
  3. You upload millions of files, where S3's raw per-gigabyte pricing pays off
  4. Your files are private and served through presigned URLs rather than public links

When to Use a File Upload API

A dedicated file upload API is the better choice when simplicity and speed matter more than raw infrastructure control:

  1. File hosting isn't your product's core feature
  2. You want a public, CDN-backed URL from a single HTTP call
  3. Your volume is modest, roughly under 25,000 uploads per month
  4. You'd rather not manage IAM policies, CORS rules, or CloudFront distributions

Making the Switch

If you're currently using S3 and considering a simpler alternative for part of your file hosting, the migration path is straightforward:

  1. Sign up for FilePost by sending a POST request to https://filepost.dev/v1/signup with your email. You'll receive an API key immediately.
  2. Replace your upload code. Swap the boto3 upload_file call for a simple HTTP POST to https://filepost.dev/v1/upload. The response includes the CDN URL directly, so there's no need to construct it yourself.
  3. Update your URL references. New files will have cdn.filepost.dev URLs. Existing S3 files can stay where they are; you don't need to migrate everything at once.
  4. Remove the AWS infrastructure once you're confident in the switch. Delete the IAM user, bucket policy, and CloudFront distribution you no longer need.

You can also run both in parallel. Use FilePost for new, simple uploads and keep S3 for anything that needs AWS-specific features like Lambda triggers or fine-grained permissions.

The decision isn't really S3 vs. not-S3. It's about matching your tool to your actual requirement. If you need a file storage primitive with infinite configurability, use S3. If you need to accept a file and get a URL back, use an API built for exactly that.