File Uploads Are Broken: Stop Buffering, Start Streaming (Directly to S3)

Mar 2026 · 8 min read

Let's be honest. The first time you built a file upload feature, you probably used multer. You probably streamed the file through your Node.js server. You probably thought, “This is fine. It works on my machine.”

It works because you are the only user.

Put that same code in production with 100 users trying to upload 50MB video files simultaneously, and watch your Event Loop scream for mercy.

This isn't a tutorial on “how to use S3.” This is an intervention. We are going to build a Direct-to-S3 architecture that handles infinite scale, costs almost nothing, and cleans up after itself when users inevitably rage-quit their browser tabs.

Phase 1: The RAM Trap (Why Your Server is Dying)

If you are accepting multipart/form-data on your backend, you are actively choosing violence.

When a user uploads a file to your server, your Node.js process has to hold that connection open. It has to allocate buffers. It has to context switch. The event loop is single-threaded. While it's juggling chunks of a 4K video file, it isn't handling API requests. It isn't authenticating users. It is acting as a glorified pipe, and a leaky one at that.

The Memory Spike

[Interactive demo: sliders for concurrent users (1–100) and file size (5–500 MB). At 100 users uploading 500 MB files simultaneously, the buffering server (multer) sits around 650 MB of RAM and climbing, while the direct-to-S3 server stays near 19 MB.]

Stop paying for EC2 instances just to buffer bytes. Let Amazon handle the I/O. That's what they are paid for.

Phase 2: The “Standard” Solution's Flaw (The Zombie Graveyard)

So you grew up. You read the AWS docs. You implemented Presigned URLs.

You generate a URL, the frontend uploads directly to S3. Your server is happy. But you created a new problem. A simpler, quieter, more expensive problem.

The “Abandoned Cart” of Files.

User clicks “Upload”. You generate a key: projectMedia/a1b2-funny_cat.mp4. User starts uploading to S3. User gets bored. User closes tab. User never submits the form.

Your database has no record of this file. But S3 does. And S3 never forgets. Use this architecture for a year, and you are paying for terabytes of “Zombie Files” that belong to nobody.

The S3 Bill Simulator

[Interactive demo: sliders for uploads per day (100–5,000) and abandonment rate (5%–60%), projected over 12 months and split into valid storage vs. zombie files nobody owns. After 12 months: $16.56/mo for ghost files, 20% of your total $82.80/mo bill.]

Phase 3: The Fix — Guilty Until Proven Innocent

The solution is not a cron job that scans all of S3 (at millions of objects, full ListObjects sweeps get slow and surprisingly expensive). The solution is Optimistic Deletion.

The Core Philosophy: Every file is scheduled for execution before it is even created. It must earn the right to survive by successfully completing the form submission.

If the user uploads and vanishes? The file dies. If the user uploads and submits? The file gets a pardon.

We use Redis for this. Because Redis is fast, and we need a Dead Man's Switch.

Phase 4: The Implementation (The Codebase Deep Dive)

We are using a Sorted Set (ZSET) in Redis. Why? Because we need to order items by “Time of Death.”

Step A — Generating the Presigned URL

In S3Service.ts, when we generate a presigned URL, we instantly create a “Death Warrant” in Redis.

S3Service.ts
// imports at the top of the file (AWS SDK v3):
//   import { PutObjectCommand } from "@aws-sdk/client-s3";
//   import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

async createPreSignedUploadUrl(fileMetaData: FileMetaData) {
  // ... validation logic ...
  // (assumed context: these fields come off the validated fileMetaData)
  const { originalName, fileType, fileSize } = fileMetaData;
  const uniqueId = crypto.randomUUID(); // any collision-resistant id works

  const key = `projectMedia/${uniqueId}-${originalName}`.replace(/\s+/g, "");

  // 1. Generate the S3 PutObject Command
  const putCommand = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: key,
    ContentType: fileType,
  });

  // 2. Get the signed URL (5 min to start uploading)
  const uploadSignedUrl = await getSignedUrl(this.s3Client, putCommand, {
    expiresIn: 300,
  });

  // 3. THE TRAP: Schedule for deletion immediately
  const redisUploadKey = `s3upload_${key}`;
  const expiryTimestamp = Date.now() + 320 * 1000; // 5m 20s grace period

  const metadata = {
    timestamp: Date.now(),
    expectedType: fileType,
    expectedSize: fileSize,
  };

  // The metadata TTL (420s) deliberately outlives the 320s death deadline,
  // so validation can still read it even if it races the cleanup sweep
  await redisClient.setex(redisUploadKey, 420, JSON.stringify(metadata));

  // ZADD adds it to our "Death Row" sorted set
  // Score = The specific millisecond it should die
  await redisClient.zadd(
    "media-cleanup-schedule",
    expiryTimestamp,
    redisUploadKey
  );

  return { uploadSignedUrl, key };
}

Notice the expiryTimestamp passed to zadd: Date.now() plus 320 seconds, in milliseconds. If we don't hear back from the client within 5 minutes and 20 seconds, this key is getting reaped.

Step B — Uploading Directly from the Browser

The frontend needs to respect the contract. In useProjectSubmission.ts, we handle the direct upload.

Crucial Detail: You MUST send the Content-Type. The presigned URL's signature covers that header. If the signature says image/png and your browser sends application/octet-stream, AWS rejects the PUT with a 403.

useProjectSubmission.ts
// urlResponse.data: [{ uploadSignedUrl, key }] from the backend,
// in the same order as allFiles (the user's selected File objects)
const [keys, error] = await tryCatch<string[]>(() =>
  Promise.all(
    urlResponse.data.map(async (urlData, index) => {
      const file = allFiles[index];

      // DIRECT TO S3 via PUT
      await axios.put(urlData.uploadSignedUrl, file, {
        headers: {
          // This MUST match what you told the backend
          "Content-Type": file.type,
        },
      });

      return urlData.key;
    })
  )
);

The Data Lifecycle Flow

[Sequence diagram: Backend → Redis, ZADD death-row {timestamp}]

Your backend generates a presigned URL and immediately adds the S3 key to Redis with a death timestamp. The file is on Death Row before it even exists.

Step C — Verifying the Upload

The user successfully uploads the file AND submits the form. projects.controller.ts receives the keys.

Do not trust the client. They could send you a key for a 5TB file they uploaded elsewhere, or a different file entirely.

We use HeadObject to verify the file exists on S3 and matches our expected metadata (Size & Type) that we stashed in Redis.

S3Service.ts
// (HeadObjectCommand also comes from "@aws-sdk/client-s3")
async validateAndCreatePreSignedDownloadUrl(key: string) {
  const redisUploadKey = `s3upload_${key}`;

  // Check if the key exists in our temporary metadata store
  const metadataStr = await redisClient.get(redisUploadKey);
  if (!metadataStr) throw new Error("Key expired or invalid");

  const metadata = JSON.parse(metadataStr);

  // BAIT AND SWITCH CHECK
  const validateCommand = new HeadObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: key,
  });

  const response = await this.s3Client.send(validateCommand);

  // Does S3 match what we promised?
  if (response.ContentType !== metadata.expectedType) {
    // Leave it on Death Row; the cleanup worker will delete it
    throw new Error("Invalid upload. File will be deleted.");
  }
  if (response.ContentLength !== metadata.expectedSize) {
    throw new Error("Size mismatch. File will be deleted.");
  }

  // THE SALVATION: Remove from Death Row
  await redisClient.del(redisUploadKey);
  await redisClient.zrem("media-cleanup-schedule", redisUploadKey);

  return `${process.env.S3_CLOUDFRONT_DISTRIBUTION}/${key}`;
}

If ZREM runs, the file is safe. It is now a permanent resident of your application.

Step D — Running the Cleanup Worker

What happens if the user closes the tab? ZREM never runs. The key sits in media-cleanup-schedule with a score roughly 5 minutes in the future.

Enter S3Cleanup.worker.ts. It wakes up on a schedule (cron: 0 0 */12 * * *, i.e. every 12 hours) and asks Redis: “Who is past their expiration date?” A zombie can therefore linger until the next sweep, but no longer.

S3Cleanup.worker.ts
const now = Date.now();

// 1. Fetch items with score <= now (Expired)
const expiredKeys = await redisClient.zrangebyscore(
  "media-cleanup-schedule",
  0,
  now
);

for (const key of expiredKeys) {
  try {
    // 2. Kill the file on S3
    const actualS3Key = key.replace("s3upload_", "");
    await s3Service.deleteObject(actualS3Key);

    // 3. Remove the record (Closure)
    await redisClient.zrem("media-cleanup-schedule", key);
  } catch (err) {
    // Leave the entry in place; the next sweep retries it
    console.error(`Cleanup failed for ${key}`, err);
  }
}

It creates a perfect, self-cleaning loop. We don't scan S3. We don't manage complex state. We just check a sorted list of timestamps.

Conclusion

This architecture gives you:

  • Infinite Upload Scale: Your server handles JSON, S3 handles the blobs.
  • Zero Waste: A file that isn’t linked to a project within its grace period is doomed; the next cleanup sweep deletes it.
  • Simplicity: No complex job queues, just a Redis Sorted Set.

Stop building upload servers. Start building upload architectures.