Load Balancing
ServerPlane lets you distribute traffic for any application across multiple backend servers using Nginx as a load balancer. This guide explains how load balancing works, what it does and does not handle, and how to prepare your application for a multi-server setup.
How It Works
When you enable load balancing for an app, ServerPlane:
- Keeps your current server as the primary backend — it continues serving traffic as before.
- Deploys your app to additional backend servers — the same Git repository, build steps, and configuration are run on each new backend via Ansible.
- Configures Nginx on the primary server as a reverse proxy — an upstream block distributes incoming requests across all backends.
- Shares the database — all backends connect to the database on the primary server. No database is created on backend servers.
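The resulting configuration follows the standard Nginx upstream pattern. A simplified sketch of what such a config looks like — the server names, IPs, and ports here are illustrative, not what ServerPlane actually writes:

```nginx
# Illustrative only — ServerPlane's generated config may differ in detail.
upstream app_backends {
    server 127.0.0.1:8080;   # primary backend (local app)
    server 10.0.0.2:8080;    # additional backend
    server 10.0.0.3:8080;    # additional backend
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```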
In short: load balancing in ServerPlane is your Git-deployed application running on multiple servers, all pointing at the same database. This is the standard architecture for horizontally scaled web applications.
What Load Balancing Does NOT Do
CRITICAL: Files uploaded to one backend are NOT automatically available on other backends.
Each backend server has its own independent filesystem. If a user uploads an image through Backend A, that file only exists on Backend A's disk. A subsequent request routed to Backend B will not find that file.
This affects:
- WordPress — Media uploads in wp-content/uploads/
- Laravel — Files in storage/app/public/
- Node.js / Python / Docker — Any application that writes files to the local filesystem
- User sessions — File-based sessions (PHP, Laravel file driver) are local to each backend
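The failure mode is easy to reproduce. A minimal Python sketch that models two backends as two separate directories (a stand-in for two servers' independent disks):

```python
import tempfile
from pathlib import Path

# Each backend gets its own "disk" — two independent directories.
backend_a = Path(tempfile.mkdtemp(prefix="backend-a-"))
backend_b = Path(tempfile.mkdtemp(prefix="backend-b-"))

# A user's upload is routed to Backend A and written to its local disk.
(backend_a / "avatar.jpg").write_bytes(b"...image data...")

# The next request is routed to Backend B — the file is simply not there.
print((backend_a / "avatar.jpg").exists())  # True
print((backend_b / "avatar.jpg").exists())  # False
```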
ServerPlane does not sync files between backends. This is by design — real-time file synchronization introduces complexity, latency, and conflict resolution problems that are best solved at the application level.
Preparing Your Application for Load Balancing
To run reliably behind a load balancer, your application must be stateless — meaning any backend can handle any request without depending on local files or sessions from a previous request.
1. Move File Uploads to Object Storage
Instead of writing uploads to the local filesystem, configure your application to use an S3-compatible object storage service (AWS S3, DigitalOcean Spaces, Cloudflare R2, MinIO, etc.).
WordPress:
Install and configure a plugin that offloads media to S3:
- WP Offload Media (recommended)
- Media Cloud
These plugins automatically upload new media to S3 and rewrite URLs so images are served from the storage bucket instead of the local filesystem.
Laravel:
Laravel has built-in support for S3 storage. In your .env file on each backend (via the Environment Variables tab):
FILESYSTEM_DISK=s3
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=your-bucket
AWS_URL=https://your-bucket.s3.amazonaws.com
Then use Laravel's Storage facade as normal — files will be stored in S3 instead of locally:
// This now writes to S3, not local disk
Storage::put('uploads/avatar.jpg', $fileContents);
Node.js:
Use the AWS SDK (@aws-sdk/client-s3) to upload files to S3 instead of writing to the local filesystem.
Python (Django):
Use django-storages with the S3 backend (note that Django 4.2+ introduces the STORAGES setting, which supersedes DEFAULT_FILE_STORAGE):
# settings.py (Django < 4.2; on 4.2+ use the STORAGES dict instead)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'your-bucket'
2. Use External Session Storage
File-based sessions break across multiple backends because each server has its own session files. Switch to a centralized session store.
WordPress:
WordPress stores sessions in the database by default, so this is already handled. If you are using a plugin that uses PHP file sessions, switch to a database-backed or Redis-backed session plugin.
Laravel:
Switch from the file session driver to database, redis, or cookie:
SESSION_DRIVER=database
or, if you have Redis available:
SESSION_DRIVER=redis
Alternatively, use the IP Hash load balancing method in ServerPlane. This routes all requests from the same client IP to the same backend, so file-based sessions will work — but this is a workaround, not a solution. If that backend goes down, sessions are lost.
3. Use External Cache
If your application uses file-based caching, switch to Redis or Memcached so all backends share the same cache:
CACHE_STORE=redis
4. Ensure Stateless Background Jobs
If your application runs background jobs (queues), make sure the job workers on each backend can process jobs independently. Use a shared queue backend like Redis or a database queue.
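The shared-queue requirement can be sketched with Python's standard library — here queue.Queue stands in for Redis or a database queue, and each "worker" represents a backend server (a simplified model, not ServerPlane's actual worker setup):

```python
import queue
import threading

# Stand-in for a shared queue backend (Redis, database, etc.).
jobs = queue.Queue()
for i in range(6):
    jobs.put(f"job-{i}")

processed = []
lock = threading.Lock()

def worker(backend_name):
    # Each backend pulls from the same shared queue, so every job is
    # processed exactly once — by whichever backend grabs it first.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed.append((backend_name, job))

threads = [threading.Thread(target=worker, args=(f"backend-{n}",)) for n in "AB"]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(processed))  # 6 — every job handled once, spread across both backends
```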
Load Balancing Methods
| Method | Behavior | Best For |
|---|---|---|
| Round Robin | Requests are distributed evenly across backends in order | General use, stateless apps |
| Least Connections | New requests go to the backend with the fewest active connections | Apps with varying request duration |
| IP Hash | Requests from the same client IP always go to the same backend | Apps that rely on local sessions (workaround) |
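The three methods can be sketched in a few lines of Python. This is illustrative logic only — the real selection happens inside Nginx, and Nginx's actual ip_hash algorithm differs (for IPv4 it hashes only the first three octets):

```python
import hashlib
from itertools import cycle

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round Robin: rotate through backends in order.
rr = cycle(backends)
def round_robin():
    return next(rr)

# Least Connections: pick the backend with the fewest active connections.
active = {"10.0.0.1": 4, "10.0.0.2": 1, "10.0.0.3": 7}
def least_connections():
    return min(backends, key=lambda b: active[b])

# IP Hash: hash the client IP so the same client always hits the same backend.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

print([round_robin() for _ in range(4)])  # .1, .2, .3, then wraps back to .1
print(least_connections())                # 10.0.0.2 (fewest active connections)
print(ip_hash("203.0.113.9") == ip_hash("203.0.113.9"))  # True — sticky
```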
Health Checks
ServerPlane performs periodic health checks against each backend by sending an HTTP request to the configured health check path (default: /). Backends that fail health checks are marked as unhealthy in the dashboard.
- Healthy: HTTP 200, 201, 204, 301, or 302 response
- Unhealthy: Connection refused, timeout, or HTTP 5xx response
- Deploying: Backend is still being provisioned
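These classification rules map cleanly to a small function. A hedged sketch of the logic (ServerPlane's internal implementation may differ):

```python
HEALTHY_CODES = {200, 201, 204, 301, 302}

def classify(status_code=None, error=None, provisioning=False):
    """Map a health-check result to a dashboard state.

    status_code: HTTP status from the check, or None if no response arrived.
    error: 'refused' or 'timeout' when the connection itself failed.
    """
    if provisioning:
        return "deploying"
    if error in ("refused", "timeout"):
        return "unhealthy"
    if status_code in HEALTHY_CODES:
        return "healthy"
    # 5xx — and, in this sketch, any other unexpected code
    return "unhealthy"

print(classify(status_code=200))    # healthy
print(classify(status_code=503))    # unhealthy
print(classify(error="timeout"))    # unhealthy
print(classify(provisioning=True))  # deploying
```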
Unhealthy backends remain in the Nginx upstream configuration but are not automatically removed. You can toggle them off or remove them from the Load Balancer tab.
Database Considerations
All backends connect to the database on the primary server. ServerPlane automatically:
- Opens the database port (5432 for PostgreSQL, 3306 for MariaDB) on the primary's firewall for each backend IP
- Configures PostgreSQL/MariaDB to accept remote connections from backend servers
- Deploys the correct database host, name, and credentials to each backend's environment
You do not need to manually configure database access. However, be aware that:
- All database traffic crosses the network between backend and primary servers. For latency-sensitive applications, choose backends in the same datacenter or region as the primary.
- Database connection limits may need to be increased if you add many backends, since each backend opens its own connection pool.
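The connection-limit point is simple arithmetic: the database must allow at least backends × pool size connections, plus headroom for admin tools and migrations. A quick sketch with illustrative numbers:

```python
backends = 4            # primary + 3 additional backend servers
pool_per_backend = 25   # connections each backend's pool may open
headroom = 10           # admin sessions, migrations, monitoring

required = backends * pool_per_backend + headroom
print(required)  # 110 — compare against the database's max_connections
                 # setting and raise the limit if needed
```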
Quick Checklist
Before enabling load balancing, verify:
- File uploads use object storage (S3, R2, Spaces), not local disk
- Sessions use database, Redis, or cookies — not file-based sessions
- Cache uses Redis or Memcached — not file cache
- Background job workers use a shared queue (Redis, database)
- Your application does not write persistent state to the local filesystem
- DNS points to the primary server's IP (the load balancer entry point)
If your application already follows these practices, enabling load balancing is as simple as clicking Enable Load Balancer and adding backend servers.