Rent NVIDIA L40S GPUs on Demand from $0.72/hr
48GB GDDR6 ECC Ada Lovelace data center GPU, tuned for inference, video, and visual AI.
You can rent an NVIDIA L40S on Spheron starting at $0.72 per GPU per hour on dedicated instances (99.99% SLA, non-interruptible), with spot pricing cheaper still. Per-minute billing, no long-term contracts, and instances deploy in under 2 minutes across data center partners in multiple regions. Each card ships with 48GB of GDDR6 ECC memory, 4th generation Tensor Cores with FP8 support, 3rd generation RT Cores, and hardware AV1 encode. The L40S is purpose-built for production inference of 7B-30B LLMs, Stable Diffusion and SDXL serving, video transcoding pipelines, and mixed AI + graphics workloads where you need data center reliability without H100 pricing.
Technical specifications

| Spec | Value |
|---|---|
| Architecture | Ada Lovelace |
| GPU memory | 48GB GDDR6 with ECC |
| Memory bandwidth | 864 GB/s |
| Tensor Cores | 4th generation, FP8 support |
| RT Cores | 3rd generation |
| Video encode | Hardware AV1 encode |
Pricing comparison
| Provider | Price/hr | Savings |
|---|---|---|
| Spheron (your price) | $0.72/hr | - |
| RunPod | $0.79/hr | 1.1x more expensive |
| Lambda Labs | $1.29/hr | 1.8x more expensive |
| CoreWeave | $1.89/hr | 2.6x more expensive |
| AWS (g6e.xlarge) | $1.86/hr | 2.6x more expensive |
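At sustained utilization the hourly gap compounds quickly. A quick sketch of the monthly math, using the rates from the table above and assuming one GPU running around the clock (~730 hours/month):

```python
# Approximate monthly cost of one L40S at full utilization.
# Rates are the per-hour prices from the comparison table above.
HOURS_PER_MONTH = 730

rates = {
    "Spheron": 0.72,
    "RunPod": 0.79,
    "Lambda Labs": 1.29,
    "CoreWeave": 1.89,
    "AWS g6e.xlarge": 1.86,
}

monthly = {name: rate * HOURS_PER_MONTH for name, rate in rates.items()}
for name, cost in monthly.items():
    extra = cost - monthly["Spheron"]
    print(f"{name:>15}: ${cost:,.2f}/mo (+${extra:,.2f} vs Spheron)")
```

A single always-on L40S runs about $526/month on Spheron versus roughly $1,380/month on CoreWeave at list rates.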
Need More L40S Than What's Listed?
Reserved Capacity
Commit to a duration, lock in availability and better rates
Custom Clusters
8 to 512+ GPUs, specific hardware, InfiniBand configs on request
Supplier Matchmaking
Spheron sources from its certified data center network, negotiates pricing, handles setup
Need more L40S capacity? Tell us your requirements and we'll source it from our certified data center network.
Typical turnaround: 24–48 hours
When to pick the L40S
Pick L40S if
You're running production inference for 7B-30B LLMs, SDXL serving, or video transcoding pipelines and need ECC + data center drivers without H100 pricing. It's also the right pick when you need FP8 support but not HBM bandwidth, and when AV1 hardware encode is on the requirements list.
Pick A100 80GB instead if
Your workload is training-heavy and bandwidth-bound. A100 has 2 TB/s HBM2e (vs 864 GB/s GDDR6 on L40S), making it faster for pre-training and fine-tuning. L40S wins at inference, A100 wins at training.
Pick RTX 4090 instead if
Your model fits in 24GB and you're running dev / testing workloads where ECC and multi-tenant isolation don't matter. RTX 4090 is roughly half the hourly rate of L40S.
Pick H100 instead if
You need HBM3 bandwidth (3.35 TB/s) or NVLink for multi-GPU tensor parallelism. H100 is the right pick for 70B+ inference or any training job where memory bandwidth is the bottleneck.
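Most of the decisions above come down to whether the model fits in 48GB. A rough rule of thumb (an approximation, not a vendor spec): weight memory is roughly parameters times bytes per parameter, plus headroom for KV cache, activations, and framework overhead. The 10GB headroom default below is an assumed working figure:

```python
# Rough VRAM sizing rule of thumb: weights = params * bytes_per_param,
# plus headroom for KV cache, activations, and framework overhead.
def fits_on_l40s(params_billions: float, bytes_per_param: float,
                 headroom_gb: float = 10.0, vram_gb: float = 48.0) -> bool:
    weights_gb = params_billions * bytes_per_param  # 1B params at 1 byte ~ 1 GB
    return weights_gb + headroom_gb <= vram_gb

print(fits_on_l40s(8, 1.0))    # Llama 3.1 8B at FP8: ~8 GB weights, fits easily
print(fits_on_l40s(32, 1.0))   # Qwen 2.5 32B at FP8: ~32 GB weights, fits but tight
print(fits_on_l40s(70, 2.0))   # 70B at FP16: ~140 GB weights, needs H100-class multi-GPU
```

FP8 (1 byte/param) is what makes 30B-class models practical on a single L40S; at FP16 the same models would overflow the card.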
Ideal use cases
AI Inference at Scale
Run cost-effective inference workloads with 48GB memory and INT8 support for high-throughput production deployments.
Video Processing & Encoding
Leverage hardware-accelerated video pipelines for live streaming, transcoding, and video analytics at scale.
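Ada Lovelace is NVIDIA's first generation with hardware AV1 encode, exposed in FFmpeg as the `av1_nvenc` encoder. A minimal transcoding sketch, assuming an ffmpeg build with NVENC enabled; the file names, preset, and bitrate are illustrative placeholders:

```python
# Build an AV1 hardware-encode command for an L40S host.
# Assumes ffmpeg compiled with NVENC support; tune preset/bitrate per pipeline.
import subprocess

def av1_transcode_cmd(src: str, dst: str, bitrate: str = "3M") -> list[str]:
    return [
        "ffmpeg", "-y",
        "-hwaccel", "cuda",      # decode on the GPU where possible
        "-i", src,
        "-c:v", "av1_nvenc",     # hardware AV1 encoder on Ada GPUs
        "-preset", "p5",         # NVENC quality/speed preset
        "-b:v", bitrate,
        dst,
    ]

cmd = av1_transcode_cmd("input.mp4", "output_av1.mkv")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # run this on a GPU host with ffmpeg installed
```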
Visual Computing & Rendering
Combine AI acceleration with professional graphics capabilities for rendering and visualization workloads.
Mixed AI + Graphics Workloads
Take advantage of the L40S's unique combination of AI and graphics acceleration for next-generation creative and visual AI applications.
Performance benchmarks
Serve Llama 3.1 8B at FP8 on L40S
L40S's 48GB GDDR6 ECC and FP8 Tensor Cores make it a strong fit for production 7B-13B inference with heavy concurrency. vLLM gives you an OpenAI-compatible endpoint in one command.
```bash
# SSH into your L40S instance
ssh root@<instance-ip>

# Install vLLM
pip install vllm

# Launch Llama 3.1 8B FP8 with high concurrency
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --quantization fp8 \
  --max-model-len 16384 \
  --max-num-seqs 64 \
  --gpu-memory-utilization 0.9

# Test the endpoint
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"meta-llama/Llama-3.1-8B-Instruct","prompt":"Hello","max_tokens":50}'
```

For 30B-class models (Qwen 2.5 32B at FP8, Mixtral 8x7B at AWQ), the quantized weights still fit with room for KV cache at moderate batch sizes.
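Once the endpoint is up, serving economics reduce to throughput versus the hourly rate. A back-of-envelope sketch; the aggregate throughput figure below is an assumed placeholder, not a measured benchmark, so substitute numbers from your own load test:

```python
# Back-of-envelope serving cost per million output tokens.
# ASSUMED_TOKENS_PER_SEC is a placeholder -- measure your own throughput
# at your batch size and sequence lengths.
GPU_RATE_PER_HR = 0.72          # Spheron dedicated L40S rate
ASSUMED_TOKENS_PER_SEC = 2000   # aggregate across concurrent requests

tokens_per_hour = ASSUMED_TOKENS_PER_SEC * 3600
cost_per_million = GPU_RATE_PER_HR / tokens_per_hour * 1_000_000
print(f"${cost_per_million:.3f} per 1M output tokens")
```

At that assumed throughput, a dedicated L40S works out to roughly $0.10 per million output tokens; the figure scales linearly with whatever throughput you actually measure.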
Related resources
GPU Cloud Benchmarks 2026
See how L40S performs against A100 and RTX 4090 in real-world benchmarks across GPU cloud providers.
Best NVIDIA GPUs for LLMs: Complete Ranking Guide
Where L40S fits in the GPU lineup for LLM inference, and when it's the right budget choice.
The GPU Cloud Cost Optimization Playbook
How to cut your AI compute bill by 60%, including when to pick L40S over pricier alternatives.