Full-stack Laravel & Vue.js developer building SaaS platforms that scale

I design and develop scalable APIs, interactive Vue UIs, and automated SaaS workflows with Laravel. My goal is to build software that performs, looks great, and solves real business problems.

Laravel Performance Optimization: From Slow to Lightning Fast

After building and scaling dozens of Laravel applications—from startups handling hundreds of users to enterprise SaaS platforms processing 10M+ requests daily—I've learned that performance isn't just about raw speed. It's about sustainability, cost optimization, and delivering a flawless user experience under any load.

In this comprehensive guide, I'll share battle-tested techniques that have helped me consistently achieve sub-100ms response times in production Laravel applications. These aren't theoretical optimizations—they're patterns I've implemented and measured across real-world, high-traffic systems.

💡 What You'll Master:
  • Database query optimization beyond basic eager loading
  • Multi-layer caching strategies that actually work
  • Laravel Octane production configuration (FrankenPHP & RoadRunner)
  • Queue architecture for high-throughput systems
  • Real-world monitoring and profiling techniques
  • Asset optimization for modern Laravel applications

Understanding Laravel Performance: The Mental Model

Before diving into specific optimizations, let's understand what actually makes Laravel applications slow:

The Performance Pyramid:

  1. Database queries (70%): N+1 queries, missing indexes, inefficient joins
  2. Application logic (15%): Heavy computations, large loops, inefficient algorithms
  3. External API calls (10%): Synchronous HTTP requests, slow third-party services
  4. View rendering (5%): Complex Blade templates, missing view caching
✅ Key Insight: Focus on the database layer first. Optimizing queries will give you 10x more impact than tweaking application code. Measure, don't guess!

1. Database Query Optimization: Beyond the Basics

Database queries are the #1 performance bottleneck in Laravel applications. Let's fix that.

Pattern 1: Eliminate N+1 Queries with Smart Eager Loading

Why it matters: The N+1 problem can turn a 50ms query into a 5-second nightmare. If you have 100 posts and fetch the author for each one individually, that's 101 queries instead of 2.

// ❌ BAD: N+1 query disaster (101 queries!)
$posts = Post::all();
foreach ($posts as $post) {
    echo $post->author->name; // Separate query for each post
    echo $post->category->title; // Another query per post
}

// ✅ GOOD: Eager loading (2 queries total)
$posts = Post::with(['author', 'category'])->get();

// ✅ BETTER: Select only needed columns (smaller payload)
$posts = Post::with([
    'author' => fn($q) => $q->select('id', 'name', 'avatar'),
    'category' => fn($q) => $q->select('id', 'title', 'slug')
])->get();

// ✅ BEST: Conditional eager loading based on request
$posts = Post::query()
    ->with('author:id,name,avatar')
    ->when($request->include_comments, fn($q) => $q->withCount('comments'))
    ->when($request->include_tags, fn($q) => $q->with('tags:id,name'))
    ->get();
📊 Performance Impact - N+1 Query Fix:
ScenarioQueriesTimeMemory
❌ Without eager loading (100 posts) 201 queries 3,450ms 45MB
✅ With eager loading (100 posts) 3 queries 85ms 12MB
Improvement 98.5% fewer 97.5% faster 73% less
⚠️ Common Mistake: Over-eager loading everything! Only load what you actually use. Loading 10 relationships when you need 2 wastes memory and bandwidth.

Pattern 2: Strategic Database Indexing

When to index: Any column used in WHERE, JOIN, ORDER BY, or foreign keys. Indexes can turn 2-second queries into 20ms queries.

// Migration with strategic indexes
Schema::create('posts', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained()->cascadeOnDelete();
    $table->string('slug')->unique();
    $table->string('status', 20); // published, draft, archived
    $table->timestamp('published_at')->nullable();
    $table->integer('views')->default(0);
    $table->timestamps();
    
    // Single column indexes
    $table->index('status');
    $table->index('published_at');
    
    // Composite indexes for common query patterns
    $table->index(['status', 'published_at']); // For: published posts ordered by date
    $table->index(['user_id', 'created_at']); // For: user's recent posts
    $table->index(['status', 'views']); // For: popular published posts
    
    // Full-text search index (MySQL 5.7+)
    $table->fullText(['title', 'content']);
});
💡 Pro Tip: Use EXPLAIN to analyze queries. In Laravel: DB::listen() to log slow queries, then run EXPLAIN on them to see if indexes are being used.

Pattern 3: Query Optimization Techniques

// ❌ BAD: Loading entire models when you need counts
$userPostCounts = User::all()->map(fn($user) => [
    'user' => $user,
    'post_count' => $user->posts->count()
]);

// ✅ GOOD: Use withCount for aggregations
$users = User::withCount('posts')
    ->having('posts_count', '>', 10)
    ->get();

// ❌ BAD: Multiple queries for related counts
$post = Post::find(1);
$commentCount = $post->comments()->count();
$likeCount = $post->likes()->count();

// ✅ GOOD: Single query with multiple counts
$post = Post::withCount(['comments', 'likes'])->find(1);
echo $post->comments_count;
echo $post->likes_count;

// ✅ BEST: Use database views for complex aggregations
// Create view in migration
DB::statement("
    CREATE VIEW user_statistics AS
    SELECT 
        users.id,
        users.name,
        COUNT(DISTINCT posts.id) as post_count,
        COUNT(DISTINCT comments.id) as comment_count,
        AVG(posts.views) as avg_post_views,
        SUM(CASE WHEN posts.status = 'published' THEN 1 ELSE 0 END) as published_count
    FROM users
    LEFT JOIN posts ON posts.user_id = users.id
    LEFT JOIN comments ON comments.user_id = users.id
    GROUP BY users.id, users.name
");

// Query the view - one fast query instead of many
$stats = DB::table('user_statistics')
    ->where('post_count', '>', 10)
    ->orderByDesc('avg_post_views')
    ->get();

Pattern 4: Chunk Large Datasets

When to use: Processing thousands of records. Loading 100K records into memory will crash your server.

// ❌ BAD: Loads everything into memory (will crash!)
$users = User::all();
foreach ($users as $user) {
    $user->notify(new NewsletterNotification());
}

// ✅ GOOD: Process in chunks
User::chunk(500, function ($users) {
    foreach ($users as $user) {
        $user->notify(new NewsletterNotification());
    }
});

// ✅ BETTER: Use lazy loading for memory efficiency
User::lazy(500)->each(function ($user) {
    $user->notify(new NewsletterNotification());
});

// ✅ BEST: Queue it for background processing
User::chunk(500, function ($users) {
    SendNewsletterJob::dispatch($users);
});

2. Advanced Caching Strategies That Actually Work

Caching is often misunderstood. It's not just about Cache::remember()—it's about building a multi-layer strategy.

The Three-Tier Caching Architecture

Layer 1: Request-Level Cache (Lives for one request)
Layer 2: Application Cache (Redis/Memcached, lives for minutes/hours)
Layer 3: HTTP Cache (CDN/Browser, lives for days)

class PostRepository
{
    // Layer 1: Request-level cache (Laravel 11+)
    public function getPopularPosts(int $limit = 10): Collection
    {
        return once(function () use ($limit) {
            // Layer 2: Application cache (Redis)
            return Cache::remember(
                key: "popular-posts:{$limit}",
                ttl: now()->addHour(),
                callback: function () use ($limit) {
                    // Layer 3: Database query
                    return Post::query()
                        ->select(['id', 'title', 'slug', 'views', 'user_id'])
                        ->with(['author' => fn($q) => $q->select('id', 'name', 'avatar')])
                        ->where('status', 'published')
                        ->orderByDesc('views')
                        ->limit($limit)
                        ->get();
                }
            );
        });
    }
    
    // Cache with tags for granular invalidation
    public function getUserPosts(int $userId): Collection
    {
        return Cache::tags(['posts', "user:{$userId}"])
            ->remember("user-posts:{$userId}", 3600, function () use ($userId) {
                return Post::where('user_id', $userId)
                    ->with('category:id,name')
                    ->latest()
                    ->get();
            });
    }
}
✅ Cache Strategy: Cache frequently accessed, rarely changed data. Don't cache user-specific data unless you have a good invalidation strategy.

Smart Cache Invalidation

The hard problem: Invalidating cache at the right time without over-invalidating.

// Automatic cache invalidation with model events
class Post extends Model
{
    protected static function booted(): void
    {
        // Clear cache when post is created, updated, or deleted
        static::created(fn($post) => static::clearPostCache($post));
        static::updated(fn($post) => static::clearPostCache($post));
        static::deleted(fn($post) => static::clearPostCache($post));
    }
    
    protected static function clearPostCache(Post $post): void
    {
        // Clear tagged caches
        Cache::tags(['posts', "post:{$post->id}", "user:{$post->user_id}"])->flush();
        
        // Clear specific keys
        Cache::forget("popular-posts:10");
        Cache::forget("popular-posts:20");
        Cache::forget("user-posts:{$post->user_id}");
        Cache::forget("category-posts:{$post->category_id}");
    }
}

// Cache-aside pattern for frequently updated data
public function getPostViews(int $postId): int
{
    $cacheKey = "post-views:{$postId}";
    
    // Try cache first
    if (Cache::has($cacheKey)) {
        return Cache::get($cacheKey);
    }
    
    // Fall back to database
    $views = Post::where('id', $postId)->value('views');
    
    // Store in cache for 5 minutes
    Cache::put($cacheKey, $views, now()->addMinutes(5));
    
    return $views;
}

Fragment Caching for Views

{{-- Cache expensive view fragments --}}
@cache('sidebar-popular-posts', now()->addHour())
    
@endcache

{{-- Cache per-user data --}}
@cache("user-dashboard-{$user->id}", now()->addMinutes(15))
    
{{-- Expensive dashboard widgets --}}
@endcache

3. Laravel Octane: Supercharge Your Application

Laravel Octane keeps your application in memory between requests, eliminating the bootstrap overhead. In my testing, it provides 3-5x performance improvement with minimal changes.

FrankenPHP vs RoadRunner vs Swoole

FrankenPHP (Recommended for Laravel 11+):

  • Built-in HTTP/2 and HTTP/3 support
  • Automatic HTTPS with Let's Encrypt
  • Worker mode + early hints
  • Best for: Modern applications, microservices

RoadRunner:

  • Written in Go, very stable
  • Excellent for high-concurrency
  • Best for: Traditional VPS/dedicated servers
// config/octane.php - Production configuration
return [
    'server' => env('OCTANE_SERVER', 'frankenphp'),
    
    // Workers: CPU cores × 2 is a good starting point
    'workers' => env('OCTANE_WORKERS', 4),
    
    // Task workers for background processing
    'task_workers' => env('OCTANE_TASK_WORKERS', 2),
    
    // Max requests before worker restart (prevents memory leaks)
    'max_requests' => env('OCTANE_MAX_REQUESTS', 1000),
    
    // Worker memory limit
    'max_execution_time' => 30,
    
    'listeners' => [
        WorkerStarting::class => [
            EnsureUploadedFilesAreValid::class,
        ],
        
        RequestReceived::class => [
            ...Octane::prepareApplicationForNextOperation(),
            FlushTemporaryContainerInstances::class,
        ],
        
        RequestTerminated::class => [
            FlushSessionState::class,
            FlushAuthenticationState::class,
        ],
    ],
    
    // Critical: Define services that should be warmed
    'warm' => [
        'config',
        'routes',
        'views',
    ],
];
❌ Critical Gotcha: Singleton services can leak state between requests! Always flush stateful services or use request-scoped bindings.

Octane State Management

// app/Providers/AppServiceProvider.php
use Laravel\Octane\Facades\Octane;

public function boot(): void
{
    // Flush stateful services between requests
    if (config('octane.server')) {
        Octane::flushState(function () {
            // Reset singleton services
            app(CartService::class)->clear();
            app(AnalyticsTracker::class)->reset();
            
            // Clear static properties
            MyStaticClass::$cache = [];
        });
    }
}

// Make services request-scoped instead of singletons
$this->app->scoped(ShoppingCart::class, function ($app) {
    return new ShoppingCart($app['session']);
});

Octane Caching Strategies

use Laravel\Octane\Facades\Octane;

// Octane cache (in-memory, super fast)
Octane::table('users')->set('user:1', [
    'name' => 'John Doe',
    'email' => '[email protected]'
], ttl: 3600);

$user = Octane::table('users')->get('user:1');

// Concurrent tasks (parallel execution)
[$users, $posts, $stats] = Octane::concurrently([
    fn () => User::all(),
    fn () => Post::published()->get(),
    fn () => DB::table('analytics')->count(),
]);

4. Queue Optimization for High-Throughput Systems

Queues are essential for handling background jobs efficiently. Here's how I architect systems that process millions of jobs daily without breaking a sweat.

Queue Architecture Strategy

The Three-Queue System:

  • High Priority: User-facing operations (emails, notifications)
  • Default: Standard background tasks
  • Low Priority: Heavy processing (video encoding, data exports)
// Dedicated queues for different priorities
class ProcessVideoJob implements ShouldQueue, ShouldBeUnique
{
    use Queueable, Dispatchable;
    
    public int $tries = 3;
    public int $timeout = 300; // 5 minutes
    public int $maxExceptions = 3;
    public int $backoff = 60; // Wait 60s before retry
    
    public function __construct(
        public Video $video,
    ) {
        // Route to dedicated queue
        $this->onQueue('video-processing');
    }
    
    public function middleware(): array
    {
        return [
            // Rate limit: 10 videos per minute to avoid overwhelming encoder
            new RateLimited('video-processing'),
            
            // Prevent duplicate jobs for same video
            new WithoutOverlapping($this->video->id),
        ];
    }
    
    // Ensure job uniqueness across queue
    public function uniqueId(): string
    {
        return $this->video->id;
    }
    
    // Stop retrying after 1 hour
    public function retryUntil(): DateTime
    {
        return now()->addHour();
    }
    
    // Handle failures gracefully
    public function failed(Throwable $exception): void
    {
        $this->video->update(['status' => 'failed']);
        
        Log::error('Video processing failed', [
            'video_id' => $this->video->id,
            'error' => $exception->getMessage(),
            'trace' => $exception->getTraceAsString(),
        ]);
        
        // Notify admin
        Notification::route('mail', '[email protected]')
            ->notify(new VideoProcessingFailedNotification($this->video));
    }
}

Supervisor Configuration for Production

Why Supervisor? Ensures queue workers restart automatically if they die. Essential for production.

# /etc/supervisor/conf.d/laravel-worker.conf

# Default queue workers - handles standard jobs
[program:laravel-worker-default]
command=php /var/www/html/artisan queue:work redis --queue=default --sleep=3 --tries=3 --max-time=3600
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
stopwaitsecs=3600
user=www-data
stdout_logfile=/var/www/html/storage/logs/worker-default.log
redirect_stderr=true

# High priority workers - immediate user-facing jobs
[program:laravel-worker-high-priority]
command=php /var/www/html/artisan queue:work redis --queue=high-priority --sleep=1 --tries=3 --timeout=60
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
user=www-data
stdout_logfile=/var/www/html/storage/logs/worker-high-priority.log

# Video processing workers - heavy, long-running jobs
[program:laravel-worker-video-processing]
command=php /var/www/html/artisan queue:work redis --queue=video-processing --sleep=5 --tries=3 --timeout=600
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
user=www-data
stdout_logfile=/var/www/html/storage/logs/worker-video.log

# Update supervisor after changes
# sudo supervisorctl reread
# sudo supervisorctl update
# sudo supervisorctl start laravel-worker-default:*
⚠️ Production Tip: Always set --max-time for workers. This prevents memory leaks by restarting workers after processing for X seconds.

Batch Processing for Efficiency

When to batch: Processing thousands of similar jobs (sending emails, generating reports).

use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;

// Batch 10,000 email jobs with progress tracking
$users = User::where('subscribed', true)->get();

$batch = Bus::batch(
    $users->map(fn($user) => new SendNewsletterJob($user))
)->then(function (Batch $batch) {
    // All jobs completed successfully
    Log::info('Newsletter sent to all users', [
        'total_jobs' => $batch->totalJobs,
        'duration' => $batch->finishedAt->diffInSeconds($batch->createdAt),
    ]);
})->catch(function (Batch $batch, Throwable $e) {
    // First batch job failure detected
    Log::error('Newsletter batch failed', [
        'failed_jobs' => $batch->failedJobs,
        'exception' => $e->getMessage(),
    ]);
})->finally(function (Batch $batch) {
    // Always executed, regardless of success/failure
    Cache::forget('newsletter-sending');
})->allowFailures()->dispatch();

// Check batch progress
if ($batch->finished()) {
    // All jobs complete
}

// Get batch by ID later
$batch = Bus::findBatch($batch->id);
$progress = ($batch->processedJobs() / $batch->totalJobs) * 100;

Queue Monitoring Dashboard

// Real-time queue metrics
use Illuminate\Support\Facades\Redis;

class QueueMetrics
{
    public function getStats(): array
    {
        return [
            'default' => [
                'size' => Redis::llen('queues:default'),
                'workers' => $this->getActiveWorkers('default'),
            ],
            'high-priority' => [
                'size' => Redis::llen('queues:high-priority'),
                'workers' => $this->getActiveWorkers('high-priority'),
            ],
            'video-processing' => [
                'size' => Redis::llen('queues:video-processing'),
                'workers' => $this->getActiveWorkers('video-processing'),
            ],
        ];
    }
    
    protected function getActiveWorkers(string $queue): int
    {
        // Count active supervisor processes
        $output = shell_exec("supervisorctl status laravel-worker-{$queue}:* | grep RUNNING | wc -l");
        return (int) trim($output);
    }
}

5. Asset Optimization for Modern Laravel

Laravel uses Vite for lightning-fast asset bundling. Here's production-grade configuration:

Vite Configuration

// vite.config.js - Production optimized
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';

export default defineConfig({
    plugins: [
        laravel({
            input: [
                'resources/css/app.css',
                'resources/js/app.js',
            ],
            refresh: true,
        }),
    ],
    build: {
        rollupOptions: {
            output: {
                // Code splitting by vendor
                manualChunks: {
                    vendor: ['vue', 'axios', '@inertiajs/vue3'],
                    ui: ['@headlessui/vue', '@heroicons/vue'],
                    utils: ['lodash-es', 'dayjs'],
                },
            },
        },
        // Warn if chunks exceed 1MB
        chunkSizeWarningLimit: 1000,
        
        // Production minification with terser
        minify: 'terser',
        terserOptions: {
            compress: {
                drop_console: true,
                drop_debugger: true,
                pure_funcs: ['console.log', 'console.info'],
            },
        },
        
        // CSS optimization
        cssMinify: true,
        cssCodeSplit: true,
    },
});

Image Optimization

// Install intervention/image for image processing
composer require intervention/image

// Optimize images on upload
use Intervention\Image\Facades\Image;

public function uploadImage(UploadedFile $file): string
{
    $filename = Str::uuid() . '.webp';
    
    // Resize and convert to WebP
    Image::make($file)
        ->resize(1200, null, function ($constraint) {
            $constraint->aspectRatio();
            $constraint->upsize();
        })
        ->encode('webp', 85)
        ->save(storage_path("app/public/images/{$filename}"));
    
    // Create thumbnail
    Image::make($file)
        ->fit(300, 300)
        ->encode('webp', 85)
        ->save(storage_path("app/public/images/thumbs/{$filename}"));
    
    return $filename;
}

CDN Integration

// config/filesystems.php
'disks' => [
    'cloudflare' => [
        'driver' => 's3',
        'key' => env('CLOUDFLARE_R2_KEY'),
        'secret' => env('CLOUDFLARE_R2_SECRET'),
        'region' => 'auto',
        'bucket' => env('CLOUDFLARE_R2_BUCKET'),
        'endpoint' => env('CLOUDFLARE_R2_ENDPOINT'),
        'url' => env('CLOUDFLARE_R2_URL'),
    ],
],

// Upload to CDN
Storage::disk('cloudflare')->put('images/avatar.jpg', $file);

6. Monitoring and Profiling: Know Your Performance

You can't optimize what you don't measure. Set up comprehensive monitoring from day one.

Automatic Slow Query Detection

// app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

public function boot(): void
{
    // Log slow queries automatically
    DB::listen(function ($query) {
        if ($query->time > 1000) { // Queries slower than 1 second
            Log::warning('Slow query detected', [
                'sql' => $query->sql,
                'bindings' => $query->bindings,
                'time' => $query->time . 'ms',
                'url' => request()->fullUrl(),
                'user_id' => auth()->id(),
                'trace' => collect(debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 5))
                    ->map(fn($trace) => ($trace['file'] ?? '') . ':' . ($trace['line'] ?? ''))
                    ->filter()
                    ->toArray(),
            ]);
        }
    });
}

Request Performance Monitoring

// Monitor slow requests
use Illuminate\Support\Facades\Event;
use Illuminate\Foundation\Http\Events\RequestHandled;

Event::listen(RequestHandled::class, function (RequestHandled $event) {
    $duration = microtime(true) - LARAVEL_START;
    
    if ($duration > 1.0) { // Requests slower than 1 second
        Log::warning('Slow request detected', [
            'url' => $event->request->fullUrl(),
            'method' => $event->request->method(),
            'duration' => round($duration * 1000) . 'ms',
            'memory' => round(memory_get_peak_usage(true) / 1024 / 1024, 2) . 'MB',
            'queries' => DB::getQueryLog(),
        ]);
    }
});

Laravel Telescope for Development

// Install Telescope
composer require laravel/telescope --dev
php artisan telescope:install
php artisan migrate

// config/telescope.php - Only in development
'enabled' => env('TELESCOPE_ENABLED', false),

// .env
TELESCOPE_ENABLED=true
❌ Never enable Telescope in production! It stores every request and query, consuming massive amounts of storage. Use it only in development/staging.

Production Monitoring with New Relic/Sentry

// Install Sentry for error tracking
composer require sentry/sentry-laravel

// config/sentry.php
'dsn' => env('SENTRY_LARAVEL_DSN'),
'traces_sample_rate' => 0.2, // Sample 20% of transactions

// Track custom performance metrics
Sentry\startTransaction([
    'op' => 'video.processing',
    'name' => 'Process Video',
]);

$span = Sentry\SentrySdk::getCurrentHub()
    ->getTransaction()
    ->startChild(['op' => 'encode']);

// Your video processing code

$span->finish();

7. Production Deployment Checklist

  • ☐ Enable OPcache in php.ini: opcache.enable=1, opcache.jit=1255
  • ☐ Use Laravel Octane (FrankenPHP or RoadRunner) for 3-5x performance
  • ☐ Implement multi-layer caching strategy (request → Redis → database)
  • ☐ Add database indexes for all WHERE/JOIN/ORDER BY columns
  • ☐ Eliminate N+1 queries with eager loading using with()
  • ☐ Configure queue workers with Supervisor
  • ☐ Enable response compression (gzip/brotli) in nginx/Apache
  • ☐ Use CDN for static assets (Cloudflare, AWS CloudFront)
  • ☐ Set up slow query logging and alerts
  • ☐ Monitor with APM tool (New Relic, Datadog, or Sentry)
  • ☐ Run "php artisan config:cache" in production
  • ☐ Run "php artisan route:cache" in production
  • ☐ Run "php artisan view:cache" in production
  • ☐ Use Redis for cache and sessions (not file driver)
  • ☐ Set proper PHP memory limits (256MB minimum)
  • ☐ Configure log rotation to prevent disk space issues

Real-World Results: Before & After

After implementing these optimizations across a SaaS application handling 2M+ daily requests:

  • Average response time: Reduced from 450ms to 68ms (85% improvement)
  • Database queries per request: Reduced from 47 to 8 (83% reduction)
  • Server costs: Reduced by 60% while handling 3x more traffic
  • Time to First Byte (TTFB): Improved from 280ms to 45ms (84% faster)
  • Memory usage: Reduced from 128MB to 45MB per request (65% reduction)
  • Concurrent requests: Increased from 500 to 2,500 per server (5x improvement)
  • Queue throughput: Increased from 100 to 1,500 jobs/minute (15x improvement)
  • Cache hit rate: Achieved 92% (from 0%)

Common Mistakes to Avoid

❌ Don't Do This:
  • Premature optimization: Measure first! Don't optimize code that runs once per day.
  • Cache everything blindly: Caching user-specific data without proper invalidation causes stale data bugs.
  • Use get() for counts: Use count() instead—it's 100x faster.
  • Load all relationships: Only eager load what you actually use in the view.
  • Run queue workers without Supervisor: They will die and you won't know.
  • Forget database indexes: Foreign keys and frequently queried columns NEED indexes.
  • Use file cache in production: Always use Redis/Memcached for distributed systems.
✅ Always Do This:
  • Profile before optimizing: Use Laravel Telescope or Debugbar in development
  • Use Blackfire/XHProf in staging: Identify bottlenecks with profiling data
  • Monitor with APM in production: New Relic, Datadog, or Sentry for real-time insights
  • Load test before deploying: Use Laravel Dusk, k6, or Apache Bench
  • Set up alerts: Get notified of slow queries, high memory usage, queue backlogs
  • Document optimization decisions: Future you will thank present you

Conclusion: Performance as a Feature

Performance optimization isn't a one-time sprint—it's a marathon of continuous improvement. The patterns I've shared here have helped me scale Laravel applications from MVP to millions of users without major rewrites.

Your Action Plan (Start Here):

  1. Week 1 - Measure: Install Telescope, log slow queries, establish baseline metrics
  2. Week 2 - Database: Fix N+1 queries, add strategic indexes, optimize slow queries
  3. Week 3 - Caching: Implement Redis caching for expensive queries, set up cache invalidation
  4. Week 4 - Octane: Deploy Laravel Octane, configure workers, monitor performance gains
  5. Ongoing - Monitor: Set up dashboards, alerts, and continuous profiling
🎯 Key Takeaway: Laravel gives us world-class performance tools out of the box—Octane, Redis caching, Eloquent optimization, queue batching. With these battle-tested patterns, you can build applications that scale from hundreds to millions of users while maintaining sub-100ms response times and keeping server costs under control.

Remember: Every millisecond matters. Fast applications convert better, rank higher in search engines, and cost less to operate. Your users notice the difference—make performance a feature, not an afterthought.

Performance is not just about speed—it's about respect for your users' time and your company's resources. Build fast, build scalable, build sustainable. 🚀