Performance Workflow and Optimization Techniques for Composable Frontends with Nuxt and Vue

6/3/2025

This article covers more than just common performance issues you might encounter during a project’s lifecycle. It also describes a workflow that can make your life easier for future projects by building performance optimization into your development process from day one.

The Performance Workflow

Similar to Test-driven Development (TDD), you can implement Test-driven Performance Optimization (TDPO). Yes, I coined this term specifically for this blog post, but the concept is solid and practical.

How You Probably Did It Before

Most agencies only focus on website performance at the end of a project. This approach is problematic because by the time all features are complete and interdependent, you have accumulated technical debt and spaghetti code that’s harder to optimize. Additionally, once customers see that all features are ready, they’ll pressure the agency to go live as quickly as possible. This creates a perfect storm: mounting pressure to launch immediately combined with significant performance issues that need addressing.

The Requirements

To implement effective performance monitoring, you’ll need:

  • Automated deployments to staging environments whenever features are merged into the main branch
  • Lighthouse integration in your CI pipeline (for example, using Playwright to run the audits and send the results to Prometheus; a sketch follows this list)
  • Prometheus and Grafana for performance monitoring (example setup here)
  • Alertmanager setup with alerting rules configured in Prometheus
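
As a rough illustration of the Lighthouse-to-Prometheus step, here is a minimal sketch that runs a performance-only audit against the staging URL and pushes the score to a Prometheus Pushgateway. It uses the lighthouse Node module, chrome-launcher, and prom-client instead of Playwright; the metric name, job name, and URLs are assumptions.

```ts
// run-lighthouse.ts - run in CI after a staging deployment (all URLs/names are assumptions)
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';
import { Registry, Gauge, Pushgateway } from 'prom-client';

const STAGING_URL = process.env.STAGING_URL ?? 'https://staging.example.com';
const PUSHGATEWAY_URL = process.env.PUSHGATEWAY_URL ?? 'http://pushgateway:9091';

async function main() {
  // Launch headless Chrome and run a performance-only Lighthouse audit
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(STAGING_URL, {
    port: chrome.port,
    output: 'json',
    onlyCategories: ['performance'],
  });
  await chrome.kill();

  const score = (result?.lhr.categories.performance.score ?? 0) * 100;

  // Expose the score as a gauge and push it to the Pushgateway,
  // where Prometheus scrapes it and Grafana/Alertmanager take over
  const registry = new Registry();
  const gauge = new Gauge({
    name: 'lighthouse_performance_score',
    help: 'Lighthouse performance score (0-100) per audited URL',
    labelNames: ['url'],
    registers: [registry],
  });
  gauge.set({ url: STAGING_URL }, score);

  const gateway = new Pushgateway(PUSHGATEWAY_URL, {}, registry);
  await gateway.pushAdd({ jobName: 'lighthouse-ci' });
  console.log(`Pushed Lighthouse score ${score} for ${STAGING_URL}`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```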

How We Should Approach It in an Ideal World

The reality is that these requirements aren’t difficult to implement—the key is setting them up before the project begins. While you could argue that developers should care about the performance of every feature they build, how do we ensure this actually happens? When new developers join the project, they need to understand these expectations immediately.

The most effective approach is creating a setup that automatically ensures good performance throughout the entire development process. Developers already juggle many responsibilities, so it’s wise to support them with automated pipelines that monitor the entire application.

Think of it as snapshot testing, but for performance: create the pipeline, generate artifacts, run checks after every merge request, compare against previously collected data, and trigger alerts when performance degrades.
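
On top of Prometheus alerting, the comparison against previously collected data can also happen directly in the merge-request pipeline. Here is a minimal sketch, assuming the current score is passed in as a CLI argument and the baseline lives in a pipeline artifact; the file name and the allowed drop are assumptions.

```ts
// compare-baseline.ts - fail the merge-request pipeline on a performance regression
import { readFileSync, writeFileSync, existsSync } from 'node:fs';

// Hypothetical artifact produced by the previous pipeline run
const BASELINE_FILE = 'perf-baseline.json';
// Allow small fluctuations; Lighthouse scores are noisy by nature
const ALLOWED_DROP = 5;

const currentScore = Number(process.argv[2]); // e.g. passed in from the Lighthouse step

if (existsSync(BASELINE_FILE)) {
  const { score: baseline } = JSON.parse(readFileSync(BASELINE_FILE, 'utf8'));
  if (currentScore < baseline - ALLOWED_DROP) {
    console.error(`Performance regressed: ${currentScore} vs. baseline ${baseline}`);
    process.exit(1); // a non-zero exit makes the CI job (and the merge request) fail
  }
}

// No regression: store the new score as the baseline artifact for the next run
writeFileSync(BASELINE_FILE, JSON.stringify({ score: currentScore, date: new Date().toISOString() }));
console.log(`Baseline updated to ${currentScore}`);
```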

Performance Optimization Techniques

Rather than diving deep into every performance optimization, this section serves as a practical checklist of useful hints, tips, and links to additional resources.

General Performance Guidelines

  • Prioritize Core Web Vitals over achieving perfect Lighthouse scores
    • Remember that Lighthouse scores are synthetic (lab) metrics, not real-world (field) measurements; a field-measurement sketch follows this list
  • Minimize unnecessary requests and data such as oversized JSON payloads, unoptimized images, etc.
    • Monitor the number of network requests and file sizes
    • Keep an eye on DOM size and avoid unnecessary DOM elements
  • Use a CDN or Image Service to serve your images efficiently
  • Use Lighthouse strategically to identify problems, but balance the time investment with the actual performance improvements you’ll gain
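
Because Lighthouse only gives you lab numbers, it helps to also collect Core Web Vitals from real visitors. Below is a minimal sketch using the web-vitals package; the /api/vitals endpoint is an assumption and would need a small backend (or your monitoring stack) behind it.

```ts
// Collect Core Web Vitals from real visitors (field data), not just lab runs
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

function report(metric: Metric) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives page unloads; fall back to fetch if it is unavailable
  if (!navigator.sendBeacon || !navigator.sendBeacon('/api/vitals', body)) {
    fetch('/api/vitals', { method: 'POST', body, keepalive: true });
  }
}

onCLS(report);
onINP(report);
onLCP(report);
```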

Shopware 6 Specific Optimizations

  • Use the latest PHP and MySQL versions for optimal performance
  • Implement a reverse proxy like Varnish or Fastly
    • Ensure you’re testing in production mode (cache is disabled in development)
  • Follow official guidelines: Check the performance optimization page in the Shopware 6 documentation for your specific version
    • For stores with large product catalogs, Elasticsearch/OpenSearch is essential—see the setup guide
    • Always disable fine-grained caching (this feature will be removed in version 6.7)
    • With Shopware 6.7, Store API caching was removed, so you need to implement your own caching strategy. There is already an experimental Shopware plugin that can help you with that.
  • Optimize HTML output: Minify your HTML when using the Twig Storefront
  • Use image processing services: Implement a Thumbnail Processor (Image Service) instead of serving WEBP images directly
  • Monitor your instance: Install FroshTools for a comprehensive overview of your Shopware instance via the admin panel
  • Optimize database queries: Raw SQL is faster than the DAL (Data Abstraction Layer/Repositories)
    • If you frequently call repository searches, consider using SQL queries that can combine the required data
    • When using the DAL results in numerous (INNER/LEFT) JOINs in the generated SQL, compare performance against a plain SQL query
  • Implement performance monitoring tools:
    • Blackfire or Xdebug Profiler are excellent for analyzing PHP code performance
    • New Relic or Datadog offer comprehensive metrics collection (both have free tiers)
    • Track client-side JavaScript errors (hosting services like Vercel and Cloudflare provide this functionality)

Nuxt Specific Optimizations

  • Standardize Node.js versions across all environments to ensure consistent performance
    • Upgrading from Node.js 16 to 22 can provide 10–30% better performance in execution speed, memory usage, and startup time, depending on your workload
  • Master Nuxt’s rendering modes and understand when to use each one (a nuxt.config sketch follows this list):
    • Prerender: Builds pages during the build process—excellent for SEO and performance, but can cause build time issues and stale data problems
    • SWR (Stale-While-Revalidate): Serves a cached response and regenerates it in the background once it has become stale, so visitors rarely wait for the server
    • ISR (Incremental Static Regeneration): Generates pages on-demand once until the next deployment, cached on CDN
    • CSR (Client-Side Rendering): Set SSR to false for pages like admin dashboards that don’t need server-side rendering
    • Consider moving SSR requests to CSR when the data isn’t relevant for SEO—otherwise, page rendering will be blocked or take longer
  • Optimize third-party scripts: These are a common performance bottleneck; move them off the main thread (for example into a web worker) so it stays free for your own code
  • Optimize image loading: Images convey your site’s emotions; ensure they load quickly using Nuxt Image
  • Implement Lazy Hydration: Use Lazy Hydration to control your app’s chunk size effectively
  • Use the Nitro Cache API: Have a look at this blog post for more details; a minimal handler sketch is included below.
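
Nuxt lets you assign these rendering modes per route via routeRules in nuxt.config.ts. The sketch below is a minimal, hypothetical mapping; the paths and cache durations are assumptions, not recommendations.

```ts
// nuxt.config.ts - one rendering mode per route group (paths are only examples)
export default defineNuxtConfig({
  routeRules: {
    '/': { prerender: true },       // built once at build time
    '/products/**': { swr: 3600 },  // cached response, revalidated in the background after 1 hour
    '/blog/**': { isr: true },      // generated on demand, cached on the CDN until the next deployment
    '/admin/**': { ssr: false },    // pure client-side rendering for the dashboard
  },
});
```

And here is a minimal Nitro cached handler, assuming a hypothetical upstream API:

```ts
// server/api/products.get.ts - cached server handler via the Nitro Cache API
export default defineCachedEventHandler(
  async () => {
    // Hypothetical upstream call; Nitro caches the result instead of fetching on every request
    return await $fetch('https://api.example.com/products');
  },
  { maxAge: 60 * 10, swr: true }, // keep for 10 minutes, then revalidate in the background
);
```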

Vue Specific Optimizations

Vue’s official documentation provides comprehensive performance best practices. For advanced debugging, consider using the Vue DevTools. Several of the tips below come together in the sketch after this list.

  • Use v-show vs v-if appropriately
    • Use v-show for frequently toggled elements (only toggles CSS display)
    • Use v-if for conditionally rendered content that changes rarely (actually adds/removes DOM elements)
  • Optimize list rendering with proper key attributes
    • Always use unique, stable keys in v-for loops
    • Avoid using array indices as keys when the list order can change
    • This helps Vue’s virtual DOM efficiently track and update list items
  • Use v-memo for expensive list items (Vue 3.2+)
    • Cache rendering results for list items that rarely change
    • Particularly useful for large lists with complex item templates
  • Implement lazy loading for components
    • Use defineAsyncComponent() to split large components into separate chunks
    • Load components only when they’re actually needed
    • Example: const HeavyComponent = defineAsyncComponent(() => import('./HeavyComponent.vue'))
  • Optimize computed properties and watchers
    • Prefer computed properties over methods for derived data (they’re cached)
    • Use shallowRef() and shallowReactive() for large objects when deep reactivity isn’t needed
    • Avoid expensive operations in computed properties; consider debouncing watchers for user input
  • Use v-once for static content
    • Render expensive static content only once with the v-once directive
    • Useful for large lists of static data or complex templates that never change
  • Implement virtual scrolling for large lists
    • Use libraries like vue-virtual-scroller or @tanstack/vue-virtual for lists with hundreds or thousands of items
    • Only render visible items in the viewport to reduce DOM size
  • Optimize component props and emits
    • Use defineProps() with TypeScript for better tree-shaking
    • Avoid passing large objects as props; use provide/inject for deeply nested data
    • Use shallowRef for props that contain large objects
  • Bundle optimization techniques
    • Use tree-shaking friendly imports: import { ref } from 'vue' instead of import Vue from 'vue'
    • Implement code splitting at the route level
    • Use dynamic imports for heavy libraries: const library = await import('heavy-library')
  • Memory management best practices
    • Clean up event listeners, timers, and subscriptions in onUnmounted()
    • Avoid creating reactive objects in render functions
    • Use markRaw() for objects that should never be reactive (like third-party class instances)
  • Template optimization
    • Minimize the number of root-level reactive dependencies in templates
    • Use v-pre to skip compilation for static content
    • Avoid complex expressions in templates; move them to computed properties
  • Use the Composition API efficiently
    • Group related reactive state and logic together
    • Use readonly() to prevent accidental mutations of props or store state
    • Leverage toRefs() when destructuring reactive objects to maintain reactivity
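
To make several of these points concrete, here is a small, hypothetical single-file component that combines a few of them (async chunk splitting, shallowRef, cached computed properties, stable keys with v-memo, v-once, and listener cleanup); all names and data are made up, and data loading is omitted.

```vue
<script setup lang="ts">
// HeavyList.vue - a sketch combining several of the tips above (all names are made up)
import { computed, defineAsyncComponent, onMounted, onUnmounted, ref, shallowRef } from 'vue';

// Split a heavy component into its own chunk; it is only fetched when first rendered
const HeavyChart = defineAsyncComponent(() => import('./HeavyChart.vue'));

interface Item { id: string; label: string; price: number }

// shallowRef: the list is replaced wholesale, so deep reactivity is unnecessary
const items = shallowRef<Item[]>([]);
const showChart = ref(false);

// computed is cached; it only re-runs when `items` is replaced
const total = computed(() => items.value.reduce((sum, item) => sum + item.price, 0));

function onResize() {
  /* ... */
}

onMounted(() => window.addEventListener('resize', onResize));
// Clean up listeners to avoid the memory leaks described in the next section
onUnmounted(() => window.removeEventListener('resize', onResize));
</script>

<template>
  <!-- v-once: the header never changes, so render it a single time -->
  <h2 v-once>Product overview</h2>

  <!-- stable keys + v-memo: a row only re-renders when its own label or price changes -->
  <ul>
    <li v-for="item in items" :key="item.id" v-memo="[item.label, item.price]">
      {{ item.label }}: {{ item.price }}
    </li>
  </ul>

  <p>Total: {{ total }}</p>

  <!-- v-if + async component: the chunk is only downloaded the first time the chart is shown -->
  <HeavyChart v-if="showChart" />
  <button @click="showChart = !showChart">Toggle chart</button>
</template>
```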

How to Debug Memory Leaks (Quick Guide)

The good news is that everything you need to identify memory leaks is already built into your browser.

You have two primary tools for debugging memory leaks in Chrome-based browsers:

  • The Performance tab in browser dev tools (F12)
  • The Memory tab in browser dev tools (F12)

The key question is: What should you look for?

  • Detached DOM nodes: Elements removed from the DOM but still referenced in JavaScript (see the sketch after this list)
  • Event listeners: Not properly removed when components are destroyed
  • Timers and intervals: setTimeout/setInterval not cleared properly
  • Closures: Functions holding references to large objects
  • Global variables: Accidentally created global variables
  • Circular references: Objects referencing each other, preventing garbage collection
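
To make the first few causes concrete, here is a minimal, framework-free sketch of a leaky widget and its fixed counterpart; the function names are made up for illustration.

```ts
// A widget that leaks: the interval and the global listener keep the whole
// widget (and its DOM node) reachable even after it is "destroyed"
function mountLeakyWidget(container: HTMLElement) {
  const node = document.createElement('div');
  container.appendChild(node);

  const onResize = () => { node.textContent = `${window.innerWidth}px`; };
  window.addEventListener('resize', onResize);
  const timer = setInterval(onResize, 1000);

  return function destroy() {
    node.remove(); // removed from the DOM ...
    // ... but the listener and interval above still reference `node`,
    // so it shows up as a detached DOM node in a heap snapshot
  };
}

// The fixed teardown releases every reference so the node can be garbage collected
function mountWidget(container: HTMLElement) {
  const node = document.createElement('div');
  container.appendChild(node);

  const onResize = () => { node.textContent = `${window.innerWidth}px`; };
  window.addEventListener('resize', onResize);
  const timer = setInterval(onResize, 1000);

  return function destroy() {
    clearInterval(timer);
    window.removeEventListener('resize', onResize);
    node.remove();
  };
}
```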

Example Workflow for Finding Memory Leaks

Use the Performance tab first to identify the problem (big picture view), then use the Memory tab to find the root cause.

  1. Detection Phase (Performance Tab)

    • Purpose: Identify IF you have a memory leak and WHEN it occurs
    • Process:
      • Open the Performance tab
      • Start recording
      • Perform normal user interactions for 2-3 minutes
      • Stop recording
      • Examine the memory timeline (blue line)
    • What to look for: Memory that continuously increases without dropping back down (sawtooth patterns are normal; continuous upward trends are not)
  2. Isolation Phase (Performance Tab)

    • Purpose: Narrow down which specific actions cause the leak
    • Process:
      • Record shorter sessions focusing on specific features
      • Test one action at a time (e.g., opening/closing modals, navigating pages)
      • Identify the exact user action that causes memory to spike without releasing
  3. Investigation Phase (Memory Tab)

    • Purpose: Find the root cause and specific objects causing the leak
    • Process:
      • Heap Snapshots: Take before/after snapshots of the problematic action
      • Allocation Timeline: Record during the problematic action to see real-time allocations
      • Analysis: Use the comparison view to identify accumulating objects
  4. Root Cause Analysis (Memory Tab)

    • Purpose: Understand WHY objects aren’t being garbage collected
    • Process:
      • Examine the Retainers view for leaked objects
      • Check for detached DOM nodes
      • Identify event listeners, timers, or closures holding references

After fixing the identified issues, repeat this workflow to confirm the leak has been resolved.

Summary

Performance optimization is a marathon, not a sprint. The most successful projects are those that embed performance considerations into their DNA from day one, rather than treating it as an afterthought or a last-minute fire drill.

The Test-driven Performance Optimization (TDPO) workflow we’ve outlined isn’t just another methodology—it’s a fundamental shift in how we approach web development. By establishing automated performance monitoring, setting up alerting systems, and creating feedback loops early in the development process, you’re building a safety net that catches performance regressions before they compound into major problems.

Remember the key principles:

  • Start early: Set up your performance monitoring pipeline before writing your first feature
  • Automate everything: Manual performance checks are inconsistent and often forgotten under pressure
  • Make it visible: Performance metrics should be as visible as your build status—use dashboards, alerts, and regular reports
  • Iterate continuously: Small, consistent improvements over time beat massive optimization sprints
  • Educate your team: Everyone should understand how their code impacts performance, not just the “performance expert”

The techniques and optimizations covered in this article are your toolkit, but the workflow is your foundation. Without a solid process that makes performance a first-class citizen in your development cycle, even the best optimization techniques become reactive band-aids rather than proactive solutions.

Think of performance optimization like physical fitness: you can’t get in shape by working out intensively for one week before summer. It requires consistent, daily habits and long-term commitment. The same applies to web performance—consistent monitoring, regular optimization, and a culture that values performance will always outperform last-minute heroics.

Start implementing this workflow on your next project. Your future self (and your users) will thank you when you’re not scrambling to fix performance issues while the client is breathing down your neck for a launch date.
