Ensuring high frontend performance in composable Next.js apps


In today’s web development landscape, composable architectures are gaining popularity for their flexibility and scalability. However, this approach introduces unique performance challenges. In this article, we will explore strategies and best practices for ensuring high frontend performance in composable applications, using Open Self Service as a practical example.
Open Self Service is a new framework for building enterprise-grade frontend solutions.
Our aim is to create an open-source set of tools for building not only storefronts but also other client-facing frontends, with the main focus on customer self-service apps. We want to be backend-agnostic and, to some extent, eliminate vendor lock-in, so that the frontends you build are safe from backend changes or upgrades. Composable architecture helps us achieve all of this, so we need to introduce it before we show you how we deal with the performance challenges we faced.
Understanding composable architecture
What is a composable architecture?
In a nutshell, composable architecture is an approach to building applications by assembling modular, independent components that work together to create a complete solution — not only in the context of frontend, but across the whole system architecture, especially backend components. In the context of Open Self Service, we implemented this architecture in the form of a framework that enables the integration of multiple API-based services to provide a seamless user experience.
At its core, a few principles characterize composable architecture:
applications are built from discrete, interchangeable components that can be developed, deployed, and scaled independently,
components communicate through well-defined APIs, allowing wide flexibility in implementation details,
systems (like frontend and backend components) are decoupled, which allows each to evolve independently without affecting the other.
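The "well-defined APIs" principle is easiest to see in code. The sketch below uses hypothetical names (OrderProvider, CommerceApiProvider) that are illustrative, not part of the Open Self Service API — the point is that consumers depend only on a contract, so concrete backends stay interchangeable:

```typescript
// Hypothetical contract for an order-data provider; the names here are
// illustrative, not taken from the Open Self Service API.
interface OrderSummary {
  orderId: string;
  total: number;
  status: "pending" | "shipped" | "delivered";
}

interface OrderProvider {
  getOrderSummary(orderId: string): Promise<OrderSummary>;
}

// One backend implementation...
class CommerceApiProvider implements OrderProvider {
  async getOrderSummary(orderId: string): Promise<OrderSummary> {
    // In a real system this would call the commerce backend's API.
    return { orderId, total: 99.5, status: "shipped" };
  }
}

// ...can be swapped for another without touching any consumer code.
class MockOrderProvider implements OrderProvider {
  async getOrderSummary(orderId: string): Promise<OrderSummary> {
    return { orderId, total: 0, status: "pending" };
  }
}

// Consumers depend only on the contract, never on a concrete backend.
async function renderOrderBadge(
  provider: OrderProvider,
  orderId: string
): Promise<string> {
  const summary = await provider.getOrderSummary(orderId);
  return `Order ${summary.orderId}: ${summary.status}`;
}
```

Replacing CommerceApiProvider with MockOrderProvider (or a different vendor's adapter) changes nothing for renderOrderBadge — which is exactly what makes components interchangeable.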
Composable frontends provide significant advantages — you can be quite flexible in replacing backend components without disruption, are free from vendor lock-in through multi-backend integration, and are able to adapt to changing requirements with the ability to scale specific parts based on business demands.
The separation of concerns
In building Open Self Service, we chose to implement a clear separation of concerns between different layers of the application. While there are multiple ways to achieve composable architecture, our approach focuses on:
complete separation of the presentation layer from the data and business logic layers, which allows each to evolve independently and enables the frontend to work seamlessly with multiple backend services,
introduction of an intermediate API composition layer that acts as a bridge between the frontend and various backend APIs. This layer aggregates data from multiple sources and orchestrates data flows between systems. It efficiently combines static content with dynamic data while handling complex logic server-side, reducing browser processing overhead.
This kind of approach ensures backend service changes don't require frontend code modifications (as long as that backend API remains backwards compatible), reducing maintenance overhead and increasing overall flexibility.
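To make the composition layer concrete, here is a minimal sketch. The service calls (fetchOrder, fetchContent) and the payload shape are assumptions for illustration, not the actual Open Self Service implementation — the idea is that the layer fans out to multiple backends in parallel and returns a single frontend-shaped response:

```typescript
// Stand-in DTOs for two hypothetical backend services.
type OrderDto = { id: string; total: number };
type ContentDto = { heading: string };

// Stand-in for a commerce backend call.
async function fetchOrder(id: string): Promise<OrderDto> {
  return { id, total: 42 };
}

// Stand-in for a CMS call providing static content.
async function fetchContent(slug: string): Promise<ContentDto> {
  return { heading: "Your order" };
}

// The composition layer aggregates both sources in parallel and returns
// one payload shaped for the frontend, so the browser issues a single
// request instead of orchestrating multiple backends itself.
export async function getOrderDetailsBlock(orderId: string) {
  const [order, content] = await Promise.all([
    fetchOrder(orderId),
    fetchContent("order-details"),
  ]);
  return {
    heading: content.heading,
    orderId: order.id,
    total: order.total,
  };
}
```

Because the aggregation happens server-side, swapping either backend only requires updating the corresponding fetcher; the payload contract toward the frontend stays the same.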
Performance strategies
Now that we understand the architectural foundation, let’s explore the specific performance strategies that make composable applications fast and responsive. These techniques leverage the modular nature of our blocks system to deliver optimal user experiences.
Leveraging server components
Probably one of the easiest “wins” is to take full advantage of Next.js server components to perform data fetching and initial rendering on the server. Each block in our system follows a clear separation between server and client components:
// order-details-server.tsx
export const OrderDetailsServer = async ({ id, orderId }) => {
  // Fetch block data from the API composition layer
  const data = await sdk.blocks.getOrderDetails({ id: orderId });
  // Pass data to the client component
  return <OrderDetailsClient id={id} {...data} />;
};

// order-details-client.tsx
'use client';
export const OrderDetailsClient = (props) => {
  // Render the actual component
  return <div>...</div>;
};
This pattern ensures that data fetching occurs on the server, reducing client-side JavaScript bundle size and eliminating client-server waterfalls. The server component fetches the necessary data and passes it to a client component that handles interactivity.
Streaming with Suspense
By using server components, we can also easily implement component-level streaming using React's Suspense, allowing parts of the page to load progressively rather than waiting for all data to be available. This approach ensures that slow-loading blocks (e.g. due to slow or complex backend API calls) don't block the rendering of faster ones, and users can start interacting with parts of the page while others are still loading.
Strategic placement of Suspense boundaries is crucial for optimal streaming performance. In our implementation, we place these boundaries at the block level rather than at the page level, allowing for more granular control over the loading experience:
each block has its own Suspense boundary, allowing it to stream independently
more complex blocks can provide a loading state that closely approximates how the component will actually look when it's ready
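The block-level boundaries described above can be sketched as follows. This is a simplified illustration, not the actual Open Self Service code; OrderDetailsSkeleton is a hypothetical placeholder component that mirrors the block's final layout:

```typescript
import { Suspense } from 'react';

// Each block wraps its own server component in a Suspense boundary,
// so it streams in independently of the other blocks on the page.
export const OrderDetailsBlock = ({ id, orderId }) => (
  <Suspense fallback={<OrderDetailsSkeleton />}>
    <OrderDetailsServer id={id} orderId={orderId} />
  </Suspense>
);
```

With the boundary at the block level, a slow getOrderDetails call delays only this block's content; the rest of the page renders and becomes interactive immediately.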
Let's look at the OrderDetails block, which is responsible for showing users the details of one of their orders. It consists of a title, some tiles arranged in a grid, and a list of the products that were purchased.