INSIGHTS The Tadarida project: Fast and flexible like a bat with React and gRPC

9 min read

When I did my first internship at a web agency in Hamburg in 2010, I never thought I would be "Head of Web Development" at a listed company almost twelve years later. But that's exactly what my job is today. As Head of Web Development, my teams and I are responsible for the strategic direction and the technical decisions for our front-end architecture at ABOUT YOU. This article is about the challenges we face in this area and our recent learnings. After all, our last few months have been shaped by one project in particular: the "Tadarida Project", with which we fundamentally rebuilt our frontend architecture. In case you are wondering what "Tadarida" stands for: "Tadarida brasiliensis" is the Latin name of a Brazilian bat species that flies at speeds of up to 160 km/h. The goal of our project was to rebuild our aging, redundant front-end architecture from the ground up, making it blazingly fast and flexible at the same time - just like a Tadarida bat.

BEFORE: Redundant business logic. Slow development

When we were thinking about starting the Tadarida project, we actually already had a pretty good frontend stack. Originally, the shop had been built with the PHP framework Laravel, but my colleagues had already decoupled the frontend and rebuilt it on React over the previous few years, just before my first day at ABOUT YOU. This happened in a record-breaking project time of only four months, but it also came with some legacy issues. In many places, the team had opted for pragmatic solutions that were now, around three years later, reaching their limits as growth continued.

There was no sensible strategy by which we developed and delivered our business logic for the desktop, mobile and app versions of our shop, nor was there a unified data layer that governed how our frontends communicated with the backends. A lot of things were duplicated and triplicated, making it increasingly difficult for us to keep track of our code and to build and run new features across our different frontends and platforms.

Initially, we had a separate backend for each platform, and later we even accessed the desktop API from the mobile website to leverage existing functionality - but this only added to the complexity. At some point, we realized it couldn't go on like this. Out of this unmanageable complexity, the Tadarida project was finally born.

Category pages in our shop are a major entry point and have a lot of complexity under the hood.

AFTER: Centralized business logic. Maximum speed and flexibility

The goals behind the Tadarida project can be summarized as follows:

  • Build a central API with all business logic
  • Remove redundant business logic from the frontend components
  • Connect frontend components to the API via a data layer and gRPC
  • Restructure all code (frontend and backend) and establish strong standards

During the implementation, we wanted to refrain from using large React frameworks or similar tools and consider exactly which libraries we really needed. Motivating this decision was the fact that none of the frameworks available at the time really offered the kind of data fetching we needed. Above all, the ability to place modules next to each other on the page that fetch their own data independently of each other was important to us - similar to Apollo with GraphQL, but without unnecessarily increasing the complexity with a new query language.

Monorepo with frontend code and isolated backend

To avoid redundancies and complicated development steps in the future, we had to figure out how to structure our code base so that multiple frontend teams could keep as good an overview of our code as possible, yet still work closely with the backend. Our choice fell on a monorepo strategy in which backend dependencies could be installed.

Since our backend and frontend teams work closely together but have very different workflows, we decided not to put everything in one repository, but to divide the frontend into many logical layers.

One of these layers is the gRPC client. It is generated from the schema definition, which is stored in a separate repository. This gives us a tight contract between the frontend and the backend, and the backend can also be used by other frontends, e.g. the ABOUT YOU mobile app. In case of breaking changes - e.g. if new fields are expected in a response - the build pipelines in the frontend and backend repositories fail, and both sides have to be updated accordingly before the pipelines pass again.
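To illustrate how such a contract fails loudly, here is a minimal sketch of what a generated TypeScript message type and a matching runtime guard could look like. The names (`GetArticleRequest`, `GetArticleResponse`, `priceInCents`) are hypothetical, not our real schema.

```typescript
// Hypothetical shape of a client type generated from a shared .proto schema.
interface GetArticleRequest {
  articleId: number;
}

interface GetArticleResponse {
  articleId: number;
  name: string;
  // If the backend schema adds or removes a field here, regenerating the
  // client changes this interface - and every frontend call site that
  // constructs or consumes the response stops compiling until it is
  // updated. This is the "tight contract" failing loudly at build time.
  priceInCents: number;
}

// A tiny runtime guard mirroring the compile-time contract.
function isGetArticleResponse(value: unknown): value is GetArticleResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.articleId === "number" &&
    typeof v.name === "string" &&
    typeof v.priceInCents === "number"
  );
}
```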

Furthermore, business logic should always be separated from the UI in our projects. Each lives in a separate package in the monorepo - "logic-components" and "ui-components". The strict separation of the different frontend layers thus also enables a distinction between base packages and working packages. In most cases - for example, when working on new features or doing maintenance - only working packages such as "ui-components", "logic-components" and "application" need to be touched. Base packages like "router", "data-fetcher" or "i18n" (short for "internationalization") provide utility code to make building the actual features as easy and standardized as possible - they are only touched if a basic feature has not been implemented yet.
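As a rough sketch of this separation - package and function names here are illustrative assumptions, not the real packages - the logic side exports pure, markup-free functions, and the UI side only renders:

```typescript
// --- "logic-components" (illustrative): business logic, no markup ---
interface Article {
  priceInCents: number;
  currency: string;
}

// Pure pricing logic that a logic package could export.
function formatArticlePrice(article: Article): string {
  const amount = (article.priceInCents / 100).toFixed(2);
  return `${amount} ${article.currency}`;
}

// --- "ui-components" (illustrative): rendering only, delegates logic ---
// Sketched as a plain render function so the separation is visible
// without pulling React into the example.
function renderPriceTag(article: Article): string {
  return `<span class="price">${formatArticlePrice(article)}</span>`;
}
```

Because the UI side never computes anything itself, either half can be implemented, reviewed or replaced independently.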

This approach allows us to break each new feature into smaller changes. Ideally, only the UI is implemented in a first step and the logic later, or the other way around. Storing the logic in the monorepo alongside the frontend supports this development workflow in the best possible way.

And if a feature is more complex and a developer touches many packages, the merge request can be split before the code review to make it easier for the reviewer. This saves us time and nerves as a team.

In general, it was important for us to reflect on our ways of working while building the new architecture. One of our goals was not only to make the site faster for the end user, but also to increase the development speed and thus become more efficient as a development team.

Clean ArticleDetailPage without complex logic. That's how we love it!

At this point, a note for readers with other requirements: we made a conscious decision to rely on gRPC for our use case (a high-traffic e-commerce platform) and thus on a strong coupling of the frontend with the server-side business logic. This doesn't have to be the best solution for everyone. For example, if you are running an application where the interfaces change often or a lot of flexibility should live in the client (e.g. if many frontend teams or clients work independently with the same API), a GraphQL-based approach may be the better choice.

gRPC as a uniform interface

Many developers are not yet using gRPC as an alternative to REST or GraphQL in the frontend. At the start of our project, we did a lot of research and ran tests with the various interfaces. Roughly speaking, we use gRPC as a mixture of REST and GraphQL: the .proto definitions in gRPC give you clear, type-safe interfaces like a GraphQL schema, but without the complexity of running a GraphQL endpoint or having to write complicated GraphQL queries in the frontend. In addition, gRPC is extremely fast thanks to the Protobuf messaging format optimized for microservices. Only in the web browser is there an overhead for decoding the gRPC binary format into JavaScript objects, but it is so small that it is practically negligible. We also use Go for our new backend API, which comes with excellent gRPC support. In the frontend, we rely on grpc-web.

A service protocol definition for the article detail page (.proto file). It connects the frontend with the gRPC backend. Everything defined here can easily be called from a React component.
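Since the file shown in the image is not reproduced here, the following is a hypothetical sketch of what such a service definition could look like - all names are illustrative, not the real ABOUT YOU schema:

```proto
// Hypothetical article detail service definition (illustrative names only).
syntax = "proto3";

package articledetail.v1;

message GetArticleRequest {
  int64 article_id = 1;
}

message GetArticleResponse {
  int64 article_id = 1;
  string name = 2;
  int64 price_in_cents = 3;
}

service ArticleDetailService {
  // A unary RPC that a React component can call via the generated client.
  rpc GetArticle(GetArticleRequest) returns (GetArticleResponse);
}
```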

Good to know: we originally used the official library grpc-web to generate our gRPC TypeScript clients. The generated clients are `class`-based and therefore difficult for us to code-split. Accordingly, we implemented our own client generator during the course of the project. This enabled us to further increase performance for the end user and to improve the developer experience through strict types.
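The core idea of a function-based (rather than class-based) generated client can be sketched like this - a simplified assumption, not our actual generator output. Each RPC becomes a standalone function, so bundlers can code-split and tree-shake unused calls:

```typescript
// A transport abstraction standing in for the grpc-web wire layer.
type Transport = (method: string, request: unknown) => Promise<unknown>;

// What a generator could emit per RPC: a plain factory, no shared class,
// so importing one call pulls in nothing else.
function createUnaryCall<Req, Res>(method: string) {
  return (transport: Transport, request: Req): Promise<Res> =>
    transport(method, request) as Promise<Res>;
}

interface GetArticleRequest { articleId: number }
interface GetArticleResponse { articleId: number; name: string }

// "Generated" call for a hypothetical article detail service.
const getArticle = createUnaryCall<GetArticleRequest, GetArticleResponse>(
  "/articledetail.v1.ArticleDetailService/GetArticle"
);
```

Because `getArticle` is a free function with fully typed request and response parameters, strict types come for free and dead RPCs disappear from the bundle.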

Data fetching with React Query

For data fetching with React, we initially chose SWR from Vercel, which satisfied our needs at the time. Later, however, we switched to React Query because this library seemed even more suitable for our use case.

A quick refresher on libraries like React Query and SWR: these JavaScript libraries are about making the communication between the frontend application and the API (the backend) as smooth as possible for the user. This is done, for example, by updating data asynchronously in the background and displaying it live - without showing the user a loader. By using such a library, we don't have to store the data from the backend in another store (as you would in Redux, for example) but can directly load, save and modify the data and update the UI based on the response in the React render function.
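The core mechanism these libraries share can be sketched in a few lines. This is not the React Query API - just an illustration of the stale-while-revalidate idea: serve cached data immediately, refresh in the background.

```typescript
type Fetcher<T> = () => Promise<T>;

// Minimal stale-while-revalidate cache (illustrative, not React Query).
class QueryCache {
  private store = new Map<string, unknown>();

  // Returns the cached value right away (if any) and always triggers a
  // background refresh that updates the cache for the next render.
  read<T>(key: string, fetcher: Fetcher<T>): T | undefined {
    const cached = this.store.get(key) as T | undefined;
    void fetcher().then((fresh) => this.store.set(key, fresh));
    return cached;
  }

  // Called when a background refresh (or hydration) completes.
  write<T>(key: string, value: T): void {
    this.store.set(key, value);
  }
}
```

The first `read` for a key returns `undefined` (nothing cached yet); once data has been written, subsequent reads return it instantly while a fresh fetch runs in the background - which is why the user never sees a loader on revisit.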

React Query itself has no support for server-side rendering. We therefore decided to implement our own wrapper for the server. We can now write the same code for client-side and server-side rendering; the rest is done by the "data-fetcher" package.
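A minimal sketch of what such a server wrapper could do, assuming a simple prefetch-serialize-hydrate flow - the function names are hypothetical, not the real "data-fetcher" API:

```typescript
type CacheState = Record<string, unknown>;

// Server: run all registered fetchers, collect their results keyed by
// query name, so they can be embedded in the HTML payload.
async function prefetchOnServer(
  fetchers: Record<string, () => Promise<unknown>>
): Promise<CacheState> {
  const state: CacheState = {};
  for (const [key, fetch] of Object.entries(fetchers)) {
    state[key] = await fetch();
  }
  return state;
}

function serializeState(state: CacheState): string {
  return JSON.stringify(state);
}

// Client: hydrate the cache from the embedded state instead of fetching
// again, so the first render needs no loader.
function hydrateState(serialized: string): CacheState {
  return JSON.parse(serialized) as CacheState;
}
```

With the cache pre-filled on the client, the same component code works in both environments: on the server the fetchers run before rendering, on the client the hydrated cache answers immediately.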

ProductSlider showing the use of the DataFetcher component and GrpcRequests.

Gradual migration

At first, between two and four developers worked on the Tadarida project on the side; later, more than five developers were assigned full-time for about a year to rebuild the foundation of our shop. But how did we actually get the new front-end architecture live? With a live project serving millions of sessions, it's not that easy: many features have to be re-implemented, and a complete rebuild would be even more costly. Fortunately, we use a page-based rendering approach, so we can decide granularly for which pages we want to use the new stack. First of all, it was important for us to convert the most important pages to the new architecture - especially the category and article detail pages. Many people think that the start page in particular should be super fast, but it is the category and article detail pages that have the most traffic and a high degree of innovation. With the new frontend architecture, we can now develop new features extremely quickly and gradually expand the innovative concepts to other pages without being under too much time pressure.
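A page-based rollout switch like this can be sketched as follows - the route patterns and page names are illustrative assumptions, not our real router:

```typescript
type Stack = "new" | "legacy";

// Pages already migrated to the new architecture.
const migratedPages = new Set(["category", "article-detail"]);

// Map a request path to a logical page (hypothetical route patterns).
function resolvePage(path: string): string {
  if (path.startsWith("/c/")) return "category";
  if (path.startsWith("/p/")) return "article-detail";
  if (path === "/") return "home";
  return "unknown";
}

// Decide per page whether the new or the legacy stack renders it.
function stackFor(path: string): Stack {
  return migratedPages.has(resolvePage(path)) ? "new" : "legacy";
}
```

Migrating another page then amounts to implementing it in the new stack and adding its name to the migrated set - no big-bang cutover required.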


For me personally, the Tadarida project was an extremely big win. It challenged us as a team and took a lot of energy in some places, but it's fantastic how we've grown together and evolved. And seeing the new architecture live makes us want more. If you are currently in a similar phase in your company or are working on a relaunch of your frontend architecture, here are my most important lessons: 

  • Get as many people from the company as possible and actively discuss your ideas - especially those with strong opinions. Other developers will question your ideas and give you valuable feedback that you can implement right at the beginning and not when it's too late.
  • Invest time as early as possible and test different approaches to find the right technology for you. Build prototypes.
  • Always plan phases for "readjustments" in the ongoing project. For example, we kept discussing different approaches when implementing the filters because we found inconsistencies. That was often tedious at the time, but today we are grateful for it.
  • Be aware that the new architecture must also reach your developers. Talk about your project and explain the benefits of the new architecture to others. Start documenting early (screencasts, examples, ...) and act as role models (e.g. through pair programming, workshops, ...).

We will therefore have to explain, improve and communicate a lot in the coming months in order to be able to switch completely to the new architecture. But I'm already looking forward to it!

If you want to participate in this exciting project or one of the many others led by our Shop Applications team, check out our open positions on our jobs page.