
One codebase, five sauce brands

What happens when one food company owns Ken's, Kogi, Sweet Baby Ray's, and Sticky Fingers — and wants every website to move at the same speed.

Ken's Foods retail website — one of five brand sites in the monorepo
Ken's retail — one of five brands sharing a single codebase

Through the Jaybird Group years, I spent a long stretch shipping web properties for Ken’s Foods — a U.S. condiment manufacturer that owns a cluster of consumer brands most Americans have eaten from at some point:

  • Ken’s — the original salad-dressing line
  • Sweet Baby Ray’s — barbecue sauce that appears in roughly every American grocery aisle
  • Sticky Fingers — Memphis-style sauces
  • Kogi Sauce — Korean BBQ sauce (kogisauce.com)
  • ... plus food-service variants of most of the above, which is a different business (restaurants, institutional kitchens) with different pages, different catalogs, different spec sheets

That adds up to more than a dozen distinct web properties owned by one parent, each with its own voice, its own visual identity, and its own stakeholders. The question the Jaybird team had to answer — and the one I ended up owning the technical side of — was how to ship all of them without paying the full cost of a bespoke build a dozen times over.

The honest temptation

The temptation, every time, was to just fork. New brand, new repo, new deploy pipeline, new hosting bill, new pile of dependencies to keep up-to-date forever. That pattern works for the first two brands. It breaks quietly on the third, audibly on the fourth, and by the fifth you’re spending more engineering hours on infrastructure maintenance than on any actual site.

The other temptation was the opposite — one monster CMS with brand-configurable themes. That one sounds responsible but produces the kind of software that can’t say yes to a simple brand request without two weeks of theming arguments.

The middle path

What we eventually settled on, and what I spent much of the later Jaybird years refining, was a monorepo with shared bones and per-brand flesh:

  • Nx monorepo. One workspace, every brand site is an app inside it. Shared libraries for components, design primitives, commerce integrations, analytics, content adapters.
  • Next.js per brand. Each brand is its own Next.js application with its own content, its own styling, its own routing. They are not skins of each other. They are siblings that share a skeleton.
  • AWS Amplify for delivery. Each brand is its own Amplify app, its own domain, its own build pipeline. The monorepo builds what changed, not everything.
  • Internal tools as separate apps. Specsheet generator, order portal, food-service toolbox — all living in the same workspace as the public sites, sharing components, deploying independently.
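Concretely, the workspace shape looks something like the sketch below. The directory names are illustrative, not the actual repo layout:

```text
workspace/
├── apps/
│   ├── kens-retail/           # Next.js app: its own routes, content, styling
│   ├── sweet-baby-rays/
│   ├── sticky-fingers/
│   ├── kogi-sauce/
│   ├── kens-foodservice/      # different catalog, different pages
│   └── specsheet-generator/   # internal tool, same workspace, deploys on its own
├── libs/
│   ├── ui/                    # shared components and design primitives
│   ├── commerce/              # shared commerce integrations
│   └── analytics/
└── nx.json
```

Each `apps/` entry maps to its own Amplify app and domain; the `libs/` entries are the shared bones every brand pulls from.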

The payoff was exactly what you’d want. A product shot component written once worked in four brands. A bug fix in the commerce integration landed in every retail site on the next build. A new brand variant could be spun up in days instead of weeks. But each brand still looked and behaved exactly the way its marketing team wanted it to — because per-brand code was real code, not configuration files in a CMS.
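The "shared bones, per-brand flesh" split can be sketched in a few lines of TypeScript. Everything here is hypothetical — the `BrandTheme` shape, the function name, and the theme values are illustration of the pattern, not the real Ken's code:

```typescript
// Shared library (the "bones"): written once, imported by every brand app.
// All names and values here are hypothetical, for illustration only.
interface BrandTheme {
  name: string;
  accentColor: string; // per-brand visual identity
  ctaLabel: string;    // per-brand voice
}

// One shared function; every brand site reuses it unchanged.
function productShotCaption(theme: BrandTheme, product: string): string {
  return `${product} · ${theme.ctaLabel}`;
}

// Per-brand "flesh": each app supplies its own real code, not CMS config.
const sweetBabyRays: BrandTheme = {
  name: "Sweet Baby Ray's",
  accentColor: "#b3001b",
  ctaLabel: "Shop sauces",
};

const kens: BrandTheme = {
  name: "Ken's",
  accentColor: "#1a5632",
  ctaLabel: "Find a recipe",
};

console.log(productShotCaption(sweetBabyRays, "Original BBQ"));
console.log(productShotCaption(kens, "Ranch"));
```

The point of the split is that a fix to `productShotCaption` lands in every brand on the next build, while each brand's theme stays ordinary code its own team fully controls.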

What that taught me about client work

The lesson I took away has less to do with Nx or Next.js and more to do with the shape of long-running client relationships:

If you work with a client long enough, the right answer eventually stops being “ship a website” and starts being “build the machine that ships their websites.”

That’s the moment you stop being an agency and start being a platform team in disguise. It’s also the moment the work gets interesting for the engineer, because the compounding returns on good architecture finally start landing in your own calendar.

Cybind inherits that instinct. When a problem repeats, I’d rather invest a week in a tool I will use forever than spend the next decade solving it by hand.