You Might Not Need Module Federation: Orchestrate your Microfrontends at Runtime with Import Maps


Managing microfrontends in a complex, feature-rich app can become a tedious task and easily turn the app into a Frankenstein’s monster when there’s no clear strategy involved.

Using third-party tools like Webpack Module Federation helps to streamline the building and loading of microfrontends, but leads to vendor lock-in, which can be a problem.

Import Maps can be seen as a web native alternative to Webpack Module Federation to manage microfrontends at runtime. In this article, we will:

• Explore the concept of Import Maps

• Build a demo app

• Summarize the pros & cons

Import Maps in a nutshell

The concept of Import Maps was born in 2018 and made a long journey before being declared a new web standard, implemented by Chrome in 2021 and later by other browsers.

Import Maps let you define a JSON where the keys are ES module names and the values are versioned/digested file paths, for example:

<script type="importmap">
{
  "imports": {
    "my-component": "/assets/my-component.8f4a2b.js"
  }
}
</script>

Such mapping can be resolved directly in the browser, so you can build apps with ES modules without the need for transpiling or bundling. This frees you from needing Vite, Webpack, npm, or similar.

Import maps allow us to write:

import 'my-component';

instead of the following:

import '/assets/my-component.8f4a2b.js';

and let the browser resolve the actual path at runtime.

Advanced Import Maps features

You can reuse an import specifier (for example lodash below) to refer to different versions of the same library by using scopes. This allows you to change the meaning of an import within a given scope. As a result, any module loaded from the scoped path will use lodash-es version 3.9.3, while all other modules will use version 4.17.21.

<script type="importmap">
{
  "imports": {
    "my-component": "/assets/my-component.8f4a2b.js",
    // trailing slash allows subpath imports like "component-library/button.js"
    "component-library/": "/assets/component-library/",
    "lodash": "/node_modules/lodash-es/lodash.js", ⬅
    "lazyload": "IntersectionObserver" in window ? "./lazyload.js" : "./lazyload-fallback.js" // set when constructing the map dynamically
  },
  "scopes": {
    "/legacy/": {
      "lodash": "/node_modules/lodash-es-3.9.3/lodash.js" ⬅
    }
  }
}
</script>

You can also construct an import map dynamically based on conditions: the lazyload example above shows how you can load different modules depending on whether the IntersectionObserver API is supported.
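The dynamic construction can be sketched as a small inline script that builds the map object from a feature test and injects it before any module script runs. This is a minimal sketch; the module paths are illustrative:

```javascript
// Build the import map object based on a feature test. In the browser,
// the resulting <script type="importmap"> must be inserted before any
// <script type="module"> is evaluated.
function buildImportMap(hasIntersectionObserver) {
  return {
    imports: {
      lazyload: hasIntersectionObserver
        ? "./lazyload.js"
        : "./lazyload-fallback.js",
    },
  };
}

// Browser-only part (shown as comments, since it needs the DOM):
// const map = buildImportMap("IntersectionObserver" in window);
// const script = document.createElement("script");
// script.type = "importmap";
// script.textContent = JSON.stringify(map);
// document.currentScript.after(script);
```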

Demo app

In this article, we bring the idea of Import Maps further by placing an Import Map between the host app and microfrontends and applying the dependency inversion principle. This makes the host app not directly dependent on a concrete microfrontend version or its location, but rather on its abstraction via name or alias.

We are going to build an online store that has only one, but highly customizable assortment type: T-shirt.

Step 1: Outline the Architecture

There is an arbitrary number of microfrontends assigned to the development teams, and each team is free to choose its tech stack, build and CI/CD tools. The “only” constraint is that each build pipeline must produce 3 artefacts: an ESM bundle, a Manifest, and other Static Assets.

The lightweight Nest.js Import Map Resolver server has two main roles: storing and updating the importmap, and handling the submission of JS assets. Single-spa has a similar solution available.

The Publisher will read your Manifest, extract the bundle filename as well as externalized dependencies and publish them to the Import Map Resolver.

The Assets Server is used as a Web-enabled storage to host JS assets. To store images, videos, and other assets we can choose an arbitrary Storage, for example, an Amazon S3 bucket. A CDN is used to serve third-party libs and frameworks as ES modules.

ESM bundle

Your production-ready application ESM bundle is generated by Webpack, Vite, Rollup or any other bundler of your choice. For simplicity of the setup, CSS Injected By JS plugin for Vite is used along with the scoped styles to build a single ES module with injected CSS.

If your build produces more than one bundle (for example, due to code splitting), you have two options:

  • concatenate them after the build, for example via concat
  • alter the Publisher to loop over the multiple entry chunks and add a prefix, e.g.: my-component:main, my-component:polyfill, and so on.


Manifest

This is a JSON file that contains the mapping of non-hashed asset filenames to their hashed versions. If you are using Vite, you just need to add manifest: true to your Vite config. This will produce the following file in the /dist of your project:

{
  "main.js": {
    "file": "assets/index.fb458d74.js",
    "isEntry": true
  },
  "views/cart.js": {
    "file": "assets/foo.869aea0d.js"
  }
}

The generated Manifest will be used by the Publisher to know the mapping of your microfrontend unique name to its ESM bundle.
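The Publisher’s core step can be sketched as a small function that finds the hashed entry chunk in the Vite manifest and maps the microfrontend’s unique name to it. The function name and the base URL are illustrative, not part of the actual Publisher:

```javascript
// Find the entry chunk (isEntry: true) in a Vite manifest and produce
// an import-map entry for this microfrontend.
function toImportMapEntry(name, manifest, baseUrl) {
  const entry = Object.values(manifest).find((chunk) => chunk.isEntry);
  if (!entry) throw new Error("no entry chunk found in manifest");
  return { [name]: `${baseUrl}/${entry.file}` };
}

const manifest = {
  "main.js": { file: "assets/index.fb458d74.js", isEntry: true },
  "views/cart.js": { file: "assets/foo.869aea0d.js" },
};

const entry = toImportMapEntry("my-component", manifest, "https://assets.example.com");
// entry: { "my-component": "https://assets.example.com/assets/index.fb458d74.js" }
```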

Static assets

Everything else, such as images, videos and other static files required by your microfrontend.

Step 2: Define the UI and split into microfrontends

Our online store demo app will have three views: Home, Product & Cart:

Vue is used as a core “metaframework” to get out-of-the-box routing, simple state management with Pinia, and Vite as a bundler. It is not necessary at all to use a “metaframework”; moreover, during the build you’ll get errors from Vite’s internal import-analysis plugin because of unresolved imports (good news: there is a solution for that, see “Challenges → Metaframework”).

To demonstrate how several microfrontends can co-exist together on the same page, they are built with four different frameworks. To make each app’s setup look similar, Vite template presets are used to generate Vue, React, Lit and Svelte microfrontends that are compiled into Web Components. You may consider splitting your app by functional area and building your microfrontends around business domains, such as Order, Product, etc.

Step 3: Build the app

The full source of the Demo app can be found here.

Common problems & solutions

Take control away from the bundler when resolving imports

How do bundlers work? If you ignore the implementation details and go to the highest level of abstraction, they concatenate the whole jungle of JS modules into one big chunk that is minified, uglified, and tree-shaken to get rid of unused code. Simple concatenation alone wouldn’t work, though: you need to indicate the entry point and make sure you don’t have modules that import themselves – cyclic dependencies. Most bundlers solve this by building an abstract syntax tree. For example, Rollup does it with Acorn.

Using microfrontends resolved via Import Maps introduces a challenge for your bundler, which normally needs to know your dependencies at compile time, not at runtime. We need to tell Rollup to stop building the tree once a dependency from the Import Map is met and make the current module a leaf node.

Luckily, Vite, Rollup and Webpack have options to take control away from the bundler and let the browser resolve the specified imports by providing their names in the configuration.

import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      // let the browser resolve these specifiers via the import map
      external: ["my-component"],
    },
  },
});

Load import map dynamically

Specs say that “any import maps must be present and successfully fetched before any module resolution is done”. Essentially, it means that the importmap must be inserted in the DOM earlier than any other async script.

Vite internally uses the build-html plugin, which produces an index.html with the entry point added to the <head> section via a <script type="module" src="bundle.js"></script> tag. This is not what we want. Instead, we would like to execute a script that fetches the import map first, adds it to the page, and only then loads the app script.

To build a custom index.html, the Async Import Map plugin for Vite was created, which internally uses Rollup Plugin HTML. The plugin extracts the entry point script from the list of generated assets (by looking for isEntry: true), stashes it, loads the import map from the specified URL, and then unstashes and appends the entry point script, giving control back to your app.

See full source here.
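The essential invariant the plugin enforces can be sketched statically: the import map script must precede the entry module script in the generated HTML. This is a simplified sketch (the real plugin fetches the map at runtime); the function name and paths are illustrative:

```javascript
// Emit the bootstrap markup in the only valid order: the import map
// first, then the entry module script that depends on it.
function buildBootstrapHtml(importMap, entrySrc) {
  return [
    `<script type="importmap">${JSON.stringify(importMap)}</script>`,
    `<script type="module" src="${entrySrc}"></script>`,
  ].join("\n");
}

const html = buildBootstrapHtml(
  { imports: { "my-component": "/assets/index.fb458d74.js" } },
  "/assets/app.js"
);
```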

Shared state and communication

Essentially, there are only 6 ways to share state across microfrontends:

  • Windowed Observable uses the global window object as the medium to share the data, often wrapped into a pub-sub library
  • Web storage, such as Local Storage, Session Storage or Cookies
  • URL via query / params
  • In-memory (e.g. Redux)
  • Backend (session or persisted state)
  • Props and Custom events / Callbacks

Everything else you may come across could be just an abstraction on top of these methods. Here is a good summary of their pros and cons.
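The first option, the windowed observable, can be sketched as a tiny pub-sub wrapper around a shared global object (window in the browser; a plain object stands in for it here, and all names are illustrative):

```javascript
// Minimal pub-sub stored on a shared global object so that independently
// bundled microfrontends all talk to the same instance.
function createEventBus(globalObj) {
  const KEY = "__mfeEventBus__";
  globalObj[KEY] = globalObj[KEY] || { handlers: {} };
  const bus = globalObj[KEY];
  return {
    subscribe(topic, handler) {
      (bus.handlers[topic] = bus.handlers[topic] || []).push(handler);
    },
    publish(topic, payload) {
      (bus.handlers[topic] || []).forEach((handler) => handler(payload));
    },
  };
}

// Two microfrontends sharing the same global see each other's events:
const fakeWindow = {};
const received = [];
createEventBus(fakeWindow).subscribe("add-to-cart", (item) => received.push(item));
createEventBus(fakeWindow).publish("add-to-cart", "t-shirt");
// received is now ["t-shirt"]
```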

Since the goal is to use as many native web capabilities as possible and avoid vendor lock-in, we can stick to Props & Custom Events. One important note: to let an event “escape” the Shadow DOM, we need to set bubbles: true and composed: true. This way we make sure events propagate through the parent-child hierarchy as well as the shadow tree. A nice explanation can be found here.

  this.dispatchEvent(
    new CustomEvent("select-color", {
      bubbles: true,
      composed: true,
      detail: "blue",
    })
  );

Shared dependencies

To share your microfrontend dependencies, you can declare them as “external” by providing them in the configuration as follows:

import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      external: ["react", "react-dom", "react-to-webcomponent"],
      output: {
        globals: {
          react: "react",
          reactDom: "react-dom",
          reactToWebComponent: "react-to-webcomponent",
        },
      },
    },
  },
});
Here we are telling Rollup to not bundle React dependencies as well as to provide global variables for them.

But how do we deal with dependency mismatches when one or more microfrontends are using the same lib, but with different versions? Let’s say Footer and Header are two React major versions apart. As mentioned before, we can use scopes:

<script type="importmap">
{
  "imports": {
    "header": "/path/to/header/index.5475c608.js",
    "footer": "/path/to/footer/index.6087f008.js",
    "react": "https://esm.sh/react@18.2.0"
  },
  "scopes": {
    "/path/to/header/": {
      "react": "https://esm.sh/react@16.14.0"
    },
    "/path/to/footer/": {
      "react": "https://esm.sh/react@18.2.0"
    }
  }
}
</script>

Alternatively, we can provide different import specifiers:

<script type="importmap">
{
  "imports": {
    "header": "/path/to/header/index.5475c608.js",
    "footer": "/path/to/footer/index.6087f008.js",
    "react@16": "https://esm.sh/react@16.14.0",
    "react@18": "https://esm.sh/react@18.2.0"
  }
}
</script>

If you need more sophisticated logic to build your import map, the Import Map Resolver is the place to put it. Let’s say one of your microfrontends publishes a new version that uses react@17.0.1, but you already have react@17.0.0 in your importmap. In this case, the Import Map Resolver would remove the older version and replace it with the newer one: it is only one patch version ahead, so backward compatibility can be assumed.
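That merge rule can be sketched as follows, assuming plain major.minor.patch version strings and import-map entries keyed like react@17.0.0 (the function names are illustrative, not part of the actual Resolver):

```javascript
// Keep only the newest version of a dependency in the import map,
// assuming backward compatibility within the same entry.
function mergeDependency(importMap, name, version, url) {
  const imports = { ...importMap.imports };
  const existing = Object.keys(imports).find((key) => key.startsWith(`${name}@`));
  if (existing) {
    const existingVersion = existing.split("@")[1];
    if (!isNewer(version, existingVersion)) {
      return { imports }; // incoming version is not newer: keep the current entry
    }
    delete imports[existing];
  }
  imports[`${name}@${version}`] = url;
  return { imports };
}

// Compare two major.minor.patch strings numerically.
function isNewer(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] > pb[i];
  }
  return false;
}
```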

Library microfrontend

Microfrontends can be published as a Custom Components library.

Example using Vite and Svelte:

import { defineConfig } from 'vite';
import { svelte } from '@sveltejs/vite-plugin-svelte';

export default defineConfig({
  build: {
    rollupOptions: {
      // one entry point per exposed component (paths are illustrative)
      input: ['src/header.ts', 'src/footer.ts'],
    },
  },
  plugins: [
    svelte({
      compilerOptions: {
        customElement: true,
      },
    }),
  ],
});
This will produce two separate chunks, one for the Header and one for the Footer. Vite supports library Mode for Vue and other frameworks.

Without going into the configuration details of each library, the general principle is to alter your main.ts entry point (or each of your entry points if there are many) so that it defines and exposes your microfrontend as a Custom Element.

import MyComponent from './src/my-component';

customElements.define("my-component", componentWrapperFn(MyComponent));

where componentWrapperFn is a function provided by your (or a third-party) library that returns a custom element constructor extending HTMLElement. It could be the native defineCustomElement in the case of Vue, or the third-party reactToWebComponent from react-to-webcomponent. There are good summaries available of how to build Web Components with different libraries and frameworks.


Metaframework

As mentioned in the Demo app section, a metaframework is used to glue the microfrontends together. Choosing no framework is also a valid option: Import Maps perfectly support this case by resolving imports directly in the browser. The choice of Vue is mainly to avoid writing boilerplate code for routing and to keep the container components lean, with little to no low-level DOM manipulation. There is a good article explaining why we need the container components and how to structure them.


Routing

Routing between container components/pages is covered by the metaframework, in case you are using one. If not, you can opt for Navigo as a simple dependency-free JS router.

In rare cases, when you need navigation within individual microfrontends, things get tricky: in the end, you only have one address bar. You can map the flattened structure of your compound URL state to enable two-level routing with the help of your framework. There is a library for Angular that uses a URL Serializer to intercept and manipulate browser URLs.

That being said, this approach also introduces an unwanted tight coupling among microfrontends: the host app shouldn’t know the details of individual microfrontends routing. When a team needs to change or add a new URL, the host app would need to be adjusted and redeployed. Instead, try avoiding two-level routing at the stage of application design. To better understand all the consequences of this approach, you may want to read the book Micro Frontends in Action by Michael Geers, chapter “App shell with two-level routing”.


Pros

Let’s summarize the benefits that Import Maps offer:

• flexibility for microfrontend teams (each team can have its own tech stack, CI/CD, coding guidelines, infrastructure: everything before final artefacts are built)

• easy step-by-step migration of existing codebases by replacing components or entire pages with microfrontends

• the host app is lean and detached from the development of microfrontends and focuses on the composition of pages, providing the data and handling events

• the host app is aware of neither the existence nor the implementation details of your microfrontends: the only “contract” is the API of your microfrontend (props/events)

• import map entries are lazy-loaded: no JS is downloaded before you actually import()

• you may not need any build tools at all: import maps work in the browser at runtime

• it takes seconds to update your app (by changing entry in the import map)

• it takes seconds to rollback


Cons

Let’s summarize the drawbacks that using Import Maps brings:

• Import Maps are not supported in some browsers; however, there are polyfills

• the overall number of bytes downloaded when using microfrontends is unavoidably higher than with a monolith. Even though you can externalize and share your app dependencies, you cannot avoid some code duplication in individual microfrontends

• not suitable for small and medium-sized projects, where a single core framework would be a better fit


Conclusion

Using Module Federation, in comparison to Import Maps, has a major drawback: vendor lock-in, which makes your product dependent on another product: Webpack. All your microfrontends as well as your host app must comply with it and be on a compatible version. You also cannot avoid the compilation step, while Import Maps can be used directly in the browser.

At the same time, new web standards are emerging and replacing the need for third-party products. While the standard is being developed further and gaining new features, such as multiple import map support and a programmatic API, you can already start using Import Maps today with the help of ES Module Shims or SystemJS.


Running multiple versions of a Stencil design system without conflicts

Microfrontends and reusable Web Components are state-of-the-art concepts in Web Development. Combining both in complex, real-world scenarios can lead to nasty conflicts. This article explores how to run components in multiple versions without conflicts.

Microfrontend Environments (MFE)

In an MFE different product teams work on separate features of a larger application. One team might be working on the search feature, while another team works on the product detail page. Ultimately, all features will be integrated together in the final application.

These features range from being very independent to being closely coupled to other features on the page. Generally speaking, teams try to work as independently as possible, meaning also that they can choose which package dependencies or even frameworks they use – and which versions thereof.

Custom Elements

Web Components are a popular way of sharing and reusing components across applications and JavaScript frameworks today. Custom Elements lie at the heart of Web Components. They can be registered like this:

customElements.define('my-component', MyComponent);

You’re now ready to use <my-component> in the DOM. There can only be one Custom Element for a given tagName.

The Problem

Let’s imagine the following situation: The MFE features should reuse certain components, more specifically they should reuse the Web Components provided by the Design System (DS). The DS is being actively developed and exists in different versions.

As each feature is independent, different teams might use different versions of the Design System. Separate features are developed in isolation and work fine with their specific version of the DS. Once multiple features are integrated in a larger application we’ll have multiple versions of the DS running. And this causes naming conflicts because each Custom Element can only be registered once:

Feature-A uses <my-component> in version 1.2.3 and Feature-B uses <my-component> in version 2.0.0.

Oops! Now what? How do we address this problem? Is there a technical solution? Or maybe a strategic solution?

Forcing feature teams to use the same DS version

One way to address this issue is to let the “shell application” provide one version of the DS. All integrated features would no longer bring their own DS version, but make use of the provided one. We no longer have multiple DS versions running.

While this might work in smaller environments, it’s unrealistic for many complex environments. All DS upgrades would now need to be coordinated and take place at exactly the same time. In our case dictating the version is not an option.

The Design System

The problem is common when reusing Custom Elements in a complex MFE. It’s not specifically created by Custom Elements but it’s one that can be addressed by making small adjustments in the right places of the Custom Elements.

Our hypothetical Design System called “Things” has been built with Stencil – a fantastic tool for building component libraries. All components use Shadow DOM. Some components are quite independent, like <th-icon>. Others are somewhat interconnected, like <th-tabs> and <th-tab>. Let’s check out the tabs component and its usage:

<th-tabs>
  <th-tab active>First</th-tab>
  <th-tab>Second</th-tab>
</th-tabs>

You can find the full code of the components in their initial state here

A Stencil solution

The first thing we’ll do is enable the tagNameTransform flag in our stencil.config.ts:

export const config: Config = {
  // ...
  extras: {
    tagNameTransform: true,
  },
  // ...
};

This allows us to register Custom Elements with a custom prefix or suffix.

import { defineCustomElements } from 'things/loader';

// registers custom elements with a tagName suffix
defineCustomElements(window, {
  transformTagName: (tagName) => `${tagName}-v1`,
});

Great! Feature teams can now register their own custom instances of the components. This prevents naming conflicts with other components, and each feature team can work a lot more independently. Alternatively, the “shell application” could provide version-specific instances of the DS.

<!-- using v1 version of the tabs component -->
<th-tabs-v1>
  <th-tab-v1 active>First</th-tab-v1>
</th-tabs-v1>

<!-- using v2 version of the tabs component -->
<th-tabs-v2>
  <th-tab-v2 active>First</th-tab-v2>
</th-tabs-v2>

Let’s imagine having 2 versions available. Feature teams can now pick from the provided options without having to register their own custom versions.

We’re not done, yet

Looking at <th-tabs-v1> we can see that the icon component is no longer rendered. And the click handler even throws an error! So what’s going on here?

Wherever a component references other components we’ll potentially run into problems because the referenced components might not exist.

  • <th-tab-v1> tries to render <th-icon> internally, but <th-icon> does not exist.
  • <th-tab-v1> tries to apply styles to the th-icon selector which no longer selects anything
  • on click, <th-tab-v1> calls a function of <th-tabs>, but <th-tabs> does not exist
  • <th-tabs-v1> provides a method setActiveTab which no longer finds any <th-tab> child element

For every reference to another custom tagName we need to consider that the tagName might have been transformed using transformTagName. As transformTagName executes at runtime our component also needs to figure out the correctly transformed tagNames during runtime. It would be great if Stencil provided a transformTagName function that we could execute at runtime. Unfortunately, that’s not the case. Instead, we can implement a (slightly ugly) solution ourselves.

transformTagName at runtime

export const transformTagName = (
  tagNameToBeTransformed: string,
  knownUntransformedTagName: string,
  knownUntransformedTagNameElementReference: HTMLElement
): string => {
  const actualCurrentTag = knownUntransformedTagNameElementReference.tagName.toLowerCase();
  const [prefix, suffix] = actualCurrentTag.split(knownUntransformedTagName);
  return prefix + tagNameToBeTransformed + suffix;
};
This function is not pretty. It requires 3 parameters to return a transformed tagName:

  • tagNameToBeTransformed: the tagName that we want to transform, e.g. th-tabs
  • knownUntransformedTagName: the untransformed tagName of another component, e.g. th-tab
  • knownUntransformedTagNameElementReference: a reference to an element with that untransformed tagName, e.g. this.el

Usage example:

transformTagName('th-tabs', 'th-tab', this.el); // 'th-tabs-v1'

Note that this.el is a reference to the host element of the Custom Element created by the Element Decorator
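Since the function relies only on the element’s live tagName, its behavior can be demonstrated with a plain object standing in for this.el. Here is a JS sketch of the TypeScript function above:

```javascript
// JS sketch of transformTagName: derive the prefix/suffix from the
// element's actual (possibly transformed) tag and apply them to the target.
const transformTagName = (tagNameToBeTransformed, knownUntransformedTagName, elementRef) => {
  const actualCurrentTag = elementRef.tagName.toLowerCase();
  const [prefix, suffix] = actualCurrentTag.split(knownUntransformedTagName);
  return prefix + tagNameToBeTransformed + suffix;
};

// Inside a <th-tab-v1> instance, this.el.tagName is "TH-TAB-V1":
transformTagName("th-tabs", "th-tab", { tagName: "TH-TAB-V1" }); // → "th-tabs-v1"

// With no transformation applied, the tag passes through unchanged:
transformTagName("th-tabs", "th-tab", { tagName: "TH-TAB" }); // → "th-tabs"
```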

Fixing our components

Using our transformTagName function we’re now able to figure out which tagName transformation needs to be considered during runtime.

TypeScript call expressions 

A Custom Element tagName may be referenced in querySelector(tagName), closest(tagName), createElement(tagName) or other functions. Before we call these, we need to find out the transformed tagName.

// Before
this.tabsEl = this.el.closest('th-tabs');

// After
const ThTabs = transformTagName('th-tabs', 'th-tab', this.el);
this.tabsEl = this.el.closest(ThTabs);

JSX element rendering

// Before
public render() {
  return <th-icon />;
}

// After
public render() {
  const ThIcon = transformTagName('th-icon', 'th-tab', this.el);
  return <ThIcon class="icon" />;
}

Please note the .icon class, which will be required for the next step.

CSS Selectors

// before
th-icon { /* styles */ }

// after
.icon { /* styles */ }

Wrapping it up

And we’re done!

With a few small changes, we’ve adjusted the codebase to support running multiple versions of the same Custom Elements. This is a huge step for complex Microfrontend Environments. It gives feature teams more freedom in choosing the versions they want to use and releasing them when they want to release. It avoids couplings of features or feature teams. It also reduces coordination and communication efforts.

Find the code of the referenced example project in this Github repo. The second commit shows all required adjustments to support tagName transformations.

Performance considerations

Loading and running multiple versions of the same components at the same time comes with a performance cost. The number of simultaneously running versions should therefore be kept to a minimum.