Can you recommend INSPIRING books for software engineers?

“One of my favorite books is “The Phoenix Project”. It is not a technical book (so not really a programming book), but rather about IT culture and DevOps. I liked it a lot because you can read it like any other novel, and it shows how IT can be compared to traditional manufacturing processes. It also introduces you to “The Three Ways” of DevOps. On top of that, I would also include “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations”. For more recommendations, you can check out this reading list:

Clean Code. It’s a bit older, but a classic that shows how to write code that is easy to read, understand, and change. A lot of the time, people like to write complex code that only one person understands; other members of the team then have to spend hours trying to understand it. In my opinion, it’s a mandatory book for any FE or BE developer.”

“I find “The Programmer’s Brain” very interesting. It’s basically about how our brain works and how we think about code.”

“If you’re a fan of “The Phoenix Project”, you should also read “The Unicorn Project” and of course “The DevOps Handbook”. I can also recommend “The C++ Programming Language”.”

“It’s not a programming book, but the concepts apply to broad system architecture: “The Fifth Discipline”. A good one is also Gayle Laakmann McDowell’s “Cracking the Coding Interview”; for process there’s “Peopleware: Productive Projects and Teams” by Tom DeMarco; and if you want to learn about liveness in Java, there’s “Java Concurrency in Practice”. My favorite chapter about programming is in “Clean Code: A Handbook of Agile Software Craftsmanship”. And let’s not forget the classics “Thinking in Java” and “Effective C++: 55 Specific Ways to Improve Your Programs and Designs”.”

Picture from Unsplash by Fang-Wei Lin

6 Reasons Why You Should Enjoy Reviewing Pull Requests

This article was originally published by Raphael Marques on Medium.

Reviewing Pull Requests (PRs) can be a tedious and dull task, especially when you have other tasks on your to-do list. However, it’s also one of the most efficient ways to maintain the quality of your codebase, improve your knowledge, and help your colleagues.

Photo by Christin Hume on Unsplash

The key to making tedious tasks interesting is to have a good reason for doing them. That’s why I’ll give you six reasons why you should enjoy reviewing PRs:

1 — Help new joiners and junior members

Showing new team members how you maintain your codebase by reviewing their PRs helps them understand your practices and standards. The same goes for junior colleagues.

2 — There’s more than one way to solve a problem

Reviewing your colleagues’ PRs can provide you with new perspectives on how to solve problems. Even if their approach is not what you would have done, it can still be a valuable addition to your toolbox. Keep an open mind and be pragmatic.

3 — Avoid the “Looks good to me” (LGTM) mentality

Instead of just giving a quick LGTM comment, take the time to provide good feedback. Even if you think there’s nothing to improve, try to find something interesting to highlight about their code. If you find something wrong, be kind and straightforward, explaining why you think it’s wrong and offering potential solutions.

4 — Keep your Pull Requests small

Breaking down your code changes into smaller chunks will make it easier for your peers to review them. This approach will help you catch any potential bugs, code smells, or duplication. It’s collaborative work, so making it easier for everyone involved will benefit the team in the long run.

At first glance this might seem counterintuitive (“more PRs, more review work, right?”). However, when we’re faced with a large amount of code to review, we tend to overlook important details, such as potential bugs, code smells, and duplication, which makes our PR reviews less effective.

5 — Review your own Pull Requests

Before asking others to review your code, review it yourself. Take a break and come back to it with fresh eyes. Check for any unnecessary code, such as console.log statements or forgotten commented-out code. Highlight any decisions you’re unsure about, so the reviewer knows to pay extra attention.

6 — Take your time

Reviewing PRs requires careful attention to detail. Take it slow and read every line of code. Don’t hesitate to ask questions if you’re unsure about something. Remember, there are no stupid questions. The most important thing is to keep the codebase and the team sharp.


In conclusion, reviewing PRs may seem like a tedious and dull task, but it is an essential part of maintaining the quality of your codebase and building a stronger team. By taking the time to review PRs thoroughly, you can help new team members, gain new perspectives, provide valuable feedback, and improve the overall code quality. This collaborative effort not only benefits the team but also contributes to personal growth by developing a deeper understanding of best practices and standards. Therefore, reviewing PRs should be an enjoyable and fulfilling task that is worth the investment of time and effort.


You Might Not Need Module Federation: Orchestrate your Microfrontends at Runtime with Import Maps


Managing microfrontends in a complex, feature-rich app can become a tedious task and, when there’s no clear strategy involved, easily turn the app into a Frankenstein’s monster.

Using third-party tools like Webpack Module Federation helps to streamline the building and loading of microfrontends, but leads to vendor lock-in, which can be a problem.

Import Maps can be seen as a web native alternative to Webpack Module Federation to manage microfrontends at runtime. In this article, we will:

• Explore the concept of Import Maps

• Build a demo app

• Summarize the pros & cons

Import Maps in a nutshell

The concept of Import Maps was born in 2018 and came a long way before being declared a new web standard, implemented by Chrome in 2021 and by some other browsers since.

Import Maps let you define a JSON where the keys are ES module names and the values are versioned/digested file paths, for example:

<script type="importmap">
{
  "imports": {
    "my-component": ""
  }
}
</script>

Such a mapping can be resolved directly in the browser, so you can build apps with ES modules without the need for transpiling or bundling. This frees you from needing Vite, Webpack, npm, or similar tools.

Import maps allow us to write:

import 'my-component';

instead of the following:

import '';

and let the browser resolve the actual path at runtime.
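Under the hood, the browser consults the map whenever it encounters a bare specifier. A minimal sketch of that lookup (simplified for illustration; the real spec algorithm also handles longest-prefix matching, scopes, and URL normalization):

```javascript
// Simplified sketch of import-map resolution, for illustration only:
// exact specifier matches win; entries ending in "/" act as path prefixes.
function resolveSpecifier(specifier, imports) {
  if (specifier in imports) return imports[specifier];
  for (const key of Object.keys(imports)) {
    if (key.endsWith("/") && specifier.startsWith(key)) {
      return imports[key] + specifier.slice(key.length);
    }
  }
  // anything unmapped falls through, e.g. relative or absolute URLs
  return specifier;
}
```

So resolving "my-component" returns whatever versioned file path the map currently points at, which is exactly what makes instant updates and rollbacks possible later on.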

Advanced Import Maps features

You can reuse an import specifier (for example lodash below) to refer to different versions of the same library by using scopes. This allows you to change the meaning of an import within that given scope. As a result, any module under the scoped path will be using lodash-es version 3.9.3, while all other modules will use version 4.17.21.

<script type="importmap">
{
  "imports": {
    "my-component": "",
    // allows import { my-component } from "component-library" syntax
    "component-library/": "",
    "lodash": "", ⬅
    "lazyload": "IntersectionObserver" in window ? "./lazyload.js" : "./lazyload-fallback.js"
  },
  "scopes": {
    "": {
      "lodash": "" ⬅
    }
  }
}
</script>

You can also construct an import map dynamically based on conditions: the example above taken from this article shows how you can load different modules based on the support of IntersectionObserver API.
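Since the import map is just JSON inside a script tag, it can be assembled in plain JavaScript before injection. A hedged sketch of the IntersectionObserver example above (the function name and structure are illustrative, not from the original article):

```javascript
// Build the import map object based on a feature check. The resulting map
// must be added to the DOM before any module scripts run.
function buildImportMap(hasIntersectionObserver) {
  return {
    imports: {
      lazyload: hasIntersectionObserver
        ? "./lazyload.js"
        : "./lazyload-fallback.js",
    },
  };
}

// In the browser you would then inject it like this:
// const s = document.createElement("script");
// s.type = "importmap";
// s.textContent = JSON.stringify(buildImportMap("IntersectionObserver" in window));
// document.head.appendChild(s);
```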

Demo app

In this article, we bring the idea of Import Maps further by placing an Import Map between the host app and microfrontends and applying the dependency inversion principle. This makes the host app not directly dependent on a concrete microfrontend version or its location, but rather on its abstraction via name or alias.

We are going to build an online store that has only one, but highly customizable assortment type: T-shirt.

Step 1: Outline the Architecture

There is an arbitrary number of microfrontends assigned to the development teams; each team is free to choose its tech stack, build, and CI/CD tools. The “only” constraint is to make sure each build pipeline produces three artefacts: an ESM bundle, a Manifest, and other Static Assets.

The lightweight Nest.js Import Map Resolver server has two main roles: storing and updating the importmap, and handling the submission of JS assets. Single-spa has a similar solution available.

The Publisher will read your Manifest, extract the bundle filename as well as externalized dependencies and publish them to the Import Map Resolver.

The Assets Server is used as web-enabled storage to host JS assets. To store images, videos, and other assets we can choose an arbitrary storage, for example an Amazon S3 bucket. A CDN is used to serve third-party libs and frameworks as ES modules.

ESM bundle

Your production-ready application ESM bundle is generated by Webpack, Vite, Rollup or any other bundler of your choice. For simplicity of the setup, CSS Injected By JS plugin for Vite is used along with the scoped styles to build a single ES module with injected CSS.

If your build produces more than one bundle (for example, due to code splitting), you have two options:

  • concatenate them after the build, for example via concat
  • alter the Publisher to loop over the multiple entry chunks and add a prefix, e.g. my-component:main, my-component:polyfill, and so on.


Manifest

This is a JSON file that contains the mapping of non-hashed asset filenames to their hashed versions. If you are using Vite, you just need to add manifest: true to your Vite config. This will produce the following file in the /dist of your project:

  "main.js": {
    "file": "assets/index.fb458d74.js",
    "isEntry": true
  "views/cart.js": {
    "file": "assets/foo.869aea0d.js"

The generated Manifest will be used by the Publisher to know the mapping of your microfrontend unique name to its ESM bundle.
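The Publisher’s core job can be sketched as a small transformation: find the entry chunk in the Manifest and map the microfrontend’s unique name to its hashed bundle path. The function name and the assets base URL below are assumptions for illustration, not the demo’s actual code:

```javascript
// Turn a Vite-style manifest into a single import map entry for this
// microfrontend. Illustrative sketch; names are assumptions.
function manifestToImportMapEntry(name, manifest, assetsBaseUrl) {
  const entry = Object.values(manifest).find((chunk) => chunk.isEntry);
  if (!entry) throw new Error(`no entry chunk found for ${name}`);
  return { [name]: assetsBaseUrl + entry.file };
}
```

With the manifest shown above, passing the microfrontend name "my-component" and a base URL of "/assets-server/" would map "my-component" to "/assets-server/assets/index.fb458d74.js", ready to be submitted to the Import Map Resolver.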

Static assets

Everything else, such as images, videos and other static files required by your microfrontend.

Step 2: Define the UI and split into microfrontends

Our online store demo app will have three views: Home, Product & Cart:

Vue is used as a core “metaframework” to get out-of-the-box routing, simple state management with Pinia, and Vite as a bundler. Using a “metaframework” is not necessary at all; moreover, during the build you’ll get errors from Vite’s internal import-analysis plugin because of unresolved imports (good news: there is a solution for that, see “Challenges → Metaframework”).

To demonstrate how several microfrontends can co-exist together on the same page, they are built with four different frameworks. To make each app’s setup look similar, Vite template presets are used to generate Vue, React, Lit and Svelte microfrontends that are compiled into Web Components. You may consider splitting your app by functional area and building your microfrontends around business domains, such as Order, Product, etc.

Step 3: Build the app

The full source of the Demo app can be found here.

Common problems & solutions

Take control away from the bundler when resolving imports

How do bundlers work? If you ignore the implementation details and go to the highest level of abstraction, they concatenate the whole jungle of JS modules into one big chunk that is minified, uglified, and tree-shaken to get rid of unused code. Simple concatenation alone wouldn’t work, though: you need to indicate the entry point and make sure you don’t have modules that import themselves – cyclic dependencies. Most bundlers solve this by building an abstract syntax tree. For example, Rollup does it with Acorn.

Using microfrontends resolved via Import Maps introduces a challenge for your bundler, which normally needs to know your dependencies at compile time, not at runtime. We need to tell Rollup to stop building the tree once a dependency from the Import Map is met and make the current module a leaf node.

Luckily, Vite, Rollup and Webpack have options to take control away from the bundler and let the browser resolve the specified imports by providing their names in the configuration.

import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      external: [
        // names of the modules that should be resolved via the import map
      ],
    },
  },
});

Load import map dynamically

Specs say that “any import maps must be present and successfully fetched before any module resolution is done”. Essentially, it means that the importmap must be inserted in the DOM earlier than any other async script.

Vite internally uses the build-html plugin, which produces an index.html with the entry point added via a <script type="module" src="bundle.js"> tag in the <head> section. This is not what we want. Instead, we would like to execute a script that fetches the import map first, adds it to the page, and then loads the app script.

To build a custom index.html, the Async Import Map plugin for Vite was created, which internally uses Rollup Plugin HTML. The plugin extracts the entry point script from the list of generated assets (by looking for isEntry: true), stashes it, loads the import map from the specified URL, and then unstashes and appends the entry point script, giving control back to your app.
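The essential invariant here is ordering: the import map script must end up in the document before the module entry point. A toy illustration of that ordering (not the plugin’s actual API; the function name is made up):

```javascript
// Render an HTML fragment where the import map precedes the module entry
// point, as the spec requires. Purely illustrative.
function renderBootstrapHtml(importMap, entrySrc) {
  return [
    `<script type="importmap">${JSON.stringify(importMap)}</script>`,
    `<script type="module" src="${entrySrc}"></script>`,
  ].join("\n");
}
```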

See full source here.

Shared state and communication

Essentially, there are only six ways to share state across microfrontends:

  • Windowed Observable uses the global window object as the medium to share the data, often wrapped into a pub-sub library
  • Web storage, such as Local Storage, Session Storage or Cookies
  • URL via query / params
  • In-memory (e.g. Redux)
  • Backend (session or persisted state)
  • Props and Custom events / Callbacks

Everything else you may come across could be just an abstraction on top of these methods. Here is a good summary of their pros and cons.

Since the goal is to use as many native web capabilities as possible and avoid vendor lock-in, we can stick to Props & Custom Events. One important note: to let an event “escape” from the Shadow DOM, we need to set bubbles: true and composed: true. This way we make sure events propagate through the parent-child as well as the shadow tree hierarchy. A nice explanation can be found here.

  new CustomEvent("select-color", {
    bubbles: true,
    composed: true,
    detail: 'blue',
  });

Shared dependencies

To share your microfrontend dependencies, you can declare them as “external” by providing them in the configuration as follows:

import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      external: ["react", "react-dom", "react-to-webcomponent"],
      output: {
        globals: {
          react: "react",
          reactDom: "react-dom",
          reactToWebComponent: "react-to-webcomponent",
        },
      },
    },
  },
});

Here we are telling Rollup to not bundle React dependencies as well as to provide global variables for them.

But how do we deal with dependency mismatches when one or more microfrontends are using the same lib, but with different versions? Let’s say Footer and Header are two React major versions apart. As mentioned before, we can use scopes:

<script type="importmap">
{
  "imports": {
    "header": "/path/to/header/index.5475c608.js",
    "footer": "/path/to/footer/index.6087f008.js",
    "react": ""
  },
  "scopes": {
    "/path/to/header/": {
      "react": ""
    },
    "/path/to/footer/": {
      "react": ""
    }
  }
}
</script>

Alternatively, we can provide different import specifiers:

<script type="importmap">
{
  "imports": {
    "header": "/path/to/header/index.5475c608.js",
    "footer": "/path/to/footer/index.6087f008.js",
    "react@16": "",
    "react@18": ""
  }
}
</script>

If you need some sophisticated logic to build your import map, the Import Map Resolver is the place to put it. Let’s say one of your microfrontends publishes a new version that uses react@17.0.1, but you already have react@17.0.0 in your importmap. In this case, the Import Map Resolver would replace the older version with the newer one, since it is only one patch version ahead and backward compatibility can be assumed.
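The version comparison such a Resolver needs can be as small as a semver tie-breaker. A hypothetical sketch (real-world code should use a proper semver library and respect major-version boundaries):

```javascript
// Keep whichever "major.minor.patch" version string is newer.
// Illustrative only; no prerelease/range handling.
function newerVersion(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] > pb[i] ? a : b;
  }
  return a; // identical versions
}
```

Here newerVersion("17.0.0", "17.0.1") yields "17.0.1", so the importmap entry gets replaced.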

Library microfrontend

Microfrontends can be published as a Custom Elements library.

Example using Vite and Svelte:

import { defineConfig } from 'vite';
import { svelte } from '@sveltejs/vite-plugin-svelte';

export default defineConfig({
  build: {
    rollupOptions: {
      input: [
        // entry points, e.g. one for the Header and one for the Footer
      ],
    },
  },
  plugins: [
    svelte({
      compilerOptions: {
        customElement: true,
      },
    }),
  ],
});

This will produce two separate chunks, one for the Header and one for the Footer. Vite also supports Library Mode for Vue and other frameworks.

Without going into each library’s configuration details, the general principle is to alter your main.ts entry point (or each of your entry points if there are many) so that it exposes your microfrontend as a Custom Element.

import MyComponent from './src/my-component';

customElements.define("my-component", componentWrapperFn(MyComponent));

where componentWrapperFn is a function provided by your (or a third-party) library that returns a custom element constructor extending HTMLElement. It could be the native defineCustomElement in the case of Vue, or the third-party reactToWebComponent from react-to-webcomponent. Here (and also here) is a great summary of how to build Web Components with different libraries and frameworks.


Metaframework

As mentioned in the section Demo app, a metaframework is used to glue the microfrontends together. Choosing no framework is also a valid option; Import Maps perfectly support this case by resolving imports directly in the browser. The choice of Vue is mainly to avoid writing boilerplate code for routing and to keep the container components lean, with little to no low-level DOM manipulation. There is a good article explaining why we need the container components and how to structure them.


Routing

Routing between container components/pages is covered by the metaframework, in case you are using one. If not, you can opt for Navigo as a simple dependency-free JS router.

In rare cases, when you need navigation within individual microfrontends, things get tricky: in the end, you only have one address bar. You can map the flattened structure of your compound URL state to enable two-level routing with the help of your framework. There is a library for Angular that uses a URL Serializer to intercept and manipulate browser URLs.

That being said, this approach also introduces unwanted tight coupling among microfrontends: the host app shouldn’t know the details of individual microfrontends’ routing. When a team needs to change or add a new URL, the host app would need to be adjusted and redeployed. Instead, try to avoid two-level routing at the application design stage. To better understand all the consequences of this approach, you may want to read the book Micro Frontends in Action by Michael Geers, in particular the chapter “App shell with two-level routing”.


Let’s summarize all the benefits that Import Maps offer:

• flexibility for microfrontend teams (each team can have its own tech stack, CI/CD, coding guidelines, infrastructure: everything before final artefacts are built)

• easy step-by-step migration of existing codebases by replacing components or entire pages with microfrontends

• the host app is lean and detached from the development of microfrontends and focuses on the composition of pages, providing the data and handling events

• the host app is aware neither of the existence nor of the implementation details of your microfrontends: the only “contract” is the API of your microfrontend (props/events)

• import map entries are lazy-loaded: no JS is downloaded before you actually import()

• you may not need any build tools at all: import maps work in the browser at runtime

• it takes seconds to update your app (by changing an entry in the import map)

• it takes seconds to rollback


Let’s summarize all the drawbacks that the usage of Import Maps brings:

• Import Maps are not supported in some browsers; however, there are polyfills

• the overall amount of bytes downloaded when using microfrontends is unavoidably higher than with a monolith. Even though you can externalize and share your app dependencies, you cannot avoid some code duplication in individual microfrontends

• not suitable for small and medium-sized projects, where a single core framework would be a better fit


Compared to Import Maps, Module Federation has one major drawback: vendor lock-in, which makes your product dependent on another product – Webpack. All your microfrontends as well as your host app must comply with it and be on the correct version. You also cannot avoid the compilation step, while Import Maps can be used directly in the browser.

At the same time, new web standards are emerging and replacing the need for third-party products. While the standard is still being developed further – gaining new features such as multiple import map support and a programmatic API – you can already start using Import Maps now with the help of ES Module Shims or SystemJS.

Photo by AltumCode on Unsplash

Pairing is daring


Pair programming is old news for most developers. It’s quite possible that you “paired” with a colleague before and might have had some good and some bad experiences. In this blog post, you will get a quick overview of pair programming. You will learn that there are lots of benefits and potential downsides. Additionally, there are some hints on the process and pitfalls to keep in mind. Finally, you will find a quick overview of remote collaboration tools.


According to a study conducted by a Microsoft Research team [1], participating developers reported “fewer bugs”, the “spread of code understanding”, and “higher quality of code” as the main benefits of pair programming.

Fewer Bugs

In many work areas, the four-eyes principle allows teams to spot errors early on. Developers usually conduct code reviews before code goes out to production. Think of pair programming as giving a programmer a second pair of eyes. They might spot bugs right away, leading to reduced friction down the road. The earlier you spot a bug, the easier it is to fix.

@Saad Bin Akhlaq: After a pairing session, I have more confidence in pushing the code since 2 engineers have already gone over it.

Spread of Code Understanding

According to the Agile Alliance, a team that applies pair programming will gain “better diffusion of knowledge among the team, in particular when a developer unfamiliar with a component is pairing with one who knows it much better.” You will notice a “better transfer of skills, as junior developers pick up micro-techniques or broader skills from more experienced team members.” Furthermore, the Microsoft study highlights that “there is never only one person in the team who knows all the code”.

In the paper “The Costs and Benefits of Pair Programming” [2] by Alistair Cockburn and Laurie Williams, they claim that “if [a] pair can work together, then they learn ways to communicate more easily and they communicate more often. This raises the communication bandwidth and frequency within the project, increasing overall information flow within the team.”

@Jobrann Mous: I especially enjoy a pairing session when I am exposed to different topics, codebases, and coding practices.

Higher Quality Code

The authors of the Microsoft study state that pair programming helps to improve code quality “through more extensive review and collaboration.” Also, “‘Programming out loud’ leads to a clearer articulation of the complexities and hidden details in coding tasks, reducing the risk of error or going down blind alleys”, says the Agile Alliance.

In addition to that, a quantitative study at the University of Utah [3] showed that pairs not only completed their programs with superior quality, but they consistently implemented the same functionality as the individuals in fewer lines of code.

Critique of pair programming

In the same study from Microsoft, developers listed cost efficiency, scheduling, and personality clash as their biggest problems when applying pair programming.

Cost efficiency

Management is most likely in doubt about the efficiency of pair programming. In the Microsoft study, “the number one problem reported with pair programming is cost. Two people are being paid to do the work of one.” On paper, you halve the allocatable time of two developers. But there are studies suggesting that pair programming actually increases time spent by only 15% (not 100%), while at the same time reducing the error rate by 15%. [2] Occasional pair programming will balance out the cost with the benefits.


Scheduling

Yet another meeting? Calendars are already full and scheduling meetings is a pain? Pair programming will not improve the situation. If scheduling a spontaneous pairing session is hard, your team should review the number of meetings in general.

What worked well for the cross-functional engineering teams of the Bertha and Mercedes Me Service apps is a fixed pairing slot every two weeks. The engineers begin the two-hour slot with a short alignment, check who is available and what topics they can pair on. Then the pairs (or sometimes groups of more than two) start their sessions.

During sprint planning for Bertha, we schedule pairing sessions to tackle tickets with high estimations together.

The backend team of Reach Me does it differently. Instead of scheduling anything, they start their day with a Teams call which they keep running in the background for most of the day. It simulates a being-together-in-the-office feeling and allows for spontaneous conversations, some of which lead to pair programming sessions.

Personality clash

Pair programming is not the best work mode for every problem. While some of us are happy to collaborate on a task and align with a colleague, others need alone time. They want to play around with different possible solutions, sketch something, etc.

Some programmers tend to shy away from collaborative tasks and prefer to be on their own while coding. Practice (and gentle, constant nudging) is your best chance to convince introverted colleagues to pick up pairing more often.

There are also big differences in the way people articulate themselves. Pairing a person who thinks longer before answering with a fast speaker might end up in a one-sided conversation. It’s important to give everyone enough space. You don’t need to rush anything.

It is an acquired taste

You sometimes hear “I need to get this done, I don’t have time for a pairing session”, or “it will take me half the time just explaining the issue to my colleague before being productive”. Please reconsider. True, pair programming comes with a cost: you have to set it up, agree on a work mode, and explain the context. It adds cognitive overhead. Practice can mitigate most of that. Once you get used to pair programming, it becomes a standard way of working within your team. It will feel normal, and you will consider it for more and more assignments. Jumping into a pairing session with someone lacking the context should be the outlier.


4 simple tips for pair programming. Just give it a try.

Agree on topic and scope

There should be a common understanding of what the problem is, and a general direction of how to tackle it. Keep reminding each other when you are deviating too far from your goal. It’s easy to get lost. Now you have someone helping you to focus!

Pick your role, switch them

Usually, when sitting next to each other at one desk, there is one keyboard to share. This puts the person on the keyboard in a different position than the person next to them. The person who types – the “driver” – is more concerned about the coding details, types, syntax, formatting. The person off the keyboard – the “navigator” – can think about the bigger picture, e.g. abstraction layers, dependencies, potential refactorings, etc.

That situation changes when you are leveraging remote collaboration tools. All participants can now write to the current codebase; they can be “drivers” and “navigators” at the same time.

Regardless of the setup, you should keep all participants engaged by switching between the two roles. This allows everyone to contribute to both the lower-level details and the big-picture problems.

Acknowledge different pairing situations

You should always be aware of the circumstances and the setting of each session.

@Svitlana Piddubna: “Not all tasks are equally good for pairing, so I think it works best when someone sees a ticket that could be a good one for pairing and offers to work on it together”

Are you and your pairing partner on the same page about the issue? Can you both focus on the issue at hand? Does the schedule suit both of you?

Plan enough breaks beforehand. This allows accommodating individual daily routines, habits, speed of thought, etc. Especially during a pandemic, we need to pay special attention to a flexible schedule (e.g. no daycare!). Some people need time to think things through alone. Take a break from the session when needed. Pair programming doesn’t mean you are bound to each other for the whole session.

Consider also the implicit or explicit power dynamic within a pair. An example: you are the most experienced developer in your team, and your pairing partner is a junior member with knowledge of the context but lacking coding experience. As a senior, you have the responsibility to motivate, acknowledge, and drive the conversation positively. Don’t be condescending, and don’t turn the session into a lecture. You only get the inspiration of your counterpart flowing when it is a mutual experience.

Programming out loud

Reflection is an important step in learning. You are allowing your brain to streamline your thoughts on a topic. More often than not, only when attempting to write down or articulate, do you realize that you haven’t understood a problem. Now imagine you are articulating and reflecting on your current assignment. You gain an improved understanding of the topic and your pairing partner gains insights into your thought process.

The Agile Alliance advises that “‘Programming out loud’ leads to a clearer articulation of the complexities and hidden details in coding tasks, reducing the risk of error or going down blind alleys.”

PRO TIP: Think of TV chefs and how they orate every detail of their process.

Remote pair programming

The obvious downside of remote pair programming is that you can’t simply hand over the keyboard to switch between navigator and driver, and the “navigator” is not looking at the same monitor as the person who “drives”. To overcome these barriers, a variety of tools can help.
The easiest and most intuitive solution is simple screen sharing. If you need more elaborate features, look at tools like JetBrains’ Code With Me and Microsoft’s Live Share. They come with plenty of features to accommodate a sophisticated pairing session.

Microsoft Teams

Our main tool for remote collaboration is Microsoft Teams. Its screen-sharing feature already allows for a great remote pairing environment: you set up a call, share your screen, and go. There is even a function to take over control of the shared screen. This can be helpful when a topic is hard to articulate, or to switch roles between “driver” and “navigator”.

JetBrains Code With Me

The Code With Me service [4] allows any JetBrains IDE user to share and open up their codebase for collaboration with remote colleagues. It works across almost all their IDEs (Android Studio coming soon). You can give your colleagues instant access to your current project. Up to 10 developers can join one session and collaborate on one codebase. Especially for a mob programming session, this is quite a valuable solution. Each participant can choose to either sync with (follow) the “driver” and see what they are seeing, or work on their own code area, all within one session.

Visual Studio Live Share

Visual Studio and Visual Studio Code offer a similar service, Live Share [5]. You can start a collaboration session and share a link to it for others to join. You have fine-grained control over the permissions the session members get.


Pair programming is an important tool in the arsenal of software engineering teams. It is a great way to share knowledge, reduce friction, and increase communication bandwidth. Most of us are currently working remotely, and introducing collaborative work methodologies like pair programming helps you cope with that; remote collaboration tools like screen sharing make it easy to do. Now that you know more about pair programming, try it out more often. You will be surprised by the positive outcome!



The top five articles of January from tech practice

We hope you had as good a start to the new year as we did. In the new year, we have news fodder again and have therefore prepared small snacks for you to digest.


– Not only reading but also attending conferences is a great way to absorb new knowledge. Getting a good overview of upcoming conferences and finding the right ones for you is important. This helpful list covers different tech areas and can be filtered by country and preferred technology. Pick the right one for you.


– There is an extension for diagrams in Visual Studio Code. It is worth its weight in gold if you want to manipulate values and diagrams programmatically or collaborate with others.


– JetBrains is also working on improving the user experience. That is why they have a new editor, currently in the preview phase, called Fleet. The IDE promises to be lightweight and flexible, with collaboration features, broad language support, and much more. Check out the features in detail, or subscribe to updates to hear when the IDE becomes available after the preview phase.


– The article shows common misuse of HTML elements and builds up to their semantically correct use, including, of course, the resulting advantages in terms of accessibility.

A NEW FUNCTION: structuredClone()

– It will soon be supported by most browsers and creates deep copies of objects. This blog post explains how it works.
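As a minimal sketch (assuming a runtime that already ships structuredClone, such as recent browsers or Node.js 17+), here is how it differs from a shallow copy:

```typescript
// A shallow copy (spread) shares nested objects with the original;
// structuredClone duplicates the whole object graph.
const original = { name: "config", nested: { retries: 3 } };

const shallow = { ...original };
const deep = structuredClone(original);

shallow.nested.retries = 7; // also mutates original.nested (shared reference)
deep.nested.retries = 5;    // leaves the original untouched

console.log(original.nested.retries); // 7 (shared with the shallow copy)
console.log(deep.nested.retries);     // 5 (independent deep copy)
```

Unlike the old `JSON.parse(JSON.stringify(...))` trick, structuredClone also handles Maps, Sets, Dates, and cyclic references, though it cannot clone functions or DOM nodes.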

Are you still using git stash, then creating a new branch, and then git stash apply? Optimize your workflow and check out git switch.

Hido has written another great article on the topic of remote work, looking at it from a psychological angle and giving you tips to use at home. Check it out.

Andre wrote an article about something that has been concerning us for a long time. He offers a solution for running multiple versions of a Stencil design system without conflicts. Check it out.

Enjoy reading and stay curious.


For us, it is important to share interesting tech articles related to programming and our work environment.

In our October list, you will find valuable insights from the areas of frontend development, mobile development, and quality assurance. Get inspired by the news we have gathered for you!


1. An Interview With Elad Shechter on “The New CSS Reset”
2. This article discusses the benefits of Shift Left Testing in the software development lifecycle and the key considerations on how to maximize the success of a product by using this approach.
3. VS Code introduces a lightweight version of the editor that runs entirely in the browser, so coding can also be done on a smartphone. Pressing the “.” key in any public repository opens the browser version of VS Code for comfortable multi-file editing. What do you think, will you make use of it?
4. Apple introduces Tech Talks 2021, live online sessions for developers that offer the opportunity to talk directly with Apple experts and receive personal advice.
5. In this article, you find exciting and fun things we can do with CSS for common UI challenges.


1. Netflix developers are pretty good at writing articles on their own tech blog. Have a look. 
2. This article in particular is recommended for people who want to get to grips with image formats in great depth.

Stay curious.

A big thank you to Claudio, Gabriel, Ruben, and Christian who regularly share interesting news with us.


For us, it is important to share what we find interesting in the world, especially articles related to programming and our work environment. That is why we actively share articles with you.

The result is a list of the top five articles, based on how you, as MB.ioneers, interacted with the articles you shared.


1. What if we can just play games and learn something new while playing? Find out here with a very extensive list of web applications with which you can learn through play.

2. An article that deals with the difference between the TypeScript types “any” and “unknown”, explains them with an example, and shows the advantages.
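As a small illustrative sketch of that difference (the JSON values here are invented for illustration): `any` switches the compiler off entirely, while `unknown` forces you to narrow the type before using it.

```typescript
// With `any`, every operation compiles, even ones that would crash at runtime:
const a: any = JSON.parse('{"id": 1}');
const idFromAny: number = a.id;      // fine, but `a.not.there` would also compile

// With `unknown`, the compiler rejects direct access...
const u: unknown = JSON.parse('{"id": 1}');
// const bad: number = u.id;         // compile error: 'u' is of type 'unknown'

// ...until you have narrowed the type:
let idFromUnknown = 0;
if (typeof u === "object" && u !== null && "id" in u) {
  idFromUnknown = (u as { id: number }).id;
}

console.log(idFromAny, idFromUnknown); // 1 1
```

In short, `unknown` gives you the same flexibility at the assignment site while keeping every read type-safe.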

3. Attention! Be careful when chaining the dot operator, and reflect on whether destructuring the values is possible and makes more sense.
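As a hypothetical sketch of that trade-off (the object shape below is invented for illustration):

```typescript
const response = { data: { user: { name: "Ada", role: "admin" } } };

// Repeated dot access spells out the full path every time:
const dottedName = response.data.user.name;
const dottedRole = response.data.user.role;

// Destructuring extracts both values in one statement:
const { name, role } = response.data.user;

console.log(name === dottedName, role === dottedRole); // true true
```

Note that both styles throw if an intermediate value is null or undefined, so the choice is mostly about readability: destructuring shines when several properties of the same object are needed at once.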

4. What are the pitfalls when using NPM, and how do you deal with upgrades?

5. It's that time of the year again that open-source maintainers love: Hacktoberfest! It is usually a good time to contribute to OSS because everyone is taking part, so maintainers are very active in approving and interacting with your PRs. Start actively participating in the open-source community and find the first simple steps here!

But please read for yourself and stay curious.


By Roman Guivan, who has just started a multi-part Pixi.js game tutorial. Follow him on his journey.

And by Barbara Jöbstl, with a Unix command cheat sheet that the community perceives as very complete.

A big thank you to all those who actively share articles with their colleagues. You guys are great.