exchange.io is our way of communicating our opinions and thoughts on certain topics, with an invitation to join us in a discussion and to spark new ideas among our readers.
Improving an Existing Documentation Project (2 of 3)
Javier Hernandez •
February 28, 2023 •
exchange.io
Now that we know the current status of our documentation better (see Part 1), it is time to talk about our users. Why? Because we created the documentation for them, so they can use our products, platforms, or tools.
But what do we know about our users’ needs? Who are they? We may all have assumptions about them, but in fact, most of the time we know nothing. A user survey is an excellent method to discover how our users perceive and use our docs, understand their current needs, and decide what to improve first. Read the following steps to learn how to prepare your user survey.
Step 1 – Design a User Survey
To design a user survey, we need to know what to ask. But language is tricky, so even if we are technical writers, we should double-check the survey’s questions with another colleague (and a UX designer, if possible).
Take the following questions as a reference to create your own survey:
In which team/product do you work?
What is your role?
How (un)familiar are you with the [Write your documentation name here] documentation?
Not at all familiar.
Slightly familiar.
Somewhat familiar.
Moderately familiar.
Extremely familiar.
How often do you read the [Write your documentation name here] documentation?
Never.
From time to time.
A few times a month.
Monthly.
Weekly.
I don’t even close that tab.
Which topics are you looking for in the [Write your documentation name here]?
How (un)satisfied are you with the [Write your documentation name here] documentation?
Very unsatisfied.
Unsatisfied.
Neutral.
Satisfied.
Very satisfied.
Please explain briefly why you are (un)satisfied with the [Write your documentation name here] documentation.
How (un)useful do you find the [Write your documentation name here] documentation?
Not useful at all.
Not useful.
Neutral.
Useful.
Very useful.
Explain briefly why you consider the [Write your documentation name here] documentation (un)useful.
Do you bookmark the [Write your documentation name here] pages you need?
Never.
I only bookmark the topics I need.
I bookmark [Write your documentation name here] main topics only.
I bookmark the [Write your documentation name here] home page only.
Always.
Please let us know to what extent the following statements apply to you personally:
When I need to find some information on a page, I press CTRL+F:
Never.
Rarely.
Sometimes.
Often.
Always.
When I need to find some information on the [Write your documentation name here] pages, I scroll:
Never.
Rarely.
Sometimes.
Often.
Always.
How often do you use the [Write your documentation name here] page Search box:
Never.
Rarely.
Sometimes.
Often.
Always.
Regarding the content of a page, what do you prefer?
A plain content structure – No tabs, no accordions. All the information is shown at once.
A plain content structure containing some visual elements and disclosing content progressively.
Which topics would you like to find in the [Write your documentation name here]?
If you were to make one suggestion for improving the [Write your documentation name here] pages, what would it be?
Design your questions according to the topic you want feedback about, for example: user browsing behavior, page layout preferences, missing topics, etc.
To know which type of feedback we are addressing with the sample survey of this page, read the following overview (type of feedback, question numbers, and explanation):

Your users (role, team/product) – Questions 1, 2
Knowing the role, team, or product of our users helps us to identify our audience and which roles/teams are reading our docs the most. This information can lead us to develop role- or team/product-focused documentation, or to address specific issues or a lack of interest impacting the documentation.

Awareness by role and team/product – Questions 3, 4
If our platform, toolset, product, or project has a lot of people using it, checking the awareness level is a must to double-check that our people know where to find the documentation we have prepared for them.

Frequency of use – Question 3
Answers to this question will tell us whether our documentation is being used, and how much. Knowing this, we can take further actions to deepen the engagement, or reinforce it if the docs are not being checked frequently.

Topics of interest – Question 4
Too often we create the docs that we think will be useful for our users. This question will tell us if we are right, and which topics need to be revamped (or removed).

Level of (un)satisfaction – Questions 5, 6
A satisfied user is another word for useful docs (and product!). A low level of satisfaction gives us the chance to ask “Why?” and find the source of that dissatisfaction.

Usefulness perception – Questions 7, 8
We may think that our documentation is useful but, again, our users will either confirm or deny it. Another opportunity to improve.

Access point to our documentation – Question 9
Sometimes we assume that our users access our documentation directly by typing the URL in the browser. This question will tell us about the bookmarking habits of our users. With that in mind, we will have a better idea of the impact of changing the name of any page or section that is probably bookmarked. Here, an effective and reliable communication strategy for changes is key.

Browsing and searching behaviors – Questions 10, 11
Browsing and searching behavior have a decisive impact on how we will design our pages, and which visual elements can be used. For example, using collapsible elements may cause trouble for CTRL+F users that, for example, work with Chrome.

Reading behaviors – Questions 10, 11, 12
Same as the previous topic.

Direct improvements – Questions 13, 14
This is the open-air gold mine. Users will tell you what they want and what they see as a positive improvement for the docs.
MS Forms
Use any survey creation tool that fits your needs. I used MS Forms because it was available and provided an easy way to:
Visualize the number of participants.
Provide different kinds of diagrams to visualize the results of the questions.
Download the results in a consumable Excel format.
Easily share the survey.
Step 2 – Schedule the Survey
Scheduling the survey at the right time is as important as the survey itself. We are all focused on providing value to our projects and don’t want to get distracted.
So check in advance with the required roles (normally POs) for the best date and time to run the survey.
If your users are not your teammates, Customer Support or an equivalent department may be the one to ask.
When discussing the time for the survey, remember to:
Share the objectives of the survey.
Set the time available to complete the survey.
Explain the importance and benefits of their collaboration.
Ask them to request all team members to attend the meeting.
Try to not interrupt their workflow.
Tip: For agile product development teams, running the survey during the daily or retrospective meeting seems like the right time.
Step 3 – Survey Time
Don’t forget to introduce yourself during the interview and:
Present the documentation improvement project: What, Why, and How.
Explain the importance of improving the docs.
Explain the structure of the survey.
Release the survey!
To Be Continued…
What's Next?
In the next article, we will analyze the results of the survey and highlight the most important ones. Are you ready?
API Testing with Java and Spring Boot Test – Part 2: Improving the solution
Luiz Martins •
February 6, 2023 •
exchange.io
In the last part of this step-by-step, we created the project, set up the test framework, and also did all the configurations needed to run our API tests.
Let’s continue to grow our test framework, but first, we need to do some improvements to the existing code. In this guide, we’ll:
Refactor the object mapping (to make the JSON files easier to handle)
Improve the response validations
Handle multiple environments inside our tests.
These changes will make our code base cleaner and easier to maintain, helping us create a scalable framework of API tests.
Let’s do it.
Refactoring the Object mapping
We’ll take advantage of Spring Boot’s Repository stereotype to separate the responsibility of mapping the objects (JSON) we’re going to use inside our tests. That way, we can take another step forward in our code cleanup.
So, first of all, we’re going to:
Create a new package called repositories
Then, create a new class inside this package called FileUtils.
We’ll also take the opportunity to change the way we map the object: instead of hard-coding it, we’ll keep it in a proper resource file. That way, when we need to change the test data, we don’t have to change the test, only the corresponding resource file.
package org.example.repositories;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.stereotype.Repository;
import java.io.IOException;
import java.net.URL;
@Repository
public class FileUtils {
/**
 * Reads a JSON file from the classpath and returns it as a JsonNode.
 *
 * @param filePath path of the resource file, relative to the resources folder
 * @return the parsed JSON tree
 * @throws IOException if the file cannot be read or parsed
 */
public static JsonNode readJsonFromFile(String filePath) throws IOException {
ObjectMapper mapper = new ObjectMapper();
URL res = FileUtils.class.getClassLoader().getResource(filePath);
if (res == null) {
throw new IllegalArgumentException(String.format("File not found! - %s", filePath));
}
return mapper.readTree(res);
}
}
As you can see in the file above, we created a function that reads a JSON file and returns the object already mapped – similar to the approach we had before in the test file.
Now, we’ll structure the resources folder to accommodate the JSON files.
In the resources folder, let’s create a new directory called user and then create a file to store the request body of the operation we’ll do.
After that, we need to update our test. Now we want to get the file data by using the new function we created for that purpose. The updated test will look like that:
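A minimal sketch of how the updated test could look (the resource path user/createUser.json and the /users endpoint are illustrative; adapt them to your project):

@Test
public void testCreateUser() throws IOException {
    // The request body now lives in a resource file instead of being hard-coded
    JsonNode requestBody = FileUtils.readJsonFromFile("user/createUser.json");
    Response response = yourApiService.postRequest("/users", requestBody);
    Assertions.assertEquals(201, response.getStatusCode());
}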
The next step should be to use this new function and improve our test. To do that, we’ll configure a GET request on YourApiService and return the full Response object. Then we should be able to check the response body.
Now, it’s just a matter of adding the test case to the ApiTest test class and using the same strategy of letting the JSON response file be in its proper directory. Finally, we’ll have something like this:
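As a sketch (the endpoint and the expected-response file are assumptions based on the reqres.in demo API used in Part 1):

@Test
public void testGetUser() throws IOException {
    JsonNode expectedUser = FileUtils.readJsonFromFile("user/getUserResponse.json");
    Response response = yourApiService.getRequest("/users/2");
    Assertions.assertEquals(200, response.getStatusCode());
    // Compare the mapped JSON response against the expected JSON resource
    Assertions.assertEquals(expectedUser, response.as(JsonNode.class));
}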
Quite easy to understand if you just look at the test case 🙂
Executing the tests over multiple environments
Now we have the tests properly set up, and everything is in the right place. One thing that could be on your mind right now is: “OK, but I have a scenario in my product in which I need to run my test suite over multiple environments. How do I do that?”
And the answer is – property files.
Property files store environment-specific data that we can use throughout our test suite, like the application host, port, and path to the API. You can also reference environment variables to use within your test framework. However, be careful, since we don’t want to make this information public. You can see an example further below.
With Spring Boot, we take advantage of “profiles” to capture the specifics of each environment our application has, and make them available as Spring Boot profiles.
So, let’s do that. Inside the resources folder, we’ll create a new file called application-prod.properties to store the values of the production cluster of the test application. The file will store something like this:
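A sketch of what application-prod.properties could contain (the host and path match the demo API from Part 1; the token is read from an environment variable so it never lands in the repository):

apitest.base.uri=https://reqres.in
apitest.base.path=/api
apitest.token=${API_TEST_TOKEN}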
Now, the only thing missing is to change our service to get the values stored in the property file.
To get the values from the property files, we’ll use the annotation @Value. This annotation will provide the values from the properties we set in the application-prod.properties file.
**Bear in mind:** You’ll need to set the environment variable before running the tests. The @Value annotation will pick this value up from the environment variables you have set.
The updated version of YourApiService class will look like this:
package org.example.services;
import com.fasterxml.jackson.databind.JsonNode;
import io.restassured.RestAssured;
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.http.ContentType;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
@Slf4j
@Service
public class YourApiService {
@Value("${apitest.base.uri}")
private String baseURI;
@Value("${apitest.base.path}")
private String basePath;
@Value("${apitest.token}")
private String myToken;
private RequestSpecification spec;
@PostConstruct
protected void init() {
RestAssured.useRelaxedHTTPSValidation();
spec = new RequestSpecBuilder().setBaseUri(baseURI).setBasePath(basePath).build();
}
public Response postRequest(String endpoint, JsonNode requestBody) {
return RestAssured.given(spec)
.contentType(ContentType.JSON)
.body(requestBody)
.when()
.post(endpoint);
}
public Response getRequest(String endpoint) {
return RestAssured.given(spec)
// In our case, we won't use the "token" variable, as the API doesn't require it.
// But if your API requires authentication, you can use the token like this:
// .auth().basic("token", myToken)
.contentType(ContentType.JSON)
.when()
.get(endpoint);
}
}
That’s a great step up. This way, if you have multiple environments in your setup, you just need to create another application-YOUR_PROFILE_NAME.properties file.
Executing the test suite
You must be wondering: how do I run the test suite with this newly created profile?
The answer is simple: just execute mvn clean test -Dspring.profiles.active=prod.
By default, if you just run the mvn clean test command, Spring Boot will try to find a file called application.properties and automatically activate it.
Now we have significantly improved the test setup of our application by:
Refactoring the object mapping to clean up our code and apply some best practices
Improving the response validation by adding a new dependency and using it to simplify the check
Learning how to handle multiple test environments. This is especially useful in companies that have layers of environments before the code reaches production
Curious about what comes next? In part 3 of building a Java API test framework, we will further improve our application and go deeper into the remaining topics of this series, such as the pipeline configuration and the test report.
Inspiration and topics discussed in our MB.io tech community in January
Ronny Schreiber •
February 6, 2023 •
exchange.io
In 2023, too, we would like to share with you the news being discussed in our tech communities. You will surely find some inspiring topics.
Framework or not
Write reactive components without frontend frameworks. Should you be using a framework or not? Here at MB.io, we always take a serious look at this topic. Let’s put the hype aside for a moment. You can write reactive components without relying on a frontend framework. Frameworks provide a more straightforward way to write web apps; React, SolidJS, Svelte, and Lit all offer this. The article explains the features and how the various frameworks work, as well as the associated costs.
Get rid of Pre-commit hooks
This video has met with much agreement among our developers. Pre-commit hooks block frequent commits of intermediate work, hurting productivity on the developer’s machine. It is better to run tasks on the server, where you check for general linting rules and the like. The pull request serves as your final quality gate.
CSS pseudo-classes
Browser support for the new CSS pseudo-classes rose sharply in 2022. It’s time to look at their benefits. Kevin Powell explains them in an insightful way in his video, covering :is(), :where() and :has(). Using these makes writing rules easier and more manageable. It also changes how specificity behaves.
Chrome DevRel team's top Core Web Vitals recommendations for 2023
In the new year, make it your resolution to make your websites more pleasant for your users and work on performance. The Chrome DevRel team answers this question in the article: “What are the most important recommendations we can give to developers to help them improve performance for their users?” They name the most important levers, with explanations and suggestions for implementation.
SVG Reference
If you want to learn more about SVGs and their possibilities, have a look at this Interactive SVG Reference. You can learn what to implement and how. After going through it, take a look at the collection of color tools and free SVG generators.
Improving Your Development Documentation Project (1 of 3)
Javier Hernandez •
January 18, 2023 •
exchange.io
What this article covers: First steps on how to improve an existing documentation project.
Tools: Confluence, GitHub (web and desktop versions), and MarkdownPad2.
Introduction
Developer documentation is a curated set of files describing all the active workflows, setups, tools, conventions, best practices, and how-tos of your software development product. Throughout this article, I will refer to it as “documentation” or “docs”.
Documentation supports your team members in their daily and future developments. It also helps new joiners to reach cruise speed during the onboarding period. But to do so, your documentation must be up-to-date and well-structured.
Keeping the docs up-to-date and in good shape requires resources and dedicated time. Yet often our project time or budget constraints prevent us from taking care of our docs properly.
This series of articles aims to serve as a documentation improvement guide.
Know Your Ground
Step 1 – Organize Your Improvement Project
Developer documentation has to be visible to increase its chances of success and to find collaborators to improve it. To do so, it is useful to keep a space to visualize, describe, and track your improvement project.
Use your team’s/company’s collaboration tool for that purpose. For this article, we’ll be using Confluence.
Space Structure
The structure of an improvement project may differ from one project to another. Take the following space structure as a reference that you can adapt to your needs (then iterate!):
Space Name: [Your Documentation Project's Name] – the name of your documentation project. It contains the following pages (and child pages):

Overview – Explain briefly the What, Why and How of the documentation.
Dashboard – Centralized page to easily access all project pages.
Analysis – Media and results of documentation analysis.
Roadmap – Visualization of the estimated dates to implement each improvement.
Improvement Project – Parent page for the following child pages:
  Communication Grid – Contact person by topic.
  Improvement Plan – Implementation phases and items.
  Coordination Meetings – Grid to align with your manager or collaborators (optional).
Once your improvement project space is set up, you are ready to:
• Present it to all your team members, including Product Owners and Scrum Masters.
• Track and show your progress.
• Visualize documentation issues/blocking points.
• Access all your project resources.
Tip: Explain how documentation issues negatively impact teams' performance. It will help Product Owners and Scrum Masters understand and provide your project with the resources you need.
Step 2 – Identify Your Documentation Issues
Identifying your documentation issues means spotting all the types of issues living among your docs. Some common documentation issues, and how to address them:

Page structure – Review your page structure. It should show the logical flow of the information according to the objective of the page, for example: Introduction, Prerequisites, First Step, Working with…/Available Features.
Naming – Define a naming strategy for page titles, sections, and subsections.
Page elements – Standardize the use of the following elements: lists, tables, tabs, notes, collapsible elements, and images.
Text unclear or too verbose – Be concise.
Random text formatting – Standardize the use of bold and italics for files, folder names, code snippets, and code elements (functions, objects, methods, parameters, API names, etc.).
Too many topics on a single page – Stick to “one topic per page”.
Unnecessary screenshots – Use screenshots or images ONLY when strictly necessary. If you can explain it briefly, do not use screenshots.
Type of notes – Standardize each type of use case for notes (Info, Help, Warning, etc.).
Now we can start to target and record the issues of our documentation. The following table will help you to perform that task:
Nav Option | Page | Section | Subsection | Issue | Link
Add nav. option name | Page number | Section name | Subsection name | Issue name | Link to the issue
Depending on the size and complexity of your documentation, targeting these basic issues may take a while. Take the chance, join me on this journey to better documentation, and improve your documentation project now.
miroslav galic •
January 6, 2023 •
exchange.io
Introduction
Continuing with the theme of my previous article (sharing my macOS menu bar setup), I’d like to show you how I used Spotlight and why I replaced it with another tool. The app is called Raycast, and it’s a real productivity Swiss Army knife.
It does everything Spotlight does, plus some built-in features like clipboard history, snippets with text expansion, bookmark search, a calendar agenda, timers, reminders, unit conversion, math, etc. But it also has an Extension Store, where you can download community-contributed extensions/integrations. At the end of the article, I will share a few that I’m using.
If you are new to macOS or don’t use Spotlight, let’s see why the omnipresent search bar is powerful.
Why Spotlight is great
Spotlight is a system-wide desktop search feature of Apple’s macOS and iOS operating systems.
You can open it via the CMD + Space keyboard shortcut.
I used Spotlight mostly for opening and switching between apps. There are many ways to do just that, but I don’t know any faster way than hitting CMD + Space, typing the first letter of the app, and hitting Enter.
My macOS Dock is always set to hide, so it doesn’t take up that precious screen real estate on small laptop screens. And once you have trained Spotlight (by using it) which app each search letter opens, I don’t see why anybody would use the Dock or Mission Control to switch apps.
Moving away from Spotlight to Alfred
I felt one could do even more with this powerful input box, so I stumbled upon Alfred.
Alfred comes with a clipboard manager, bookmark search, custom web searches, etc. However, those are premium features, and it costs ~40€ to get them.
Besides that, as powerful as it is, Alfred looks like outdated software.
Here comes Raycast
After some searching, I found out about Raycast, and for a shortcut person like myself, it blew me away 🤯.
Out of the box, it does everything the mentioned tools do, and then some. But unlike them, it has a built-in Extension Store, where you can install various community-contributed extensions.
You get all that completely free for personal use. See more about pricing plans on their website.
Features I use the most
I will go over some of the built-in features I use the most and will end it with a couple of extensions I installed myself.
Open or switch to any app quickly
When you type in the Raycast input box, it will remember which result you used with that query. This is how you train Raycast to be more relevant to you.
For example, I use one-letter shortcuts to open many apps. To open Outlook, I press o; to open Chrome, I press c; to open Visual Studio Code, I press vs… When your result is highlighted, press Enter and the app will open or come into focus.
Clipboard history
Remember how many times you copy something, then need to paste it again later, but your clipboard already contains different content, so you can’t?
Clipboard History saves what you have copied. If you need to paste something again, you press a shortcut key to open Clipboard History and search for what you want to paste, or use your keyboard/mouse to navigate to an item and paste it directly.
In the extension’s settings, you can configure a custom keyboard shortcut to show the Clipboard History window at any time. I use CMD + Shift + V. Notice how the usual paste with CMD + V still works as before.
To configure a global hotkey for an extension, open Raycast and press CMD + , to navigate to Raycast Preferences. Follow the steps from the screenshot to get an overview of extension settings and change them to your liking.
Quick links
As a software developer, I use Jira a lot. With that comes a lot of copying and pasting of Jira issue numbers. Some nice people share links to tickets; others just use ticket numbers as plain text. To navigate to such a ticket in Jira quickly, you can use Quick Links.
You do this by defining a custom URL with a dynamic part. When you paste something into that dynamic part, Raycast opens the resulting link in your browser. Optionally, you can give your Quick Link an alias for easy and quick access.
In the end, this looks like this:
• Copy the issue number
• Press CMD + Space to open Raycast
• Press j (for Jira)
• Press Space to focus on the dynamic part of the URL
• Paste the issue number
• Press the Return key to open it in a new browser tab
Make sure to check out extension settings and adjust them to your needs.
Opening bookmarks
On any given day, I open vast amounts of the same websites, eg. Jenkins jobs for builds and deployments, production Jenkins jobs, specific pages in Confluence, GitHub, team calendar in Confluence, and the list goes on and on.
Sure, one could use the browser bookmarks bar, but that requires using the mouse and clicking more than a few times just to open one page.
When you use the Search Browser Bookmarks extension (to which you assign a nice alias, e.g. b), all you need to do is:
• Open the Raycast prompt (CMD + Space)
• Press b (the alias for the browser bookmarks extension)
• Press Space and start typing
• Press the Return key when your result is highlighted
Notice how you can bookmark any page in Jenkins, giving you very quick access to those pages without needing to navigate with the mouse or manipulate URLs. Once you give meaningful names to your bookmarks, opening them is super fast.
Snippets
If you find yourself needing to type something repeatedly, e.g. some code snippet, URL, greeting, full name, date, etc., this one is a true time-saver. I started using a global text replacement tool called Espanso some time ago, but I didn’t find it reliable. What was cool about it: it would replace predefined shortcut text, e.g. :br with Best Regards, as you type.
But since Raycast can do the same, I just uninstalled it and configured my shortcuts using Raycast snippets.
For example, to open our INT environment, I type :int in the URL bar, and this expands to a snippet I have defined in the snippets collection.
I am pretty sure you type your email address at least once a day. If you have a long name, you can configure a snippet for your email. By giving it a shortcut, it is as easy as:
• Focus on any input field or text editor
• Type :mail
• Raycast will automatically expand that to your email address
Snippets are even more powerful than that. You can configure parts of the snippet to be dynamic, e.g. dates and times, and even define where to place the cursor.
One snippet I use a lot is for accessing a deep object in the Redux store. For that, I have shortcuts, e.g. :pal, which expands to someStore.store.store.getState()['SOME_OBJECT'].
Notice how much typing that saves.
Rest of the extensions
There are many more extensions to be discovered and used in the Raycast extension store.
With the Raycast prompt open, type Store and install the ones you find useful. Or just navigate to the website and browse there.
I can recommend: Reminders, Timers, Window Management, One Thing, Depcast, Brew, Kill Process, etc. Or just type a prompt like 1m to in, or 34usd to eur.
In conclusion
I cannot possibly cover all the features of Raycast in one blog post. However, I hope I have shown you a new tool to add to your arsenal, should you like it.
For me, it makes mundane daily tasks a bit more fun and quicker to do. And since Raycast can do so much, it made me uninstall many other apps and simplify my setup.
I hope you had fun reading. Now go explore Raycast and let me know which are your favorite features.
Don’t forget you can type raycast in the Raycast prompt. There is a handy Walkthrough feature 😉
You Might Not Need Module Federation: Orchestrate your Microfrontends at Runtime with Import Maps
Vladimir Zaikin •
January 5, 2023 •
exchange.io
TL;DR
Managing microfrontends in a complex feature-rich app can become a tedious task and can easily turn your app into a Frankenstein’s monster when there’s no clear strategy involved.
Using third-party tools like Webpack Module Federation helps streamline the building and loading of microfrontends, but it leads to vendor lock-in, which can be a problem.
Import Maps can be seen as a web native alternative to Webpack Module Federation to manage microfrontends at runtime. In this article, we will:
• Explore the concept of Import Maps
• Build a demo app
• Summarize the pros & cons
Import Maps in a nutshell
The concept of Import Maps was born in 2018 and made a long journey until it was declared a new web standard, implemented by Chrome in 2021 and later by some other browsers.
Import Maps let you define a JSON where the keys are ES module names and the values are versioned/digested file paths, for example:
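A minimal sketch of such a map (URLs and digests are illustrative):

<script type="importmap">
{
  "imports": {
    "lodash": "https://assets.mycompany.io/scripts/lodash-es-4.17.21.ab12cd.js",
    "my-component": "https://assets.mycompany.io/scripts/my-components/my-component.5f3a9b.js"
  }
}
</script>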
Such a mapping can be resolved directly in the browser, so you can build apps with ES modules without the need for transpiling or bundling. This frees you from needing Vite, Webpack, npm, or similar: you simply import modules by their bare names and let the browser resolve the actual path at runtime.
Advanced Import Maps features
You can reuse an import specifier (for example, lodash below) to refer to different versions of the same library by using scopes. This allows you to change the meaning of an import within a given scope. As a result, any module under the https://assets.mycompany.io/scripts/my-components path will use lodash-es version 3.9.3, while all other modules will use version 4.17.21.
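A sketch of such a scoped import map (CDN URLs are illustrative):

{
  "imports": {
    "lodash": "https://esm.sh/lodash-es@4.17.21"
  },
  "scopes": {
    "https://assets.mycompany.io/scripts/my-components/": {
      "lodash": "https://esm.sh/lodash-es@3.9.3"
    }
  }
}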
You can also construct an import map dynamically based on conditions: the example referenced in this article shows how you can load different modules based on support for the IntersectionObserver API.
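A sketch of that idea (module paths are made up):

<script>
  // Choose module implementations based on feature support,
  // then inject the import map before any module is loaded.
  const importMap = {
    imports: {
      "lazy-list": "IntersectionObserver" in window
        ? "/scripts/lazy-list.js"
        : "/scripts/lazy-list-fallback.js"
    }
  };
  const im = document.createElement("script");
  im.type = "importmap";
  im.textContent = JSON.stringify(importMap);
  document.currentScript.after(im);
</script>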
Demo app
In this article, we take the idea of Import Maps further by placing an import map between the host app and the microfrontends and applying the dependency inversion principle. This makes the host app depend not on a concrete microfrontend version or its location, but rather on its abstraction via a name or alias.
We are going to build an online store that has only one, but highly customizable assortment type: T-shirt.
Step 1: Outline the Architecture
There is an arbitrary number of microfrontends assigned to the development teams; each team is free to choose its tech stack, build, and CI/CD tools. The “only” constraint is to make sure each build pipeline produces three artefacts: an ESM bundle, a Manifest, and other static assets.
The lightweight Nest.js Import Map Resolver server has two main roles: to store and update the import map, and to handle the submission of JS assets. Single-spa has a similar solution available.
The Publisher reads your Manifest, extracts the bundle filename as well as the externalized dependencies, and publishes them to the Import Map Resolver.
The Assets Server is used as web-enabled storage to host the JS assets. To store images, videos, and other assets we can choose any storage, for example an Amazon S3 bucket. A CDN is used to serve third-party libs and frameworks as ES modules; a good one is esm.sh.
ESM bundle
Your production-ready application ESM bundle is generated by Webpack, Vite, Rollup, or any other bundler of your choice. For simplicity of the setup, the CSS Injected By JS plugin for Vite is used along with scoped styles to build a single ES module with injected CSS.
If your build produces more than one bundle (for example, due to code splitting), you have two options:
concatenate them after the build, for example via concat
alter the Publisher to loop over the multiple entry chunks and add the prefix, e.g.: my-component:main, my-component:polyfill, and so on.
Manifest
This is a JSON file that contains the mapping of non-hashed asset filenames to their hashed versions and, if you are using Vite, you just need to add manifest: true to your Vite config. This will produce the following file in the /dist of your project:
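A trimmed example of what the generated manifest.json can look like (the hash is illustrative):

{
  "src/main.ts": {
    "file": "assets/main.54ab9dd3.js",
    "src": "src/main.ts",
    "isEntry": true
  }
}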
The generated Manifest will be used by the Publisher to know the mapping of your microfrontend unique name to its ESM bundle.
Static assets
Everything else, such as images, videos and other static files required by your microfrontend.
Step 2: Define the UI and split into microfrontends
Our online store demo app will have three views: Home, Product & Cart:
Vue is used as a core “metaframework” to get out-of-the-box routing, simple state management with Pinia, and Vite as a bundler. It is not necessary at all to use a “metaframework”; moreover, during the build you’ll get errors from Vite’s internal import-analysis plugin because of unresolved imports (good news: there is a solution for that, see “Challenges → Metaframework”).
To demonstrate how several microfrontends can co-exist on the same page, they are built with four different frameworks. To make each app’s setup look similar, Vite template presets are used to generate the Vue, React, Lit, and Svelte microfrontends, which are compiled into Web Components. You may consider splitting your app by functional area and building your microfrontends around business domains, such as Order, Product, etc.
Step 3: Build the app
The full source of the Demo app can be found here.
Common problems & solutions
Take control away from the bundler when resolving imports
How do bundlers work? If you ignore the implementation details and go to the highest level of abstraction, they concatenate the whole jungle of JS modules into one big chunk that is minified, uglified, and tree-shaken to get rid of unused code. Simple concatenation wouldn’t work, though: you need to indicate the entry point and make sure you don’t have modules that import themselves – cyclic dependencies. Most bundlers solve this by building an abstract syntax tree; for example, Rollup does it with Acorn.
Using microfrontends resolved via Import Maps introduces a challenge for your bundler, which would normally know your dependencies at compile time, not at runtime. We need to tell Rollup to stop building the tree once a dependency from the import map is met and make the current module a leaf node.
Luckily, Vite, Rollup and Webpack have options to take control away from the bundler and let the browser resolve the specified imports by providing their names in the configuration.
Specs say that “any import maps must be present and successfully fetched before any module resolution is done”. Essentially, it means that the importmap must be inserted in the DOM earlier than any other async script.
Vite internally uses its build-html plugin, which produces an index.html with the entry point added via a <script type="module" src="bundle.js"> tag in the <head> section. This is not what we want. Instead, we would like to execute a script that fetches the import map first, adds it to the page, and then loads the app script.
To build a custom index.html, the Async Import Map plugin for Vite was created, internally using Rollup Plugin HTML. The plugin extracts the entry point script from the list of generated assets (by looking for isEntry: true), stashes it, loads the import map from the specified URL, and then unstashes and appends the entry point script, giving control back to your app.
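Conceptually, the resulting index.html behaves like this simplified sketch (not the plugin's literal output; the import map URL and bundle name are assumptions):

<script>
  fetch("https://assets.mycompany.io/importmap.json")
    .then((response) => response.json())
    .then((importMap) => {
      // 1. Insert the import map before any module script runs
      const im = document.createElement("script");
      im.type = "importmap";
      im.textContent = JSON.stringify(importMap);
      document.head.appendChild(im);
      // 2. Unstash the entry point, so its imports resolve via the map
      const entry = document.createElement("script");
      entry.type = "module";
      entry.src = "/assets/main.54ab9dd3.js";
      document.head.appendChild(entry);
    });
</script>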
When it comes to communication between the host app and the microfrontends, everything else you may come across is just an abstraction on top of the native methods. Here is a good summary of their pros and cons.
Since the goal is to use as many native web capabilities as possible and avoid vendor lock-in, we can stick to Props & Custom Events. One important note: to let an event “escape” from the Shadow DOM, we need to set bubbles: true and composed: true. This way, events propagate through the parent-child hierarchy as well as the shadow tree. A nice explanation can be found here.
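A small sketch of such an event (the event name and payload are made up):

// Inside a microfrontend's Web Component: emit an event that can
// cross the shadow boundary and bubble up to the host app.
this.dispatchEvent(
  new CustomEvent("add-to-cart", {
    detail: { productId: "t-shirt-42", quantity: 1 },
    bubbles: true,   // propagate up the parent-child hierarchy
    composed: true   // let the event escape the Shadow DOM
  })
);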
Next, dependency sharing: here we tell Rollup not to bundle the React dependencies and to provide global variables for them.
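A sketch of what this could look like in a Vite config (externals adapted to your stack):

// vite.config.ts (sketch)
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      // Leave these imports to the browser / import map
      external: ["react", "react-dom"],
      output: {
        globals: {
          react: "React",
          "react-dom": "ReactDOM"
        }
      }
    }
  }
});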
But how do we deal with dependency mismatches, when two or more microfrontends use the same lib but with different versions? Let’s say Footer and Header are two React major versions apart. As mentioned before, we can use scopes again.
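A sketch (versions and paths are illustrative; the Footer is pinned to the older React):

{
  "imports": {
    "react": "https://esm.sh/react@18.2.0"
  },
  "scopes": {
    "https://assets.mycompany.io/scripts/footer/": {
      "react": "https://esm.sh/react@16.14.0"
    }
  }
}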
If you need some sophisticated logic to build your import map, the Import Map Resolver is the place to put it. Let’s say one of your microfrontends publishes a new version that uses react@17.0.1, but you already have react@17.0.0 in your import map. In this case, the Import Map Resolver would remove the older version and replace it with the newer one: it is one patch version ahead, so backward compatibility is assumed.
Library microfrontend
Microfrontends can be published as a Custom Components library.
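A sketch of a multi-entry library build with Vite (entry paths are made up; multiple lib entries require a recent Vite version):

// vite.config.ts (sketch)
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    lib: {
      entry: {
        header: "src/header.ts",
        footer: "src/footer.ts"
      },
      formats: ["es"]
    }
  }
});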
This will produce two separate chunks, one for the Header and one for the Footer. Vite supports library Mode for Vue and other frameworks.
Without going into each library’s configuration details, the general principle is to alter your main.ts entry point (or each of your entry points, if there are many) so that it exposes your microfrontend defined as a Custom Element:
import MyComponent from './src/my-component';
customElements.define("my-component", componentWrapperFn(MyComponent));
where componentWrapperFn is a function provided by your (or a third-party) library that returns a custom element constructor extending HTMLElement. It could be the native defineCustomElement in the case of Vue, or the third-party reactToWebComponent from react-to-webcomponent. Here (and also here) is a great summary of how to build Web Components with different libraries and frameworks.
Metaframework
As mentioned in the “Demo app” section, a metaframework is used to glue the microfrontends together. Choosing no framework is also a valid option; Import Maps perfectly support this case by resolving imports directly in the browser. The choice of Vue is mainly to avoid writing boilerplate code for routing and to keep the container components lean, with little to no low-level DOM manipulation. There is a good article explaining why we need container components and how to structure them.
Routing
Routing between container components/pages is covered by the metaframework in case you are using one. If not, you can opt for Navigo as a simple dependency-free JS router.
In rare cases, when you need navigation within individual microfrontends, this is where it gets tricky: in the end, you only have one address bar. You can map the flattened structure of your compound URL state (for example, map https://my-app.com/mfe-1:article/search and https://my-app.com/mfe-2:invoice/edit/8 to https://my-app.com/(mfe-1:article/search//mfe-2:invoice/edit/8)) to enable two-level routing with the help of your framework. There is a library for Angular that uses a URL Serializer to intercept and manipulate browser URLs.
That being said, this approach also introduces unwanted tight coupling among microfrontends: the host app shouldn’t know the details of individual microfrontends’ routing. When a team needs to change or add a new URL, the host app has to be adjusted and redeployed. Instead, try to avoid two-level routing at the application design stage. To better understand all the consequences of this approach, you may want to read the book Micro Frontends in Action by Michael Geers, chapter “App shell with two-level routing”.
Pros
Let’s summarize all the benefits that Import Maps offer:
• flexibility for microfrontend teams (each team can have its own tech stack, CI/CD, coding guidelines, infrastructure: everything before final artefacts are built)
• easy step-by-step migration of existing codebases by replacing components or entire pages with microfrontends
• the host app is lean and detached from the development of microfrontends and focuses on the composition of pages, providing the data and handling events
• the host app is aware neither of the existence nor of the implementation details of your microfrontends: the only “contract” is the API of your microfrontend (props/events)
• import map entries are lazy-loaded: no JS is downloaded before you actually import()
• you may not need any build tools at all: import maps work in the browser at runtime
• it takes seconds to update your app (by changing entry in the import map)
• it takes seconds to rollback
Cons
Let’s summarize all the drawbacks that the usage of Import Maps brings:
• the overall number of bytes downloaded when using microfrontends is unavoidably higher than with a monolith. Even though you can externalize and share your app dependencies, you cannot avoid eventual code duplication in individual microfrontends
• not suitable for small and medium-sized projects, where a single core framework would be a better fit
Conclusion
Compared to Import Maps, using Module Federation has one major drawback: vendor lock-in, which makes your product dependent on another product, Webpack. All your microfrontends as well as your host app must comply with it and be on the correct version. You also cannot avoid the compilation step, while Import Maps can be used directly in the browser.
Our top 5 topics in November by the tech practice circle
Ronny Schreiber •
December 14, 2022 •
exchange.io
This month, the developers had stimulating conversations on MS Teams, sharing their opinions.
Sticky Scroll
Many of our frontend developers at MB.io use Visual Studio Code. The VS Code team has released a new setting that helps you keep track of your position in a file: Sticky Scroll. This feature shows the class/function you are currently working in at the top of the editor. Just enable it in the settings: “editor.stickyScroll.enabled”. Watch the short video to see the behavior explained.
The Microsoft Edge Dev Tools extension for VS Code
Over the past two years, the IT industry has been undergoing major changes, especially in the area of frontend development. In this report, 3703 frontend professionals from 125 countries and 19 frontend experts were surveyed to get an accurate overview of current trends and the future of frontend development. The goal is to provide insights on topics such as technologies, practices, and working conditions. This survey is a good starting point to discuss the insights within the team, to read the trends for ourselves, and to find out what we want to focus on in the future.
100 Seconds of Code
Curiosity and the will to learn something new every day are in our genes as developers. Especially easy to consume are the contents of this playlist: 100 Seconds of Code. Watch one of the clips every day and you will have 133 days of fresh input. The basics and commands of tools, technologies, and frameworks are covered.
Practical Accessibility – an online video course
A self-paced, get-right-down-to-it online video course for web designers and developers who want to start creating more accessible websites and applications today. The course is by Sara Soueidan, an inclusive design engineer, author, speaker, and trainer. Because accessibility is important to us at Mercedes-Benz.io, one of the A11y gurus in our company will definitely be watching the course.
free-for.dev
Experimenting with and trying out innovative new tools and services is vital in the life of a developer. This Git-hosted links collection gives you the opportunity to browse what’s out there, and especially what is free to use for developers: from major cloud providers to source code repos, web hosting, analytics, and game development, you name it. What will you try next?
Thanks to all who share their knowledge in our company in this way and use our communities to exchange ideas. Stay curious!
DRIVING MERCEDES-BENZ.io BRAND EVOLUTION
Pedro Vasconcelos Lopes •
December 12, 2022 •
exchange.io
As many of you know or are just finding out, Mercedes-Benz.io has just celebrated its 5th anniversary, and to mark this event we decided to roll out the plan for evolving Mercedes-Benz.io’s brand. The idea of a brand evolution had been in the works for months, but we felt that now was the perfect opportunity to stand back and take a hard look at our brand. The number of MB.ioneers is increasing, we are communicating actively in many more channels, showing up at more events, and betting on bigger exposure of our brand. So we need to create a bigger impact every time our brand is in the spotlight.
Celebrating 5th Anniversary
It was also important to look at the market, but most of all to reevaluate our brand positioning as a subsidiary of one of the most popular brands in the world. I realized that our place within the Mercedes-Benz Group was not being fully taken into consideration, and our brand connection relied too much on just our logo.
That is why we saw the opportunity for a brand evolution. In this article, we will explain the “behind the scenes” of the changes coming to our brand: what we did and why we did it. Plus, I will tell you how nerve-wracking it is to play around with a brand and embrace color gradients in the age of dark themes and solid backgrounds.
The Brandstorming
Before approaching what we wanted to change or evolve in our brand, it was important for us to make sure we knew what needed to stay the same – our logo. Since being more recognizable is our goal, it did not make sense to throw away all the brand-building of the last 5 years. For the same reason, we did not change our font either. Although we want to be closer to the Mercedes-Benz brand, we want to keep our main visual elements and what makes our brand unique.
So, now we know the font and the logo will not change. That makes everyone a little less scared, but also curious: at the end of the day, what could be the substantial changes that deserve this article?
The answer to that question is “clear” to me. As the brand designer, what I felt was lacking in the brand was a strong visual personality. We had all the elements there, but the glue was missing. And the WOW factor was missing as well. As a driver of the Mercedes-Benz.io brand at several events, it was becoming clear that our vision was not being transmitted visually in the best possible way, so we had to take all of this to the drawing board and try to produce concepts that made sense and ticked all the boxes we needed.
The Mercedes-Benz.io brand at events
The concept
Conceptually, we had a lot of ideas going around in our heads. Therefore, we tried to stick to keywords that would help us find a clear path we could later transform into visual assets. The most predominant ones were the ideas of movement, modernity, luxury, and adaptability. Capable of being like water. Ready for everything. For every change. And adapting accordingly, with the same ability as ever.
It was with all of this in mind that we agreed on these main changes.
We shifted our color palette by changing our complementary color from a bright green to a strong pink, while keeping our signature sky blue as the main color. We also brought in a fresh cohesive approach to our art direction and photography. And we have further developed our shapes into 3D ones, so we can give more depth and movement to the way we present ourselves. Let us now do a deep dive into these changes.
Movement, modernity, luxury, and adaptability
From Green to Pink to Gradients.
I will be honest: the revamp started here, in April 2022, when I tried to see how pink would work with our blue, and then how they would work in a gradient. In my opinion, it worked perfectly. We wanted to show fluidity, modernity, and the capability to adapt, and the gradients give us that. Not only does pink supply all the contrast green did not, but it also revealed itself to be an amazing gradient partner for our blue. It allowed us to have fluid, vibrant gradients that uncover purple in most of them. And why is this important, you may ask? Well, purple is a color highly related to luxury, which is the overall Mercedes-Benz purpose. This allows us to get closer without losing our identity.
From 2D to 3D Shapes
Our shapes have always given us room to play around. They are great for patterns and monochromatic designs. However, they lack depth. This, added to their lack of texture, can make it hard for our brand assets to stand out among other tech companies when side by side at events, for example. Therefore, we have created alternatives. We have not crossed the 2D shapes out of our brand book; we just introduced their 3D variants, with a fresh look that can be used whenever we want to show more personality and need shapes to create a design that stands out from the crowd.
Shapes and Backgrounds for the future
Cohesive Photography
We felt that we had a challenge with photography cohesiveness, due to having four different offices and relying on a lot of photos taken in each one of those offices.
For this reason, we made the decision to expand our team with a Media Producer who handles our visual assets, including taking all photos with the same clear vision, creating the unified feeling that was missing from our media. The same goes for our videos, where we were able to explore our visual language and personality, while keeping the cohesiveness of our brand identity.
Have you seen our amazing Lisbon office?
Modern Iconography
Regarding our iconography, we felt that it lacked modern aesthetics and was too thin-lined. This could result in a loss of power when put against other elements in a PowerPoint deck. Thus, we decided to shift towards an icon library that is more geared toward digital platforms, with wider lines and less detail, since the icons were not being used in printed assets.
Wrapping it all up:
We are really excited about this brand evolution and all the wonderful things that can come with it. We want to stand out and be recognizable. This requires bold moves from us, which does not mean we lose any of our values and personality traits. That is why we decided to keep our main brand assets intact, adjust what could be improved, and be creative with what we have in our hands.
Since a picture is worth a thousand words, I will leave it up to all the designs you will be able to see throughout our communication channels to WOW you as they have WOWed us. I hope you feel as connected to our brand and get as excited as we are.
Examples of the brand’s evolution applications
API Testing with Java and Spring Boot Test – Part 1: The Basic Setup
Luiz Martins •
November 30, 2022 •
exchange.io
Here at Mercedes-Benz.io (MB.io), we collaborate as multiple multi-disciplinary teams (nothing new for a Scrum-based organization).
I’m part of one of those teams, responsible for a Java-based microservice. Since this microservice sends data to a back-office application, we need to test the APIs provided by it.
With that said, we had the challenge to build a new API test framework from scratch.
In this series of articles we’ll show:
How we chose the tools
The process of creating and improving the test framework
Pipeline configuration
Test report
Choosing the language and framework
The main reason why we went for a Java-based framework is that the background of our team is Java, and the microservice itself is written in this language. Our team is composed of Java developers, so they can contribute to building the right solution and also maintain the code base of the test repository if needed.
The test framework we chose as the base of our solution was Rest Assured. The reason behind it is that Rest Assured is already used in several projects within our tribe at MB.io and is also widely used and maintained by the community.
We also added Spring Boot to organize, structure, and be the foundation of the project.
Setting up the project
Step 1: Create the project
We chose Maven as our dependency manager. Now, the first thing to do is to add the dependencies we need to our project.
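A sketch of the relevant pom.xml dependencies (the version number is illustrative; align it with your Spring Boot parent):

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>io.rest-assured</groupId>
        <artifactId>rest-assured</artifactId>
        <version>5.3.0</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>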
With this, we should be able to start organizing our project.
Step 2: Changing the Main class
The Main class should be changed to a SpringBootApplication. And the main method must be configured to run as a SpringApplication.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class Main {
public static void main(String[] args) {
SpringApplication.run(Main.class, args);
}
}
Step 3: Create a Service
To abstract access and configure the requests in one single place, we can create a new Service and take advantage of it.
Here is the place to set the proper configuration of the requests.
Let’s create a new method here to abstract the use of a POST request. In this method, we’ll receive the URL and the JSON body as parameters, so the file will be something like this:
package org.example.services;
import com.fasterxml.jackson.databind.JsonNode;
import io.restassured.RestAssured;
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.http.ContentType;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
@Slf4j
@Service
public class YourApiService {
private RequestSpecification spec;
@PostConstruct
protected void init() {
// On init you can set some global properties of RestAssured
RestAssured.useRelaxedHTTPSValidation();
spec = new RequestSpecBuilder().setBaseUri("https://reqres.in").setBasePath("/api").build();
}
public Response postRequest(String endpoint, JsonNode requestBody) {
return RestAssured.given(spec)
.contentType(ContentType.JSON)
.body(requestBody)
.when()
.post(endpoint);
}
}
Note: We’ll return the full response to be able to validate what we want within the test itself.
As you can see in the file above, we also take advantage of the built-in RequestSpecification that Rest Assured provides to set the baseURI and basePath for this service. This is a smart way to configure your service, because if you have more than one service in your test framework, each of them can have its own setup and host.
Step 4: Add a test case
First things first, let’s add the proper annotations to make it a Spring Boot JUnit 5 test class.
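A minimal sketch of the test class skeleton (the class name follows this example project; @SpringBootTest comes from org.springframework.boot.test.context and boots the Spring context so our service can be injected):

@SpringBootTest
public class ApiTest {
    // Test cases will be added here.
}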
After that, let’s add a constructor method and assign the Service to be used in our test to a class variable. Note that for Spring to inject the service through the test constructor, the constructor needs the @Autowired annotation:
private final YourApiService yourApiService;

@Autowired
public ApiTest(YourApiService yourApiService) {
    this.yourApiService = yourApiService;
}
Now we are good to start adding the test cases here. Let’s do that.
The postRequest method expects two parameters:
the endpoint we want to send the data as a String;
the request body as a JsonNode.
The first thing we want to do is create an object to send as the request body. We’ll take advantage of the jackson-databind library to help us with the object mapping.
@Test
public void testCreateUser() throws JsonProcessingException {
ObjectMapper mapper = new ObjectMapper();
String body = "{\"name\": \"Luiz Eduardo\", \"job\": \"Senior QA Engineer\"}";
JsonNode requestBody = mapper.readTree(body);
}
Now, we need to make the request and validate what we want. Let’s add that to our test case. The final result should be something like this:
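A sketch of how the finished test could look (the status code and field checks assume the reqres.in demo API configured above):

@Test
public void testCreateUser() throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    String body = "{\"name\": \"Luiz Eduardo\", \"job\": \"Senior QA Engineer\"}";
    JsonNode requestBody = mapper.readTree(body);

    Response response = yourApiService.postRequest("/users", requestBody);

    // reqres.in answers a successful creation with HTTP 201
    Assertions.assertEquals(201, response.getStatusCode());
    Assertions.assertEquals("Luiz Eduardo", response.jsonPath().getString("name"));
}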
Andreas Rau •
November 11, 2022 •
exchange.io
… and what we as software engineers, designers, and business people can learn from it
TLDR
In this article, you will learn about miscommunication pitfalls in aviation and see that the same pitfalls occur in software development, design, and business. We will dive deeper into topics like the aeronautical decision-making (ADM) process, have a glance at intercultural communication and how it dictates the way we speak and understand our world, and get an introduction to my own personal experiences and how you can effectively sabotage the productivity of your organization.
In the end, we will apply the ADM process to foster and improve your communication.
How miscommunication can trigger disasters
On 25 January 1990, Avianca Flight 52 from Bogotá, Colombia, to JFK Airport in New York was running out of fuel. Air traffic control (ATC) at JFK kept the airplane in a holding position until the plane’s fuel tank was running dangerously low. When revisiting the conversation between the airplane and ATC, at no point in time was the word “emergency” or “mayday” communicated by the pilots. The captain reportedly told the first officer to “tell them we are in an emergency”. But instead of letting ATC know that the airplane was in a serious situation, the first officer only told ATC, “We’re running out of fuel.” “With air traffic control unaware of the gravity of the problem, the plane crashed just nine minutes later. Out of the 158 people on board, 73 died, including the pilot and the co-pilot.”
The complexity of reality
The Austrian philosopher Ludwig Wittgenstein already shared this insight with the world in 1921, in his famous book “Tractatus Logico-Philosophicus”:
“Ordinary language is imperfect and cannot capture the full complexity of reality.”
From this we can derive that human language fails even under calm conditions. As the example above shows, it gets even worse under stressful conditions.
From Aviation to Software Development
First of all, a disclaimer: I am a software developer and a paragliding pilot. I am well aware that in airline aviation serious mistakes can endanger passengers’ lives, while in software development they mostly do not. What I am interested in are the circumstances of mistakes, the diversity of the people involved, and their communication. When I learned how many aviation accidents happened because of miscommunication, I started to question whether the same situations occur in my day-to-day job and even in my private life. In both worlds, we face time-critical decisions. In both, we need to react to events that happen while we are executing tasks. I see only a few differences. One of them is that there are more decisions and events to consider in aviation, which means more of them in a shorter amount of time. Both worlds make decisions, but in software development those decisions, their effects, and their execution by third parties take more time than in aviation. Think about this statement for a moment: compare a two-hour flight, with all the decisions you might need to take as a pilot, to a software development sprint of two weeks.
I am a paragliding pilot, so nothing close to an airline pilot, but in my experience the number of decisions I have to take in a two-hour flight is about the same as in a two-week sprint. In the next part, we will dive deeper into how the aviation industry makes decisions.
The aeronautical decision-making (ADM) process
The airline industry has established a process that every aircraft pilot has to follow. Aeronautical decision-making (ADM) is a five-step process that a pilot conducts when facing an unexpected or critical event. Adhering to this process helps the pilot maximize the chances of success.
Identify your situation; this is the most important step. Accurately detecting it enables you to make correct decisions and raises the probability of success.
Evaluate your options; in my experience, there are often more than I initially expected.
Choose from your generated options while assessing the risks and viability.
Act according to your plan.
Evaluate whether your action was successful and prepare for further decisions. You will always reach further decision points where you need to start the ADM process again.
This process is only one of many in aviation.
Let’s apply this to a software bug.
Identify your situation
What is the real cause of the bug?
Is it reproducible? Is it part of my product scope or not?
Did this bug occur because of our code changes or because of dependency updates?
Is this bug on live systems?
Can I resolve the bug?
Do I need help?
Can I get more information?
…
Evaluate your options
Patch the bug with a new version.
Ask for help.
Investigate further.
Decline because it’s a feature and the user is using it incorrectly.
…
Choose
Let’s assume the bug is on a live system and needs to be fixed ASAP -> Patch the bug with a new version.
Act
Please enter your routine for fixing a bug here
Evaluate if your action was successful
Is the live system running as expected and was the bug resolved?
Should we establish a standardized process to fix bugs?
Did I resolve the bug in time? If not, practice time management.
Is there anything we can do to mature the product?
Feedback to QA.
Share your insights.
Improve test procedures.
…
This is an easy example to illustrate how the ADM process can be applied to software development. A lesson I learned from both paragliding and software development is to always finish your plan, even if the circumstances change during your action. Trust in your abilities and execute your plan. If you followed the previous steps correctly, your actions cannot be severely wrong, provided that the information you based your analysis on was correct.
The language we use is an important part of communicating correctly with each other, so let’s have a look at Aviation English next.
Aviation English
Pilots and crew, regardless of their native language or any other languages they speak, travel across the world. They have to be able to communicate with every airport and every ATC they encounter, on a daily basis. This challenge arose with the rise of civil aviation in the mid-20th century, and an unspoken agreement was already in place: the language of the sky at that time was Aviation English. Now, Aviation English is, as misleading as it sounds, not the English language that we know. In fact, it is a separate language compared to what is spoken on the ground, and even native English speakers have quite a long road ahead of them to learn it. It uses standardized phraseology in radio communications to ensure aviation safety. Since the manufacturing as well as the operation of aircraft was dominated by English-speaking countries, the International Civil Aviation Organization (ICAO) slowly but steadily understood one thing.
Good processes and procedures themselves will not solve the issue.
In 1951, ICAO suggested that English should be the de facto international language of civil aviation. Let me emphasize that in 1951 they only suggested that English should be the language of the sky. It took 50 more years, until 2001, to actually determine English as the standardized language of air transport. With said standardization, ICAO published a directive stating that all aviation personnel, including pilots, flight attendants, and air traffic controllers, must pass an English proficiency test and meet its requirements. Before that, language skills were not checked in any standardized way.
Now let that sink in for a moment.
Tech/Design/Business English
In Tech, Design, and Business we have our own set of languages, and I am not talking about programming languages. Try to explain to your grandma or grandpa what exactly was decided in the last SAFe PI planning. You can substitute PI planning with almost any other meeting we have in our companies. Now ask for honest feedback: “Can you summarize what I just told you?” Be prepared: your elderly relatives might be very kind to you and try to avoid the task you just gave them, but they will most likely not be able to summarize what you explained. I already have issues trying to explain such things to my parents; I can sense while speaking that they won’t understand a word.
Although you and I are using English as a language, our terminology, acronyms, processes, and neologisms make Tech/Design/Business English a very complex language.
¿Habla español?
I have had the luck to work with many great people in my career so far, and I am more than grateful for every one of them. Nevertheless, I have discovered a couple of things for myself over the years. We are all working in the fields of Information Technology, Design, and Business. We all speak the same language and share the same enthusiasm and skills. And still, we are different. We have all been shaped and formed during our private and professional lives in ways one can only imagine. We have a wide range of different religious, social, ethnic, and educational backgrounds, living close to our families or far away from them. These differences became more and more visible to me the more time I spent with my colleagues: differences in how they perceive what you are trying to tell them, differences in how people value their pursuits and sometimes sacrifice their own benefit for the sake of the group, differences in how authorities communicate with subordinates and vice versa. And these are only the people I have had the chance to work with. You have made your own experiences and shared time with so many more great souls.
What we are now tapping into is the field of intercultural communication. Geert (Gerard Hendrik) Hofstede (1928–2020) was a Dutch sociologist who proposed a set of indicators that determine the cultural characteristics of different peoples, based on research conducted in the 1960s and 70s. The subject was part of my studies for one semester; at that time my brain did not grasp the extent of the topic and how important it would be for my future life. Intercultural communication is the discipline that studies communication across different cultures and social groups; in other words, how culture affects communication. There is an impressive amount of research in this field, investigating topics like:
Collectivist versus Individualistic
High Context versus Low Context
Power Distance
Femininity versus Masculinity
Uncertainty Avoidance
Long-term Orientation versus Short-term Orientation
Indulgence versus Restraint
All of them are worth investigating, and I encourage you to do so. I have gathered further readings at the end of this article which should get you started.
Personal
On top of every culture, there is you: how you perceive the world around you and how you make sense of everything that shapes you. Your personal touch might very well steer you onto a counter-course to what your culture tried to instill in you for the entirety of your childhood and beyond. I am not suggesting that you are all rebels; I want to highlight that the personal level of communication can be far off from how the folks back in your hometown used to talk. If you haven’t already met a vast variety of people during your school time, you will definitely do so in your professional life. In software development, I have had the chance to work with many great people from all over the world. Although at some point already familiar with the concept of intercultural communication, I often unknowingly said or did something I thought would be appropriate at that exact moment in time… it wasn’t. Having the basics of intercultural communication in mind is necessary but not sufficient. Get to know the person you are talking to and discover a new level of communication.
Interim
We have learned a couple of things about communication, so let’s take a second look at the introductory example. The first officer, in disregard of what the captain told him, made a severe mistake by not properly communicating the extent of the situation. Maybe, in her/his culture, it is common to understate issues, and it can be rude to talk about severe problems directly. Nevertheless, the situation required clear and fact-based information. Correct identification of the situation was therefore not possible; all further steps in the aeronautical decision-making process of the ATC were based on false information, and we all know the outcome. I often find myself in meetings where colleagues or superiors introduce me to a brand-new process that will revolutionize how our company works, solve all problems and issues at once, and make us all happier. Thanks to good marketing, everybody is excited and eager to implement the new processes, at huge cost in time, money, and motivation, only to find out that in the end it didn’t work, again. I do not want to sound pessimistic; I want to tell you my perspective. Looking at software development and all the processes we have, I want to learn from aviation and invest more time and effort into educating our colleagues on how to communicate properly and fruitfully in a standardized and organized way.
Important factors for this, in my opinion, are honest, transparent, and truthful communication, where hidden agendas and intercultural communication pitfalls are avoided. And to make one point clear: more communication does not equal better communication. On this foundation, I think, processes can be fruitful.
Lost in translation
An enterprise company operating in multiple countries spread over multiple continents: what is the first thing that comes to your mind? For me, it is a rich and diverse project team spanning as many time zones as possible. During my early career, I was exposed to a trend in IT where teams could not be diverse enough. I know many companies that still steer 100% in this direction, but I also know many that do not. What I am trying to do here is make you, the reader, think. Think about your current circumstances: where are you working? Who are your colleagues? Do you work together effectively? Is communication easy for you, or is it a burden? Do you have the feeling that meetings actually create useful artifacts, or are they mostly a waste of time? Ask yourselves these questions and assess. I can only speak for myself, and I have observed both: in my work, a rich and diverse team can accelerate me, but there have also been times when it slowed me down.

Look again at the example I showed you earlier: Aviation English was an agreement on communication, but for a long time it was not part of any training or checks. It is important to say that this has nothing to do with the people per se. When I came to college in Germany, I had the opportunity to choose whether I wanted my studies and lectures in German or English. I was motivated, and although my mother tongue is German, I wanted my studies to be in English. This was exciting to me, and silly young me thought it would be good to study in English right away, since the language of IT is English. Now I really regret my decision. Throughout my studies, I was confronted with a lot of bad English. Many interesting topics got lost in translation simply because the lecturer was not proficient enough. Please do not get me wrong: my English at that time was no better. I am just trying to make the point that the content of a message can be severely harmed when it is not communicated properly.

During my career, I often conducted applicant interviews and sometimes had to clearly state my veto: the applicant might have had the best CV and great experience, but bad English skills. In the IT industry, we have the great opportunity to have rich and diverse teams. This circumstance is not exclusive to IT, but it is still, in my opinion, a gift we should be thankful for. To make sure we respect and maintain this privilege, I have a suggestion: if we put so much emphasis on what technologies an applicant knows and for how many years she/he has worked with tech stack a, b, or c, we should also check thoroughly whether her/his English skills are proficient enough to communicate properly in a big and diverse company. We should offer language training, and also expect and verify that new hires improve their language skills if necessary. Language should never be a limiting factor. Coming from IT, and especially web development, I deal with accessibility day in and day out, so it is easy for me to understand that digital tools and devices are about inclusion. In my opinion, it is the same with language.
Let’s take this a step further. A good colleague of mine introduced me to an amazing article on some Second World War sabotage practices of the CIA’s predecessor.
How to effectively sabotage your organization’s productivity
The “Simple Sabotage Field Manual” was created in 1944 by the Office of Strategic Services (OSS), the CIA’s wartime predecessor. It describes how everyday people could help the Allies by reducing production in factories, offices, and transportation lines in their own countries. There is a wonderful article from Business Insider which highlights it. Despite being written in 1944, these instructions are timeless. I am sharing with you the list of instructions from the Business Insider article, filtered for communication. See if any of those listed below remind you of your organization, your colleagues, or even yourself.
Organizations and Conferences
Insist on doing everything through “channels.” Never permit shortcuts to be taken in order to expedite decisions.
Make “speeches.” Talk as frequently as possible and at great length. Illustrate your “points” by long anecdotes and accounts of personal experiences.
When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committee as large as possible — never less than five.
Bring up irrelevant issues as frequently as possible.
Haggle over precise wordings of communications, minutes, and resolutions.
Refer back to matters decided upon at the last meeting and attempt to re-open the question of the advisability of that decision.
Managers
Hold conferences when there is more critical work to be done.
Multiply the procedures and clearances involved in issuing instructions, paychecks, and so on. See that three people have to approve everything where one would do.
Employees
Work slowly.
Contrive as many interruptions to your work as you can.
Do your work poorly and blame it on bad tools, machinery, or equipment. Complain that these things are preventing you from doing your job right.
Never pass on your skill and experience to a new or less skillful worker.
I highly encourage you to read the full Business Insider article for some more bittersweet laughs.
Foster your communication
Although they are funny to read, the sad truth is that some of these instructions are common practice in many organizations. I advise you to take this list into your notes, bookmark the article, read through it frequently, and ask yourself: do my communication behaviors fall into the same categories? If yes, no worries! Everybody has to start somewhere. Remember the ADM process:
Identify your situation and ask yourself: Am I sabotaging my company? It’s important to be honest here! (Remember: accurately detecting your situation enables you to make correct decisions and raises the probability of success.)
Evaluate your options; there are plenty! Seek feedback from people you trust and ask them for honest input on the way you communicate, do a Udemy course, or heck, maybe even join a debating club!
Choose from your generated options while assessing the risks and viability.
Act according to your plan.
Evaluate if your action was successful and prepare for further decisions.
This is not easy – but it can be done. You can do it!
Now, to bring this article to an end, let’s look at the last dimension of communication.
Are you talking to me?
I often find myself in situations where I talk about colleagues and superiors rather than talking with them. Talking about colleagues is easy, but as with most things in life, the outcome of easy is not great. It takes courage to work on yourself, and even more to reach out for help. There is no effortless solution; be honest and ask yourself if you really want to change something about how you communicate and how you are perceived while communicating. This article is at most a catalyst that will hopefully ignite your own journey.
Recap
We have learned how two professions that at first sight seem completely different share many fundamental communication skills. We have had a look at the aeronautical decision-making process; at what intercultural communication is; at the fact that processes are only as good as the material we feed into them; at how to effectively sabotage your company; and at what you can do to foster and improve your communication.
We touched on many topics today, and I hope this article touched you personally in at least one of them. If you now think about what you have read in the last couple of minutes, I already feel successful in my educational mission. And if you remember only one thing, then let it be something along the lines of this:
‘Don’t worry; the worst mistake you can make is to not communicate at all.’
Further readings
As promised, here is a small collection to further educate yourself about the topics covered in this article.