API Testing with Java and Spring Boot Test – Part 2: Improving the solution

In the last part of this step-by-step, we created the project, set up the test framework, and also did all the configurations needed to run our API tests.

You can see the first part of the series further down in this post.

Let’s continue to grow our test framework, but first, we need to make some improvements to the existing code. In this guide, we’ll:

  • Refactor the object mapping (to make the JSON files easier to handle)
  • Improve the response validations
  • Handle multiple environments inside our tests.

These changes will make our code base cleaner and easier to maintain, helping us build a scalable API test framework.

Let’s do it.

Refactoring the Object mapping

We’ll take advantage of the Spring @Repository stereotype to separate the responsibility of mapping the objects (JSON) we’re going to use inside our tests. That way, we can take another step forward in our code cleanup.

So, first of all, we’re going to:

  • Create a new package called repositories
  • Then create a new class inside this package called FileUtils.

We’ll also take the opportunity to change the way we map the object so that it is not hard-coded but lives in a proper resource file. That way, when we need to change the test data, we don’t have to change the test but only the corresponding resource file.

package org.example.repositories;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.stereotype.Repository;

import java.io.IOException;
import java.net.URL;

@Repository
public class FileUtils {

    /**
     * Read a JSON file from the classpath resources and return it as a JsonNode.
     *
     * @param filePath path of the resource file, relative to the resources folder
     * @return the parsed JSON content as a JsonNode
     * @throws IOException if the file cannot be read or parsed
     */
    public static JsonNode readJsonFromFile(String filePath) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        URL res = FileUtils.class.getClassLoader().getResource(filePath);
        if (res == null) {
            throw new IllegalArgumentException(String.format("File not found! - %s", filePath));
        }
        return mapper.readTree(res);
    }
}


As you can see in the file above, we created a function to read a JSON file and then return the object already mapped – similar to the approach we had before in the test file.

Now, we’ll structure the resources folder to accommodate the JSON files.

In the resources folder, let’s create a new directory called user and then a file called createUser.json to store the request body of the operation we’ll perform.

{
  "name": "Luiz Eduardo",
  "job": "Senior QA Engineer"
}


After that, we need to update our test. Now we want to get the file data by using the new function we created for that purpose. The updated test will look like this:

package api.test.java.tests;

import com.fasterxml.jackson.databind.JsonNode;
import io.restassured.response.Response;
import org.example.repositories.FileUtils;
import org.example.services.YourApiService;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;

import java.io.IOException;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;

@ExtendWith(SpringExtension.class)
@SpringBootTest
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class ApiTest {

    private final YourApiService yourApiService;

    public ApiTest(YourApiService yourApiService) {

        this.yourApiService = yourApiService;
    }

    @Test
    public void testCreateUser() throws IOException {

        JsonNode requestBody = FileUtils.readJsonFromFile("user/createUser.json");

        Response res = yourApiService.postRequest("/users", requestBody);
        assertThat(res.statusCode(), is(equalTo(201)));
    }
}


Much better! By keeping the code cleaner we are helping our future selves with its maintenance – trust me, you’ll be very glad to see this.

Improving the response validation

Great! Now, let’s have a look at the response validation.

In some cases, we want to check the full response body – or at least some parts of it – to fulfill the test requirements.

To do that, we’ll create:

  • A new Repository to abstract the responsibility and help us check the full JSON response body
  • A function to handle the check of the JSON response.

We’ll also add the “jsonassert” dependency to assert the JSON.

The pom.xml file will look like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.6</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <groupId>api.test.java</groupId>
    <artifactId>apitest</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>api-test-java</name>
    <description>Api Tests</description>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-text</artifactId>
            <version>1.10.0</version>
        </dependency>

        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>5.3.0</version>
            <exclusions><!-- https://www.baeldung.com/maven-version-collision -->
                <exclusion>
                    <groupId>org.apache.groovy</groupId>
                    <artifactId>groovy</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.groovy</groupId>
                    <artifactId>groovy-xml</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>json-schema-validator</artifactId>
            <version>5.3.0</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-log4j2</artifactId>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.24</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.skyscreamer</groupId>
            <artifactId>jsonassert</artifactId>
            <version>1.5.1</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
            <version>5.9.1</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-params</artifactId>
            <version>5.9.1</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
            <version>5.9.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>


The newly created ResponseUtils class will be something like this:

package org.example.repositories;

import com.fasterxml.jackson.databind.JsonNode;
import io.restassured.response.Response;
import org.json.JSONException;
import org.skyscreamer.jsonassert.JSONAssert;
import org.skyscreamer.jsonassert.JSONCompareMode;
import org.springframework.stereotype.Repository;

@Repository
public class ResponseUtils {

    public static void assertJson(String actualJson, String expectedJson, JSONCompareMode mode) throws JSONException {
        JSONAssert.assertEquals(expectedJson, actualJson, mode);
    }

    public static void assertJson(Response response, JsonNode expectedJson) throws JSONException {
        assertJson(response.getBody().asString(), expectedJson.toString(), JSONCompareMode.LENIENT);
    }
}

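If you ever need a stricter, full-body comparison instead of the lenient default, you can call the first overload directly. A quick sketch, assuming response is the Rest Assured Response and expectedJson is the JsonNode read from a file:

// STRICT also fails when the actual body contains fields that are missing from the
// expected JSON, and it enforces the order of array elements
ResponseUtils.assertJson(response.getBody().asString(), expectedJson.toString(), JSONCompareMode.STRICT);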

The next step should be to use this new function and improve our test. To do that, we’ll configure a GET request on YourApiService and return the full Response object. Then we should be able to check the response body.

public Response getRequest(String endpoint) {

    return RestAssured.given(spec)
        .contentType(ContentType.JSON)
    .when()
        .get(endpoint);
}

Now, it’s just a matter of adding the test case to the ApiTest class and using the same strategy of keeping the expected JSON response file in its own resource directory. Finally, we’ll have something like this:

@Test
public void testGetUser() throws IOException, JSONException {

    Response res = yourApiService.getRequest("/users/2");

    JsonNode expectedResponse = FileUtils.readJsonFromFile("responses/user/specific.json");
    
    assertThat(res.statusCode(), is(equalTo(200)));
    ResponseUtils.assertJson(res, expectedResponse);
}
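
The expected payload lives in the resources folder as responses/user/specific.json. Because the comparison uses the LENIENT mode, the file only needs to contain the fields we actually care about; extra fields in the real response are ignored. A minimal sketch, assuming the well-known reqres.in /users/2 payload, could be:

{
  "data": {
    "id": 2,
    "first_name": "Janet",
    "last_name": "Weaver"
  }
}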

Quite easy to understand if you just look at the test case 🙂

Executing the tests over multiple environments

Now we have the tests properly set up, and everything is in the right place. One thing that could be on your mind right now is: "Ok, but I have a scenario in my product in which I need to run my test suite over multiple environments. How do I do that?"

And the answer is – property files.

Property files are used to store environment-specific data which we can use throughout our test suite, like the application host, port, and path to the API. You can also reference environment variables to use within your test framework. However, be careful, since we don’t want to make this information public. You can see an example in the lines below.

With Spring Boot, we take advantage of profiles to capture the specifics of each environment our application has.

So, let’s do that. Inside the resources folder, we’ll create a new file called application-prod.properties to store the values of the production cluster of the test application. The file will store something like this:

apitest.base.uri=https://reqres.in
apitest.base.path=/api
apitest.token=${TOKEN}

Now, the only thing missing is to change our service to get the values stored in the property file.

To get the values from the property files, we’ll use the annotation @Value. This annotation will provide the values from the properties we set in the application-prod.properties file.

Bear in mind: You’ll need to set the TOKEN environment variable before running the tests. Spring resolves the ${TOKEN} placeholder in the properties file from the environment variables you have set.
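
For example, on a Unix-like shell you could export a purely illustrative token before launching the tests; Spring then picks it up from the environment:

export TOKEN=my-secret-token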

The updated version of YourApiService class will look like this:

package org.example.services;

import com.fasterxml.jackson.databind.JsonNode;
import io.restassured.RestAssured;
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.http.ContentType;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;

@Slf4j
@Service
public class YourApiService {

    @Value("${apitest.base.uri}")
    private String baseURI;

    @Value("${apitest.base.path}")
    private String basePath;

    @Value("${apitest.token}")
    private String myToken;

    private RequestSpecification spec;

    @PostConstruct
    protected void init() {

        RestAssured.useRelaxedHTTPSValidation();

        spec = new RequestSpecBuilder().setBaseUri(baseURI).setBasePath(basePath).build();
    }

    public Response postRequest(String endpoint, JsonNode requestBody) {

        return RestAssured.given(spec)
            .contentType(ContentType.JSON)
            .body(requestBody)
        .when()
            .post(endpoint);
    }

    public Response getRequest(String endpoint) {

        return RestAssured.given(spec)
            // In our case, we won't use the "token" variable, as the API doesn't require so.
            // But if your API require, here you can use the token like this:
            // .auth().basic("token", myToken)
            .contentType(ContentType.JSON)
        .when()
            .get(endpoint);
    }
}

Show code in Github Gist

That’s a great step up. This way, if you have multiple environments in your setup, you just need to create another application-YOUR_PROFILE_NAME.properties.
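
For instance, a hypothetical application-dev.properties for a development environment could look like this (the host below is just a placeholder):

apitest.base.uri=https://dev.example.com
apitest.base.path=/api
apitest.token=${TOKEN}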

Executing the test suite

You must be wondering: How do I run the test suite with this newly created profile?

The answer is simple: just execute mvn clean test -Dspring.profiles.active=prod.

By default, if you just run the mvn clean test command, Spring Boot will try to find a file called application.properties and automatically activate it.
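
If you prefer not to pass the flag on every run, you can also set a default profile in that application.properties file; a minimal sketch:

spring.profiles.active=prod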

Now we have significantly improved the test setup of our application by:

  • Refactoring the object mapping to clean up our code and apply some best practices
  • Improving the response validation by adding a new dependency and using it to simplify the check
  • Learning how to handle multiple test environments. This should be useful when it comes to companies that have layers of environments before the code reaches production

Are you curious? Building a Java API test framework part 3 will further improve our solution. We will then go deeper into the following topics:

  • Test reporting with Allure reports
  • Configuring a CI pipeline with GitHub Actions
  • Publishing the test report on GitHub Pages

(Image by Mohammad Rahmani on Unsplash).

API Testing with Java and Spring Boot Test – Part 1: The basic setup

Here at Mercedes-Benz.io (MB.io), we collaborate as multiple multi-disciplinary teams (nothing new to a Scrum-based organization).

I’m part of one of those teams, responsible for a Java-based microservice. Since this microservice sends data to a back-office application, we need to test the APIs provided by it.

With that said, we had the challenge to build a new API test framework from scratch.

In this series of articles we’ll show:

  • How we choose the tools
  • The process of creating and improving the test framework
  • Pipeline configuration
  • Test report

Choosing the language and framework

The main reason we went for a Java-based framework is that our team’s background is Java and the microservice itself is written in this language. Since our team is composed of Java developers, they can contribute to building the right solution and also maintain the code base of the test repository whenever needed.

The test framework we chose as the base of our solution was REST Assured. The reason behind it is that REST Assured is already used in several projects within our tribe at MB.io and is also widely used and maintained by the community.

We also added Spring Boot to organize, structure, and be the foundation of the project.

Setting up the project

Step 1: Create the project

We chose Maven as our dependency manager. Now, the first thing to do is add the dependencies we need to our project.

Tip: You can use the Spring Initializr to get a basic pom.xml file with the initial Spring setup.

After the initial setup, we need to add the dependencies for the rest-assured test framework and other things we’ll use to make our lives easier.

The pom.xml file should be something like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.3</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <groupId>api.test.java</groupId>
    <artifactId>apitest</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>api-test-java</name>
    <description>Api Tests</description>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-text</artifactId>
            <version>1.9</version>
        </dependency>

        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>5.1.1</version>
            <exclusions><!-- https://www.baeldung.com/maven-version-collision -->
                <exclusion>
                    <groupId>org.apache.groovy</groupId>
                    <artifactId>groovy</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.groovy</groupId>
                    <artifactId>groovy-xml</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>json-schema-validator</artifactId>
            <version>5.1.1</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-log4j2</artifactId>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.24</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>


With this, we should be able to start organizing our project.

Step 2: Changing the Main class

The Main class should be annotated with @SpringBootApplication, and the main method must be configured to run it as a SpringApplication.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Main {

    public static void main(String[] args) {

        SpringApplication.run(Main.class, args);
    }
}


Step 3: Create a Service to manage your API

To abstract access and configure the requests in one single place, we can create a new Service and take advantage of it.

Here is the place to set the proper configuration of the requests.

Let’s create a new method here to abstract the use of a post request. In this post request, we’ll provide the URL and the JSON body as parameters, so the file will be something like this:

package org.example.services;

import com.fasterxml.jackson.databind.JsonNode;
import io.restassured.RestAssured;
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.http.ContentType;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;

@Slf4j
@Service
public class YourApiService {

    private RequestSpecification spec;

    @PostConstruct
    protected void init() {

        // On init you can set some global properties of RestAssured
        RestAssured.useRelaxedHTTPSValidation();

        spec = new RequestSpecBuilder().setBaseUri("https://reqres.in").setBasePath("/api").build();
    }

    public Response postRequest(String endpoint, JsonNode requestBody) {

        return RestAssured.given(spec)
            .contentType(ContentType.JSON)
            .body(requestBody)
        .when()
            .post(endpoint);
    }
}


Note: We’ll return the full response to be able to validate what we want within the test itself.

As you can see in the file above, we also take advantage of the built-in RequestSpecification that REST Assured has to set the baseURI and basePath for this service. This is a smart way to configure your service because, if you have more than one service in your test framework, each of them can have its own setup and host.
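
For example, a second, purely hypothetical service pointing at another host could follow the exact same pattern (the class name, base URI, and base path below are just placeholders):

package org.example.services;

import io.restassured.RestAssured;
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.http.ContentType;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;

@Slf4j
@Service
public class AnotherApiService {

    private RequestSpecification spec;

    @PostConstruct
    protected void init() {

        // Each service gets its own base URI and base path
        spec = new RequestSpecBuilder().setBaseUri("https://another-api.example.com").setBasePath("/v1").build();
    }

    public Response getRequest(String endpoint) {

        return RestAssured.given(spec)
            .contentType(ContentType.JSON)
        .when()
            .get(endpoint);
    }
}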

Step 4: Add a test case

First things first, let’s add the proper annotations to make it a Spring Boot JUnit 5 test class.

@ExtendWith(SpringExtension.class)
@SpringBootTest
@TestInstance(TestInstance.Lifecycle.PER_CLASS)

After that, let’s add a constructor method and assign the Service to be used in our test as a class variable.

private final YourApiService yourApiService;

public ApiTest(YourApiService yourApiService) {

    this.yourApiService = yourApiService;
}

Now we are good to start adding the test cases here. Let’s do that.

The postRequest method expects two parameters:

  • the endpoint we want to send the data to, as a String;
  • the request body as a JsonNode.

The first thing we want to do is create an object to send as the request body. We’ll take advantage of the jackson-databind library to help us with the object mapping.

@Test
public void testCreateUser() throws JsonProcessingException {

    ObjectMapper mapper = new ObjectMapper();

    String body = "{\"name\": \"Luiz Eduardo\", \"job\": \"Senior QA Engineer\"}";
    JsonNode requestBody = mapper.readTree(body);
}

Now, we need to make the request and validate what we want. Let’s add that to our test case. The final result should be something like this:

package api.test.java.tests;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.restassured.response.Response;
import org.example.services.YourApiService;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;

@ExtendWith(SpringExtension.class)
@SpringBootTest
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class ApiTest {

    private final YourApiService yourApiService;

    public ApiTest(YourApiService yourApiService) {

        this.yourApiService = yourApiService;
    }

    @Test
    public void testCreateUser() throws JsonProcessingException {

        ObjectMapper mapper = new ObjectMapper();

        String body = "{\"name\": \"Luiz Eduardo\", \"job\": \"Senior QA Engineer\"}";
        JsonNode requestBody = mapper.readTree(body);

        Response res = yourApiService.postRequest("/users", requestBody);
        assertThat(res.statusCode(), is(equalTo(201)));
    }
}


Note: Bear in mind that this is just the first iteration; we’ll improve the code to keep the responsibilities within the respective classes.

What we’ve seen so far:

  • The project’s basic setup
  • How to structure the project
  • How to abstract the requests in the service layer
  • The first functional test case

This is the end of part 1 of this series of articles. The next part will cover:

  • A short refactor of the object mapping
  • Improving the response validation
  • Property files

See you soon!

Photo by Nubelson Fernandes on Unsplash

Andreas Rau – How pilots communicate

… and what we as Software Engineers, Designers, and Businessmen can learn from it

TLDR

In this article you will learn about miscommunication pitfalls in aviation and how the same pitfalls occur in software development, design, and business. We will dive deeper into topics like the aeronautical decision-making (ADM) process, have a glance at intercultural communication and how it dictates the way we speak and understand our world, and then move on to my own personal experiences and how you can effectively sabotage the productivity of your organization.

In the end, we will apply the ADM to foster and improve your communication.

How miscommunication can trigger disasters

On 25th January 1990, the Avianca flight from Bogota, Colombia, to JFK Airport in New York was running out of fuel. Air traffic control (ATC) at JFK kept the airplane in a holding pattern until the plane’s fuel tank was running dangerously low. When revisiting the conversation between the airplane and the ATC, at no point in time was the word "emergency" or "mayday" communicated by the pilots. The captain reportedly told the first officer to "tell them we are in an emergency". Instead of letting ATC know that the airplane was in a serious situation, the first officer only told ATC: "We’re running out of fuel". With air traffic control unaware of the gravity of the problem, the plane crashed just nine minutes later. Out of the 158 people on board, 73 died, including the pilot and the co-pilot.

The complexity of reality

The Austrian philosopher Ludwig Wittgenstein already shared this with the world in 1921, in his famous book "Tractatus Logico-Philosophicus":

“Ordinary language is imperfect and cannot capture the full complexity of reality.”

From this, we can derive that even under calm conditions, human language fails. As the example above shows, it gets even worse under stressful conditions.

From Aviation to Software Development

First of all, a disclaimer: I am a software developer and a paragliding pilot. I am well aware that in airline aviation serious mistakes can endanger passenger lives, and software development in most cases does not. What I am interested in are the circumstances of mistakes and the diversity of the people involved, as well as their communication. When I learned how many aviation accidents happened because of miscommunication, I started to question whether the same situations occur in my day-to-day job and even in my private life. In both worlds, we face time-critical decisions. We both need to react to events that happen while we are executing tasks. I see only a few differences. One of them is that there are more decisions and events to consider in aviation, which means more of them in a shorter amount of time. Both worlds make decisions, but in software development those decisions, their effects, and their execution by third parties take more time than in aviation. Think about this statement for a moment. Compare a 2h flight, with all the decisions that you might need to take as a pilot, to a software development sprint of two weeks.

I am a paragliding pilot, so nothing close to a real pilot, but in my experience, the number of decisions I have to make in a 2h flight compared to a two-week sprint is about the same. In the next part, we will dive deeper into how the aviation industry makes decisions.

The aeronautical decision-making (ADM) process

The airline industry has identified a process that every aircraft pilot has to obey. Aeronautical decision-making (ADM) is a five-step process that a pilot needs to conduct when facing an unexpected or critical event. Adhering to this process helps the pilot maximize the chance of success.

  1. Start by identifying your situation; this is the most important step. Accurately detecting it enables you to make correct decisions and raises the probability of success.
  2. Evaluate your options; in my experience, there are often more than I expected at the beginning.
  3. Choose from your generated options while assessing the risks and viability.
  4. Act according to your plan.
  5. Evaluate if your action was successful and prepare for further decisions. You will always have further decision points where you need to start the process of ADM again.

This process is only one of many more in aviation.

Let’s apply this to a software bug.

  1. Identify your situation
    • What is the real cause for the bug?
    • Is it reproducible, and part of my product scope or not?
    • Did this bug occur because of our code changes or of dependency updates?
    • Is this bug on live systems?
    • Can I resolve the bug?
    • Do I need help?
    • Can I get more information?
  2. Evaluate your options
    • Patch the bug with a new version.
    • Ask for help.
    • Investigate further
    • Decline because it’s a feature and the user is using it incorrectly.
  3. Choose
    • Let’s assume the bug is on a live system and needs to be fixed asap -> Patch the bug with a new version.
  4. Act
    • Please enter your routine for fixing a bug here
  5. Evaluate if your action was successful
    • Is the live system running as expected and was the bug resolved?
    • Should we establish a standardized process to fix bugs?
    • Did I resolve the bug in time? If not, practice time management.
    • Is there anything we can do to mature the product?
    • Feedback to QA.
    • Share your insights.
    • Improve test procedures.

This is an easy example to illustrate how the ADM process can be applied to software development. A lesson I learned from paragliding and software development is to always finish your plan, even if the circumstances change during your action. Trust in your abilities and execute your plan. If you followed the previous steps correctly, your actions cannot be severely wrong, given that the information you based your analysis on was correct.

The language we use is an important part of communicating correctly with each other, so let’s have a look at Aviation English next.

Aviation English

Pilots and crew, regardless of their native language or any other languages they speak, travel across the world. They have to be able to communicate with every airport and every ATC they face, on a daily basis. This was a challenge that came with the rise of civil aviation in the mid-20th century. There was already an unspoken agreement in place: the language of the sky at that time was Aviation English. Now, Aviation English is, as misleading as it sounds, not the English language that we know. In fact, it is a separate language compared to what is spoken on the ground, and even native English speakers have quite a long road ahead of them to learn it. It uses standardized phraseology in radio communications to ensure aviation safety. Since the manufacturing as well as the operation of aircraft was dominated by English-speaking countries, the International Civil Aviation Organization (ICAO) slowly but steadily understood one thing.

Good processes and procedures themselves will not solve the issue.

In 1951, they suggested that English should be the de facto international language of civil aviation. Let me emphasize that ICAO in 1951 only suggested that English should be the language of the sky. It took them 50 more years, until 2001, to actually determine English as the standardized language of air transport. With said standardization, they published a directive stating that all aviation personnel, including pilots, flight attendants, and air traffic controllers, must pass an English proficiency test and meet the requirements. Before that, language skills were not checked in any standard way.

Now let that sink in for a moment.

Tech/Design/Business English

In Tech, Design, and Business we have our own set of languages. I am not talking about programming languages. Try to explain to your grandma/grandpa what exactly was decided in the last SAFe PI planning. You can substitute PI planning with almost every other meeting we have in our company. Now ask for honest feedback: "Can you summarize what I just told you?" Be prepared: it might be the case that your elderly relatives are very kind to you and try to avoid the task you just gave them. But they will most likely not be able to summarize what you have explained to them. I already have issues trying to explain such things to my parents. I can already sense while speaking that my parents won’t understand a word.

Although you and I are using English as a language, our terminology, acronyms, processes, and neologisms make Tech/Design/Business English a very complex language.

¿Habla español?

I had the luck to work with many great people in my career so far and I am more than grateful for every one of them. Nevertheless, I discovered a couple of things for myself over the years. We are all working in the field of Information Technology, Design, and Business. We are all speaking the same language and share the same enthusiasm and skills. And still, we are different. We are all shaped and formed during our private and professional lives in ways one can only imagine. We all have a wide range of different religious, social, ethnic, and educational backgrounds. Living close to our families or far away from them. These differences became more and more visible to me with the amount of time I have spent with them. Differences in how colleagues perceive what you are trying to tell them. Differences in how people value their pursuits and sometimes sacrifice their benefits for the sake of the group. Differences in how authorities communicate with subordinates and vice versa. And these are only the people that I had the chance to work with. You have made your own experiences and shared time with so many more great souls.

What we are now tapping into is the field of intercultural communication. Geert (Gerard Hendrik) Hofstede (born on October 2, 1928, in Haarlem, Netherlands) is a Dutch sociologist who proposed a set of indicators that determine people’s cultural characteristics, based on research conducted in the 1960s and 70s. The subject was part of my studies for one semester. At that time, my brain did not understand the extent of the topic and how important it would be for my future life. Intercultural communication describes the discipline that studies communication across different cultures and social groups; in other words, how culture affects communication. There is an impressive amount of research done in the field of intercultural communication, which investigates topics like:

  • Collectivist versus Individualistic
  • High Context versus Low Context
  • Power Distance
  • Femininity versus Masculinity
  • Uncertainty Avoidance
  • Long-term Orientation versus Short-term Orientation
  • Indulgence versus Restraint

All of them are worth investigating and I encourage you to do so. I have gathered further readings which should get you started.

Personal

On top of every culture, there is you: how you perceive the world around you, and how you make sense of everything that is shaping you. Your personal touch might very well steer you onto a counter-course to what your culture tried to instill in you for the entirety of your childhood and beyond. I am hereby not suggesting that you are all rebels. I want to highlight that the personal level of communication can be far off from how the folks back in your hometown used to talk. If you haven’t already met a vast variety of people during your school time, you will definitely do so in your professional life. In software development, I have had the chance to work with many great people from all over the world. Although at some point already familiar with the concept of intercultural communication, I often unknowingly said or did something I thought would be appropriate at that exact moment in time… it wasn’t. Having the basics of intercultural communication in mind is necessary but not sufficient. Get to know the person you are talking to and discover a new level of communication.

Interim

We have learned a couple of things about communication, let’s take a second look at the introductory example. The first officer, in disregard of what the pilot told him, made a severe mistake in not properly communicating the extent of the situation. Maybe, in her/his culture, it is common to understate issues and it can be rude to talk about severe issues or problems directly. Nevertheless, the situation required clear and fact-based information. Correct identification of the situation was therefore not possible. All further steps in the aeronautical decision-making process of the ATC were from then on based on false information and we all know the outcome. I often find myself in meetings where colleagues or superiors introduce me to a brand new process that will revolutionize how our company works, solve all the problems and issues at once and make us all happier. Thanks to good marketing everybody is excited and eager to implement the new processes with huge costs in time, money, and motivation only to find out that in the end, it didn’t work — again. I do not want to sound pessimistic, I want to tell you my perspective. Looking into software development and all the processes we have, I want to learn from aviation and invest more time and effort into educating our colleagues on how to properly and fruitfully communicate in a standardized and organized way.

Important factors for this, in my opinion, are honest, transparent, and truthful communication where hidden agendas or intercultural communication pitfalls are avoided. And to make one point clear, more communication is not equal to better communication. I think based on this foundation, processes can be fruitful.

Lost in translation

An enterprise company operating in multiple countries spread over multiple continents. What is the first thing that comes to your mind? For me, it is a rich and diverse project team over as many time zones as possible. During my early career, I was exposed to a trend in IT where teams could not be diverse enough. I know many companies which still steer 100% in this direction, but I also know many who do not. What I am trying to do here is to make you, as a reader, think. Think about your current circumstances: where are you working? Who are your colleagues? Do you work effectively together? Is communication easy for you or is it a burden? Do you have the feeling that meetings actually create useful artifacts, or are they mostly a waste of time? Ask yourselves these questions and assess. I can only speak for myself, and I have observed both: in my work, a rich and diverse team can accelerate me, but there have also been times when it slowed me down. Looking at the example I showed you earlier, Aviation English was an agreement on communication, but it was not part of any training or checks. It is important to say that this has nothing to do with the people per se. When I came to college in Germany, I had the opportunity to choose whether I wanted my studies and lectures in German or English. I was motivated, and although my mother tongue is German, I wanted my studies to be in English. This was exciting to me, and silly young me thought it would be good to already study in English, since the language of IT was English. Now I really regret my decision. During all of my studies, I was confronted with a lot of bad English. Many interesting topics got lost in translation, simply because the lecturer was not proficient enough in her/his language skills. Please do not get me wrong here: my English at that time was not better either. I am just trying to make the point that the content of a message can be severely harmed when not communicated properly. During my career, I often did applicant interviews and had to clearly state my veto: the applicant might have had the best CV and great experience, but bad English skills. In the IT industry, we have the great opportunity to have rich and diverse teams. This circumstance is not exclusive to IT, but it is still, in my opinion, a gift we should be thankful for. To make sure we respect and maintain said privilege, I have a suggestion: if we put so much emphasis on what technologies an applicant knows and for how many years she/he has worked with tech stack a, b, or c, we should also check thoroughly whether her/his English skills are proficient enough to communicate properly in a big and diverse company. We should offer language training, and also expect and check that new hires improve their language skills if necessary. Language should never be a limiting factor. Coming from IT, and especially web development, I have to deal with accessibility day in and day out. For me, it is easy to understand that digital tools and devices are about inclusion. In my opinion, it is the same with language.

Let’s take this a step further. A good colleague of mine introduced me to an amazing article on some Second World War CIA practices.

How to effectively sabotage your organization’s productivity

The CIA created the "Simple Sabotage Field Manual" in 1944 on how everyday people could help the Allies weaken their country by reducing production in factories, offices, and transportation lines. There is a wonderful article from Business Insider which highlights this. Despite being written in 1944, these instructions are timeless. I am sharing with you a selected list of instructions from the Business Insider article, filtered for communication. See if any of those listed below remind you of our organization, your colleagues, or even yourself.

Organizations and Conferences

  • Insist on doing everything through “channels.” Never permit shortcuts to be taken in order to expedite decisions.
  • Make “speeches.” Talk as frequently as possible and at great length. Illustrate your “points” by long anecdotes and accounts of personal experiences.
  • When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committee as large as possible — never less than five.
  • Bring up irrelevant issues as frequently as possible.
  • Haggle over precise wordings of communications, minutes, and resolutions.
  • Refer back to matters decided upon at the last meeting and attempt to re-open the question of the advisability of that decision.

Managers

  • Hold conferences when there is more critical work to be done.
  • Multiply the procedures and clearances involved in issuing instructions, paychecks, and so on. See that three people have to approve everything where one would do.

Employees

  • Work slowly.
  • Contrive as many interruptions to your work as you can.
  • Do your work poorly and blame it on bad tools, machinery, or equipment. Complain that these things are preventing you from doing your job right.
  • Never pass on your skill and experience to a new or less skillful worker.

I highly encourage you to read the full Business Insider article for some more bitter-sweet laughs.

Foster your communication

Although funny to read, the sad truth is that some of these instructions are common practice in our organization. I advise you to take this list into your notes, bookmark the article, read through it frequently, and ask yourself: do my communication behaviors fall into the same categories? If yes, no worries! Everybody has to start from somewhere. Remember the ADM process:

  • Identify your situation and ask yourself: Am I sabotaging my company? It’s important to be honest here! (Remember: accurately detecting it enables you to make correct decisions and raises the probability of success.)
  • Evaluate your options; there are plenty! Seek honest feedback from people you trust on your ways of communicating, do an Udemy course, or, heck, maybe even join a debating club!
  • Choose from your generated options while assessing the risks and viability.
  • Act according to your plan.
  • Evaluate if your action was successful and prepare for further decisions.

This is not easy – but it can be done. You can do it!

Now to bring this article to an end let’s look at the last dimension of communication.

Are you talking to me?

I often find myself in situations where I talk about colleagues and superiors rather than talking with them. Talking about colleagues is easy, but as with most things in life, the outcome of easy is not great. It takes courage to work on yourself and even more to reach out for help. There is no effortless solution: be honest and ask yourself if you really want to change something about how you communicate and how you are perceived while communicating. This article is at most a catalyst that will hopefully ignite your own journey.

Recap

We have learned how two, at first sight, completely different professions share many fundamental communication skills. We have had a look at the aeronautical decision-making process, what intercultural communication is, that processes are only as good as the material we put into them to be processed, how to effectively sabotage your company and what you can do to foster and improve your communication.

We touched on many topics today and I hope this article touched you personally in at least one of them. If you now think about what you have read in the last couple of minutes I feel already successful in my educational mission and if you remember only one thing then something along the lines of this:

‘Don’t worry: the worst mistake you can make is to not communicate at all.’

Further readings

As promised here you have a small collection to further educate yourself about the topics in this article.

Full blown ADM

Aviation English

Understanding Intercultural Communication

Gert Hofstede – Intercultural Communication

Business Insider – How to sabotage your organization’s productivity

Workplace Communication


Photo by Avel Chuklanov on Unsplash

Use Tailwind without Tailwind

Tailwind is one of those very controversial things: some people argue that it is the best thing since sliced bread, others that it is a tool sent by the devil itself. Nonetheless, this is not an article about whether you should use it or not; there are already plenty of articles about the pros and cons on the internet.

What makes Tailwind good is the documentation. You may not agree with the "naming", but for every class you can use there is an equivalent showing how to achieve it with plain CSS, and for that the team has done an outstanding job.

Tailwind is super fun to work with after you pass the initial learning threshold, but as a Developer, you still need to understand CSS to use it correctly.

CSS is not hard, nor is it broken.

In the end, it is just a tool to help you write CSS, and as a tool, it limits the options of what you could do. For example, you would never be able to build a custom grid layout like the one below with Tailwind alone:

[Figure: the same custom grid layout rendered on mobile and on desktop. Source: https://webkit.org/demos/css-grid/]

In this article, you will learn how to use the Tailwind documentation to write your own CSS styles. The following topics will be discussed: Preflight, Theme, and Styles.

The idea is to have the full power to write your CSS together with the cool ideas and patterns created by the Tailwind team. So let’s get started.

Preflight

Preflight is a set of base styles for Tailwind projects that are designed to smooth over cross-browser inconsistencies and make it easier for you to work within the constraints of your design system.

Here you basically have to copy the preflight.css file created by them.

/* preflight.css */
*,
::before,
::after {
  box-sizing: border-box;
  border-width: 0;
  border-style: solid;
  border-color: currentcolor;
}

html {
  line-height: 1.5;
  text-size-adjust: 100%;
  tab-size: 4;
  font-family: system-ui;
}

body {
  margin: 0;
  line-height: inherit;
}

hr {
  height: 0;
  color: inherit;
  border-top-width: 1px;
}

abbr:where([title]) {
  text-decoration: underline dotted;
}

h1,
h2,
h3,
h4,
h5,
h6 {
  font-size: inherit;
  font-weight: inherit;
}

a {
  color: inherit;
  text-decoration: inherit;
}

b,
strong {
  font-weight: bolder;
}

code,
kbd,
samp,
pre {
  font-family:
    "fontFamily.mono",
    ui-monospace,
    SFMono-Regular,
    Menlo,
    Monaco,
    Consolas,
    "Liberation Mono",
    "Courier New",
    monospace;
  font-size: 1em;
}

small {
  font-size: 80%;
}

sub,
sup {
  font-size: 75%;
  line-height: 0;
  position: relative;
  vertical-align: baseline;
}

sub {
  bottom: -0.25em;
}

sup {
  top: -0.5em;
}

table {
  text-indent: 0;
  border-color: inherit;
  border-collapse: collapse;
}

button,
input,
optgroup,
select,
textarea {
  font-family: inherit;
  font-size: 100%;
  font-weight: inherit;
  line-height: inherit;
  color: inherit;
  margin: 0;
  padding: 0;
}

button,
select {
  text-transform: none;
}

button,
[type="button"],
[type="reset"],
[type="submit"] {
  appearance: button;
  background-color: transparent;
  background-image: none;
}

:-moz-focusring {
  outline: auto;
}

:-moz-ui-invalid {
  box-shadow: none;
}

progress {
  vertical-align: baseline;
}

::-webkit-inner-spin-button,
::-webkit-outer-spin-button {
  height: auto;
}

[type="search"] {
  appearance: textfield;
  outline-offset: -2px;
}

::-webkit-search-decoration {
  appearance: none;
}

::-webkit-file-upload-button {
  appearance: button;
  font: inherit;
}

summary {
  display: list-item;
}

blockquote,
dl,
dd,
h1,
h2,
h3,
h4,
h5,
h6,
hr,
figure,
p,
pre {
  margin: 0;
}

fieldset {
  margin: 0;
  padding: 0;
}

legend {
  padding: 0;
}

ol,
ul,
menu {
  list-style: none;
  margin: 0;
  padding: 0;
}

textarea {
  resize: vertical;
}

input::placeholder,
textarea::placeholder {
  opacity: 1;
  color: #9ca3af;
}

button,
[role="button"] {
  cursor: pointer;
}

:disabled {
  cursor: default;
}

img,
svg,
video,
canvas,
audio,
iframe,
embed,
object {
  display: block;
  vertical-align: middle;
}

img,
video {
  max-width: 100%;
  height: auto;
}

That’s it, just add this CSS to your project.

Theme

The theme file is where you define your project’s color palette, type scale, fonts, breakpoints, border radius values, and more.

This is the most important part. Following the same idea as the tailwind.config.js file, where you can customize your theme, create a theme file with the Tailwind definitions:

/* theme.css */
:root {
  --size-0-5: 0.125rem; /* spacing:0.5 */
  --size-1: 0.25rem; /* spacing:1 */
  --size-1-5: 0.375rem; /* spacing:1.5 */
  --size-2: 0.5rem; /* spacing:2 */
  --size-2-5: 0.625rem;
  --size-3: 0.75rem;
  --size-3-5: 0.875rem;
  --size-4: 1rem;
  --size-5: 1.25rem;
  --size-6: 1.5rem;
  --size-7: 1.75rem;
  --size-8: 2rem;
  --size-9: 2.25rem;
  --size-10: 2.5rem;
  --size-11: 2.75rem;
  --size-12: 3rem;
  --size-14: 3.5rem;
  --size-16: 4rem;
  --size-20: 5rem;
  --size-24: 6rem;
  --size-28: 7rem;
  --size-32: 8rem;
  --size-36: 9rem;
  --size-40: 10rem;
  --size-44: 11rem;
  --size-48: 12rem;
  --size-52: 13rem;
  --size-56: 14rem;
  --size-60: 15rem;
  --size-64: 16rem;
  --size-72: 18rem;
  --size-80: 20rem;
  --size-96: 24rem;
  --size-xs: 20rem; /* 320px */
  --size-sm: 24rem; /* 384px */
  --size-md: 28rem; /* 448px */
  --size-lg: 32rem; /* 512px */
  --size-xl: 36rem; /* 576px */
  --size-2xl: 42rem; /* 672px */
  --size-3xl: 48rem; /* 768px */
  --size-4xl: 56rem; /* 896px */
  --size-5xl: 64rem; /* 1024px */
  --size-6xl: 72rem; /* 1152px */
  --size-7xl: 80rem; /* 1280px */
  --size-full: 100%;
  --size-fit: fit-content;
  --size-min: min-content;
  --size-max: max-content;
  --size-auto: auto;
  --size-none: none;
  --size-prose: 65ch;
  --size-screen-width: 100vw;
  --size-screen-height: 100vh;
  --size-screen-xs: 480px;
  --size-screen-sm: 640px;
  --size-screen-md: 768px;
  --size-screen-lg: 1024px;
  --size-screen-xl: 1280px;
  --size-screen-2xl: 1536px;
  --grid-1: repeat(1, minmax(0, 1fr));
  --grid-2: repeat(2, minmax(0, 1fr));
  --grid-3: repeat(3, minmax(0, 1fr));
  --grid-4: repeat(4, minmax(0, 1fr));
  --grid-5: repeat(5, minmax(0, 1fr));
  --grid-6: repeat(6, minmax(0, 1fr));
  --grid-7: repeat(7, minmax(0, 1fr));
  --grid-8: repeat(8, minmax(0, 1fr));
  --grid-9: repeat(9, minmax(0, 1fr));
  --grid-10: repeat(10, minmax(0, 1fr));
  --grid-11: repeat(11, minmax(0, 1fr));
  --grid-12: repeat(12, minmax(0, 1fr));
  --border: 1px;
  --border-0: 0;
  --border-2: 2px;
  --border-4: 4px;
  --border-8: 8px;
  --ring: 0 0 0 var(--border);
  --ring-2: 0 0 0 var(--border-2);
  --ring-4: 0 0 0 var(--border-4);
  --ring-8: 0 0 0 var(--border-8);
  --rounded: 0.25rem;
  --rounded-sm: 0.125rem;
  --rounded-md: 0.375rem;
  --rounded-lg: 0.5rem;
  --rounded-xl: 0.75rem;
  --rounded-2xl: 1rem;
  --rounded-3xl: 1.5rem;
  --rounded-full: 9999px;
  --shadow: 0 1px 3px 0 rgb(0 0 0 / 10%), 0 1px 2px -1px rgb(0 0 0 / 10%);
  --shadow-sm: 0 1px 2px 0 rgb(0 0 0 / 5%);
  --shadow-md: 0 4px 6px -1px rgb(0 0 0 / 10%), 0 2px 4px -2px rgb(0 0 0 / 10%);
  --shadow-lg: 0 10px 15px -3px rgb(0 0 0 / 10%), 0 4px 6px -4px rgb(0 0 0 / 10%);
  --shadow-xl: 0 20px 25px -5px rgb(0 0 0 / 10%), 0 8px 10px -6px rgb(0 0 0 / 10%);
  --shadow-2xl: 0 25px 50px -12px rgb(0 0 0 / 25%);
  --shadow-inner: inset 0 2px 4px 0 rgb(0 0 0 / 5%);
  --font-weight-thin: 100;
  --font-weight-extralight: 200;
  --font-weight-light: 300;
  --font-weight-normal: 400;
  --font-weight-medium: 500;
  --font-weight-semibold: 600;
  --font-weight-bold: 700;
  --font-weight-extrabold: 800;
  --font-weight-black: 900;
  --line-spacing-xs: 1rem;
  --line-spacing-sm: 1.25rem;
  --line-spacing-md: 1.5rem;
  --line-spacing-lg: 1.75rem;
  --line-spacing-xl: 1.75rem;
  --line-spacing-2xl: 2rem;
  --line-spacing-3xl: 2.25rem;
  --line-spacing-4xl: 2.5rem;
  --line-spacing-5xl: 1;
  --line-spacing-6xl: 1;
  --line-spacing-7xl: 1;
  --line-spacing-8xl: 1;
  --line-spacing-9xl: 1;
  --text-xs: 0.75rem;
  --text-sm: 0.875rem;
  --text-md: 1rem;
  --text-lg: 1.125rem;
  --text-xl: 1.25rem;
  --text-2xl: 1.5rem;
  --text-3xl: 1.875rem;
  --text-4xl: 2.25rem;
  --text-5xl: 3rem;
  --text-6xl: 3.75rem;
  --text-7xl: 4.5rem;
  --text-8xl: 6rem;
  --text-9xl: 8rem;
  --color-canvas: #f9fafb; /* gray:50 */
  --color-contrast: #27272a; /* zinc:800 */
  --color-contrast-50: #fafafa; /* zinc:50 */
  --color-contrast-100: #f4f4f5; /* zinc:100 */
  --color-contrast-200: #e4e4e7; /* zinc:200 */
  --color-contrast-300: #d4d4d8; /* zinc:300 */
  --color-contrast-400: #a1a1aa; /* zinc:400 */
  --color-contrast-500: #71717a; /* zinc:500 */
  --color-contrast-600: #52525b; /* zinc:600 */
  --color-contrast-700: #3f3f46; /* zinc:700 */
  --color-contrast-800: #27272a; /* zinc:800 */
  --color-contrast-900: #18181b; /* zinc:900 */
  --color-primary-backdrop: #bfdbfe; /* blue:200 */
  --color-primary-focus: #93c5fd; /* blue:300 */
  --color-primary: #2563eb; /* blue:600 */
  --color-primary-content: #1e40af; /* blue:800 */
  --color-primary-contrast: #fff;
  --color-error-backdrop: #fecdd3; /* rose:200 */
  --color-error-focus: #fda4af; /* rose:300 */
  --color-error: #f43f5e; /* rose:600 */
  --color-error-content: #9f1239; /* rose:800 */
  --color-error-contrast: #fff;
  --color-success-backdrop: #bbf7d0; /* green:200 */
  --color-success-focus: #86efac; /* green:300 */
  --color-success: #22c55e; /* green:600 */
  --color-success-content: #166534; /* green:800 */
  --color-success-contrast: #fff;
  --color-content-heading: #111827; /* gray:900 */
  --color-content-body: #1f2937; /* gray:800 */
  --color-content-secondary: #374151; /* gray:700 */
  --color-content-tertiary: #6b7280; /* gray:500 */
  --color-content-disabled: #9ca3af; /* gray:400 */
  --color-content-contrast: #d1d5db; /* gray:300 */
}

@media (prefers-color-scheme: dark) {
  :root {
    --color-canvas: #27272a; /* zinc:800 */
    --color-contrast: #fff;
    --color-contrast-50: #18181b; /* zinc:900 */
    --color-contrast-100: #27272a; /* zinc:800 */
    --color-contrast-200: #3f3f46; /* zinc:700 */
    --color-contrast-300: #52525b; /* zinc:600 */
    --color-contrast-400: #71717a; /* zinc:500 */
    --color-contrast-500: #a1a1aa; /* zinc:400 */
    --color-contrast-600: #d4d4d8; /* zinc:300 */
    --color-contrast-700: #e4e4e7; /* zinc:200 */
    --color-contrast-800: #f4f4f5; /* zinc:100 */
    --color-contrast-900: #fafafa; /* zinc:50 */
    --color-primary-backdrop: #93c5fd; /* blue:300 */
    --color-primary-focus: #3b82f6; /* blue:500 */
    --color-primary: #2563eb; /* blue:600 */
    --color-primary-content: #1e40af; /* blue:800 */
    --color-primary-contrast: #fff;
    --color-error-backdrop: #fda4af; /* rose:300 */
    --color-error-focus: #f43f5e; /* rose:500 */
    --color-error: #e11d48; /* rose:600 */
    --color-error-content: #9f1239; /* rose:800 */
    --color-error-contrast: #fff;
    --color-success-backdrop: #86efac; /* green:300 */
    --color-success-focus: #22c55e; /* green:500 */
    --color-success: #16a34a; /* green:500 */
    --color-success-content: #166534; /* green:800 */
    --color-success-contrast: #fff;
    --color-content-heading: #f9fafb; /* gray:50 */
    --color-content-body: #e5e7eb; /* gray:200 */
    --color-content-secondary: #d1d5db; /* gray:300 */
    --color-content-tertiary: #6b7280; /* gray:500 */
    --color-content-disabled: #4b5563; /* gray:600 */
    --color-content-contrast: #374151; /* gray:700 */
  }
}

@custom-media --xs (min-width: var(--size-screen-xs));
@custom-media --sm (min-width: var(--size-screen-sm));
@custom-media --md (min-width: var(--size-screen-md));
@custom-media --lg (min-width: var(--size-screen-lg));
@custom-media --xl (min-width: var(--size-screen-xl));
@custom-media --xxl (min-width: var(--size-screen-2xl));

To use those variables, you can declare them as follows:

.example {
  padding: var(--size-3); /* p-3 */
  border-radius: var(--rounded-lg); /* rounded-lg */
  box-shadow: var(--shadow-lg); /* shadow-lg */
  font-size: var(--text-sm); /* text-sm */
  line-height: var(--line-spacing-sm); /* text-sm */
}

You can define your CSS variables with the same values Tailwind uses, so you can go through their docs and reuse their scale. You don’t need to define a variable for every single class, though; some classes, like inline-flex, translate directly to display: inline-flex.

The good thing is that it is one dependency less in your project; no matter how fast Tailwind (re)builds the CSS, it will never be faster than having it declared directly, and the main point is that it gives the developer full control. The downside is that you lose Tailwind’s syntax sugar.

Styles

Taking what you learned and applying it to a real-world scenario: Imagine you are building a UI Library. You start to build a badge component. This component should have options to change the size, color, and format, so you can create the following style:

/* badge.css */
.badge {
  display: inline-flex;
  flex-wrap: wrap;
  align-items: center;
  justify-content: center;
  white-space: nowrap;
  padding: var(--size-1) var(--size-2);
  text-align: center;
  vertical-align: middle;
  font-size: var(--badge-font-size, var(--text-md));
  line-height: var(--badge-line-height, var(--line-spacing-md));
  font-weight: var(--font-weight-semibold);
  border: var(--badge-border, var(--border) solid var(--color-contrast-200));
  border-radius: var(--badge-border-radius, var(--rounded-lg));
  background-color: var(--badge-background-color, var(--color-contrast-50));
  color: var(--badge-color, var(--color-content-body));

  &.is-pill {
    --badge-border-radius: var(--rounded-full);
  }

  &.is-xs {
    --badge-font-size: var(--text-xs);
    --badge-line-height: var(--line-spacing-xs);
  }

  &.is-sm {
    --badge-font-size: var(--text-sm);
    --badge-line-height: var(--line-spacing-sm);
  }

  &.is-lg {
    --badge-font-size: var(--text-lg);
    --badge-line-height: var(--line-spacing-lg);
  }

  &.is-xl {
    --badge-font-size: var(--text-xl);
    --badge-line-height: var(--line-spacing-xl);
  }

  &.is-info {
    --badge-border: var(--border) solid var(--color-primary-focus);
    --badge-background-color: var(--color-primary);
    --badge-color: var(--color-primary-contrast);
  }

  &.is-error {
    --badge-border: var(--border) solid var(--color-error-focus);
    --badge-background-color: var(--color-error);
    --badge-color: var(--color-error-contrast);
  }

  &.is-success {
    --badge-border: var(--border) solid var(--color-success-focus);
    --badge-background-color: var(--color-success);
    --badge-color: var(--color-success-contrast);
  }

  &.is-contrast {
    --badge-border: var(--border) solid var(--color-contrast-800);
    --badge-background-color: var(--color-contrast-700);
    --badge-color: var(--color-content-contrast);
  }
}

Show code in action

Using CSS variables can give you more freedom to work with while having a consistent style.

Here is an equivalent solution using Tailwind with @apply. This is not the recommended way to use Tailwind, but it gives you an idea of what the same component would look like if the classes were placed inside the HTML.

/* badge.css */
.badge {
  @apply inline-flex flex-wrap items-center justify-center whitespace-nowrap py-1 px-2 text-center align-middle font-semibold;

  &:not(.is-pill) {
    @apply rounded-lg;
  }

  &.is-pill {
    @apply rounded-full;
  }

  &.is-xs {
    @apply text-xs;
  }

  &.is-sm {
    @apply text-sm;
  }

  &.is-base {
    @apply text-base;
  }

  &.is-lg {
    @apply text-lg;
  }

  &.is-xl {
    @apply text-xl;
  }

  &:not(.is-info, .is-error, .is-success, .is-contrast) {
    @apply border-contrast-200 bg-contrast-50 text-content;
  }

  &.is-info {
    @apply border-primary-focus bg-primary text-primary-contrast;
  }

  &.is-error {
    @apply border-error-focus bg-error text-error-contrast;
  }

  &.is-success {
    @apply border-success-focus bg-success text-success-contrast;
  }

  &.is-contrast {
    @apply border-contrast-800 bg-contrast-700 text-content-contrast;
  }
}

As you can see, they are quite similar, but with Tailwind you declare the properties horizontally, while with plain CSS you declare them vertically, which gives the latter a cleaner look.

Now, checking a real-world example, like the one on Netlify’s page, things don’t look that simple anymore.

So, using the @apply mixin, you could have the best of both worlds, right? But, in that case, what would be the real benefit of writing classes with Tailwind instead of pure CSS?

Conclusion

This is just an example to give you an idea. Maybe you don’t need the flexibility of plain CSS and Tailwind helps you build what you need, then go for it; or perhaps your team dislikes Tailwind and this could give them a taste of an alternative. Whatever your situation is, it is good to have options. Photo by Kelly Sikkema on Unsplash.


Spaceships and testing in Javascript

What do a Spaceship and JavaScript have in common? Both already reached space.

The Crew Dragon Spaceship from SpaceX uses JavaScript in the main cockpit panels[1]. It’s super cool to see where the language has come from and what can be achieved with it.

Just like rockets, spaceships, and many other critical and non-critical projects, software requires a lot of testing before the production launch. Otherwise, a “KaBuM! effect” could happen, and unless it is a firework, it won’t make anyone happy.

In any case, testing is not complicated, and even if most of us are not building things that can explode, we should treat our projects as if they were of equal importance. Testing makes error detection easier and can also save a lot of time. It can be tricky at first, but with practice and experience, it becomes an ally; you just need to make it part of your daily work.

Before getting started, we need to take a look and understand how things work under the hood. This article will cover the basics of testing using JavaScript, including:

  1. Testing Fundamentals
  2. Testing with Jest
  3. Mocking Fundamentals
  4. Static Code Analysis

Testing Fundamentals

One of the most common phrases in software development is “whattaf*ck”. Some say that the quality of the code can be measured by the FPS (f*cks per second) heard during the development process. $h!t happens, and fixing it can be simple, but if it is a little more complicated, it can take days, weeks, or even months to solve. Thus, the idea of creating automated tests is to catch as many errors as possible in our code before they happen.

Imagine that you are building a spaceship, and this spaceship requires a calculator module; if it fails, the ship can explode. You aim to make sure the results are always correct. So you start creating the first method of this module.

export const sum = (a, b) => a + b;

Show code in action

To test this code, you have to check the result of your function to validate your assumption.

import { sum } from './calculator.js';

const expected = 4;
const result = sum(2, 2);

if (result !== expected) {
  throw new Error(`KaBuM! It Exploded!`, { cause: `${result} is not equal to ${expected}` });
}

Show code in action

In the example, you run the function and check whether the result is what you expected. Although this implementation works, it cannot be reused. To simplify the testing process, extract the logic into a new method; that way, it can be used for more cases.

export const expect = value => ({
  toEqual(expected) {
    if (value !== expected) {
      throw new Error(`KaBuM! It Exploded!`, { cause: `${value} is not equal to ${expected}` });
    }
  }
});

Show code in action

Now, update our previous code.

import { sum } from './calculator.js';
import { expect } from './testing.js';

expect(sum(2, 2)).toEqual(4);
expect(sum(2, 'a')).toEqual(NaN); // Error

Show code in action

Much better! However, there is no description showing what is being tested, and if you start adding more tests and one fails, the remaining tests will not run. Let’s fix that by encapsulating this code inside a try/catch:

export const test = (description, fn) => {
  try {
    fn();
    console.log(`✓ ${description}`);
  } catch (error) {
    console.error(`✕ ${description}`);
    console.error(error);
  }
};
// ...

Show code in action

Now, use our new function inside our test code.

import { sum } from './calculator.js';
import { expect, test } from './testing.js';

test('sum numbers', () => expect(sum(2, 2)).toEqual(4));

Show code in action

Let’s open the terminal and run our test:

$ npx babel-node calculator.test.js

If there is an error in the code, like the failing expectation below, you will see the corresponding error message.

...
test('sum numbers', () => expect(sum(1, 2)).toEqual(4));

Show code in action

Congratulations! You have now created a simple JavaScript Testing Framework. The good news is that there are already some great tools for testing automation. The most famous is Jest, and you can make your test compatible with it by just removing one line of code and running it:

import { sum } from './calculator.js';
test('sum numbers', () => expect(sum(2, 2)).toEqual(4));

Show code in action

In the terminal, run the command:

$ npx jest calculator.test.js

There is more than just that. Let’s move on and learn more about how to test an application, even without running the code (Yep, this is possible in JavaScript).

Testing with Jest

Jest is a delightful JavaScript Testing Framework with a focus on simplicity. It has most of the features you expect from a testing framework already built in: a great set of matchers, code coverage, mocking, fast runs, good documentation, and an incredible community around it.

First things first, you need to install Jest:

$ npm install --save-dev jest

Thereafter, update the package.json file to run it:

...
"scripts": {
  "test": "jest"
},
...

Show code in action

Then run npm run test, or invoke it manually with npx jest.

If you like, you can also run jest --init to create a configuration file.

Comparing Values

Let’s go ahead, now you are going to create a weapon module for your spaceship. Start creating a simple test example:

describe('Weapon Module', () => {
  test('a simple test', () => {
    expect(2 + 2).toBe(4);
  });
  // ...
});

Show code in action

After running Jest, this is the result:

The primary comparison methods are toBe and toEqual. toBe uses Object.is to check strict (referential) equality, while toEqual makes a deep comparison of the values’ properties.

// ...
const weapon = { type: 'laser' };
test('check object with toEqual', () => {
  expect(weapon).toEqual({ type: 'laser' });
});
test('check object with toBe', () => {
  expect(weapon).toBe({ type: 'laser' });
});
// ...

Show code in action

Running this test, you will get the following result:

It is up to you to decide which one fits your test case better, but if you are just starting with testing, using the toEqual method will probably be the best alternative.

Comparing Strings

Regular Expression

Jest has support for comparing strings. Besides the regular toEqual, a regular expression can also be used for comparison. All you need to do is call the toMatch method and pass in the regex.

const text = 'hello world';
test('string comparison', () => {
  expect(text).toMatch(/hello/);
});

Show code in action

Length

It’s also possible to compare the length between two strings using toHaveLength.

expect('abc').toHaveLength(3);

Show code in action

It works with an array as well.

expect([1, 2, 3]).toHaveLength(3);

Show code in action

Comparing Numbers

Besides the basic comparison methods, you can easily compare numbers in your tests by utilizing the following methods:

  • toBeGreaterThanOrEqual
  • toBeGreaterThan
  • toBeLessThanOrEqual
  • toBeLessThan

In the following example, you can use a loop to check if the result is less than 10.

test('loop less than', () => {
  for (let i = 1; i < 10; i++) {
    expect(i).toBeLessThan(10);
  }
});

Show code in action

When changing the value from 10 to 5, you will receive the following error message.

Comparing Arrays

The toContain method is used for array comparison, which checks if the values are included in the list.

test('check an array', () => {
  const weapons = ['phaser', 'laser', 'plasma cannon', 'photon torpedo'];
  expect(weapons).toContain('laser');
});

Show code in action

Comparing Dynamic Values

In situations where you don’t have an exact value but know the type of the object, you can use the expect.any method.

Primitive Values

For primitive values like string, number, and booleans, you can use:

  • expect.any(String)
  • expect.any(Number)
  • expect.any(Boolean)

test('check dynamic string', () => {
  expect('disruptor').toEqual(expect.any(String));
  expect(1).toEqual(expect.any(Number));
  expect(false).toEqual(expect.any(Boolean));
});

Show code in action

Objects

You can check an Object with objectContaining to see if an object contains some properties inside. In this case, you don’t need to match the same properties from the object you want to evaluate.

test('check dynamic object', () => {
  const weapon = { type: 'laser', damage: 100, range: 10, available: false };
  expect(weapon).toEqual(
    expect.objectContaining({
      damage: expect.any(Number),
      type: expect.any(String),
      available: expect.any(Boolean),
    })
  );
});

Show code in action

Arrays

It’s also possible to use arrayContaining to check the values, and you can even combine them with all previous checks.

test('check dynamic array', () => {
  const weapons = [
    { type: 'phaser', damage: 150, range: 15, speed: 'fast' },
    { type: 'photon cannon', damage: 10000, range: 100, speed: 'slow' },
  ];
  expect(weapons).toEqual(
    expect.arrayContaining([
      expect.objectContaining({
        type: expect.any(String),
        damage: expect.any(Number),
        range: expect.any(Number),
        speed: expect.any(String),
      }),
    ])
  );
});

Show code in action

Asynchronous Code

There are multiple ways to handle asynchronous code, depending on your needs.

Callback

The easiest way to handle a callback is to give the test function a single done argument and call done() once the callback completes. For example:

test('test callback', done => {
  initBattleMode((data) => {
    try {
      expect(data).toEqual({ ready: true });
      done();
    } catch (error) {
      done(error);
    }
  });
});

Show code in action

Promise

Asynchronous code with a promise is a lot easier, as all you need to do is to return the promise. Have a look at the modified example:

test('test promise', () => {
  return initBattleMode().then((data) => {
    expect(data).toEqual({ ready: true });
  });
});

Show code in action

async/await

On the other hand, using async/await is a lot more straightforward. So let’s reuse the previous example and modify it to use async/await instead.

test('test async', async () => {
  const data = await initBattleMode();
  expect(data).toEqual({ ready: true });
});

Show code in action

This is just a taste. For a complete list of matchers, take a look at the reference docs.

Mocking Fundamentals

Occasionally, when writing tests, you can’t rely on real data because it’s slow, private, or unavailable for other reasons.

Mocking allows you to intercept or erase the actual implementation of a function, capture calls (and the parameters passed in those calls), and enable test-time configuration of returned values.

One way to deal with this situation is to mock (fake) your data. Jest already ships with some great tools for data mocking. It uses a custom resolver for imports in your tests, making it simple to mock any object outside your test’s scope. In addition, you can use mocked imports with the rich Mock Functions API to spy on function calls with readable test syntax.

Let’s focus on two types of mocks using Jest: the mock function and the mock module.

Mock Functions

To mock a function, you just need to declare it as a Jest mock function with jest.fn(); with that, you can start the evaluation. Here is a quick example:

// ...
describe('Rocket Engine', () => {
  const cb = jest.fn();
  beforeEach(() => {
    cb.mockReset();
  });
  test('check callback response', () => {
    cb.mockImplementationOnce(() => 2).mockImplementation(() => 1);
    expect([1, 2, 3].map(cb)).toEqual([2, 1, 1]);
    expect(cb).toHaveBeenCalledTimes(3);
  });
  // ...
});

Show code in action

First, declare your mock method, then define the first and the default outputs. Next, check whether the output matches the expected result. Finally, check whether the method was called the expected number of times and with the correct parameters.
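
The example above only checks the call count; as a small, hedged addition (not part of the original snippet), you could also assert the arguments Jest recorded for the mock inside the same describe block. Remember that Array.prototype.map invokes the callback with (value, index, array):

// ...
test('check callback arguments', () => {
  cb.mockImplementation(() => 1);
  [1, 2, 3].map(cb);
  expect(cb).toHaveBeenCalledWith(1, 0, [1, 2, 3]);     // first call: value, index, array
  expect(cb).toHaveBeenLastCalledWith(3, 2, [1, 2, 3]); // last call
});
// ...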

If you want to learn more, refer to the reference docs.

Mock Modules

Mocking a module works similarly to mocking a function, but instead of applying it to a function, you have to intercept a module import.

Back to the spaceship idea, create a startEngine method that receives a callback function as a parameter and does an HTTP call to an API server. In this case, you have to mock the unfetch module.

import fetch from 'unfetch';

export const startEngine = async (callback) => {
  const res = await fetch('https://api.space.com/rocket/engine/start');
  const json = await res?.json();
  if (json) {
    if (callback) {
      callback(json);
    }
    return json;
  }
  return undefined;
};

Show code in action

Now, declare the values you want the mocked module to return.

// ...
jest.mock('unfetch', () => () => ({
  json: () =>
    Promise.resolve({
      status: 'ready',
      fuel: '100%',
      power: 100,
      sensors: [{ type: 'temp', value: 50, active: true }],
    }),
}));
// ...

Show code in action

There is also an alternative way to declare your mock as an ES module.

// ...
jest.mock('unfetch', () => ({
  __esModule: true,
  default: () => ({
    json: () =>
      Promise.resolve({
        status: 'ready',
        fuel: '100%',
        power: 100,
        sensors: [{ type: 'temp', value: 50, active: true }],
      }),
  }),
}));
// ...

Show code in action

The first parameter is the module’s name, and the second one is the factory method. You have now configured it so that the output values are always the same.

describe('Rocket Engine', () => {
  const cb = jest.fn();

  beforeEach(() => {
    cb.mockReset();
  });

  test('check engine response', async () => {
    const data = await startEngine();

    expect(data).toMatchObject({
      power: 100,
      fuel: '100%',
      status: 'ready',
      sensors: [{ type: 'temp', value: 50, active: true }],
    });
  });
});

Show code in action

Using the previous mock, you can check the results. When executing, the output should be the same as declared before.

$ npx jest mock.test.js

For a complete list of mock functions, see the reference docs.

Static Code Analysis in JavaScript

There are a ton of ways your program can break. JavaScript is a loosely typed language, and the most common bugs are typos and incorrect types, like using the wrong variable name or summing two strings instead of numbers.
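
As a tiny illustration of that kind of silent type slip (my example, not from the original text):

// A value read from an <input> or a query string is always a string.
const price = '2';
console.log(price + 2);         // "22": string concatenation instead of addition
console.log(Number(price) + 2); // 4: the intended sum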

What is “Static Analysis”?

So, what does “static analysis” of code mean? The answer is:

Predicting defects in code without running it.

Since JavaScript is a scripting language, there is no compiler to run this analysis for you, so you need to use formatters and linters to get the job done.

Formatters

Formatters are tools that automatically fix any style inconsistencies they find. For this purpose, tools like Prettier or StandardJS can do the job. There are plenty of options to configure them to match your criteria, and they can be integrated with the most popular editors and IDEs.

To show you how it works, here is an example of unformatted code:

function HelloWorld({greeting = "hello", greeted = '"World"', silent = false, onMouseOver,}) {

  if(!greeting){return null};

  // TODO: Don't use random in render
  let num = Math.floor (Math.random() * 1E+7).toString().replace(/\.\d+/ig, "")

  return <div className='HelloWorld' title={`You are visitor number ${ num }`} onMouseOver={onMouseOver}>

    <strong>{ greeting.slice( 0, 1 ).toUpperCase() + greeting.slice(1).toLowerCase() }</strong>
    {greeting.endsWith(",") ? " " : <span style={{color: '\grey'}}>", "</span> }
    <em>
      { greeted }
    </em>
    { (silent)
      ? "."
      : "!"}

  </div>;

}

After running Prettier, here is the result:

$ npx prettier --write unformatted_code.jsx
function HelloWorld({ greeting = 'hello', greeted = '"World"', silent = false, onMouseOver }) {
  if (!greeting) {
    return null;
  }

  // TODO: Don't use random in render
  let num = Math.floor(Math.random() * 1e7)
    .toString()
    .replace(/\.\d+/gi, '');

  return (
    <div className="HelloWorld" title={`You are visitor number ${num}`} onMouseOver={onMouseOver}>
      <strong>{greeting.slice(0, 1).toUpperCase() + greeting.slice(1).toLowerCase()}</strong>
      {greeting.endsWith(',') ? ' ' : <span style={{ color: 'grey' }}>", "</span>}
      <em>{greeted}</em>
      {silent ? '.' : '!'}
    </div>
  );
}

As you can see, the main benefit is that you don’t need to worry about these minor inconsistencies anymore. It does that for you automatically.

Remember that you write code for the machine to interpret, but for humans to read.

The clearer and more consistent your code is, the easier it is to understand what is happening.

Linters

Code linting is a way to increase code quality. It analyzes the code and reports a list of potential code quality concerns. Currently, the most used tool for that is ESLint.

ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code

Let’s check our example:

function sayHello(name) {
  alert('Hello ' + name);
}

name = 'John Doe';
sayHello(name)

Show code in action

To use ESLint, you need to install it first. Then, open the terminal and run the following in your project folder.

$ npx eslint --init

Now, you can run ESLint on any file or directory, like in this example.

$ npx eslint unconsistent_code.js

The linter shows where the errors in our code are, based on a set of rules in the .eslintrc.{js,json,yaml} file. You can also add, remove, or change any rules. For example, let’s add a rule to check whether we are missing a semicolon.

...
rules: {
  semi: ['error', 'always']
},
...

When executed again, the result will show you an error with the new rule.

This was a simple example, but the bigger the project, the more it makes sense to use it and catch many trivial errors that could take some time if done manually.

There are sets of rules that can be extended, so you won’t need to set them one by one, like the recommended rules ("extends": "eslint:recommended"), and others made by the community, like Airbnb or Standard, that you can include in your project.
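
For reference, a minimal configuration that extends the recommended rules could look like the sketch below (the file name and env options are assumptions; adjust them to your setup):

// .eslintrc.js (sketch)
module.exports = {
  env: { browser: true, es2021: true },
  extends: ['eslint:recommended'],
  rules: {
    semi: ['error', 'always'] // keep the custom rule from the earlier example
  }
};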

For a complete list of rules, refer to the reference docs.

Conclusion

In this article, you’ve learned how to start adding tests to your program and covered the foundations of testing in JavaScript: Jest, mocking, and static code analysis. That’s only the beginning, but don’t worry; the most important thing is to add your tests while you are coding, saving yourself from numerous problems in the future.


agility: is failing an option?

October 20th, 1995, the premiere of Apollo 13. After discussing several options, Ed Harris delivers one of the most iconic lines in the movie: “Failure is not an option!”. In fact, considering the possible outcome, failing was not an option. But what about real life? Our day-to-day. Should we always consider failing as not an option? Or should we embrace it as part of our lives? (Photo by Damir Babacic on Unsplash).

what is a failure?

Let’s start by defining failure. The dictionary states failure as:

  • lack of success;
  • a state of inability to perform a normal function;
  • omission of performing a duty or expected action.

In a way, we can consider as failure everything that falls into the spectrum of the unexpected, something that deviates from preconceived plans.

So, why do we fear failing? Why fear the unexpected? Psychology can help with the answer. For instance, in the book Drive, the author states that we have intrinsic motivators. One of them is having a purpose, which can be translated into having objectives or goals. Achieving them can give us a sense of self-realisation. We are happy when we succeed. Regardless of individual definitions, we, as human beings, tend to strive for success.


But failure leads to the exact opposite. It makes us feel as if we did something wrong and need an explanation for it. A failure is proof that our perfect plans have flaws, that we are not as good as we thought. It makes us feel fragile.
Despite this, there are some trends encouraging failure. Why? If failure makes us feel bad, why push for it? Don’t we all want to achieve success? It seems counter-intuitive, but there is a good explanation for it.

why failing?

Failing may not seem the most exciting thing, but not accepting it can be worse. Failure tends to just happen. However, every failure brings something new: a new learning. And learning is what makes us evolve.


Failing exists so that we can learn more and evolve faster. Accepting failure takes away the fear of experimenting and fosters innovation. If Thomas Edison had been afraid of failing, he wouldn’t have invented the light bulb. And according to the myth, he failed more than 10,000 times. Can you imagine a world without electricity? Moreover, can you imagine yourself failing 10,000 times?


But we shouldn’t just accept failure. If we fail but disregard the learning process, we won’t evolve. Edison didn’t just fail, he “found 10,000 ways that won’t work”. Our ability to realize what is wrong is what makes us great.

The key is to understand the level of failure we can sustain to not despair and keep the motivation to try again.

dealing with failure

Looking into it, we fail so that we don’t fail again. Or at least to not repeat the same failure. The more we fail, the more we learn and the better we become.
Learning fast is key to becoming better. The way to learn fast is to fail faster. So, increasing the number of failures can be the right approach to learning more. The main question that arises is “how to fail more?”. The answer is, in theory, easy: reduce the size of failure. Putting that into practice is much harder. Let’s divide this into steps:

  1. If you have an idea or a theory, think about the smallest thing you can do to prove it.
  2. Always keep in mind the chances of failure are bigger than the chances of succeeding. It’s rare to succeed on the first try.
  3. For each failure, have some key takeaways.
  4. Repeat! Take into consideration the learnings, and do not make the same mistakes.

Keep in mind this is an oversimplification of the process. Without the correct mindset, it will be hard to reach success.

You’ll need to work on your resilience. Assure yourself that you will fail many times; expect it more often than success. Never lose hope and celebrate the smallest achievements. Keep building your knowledge and keep pushing.
You’ll also need to think in simpler ways. Breaking down your experiments isn’t easy, but it’s the only way to better cope with the disappointment of a failure. Remember, creating high expectations, big plans, and big solutions increases the risk of a bigger disappointment. And the bigger the pain, the less willing you are to try again. So, simplify everything as much as possible.


If you work on these concepts, your chances to become better may increase. There is no guaranteed success, but it’s a step closer.

failing is not an option

So, failing is really not an option; it’s mostly a certainty of life. Everyone, even the most experienced, is prone to fail. The only option you have is how to deal with it.
Failing makes us evolve, despite not being pleasant. There is a thin line that will help you keep your mental sanity while still learning and innovating.

If you want to know more about the art of failing, I recommend “Fail Fast, Fail Often: How Losing Can Help You Win“ by John D. Krumboltz and Ryan Babineaux. There you can find more tips on how to deal with failure and how you should embrace it. It’s written by professionals in psychology, so they’ll probably know more about it than I do.


You don’t need a JS Library for your components

Have you ever asked yourself how many times you have written a Button component using different libraries or frameworks? Components are the base building blocks of any web project, but with all the changes and new frameworks appearing, it can be hard to reuse or keep them updated. As a result, development time increases.

To solve this problem, Web Components can simplify the process since they work natively in the browser and can also be integrated into any JS framework/library.

It is recommended to have experience with HTML, CSS, and Javascript before getting started.

In the end, you will comprehend how to create and integrate a Web Component. I will provide links containing the example while sharing my experience, caveats, and solutions I found for the most common problems when starting to develop Native Web Components.

What is a Web Component?

A Web Component is a way to create an encapsulated, single-responsibility code block that can be reused on any page. It works by utilizing a native browser API.

The Web Component technology is older and more widely used than most people know. The <audio>, <meter>, <video>, and many other HTML tags are implemented in each browser with Web Component-like technology, but that technology was not available externally. So, what we now call “Web Components” (the Custom Elements API, Templates, and the Shadow DOM) is that very same technology made available to us all.

Building Blocks of a Web Component

The main features you need to understand to start creating your own components are:

  • Shadow DOM
  • HTML Templates
  • Custom Elements

For this tutorial, you are going to build an alert component.

<ce-alert><strong>Info alert!</strong> Change a few things up and try submitting again.</ce-alert>

Shadow DOM

A key aspect of web components is encapsulation — keeping the markup structure, style, and behavior hidden and separate from other code on the page so that different parts do not clash and the code can be kept nice and clean. The Shadow DOM API is crucial, providing a way to attach a hidden separated DOM to an element.

Shadow DOM allows hidden DOM trees to be attached to elements in the regular DOM tree. This shadow DOM tree starts with a shadow root, underneath which you can attach any elements you want, in the same way as in the standard DOM.

Shadow DOM

In simple terms, shadow DOMs are self-contained, encapsulated blocks of code within a regular DOM that have their own scope.
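
To make that concrete, here is a minimal sketch (my example, not the component you will build later) of attaching a shadow root to a plain element; the style declared inside it does not leak out to the rest of the page:

// Minimal sketch: a shadow tree with its own scoped style.
const host = document.createElement('div');
const shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML = `
  <style>p { color: red; }</style>
  <p>Only this paragraph turns red.</p>`;
document.body.appendChild(host);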

HTML Templates

The HTML Templates are where you create and add the HTML markup and the CSS. You just have to write your markup inside the <template> tag to use it.

The distinctive aspect of the template is that it will be parsed but not rendered, so the template will appear in the DOM but not be presented on the page. To understand it better, let’s look at the example below.

<template>
  <div class="alert">
    <span class="alert__text">
      <slot></slot>
    </span>
    <button id="close-button" type="button" class="alert__button">x</button>
  </div>
</template>

Since there is no native support for importing HTML files into JavaScript code, the easiest way to achieve this is to add a template tag via code in the JavaScript file and assign the HTML content with the innerHTML property.

const template = document.createElement('template');
template.innerHTML = /*html*/ `
<div class="alert">
  <span class="alert__text">
    <slot></slot>
  </span>
  <button id="close-button" type="button" class="alert__button">x</button>
</div>`;

Show code in action

This is the draft of the component you will build and this is the result after registering and importing it:

<ce-alert>Hello there!</ce-alert>

I will explain the details of how to register and import it later, and also how to add CSS styles. Furthermore, you have probably noticed a new tag called <slot>, which is an important feature of the Web Component technology, so let’s check it out.

The <slot> element

The <slot> element is a placeholder inside a web component that you can fill with your own markup, which lets you create separate DOM trees and present them together, and can only be used with the Shadow DOM. The name attribute can be used to specify the target of content you want to place.

Let’s look into this example. You created a new Web Component called ce-article and it contains the following markup:

<article>
  <header>
    <slot name="header">
      <h1>title</h1>
    </slot>
    <slot name="subheader">
      <h2>subtitle</h2>
    </slot>
  </header>
  <p>
    <slot></slot>
  </p>
  <footer>
    <slot name="footer"></slot>
  </footer>
</article>

Show code in action

To make use of this component you could declare it as follows:

<ce-article>
  <h1 slot="header">My articles title</h1>
  Loren ipsum neros victus...
  <a href="#" slot="footer">Read more</a>
</ce-article>

Show code in action

Then all the content will be placed in the position you declare inside your Web Component as you can see in the image below.

<ce-article> element

Custom Elements

To create Custom Elements, you need to define the name and a class object representing the element’s behavior. As a rule of thumb, you should add a prefix to the component to avoid clashes with native HTML tags; also note that custom element names must contain a hyphen. So, in the example, you could add a ce (custom element) prefix to the name of the component, like ce-alert.

Create a new Custom Element

Create a new class Alert that inherits from HTMLElement and call the base constructor with super() inside the constructor method.

const template = document.createElement('template');
//...
export class Alert extends HTMLElement {
  constructor() {
    super();
  }
}

Show code in action

Register a new Custom Element

Next, you use the customElements.define method to register your new component.

const template = document.createElement('template');
//...
export class Alert extends HTMLElement {
 //...
}
customElements.define('ce-alert', Alert);

Show code in action

Custom Element Lifecycle

From the moment you create, update, or remove a custom element, it fires specific methods that define each stage of its lifecycle.

  • connectedCallback: Invoked each time the custom element is appended into a document-connected element. This will happen each time the node is moved, and it may happen before the element’s contents have been fully parsed.
  • disconnectedCallback: Invoked each time the custom element is disconnected from the document’s DOM.
  • adoptedCallback: An element can be adopted into a new document (i.e. using the document.adoptNode(element) method) and has a very specific use case. In general, this will only occur when dealing with <iframe/> elements where each iframe has its own DOM, but when it happens the adoptedCallback lifecycle hook is triggered.
  • attributeChangedCallback: Invoked each time one of the custom element’s attributes is added, removed, or changed. Which attributes to watch for changes is specified in the static get observedAttributes() method.

Let’s look at an example of these concepts in use.

// https://github.com/mdn/web-components-examples/tree/main/life-cycle-callbacks
// Create a class for the element
class Square extends HTMLElement {
  // Specify observed attributes so that attributeChangedCallback will work
  static get observedAttributes() {
    return ['c', 'l'];
  }
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    const div = document.createElement('div');
    const style = document.createElement('style');
    shadow.appendChild(style);
    shadow.appendChild(div);
  }
  connectedCallback() {
    console.log('Custom square element added to page.');
    updateStyle(this);
  }
  disconnectedCallback() {
    console.log('Custom square element removed from page.');
  }
  adoptedCallback() {
    console.log('Custom square element moved to new page.');
  }
  attributeChangedCallback(name, oldValue, newValue) {
    console.log('Custom square element attributes changed.');
    updateStyle(this);
  }
}

The class constructor is simple: here, you just attach a shadow DOM to the custom element, and then you can append your template inside of it. The shadow mode can be open or closed. In the open state, the content inside of it can be accessed from the outside (and vice versa); when closed, it cannot.

To access an element inside a custom element, you need to query for the custom element by name, use its shadowRoot property, and then query again for the element you want.

document.querySelector('ce-alert').shadowRoot.querySelector('#close-button');

i.e. the button inside the <ce-alert> custom element.

NOTE: This is only possible when the mode is set to open while attaching the shadow root to your custom element.

To recap, the updates are all handled by the life cycle callbacks, which are placed inside the class definition as methods. The connectedCallback() runs each time the element is added to the DOM. The disconnectedCallback runs when the element is removed and the attributeChangedCallback() is called when an attribute (which is mapped in the static get observedAttributes() method) is changed.

TIP: To check whether a component is connected to the DOM, you can use this.isConnected

Define attributes and properties

Attributes and properties work slightly differently from what you may be used to in a JS library/framework. Attributes are what you declare inside the HTML tag, while properties are part of the HTMLElement class you extended; when you define a new component, it already contains a set of predefined properties. Syncing attributes and properties can be achieved by reflecting properties to attributes. Let’s demonstrate that with an example:

<ce-alert color="red"></ce-alert>

It is crucial to notice that attributes are always strings. Therefore, you cannot pass a method, object, or number directly. If you need another type, you have to cast it later or declare it directly on the element object.
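
For instance, inside a lifecycle callback you could cast the raw attribute strings yourself (a hypothetical snippet; the damage and active attributes are only for illustration):

// Hypothetical casting of string attributes into the types you actually need.
const damage = Number(this.getAttribute('damage') ?? 0); // "100" -> 100
const active = this.hasAttribute('active');              // presence -> boolean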

Now to sync the attribute with the property in the class:

//...
export class Alert extends HTMLElement {
  //...
  set color(value) {
    this.setAttribute('color', value);
  }
  get color() {
    return this.getAttribute('color');
  }
  connectedCallback() {
    console.log(this.color); // outputs: "red"
  }
}
//...

Show code in action

Although this approach works, it can become lengthy and tedious the more properties your components have. But there is an alternative that does not require declaring all properties manually: the HTMLElement.dataset interface provides read/write access to custom data attributes (data-*) on elements. It exposes a map of strings (DOMStringMap) with an entry for each data-* attribute, and you can also combine it with get/set properties to have even more flexibility. For now, update the example with the dataset declaration:

<ce-alert data-color="red"></ce-alert>
//...
export class Alert extends HTMLElement {
  //...
  attributeChangedCallback() {
    console.log(this.dataset.color); // outputs: "red"
  }
}
//...

Show code in action

Sync Properties and Attributes (Bonus)

This is optional, but in case you want to do the sync between attributes and properties, here is a function that can simplify this process:

/**
 * @param target - the custom element class
 * @param props - properties that need to be synced with the attributes
 */
const defineProperties = (target, props) => {
  Object.defineProperties(
    target,
    Object.keys(props).reduce((acc, key) => {
      acc[key] = {
        enumerable: true,
        configurable: true,
        get: () => {
          const attr = target.getAttribute(getAttrName(key));
          return (attr === '' ? true : attr) ?? props[key];
        },
        set: val => {
          if (val === '' || val) {
            target.setAttribute(getAttrName(key), val === true ? '' : val);
          } else {
            target.removeAttribute(key);
          }
        }
      };
      return acc;
    }, {})
  );
};
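
The getAttrName helper is not shown in the snippet above; a plausible implementation (my assumption) simply converts a camelCase property name into its kebab-case attribute name, and you would call defineProperties from the component’s constructor:

// Assumed helper: camelCase property name -> kebab-case attribute name.
const getAttrName = prop => prop.replace(/([A-Z])/g, '-$1').toLowerCase();

// Hypothetical usage inside the constructor, with default values per property:
// defineProperties(this, { color: 'red', dismissible: false });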

Observe Properties and Attributes

To detect attribute or property changes, you need to return an array with all the attribute names you want to watch from the static observedAttributes method. Next, you configure the callback function attributeChangedCallback to define what will happen when an observed attribute changes.

//...
export class Alert extends HTMLElement {
  //...
  static get observedAttributes() {
    return ['data-color'];
  }
  attributeChangedCallback(name, prev, curr) {
    if (prev !== curr) {
      this.shadowRoot.querySelector('.alert').classList.remove(prev);
      this.shadowRoot.querySelector('.alert').classList.add(curr);
    }
  }
}
//...

Browser Integration

You can now use your Custom Element in your HTML file. To integrate, you must import the js file as a module.

<html>
  <head>
    <style>
      ...
    </style>
    <script type="module" src="./index.js"></script>
  </head>
  <body>
    <ce-alert></ce-alert>
  </body>
</html>

Show code in action

Custom Element Styling

At this point, you now have a fully working Web Component; the only missing part is the styling. There are at least four ways of defining a style using CSS:

  • Inline Style
  • Using “part” Attribute
  • CSS Inject
  • Link Reference

In addition to the conventional CSS selectors, Web Components support the following ones:

  • :host/:host(name): Selects the shadow host element; with a parameter, only if the host matches the given selector.
  • :host-context(name): Selects the shadow host element only if the selector given as the function’s parameter matches the shadow host’s ancestor(s) in the place it sits inside the DOM hierarchy.
  • ::slotted(): Selects a slotted element if it matches the selector.
  • ::part(): Selects any element within a shadow tree with a matching part attribute.

Inline Style

The initial and most common way to start styling your components is to declare the styles inside the template.

<template>
  <style>
    :host {
      --bg-color: #ffffff;
      --border-color: #d4d4d8;
      --text-color: #374151;
    }
    .alert {
      font-family: 'Segoe UI', Roboto, 'Helvetica Neue', Arial, 'Noto Sans', sans-serif;
      display: flex;
      justify-content: space-between;
      align-items: center;
      padding: 0.5rem 1.25rem;
      color: var(--text-color);
      background-color: var(--bg-color);
      border: 1px solid var(--border-color);
      border-radius: 0.75rem;
    }
    .alert__text {
      font-size: 0.875rem;
      line-height: 1.25rem;
    }
    .alert__button {
      -webkit-appearance: button;
      cursor: pointer;
      color: var(--text-color);
      background-color: transparent;
      background-image: none;
      border: none;
      height: 2rem;
      width: 2rem;
      margin-left: 0.25rem;
    }
  </style>
  <div class="alert">
    <span class="alert__text">
      <slot></slot>
    </span>
    <button id="close-button" type="button" class="alert__button">x</button>
  </div>
</template>

Show code in action

The main difference here is the use of the :host selector instead of the :root selector, which is not available inside the encapsulated element; you also cannot access global CSS variables inside the Web Component.

Using the “part” attribute

Another solution is to use the ::part selector to customize a component from the outside, making it possible to use the :root selector and create shared styles. You need to add the part attribute to the elements you want to customize, then the CSS selectors from the outside can reach in.

Let’s take a look at this example: you could update the template and change the class attribute to part.

<template>
  <style>
    //...
  </style>
  <div part="alert">
    <span part="text">
      <slot></slot>
    </span>
    <button id="close-button" type="button" part="button">x</button>
  </div>
</template>

Show code in action

Then, create a new CSS file and move all the style blocks into it and update the selectors to match the ce-alert component.

:root {
  --bg-color: #ffffff;
  --border-color: #d4d4d8;
  --text-color: #374151;
  font-family: ui-sans-serif, system-ui, -apple-system, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, 'Noto Sans', sans-serif;
}

ce-alert::part(alert) {
  display: flex;
  justify-content: space-between;
  align-items: center;
  padding: 0.5rem 1.25rem;
  color: var(--text-color);
  border: 1px solid var(--border-color);
  background-color: var(--bg-color);
  border-radius: 0.75rem;
}

ce-alert::part(text) {
  font-size: 0.875rem;
  line-height: 1.25rem;
}

ce-alert::part(button) {
  -webkit-appearance: button;
  color: var(--text-color);
  background-color: transparent;
  background-image: none;
  border: none;
  margin-left: 0.25rem;
  height: 2rem;
  width: 2rem;
  cursor: pointer;
}

Show code in action

NOTE: this selector only accepts one parameter.

To finalize, update the index.html file to import this new CSS file and that’s it.

CSS Inject

Another way to customize the elements is to inject the styles inside the Web Component. First, you create a CSSStyleSheet object that represents a single CSS stylesheet, then replace its styles, and finally apply them to the shadow root. The only downside is that it requires a special polyfill to work in Safari.

const stylesheet = new CSSStyleSheet();
stylesheet.replace('body { font-size: 1rem };p { color: gray; };');
this.shadowRoot.adoptedStyleSheets = [stylesheet];

Show code in action

You can combine this with a JS bundler and enable PostCSS features. You need to configure the bundler to load the CSS files as a string.

If you are using Vite, append the ?raw suffix to import the file as a string.

import styles from './ce-alert.css?raw';

Show code in action
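
The imported string can then be applied with the constructable stylesheet approach shown earlier (a sketch, assuming a ce-alert.css file next to the component and code running inside its constructor):

import styles from './ce-alert.css?raw';

// Apply the imported CSS string to the component's shadow root.
const stylesheet = new CSSStyleSheet();
stylesheet.replaceSync(styles);
this.shadowRoot.adoptedStyleSheets = [stylesheet];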

In case you are using Webpack, you have to install postcss, postcss-loader, and raw-loader:

npm install --save-dev postcss postcss-loader raw-loader

Afterward, update the webpack.config.js file to import the CSS files as string.

module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: ['raw-loader', 'postcss-loader']
      }
    ]
  }
};

Link Reference

Link Reference is my preferred solution because you can load external CSS files without having to duplicate any code, and it can even be used to integrate your Web Component with a popular CSS framework like Tailwind, Bulma, or Bootstrap.

For this example, you will integrate Tailwind with Vite. After following the setup instructions for Tailwind, create a tailwind.css file in the root level of the project:

@tailwind base;
@tailwind components;
@tailwind utilities;

Show code in action

Install concurrently by running the command npm install --save-dev concurrently and configure the package.json to run the tailwind compiler together with the dev server.


{
  "name": "vite-starter",
  "private": true,
  "version": "0.0.0",
  "scripts": {
    "start": "concurrently --kill-others-on-fail \"npm:dev\" \"npm:tailwind\"",
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview",
    "tailwind": "tailwindcss -i ./tailwind.css -o ./public/tailwind.css --watch"
  },
  "devDependencies": {
    "@tailwindcss/typography": "^0.5.2",
    "autoprefixer": "^10.4.5",
    "concurrently": "^7.1.0",
    "postcss": "^8.4.12",
    "postcss-import": "^14.1.0",
    "postcss-nesting": "^10.1.4",
    "tailwindcss": "^3.0.24",
    "vite": "^2.9.6"
  }
}

Show code in action

After that, update the index.html to include the styles and load the scripts as a module.

<html>
  <head>
    ...
    <link href="/tailwind.css" rel="stylesheet" />
  </head>
  <body>
    ...
    <script src="/src/main.ts" type="module"></script>
  </body>
</html>

Now, inside your Web Component, you can link the CSS library.

<template>
  <link rel="stylesheet" href="/tailwind.css" />
  <div
    class="flex items-center justify-between rounded-xl border border-contrast-300 bg-canvas py-2 pl-4 pr-3 text-sm text-content shadow-sm">
    <span class="text-sm">
      <slot></slot>
    </span>
    <button
      id="close-button"
      type="button"
      class="ml-1 -mr-1 inline-flex h-8 w-8 items-center justify-center p-0.5 text-current">
      x
    </button>
  </div>
</template>

Show code in action

Final Solution

Here is the result of your new Web Component with everything you learned so far:

const template = document.createElement('template');
template.innerHTML = /*html*/ `
<style>
  :host {
    --bg-color: #ffffff;
    --border-color: #d4d4d8;
    --text-color: #374151;
  }
  .alert {
    font-family: 'Segoe UI', Roboto, 'Helvetica Neue', Arial, 'Noto Sans', sans-serif;
    display: flex;
    justify-content: space-between;
    align-items: center;
    padding: 0.5rem 1.25rem;
    color: var(--text-color);
    background-color: var(--bg-color);
    border: 1px solid var(--border-color);
    border-radius: 0.75rem;
  }
  .alert__text {
    font-size: 0.875rem;
    line-height: 1.25rem;
  }
  .alert__button {
    -webkit-appearance: button;
    cursor: pointer;
    color: var(--text-color);
    background-color: transparent;
    background-image: none;
    border: none;
    height: 2rem;
    width: 2rem;
    margin-left: 0.25rem;
  }
</style>
<div class="alert">
  <span class="alert__text">
    <slot></slot>
  </span>
  <button id="close-button" type="button" class="alert__button">x</button>
</div>`;

export class Alert extends HTMLElement {
  static get observedAttributes() {
    return ['data-color'];
  }
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.appendChild(template.content.cloneNode(true));
  }
  connectedCallback() {
    const button = this.shadowRoot.getElementById(`close-button`);
    button.addEventListener(
      'click',
      () => {
        this.dispatchEvent(new CustomEvent('close'));
        this.remove();
      },
      { once: true }
    );
  }
  attributeChangedCallback(name, prev, curr) {
    if (prev !== curr) {
      this.shadowRoot.querySelector('.alert').classList.remove(prev);
      this.shadowRoot.querySelector('.alert').classList.add(curr);
    }
  }
}

customElements.define('ce-alert', Alert);

Show code in action

Problems and Issues

There are good aspects to using Web Components: they work everywhere, are small, and run fast since they use built-in platform APIs. But it is not all sunshine and roses; there are also some things that might not work as you expect.

Attributes vs Properties

A downside of using attributes in a custom element is that they accept only strings, and syncing the properties with the attributes requires manual declaration.

Component Update

Custom elements can detect if an attribute changes, but what happens next is up to the developer to define.

Styling

Styling can be problematic and tricky since the component is encapsulated, and components like dropdowns, popups, or tooltips that require dynamic elements rendered on top of others can become challenging to implement.

Accessibility

Because of the Shadow DOM boundary, common attributes like label/for, tabindex, aria-pressed, and role do not work as you would expect. But there is an alternative using a new browser API called the Accessibility Object Model.

Forms

Using forms with custom elements requires some custom form association to make it work.
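
As a rough sketch of what that association looks like (not covered by the article’s alert example), the ElementInternals API lets a custom element participate in a form:

// Sketch of a form-associated custom element using ElementInternals.
class CeInput extends HTMLElement {
  static formAssociated = true; // opt in to form association

  constructor() {
    super();
    this.internals = this.attachInternals();
  }

  connectedCallback() {
    // The value set here is submitted together with the surrounding <form>.
    this.internals.setFormValue('laser');
  }
}
customElements.define('ce-input', CeInput);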

SSR Support

Due to the nature of a Web Component, it cannot be used on an SSR page, since Web Components rely on browser-specific DOM APIs and the Shadow DOM cannot be represented declaratively, so it cannot be sent as a string.

Conclusion

In this article, you learned about the world of Web Components, which consists of three blocks: HTML Template, Shadow DOM, and Custom Elements. Combining them makes it possible to create your Custom HTML Elements that can be reused in many other applications. To get a little more information about building Web Components, you can check the webcomponents.dev website, where you can discover and play with different ways of making Web Components.
Try it out, play with it, and create your first Web Component for your application.

Thanks to Simon Vizzini.

3 easy remote team building techniques

In this article, our frontend engineer Jan Philipp Paulus introduces 3 simple methods to help your team build up or strengthen their bond. You can pick up the techniques without further requirements. (Photo by Sigmund on Unsplash).

Why a good team spirit is important

🤩 Motivation: colleagues will be more motivated when there is an overall better spirit in the team.

🤝 Trust: colleagues will start to gain and foster trust in each other.

💬 Communication: with more trust, colleagues will start communicating and interacting more with each other.

💡 Knowledge: the more communication occurs, the more knowledge will be shared within your team.

📈 Performance: in the end, everything mentioned above can potentially lead to overall better team performance.

Disclaimer

Remote team building is not a replacement for in-person team building and will most likely not be as effective as meeting face-to-face.

Pro tip 💪 Try to incorporate a mix of in-person and remote team building techniques into your team building strategy.

the techniques

👩🏻‍💻 Create an off-topic channel

Create a channel for your team in which everyone can post random memes, images (especially of dogs or cats), recipes, and music. This channel can be an alternative to the chats that would usually happen at the coffee machine. You can even start small competitions in which everyone posts a picture of their desk and then votes for the cleanest and messiest. These are some ideas, but the options are endless. Be creative!

👾 Host virtual game nights

You might be skeptical if you’ve never attended a virtual game night, but they are a fun way of getting to know your colleagues. Depending on the game, you will be able to see the strengths and weaknesses of your colleagues. Understanding these will help you to get a better picture of the person you’re working with.

Pro tip 💪 To get everyone into the right mood, try to kick off the evening with an ice breaker. tscheck.in is a good starting point if you are looking for ice breakers.

Among the many options, here are some games I can recommend:

  • Among Us
  • Remote Work Bingo
  • Draw Something
  • Never Have I Ever (SFW)
  • skribbl.io

☕ Have virtual lunch or coffee dates

A wonderful way of getting to know your colleagues is to set up virtual lunch or coffee dates. Plus they usually don’t take much time to plan and execute. You could even install apps on Teams or Slack that will randomly pair up team members every week.

Pro tip 💪 Try to avoid starting the conversation with work-related topics. Instead, try to find out what the other person likes to do in their spare time. Finding common ground makes it easier to get a conversation started.

wrapping up

We hope these three techniques gave you a rough overview of how to start remote team building.
Give them a try, and feel free to experiment with other promising methods.

The psychology of remote work and 16 tips to make it work

This article was originally published at https://heydaroff.info.

Remote working is most likely here to stay in the long term. Even if not at one hundred percent, it will definitely push many companies to adopt a hybrid model. When COVID hit, tens of millions of people had to move their workplaces to their living places, to their homes. Multiple studies (mostly by management consulting companies) estimate that 20 to 25 percent of the workforce could work remotely three to five days a week. Obviously, remote work at the moment is only possible for people whose work does not require physical output. There are many dimensions impacted by the workforce going remote, such as counter-urbanization, “to-go” delivery vs. restaurants, commercial real estate, and so on. These are all business-side changes; however, remote working also has a direct impact on human psychology.

Photo by Sigmund

Onsite vs. remote

Obviously, working onsite has quite a few benefits. When working in an office, social interaction is inevitable: we meet a lot of colleagues, we talk to them, we go to lunch or take coffee breaks with them. If we have ad-hoc, urgent questions, we can simply go to a colleague’s desk and ask; in other words, the information flow is fast. Moreover, the office is a dedicated place for working, which means our work-life split is explicit: work starts when we come to the office and ends when we leave it. Another benefit of the office is its infrastructure. Usually the desks and chairs are quite comfortable, there are rooms for social hangouts, a kitchen with an always-filled fridge, and sometimes you can even find a beer tap in the office (the reason I love our Berlin office).

On the other hand, remote working is also quite attractive. When working from home, we don’t have to wake up three hours in advance to get ready and go to the office. I cannot believe how much time we lose to commuting and how remote working frees that time for other activities. Another side advantage of not commuting is that you can choose when to start working, since starting work no longer depends on location or logistics. The location independence also means that you can travel anywhere, anytime you want, and continue working from there. I do not like Berlin weather in winter, so I prefer to move to the Algarve in Portugal and work from there for the next few months. This type of independence is extraordinarily beautiful. Another benefit of the home office (HO) is the cost savings on both the employee and the employer side. If I can work from home, then I do not need to pay a ridiculous amount of money for rent just because I live in Berlin. Instead, I can find an apartment a little outside the city, or maybe even move back to beautiful Freiburg and pay half the rent. On the other side, companies no longer need to offer office space that is kept alive 24 hours a day, which costs a lot not just monetarily but also environmentally.

However, the biggest benefit is our flexibility in shaping our work-life balance, if done correctly. Otherwise, the psychological debt of remote working can be a deal-breaker. Let’s have a look at it from a neuroscience perspective.

how does remote work affect our psychology?

Some neuroscience studies suggest that our brain, besides being a central command center, is also a logistics center. The largest nerve in our body, the vagus nerve, carries information from our gut, through our heart, face, and ear canal, to our brain. This nerve brings sensations from the body to the brain and carries command feedback back to the organs. It regulates the facial muscles, influences our breathing and heart rate, and is involved in how we perceive, react to, and recover from stress. Neuroscientist Steven W. Porges, Ph.D., in his Polyvagal Theory of Emotion, suggests that when we enhance our connection with other people, we trigger neural circuits in our bodies that calm the heart, relax the gut, and turn off the fear response. Every time we interact with people, the vagus nerve, also called our social engagement system, is in active mode. Like other muscles in our body, it gets exercise when it is activated.

When we are working remotely, we heavily decrease our human-to-human interactions, at least in the real world. Without this interaction, our vagus nerve no longer gets exercise, becomes passive, and starts to atrophy. When we are lonely, our brain alarms us, saying, “Help, we are losing our ability to connect with other humans, which is necessary to survive. Please interact with others.” Since we have not practiced solitude, the next reaction to loneliness is fear. With this fear, we become more guarded against any threat and more self-isolated. So begins the vicious cycle that weakens our connection to others, accelerates the atrophy, and pulls us into depression, anxiety, and further loneliness. Physiologically as well, experiments have shown that after a period of isolation and environmental monotony, our brain mass shrinks.

Looking more specifically at remote working, a few psychological issues arise, such as placelessness, nowhereness, non-visibility, and reduced creativity. As mentioned above, when we work onsite, we have a dedicated physical workplace, which is perceived and influenced by our Global Positioning System (GPS) neurons that code our navigation behavior. When workplace interaction moves to remote video conferencing, our GPS neurons, mirror neurons, self-attention networks, spindle cells, and interbrain neural oscillations all get affected. This in turn affects our identity and cognitive processes, such as social and professional identity, leadership, intuition, mentoring, and creativity.

Another famous phenomenon is Zoom fatigue, which is basically a sense of tiredness, anxiety, and fatigue-like discomfort. The reasons for Zoom fatigue are the non-optimal functioning of technology (“sorry, my internet cut out”, “we cannot hear you”, etc.) and the significantly higher cognitive resources needed to understand the meaning of others’ verbal communication, since nonverbal cues are heavily reduced.

how to start remote and async collaboration

So, remote working can be dangerous in the long term, right?! Given that we will stay mostly remote, what should we do about it? Metaverse and VR-based virtual workplace concepts could help, but we are still quite far from there. What we can do is mitigate the psychological damage by integrating interaction and a perception of place into our remote setups. For every new change, idea, or challenge, there is a simple process that should work:

  1. Continuously monitor and identify what doesn’t work anymore or can be improved
  2. Make adjustments by doing small experiments
  3. If the adjustment improves the situation, keep it and move on to the next issue. If the change isn’t successful, come up with another experiment.

TIPS FOR REMOTE & ASYNC WORK

This is a list of mixed practical tips that I could come up with. It could be more structured, but that is enough cognitive workload for one post.

Get rid of most non-value-adding meetings – Since we have already seen that meetings over video conferencing tools are mentally taxing, we do not want to overload people with random updates and irrelevant meetings. How? Use your intuition. Do you need their input? Is this urgent? Should everyone be involved? Could your question be answered via email or chat? If you can answer these questions, then you have your answer to whether you should invite them to a meeting. There is a nice guiding post from Doist on this.

Set up minimal explicit expectations about sync communication – When inviting people to meetings, make sure the purpose, agenda, and expected outcome of the meeting are explicitly written down, and do decline meetings where this information is missing. When writing to people in chat, make sure you state the purpose of your message: the keyword FYI for a pure update, or the keyword INPUT/ACTION REQUIRED for input-requesting messages, helps others understand the intention of your message.

Have a structured messaging ecosystem – Your messaging tool should have a structure that supports an efficient communication flow. Work-related content should be clustered into topic channels. To avoid disturbing everyone with every reply, write comments and thoughts inside threads, which people can mute if they are not interested. Have a hangout/random/fun channel for non-work-related content. Think twice before @mentioning the whole channel if a post is not relevant for everyone. Have profiles tagged with roles and other necessary information (location, product/project, contact preference).

Communicate updates mostly async – If it is just an update that does not require any input from people, make it async. Create a weekly digest practice in which the week’s main goals are documented, as well as what people are focusing on individually. Have diary-like daily standups instead of meetings. For the daily standup, we tweaked the practice a bit by making it “async-ish”: we pre-fill the standup for the day, and if there are discussion items, then we meet up. The time we save from sync standup meetings we use for a virtual hangout to talk about our day.

Log all important information & decisions in a single, easy-to-find place – All official communication about important updates and decisions should be easy to find for both internal and external stakeholders.

Pair-programming practice – To stimulate human-to-human interaction, encourage pair-programming and sparring sessions among team members. This should not be limited to the development team; extend it to other units as well.

Provide remote setup support – To ensure the physical well-being of employees, companies should provide remote setup support. Subsidizing the internet connection, providing ergonomic desks and chairs, and offering a holistic communication tech stack are a few practical forms of support organizations can give their employees. Obviously, regular IT support is a must as well.

Dedicated roles & accountabilities within an organization to foster remote & async – As with every other initiative, a DRRIver (directly responsible role or individual) with specific accountabilities should steer the implementation of remote & async work.

Some meetings should stay in sync mode – One-on-ones, important decisions, kick-offs, and brainstorming events should stay sync. Sync meetings create human-to-human connections and foster engagement, brainstorming, and creativity.

Organize quarterly onsite team retreats – As mentioned already, remote working may create a feeling of disconnection. Moreover, when new people join the team, they cannot build personal bonds virtually, due to the lack of visible body language, eye contact, a physical interaction space, and more. Therefore, investing in regular retreats should connect the team and create a long-term bond among team players.

Weekly sync hangout meetings – Another meeting series you want to set up in sync mode is a weekly virtual hangout. It could be an after-work drink & chat session or a midday coffee break with the whole team. This type of social activity contributes to the connection among team members.

Prepare templates for every communication type – To create efficient async communication and well-written documentation, it’s important to enable everyone in the organization to write better and in a more structured way. One important means of enabling people is training to improve their writing skills. Another simpler, faster, and more efficient method is to provide templates for every type of documentation, such as decision-making templates, post-mortem templates, sprint planning/review/retro templates, conflict analysis templates, etc.

Start as soon as possible – The only way to be successful is to start experimenting as soon as possible. Instead of creating a perfect framework, come up with the earliest testable version, implement it, get feedback, and iterate further. You can start with your direct team and make it as simple as replacing some recurring meetings with an async substitute.

Have a KPI for tracking the success of every experiment – Know what to measure, and measure it. First understand what a successful outcome looks like at the end, then choose a metric that will show your progress towards that target. It could be burndown, OKRs, employee satisfaction, or something else. Make sure the metric is aligned with the success outcome you have defined.

Don’t respond to messages or emails instantly & suggest others don’t either – Put it as a status in your messaging tool, send it as an auto-reply to all emails, or make it your profile picture. Communicate to people that you are currently busy doing “deep work”. Responding instantly to every communication request signals that you are available for ad-hoc requests. Dedicate a time block every day to “focused productive work”. If people request feedback as soon as possible, just reply saying, “I am currently focused on task X, so I’ll need to get back to you in X days.” Just tell them that you’re busy; that’s it.

Focus on outcome-oriented performance evaluation – Evaluate individual performance based on outcomes, not on the number of hours worked. Each person should have a specific outcome they steer or a goal they have set out to achieve, and they should be evaluated on the success rate of that outcome/goal.

One of the main values of the Agile Manifesto suggests we should value “individuals and interactions over processes and tools”. Going remote has been shown to help both employers and employees become more efficient. To ensure the long-term robustness of remote working, we should proactively experiment with new ideas and, based on the feedback loop, iterate further or move on. When doing these experiments, our top priority should be the satisfaction of individuals, including employees, customers, and stakeholders in general.

How to avoid a TL;DR reaction

This article is about how we can better get our messages across to readers in written form, why this is becoming increasingly difficult, and what we can do to avoid getting a tl;dr response to our written articles. 

Why am I writing about this?

I want us, developers, to write informative and readable articles.

So that we bring our knowledge and experiences into the world because we are awesome at our job and others can learn from it. Therefore, we need to be aware of the reading behavior of our potential readers and actively engage in writing readable texts, which has its challenges, especially in the technical field.

Where does the problem come from?

We live in an era where time is scarce, or we, as rushed characters, have the impression that time is short.

There are different reasons for this, but read the article by our colleague Hidayat, “prosochē – reflecting on attention,” in which he looks at this in detail. You can experience how difficult it is for him to focus.

In any case, we want to know what is coming, especially when we read articles on the internet. Often, however, we are denied this insight in advance, which means we miss the chance to get excited about the topic. Then we switch off and don’t read the article, or we scan frantically over it for helpful information and possibly miss important parts.

On the other hand, it is also up to the writers, who may enjoy writing gushing articles or be unable to formulate their statements concisely and crisply.

tl;dr explained

tl;dr (“too long; didn’t read”) means that a contribution, be it an article, a post, or something similar, has been put into too many words and therefore potentially won’t be read.

There are two uses of the term:

  • One is as a reaction to someone else’s post to indicate excessive verbosity. In this context, tl;dr is often used as a rude reaction.
  • The other is as a label for an article’s summary, indicating that the full contribution is long.

The origin of tl;dr

The phrase was probably born on Reddit as a reaction to excessive posts. Like so many other things, it found its way from there to the entire internet.

The first entry in the Urban Dictionary dates back to 2008 and offers a wide variety of readings and variants of the phrase.

What can we improve when writing articles?

To avoid provoking the unwanted tl;dr reaction to our articles, here are a few suggestions for action:

  1. A text formulated in understandable language is the basic prerequisite.
  2. It is important to pick up the reader at the start to inspire and prepare them for what is to come. Only then will the reader work their way through a dryly written, complicated, or simply very long article.
  3. The content should be structured and divided into consumable and varied bites.
  4. The same applies to the first sentence of each paragraph, which should be written as a summary of the paragraph so that readers quickly know which paragraphs or sections are of interest and can take a shortcut to the part of the article that is important to them.
  5. In general, texts should be trimmed down until only the essentials remain, to avoid repetition.
  6. If a topic gets too long, you should decide to split the article. A series of articles also has further advantages.

tl;dr

Write your articles in an engaging and structured way, pay attention to your audience, and take them on an exciting journey. Then you will get good reactions to your articles, and they will be valuable for your readers.

I hope this article doesn’t provoke any tl;dr (or even ts;dr, td;dr, or tl;dc) reactions from you. Anyway, feel free to let it out.

Bonus for all those who have read this far

TLDR Wikipedia has summarized some excellent examples, which are also quite humorous.

Image via Unsplash by Thiébaud Faix


Header image via Unsplash by Anastasia Zhenina