7 Testing microservices

This chapter covers

  - Comparing testing strategies with the testing pyramid
  - Verifying business logic with unit tests and mocks
  - Testing adapters against real dependencies with Testcontainers
  - Running end-to-end tests against a Docker Compose stack
  - Measuring test coverage and using it in a CI pipeline
Microservice architecture encourages you to write minimal services that can be easily tested as a unit. For example, while testing the Order service, the only context you need to focus on is Order. To test the Order service, you have two high-level alternatives: manual testing and automated testing.

With manual testing, you must run the actual service and its dependencies to test the entire flow, which is time consuming compared to testing backed by a machine. This inefficient methodology also slows down software delivery. With automated testing, by contrast, you get fast feedback on your implementation because you don't need to finish the whole system before testing it.

About code examples in this chapter

In chapter 4, we started coding our services; in this chapter, we build on what we implemented in chapters 4 and 5 and continue to extend the code. Even though you will see step-by-step explanations of different testing strategies, you can always check the completed version here: https://github.com/huseyinbabal/microservices.

We will look at different testing strategies in this chapter, but first let’s look at the relations between them and their possible advantages and disadvantages.

7.1 Testing pyramid

The testing pyramid organizes software tests into three categories based on their context and gives guidance on the proportion of tests in each category: unit tests sit at the bottom of the pyramid, integration tests in the middle, and end-to-end tests at the top (figure 7.1).

Figure 7.1 Testing pyramid

As you can see, in a typical application, the percentage of unit tests is greater than that of integration tests, which in turn is greater than the percentage of end-to-end tests. There are good reasons for this distribution; let's analyze them individually.

Unit tests are designed to test one component at a time, with maximum isolation. While testing a component, the system under test (SUT), you should mock its other dependencies. Isolation decreases as you move up the pyramid because each level involves more components in the test suite, which can break that isolation.

For a unit test, a test runner is usually enough to exercise the core features and mock the dependencies. Once you move to integration tests, you need third-party tools to maintain dependencies, such as test containers for a DB connection. And once you start to depend on third parties, test execution slows down while it waits for all the dependent components.

Cost also increases as you go from unit tests to end-to-end tests, because more components mean more resource consumption, and thus more money. Unit tests have the greatest share of the pyramid because they are fast and cheap; end-to-end tests have the smallest share because they are expensive and slow to run. This does not mean you should write only unit tests and no end-to-end tests; it means you should balance the test types roughly as the pyramid suggests. Now that we can see the relations between testing strategies, let's look at how those strategies are used to verify the behavior of a microservice application.

7.2 Testing with a unit test

Automated testing gives faster feedback and thus saves time and money. Unit tests are good for verifying one basic unit of operation at a time, but is that enough to verify your implementation? We will get to other strategies, such as integration and end-to-end tests, but let's look at a SUT first.

7.2.1 System under test

A test contains the inputs, execution conditions, and expected results needed to verify the behavior of a SUT within a codebase. The SUT is simply the software element being tested; depending on your testing strategy, it can be a single class or an entire application. Identifying the SUT matters when you test a specific layer in hexagonal architecture because you need to know exactly what to verify in that layer. A group of related tests that together verify the behavior of one SUT forms a test suite. In figure 7.2, one happy path test and three edge case tests form a test suite that verifies different kinds of SUT behavior.

Figure 7.2 A test suite is formed by a set of related tests, and its goal is to verify a SUT.

7.2.2 Test workflow

Before diving into the internals of testing frameworks and writing actual Go code, let's start with the phases of an automated test (a code sketch of these phases follows the list):

  1. Setup—In this phase, we prepare the dependencies of a SUT and initialize a SUT with them. This can also involve initializing third-party dependencies, such as a MySQL database.

  2. Invoke the SUT—If we are testing a class, in this phase, we might call a function from that class.

  3. Verify—Verify the actual result with the expected result by using assertions.

  4. Teardown—Clean up resources that are no longer needed. For example, we could destroy the MySQL database once we are done with it.
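
As a minimal sketch of how these four phases map onto a Go test: the newTestDB and newTestPayment helpers below are hypothetical, and the Application wiring mirrors the Order service code shown later in this chapter.

func TestPlaceOrder_Phases(t *testing.T) {
    // Phase 1, setup: prepare the SUT's dependencies and initialize the SUT.
    db := newTestDB(t)           // hypothetical helper that provisions a test database
    payment := newTestPayment(t) // hypothetical helper for a test payment dependency
    // Phase 4, teardown: registered up front with t.Cleanup; runs after the test.
    t.Cleanup(func() { db.Close() })

    application := NewApplication(db, payment)

    // Phase 2: invoke the SUT.
    _, err := application.PlaceOrder(domain.Order{CustomerID: 123})

    // Phase 3: verify the actual result against the expected one.
    if err != nil {
        t.Fatalf("expected no error, got %v", err)
    }
}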

You may not need the setup and teardown phases for tests with very simple logic. For example, to test the following Fibonacci function from the math package, you can call the SUT, Fibonacci(), and compare the actual result with the expected result:

// math.go
package math
 
func Fibonacci(n int) int {   
    if n <= 1 {
        return n
    }
    return Fibonacci(n-1) + Fibonacci(n-2)
}

Returns a value at the nth position

To implement test cases for the Fibonacci function, you can create another file with the _test.go suffix and execute it via the go test command:

// math_test.go
package math
 
import (
    "testing"
 
    "github.com/stretchr/testify/assert"   
)
 
func TestFibonacci(t *testing.T) {
    actual := Fibonacci(3)
    expected := 2
    assert.Equal(t, expected, actual)
}

External assertion library
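
Go's built-in testing package is often enough on its own. As a sketch, here is the same check written as an idiomatic table-driven test with no third-party dependencies:

// math_table_test.go
package math

import "testing"

func TestFibonacciTable(t *testing.T) {
    cases := []struct {
        name     string
        n        int
        expected int
    }{
        {"base case zero", 0, 0},
        {"base case one", 1, 1},
        {"recursive case", 3, 2},
        {"larger input", 10, 55},
    }
    for _, tc := range cases {
        // t.Run executes each case as a named subtest.
        t.Run(tc.name, func(t *testing.T) {
            if actual := Fibonacci(tc.n); actual != tc.expected {
                t.Errorf("Fibonacci(%d) = %d, want %d", tc.n, actual, tc.expected)
            }
        })
    }
}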

Go has built-in libraries for testing, but you may want to use third-party libraries for better test suite management. In this example, we use testify (https://github.com/stretchr/testify), a unit testing assertion library, to verify the behavior of SUTs. Because the Fibonacci function does not depend on other systems, we can skip the setup and teardown phases, but that is not typical in a microservices architecture. For example, the Order service depends on a MySQL database, so to test Order features, you must prepare the dependent systems, either by running real instances of them or by mocking them.

Preparing real dependencies is inefficient because it slows down the testing process, so in this case it's better to use mocking and get fast feedback.

7.2.3 Working with mocks

Mocking in a test isolates you from the internals of dependent systems and lets you focus only on the SUT; you not only get a minimal setup for your test, you can also control the behavior of each dependency based on your needs. Let's say the Order service depends on the Payment service and the Order database, and you want to test one scenario of the PlaceOrder functionality in the Order service. To control the behavior of the Order database and Payment service operations, you can mock both and then simulate the method calls (figure 7.3).

Figure 7.3 The Order Service uses mock dependencies during testing for quickness and ease.

Whatever you are trying to mock should be an interface. Since we are using hexagonal architecture, and the ports in that architecture are interfaces, we can quickly provide a mock without changing the real implementation. Because we already have PaymentPort and DBPort, let's look at how to mock them using testify's mock package (https://pkg.go.dev/github.com/stretchr/testify/mock).

7.2.4 Implementing a mock

Here is the PlaceOrder implementation from the Order service:

type Application struct {
    db      ports.DBPort
    payment ports.PaymentPort
}
...
func (a *Application) PlaceOrder(order domain.Order) (domain.Order, error) {
    err := a.db.Save(&order)
    if err != nil {
        return domain.Order{}, err
    }
    paymentErr := a.payment.Charge(&order)
    if paymentErr != nil {
        st, _ := status.FromError(paymentErr)
        fieldErr := &errdetails.BadRequest_FieldViolation{
            Field:       "payment",
            Description: st.Message(),
        }
        badReq := &errdetails.BadRequest{}
        badReq.FieldViolations = append(badReq.FieldViolations, fieldErr)
        orderStatus := status.New(codes.InvalidArgument, "order creation failed")
        statusWithDetails, _ := orderStatus.WithDetails(badReq)
        return domain.Order{}, statusWithDetails.Err()
    }
    return order, nil
}

In the Application struct, we can see two dependencies we can mock, and in the PlaceOrder function, there is a db call and a payment service call. There are also error cases for both calls. If we properly mock the payment- and db-related calls, we can easily control the behavior to test each branch of the PlaceOrder function.

We can use the following steps to create a mock for any interface:

  1. Create a mock struct for the payment interface.

  2. Embed mock.Mock as a field to this struct.

  3. Create a receiver function for the Charge method.

  4. Create a mock struct for the DB interface.

  5. Create receiver functions for the Save and Get methods that have the same signatures as stated in the real interface.

type mockedPayment struct {
    mock.Mock                         
}
 
func (p *mockedPayment) Charge(order *domain.Order) error {
    args := p.Called(order)           
    return args.Error(0)              
}
 
type mockedDb struct {
    mock.Mock
}
 
func (d *mockedDb) Save(order *domain.Order) error {
    args := d.Called(order)
    return args.Error(0)
}
 
func (d *mockedDb) Get(id string) (domain.Order, error) {
    args := d.Called(id)
    return args.Get(0).(domain.Order), args.Error(1)
}

Embeds to track the activity of the payment

Tracks the function call with arguments

Tracks the function return values

Called() is a method on the mock object that we can call directly because mock.Mock is an embedded field. (You can see the internals of this usage here: http://mng.bz/5wOq.) Now that we can mock the behaviors of the Payment service and the database, let's add a simple test to verify PlaceOrder behavior in the Order service.

To simplify the test, let's say the DB- and payment-related calls don't return an error. In this case, we can verify that application.PlaceOrder does not return an error, using the assert library:

func Test_Should_Place_Order(t *testing.T) {
    payment := new(mockedPayment)
    db := new(mockedDb)
    payment.On("Charge", mock.Anything).Return(nil)    
    db.On("Save", mock.Anything).Return(nil)           
 
    application := NewApplication(db, payment)
    _, err := application.PlaceOrder(domain.Order{
        CustomerID: 123,
        OrderItems: []domain.OrderItem{
            {
                ProductCode: "camera",
                UnitPrice:   12.3,
                Quantity:    3,
            },
        },
        CreatedAt: 0,
    })
    assert.Nil(t, err)                                 
 
}

There is no error on payment.Charge.

There is no error on db.Save.

err is nil in this case.
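
Optionally, you can also verify that every call configured with On() actually happened. Appending the following lines at the end of the test (our addition, not part of the original listing) makes it fail if Charge or Save was never invoked:

    // AssertExpectations fails the test if any call set up with On() was not made.
    payment.AssertExpectations(t)
    db.AssertExpectations(t)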

We just wrote a unit test for the happy path scenario, but what happens if there is a problem in the db.Save() method? Let's mock an error case to see how the behavior of PlaceOrder changes:

  1. The db.Save() method returns an error with some message.

  2. Since this happens inside the PlaceOrder() function, we need to verify that the returned error contains the error message from db.Save().

To test this behavior, implement the following test:

func Test_Should_Return_Error_When_Db_Persistence_Fail(t *testing.T) {
    payment := new(mockedPayment)
    db := new(mockedDb)
    payment.On("Charge", mock.Anything).Return(nil)
    db.On("Save", mock.Anything).Return(errors.New("connection error"))   
 
    application := NewApplication(db, payment)
    _, err := application.PlaceOrder(domain.Order{
        CustomerID: 123,
        OrderItems: []domain.OrderItem{
            {
                ProductCode: "phone",
                UnitPrice:   14.7,
                Quantity:    1,
            },
        },
        CreatedAt: 0,
    })
    assert.EqualError(t, err, "connection error")                         
}

db.Save() returns a connection error.

application.PlaceOrder() should contain a connection error.

There could also be an error on the payment.Charge() call, and handling it is a bit more complex because it involves a validation error message. Since the message comes from the Payment service, we extract only the fields we need and return them to the end user. Let's write a unit test for that flow:

func Test_Should_Return_Error_When_Payment_Fail(t *testing.T) {
    payment := new(mockedPayment)
    db := new(mockedDb)
    payment.On("Charge", mock.Anything).Return(errors.New("insufficient 
     balance"))                                                  
    db.On("Save", mock.Anything).Return(nil)
 
    application := NewApplication(db, payment)
    _, err := application.PlaceOrder(domain.Order{
        CustomerID: 123,
        OrderItems: []domain.OrderItem{
            {
                ProductCode: "bag",
                UnitPrice:   2.5,
                Quantity:    6,
            },
        },
        CreatedAt: 0,
    })
    st, _ := status.FromError(err)                                 
    assert.Equal(t, "order creation failed", st.Message())         
    assert.Equal(t,
     "insufficient balance",
     st.Details()[0].(*errdetails.BadRequest).FieldViolations[0].Description)
    assert.Equal(t, codes.InvalidArgument, st.Code())              
}

payment.Charge() fails.

Converts to status for the better assertion

This comes from the Order service.

Asserts field violations from the Payment service

Asserts status code

In this unit test, we expect to see an error after calling PlaceOrder, and we assert the error message and each validation error. To run the tests, execute the following command:

go test ./...

If you want to see the coverage report in your test results, simply use the following test command:

go test ./... -cover -coverprofile=coverage.out

With the -cover parameter, Go computes the coverage, and with the -coverprofile parameter, the report is written to the coverage.out file. You can see the report in the execution output, as shown in figure 7.4.

Figure 7.4 The output of test execution with coverage

Creating mocks and controlling their behavior to test a SUT seems straightforward, but what if your project has many interfaces? In the next section, we will see how to generate these mocks automatically.

7.2.5 Automatic mock generation

Hexagonal architecture encourages you to define your ports as interfaces and implement the adapters afterward. Interfaces are easy to mock because mocking libraries only need the exposed functions of an API. If your project has lots of interfaces to mock, you can use mockery (https://github.com/vektra/mockery). Mockery can be installed in several ways; in this book, we use the mockery executable, which can be installed via Homebrew if you are using macOS. Once mockery is available on your system, navigate to your service (e.g., the order folder) and execute the following command:

mockery --all --keeptree

You will see autogenerated files, such as *_mock.go, for each interface. Instead of hand-writing mock method arguments and return values, mockery does that for us, and we can use the generated mocks in our unit tests; we simply regenerate them whenever we update an interface or introduce a new one. After generating mocks, the flow is the same: you control their behavior and test a SUT.
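
As a sketch of what using a generated mock looks like (the mocks/ports import path, the generated type names, and the domain import path below are assumptions; check the files mockery actually produced in your tree):

import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"

    // Assumed paths, for illustration only; adjust to your module layout.
    "github.com/huseyinbabal/microservices/order/internal/application/core/domain"
    "github.com/huseyinbabal/microservices/order/mocks/ports"
)

func Test_Should_Place_Order_With_Generated_Mocks(t *testing.T) {
    db := &ports.DBPort{}           // generated struct that embeds mock.Mock
    payment := &ports.PaymentPort{} // same On()/Return() API as our hand-written mocks
    db.On("Save", mock.Anything).Return(nil)
    payment.On("Charge", mock.Anything).Return(nil)

    application := NewApplication(db, payment)
    _, err := application.PlaceOrder(domain.Order{CustomerID: 123})
    assert.Nil(t, err)
}

Now that we understand how to test individual system modules with the unit testing methodology, let's look at how to test the interaction between two modules to verify behavior.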

7.3 Integration testing

In integration testing, we aim to test different modules together to verify they work as expected. For example, we have a DB adapter, and we can test whether this adapter works well with a running MySQL database. You may ask, "Why do we want to access a real database?" Because we want to verify that our codebase still works after a change in the code or a version change on the DB side. This section uses Testcontainers (https://www.testcontainers.org/) to spin up a MySQL instance and pass its URL to a DB adapter. Let's look at how to structure our test suite and initialize a MySQL instance using Testcontainers.

7.3.1 Test suite preparation

Test suite libraries help you run preparations before and after tests and execute your tests with those preparations in place. In this book, we use suite (https://github.com/stretchr/testify/tree/master/suite), which comes with testify. In a test suite, you may want to act on the following cases: before the whole suite runs (SetupSuite), after the whole suite finishes (TearDownSuite), before each test (SetupTest), and after each test (TearDownTest).

The flow diagram of test execution is shown in figure 7.5.

Figure 7.5 Test suite actions that help us to trigger action for various cases

To create a test suite, we simply create a struct and embed suite.Suite in it.
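
A bare skeleton of such a suite with its lifecycle hooks looks like this (a minimal sketch; the hook names are testify's standard ones):

type ExampleTestSuite struct {
    suite.Suite
}

func (s *ExampleTestSuite) SetupSuite()    { /* runs once, before all tests */ }
func (s *ExampleTestSuite) TearDownSuite() { /* runs once, after all tests */ }
func (s *ExampleTestSuite) SetupTest()     { /* runs before each test */ }
func (s *ExampleTestSuite) TearDownTest()  { /* runs after each test */ }

func TestExampleTestSuite(t *testing.T) {
    suite.Run(t, new(ExampleTestSuite))
}

Let's look at how we can create such a suite to prepare the MySQL test container and use it to test the DB adapter.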

7.3.2 Working with Testcontainers

To use Testcontainers with Go, we first add the testcontainers-go dependency to the Order service project and then use it in our tests to pull and run the MySQL container:

go get github.com/testcontainers/testcontainers-go

This will add the latest testcontainers-go dependency to the Order service project. A typical Testcontainers setup involves the following events: a container request is defined (image, exposed ports, environment variables, and a wait strategy), the container is started, and its endpoint is resolved so the tests can reach it.

Now you can navigate to the order/internal/adapters folder and create a file named db_integration_test.go. The following struct should be added to the test file to define our test suite context:

type OrderDatabaseTestSuite struct {
    suite.Suite            
    DataSourceUrl string   
}

Enables the test suite

Datasource URL for each test

Now we are ready to use Testcontainers to initialize a MySQL instance in a Docker container in the SetupSuite function, which has the receiver type OrderDatabaseTestSuite. This will create a MySQL container, verify it is up and running, then get the available endpoint URL and pass it to the test suite context:

func (o *OrderDatabaseTestSuite) SetupSuite() {                      
    ctx := context.Background()
    port := "3306/tcp"
    dbURL := func(port nat.Port) string {
        return fmt.Sprintf("root:s3cr3t@tcp(localhost:%s)/
         orders?charset=utf8mb4&parseTime=True&loc=Local", 
         port.Port())                                              
    }
    req := testcontainers.ContainerRequest{
        Image:        "docker.io/mysql:8.0.30",
        ExposedPorts: []string{port},
        Env: map[string]string{
            "MYSQL_ROOT_PASSWORD": "s3cr3t",
            "MYSQL_DATABASE":      "orders",
        },
        WaitingFor: wait.ForSQL(nat.Port(port), "mysql", dbURL).Timeout(time.Second * 30),   
    }
    mysqlContainer, err := testcontainers.GenericContainer(ctx, 
     testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        log.Fatal("Failed to start Mysql.", err)
    }
    endpoint, _ := mysqlContainer.Endpoint(ctx, "")
    o.DataSourceUrl = fmt.Sprintf("root:s3cr3t@tcp(%s)/orders?charset=utf8mb4&parseTime=True&loc=Local", endpoint)   
}

Suite setup with receiver function

Used for a health check in the WaitFor field

Verifies DB with the SELECT 1 query

Sets DataSourceUrl to be used on each test
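
Note that this listing never stops the container once the suite finishes. As a sketch, assuming we also store the container reference on the suite struct (a container testcontainers.Container field, set at the end of SetupSuite with o.container = mysqlContainer; both are our additions, not part of the listing above), a TearDownSuite hook could terminate it:

func (o *OrderDatabaseTestSuite) TearDownSuite() {
    // o.container is an assumed field holding the started MySQL container.
    if err := o.container.Terminate(context.Background()); err != nil {
        log.Printf("failed to terminate MySQL container: %v", err)
    }
}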

A security note here: we have used the root user in our tests, but it is good practice to use a nonroot user for accessing databases. Let's implement a test that verifies the DB adapter's save functionality, checks that MySQL is correctly initialized, and asserts there is no error after saving. These tests have a receiver of type OrderDatabaseTestSuite, so we have direct access to the assertion functions. Now we can append the db.Save() test to the db_integration_test.go file:

func (o *OrderDatabaseTestSuite) Test_Should_Save_Order() {
    adapter, err := NewAdapter(o.DataSourceUrl)
    o.Nil(err)                                   
    saveErr := adapter.Save(&domain.Order{})     
    o.Nil(saveErr)                               
}

Assertions are available through receiver o.

Saves a new order

Asserts there is no error after saving

According to this test, there shouldn't be an error after the order information is saved to the database. Let's add one more test to verify the db.Get() function; then we will see how to execute all the tests in one suite. Create a sample order and save it to the database, then fetch it to verify that the returned order contains the same CustomerID we provided in the initial order:

func (o *OrderDatabaseTestSuite) Test_Should_Get_Order() {
    adapter, _ := NewAdapter(o.DataSourceUrl)
    order := domain.NewOrder(2, []domain.OrderItem{   
        {
            ProductCode: "CAM",
            Quantity:    5,
            UnitPrice:   1.32,
        },
    })
    adapter.Save(&order)
    ord, _ := adapter.Get(order.ID)                   
    o.Equal(int64(2), ord.CustomerID)                 
}

Example order

Retrieves the order by its ID

Equal is accessible via the receiver.

Now that we have added all the tests, let's add a final function that runs them: calling suite.Run(..) with the argument OrderDatabaseTestSuite causes all the tests that have a receiver of this type to run. We can append the following function to our test file:

func TestOrderDatabaseTestSuite(t *testing.T) {
    suite.Run(t, new(OrderDatabaseTestSuite))   
}

Runs all tests with the OrderDatabaseTestSuite receiver

Now you can navigate to internal/adapters/db and execute the following to run all the tests inside the suite:

go test ./...

This execution spins up a test container for MySQL, and the tests run against that database, verifying that the DB adapter works well with a real MySQL instance. We call it an integration test because we verify that two modules work together. But what if we want to test all the components together? Let's look at how we can verify that our application works well with all its dependencies.

7.4 End-to-end tests

We addressed testing one component with unit tests and checking the consistency between two components with integration tests, but to say our product is working, we need more than that. Here, we will do an end-to-end test by running a stack that contains the minimum set of required services and verifying a certain flow against it. To accomplish this, we will run the MySQL database, the Payment service, and the Order service, and use an order client with the testing techniques from the previous sections to test the create order flow: we will create an order, then get the order details and assert each response field in the suite. Let's look at the high-level setup first and then dive into each part.

7.4.1 Specifications

Here are the end-to-end test specifications we will use: the application stack (MySQL, the Payment service, and the Order service) is defined in a Docker Compose file; Testcontainers starts and stops that stack around the suite; and the test suite uses a gRPC order client to create an order and verify it.

Based on these specifications, our test diagram will look like figure 7.6.

Figure 7.6 Application stack that test suite runs against

Now that we see the big picture, let's look at what a Docker Compose YAML file looks like.

7.4.2 Understanding Docker Compose service definitions

Docker Compose lets you define service definitions in a YAML file, which contains two major sections: version and services. The version section helps the Docker Compose CLI understand the YAML file's data structure. The services section defines service dependencies, requirements, and corresponding properties, such as image, environment, volumes, healthcheck, depends_on, build, and ports.

We will use these fields in this book, but if you are interested in other fields, you can refer to https://docs.docker.com/compose/. Now that we have insight into Docker Compose service fields, let’s look at how to structure our end-to-end test.

7.4.3 End-to-end test folder structure

To add a separate module for running our end-to-end tests, go to the root folder of our project and create a folder named e2e. Navigate to that folder and initialize a module via the following command:

go mod init github.com/huseyinbabal/microservices/e2e

Do not forget to replace the username and repository in the go.mod file to match your setup. To store our docker-compose.yml and DB migration files, create a resources folder under the e2e folder. Once you add the docker-compose.yml file to the resources folder, we can proceed with the initial service definitions, starting with the database layer. Notice that we just created a separate module dedicated to end-to-end tests, containing a Docker Compose file that defines the required service layers for our tests. Let's start with the database layer used by our microservices.

7.4.4 Database layer

In this layer, we provide a Docker image, mysql:8.0.30, and a password for the root user. We also mount an SQL file into this service. As a health check mechanism, we simply ping the MySQL server, retried at most 20 times with a 5-second interval between retries. If the command provided in the test field succeeds, the service is marked as healthy. You can append the following YAML definition to the docker-compose.yml file:

version: "3.9"
services:
  mysql:                                                    
    image: "mysql:8.0.30"
    environment:
      MYSQL_ROOT_PASSWORD: "s3cr3t"
    volumes:
      - "./init.sql:/docker-entrypoint-initdb.d/init.sql"   
    healthcheck:
      test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost", "-uroot", "-
       ps3cr3t"]
      interval: 5s
      timeout: 5s
      retries: 20

Service name as a key

The SQL file contains DB creations.

You can create an init.sql file and append the following SQL script to prepare our databases when the MySQL container starts:

CREATE DATABASE IF NOT EXISTS payments;
CREATE DATABASE IF NOT EXISTS orders;

The following command will provision a MySQL container with the payments and orders database:

cd resources && docker-compose up

This command provisions the database container, which creates the two databases before accepting new connections; the healthcheck in the test field then starts reporting the service as healthy. Now that we know how to create the database container, let's look at the Payment service and integrate it with the database that is already up and running in the stack.

7.4.5 The Payment service layer

The Payment service depends on the database layer because it stores payment information for specific orders. To run the Payment service, we need a Docker image built during test startup, but we don't have a Dockerfile yet. In a Dockerfile, we express which parent Docker image to use, and in our case we have two parent images: one for compilation and one for runtime. This is called a multistage build, which we cover in detail in chapter 8. For now, it is enough to know that we use a Golang base image to build the payment project and a scratch image plus the payment executable to run the application.

Navigate to the payment folder and create a Dockerfile with the following content:

FROM golang:1.18 AS builder                               
WORKDIR /usr/src/app                                      
COPY . .                                                  
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o payment ./cmd/main.go   
 
FROM scratch
COPY --from=builder /usr/src/app/payment ./payment        
CMD ["./payment"]                                         

builder is an alias for the build stage.

Changes the working directory

Copies the source code of the payment

Builds the binary executable

Copies the binary from the builder stage

The payment executable is an entry point.

We also need some configuration through environment variables: the application port, database URL, and so on. Append the following service definition for the Payment service to the docker-compose.yml file:

version: "3.9"
services:
  mysql:
  ...
  payment:
    depends_on:                     
      mysql:
        condition: service_healthy
    build: ../../payment/           
    environment:                    
      APPLICATION_PORT: 8081
      ENV: "development"
      DATA_SOURCE_URL: "root:s3cr3t@tcp(mysql:3306)/
       payments?charset=utf8mb4&parseTime=True&loc=Local"

Depends on running the mysql service

Dockerfile location for the payment

Required configurations

We can now continue with the Order service.

7.4.6 The Order service layer

The Order service definition is almost the same as the Payment service's, except for an additional configuration in the environment and a port exposed so our test suite can access it. The only difference between the payment and order Dockerfiles is the name of the binary executable:

FROM golang:1.18 AS builder
WORKDIR /usr/src/app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o order ./cmd/main.go
 
FROM scratch
COPY --from=builder /usr/src/app/order ./order
CMD ["./order"]

From the order folder, add this content to the Dockerfile. The service definition of the Order service is as follows:

version: "3.9"
services:
  mysql:
  ...
  payment:
  ...
  order:
    depends_on:
      mysql:
        condition: service_healthy
    build: ../../order/
    ports:
      - "8080:8080"                            
    environment:
      APPLICATION_PORT: 8080
      ENV: "development"
      DATA_SOURCE_URL: "root:s3cr3t@tcp(mysql:3306)/orders?charset=utf8mb4&parseTime=True&loc=Local"
      PAYMENT_SERVICE_URL: "payment:8081"      

The test suite will use this port.

gRPC connection URL for the Payment service, reached via its Compose service name

Now that we've added the Order service definition, let's see how to use the docker-compose.yml file within our test suite.

7.4.7 Running tests against the stack

We will use the same test suite strategy here as with the integration tests, and the application stack will be provisioned in the SetupSuite phase. Here, we will keep a Docker Compose reference (testcontainers.LocalDockerCompose), available to the suite via the test suite struct (CreateOrderTestSuite). We will have only one test, for the order creation flow, during which we create an order gRPC client to call the Create and Get endpoints. Once the test finishes, the application stack is destroyed in the TearDownSuite phase. Start by creating a file named create_order_e2e_test.go under the e2e folder and add the following struct:

type CreateOrderTestSuite struct {
    suite.Suite                                  
    compose *testcontainers.LocalDockerCompose   
}

Suite dependency to use via the receiver function

Docker Compose reference

In the SetupSuite section, we use the e2e/resources/docker-compose.yml file we prepared previously for the docker-compose up operation through Testcontainers, as follows:

func (c *CreateOrderTestSuite) SetupSuite() {
    composeFilePaths := []string{"resources/docker-compose.yml"}      
    identifier := strings.ToLower(uuid.New().String())                
 
    compose := testcontainers.NewLocalDockerCompose(composeFilePaths,
     identifier)
    c.compose = compose                                               
    execError := compose.
        WithCommand([]string{"up", "-d"}).
        Invoke()                                                      
    err := execError.Error
    if err != nil {
        log.Fatalf("Could not run compose stack: %v", err)
    }
}

docker-compose.yml we just prepared

Randomized identifier for the Compose stack

Sets shared Docker compose reference

Equals docker-compose up -d

The docker-compose up operation, which creates the application stack, is executed first. Once the stack is ready, we create a gRPC connection to the Order service, which lives in a Docker container. We will do two things here: create an order through the Create endpoint, then fetch it through the Get endpoint and assert the response fields.

Add the following test after the SetupSuite section:

func (c *CreateOrderTestSuite) Test_Should_Create_Order() {
    var opts []grpc.DialOption
    opts = append(opts, grpc.WithTransportCredentials(insecure.NewCredentials()))
    conn, err := grpc.Dial("localhost:8080", opts...)            
    if err != nil {
        log.Fatalf("Failed to connect order service. Err: %v", err)
    }
 
    defer conn.Close()
 
    orderClient := order.NewOrderClient(conn)                    
    createOrderResponse, errCreate := orderClient.Create(context.Background(), &order.CreateOrderRequest{
        UserId: 23,
        OrderItems: []*order.OrderItem{
            {
                ProductCode: "CAM123",
                Quantity:    3,
                UnitPrice:   1.23,
            },
        },
    })                                                           
    c.Nil(errCreate)                                             
 
    getOrderResponse, errGet := orderClient.Get(context.Background(), 
     &order.GetOrderRequest{OrderId: createOrderResponse.OrderId})
    c.Nil(errGet)
    c.Equal(int64(23), getOrderResponse.UserId)
    orderItem := getOrderResponse.OrderItems[0]
    c.Equal(float32(1.23), orderItem.UnitPrice)
    c.Equal(int32(3), orderItem.Quantity)
    c.Equal("CAM123", orderItem.ProductCode)
}

localhost:8080 goes to the Order service in the stack.

Initializes Order gRPC client

Example order request

Verifies there is no error

After success or failure, we shut the application stack down so it doesn't consume extra resources. We do that in the TearDownSuite phase by using the Docker Compose reference and invoking the shutdown operation:

func (c *CreateOrderTestSuite) TearDownSuite() {
    execError := c.compose.
        WithCommand([]string{"down"}).
        Invoke()                       
    err := execError.Error
    if err != nil {
        log.Fatalf("Could not shutdown compose stack: %v", err)
    }
}

Equals docker-compose down

As a final step, we can add a runner section to run the entire test:

func TestCreateOrderTestSuite(t *testing.T) {
    suite.Run(t, new(CreateOrderTestSuite))
}

Our end-to-end test is almost ready; we just need to add the Order service client as a dependency to the e2e project via the following command:

go get github.com/huseyinbabal/microservices-proto/golang/order

Now you can use NewOrderClient and other order-related resources in your test. When everything is in place, you should have a folder structure as shown in figure 7.7.

Figure 7.7 End-to-end module structure

You can now navigate to the e2e folder and execute the following test command to see how it works:

go test -run "^TestCreateOrderTestSuite$"

Notice that we pass a regex to go test's -run flag so that only TestCreateOrderTestSuite, the test suite runner, is executed.

To wrap up: for an end-to-end test, we simply run our application stack and run our tests against it, thanks to Testcontainers, which provides a good abstraction over Docker Compose and lets us run the stack from its service definitions. This kind of test can take longer because it provisions a real system to verify features. Now that we have seen all the major test strategies for microservice architecture, let's look at how to measure test coverage for the entire application.

7.5 Test coverage

The test coverage operation’s primary motivation is to understand the missing test cases for production code. Code coverage is a strategy to detect how much of the application’s entire codebase is covered by tests. Golang has very good built-in features for testing, and coverage can be automatically handled during test execution with the -cover parameter:

go test -cover ./...

Once you execute this command, you see the coverage information for each package alongside its test execution status:

ok      github.com/huseyinbabal/microservices/order/internal/application/core/api       0.274s  coverage: 93.3% of statements

The coverage percentage at the end of the line largely describes your confidence level in that package: the more coverage you have, the better you know the codebase, and the more safely you can refactor it. Seeing the overall coverage of your tests is simple, but let's look at how to see the distribution of that coverage within a package.

7.5.1 Coverage information

With the -cover parameter, you can see the percentage for each package in the output. The following steps produce a detailed report that lets you drill down into files, functions, and so on:

  1. Redirect the coverage output to a file.

  2. Use a built-in coverage tool to convert it to an HTML file.

You can use the following command to redirect detailed coverage information to a file:

go test -coverprofile=coverage.out

This saves the coverage information into the coverage.out file, which we can then pass to the following command to generate an HTML report:

go tool cover -html=coverage.out

coverage.out is provided via the -html option, which generates an HTML report and opens it in your browser. In a modern automated environment, we are usually less interested in HTML coverage reports; instead, the coverage.out file can be passed to tools in our CI pipeline to maintain code quality for our repository.
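
If you want a quick per-function breakdown in the terminal instead of an HTML page, the same cover tool also has a -func mode:

go tool cover -func=coverage.out

Now let's look at a brief introduction to using tests and coverage in a CI flow in our environment.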

7.5.2 Testing in a CI pipeline

Continuous integration (CI) is a form of automation that aims to integrate code changes. Those changes can trigger testing and artifact generation, such as building a Docker image or a JAR file, either to verify the changes or to build an artifact after the changes are approved. As a concrete example, if you create a pull request (PR) in GitHub, CI can trigger a set of actions to calculate coverage and run static code analysis to verify the changes and maintain quality. Once you merge the PR into the main branch, CI can trigger a Docker image build with a special tag to use in application deployment. You can also configure rules, such as failing the PR check if coverage drops below a certain threshold, which forces the author to revisit the changes and add more tests. The more coverage you have, the more confidence you have when changing the codebase. Figure 7.8 shows an overview of the PR flow.

Figure 7.8 PR flow with checks and artifact generation
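
As a sketch of what such a PR check might look like as a GitHub Actions workflow (this workflow is our illustration, not the book's pipeline; threshold enforcement is typically handled by a code quality tool that consumes coverage.out):

# .github/workflows/test.yml (a hypothetical workflow, for illustration only)
name: test
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: "1.18"
      - name: Run tests with coverage
        run: go test ./... -coverprofile=coverage.out
      - name: Print per-function coverage
        run: go tool cover -func=coverage.out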

CI is powerful for maintaining code quality and reducing distraction while developing software. You can focus more on business logic development while CI handles checks and artifact generation for you.

Summary

  - The testing pyramid arranges unit, integration, and end-to-end tests by cost and speed: unit tests should be the most numerous, end-to-end tests the fewest.
  - Unit tests verify a SUT in isolation; dependencies defined as interfaces can be mocked by hand with testify or generated automatically with mockery.
  - Integration tests verify that two modules work together; Testcontainers can spin up real dependencies, such as MySQL, for them.
  - End-to-end tests run a flow against a full application stack, which can be defined in Docker Compose and driven from a test suite.
  - Coverage reports from go test can be fed into a CI pipeline to maintain code quality on every pull request.