Testing and debugging

Introduction

The auto-generated *.test.ts files are used for local debugging and testing of services, commands, external entities, operations and agents against the cluster (integration tests). Through the runner you can provide the input values for the execution of a service, command or operation, and you can access its output after the script has been executed. The debugging process works in a similar way for all services, commands, external entities and operations: the runner component gives you access to the input and output objects.

The input object is used to provide the values of the input properties of the service or command; for operations, the input properties belong to the request parameters and the request body. In contrast, the output object is used to read the values of the output properties (or the response, in the case of operations) after await runner.run() has completed successfully. The line await runner.run() is what executes the service, command or operation. The run() function either takes no input at all or, in the case of instance commands, takes the instance id as its input.
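For illustration only (runner stands for any generated runner instance, and instanceId is a placeholder):

// Services and factory commands: run() takes no argument
const output = await runner.run();

// Instance commands: run() takes the id of the instance to operate on
const modifiedInstance = await runner.run(instanceId);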

Tip:

For unit tests, which will be automatically executed with every pipeline run (if "Enable unit test execution" is set to true), you need to use the *.unit.test.ts files. Only files named with this pattern will be recognized. By default, "Enable unit test execution" is set to false, since there are no unit tests generated by the system.

Preparation

Note:

In order to use the debugging and testing features described below, make sure you both prepare the CLI and set up the local bindings, as described in the following sections.

Note:

In order to run the pre-generated *.test.ts files you need to set up a connection before. See instructions below.

Prepare the CLI

You can download the required *-cli-config.json file from the Solution Envoy of the runtime (stage) you want to connect with. To do so:

  1. open Solution Designer
  2. go to CI/CD
  3. in the Pipeline Configurations table search for a "deploy" pipeline for your desired deployment target
  4. click on the link provided in the Solution Envoy column
  5. inside Solution Envoy click on Infrastructure
  6. click on Download in the Solution CLI Setup window to download the config file

This file will be prefixed with the name of the stage and end with -cli-config.json. Afterwards, run the following commands:

k5 setup-envoy -f <path/to/your/{name of your stage}-cli-config.json>
k5 prepare-debug

The CLI will prompt you to authenticate yourself.

Setup of local bindings

Note:

The setup of local bindings is only required if your project contains any API bindings or uses events.

Create a local-bindings.json file and place it in your project's root directory. Make sure it's added to the .gitignore file, so it will not get uploaded to your git repository.

Depending on your project, you might have to set up event topic bindings (if it uses events) or API bindings (for an API dependency that you modelled within an integration namespace). If your project uses both, then you have to configure both in the local-bindings.json file.

Topic bindings

Warning:

Currently, it is only possible to connect to a local instance of Kafka and not to the cluster's instance.

For services, commands or agents that use events, you need to provide a local configuration for the topic binding(s).

Tip:

You can find the name(s) of the topic binding(s) your project uses in Solution Designer.

  1. Install and configure your local Kafka broker (see https://kafka.apache.org/quickstart)
  2. For each event topic in use, add a proper topic binding configuration in the local-bindings.json file; do the same for each Kafka binding, as shown below:

Example of a local-bindings.json file. Add a mapping for each topic binding in use and for each Kafka binding in use:

{
  "topicBindings": {
    "<Topic Binding Name>": {
      "topicName": "<Kafka Topic Name>",
      "kafkaBinding": "<Kafka Binding Name>"
    },
    "<Topic Binding Name>": {
      "topicName": "<Kafka Topic Name>",
      "kafkaBinding": "<Kafka Binding Name>"
    }
  },
  "kafkaBindings": {
    "<Kafka Binding Name>": {
      "kafka_brokers_sasl": [
        "<Kafka Broker sasl>"
      ],
      "user": "<Username>",
      "password": "<Password>"
    },
    "<Kafka Binding Name>": {
      "kafka_brokers_sasl": [
        "<Kafka Broker sasl>"
      ],
      "user": "<Username>",
      "password": "<Password>"
    }
  }
}

Explanation:

To find out which topic bindings your project uses, open the project in Solution Designer; the relevant information on the topic bindings is located in Solution Hub > Topic Bindings.

  • <Topic Binding Name>: the name of the event topic binding in Solution Designer
  • topicName: the name of the topic on the Kafka cluster
  • kafkaBinding: must match one of the Kafka bindings listed in the "kafkaBindings" configuration
Warning:

In case you modelled events in your project, topicBindings and kafkaBindings are mandatory keys for local debugging.

Note:

Kafka broker user and password are optional if your Kafka broker is not secured.

API bindings

In case you created an API binding for an API Dependency of your project you need to add a configuration for API bindings to the local-bindings.json file.

Example of a local-bindings.json file:

{
  "apiBindings": {
    "<API Binding Name>": {
      "url": "example1.com",
      "k5_propagate_security_token": true,
      "ca_cert": "<PEM-formatted certificate string>"
    },
    "<API Binding Name>": {
      "url": "example2.com",
      "k5_propagate_security_token": true
    },
    "<API Binding Name>": {
      "url": "example3.com",
      "k5_propagate_security_token": true,
      "custom_key": "custom_value"
    }
  }
}

Explanation:

  • <API Binding Name>: the name of the API binding (see API bindings on how to get this information)
  • url: the URL of the external API that you want to call
  • k5_propagate_security_token: a boolean that determines whether the JWT will be forwarded automatically
  • ca_cert: (optional) the certificate as a PEM-formatted string
  • custom_key: (optional) this can be any key/value pair that you added to this API binding (see API bindings on how to add custom keys)
Tip:

Put all binding configurations (API bindings, topic bindings and kafka bindings) in the same local-bindings.json file.
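A combined sketch of such a file, with all names and values purely hypothetical:

{
  "topicBindings": {
    "OrderEvents": {
      "topicName": "order-events",
      "kafkaBinding": "LocalKafka"
    }
  },
  "kafkaBindings": {
    "LocalKafka": {
      "kafka_brokers_sasl": [
        "localhost:9092"
      ],
      "user": "kafka-user",
      "password": "kafka-password"
    }
  },
  "apiBindings": {
    "PartnerApi": {
      "url": "partner-api.example.com",
      "k5_propagate_security_token": true
    }
  }
}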

Configure schema registry

Note:

The configuration of schema registry is only required if you want to debug logic that publishes events using schemas from the schema registry.

To configure the schema registry for local debugging, please execute the following steps:

  1. Create a new file .env in the root directory of your project folder

  2. Add the schema registry configuration properties to the .env file. Please choose the right values depending on your installation. You can get the relevant information either from your administrator or by checking the environment variables of your deployed service in the OpenShift console.

    SCHEMA_REGISTRY_SECURITY_ENABLED=<true/false>
    SCHEMA_REGISTRY_URL=<schema-registry-url>
    SCHEMA_REGISTRY_AUTH_SERVER_URL=<schema-registry-authentication-server-url>
    SCHEMA_REGISTRY_AUTH_REALM=<schema-registry-authentication-realm>
    SCHEMA_REGISTRY_CLIENT_ID=<schema-registry-client-id>
    SCHEMA_REGISTRY_CLIENT_SECRET=<schema-registry-client-secret>

Note:

SCHEMA_REGISTRY_AUTH_SERVER_URL, SCHEMA_REGISTRY_AUTH_REALM, SCHEMA_REGISTRY_CLIENT_ID and SCHEMA_REGISTRY_CLIENT_SECRET are only needed if the schema registry has security enabled.
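For step 2, a filled-in sketch of such a .env file for an installation with security enabled might look as follows (all URLs and credentials are hypothetical):

    SCHEMA_REGISTRY_SECURITY_ENABLED=true
    SCHEMA_REGISTRY_URL=https://schema-registry.example.com
    SCHEMA_REGISTRY_AUTH_SERVER_URL=https://auth.example.com/auth
    SCHEMA_REGISTRY_AUTH_REALM=my-realm
    SCHEMA_REGISTRY_CLIENT_ID=schema-registry-client
    SCHEMA_REGISTRY_CLIENT_SECRET=my-client-secret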

  3. Open the file /.vscode/launch.json
  4. Add the property "envFile" to the configurations and set its value to the path of the .env file you just created. The launch.json should look similar to this:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Current Test",
      "protocol": "inspector",
      "showAsyncStacks": true,
      "console": "internalConsole",
      "internalConsoleOptions": "openOnSessionStart",
      "preLaunchTask": "Update Debug Credentials",
      "program": "${workspaceFolder}/node_modules/mocha/bin/_mocha",
      "cwd": "${workspaceFolder}/",
      "args": [
        "--require",
        "ts-node/register/transpile-only",
        "--colors",
        "${file}"
      ],
      "outputCapture": "std",
      "envFile": "${workspaceFolder}/.env"
    }
  ]
}

Set up your IDE

To debug your implementation, follow these steps:

  1. Open a *.test.ts file (debugging won't work with implementation files)
  2. Set breakpoints in the implementation files
  3. Navigate to the debug section in the left menu bar
  4. Launch the "Current Test" debug configuration in VS Code
  5. In the sidebar you can watch and trace variables
Note:

In order to debug the scripts, a default launch configuration for Microsoft VS Code is provided.

Test environment

With the TestEnvironment you can create new instances of entities and then perform one or more test scenarios on the created instances. After the tests have been executed, the created instances can be deleted all at once using the cleanUp() method.

Example:

import { expect } from 'chai';
import { TestEnvironment } from 'solution-framework';
// Note: the import of the generated runner (here cptest_Command1Runner) is part of the auto-generated test file

describe('solution:Command1', () => {
  // we define the testEnvironment so that it is accessible in the test blocks
  // We need to create a new instance of our TestEnvironment
  // Each instance of it can handle its own test data

  const testEnvironment = new TestEnvironment();

  // we define the created entity so that it is accessible in the test blocks
  let createdEntity;
  before(async () =>
  {
    createdEntity = testEnvironment.factory.entity.cptest.RootEntity1();
    // We set values to each property
    createdEntity.property1 = "value1";
    createdEntity.property2 = "value2";

    // We create the entity in the database
    await createdEntity.persist();
  });

  // This block will define what will happen after all tests are executed.
  after(async () =>
  {
    // Delete everything we've created
    // through the testEnvironment in this test session
    await testEnvironment.cleanUp();
  });

  // Scenario 1: Create and delete the entity in the actual test.
  // In this case you do not need the before() and after() blocks
  it('works on a RootEntity2', async () =>
  {

    // Initialize the entity
    const rootEntity2 = testEnvironment.factory.entity.cptest.RootEntity2();

    // Set values for the properties of the entity
    rootEntity2.property1 = "value1";

    // Create the entity in the database
    await rootEntity2.persist();

    const runner = new cptest_Command1Runner();
    // Run the test on the instance we created before
    await runner.run(rootEntity2._id);

    expect(true).to.equal(true);

    // Delete the instance created before
    await rootEntity2.delete();
  });

  // Scenario 2: Use the find() function to search for a specific entity that was created in the before() block.
  // No need to delete it manually, the after() block will do it for you
  it('works on a rootEntity1', async () =>
  {

    // The before() block will run automatically before this test, provided it was implemented

    // Find an instance that already exists
    const foundEntity = await testEnvironment.repo.cptest.RootEntity1.find(true, 'myFilter');

    const runner = new cptest_Command1Runner();
    // Run the test on the instance that already exists
    await runner.run(foundEntity._id);

    expect(true).to.equal(true);

    // The after() block will run automatically
  });
});

Debug factory commands

Debug a Factory Command by following the structure below:

it('works on an existing rootEntity1 that we find', async () => {

    // The before() block will run automatically before this test, provided it was implemented

    const runner = new cptest_FactoryCommand1Runner();

    // Give input to factory command
    runner.input = testEnvironment.factory.entity.ns.FactoryCommandIdentifier_Input();
    runner.input.property1 = 'value1';
    runner.input.property2 = 'value2';

    // This will return the created instance of the root entity
    const factory_output = await runner.run();

    expect(true).to.equal(true);
});
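Since run() returns the created root entity instance, you can assert directly on its properties. A minimal sketch, assuming the input properties set above are persisted unchanged on the created instance:

// Assert on the created instance
expect(factory_output._id).to.exist;
expect(factory_output.property1).to.equal('value1');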

Debug instance commands

Debug an Instance Command by following the structure below:

it('works on an existing rootEntity1 that we find', async () => {

    // Initialize runner
    const runner = new cptest_Command1Runner();

    // Give input to the instance command
    runner.input = testEnvironment.factory.entity.ns.InstanceCommandIdentifier_Input();
    runner.input.property1 = 'value1';
    runner.input.property2 = 'value2';


    // Use the Id of the created entity
    // This will return the modified instance of the root entity
    const instance_output = await runner.run(createdEntity._id);

    expect(true).to.equal(true);

    // Access the output properties of the modified instance
    instance_output._id;
    instance_output.prop1;
    instance_output.prop2;
});

Debug services

Debug a Service by following the structure below:

it('works on an existing rootEntity1 that we find', async () => {
     // Initialize runner 
     const runner = new cptest_Service1Runner();

     // Give input to the service
     runner.input = testEnvironment.factory.entity.ns.Service1Identifier_Input();
     runner.input.property1 = 'value1';
     runner.input.property2 = 'value2';

     // This returns the output entity
     const service_output = await runner.run();

     expect(true).to.equal(true);

     // Access the output properties of the service
     service_output.prop1;
     service_output.prop2;
});

Debug agents

Debug an Agent by following the structure below:

it('works on an existing agent', async () => {   
     // Initialize runner    
     const runner = new cptest_Agent1Runner();

     // Set message key, headers and timestamp 
     // Set message key, headers and timestamp
     runner.messageKey = 'messageKey';
     runner.messageHeaders['key'] = 'value';
     runner.messageTimestamp = Date.now().toString();

     // Execute Agent
     await runner.run();

     expect(true).to.equal(true);

});

Debug external entities

Debug an External Entity by following the structure below:

describe('ns:ExternalEntityId', () => {

  const testEnvironment = new TestEnvironment();
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.

    // Recommended: remove all instances that were created
    // await testEnvironment.cleanUp();
  });

  describe('create', () => {
    it('works', async () => {
     // const runner = new externalEntityRunners.ns_ExternalEntityIdConstructorRunner();
     // await runner.run();
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });

  describe('load', () => {
    it('works', async () => {
      // const runner = new externalEntityRunners.ns_ExternalEntityIdLoaderRunner();
      // await runner.run();
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });

  describe('validate', () => {
    it('works', async () => {
       // const runner = new externalEntityRunners.ns_ExternalEntityIdValidatorRunner();
       // await runner.run(false);
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });
});

Debug operations

Debug an Operation by following the structure below:

import { TestEnvironment } from 'solution-framework';
import { DebugRequestContext } from 'solution-framework';
import { loadAndPrepareDebugConfig } from 'solution-framework';
import { TestRequest as Request } from 'solution-framework';
import { ApiOperation1Api } from './ApiOperation1Api';
import { errors } from 'solution-framework';

describe('ApiOperation1Api', () => {
  const testEnvironment = new TestEnvironment();
  let requestContext: DebugRequestContext;
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
    requestContext = loadAndPrepareDebugConfig().requestContext;

  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.

    // Recommended: remove all instances that were created
    // await testEnvironment.cleanUp();
  });

});

Debug and test success scenario

  it('successfully executes API Operation', async () => {

    const apiOperation1ApiInstance = new ApiOperation1Api(requestContext);
    // Declaring the request (initialized as an empty object so the parameters can be assigned below)
    const request = {} as Request.ApiOperation1Request;

    // Initializing request path parameters
    request.path = {
      someId: 'some id',
      anotherId : 'some id'
    };

    // Initializing request query parameters
    request.query = {
        from: '1',
        to: '20',
    };

    // Initializing request body
    request.body = {
        property1: 'property 1',
        property2: 'property 2',
    };

    // Calling operation
    await apiOperation1ApiInstance.apiOperation1(request);
    
    // Accessing the response status code
    expect(apiOperation1ApiInstance.response.statusCode).to.equal(200);

    // Accessing the response body
    expect(apiOperation1ApiInstance.response.body.myProperty).to.equal('some property');
  });

Debug and test error handler

  it('sets error response if an error happens', async () => {

    const apiOperation1ApiInstance = new ApiOperation1Api(requestContext);
    // Declaring the request (initialized as an empty object)
    const request = {} as Request.ApiOperation1Request;

    // Construct a new error (e.g. AggregateNotFoundError)
    const error: Error = {
      message: 'your error message',
      name: 'mockingName'
    }
    const aggregateError = new errors.AggregateNotFoundError(error, 'someAggregateId');

    // Call the error handler with the created error and the request
    await apiOperation1ApiInstance.apiOperation1ErrorHandler(aggregateError, request);

    // Accessing the response status code
    expect(apiOperation1ApiInstance.response.statusCode).to.equal(404);

    // Accessing the response body
    expect(apiOperation1ApiInstance.response.body.customErrorMessage).to.equal('An error happened');
  });

Change default log level

Either adjust the Project Configuration or the Solution-Specific Configuration in the Configuration Management with the following value:

configmap:
  extraConfiguration:
    de.knowis.cp.ds.action.loglevel.defaultLevel: INFO

The log level can be changed as needed to INFO, DEBUG, TRACE, ERROR or WARN.

Configure different log levels

Prerequisites

  1. Create a JSON file named log-config.json in your project's root directory
  2. Add an entry in the .gitignore file for log-config.json so it is not pushed to your repository
  3. Adjust your VS Code launch configuration so that output from std is displayed: open .vscode/launch.json and add "outputCapture": "std" to the configurations entry

Supported log levels

The supported log levels are:

  • error
  • warn
  • info
  • debug
  • trace

Configure log levels using module names

Configure solution-framework log level

The example below configures the solution-framework at log level error. This is achieved by placing an entry in the log-config.json file with the key "solution-framework" and the desired log level, in this example error:

{
  "solution-framework": "error"
}

Configure project implementation files

The example below sets the log level for all files within the project's src-impl folder (including test files) to debug. This is achieved by placing an entry in the log-config.json file whose key is your solution acronym and the desired log level, in this example debug:

{
  "ORDERS": "debug"
}

Configure using specific paths

In the example below:

  1. Every file under the path "/src-impl/api/apitest/operations" in your project will be configured to log level debug
  2. The test file "/src-impl/api/apitest/operations/addDate.test" will be configured to log level warn
  3. The file "/src-impl/api/apitest/operations/addDate" will be configured to log level trace
  4. All sdk files under "/sdk/v1" will be configured to log level error
  5. All sdk files under "/sdk/v1/handler" will be configured to log level trace
{
  "/src-impl/api/apitest/operations/*": "debug",
  "/src-impl/api/apitest/operations/addDate.test": "warn",
  "/src-impl/api/apitest/operations/addDate": "trace",
  "/sdk/v1/*": "error",
  "/sdk/v1/handler/*": "trace"
}
Note:

When using paths, a path always starts with a forward slash '/' and doesn't include the file extension (.js or .ts). If no log-config.json is available, the default log level is info. A specific file's log level always takes precedence over its parent folder's log level; the same applies to sub-folders and their parent folders.
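For example, the following (hypothetical) log-config.json keeps a whole folder at log level error while a single file in it logs at debug, because the file entry takes precedence over the folder wildcard:

{
  "/src-impl/util/*": "error",
  "/src-impl/util/add": "debug"
}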

Warning:

If trace log level is configured, the logs might contain sensitive user information such as an ID token.

Unit testing for TypeScript / JavaScript projects

The *.unit.test.ts or *.unit.test.js files are used to unit test the TypeScript / JavaScript code that was explicitly added by the user.

If the Enable unit test execution flag was set while creating the pipeline, the pipeline runs the command npm run test:unit, which looks for the test:unit script in package.json. For example, in a Domain Service Project (TypeScript), the package.json contains something like this:

{
  "scripts": {
    "test:unit": "./node_modules/.bin/mocha --require ./node_modules/ts-node/register/transpile-only -c --recursive test/*.unit.test.ts"
  }
}

With the default script shown above, the test runner picks up the *.unit.test.ts files inside the test folder.

As an example, to test a utility function that is defined in src-impl/util/add.ts:

export function add(a: number, b: number): number {
  return a + b;
}

Test file test/add.unit.test.ts:

import { expect } from 'chai';
import { add } from '../src-impl/util/add';

describe('generic test', () => {
  it('adds 2 numbers', () => {
    expect(add(1, 2)).to.equal(3);
  });
});
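To run the unit tests locally before pushing, you can execute the same script the pipeline uses:

npm run test:unit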