The idea

Unit tests and integration tests are a great tool to keep you from breaking your code. They build a safety net that helps you when you have to add a new feature or fix a bug in an existing codebase.

But of course there will be situations when a bug occurs that was not covered by your test suite. One way to understand what went wrong is log files. You can use them to write a protocol of what happened during runtime. When something useful happened ( like creating a new entry in a database ) you can write this information with a timestamp into your protocol. When an error occurs, like a full disk, an error entry will be created. And you can use logs to record internal states of your application. When the log entries are well maintained they help you to get a better understanding of what happened during a crash ( and of course what went wrong before it ). And you can use them to post warnings like: be careful, this API call is deprecated.

But do you watch your logs in your unit tests and integration tests? Maybe there is interesting information stored in them for a test fixture that you should take care of as well. For instance, when you declare an API call as deprecated but this call is still in use in a different sub-system, it would be great to get this as an error in a unit test. Or when some kind of warning occurs at some point in your log. We have observed situations like that in production code more than once. To take care of them we added a feature called a log filter: you can use it to define expected log entries, like an error which must occur in a test because you want to test the error behaviour. When unexpected entries show up, the test will fail. So your tests make visible what shall happen and what shall not.

Do some coding

Let's start with a simple logging service for an application. My basic concept for a logging service looks like this:

  • A logger is some kind of service, so only one instance exists. Normally I build a special kind of singleton to implement it ( yes I know, they are bad, shame on me ). You create it on startup and you shall destroy it at the end of the application ( as the last call before exit( 0 ) )
  • Log entries have different severities, for instance:
    • Debug: Internal state messages for debugging
    • Info: All useful messages
    • Warn: Warnings for the developer like “API-call deprecated”
    • Error: External errors like a full disk or a user error
    • Fatal: An internal error has occurred caused by a bug
  • You can register log streams to the logger; each log stream writes the protocol to a specific output like a log file or a window in the frontend.

In code this could look like:

class AbstractLogStream {
public:
  virtual ~AbstractLogStream() = default;
  virtual void write( const std::string &message ) = 0;
};

class LoggingService {
public:
  // the entry severity
  enum class Severity {
    Debug, Info, Warn, Error, Fatal
  };

  static LoggingService &create();
  static void destroy();
  static LoggingService &getInstance();
  void registerStream( AbstractLogStream *stream );
  void log( Severity sev, const std::string &message,
    const std::string &file, unsigned int line );

private:
  static LoggingService *mInstance;
};

With this simple API you can create and destroy your log service, log messages with different severities and register your own log streams.

Now we want to observe the entries during our unit tests. A simple API to do this could look like:

class TestLogStream : public AbstractLogStream {
public:
  void write( const std::string &message ) override {
    TestLogFilter::getInstance().addLogEntry( message );
  }
};
class TestLogFilter {
public:
  static TestLogFilter &create();
  static void destroy();
  static TestLogFilter &getInstance();
  void addLogEntry( const std::string &message );
  void registerExpectedEntry( const std::string &message );
  bool hasUnexpectedLogs() const {
    for ( const auto &entry : m_messages ) {
      const auto it = std::find( m_expectedMessages.begin(), m_expectedMessages.end(), entry );
      if ( it == m_expectedMessages.end() ) {
        // this entry was logged but never expected
        return true;
      }
    }
    return false;
  }

private:
  std::vector<std::string> m_expectedMessages;
  std::vector<std::string> m_messages;
};

The filter contains two string arrays:

  • One contains all expected entries, which are allowed during the unit test
  • The other one contains all log entries which were written by the TestLogStream during the test fixture

Let’s try it out

You need to set up your test filter before running your tests. You can use the registerExpectedEntry method to add an expected entry for your test execution.
Most unit-test frameworks support some kind of setup-callback mechanism which runs before executing a bundle of tests. I prefer to use gtest, so you can create a test fixture class like this:

#include <gtest/gtest.h>

class MyTest : public ::testing::Test {
protected:
  virtual void SetUp() {
    LoggingService::create();
    TestLogFilter::create();
    LoggingService::getInstance().registerStream( new TestLogStream );
    TestLogFilter::getInstance().registerExpectedEntry( "Add your entry here!" );
  }

  virtual void TearDown() {
    EXPECT_FALSE( TestLogFilter::getInstance().hasUnexpectedLogs() );
    TestLogFilter::destroy();
    LoggingService::destroy();
  }
};

TEST_F( MyTest, do_a_test ) {
  // your test code, which may write log entries, goes here
}

First you need to create the logging service. In this example only the TestLogStream will be registered. Afterwards we register one expected entry for the test fixture.
When all tests have run, the TearDown callback checks whether any unexpected log entries were written.

So when unexpected entries were detected, the test will be marked as a failure. And you can see if you forgot to deal with some new behaviour.

What to do next

You can add more useful stuff like:

  • Add wildcards for expected log entries
  • Make this thread-safe

The latest version of QtCreator brings an option to run static code analysis using Clang. I struggled a lot with the setup of Coverity for Asset-Importer-Lib, so I had some hope that the setup for Clang would be a little bit easier. I wanted to run it on Windows 10 first, then move to Linux. So here is the report of my experiences:

First thing to do is to get the latest QtCreator version, currently QtCreator 4.0.0. You can find it here: QtCreator-Homepage.

QtCreator is able to open CMake-based projects. Lucky us: Asset-Importer-Lib is built with CMake. So open it and run the Clang analyzer, at least in theory.

Unfortunately there is a bug with the Clang analyzer when you are using Visual Studio to build. You can find the corresponding bug here: . When using VS together with the Clang analyzer, the clang executable cannot be started in the correct way. The workaround to get it running is easy: add the folder containing clang in the QtCreator bin directory to your PATH environment variable.

I did that, restarted QtCreator, opened Asset-Importer-Lib, and the Clang analysis began to work …

To be continued …

If you want to generate a 64-bit build of Asset-Importer-Lib using the Visual Studio project files generated by CMake, please follow these instructions:

  • Make sure that you are using a supported CMake version ( 2.8 or higher at the moment ) and Visual Studio version ( on the current master VS2010 is deprecated )
  • Clone the latest Asset-Importer-Lib master from GitHub
  • Generate the project files with the command: cmake -G "Visual Studio 14 Win64"
  • Open the project and build the whole project
  • Enjoy the 64-bit-version of your famous Asset-Importer-Lib

This should help you if you are struggling with this feature. We just learned that simply switching the code generation to 64 bit does not work.

If you are looking for the latest Asset Importer Lib build: we are using AppVeyor
( check their web-site, it's free for open-source projects )
as the Continuous Integration service for Windows. If the build was successful it
will create an archive containing the DLLs, all executables and the export
libraries for Windows. At the moment we are supporting the following versions:

    – Visual Studio 2015
    – Visual Studio 2013
    – Visual Studio 2012

I am planning to support the MinGW version as well. Unfortunately I first have to
update one file which is much too long for the MinGW compiler ( thanks to the
guys from the Qt framework ).

Do you know the assert macro? It is an easy tool for debugging: you can use it to
check if a pointer is a NULL pointer or if your application is in a proper state
for processing. When this is not the case it will stop your application, at least
when you are using a debug build; in a release build normally nothing happens.
Depending on your platform this can vary a little bit. For instance the
Qt framework prints a log message to stderr for a failed assert test when
you are using a release build. So assert is a nice tool to check
pre-conditions for your function / method. And you will see your application crashing
when such a precondition is not fulfilled. Thanks to some preprocessor magic the
statement itself will be printed. So when you are writing something like

void foo( bar_t *ptr ) {
  assert( NULL != ptr );
}

and your pointer is a NULL pointer in your application, you will get some info on
stderr like:

assert in line 222, file bla.cpp: assert( NULL != ptr );

Great, you see what is wrong and you can start to fix that bug. But sometimes you
have to check more than one parameter or state:

global_state_t MyState = init;

void foo( bar_t *ptr ) {
  assert( NULL != ptr && MyState == init );
}

Nice, your application still breaks and you can still see what went wrong, right?
Unfortunately not: you will get a message like:

assert in line 222, file bla.cpp: assert( NULL != ptr && MyState == init );

So what went wrong? You will not be able to tell at first glance, because the
pointer could be NULL, the state may be wrong, or both of the tests
may have failed. You need to dig deeper to understand the error.

For a second developer this gets even more complicated, because he will most likely
not know which error case to check first; after all, he didn't write the code.

So when you have to check more than one state, please use more than one assert:

global_state_t MyState = init;

void foo( bar_t *ptr ) {
  assert( NULL != ptr );
  assert( MyState == init );
}


Dear reader,


someone hacked my webspace and all older posts were deleted. Sorry, I will start to rework them as fast as possible.