Use the Asset-Importer-Lib Meta-Data-API right


The problem:

Think of the following situation: you want to import a model using Asset-Importer-Lib and store some values with it, like the version of the current asset or the author / company. Or you want to manage your models in modules to make your SCM usage more efficient, so you need to store grouping information. How can you do that with Asset-Importer-Lib?

The solution: The Metadata API:

Asset-Importer-Lib provides a meta-data API to offer a solution for these kinds of use-cases. It is straightforward to use:

// Allocate two entries
aiMetadata *data = aiMetadata::Alloc( 2 );
unsigned int index( 0 );
bool success( false );
const std::string key_int = "test_int";
// Store an int value
success = data->Set( index, key_int, 1 );

// Store a string value
index = 1;
const std::string key = "test";
success = data->Set( index, key, aiString( std::string( "test" ) ) );

// Deallocate the data afterwards
aiMetadata::Dealloc( data );

You can store an arbitrary number of items. The supported datatypes are:

int
float
double
aiString

The intermediate data-structure aiNode can store this data.
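
For instance, a scene-node can carry author information. A minimal sketch ( assuming the std::string overloads of Set and Get and the public mMetaData member of aiNode, as in current assimp versions; the key and value are just examples ):

#include <assimp/scene.h>

void attachAuthor( aiNode *node ) {
  // Allocate one entry and attach it to the node
  aiMetadata *data = aiMetadata::Alloc( 1 );
  data->Set( 0, "author", aiString( std::string( "Kim" ) ) );
  node->mMetaData = data;

  // Read the value back later
  aiString author;
  if ( node->mMetaData->Get( "author", author ) ) {
    // use author.C_Str() ...
  }
}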

C#: Use System.Diagnostics.Trace and DbgView within a WPF-Application


One of my favorite tools to debug MFC-Applications was the Win32-call:

::OutputDebugString( "Test\n" );

You can use it to trace information within your application during runtime without a log-file. You only have to run the tool DbgView to monitor these log-entries ( if you want to try it out you can download DbgView here. )

Because I am currently working with C# in my job I wanted to use this tool for WPF-applications as well. And of course there is a corresponding API-method in the .NET-framework called:

Trace.WriteLine( string msg );

You can find the documentation here.

It is easy to use. Just insert your Trace-log-entries into your code. When you want to take a look into your application you can start DbgView to see what is going on. Just make sure that you have defined

#define TRACE

at the beginning of your source-file. Here is a small example:

#define TRACE

using System.Diagnostics;

class Test {
    static void Main() {
        Trace.WriteLine( "Hello, world!" );
    }
}

C#: Calling a generic from a generic with surprises


I am currently working with generics in C#. The base concept seems to be the same as templates in C++ ( which I honestly really like ).
And I tried to use specialization: there was a class which needs to deal with different data types, and for some of them you need type-specific semantics, because a bool cannot be added, for example. So I tried something like this:

public class MyGeneric<T> {
  T mValue;
  public MyGeneric() {}
  public void OnMethod( T v ) {
    mValue = v;
  }
  public void OnMethod( bool v ) {
    // the cast is needed because mValue is declared as a T
    mValue = (T)(object)( !v );
  }
}

public class Program {
  static void Main() {
    MyGeneric<int> i = new MyGeneric<int>();
    i.OnMethod( 1 );    // Will call the generic method
    MyGeneric<bool> b = new MyGeneric<bool>();
    b.OnMethod( true ); // Will call the specialized method for bool
  }
}

Looks great, I thought. Problem solved, I thought. Let’s integrate it into the application and continue to live a happy life, I thought. I was so wrong …

Because when calling a generic method from a generic method this specialization will not work:


public class MyCaller<T> {
  MyGeneric<T> mClassToCall;
  ...
  public void MethodCaller( T v ) {
    // v has the generic type T here, so the compiler binds this call
    // to OnMethod( T v ) - the bool overload is never considered
    mClassToCall.OnMethod( v );
  }
}

public class Program {
  static void Main() {
    MyCaller<bool> boolCaller = new MyCaller<bool>();
    boolCaller.MethodCaller( true ); // Will call the generic method OnMethod( T v )
  }
}

This will not call the specialization: inside MethodCaller the parameter v has the generic type T, so the compiler binds the call to OnMethod( T v ) at compile time. You need to add a special check to deal with this situation.

The solution is simple: detect at runtime which type you are actually dealing with and do the switch yourself.
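
A minimal sketch of this runtime dispatch ( illustrative only; the cast is needed because mValue is declared as a T ):

public class MyGeneric<T> {
  T mValue;

  public void OnMethod( T v ) {
    // The runtime type check replaces the compile-time overload resolution
    if ( v is bool b ) {
      mValue = (T)(object)( !b ); // the bool-specific semantics
      return;
    }
    mValue = v;
  }
}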

Getting started with a Legacy-Code-Project


Day zero

Imagine the following situation: you are starting a new job and you are looking forward to your bright future. Of course you are planning to use the newest technologies and frameworks. And then you are allowed to take a first look at the source you have to work with: no tests, no spec which fits the source, of course no doc, but a lot of ( angry ) customers which are strongly coupled to this mess. And now you are allowed to work with this kind of … whatever.

We call this Legacy-Code, and I guess this situation is a common one: most developers will face it at some point during their career. So what can we do to get out of this? I want to show you some basic techniques which will help you.

Accept the fact: this is the code to work on and it currently solves real problems!

No developer plans to create legacy code. There is always a reason, like: we needed to get into business or we would have failed. Or the old developers did not have enough resources to solve all upcoming issues or to develop automatic tests. 10 years ago I faced this situation again and again: nobody wanted to write automatic tests because it costs a lot of time and you need some experience in designing your architecture in a way that it is testable. And there were not so many tools out there in those days.

The code is there for a reason and you need to accept this: this working legacy code ensured that you got the job. So even when it is hard, try to be polite when reading the code. Someone invested a lot of lifetime to keep it up and running. And hopefully this person is still in the company and you can ask him some questions.

You can kill him later ;-).

Check if there is any source-control-management

The first thing you should check is the existence of a Source-Control-Management-Tool like Subversion, Git or Perforce. If there is none: get one, learn how to use it and put all your legacy code into source control! Do it now, do not discuss. If any of the other developers are concerned about using one, install an SCM-tool on your own developer-PC and use it there. I promise: it will save your life some day. One colleague accidentally killed his project-files after 6 weeks of work. He forgot the right name of his backup-folder and removed the wrong one, the one containing the current source. He was trying to save disk-space, even though in those old days disk-space was already much cheaper than manpower.

To avoid errors like this: use a SCM-tool.

Check in all your files!

Now that you have a working SCM-tool, check if all source-files, scripts and Makefiles are checked in. If not: start doing this. The target of this task is simply to get a reproducible build. Work on this until you are able to build from scratch after checking out your product. And when this works, write a small KickStarter-Doc on how to build everything from scratch after a clean checkout. Of course this will not work in the beginning. Of course you will face a lot of issues like a broken build, wrong paths or a different environment. But this is also a sign of legacy code: non-reproducible builds. Normally not all related files like special Makefiles are checked in. Or sometimes the environment differs between the different developer-PCs. And this causes a lot of hard-to-reproduce issues.

Do you know the phrase “It worked on my machine!” after facing a new bug? Sometimes the developer was right: the issue was caused by different environments on the developer machines ( for instance a different compiler version, different IDE, different kernel, different whatever … ).

When you have checked in all your files, try to ensure that everyone is using the same tools: same compiler version, same libs, same IDE, and document this in your KickStarter-Doc. Then let others try to work with it and fix all upcoming issues.

This can slow down the ongoing development tasks. To avoid this you can learn how to work with branches with your SCM-tool ( for instance this doc shows how to do branches in git: https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging ).
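
For instance with git, a typical flow could look like this ( the branch names are just examples ):

# create and switch to a working branch for the build-fixes
git checkout -b fix-build

# commit your changes there
git add Makefile
git commit -m "Fix broken include paths"

# merge the fixes back into the main line when the build works
git checkout master
git merge fix-build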

More Quality-Assurance on GitHub via SAAS


When you are working with your project on GitHub there are a lot of really handy services which you can use. This kind of software usage is called “Software-As-A-Service”. Why? You can use it via a nice Web-API without having to do all the maintenance work yourself.

For instance, when you want to use a Continuous-Integration-Service for your project you could set up a new PC and install Jenkins on it. Or you just use Travis on GitHub instead.
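
For a CMake-based C++ project a minimal .travis.yml could look like this ( just a sketch; a real configuration will most likely contain more steps ):

language: cpp
compiler: gcc
script:
  - cmake .
  - make
  - make test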

So I just started to use some more services on GitHub for my projects, especially for Asset-Importer-Lib of course ( see https://github.com/assimp/assimp and its dependency https://github.com/kimkulling/openddl-parser.git ).

Watch your logs in your unittests!


The idea

Unittests and integration-tests are a great tool to avoid breaking your code. They build a safety-net which helps you when you have to add a new feature or fix a bug in an existing codebase.
But of course there will be situations when a bug occurs which was not covered by your test-suite. One way to get an understanding of what went wrong are logfiles. You can use them to write a protocol of what happened during runtime. When something useful happened ( like creating a new entry in a database ) you can write this information with a timestamp into your protocol. When an error occurs, like the disk being full, an error-entry will be created. And you can use the log to record some internal states of your application. When the log-entries are well maintained they help you to get a better understanding of what happened during a crash ( and of course what went wrong before ). And you can use them to post warnings like: be careful, this API-call is deprecated.
But do you watch your logs in your unit-tests and integration-tests? Maybe there is interesting information stored in them for a test fixture which you should take care of as well. For instance when you declare an API-call as deprecated, but this call is still in use in a different sub-system, it would be great to get this as an error in a unittest. Or when some kind of warning occurs at some point in your log. We observed stuff like that in production code more than once. To take care of these situations we added a functionality called a log-filter: you can use it to define expected log-entries, like an error which must occur in a test because you want to test the error behaviour. When unexpected entries show up, the test will fail. So you will see in your tests what shall happen and what shall not.

Do some coding

Let’s start with a simple logging service for an application.
My basic concept for a logging service looks like this:
A logger is some kind of service, so there is only one of it. Normally I build a special kind of singleton to implement it ( yes I know, they are bad, shame on me ). You create it at startup and destroy it at the end of the application ( last call before exit( 0 ) ).
Log entries have different severities, for instance:
Debug: Internal state messages for debugging
Info: All useful messages
Warn: Warnings for the developer like “API-call deprecated”
Error: External errors like disk full or DAU-user-error
Fatal: An internal error has occurred caused by a bug
You can register log-streams at the logger; each log-stream will write the protocol to a specific output like a log-file or a window in the frontend.
In code this could look like:

class AbstractLogStream {
public:
  virtual ~AbstractLogStream();
  virtual void write( const std::string &message ) = 0;
};

class LoggingService {
public:
  // the entry severity
  enum class Severity {
    Debug,
    Info,
    Warn,
    Error,
    Fatal
  };
  static LoggingService &create();
  static void destroy();
  static LoggingService &getInstance();
  void registerStream( AbstractLogStream *stream );
  void log( Severity sev, const std::string &message, 
    const std::string &file, unsigned int line );
  
private:
  static LoggingService *mInstance;
  LoggingService();
  ~LoggingService();
};

With this simple API you can create and destroy your log service, log messages of different severities and register your own log-streams.
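
Using it could look like this ( a small sketch built on the API above; the severity and message are just examples ):

int main() {
  LoggingService::create();
  LoggingService::getInstance().log( LoggingService::Severity::Info,
      "Application started", __FILE__, __LINE__ );
  // ... run the application ...
  LoggingService::destroy();
  return 0;
}
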
Now we want to observe the entries during the unittests. A simple API to do this could look like this:

#include <algorithm>
#include <string>
#include <vector>

class TestLogStream : public AbstractLogStream {
public:
  TestLogStream();
  ~TestLogStream();
  void write( const std::string &message ) override {
    TestLogFilter::getInstance().addLogEntry( message );
  }
};

class TestLogFilter {
public:
  static TestLogFilter &create();
  static void destroy();
  static TestLogFilter &getInstance();
  void addLogEntry( const std::string &message );
  void registerExpectedEntry( const std::string &message );
  bool hasUnexpectedLogs() const {
    for ( const auto &entry : m_messages ) {
      auto it = std::find( m_expectedMessages.begin(), m_expectedMessages.end(), entry );
      if ( it == m_expectedMessages.end() ) {
        // this entry was not declared as expected
        return true;
      }
    }
    return false;
  }

private:
  TestLogFilter();
  ~TestLogFilter();

private:
  std::vector<std::string> m_expectedMessages;
  std::vector<std::string> m_messages;
};

The filter contains two string-arrays:
One contains all expected entries, which are allowed for the unittest
The other one contains all log-entries which were written by the TestLogStream during the test fixture
Let’s try it out

You need to set up your test-filter before running your tests. You can use the registerExpectedEntry-method to add an expected entry for your test-execution.
Most unittest-frameworks support some kind of setup-callback which runs before each test of a bundle. I prefer to use gtest, where you can create a fixture-class like this:

#include <gtest/gtest.h>

class MyTest : public ::testing::Test {
protected:
  virtual void SetUp() {
    LoggingService::create();
    TestLogFilter::create();
    LoggingService::getInstance().registerStream( new TestLogStream );
    TestLogFilter::getInstance().registerExpectedEntry( "Add your entry here!" );
  }

  virtual void TearDown() {
    // check before tearing the filter down
    EXPECT_FALSE( TestLogFilter::getInstance().hasUnexpectedLogs() );
    TestLogFilter::destroy();
    LoggingService::destroy();
  }
};

TEST_F( MyTest, do_a_test ) {
  ...
}

First you need to create the logging-service. In this example only the TestLogStream will be registered. Afterwards we register one expected entry for the test fixture.
After a test has run, the TearDown-callback will check if any unexpected log-entries were written.
So when unexpected entries were detected the test will be marked as a failure. And you can see if you forgot to deal with any new behaviour.
What to do next

You can add more useful stuff like:
Add wildcards for expected log entries
Make it thread-safe ( see the sketch below )
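
For the thread-safety point, a minimal sketch could guard the filter’s containers with a mutex ( only the relevant parts are shown; assuming C++11 ):

#include <mutex>
#include <string>
#include <vector>

class TestLogFilter {
public:
  void addLogEntry( const std::string &message ) {
    // log-streams may write from several threads, so serialize the access
    std::lock_guard<std::mutex> guard( m_mutex );
    m_messages.push_back( message );
  }
  // ... same for registerExpectedEntry and hasUnexpectedLogs ...

private:
  mutable std::mutex m_mutex;
  std::vector<std::string> m_messages;
};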

Please use only one statement per assert


Do you know the assert-macro? It is an easy tool for debugging: you can use it to
check if a pointer is a NULL-pointer or if your application is in a proper state
for processing. When this is not the case, it will stop your application in a
debug build; in a release build normally nothing happens. Depending on your
platform this can vary a little bit. For instance the Qt-framework prints a
log-message to stderr if an assert test fails while you are using a release
build. So assert is a nice tool to check pre-conditions for your function /
method. And you will see your application crashing when such a precondition is
not fulfilled. Thanks to some preprocessor-magic the failed statement itself
will be printed to stderr. So when you are writing something like:

void foo( bar_t *ptr ) {
  assert( NULL != ptr );
  ...
}

and your pointer is a NULL-pointer in your application you will get some info on
stderr like:

assert in line 222, file bla.cpp: assert( NULL != ptr );

Great, you see what is wrong and you can start to fix that bug. But sometimes you
have to check more than one parameter or state:

global_state_t MyState = init;

void foo( bar_t *ptr ) {
  assert( NULL != ptr && MyState == init );
  ...
}

Nice one, your application still breaks and you can still see what went wrong?
Unfortunately not; you will get a message like:

assert in line 222, file bla.cpp: assert( NULL != ptr && MyState == init );

So what went wrong? You will not be able to understand this at first glance,
because the pointer could be NULL, or the state may be wrong, or both checks
may have failed. You need to dig deeper to understand the error. For a second
developer this gets even more complicated, because he will most likely not know
which error case he should check first, since he didn’t write the code.
So when you have to check more than one state please use more than one assert:

global_state_t MyState = init;

void foo( bar_t *ptr ) {
  assert( NULL != ptr );
  assert( MyState == init );
  ...
}

Thanks!