Assimp: Starting to support point-clouds

In Asset-Importer-Lib we got a lot of feature-requests to support point-clouds somehow. Doing this will be a big step for our library, because the intention when designing Assimp was to give the user an optimized scene with meshes ready for rendering. We need triangles to get a valid scene definition, and all free vertices which are not referenced by a triangle / face will be removed during post-processing. So a point-cloud, i.e. free vertices which are not referenced by any triangle, was not possible with our initial design.

My first step ( and the easiest way to go, I guess ) is to be able to export point-cloud data as an ASCII-STL-file. You can find this feature on our current master branch on GitHub. To export a point-cloud you have to follow these steps:

  1. Define a scene containing some vertices without any face declarations.
  2. Create your exporter, set the newly introduced boolean property AI_CONFIG_EXPORT_POINT_CLOUDS to true
  3. Use the ASCII-STL-exporter:
struct XYZ {
    float x, y, z;
};

std::vector<XYZ> points;

for (size_t i = 0; i < 10; ++i) {
    XYZ current;
    current.x = static_cast<float>(i);
    current.y = static_cast<float>(i);
    current.z = static_cast<float>(i);
    points.push_back( current );
}
aiScene scene;
scene.mRootNode = new aiNode();

scene.mMeshes = new aiMesh*[1];
scene.mMeshes[0] = nullptr;
scene.mNumMeshes = 1;

scene.mMaterials = new aiMaterial*[1];
scene.mMaterials[0] = nullptr;
scene.mNumMaterials = 1;

scene.mMaterials[0] = new aiMaterial();

scene.mMeshes[0] = new aiMesh();
scene.mMeshes[0]->mMaterialIndex = 0;

scene.mRootNode->mMeshes = new unsigned int[1];
scene.mRootNode->mMeshes[0] = 0;
scene.mRootNode->mNumMeshes = 1;

auto pMesh = scene.mMeshes[0];

const size_t numValidPoints = points.size();

pMesh->mVertices = new aiVector3D[ numValidPoints ];
pMesh->mNumVertices = static_cast<unsigned int>( numValidPoints );

unsigned int i = 0;
for (const XYZ &p : points) {
    pMesh->mVertices[ i ] = aiVector3D( p.x, p.y, p.z );
    ++i;
}

Assimp::Exporter exporter;
Assimp::ExportProperties *properties = new Assimp::ExportProperties;
properties->SetPropertyBool( AI_CONFIG_EXPORT_POINT_CLOUDS, true );
exporter.Export( &scene, "stl", "testExport.stl", 0, properties );

delete properties;

The exported file will have the following syntax:

 solid Assimp_Pointcloud
 facet normal 0 0 0
  vertex 0 0 0
  vertex 0 0 0
  vertex 0 0 0
  vertex 1 1 1
  vertex 1 1 1
  vertex 1 1 1
  vertex 2 2 2
  vertex 2 2 2
  vertex 2 2 2
  vertex 3 3 3
  vertex 3 3 3
  vertex 3 3 3
  vertex 4 4 4
  vertex 4 4 4
  vertex 4 4 4
  vertex 5 5 5
  vertex 5 5 5
  vertex 5 5 5
  vertex 6 6 6
  vertex 6 6 6
  vertex 6 6 6
  vertex 7 7 7
  vertex 7 7 7
  vertex 7 7 7
  vertex 8 8 8
  vertex 8 8 8
  vertex 8 8 8
  vertex 9 9 9
  vertex 9 9 9
  vertex 9 9 9
endsolid Assimp_Pointcloud

If you need any other way to define point-clouds please use this post to give me feedback!

The next step will be to be able to import a point-cloud. So stay tuned…

OSRE: The threading model

When I started to work on my OSRE-Engine I was just curious how to answer the following questions: how can I work with a separate render-thread without getting mad all day long, and what the fuck is a thread anyway? So I decided to implement my own multi-threaded renderer some years ago. Especially when you want to learn something like this you need some time to think about it. I also became a father during this period of time ( 6 years ago ), so it took quite a while to get a good understanding of it.

After 6 years I have the feeling that maybe someone else is interested in my design as well, so I decided to write a blogpost about it. Hopefully you will enjoy it:

Before C++11, threading was not a default feature of C/C++. So if you wanted to use it you needed to implement it against the runtime of your operating system. I started using the Win32-API, because my plan was to run a separate render-thread for my own RenderDevice based on DirectX9. You can find some documentation about threading here.

When you start an application it first starts its own process. A process is some kind of OS-specific container for your application. When you are writing a simple Hello-World app your compiler will generate an executable which contains an entry point for a new process ( in C/C++ this entry point is called main() ). A process owns its own address-space; simplified, it encapsulates the context ( handles / memory / resources ) of your application. No other application is allowed to access any memory address of your process. And: in a process you can start different threads. When starting a process there will be one thread running your Hello-World. A thread is an OS-specific way to run code, and a process can start several threads. So what is the reason to run more than one thread? There are a lot of reasons:

  • You have to run a time-consuming computation and you do not want to block your main thread, because it controls your UI ( and you do not want a blocked UI )
  • You want to run several computations at the same time on different cores
  • You are running a server and you don’t want to get one client blocked by other clients, so each client handling is running in its own thread

In my case the plan was to run one main thread, which runs the main engine code, and to spawn one render-thread, which encapsulates the rendering completely. The draw calls will not be blocked by any other computation. I started a simple Win32-thread and implemented a renderer based on D3D9 for it. This is simple: just take a Triangle render-example and start it in a separate thread:

class RendererD3D9 {
public:
    void renderScene(); // do the render magic here
};

static void RenderFunc(void *data) {
    RendererD3D9 *renderer = (RendererD3D9*) data;
    bool stop = false;
    while( !stop ) {
        renderer->renderScene(); // do the rendering here
    }
}

int main( int argc, char *argv[] ) {
    RendererD3D9 *renderer = CreateRenderer();
    RunThread( RenderFunc, renderer );

    return 0;
}

Looks simple: you use the Win32-API to spawn the function RenderFunc in a separate thread and your triangle will be rendered. Unfortunately we need some way to communicate with this renderer during runtime:

  • I want to update the uniform buffers, because my triangle needs to move
  • I want to change the scene

You need some functionality to communicate with this thread: a concurrent-queue. When the thread is spawned it looks for anything which was enqueued and handles it:

class RendererD3D9 {
public:
    void updateScene( void *data );
    void renderScene();
};

class ConcurrentQueue {
public:
    void enqueue( void *data );
    void *dequeue();
    bool isEmpty() const;

private:
    Mutex m_mutex;      // to lock the access
    Condition m_finish; // to terminate the thread
};

struct ThreadData {
    RendererD3D9 *renderer;
    ConcurrentQueue *queue;
};

static void RenderFunc(void *data) {
    ThreadData *threadContext = (ThreadData*) data;
    bool stop = false;
    while( !stop ) {
        if ( !threadContext->queue->isEmpty() ) {
            threadContext->renderer->updateScene( threadContext->queue->dequeue() );
        }
        threadContext->renderer->renderScene();
    }
}

The main- and the render-thread are both able to access the concurrent-queue. If the main thread wants to add any updates it can enqueue the update data for the renderer. Before rendering the next frame the render-thread will look if there are any updates for the scene it shall render. If this is the case the data will be dequeued from the concurrent-queue and the updateScene-call will be performed. Afterwards the rendering of the next frame will be performed.

This concept works fine for long-running tasks like rendering a frame 60 times a second. But the design works only for a renderer at the moment. If you want to be able to run arbitrary stuff in a thread you need a way to install your own handlers. So I introduced an abstract interface called AbstractEventHandler. The data will be encapsulated in an event ( a small class / struct which contains the payload for the next handling ):

class Event {};     // contains the Id of the event
class EventData {}; // contains the data assigned to the event

class AbstractEventHandler {
public:
    virtual void onEvent( const Event &event, const EventData *data ) = 0;
};

struct ThreadData {
    Condition *finished;
    AbstractEventHandler *handler;
    ConcurrentQueue *queue;
};

static void ThreadFunc(void *data) {
    ThreadData *threadContext = (ThreadData *) data;
    bool stop = false;
    while( !stop ) {
        if ( !threadContext->queue->isEmpty() ) {
            auto entry = threadContext->queue->dequeue();
            threadContext->handler->onEvent( entry.getEvent(), entry.getEventData() );
        }
        stop = threadContext->finished->isSignaled();
    }
}

So a developer can install his own handler-code by deriving his own EventHandler-class from AbstractEventHandler; the data to handle is described by the event and the corresponding payload is stored in the EventData. If you want to stop the thread execution you can use the Condition, which is a way to use a flag in a thread-safe way.

The next step was to build an interface for working with threads on different API’s. I called it AbstractThread. It encapsulates the implementation details for running a thread:

class OSRE_EXPORT AbstractThread {
public:
    ///	The function pointer for a user-specific thread-function.
    typedef ui32 (*threadfunc) ( void * );

    ///	@brief	This enum describes the priority of the thread.
    enum class Priority {
        Low,	///< Low prio thread.
        Normal,	///< Normal prio thread.
        High	///< High prio thread.
    };

    ///	@enum	ThreadState
    ///	@brief	Describes the current state of the thread.
    enum class ThreadState {
        New,			///< In new state, just created
        Running,		///< Thread is currently running
        Waiting,		///< Awaits a signal
        Suspended,		///< Is suspended
        Terminated		///< Thread is terminated, will be destroyed immediately
    };

    virtual ~AbstractThread();
    virtual bool start( void *pData ) = 0;
    virtual bool stop() = 0;
    virtual ThreadState getCurrentState() const;
    virtual bool suspend() = 0;
    virtual bool resume() = 0;
    virtual void setName( const String &name ) = 0;
    virtual const String &getName() const = 0;
    virtual void waitForTimeout( ui32 ms ) = 0;
    virtual void wait() = 0;
    virtual AbstractThreadEvent *getThreadEvent() const = 0;
    virtual void setPriority( Priority prio ) = 0;
    virtual Priority getPriority() const = 0;
    virtual const String &getThreadName() const = 0;
    virtual AbstractThreadLocalStorage *getThreadLocalStorage() = 0;
    virtual void setThreadLocalStorage( AbstractThreadLocalStorage *tls ) = 0;
    virtual void setThreadId( const ThreadId &id ) = 0;
    virtual ThreadId getThreadId() = 0;
};

It contains some more functionality:

  • Each thread has its own id to be able to identify it
  • The interface offers a way to define a priority for the thread execution
  • You can assign a name to your thread if your operating system offers this feature ( when using the Win32-API there is a way to assign a name to a dedicated thread which will be shown in your debugger )
  • There is a state-machine which shows the internal state of the thread. This will help a user to monitor the state for debugging during runtime.

To define a way how to execute a thread with its assigned EventHandler-Instance there is a class called SystemTask:

class OSRE_EXPORT SystemTask : public AbstractTask {
public:
    virtual bool start( Platform::AbstractThread *pThread );
    virtual bool stop();
    virtual bool isRunning() const;
    virtual bool execute();
    virtual void setThreadInstance( Platform::AbstractThread *pThreadInstance );
    virtual void onUpdate();
    virtual void awaitUpdate();
    virtual void awaitStop();
    virtual void attachEventHandler( Common::AbstractEventHandler *pEventHandler );
    virtual void detachEventHandler();
    virtual bool sendEvent( const Common::Event *pEvent, const Common::EventData *pEventData );
    virtual ui32 getEventQueueSize() const;
};

You can start a system-task by assigning it its thread instance and its event-handler. Of course you can stop it as well. And you can send events to your thread. The events will be enqueued in the thread-specific concurrent-queue.

So I reached my targets:

  • I was able to run a dedicated thread for dealing with my render device by implementing a render-specific event-handler class
  • I can communicate to the thread by using a concurrent queue
  • I can start / stop the thread execution
  • I can use this concept to define other tasks by implementing different event-handlers

One nice side-effect: by defining only the event-based protocol for rendering a scene you decouple the way the renderer is implemented. It is encapsulated by the render-thread-specific event-handler. The user will only see the events to work with.

Dealing with Legacy Code Part 2: Extract the API out of your interfaces

Think about the following situation:

There is an interface / class / function in your application, used by others, which contains non-standard enums / types / whatever coming from an external API like OpenGL, DirectX or some commercial API. The underlying code strongly uses this API, so using parts of it was straightforward. Instead of having a strong encapsulation the API got spread all over your codebase. Unfortunately the Product-Manager generates this request: we need to exchange the underlying API because of whatever … Easy, just change this one API below your code and … wait a minute. There are all these symbols coming from the old API. WTF!

It looks like:

enum class API1Enum {
  feature1,
  feature2
};

class MyInterface {
public:
  virtual void enableFeature( API1Enum myEnum ) = 0;
};

// Implementation for API1
class MyAPI1Impl : public MyInterface {
public:
  void enableFeature( API1Enum myEnum ) override {
    // handle the enum from the API1
    switch (myEnum) { ... }
  }
};

For the first implementation this concept works fine. Now you get a new API to wrap, just implement the interface, right?

class MyAPI2Impl : public MyInterface {
public:
  // Damned, API1 does not exist anymore ...
  void enableFeature( API1Enum myEnum ) override {
    ...
  }
};

Solution: Wrap the enum as well:

Of course you can introduce another app-specific enum, wrap the API by using it and you are fine:

// From the API
enum class API1Enum {
  feature1,
  feature2
};

// defined in your app
enum class AppEnum {
  Appfeature1,
  Appfeature2
};

// change your interface
class MyInterface {
public:
  virtual void enableFeature( AppEnum myEnum ) = 0;
};

// Introduce functions for translation
static API1Enum translateAPI1( AppEnum type ) {
  switch( type ) {
    case AppEnum::Appfeature1: return API1Enum::feature1;
    case AppEnum::Appfeature2: return API1Enum::feature2;
    // error handling
  }
}

static API2Enum translateAPI2( AppEnum type ) {
  // do it for API2
}

class MyAPI1Impl : public MyInterface {
public:
  void enableFeature( AppEnum myEnum ) override {
    // translate AppEnum and handle it for API1
    switch (translateAPI1(myEnum)) { ... }
  }
};

class MyAPI2Impl : public MyInterface {
public:
  void enableFeature( AppEnum myEnum ) override {
    // translate AppEnum and handle it for API2
    switch (translateAPI2(myEnum)) { ... }
  }
};

What you are doing:

  1. Introduce an APP-specific enum
  2. Use this enum instead of the API-specific enum
  3. Introduce translation functions to translate the AppEnum to the API1Enum / API2Enum
  4. For the API-specific implementations: translate the AppEnum into the API-specific enum


You can use lookup-tables instead of the translation-functions. But as a first step translation-functions are much easier to debug.

Use the Asset-Importer-Lib Meta-Data-API right

The problem:

Think of the following situation: you want to import a model using Asset-Importer-Lib and store some values like the version of the current asset or the author / company. Or when you want to manage the models in modules for a much more efficient SCM-workflow you need to store grouping information. How can you do that using the Asset-Importer-Lib?

The solution: The Metadata API:

Asset-Importer-Lib provides a meta-data API to offer a solution for these kinds of use-cases. It is straightforward to use:

// allocate two entries
aiMetadata *data = aiMetadata::Alloc( 2 );
unsigned int index( 0 );
bool success( false );
const std::string key_int = "test_int";
// store an int value
success = data->Set( index, key_int, 1 );

// store a string value
index = 1;
const std::string key = "test";
success = data->Set( index, key, aiString( std::string( "test" ) ) );

// Deallocate the data afterwards
aiMetadata::Dealloc( data );

You can store an arbitrary number of items; the supported data types include bool, int32, uint64, float, double, aiString and aiVector3D.

The intermediate data-structure aiNode can store this data.

Getting started with a Legacy-Code-Project

Day zero

Imagine the following situation: you are starting a new job and you are looking forward to your bright future. Of course you are planning to use the newest technologies and frameworks. And then you are allowed to take a first look into the source you have to work with. No tests, no spec which fits the source, of course no doc, but a lot of ( angry ) customers which are strongly coupled to this mess. And now you are allowed to work with this kind of … whatever.

We call this Legacy-Code, and I guess this situation is a common one; every developer, or at least most of them, will face a situation like this during his/her career. So what can we do to get out of this? I want to show you some base-techniques which will help you.

Accept the fact: this is the code to work on and it currently solves real problems!

No developer plans to create legacy code. There is always a reason like: we needed to get to business fast or we would have failed. Or the old developers had not enough resources to solve all upcoming issues or to develop automatic tests. 10 years ago I faced this situation again and again: nobody wanted to write automatic tests because it costs a lot of time and you need some experience in designing your architecture in a way that it is testable. And there were not so many tools out there in those days.

The code is there for a reason and you need to accept this: this working legacy code ensured that you got the job. So even when it's hard, try to be polite when reading the code. Someone invested a lot of lifetime to keep it up and running. And hopefully this guy is still in the company and you can ask him some questions.

You can kill him later ;-).

Check, if there is any source-control-management

The first thing you should check is the existence of a Source-Control-Management-tool like Subversion, Git or Perforce. If not: get one, learn how to use it and put all your legacy code into source control! Do it now, do not discuss. If any of the other developers are concerned about using one, install a SCM-tool on your own developer-PC and use it there. I promise: it will save your life some day. One colleague accidentally killed his project-files after 6 weeks of work. He forgot the right name of his backup-folder and removed the wrong one, the one containing the current source. He tried to save disk-space, even though in those old days disk-space was already much cheaper than manpower.

To avoid errors like this: use a SCM-tool.

Check-in all your files!

Now that you have a working SCM-tool, check if all source-files, scripts and Makefiles are checked in. If not: start doing this. The target of this task is simply a reproducible build. Work on this until you are able to build from scratch after checking out your product. And when this works, write a small KickStarter-doc on how to build everything from scratch after a clean checkout. Of course this will not work in the beginning. Of course you will face a lot of issues like a broken build, wrong paths or a different environment. But this is also a sign of legacy-code: no reproducible builds. Normally not all related files like special Makefiles are checked in. Or sometimes the environment differs between the different developer-PCs. And this causes a lot of hard-to-reproduce issues.

Do you know the phrase “It worked on my machine!” after facing a new bug? Sometimes the developer was right: the issue was caused by different environments on the developer machines ( for instance a different compiler version, different IDE, different kernel, different whatever … ).

When you have checked in all your files, try to ensure that everyone is using the same tools: same compiler version, same libs, same IDE, and document this in your KickStarter-doc. Let other developers try to work with this and fix all upcoming issues.

This can slow down the ongoing development tasks. To avoid this you can learn how to work with branches in your SCM-tool ( for instance there are docs which show how to do branches in git ).

More Quality-Assurance on GitHub via SAAS

When you are working with your project on GitHub there are a lot of really handy services which you can use. This kind of software-usage is called “Software-As-A-Service”. Why? You can use it via a nice Web-API without having all the maintenance-work.

For instance, when you want to use a Continuous-Integration service for your project you could set up a new PC and install Jenkins. Or you just use Travis on GitHub instead.

So I just started to use some more services on GitHub for my projects, especially for Asset-Importer-Lib ( see and its dependency ) of course:


Watch your logs in your unittests!

The idea

Unittests and integration-tests are a great tool to not break your code. They build a safety-net which helps you when you have to add a new feature or fix a bug in an existing codebase.
But of course there will be situations when a bug occurs which was not covered by your test-suite. One way to get an understanding of what went wrong are logfiles. You can use them to write a protocol of what happened during runtime. When something useful happened ( like creating a new entry in a database ) you can write this information with a timestamp into your protocol. When a bug occurs, like the disk being full, an error-entry will be created. And you can use them to log some internal states of your application. When the log-entries are well maintained they help you to get a better understanding of what happened during a crash ( and of course what went wrong before ). And you can use them to post warnings like: be careful, this API-call is deprecated.
But do you watch your logs in your unit-tests and integration-tests? Maybe there is interesting information stored in them for a test-fixture which you should take care of as well. For instance, when you declare an API-call as deprecated but this call is still in use in a different sub-system, it would be great to get this as an error in a unittest. Or when some kind of warning occurs at some point in your log. We observed stuff like that in production code more than once. To take care of these situations we added a functionality called a log-filter: you can use it to define expected log-entries, like an error which must occur in a test because you want to test the error behaviour. When unexpected entries are there the test will fail. So you will see in your tests what shall happen and what not.

Do some coding

Let's start with a simple logging service for an application. My basic concept for a logging service looks like this:

  • A logger is some kind of a service, so only one instance exists. Normally I build a special kind of singleton to implement it ( yes I know, they are bad, shame on me ). You create it on startup and you shall destroy it at the end of the application ( last call before exit( 0 ) ).
  • Log entries have different severities, for instance:
      Debug: internal state messages for debugging
      Info: all useful messages
      Warn: warnings for the developer like “API-call deprecated”
      Error: external errors like a full disk or a DAU-user-error
      Fatal: an internal error has occurred, caused by a bug
  • You can register log-streams at the logger; each log-stream will write the protocol to a special output like a log-file or a window of the frontend.
In code this could look like:

class AbstractLogStream {
public:
  virtual ~AbstractLogStream();
  virtual void write( const std::string &message ) = 0;
};

class LoggingService {
public:
  // the entry severity
  enum class Severity {
    Debug, Info, Warn, Error, Fatal
  };

  static LoggingService &create();
  static void destroy();
  static LoggingService &getInstance();
  void registerStream( AbstractLogStream *stream );
  void log( Severity sev, const std::string &message,
    const std::string &file, unsigned int line );

private:
  static LoggingService *mInstance;
};

With this simple API you can create and destroy your log service, log messages of different severities and register your own log-streams.
Now we want to observe the entries during your unittests. A simple API to do this could look like:

class TestLogStream : public AbstractLogStream {
public:
  void write( const std::string &message ) override {
    TestLogFilter::getInstance().addLogEntry( message );
  }
};

class TestLogFilter {
public:
  static TestLogFilter &create();
  static void destroy();
  static TestLogFilter &getInstance();
  void addLogEntry( const std::string &message );
  void registerExpectedEntry( const std::string &message );
  bool hasUnexpectedLogs() const {
    for ( const auto &entry : m_messages ) {
      auto it = std::find( m_expectedMessages.begin(), m_expectedMessages.end(), entry );
      if ( it == m_expectedMessages.end() ) {
        return true; // this entry was not expected
      }
    }
    return false;
  }

private:
  std::vector<std::string> m_expectedMessages;
  std::vector<std::string> m_messages;
};

The filter contains two string-arrays:

  • One contains all expected entries which are allowed for the unittest
  • The other one contains all log-entries which were written by the TestLogStream during the test-fixture
Let’s try it out

You need to set up your test-filter before running your tests. You can use the registerExpectedEntry-method to add an expected entry for your test-execution.
Most unittest-frameworks support some kind of setup-callback mechanism before executing a bundle of tests. I prefer to use gtest. So you can create a test-fixture class like this:

#include <gtest/gtest.h>

class MyTest : public ::testing::Test {
protected:
  virtual void SetUp() {
    LoggingService::getInstance().registerStream( new TestLogStream );
    TestLogFilter::getInstance().registerExpectedEntry( "Add your entry here!" );
  }

  virtual void TearDown() {
    EXPECT_FALSE( TestLogFilter::getInstance().hasUnexpectedLogs() );
  }
};

TEST_F( MyTest, do_a_test ) {
  // your test code, which may write log-entries
}

First you need to create the logging-service. In this example only the TestLogStream will be registered. Afterwards we register one expected entry for the test-fixture.
When all tests have run, the TearDown-callback will check if any unexpected log-entries were written.
So when unexpected entries were detected the test will be marked as a failure. And you can see if you forgot to deal with any new behaviour.
What to do next

You can add more useful stuff like:

  • Add wildcards for expected log-entries
  • Make this thread-safe

Build Asset Importer Lib for 64bit with Visual Studio from source-repo

If you want to generate a 64bit-build of Asset-Importer-Lib by using the Visual Studio project files generated by CMake please follow these instructions:

  1. Make sure that you are using a supported CMake version ( 2.8 or higher at the moment ) and a supported Visual-Studio version ( on the current master VS2010 is deprecated )
  2. Clone the latest master of Asset-Importer-Lib from GitHub
  3. Generate the project files with the command:

cmake -G "Visual Studio 14 Win64"

  4. Open the solution, build the whole project and enjoy the 64-bit version of your famous Asset-Importer-Lib.

This should help you if you are struggling with this feature. We just learned that just switching the code generation to 64bit does not work.

Feel free to report any issues if you observed one.

Asset Importer Lib binaries of the latest build

If you are looking for the latest Asset-Importer-Lib build: we are using AppVeyor ( check their web-site, it's free for open-source projects ) as the Continuous-Integration service for Windows. If the build was successful it will create an archive containing the DLLs, all executables and the export libraries for Windows. At the moment we are supporting the following versions:

  • Visual Studio 2015
  • Visual Studio 2013
  • Visual Studio 2012

I am planning to support the MinGW version as well. Unfortunately first I have to update one file which is much too long for the MinGW-compiler ( thanks to the guys from the Qt-framework ).

Please use only one statement per assert

Do you know the assert-macro? It is an easy tool for debugging: you can use it to check if a pointer is a NULL-pointer or if your application is in a proper state for processing. When this is not the case it will stop your application when you are using a debug build; in release mode normally nothing happens. Depending on your platform this can vary a little bit. For instance the Qt-framework prints a log-message to stderr when an assert test fails while you are using a release build. So assert is a nice tool to check pre-conditions for your function / method. And you will see your application crash when this precondition is not fulfilled. Thanks to some preprocessor-magic the statement itself will be printed to stdout. So when you write something like

void foo( bar_t *ptr ) {
  assert( NULL != ptr );
}

and your pointer is a NULL-pointer in your application you will get some info on
your stdout like:

assert in line 222, file bla.cpp: assert( NULL != ptr );

Great, you see what is wrong and you can start to fix that bug. But sometimes you
have to check more than one parameter or state:

global_state_t MyState = init;

void foo( bar_t *ptr ) {
  assert( NULL != ptr && MyState == init );
}

Nice one, your application still breaks and you can still see what went wrong, right?
Unfortunately not, you will get a message like:

assert in line 222, file bla.cpp: assert( NULL != ptr && MyState == init );

So what went wrong? You will not be able to understand this at first look,
because the pointer could be NULL, or the state may be wrong, or both of the tests
failed. You need to dig deeper to understand the error.
For a second developer this gets even more complicated, because he will most likely
not know which error case he should check first, because he didn't write the code.
So when you have to check more than one state please use more than one assert:

global_state_t MyState = init;

void foo( bar_t *ptr ) {
  assert( NULL != ptr );
  assert( MyState == init );
}