Assimp: Starting to support point-clouds

In Asset-Importer-Lib we got a lot of feature-requests to provide point-clouds somehow. Doing this is a big step for our library, because the intention when designing Assimp was to give the user an optimized scene with meshes ready for rendering. We need triangles to get a valid scene definition, and all free vertices which are not referenced by a triangle / face are removed during post-processing. So a point-cloud, i.e. free vertices which are not referenced by any face, was not possible with our initial design.

My first step ( and the easiest way to go, I guess ) is to be able to export point-cloud data as an ASCII-STL-file. You can find this feature on our current master branch on GitHub. To export a point-cloud you have to follow these steps:

  1. Define a scene containing some vertices without any face declarations.
  2. Create your exporter, use the newly introduced boolean property AI_CONFIG_EXPORT_POINT_CLOUDS and set it to true.
  3. Use the ASCII-STL-exporter:
struct XYZ {
    float x, y, z;
};

std::vector<XYZ> points;

for (size_t i = 0; i < 10; ++i) {
    XYZ current;
    current.x = static_cast<float>(i);
    current.y = static_cast<float>(i);
    current.z = static_cast<float>(i);
    points.push_back(current);
}

aiScene scene;
scene.mRootNode = new aiNode();

scene.mMeshes = new aiMesh*[1];
scene.mMeshes[0] = nullptr;
scene.mNumMeshes = 1;

scene.mMaterials = new aiMaterial*[1];
scene.mMaterials[0] = new aiMaterial();
scene.mNumMaterials = 1;

scene.mMeshes[0] = new aiMesh();
scene.mMeshes[0]->mMaterialIndex = 0;

scene.mRootNode->mMeshes = new unsigned int[1];
scene.mRootNode->mMeshes[0] = 0;
scene.mRootNode->mNumMeshes = 1;

aiMesh *pMesh = scene.mMeshes[0];

const size_t numValidPoints = points.size();

pMesh->mVertices = new aiVector3D[numValidPoints];
pMesh->mNumVertices = static_cast<unsigned int>(numValidPoints);

size_t i = 0;
for (const XYZ &p : points) {
    pMesh->mVertices[i] = aiVector3D(p.x, p.y, p.z);
    ++i;
}

Assimp::Exporter exporter;
Assimp::ExportProperties *properties = new Assimp::ExportProperties;
properties->SetPropertyBool(AI_CONFIG_EXPORT_POINT_CLOUDS, true);
exporter.Export(&scene, "stl", "testExport.stl", 0, properties);

delete properties;

The exported file will look like this ( note that each point is written three times, because an STL facet consists of three vertices ):

 solid Assimp_Pointcloud
 facet normal 0 0 0
  vertex 0 0 0
  vertex 0 0 0
  vertex 0 0 0
  vertex 1 1 1
  vertex 1 1 1
  vertex 1 1 1
  vertex 2 2 2
  vertex 2 2 2
  vertex 2 2 2
  vertex 3 3 3
  vertex 3 3 3
  vertex 3 3 3
  vertex 4 4 4
  vertex 4 4 4
  vertex 4 4 4
  vertex 5 5 5
  vertex 5 5 5
  vertex 5 5 5
  vertex 6 6 6
  vertex 6 6 6
  vertex 6 6 6
  vertex 7 7 7
  vertex 7 7 7
  vertex 7 7 7
  vertex 8 8 8
  vertex 8 8 8
  vertex 8 8 8
  vertex 9 9 9
  vertex 9 9 9
  vertex 9 9 9
endsolid Assimp_Pointcloud
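Reading the points back out of such a file can be sketched with plain C++. The snippet below is an illustrative stand-in, not Assimp's STL importer: it scans for "vertex" lines and collapses the triplicated vertices back into unique points.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct Point { float x, y, z; };

// Parse "vertex x y z" lines from an ASCII-STL point-cloud and collapse
// the triplicated vertices back into unique points.
std::vector<Point> readPointCloudStl(std::istream &in) {
    std::vector<Point> points;
    std::string token;
    while (in >> token) {
        if (token == "vertex") {
            Point p;
            in >> p.x >> p.y >> p.z;
            // each point was written three times; keep only the first copy
            if (points.empty() || points.back().x != p.x ||
                points.back().y != p.y || points.back().z != p.z) {
                points.push_back(p);
            }
        }
    }
    return points;
}
```

Collapsing consecutive duplicates only works because the exporter writes the three copies of each point back-to-back, as in the sample above.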

If you need any other way to define point-clouds, please use this post to give me feedback!

The next step will be the ability to import a point-cloud. So stay tuned…

Use the Asset-Importer-Lib Meta-Data-API right

The problem:

Think of the following situation: you want to import a model using Asset-Importer-Lib and store some values like the version of the current asset or the author / company. Or you want to manage your models in modules to use your SCM more efficiently, so you need to store grouping information. How can you do that using Asset-Importer-Lib?

The solution: The Metadata API:

Asset-Importer-Lib provides a meta-data API to offer a solution for these kinds of use-cases. It is straightforward to use:

// Allocate two entries
aiMetadata *data = aiMetadata::Alloc( 2 );
unsigned int index( 0 );
bool success( false );
const std::string key_int = "test_int";
// Store an int value
success = data->Set( index, key_int, 1 );

// Store a string value
index = 1;
const std::string key = "test";
success = data->Set( index, key, aiString( std::string( "test" ) ) );

// Deallocate the data afterwards
aiMetadata::Dealloc( data );

You can store an arbitrary number of items; the supported data types are bool, int32_t, uint64_t, float, double, aiString and aiVector3D.

The intermediate data-structure aiNode can store this data.
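To illustrate the idea behind this API, here is a minimal self-contained sketch of such a typed key-value store. This only models the concept; the real aiMetadata uses parallel key/value arrays with its own type enum, and you would read a value back via data->Get( key, value ).

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <variant>

// A simplified model of a metadata store: each key maps to exactly one
// value of one of the supported types.
using MetaValue = std::variant<bool, int32_t, uint64_t, float, double, std::string>;

struct MetaStore {
    std::map<std::string, MetaValue> entries;

    template <typename T>
    bool Set(const std::string &key, const T &value) {
        entries[key] = value;
        return true;
    }

    // Returns false if the key is missing or stored with a different type.
    template <typename T>
    bool Get(const std::string &key, T &out) const {
        auto it = entries.find(key);
        if (it == entries.end() || !std::holds_alternative<T>(it->second))
            return false;
        out = std::get<T>(it->second);
        return true;
    }
};
```

The type check on retrieval mirrors the behaviour of the real API: asking for a key with the wrong type fails instead of silently converting.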

A new concept for testing Asset-Importer-Lib

Our problem!

After struggling with bugs in Assimp for more than 10 years I have to accept the fact: I need to improve our unittest- and regression-test-strategy:

  • New patches are breaking older behaviour, and I don't recognize it until a new issue comes up, created by a frustrated user.
  • I do not have a clue about most of the importers. I didn't implement them and it is really hard to get into the code ( especially when there is no up-to-date spec ).
  • When you don't know the code of a buggy importer it is also hard to start a debugging session to trace issues down to their root cause.

Yes, these are all signs of legacy code :-)…

So I started to review our old testing-strategy:

  • We have some unittests, but most of the importers are not covered by these tests.
  • We have a great regression testsuite ( many Kudos to Alexander Gessler ). When running this app it imports all our models and generates an md5-checksum of their content. This checksum is saved in our git-repo. For each run the md5-checksum is calculated again and the testsuite checks it against the older value; a mismatch will cause a broken test. Unfortunately even a new line-break causes a different checksum, and the test will fail. And all you see is that a model is broken; you do not see which part of its importer or data. But some model-importers have more than 1000 lines of code …
  • This regression test suite was part of our CI-service. Without a successful regression-run it was not possible to merge new pull-requests into our master-branch. In other words: we were blocked!
  • We are doing manual tests …
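The checksum comparison at the heart of the regression suite can be sketched like this ( using std::hash as a stand-in for md5, which is not part of the C++ standard library ):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Compute a checksum over a textual dump of the imported scene.
size_t sceneChecksum(const std::string &sceneDump) {
    return std::hash<std::string>{}(sceneDump);
}

// Compare against the checksum stored in the repository. A mismatch marks
// the model as broken, but tells you nothing about which part changed.
bool regressionCheck(const std::string &sceneDump, size_t storedChecksum) {
    return sceneChecksum(sceneDump) == storedChecksum;
}
```

Even a single extra line-break in the dump changes the checksum, which is exactly the weakness described above.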

So I started to redesign this strategy. A lot of people are spending hours and hours to support us, and a lot of software is using Asset-Importer-Lib. So for me as a software-engineer it is my duty to guarantee that we are not breaking your code with every change we make.

So I will change some things …

The idea

  • Measuring our legacy: how much code is really under test-coverage? I started this last week and the result ( 17% ) is nothing I am proud of.
  • Add one unittest for each Importer / Exporter:
      • For this task I have added a new test-class called AbstractImportExportBase:
    class AbstractImportExportBase : public ::testing::Test {
    public:
        virtual ~AbstractImportExportBase();
        virtual bool importerTest() = 0;
    };

    All importer tests will be derived from this class. As you can see, the importerTest-method must be implemented for each new importer test. So you get a starting point when you want to look into a special importer issue: look for the right unittest. If you can find one, start your investigation. If not, build your own test fixture, derive it from AbstractImportExportBase and make sure that there is an importer test. At the moment this is just a simple Assimp::Importer::ReadFile()-call. But to make sure that the results will not change there is another new class.

  • Introducing SceneDiffer: this class is used to evaluate whether the result of an import-process matches the expected result. You can declare which results you are expecting, and in each unittest these are checked against the imported data. When something breaks you will recognize it by executing our unit-test-suite. And the best part: you will see which part of the model data has changed.
  • Use Static-Code-Analysis via Coverity: a cron-job will run a Coverity-Analysis once a week to make sure that new commits or pull-requests haven't introduced too many new issues.
  • Run the Regression-Test-Suite once a week: the same cron-job which triggers the Coverity-run will run the regression test suite. When some files generate different results or just crash, you can see it and investigate the model in our unittest-suite. The regression-test suite was moved into a separate repo to make it easier to deal with.
  • Run the Performance-Test-Suite once a week: I introduced a new repository with some bigger models. The same cron-job used for the static-code-analysis and the regression-test-suite will trigger this import as well. The idea is to measure the time it takes to import a really big file. When the time has increased after a week, someone introduced slow code ( for importer code, slow code is buggy code ) and we need to fix it.
  • Release every two weeks: in the last couple of months I got a lot of new issues which were reports of issues already solved on our current master branch. But the released version was outdated ( 2-3 months behind the latest master version ). To avoid this, a release every two weeks should help our users to keep up-to-date without getting all the issues of an experimental branch. The release process shall run automatically. Until now there is no Continuous-Delivery-Service to generate the source release, the binary releases for our common platforms and an installer for Windows. Especially the deliveries to the different platforms generated most of our new issues after releasing a new version. So doing this automatically will test our delivery-process as well.
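The SceneDiffer idea from the list above can be sketched as a field-by-field comparison that reports the first mismatching part of the data instead of a bare pass/fail. This is illustrative only; the real SceneDiffer works on complete aiScene structures:

```cpp
#include <cassert>
#include <cmath>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Compare expected and imported vertices; on mismatch, report which
// vertex differs instead of just failing.
bool diffVertices(const std::vector<Vec3> &expected,
                  const std::vector<Vec3> &imported,
                  std::string &report, float eps = 1e-6f) {
    if (expected.size() != imported.size()) {
        report = "vertex count differs";
        return false;
    }
    for (size_t i = 0; i < expected.size(); ++i) {
        if (std::fabs(expected[i].x - imported[i].x) > eps ||
            std::fabs(expected[i].y - imported[i].y) > eps ||
            std::fabs(expected[i].z - imported[i].z) > eps) {
            report = "vertex " + std::to_string(i) + " differs";
            return false;
        }
    }
    report.clear();
    return true;
}
```

With a report like this a broken unittest immediately points you at the part of the model data that changed.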

I already started to implement the unit-tests and the SceneDiffer-class. And I am using our unittests to reproduce new issues. When fixing them, the test reproducing the underlying issue is checked in as well.

Hopefully these things will help you Assimp-users to get a better User-Experience with Asset-Importer-Lib.

Feel free to give me any kind of feedback …

More Quality-Assurance on GitHub via SAAS

When you are working with your project on GitHub there are a lot of really handy services you can use. This kind of software-usage is called “Software-As-A-Service”. Why? You can use it via a nice Web-API without having all the maintenance-work.

For instance, when you want to use a Continuous-Integration-Service for your project you can set up a new PC and install Jenkins. Or you just use Travis on GitHub instead.

So I just started to use some more services on GitHub for my projects, especially for Asset-Importer-Lib ( and its dependencies ) of course:


Build Asset Importer Lib for 64bit with Visual Studio from source-repo

If you want to generate a 64bit-build for Asset-Importer-Lib by using the Visual Studio project files generated by CMake please follow these steps:

  1. Make sure that you are using a supported CMake ( 2.8 or higher at the moment ) and Visual-Studio version ( on the current master VS2010 is deprecated ).
  2. Clone the latest master of Asset-Importer-Lib from GitHub.
  3. Generate the project files with the command:

cmake -G "Visual Studio 14 Win64"

Open the generated solution, build the whole project and enjoy the 64-bit-version of your famous Asset-Importer-Lib. This should help you if you are struggling with this feature. We learned the hard way that just switching the code generation to 64bit does not work.
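Put together, the whole sequence might look like this ( the repository URL and generator string are the usual ones, but pick the generator matching your installed Visual Studio version ):

```shell
# clone the latest master and generate 64bit Visual Studio project files
git clone https://github.com/assimp/assimp.git
cd assimp
cmake -G "Visual Studio 14 Win64" .
# optionally build from the command line instead of opening the solution
cmake --build . --config Release
```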

Feel free to report any issues if you observed one.

Asset Importer Lib binaries of the latest build

If you are looking for the latest Asset Importer Lib build: we are using AppVeyor
( check their web-site, it's free for open-source projects )
as the Continuous-Integration service for Windows. If the build was successful it
will create an archive containing the DLLs, all executables and the export
libraries for Windows. At the moment we are supporting the following versions:
– Visual Studio 2015
– Visual Studio 2013
– Visual Studio 2012
I am planning to support the MinGW version as well. Unfortunately, first I have to
update one file which is much too long for the MinGW-compiler ( thanks to the
guys from the Qt-framework ).