A new concept for testing Asset-Importer-Lib


Our problem!

After struggling with bugs in assimp for more than 10 years I now have to accept the fact: I need to improve our unittests and our regression-test strategy:

  • New patches break older behaviour, and I do not recognize it until a frustrated user opens a new issue.
  • I do not have a clue about most of the importers. I didn't implement them, and it is really hard to get into the code ( especially when there is no up-to-date spec ).
  • Without knowing the code of a buggy importer it is also hard to start a debugging session to trace an issue down to its root cause.

Yes, these are all signs of legacy code :-)…

So I started to review our old testing-strategy:

  • We have some unittests, but most of the importers are not covered by these tests.
  • We have a great regression testsuite ( many kudos to Alexander Gessler ). When running this app it imports all our models and generates an MD5 checksum of their content. This checksum is saved in our git repo. On each run the checksum is calculated again and the testsuite checks it against the stored value; a mismatch causes a broken test. Unfortunately even a new line break causes a different checksum, so the test will fail. And all you see is that a model is broken. You do not see which part of the importer or of its data is affected. But some model importers have more than 1000 lines of code … A minimal sketch of this checksum idea follows this list.
  • This regression testsuite was part of our CI service. Without a successful regression run it was not possible to merge new pull requests into our master branch. In other words: we were blocked!
  • We are doing manual tests …
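
To make the checksum idea above concrete, here is a minimal sketch. computeSceneChecksum() is a hypothetical stand-in for the suite's real hashing of the imported data:

    #include <assimp/Importer.hpp>
    #include <assimp/scene.h>
    #include <string>

    // Hypothetical stand-in for the suite's real hashing of the imported data.
    std::string computeSceneChecksum( const aiScene *scene );

    bool regressionCheck( const char *modelPath, const std::string &storedChecksum ) {
        Assimp::Importer importer;
        const aiScene *scene = importer.ReadFile( modelPath, 0 );
        if ( nullptr == scene ) {
            // A failed import or a crash breaks the test as well.
            return false;
        }
        // Any change in the imported data - even one caused by a harmless new
        // line break in a text format - yields a different checksum.
        return computeSceneChecksum( scene ) == storedChecksum;
    }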

So I started to redesign this strategy. A lot of people spend hours and hours supporting us, and a lot of software uses Asset-Importer-Lib. So for me as a software engineer it is my duty to guarantee that we are not breaking all your code with every change we make.

So I will change some things …

The idea

  • Measure our legacy: how much code is really covered by tests? I started this last week, and the result ( 17% ) is nothing I am proud of.
  • Add one unittest for each Importer / Exporter:
      • For this task I have added a new test-class called AbstractImportExportBase:
    class AbstractImportExportBase : public ::testing::Test {
    public:
        virtual ~AbstractImportExportBase();

        // Each concrete importer test has to implement this method.
        virtual bool importerTest() = 0;
    };
    

    All importer tests will be derived from this class. As you can see, the importerTest method must be implemented for each new importer test. So we get a starting point when you want to look into a specific importer issue: look for the right unittest. If you can find one, start your investigation. If not, build your own test fixture, derive it from AbstractImportExportBase and make sure that there is an importer test. At the moment this is just a simple Assimp::Importer::ReadFile() call.
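
    A new importer test could then look like the following sketch ( a minimal example; the fixture name and the model path are illustrative, not the exact code in our repo ):

    #include <gtest/gtest.h>
    #include <assimp/Importer.hpp>
    #include <assimp/scene.h>

    // Example fixture for the Wavefront-OBJ importer, derived from the
    // AbstractImportExportBase class shown above.
    class utObjImportExport : public AbstractImportExportBase {
    public:
        virtual bool importerTest() {
            Assimp::Importer importer;
            // For now it is enough that the import succeeds at all.
            const aiScene *scene = importer.ReadFile( "models/OBJ/spider.obj", 0 );
            return nullptr != scene;
        }
    };

    TEST_F( utObjImportExport, importTest ) {
        EXPECT_TRUE( importerTest() );
    }

    But to make sure that the results will not change, there is another new class.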

  • Introduce SceneDiffer: this class is used to evaluate whether the result of an import process matches the expected result. You can declare which results you are expecting, and in each unittest the imported data will be checked against them. When something breaks, you will recognize it by executing our unit-test suite. And the best part: you will see which part of the model data has changed. A usage sketch follows this list.
  • Use Static-Code-Analysis via Coverity: a cron job will run a Coverity analysis once a week to make sure that new commits or pull requests haven't introduced too many new issues.
  • Run the Regression-Test-Suite once a week: the same cron job that triggers the Coverity run will also run the regression testsuite. When some files generate different results or simply crash, you can see it and investigate the model in our unittest suite. The regression testsuite was moved into a separate repo to make it easier to deal with.
  • Run the Performance-Test-Suite once a week: I introduced a new repository with some bigger models. The same cron job used for the static code analysis and the regression testsuite will trigger these imports as well. The idea is to measure the time needed to import a really big file. When that time has increased after a week, someone has introduced buggy code ( for importer code, slow code is buggy code ) and we need to fix it. A timing sketch follows this list.
  • Release every two weeks: in the last couple of months I got a lot of new issues which were reports of problems already solved on our current master branch. But the released version was outdated ( 2-3 months behind the latest master ). To avoid this, a release every two weeks could help our users to stay up-to-date without getting all the issues of an experimental branch. The release process shall run automatically. Until now there is no Continuous-Delivery service to generate the source release, the binary releases for our common platforms and some installers for Windows. Especially the deliveries to the different platforms generated most of our new issues after releasing a new version. So doing this automatically will test our delivery process as well.
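
Here is how a unittest could use the SceneDiffer. This is just a sketch under assumptions: the isEqual() / showReport() names and the createExpectedScene() helper are illustrations, not a documented API.

    #include <gtest/gtest.h>
    #include <assimp/Importer.hpp>
    #include <assimp/scene.h>
    #include "SceneDiffer.h"

    // Hypothetical helper: builds by hand the scene we expect the importer
    // to produce for the test model.
    aiScene *createExpectedScene();

    TEST( utSceneDifferExample, compareImportAgainstExpectedScene ) {
        aiScene *expected = createExpectedScene();

        Assimp::Importer importer;
        const aiScene *imported = importer.ReadFile( "models/OBJ/spider.obj", 0 );
        ASSERT_TRUE( nullptr != imported );

        // On a mismatch the differ reports which part of the scene data changed.
        SceneDiffer differ;
        EXPECT_TRUE( differ.isEqual( expected, imported ) );
        differ.showReport();
    }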
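
And the timing sketch for the performance idea: just measure how long one import of a big model takes, so that week-to-week comparisons show slowdowns. The model path is illustrative.

    #include <assimp/Importer.hpp>
    #include <assimp/scene.h>
    #include <chrono>
    #include <iostream>

    int main() {
        Assimp::Importer importer;

        const auto start = std::chrono::steady_clock::now();
        // The model path is illustrative; the real suite uses some bigger models.
        const aiScene *scene = importer.ReadFile( "models/big_scene.obj", 0 );
        const auto end = std::chrono::steady_clock::now();

        const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>( end - start ).count();
        std::cout << "Import " << ( nullptr != scene ? "succeeded" : "failed" )
                  << " after " << ms << " ms" << std::endl;
        return 0;
    }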

I have already started to implement the unit tests and the SceneDiffer class. And I am using our unittests to reproduce new issues: when fixing one, the test that reproduces the underlying issue gets checked in as well.

Hopefully these things will help you Assimp users get a better user experience with Asset-Importer-Lib.

Feel free to give me any kind of feedback …
