Getting Started with a Legacy Code Project


Day zero

Imagine the following situation: you are starting a new job and looking forward to your bright future. Of course you are planning to use the newest technologies and frameworks. And then you are allowed to take a first look at the source you have to work with. No tests, no spec that fits the source, of course no docs, but a lot of ( angry ) customers who are strongly coupled to this mess. And now you are allowed to work with this kind of … whatever.

We call this legacy code, and I guess this situation is a common one: most developers will face something like it at some point in their career. So what can we do to get out of this? I want to show you some basic techniques that will help you.

Accept the fact: this is the code to work on and it currently solves real problems!

No developer plans to create legacy code. There is always a reason, like: we needed to get into business or we would have failed. Or the old developers did not have enough resources to solve all upcoming issues or to develop automated tests. Ten years ago I faced this situation again and again: nobody wanted to write automated tests because it costs a lot of time, you need some experience in designing an architecture so that it is testable, and there were not many tools out there in those days.

The code is there for a reason and you need to accept this: this working legacy code is the reason you got the job. So even when it is hard, try to be polite when reading the code. Someone invested a lot of lifetime to keep it up and running. And hopefully that person is still in the company so you can ask them some questions.

You can kill them later ;-).

Check if there is any source control management

The first thing you should check is the existence of a source control management tool like Subversion, Git or Perforce. If there is none: get one, learn how to use it, and put all your legacy code into source control! Do it now, do not discuss. If any of the other developers are concerned about using one, install an SCM tool on your own developer PC and use it there. I promise: it will save your life some day. One colleague accidentally killed his project files after 6 weeks of work. He forgot the right name of his backup folder and removed the wrong one, the one containing the current source. He was trying to save disk space, even though already in those old days disk space was much cheaper than manpower.

To avoid errors like this: use an SCM tool.

Check in all your files!

Now that you have a working SCM tool, check whether all source files, scripts and Makefiles are checked in. If not: start doing this. The goal of this task is simply to get a reproducible build. Work on this until you are able to build from scratch after checking out your product. And when this works, write a small KickStarter doc describing how to build everything from scratch after a clean checkout. Of course this will not work in the beginning. Of course you will face a lot of issues like a broken build, wrong paths or a different environment. But this is also a sign of legacy code: non-reproducible builds. Normally not all related files, like special Makefiles, are checked in. Or sometimes the environment differs between the developer PCs. And this causes a lot of hard-to-reproduce issues.

Do you know the phrase “It worked on my machine!” after facing a new bug? Sometimes the developer was right: the issue was caused by different environments on the developer machines ( for instance a different compiler version, a different IDE, a different kernel, a different whatever … ).

When you have checked in all your files, try to ensure that everyone is using the same tools: the same compiler version, the same libs, the same IDE, and document this in your KickStarter doc. Then let the other folks try to work with this setup and fix all upcoming issues.

This can slow down ongoing development tasks. To avoid this you can learn how to work with branches in your SCM tool ( for instance this doc shows how to do branches in Git: https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging ).

Watch your logs in your unit tests!


The idea

Unit tests and integration tests are a great tool to keep you from breaking your code. They build a safety net that helps you when you have to add a new feature or fix a bug in an existing codebase.
But of course there will be situations when a bug occurs that was not covered by your test suite. One way to understand what went wrong is log files. You can use them to write a protocol of what happened during runtime. When something useful happened ( like creating a new entry in a database ), you can write this information with a timestamp into your protocol. When a bug occurs, like the disk being full, an error entry is created. And you can use the log to record some internal states of your application. When the log entries are well maintained, they help you get a better understanding of what happened during a crash ( and of course what went wrong before it ). And you can use them to post warnings like: be careful, this API call is deprecated.
But do you watch your logs in your unit tests and integration tests? Maybe there is interesting information stored in them for a test fixture that you should take care of as well. For instance, when you declare an API call as deprecated but this call is still in use in a different subsystem, it would be great to get this as an error in a unit test. The same goes for a warning that occurs at some point in your log. We observed stuff like that in production code more than once. To take care of these situations we added a piece of functionality called a log filter: you can use it to define expected log entries, like an error which must occur in a test because you want to test the error behaviour. When unexpected entries show up, the test will fail. So your tests show you what shall happen and what shall not.

Do some coding

Let’s start with a simple logging service for an application.
My basic concept for a logging service looks like this:
A logger is a kind of service, so only one instance exists. Normally I build a special kind of singleton to implement it ( yes I know, they are bad, shame on me ). You create it on startup, and you shall destroy it at the end of the application ( as the last call before exit( 0 ) ).
Log entries have different severities, for instance:
Debug: Internal state messages for debugging
Info: All useful messages
Warn: Warnings for the developer like “API-call deprecated”
Error: External errors like a full disk or a user error
Fatal: An internal error has occurred, caused by a bug
You can register log streams with the logger; each log stream writes the protocol to a particular output like a log file or a window in the frontend.
In code this could look like:

class AbstractLogStream {
public:
  virtual ~AbstractLogStream();
  virtual void write( const std::string &message ) = 0;
};

class LoggingService {
public:
  // the entry severity
  enum class Severity {
    Debug,
    Info,
    Warn,
    Error,
    Fatal
  };
  static LoggingService &create();
  static void destroy();
  static LoggingService &getInstance();
  // the service takes ownership of the registered stream
  void registerStream( AbstractLogStream *stream );
  void log( Severity sev, const std::string &message, 
    const std::string &file, unsigned int line );
  
private:
  static LoggingService *mInstance;
  LoggingService();
  ~LoggingService();
};

With this simple API you can create and destroy your log service, log messages of different severities and register your own log-streams.
Now we want to observer the entries during your unittests. A simple API to do this could look like:

class TestLogStream : public AbstractLogStream {
public:
  TestLogStream();
  ~TestLogStream();
  void write( const std::string &message ) override {
    TestLogFilter::getInstance().addLogEntry( message );
  }
};

class TestLogFilter {
public:
  static TestLogFilter &create();
  static void destroy();
  static TestLogFilter &getInstance();
  void addLogEntry( const std::string &message );
  void registerExpectedEntry( const std::string &message );
  bool hasUnexpectedLogs() const {
    for ( const auto &entry : m_messages ) {
      auto it = std::find( m_expectedMessages.begin(), m_expectedMessages.end(), entry );
      if ( it == m_expectedMessages.end() ) {
        return true;
      }
    }
    return false;
  }

private:
  TestLogFilter();
  ~TestLogFilter();

private:
  std::vector<std::string> m_expectedMessages;
  std::vector<std::string> m_messages;
};

The filter contains two string arrays:
One contains all expected entries, which are allowed for the unit test.
The other one contains all log entries that were written by the TestLogStream during the test fixture.
Let’s try it out

You need to set up your test filter before running your tests. You can use the registerExpectedEntry method to add an expected entry for your test execution.
Most unit test frameworks support some kind of setup callback mechanism that runs before executing a bundle of tests. I prefer to use gtest, so you can create this fixture class:

#include <gtest/gtest.h>

class MyTest : public ::testing::Test {
protected:
  virtual void SetUp() {
    LoggingService::create();
    TestLogFilter::create();
    LoggingService::getInstance().registerStream( new TestLogStream );
    TestLogFilter::getInstance().registerExpectedEntry( "Add your entry here!" );
  }

  virtual void TearDown() {
    EXPECT_FALSE( TestLogFilter::getInstance().hasUnexpectedLogs() );
    TestLogFilter::destroy();
    LoggingService::destroy();
  }
};

TEST_F( MyTest, do_a_test ) {
  ...
}

First you need to create the logging service. In this example only the TestLogStream will be registered. Afterwards we register one expected entry for the test fixture.
When a test has finished, the TearDown callback checks whether any unexpected log entries were written.
When unexpected entries are detected, the test is marked as a failure. This way you can see whether you forgot to deal with any new behaviour.
What to do next

You can add more useful stuff like:
Add wildcards for expected log entries
Make the filter thread-safe