Development process in Coherent Labs

by Stoyan Nikolov, June 29, 2018


At Coherent Labs, we build technologies and tools for one of the most critical audiences in the IT industry – game developers. Nothing but the best quality software will do for them.

Software quality has many dimensions. In this blog post, I’ll mostly focus on how we ensure user-facing quality – the fact that the software must work correctly and perform adequately (in our case, it should be very fast). Other dimensions of quality include modifiability, testability, and scalability, which are ultimately reflected in the user-facing quality attributes.

Development process

To ensure the high quality standard we strive for, we employ a mix of human effort, tools & automation. Developing a new feature involves multiple steps before it finally lands in the hands of our users.

For each feature:

• The developer(s) extract all the requirements from the major stakeholders. In our case, these can include the Product Manager, Software Architect, CTO, Team lead & Design team.

• The developer creates a Design document – this is a high-level document that should answer important questions such as:

– What should I do?

– How am I going to do it (technically)?

– What risks do I see?

– How will I test the new feature?

– How much time do I believe it’ll take?

• The design document is discussed with anyone who can contribute to the project – in most cases, the stakeholders and other developers. Design docs allow us to significantly reduce the risk of developing new features and make sure that all members of the team are on the same page. A design doc also has the benefit of remaining as a long-term artifact that reminds us how a certain feature was designed in the beginning and what decisions were made. That can be invaluable down the road if the feature has to be modified or faults are found and the developer who created it can’t remember all the details. In my experience, I keep such decisions in my head for at most 1-3 weeks; after that I either have to rewind everything in my head (a long and tough process) or look at some notes.

• After the design is approved by the team, the developer writes the actual code. Now he/she knows exactly what needs to be done and how, so the actual code-writing is expected to be easy and frictionless.

• When the feature is ready, the developer creates multiple tests – both for the current feature and automated regression tests. The automation infrastructure is explained later in the post.

• When all tests pass, the developer moves the feature to the Code review stage. Each new piece of code has to get two “Ship It” approvals from other devs. We are very democratic at this stage – every developer can review whatever he/she wants. Usually, the feature’s creator adds to the review at least one team member who is familiar with the sub-system. The Code review stage is important not only because it improves the quality of the features, but also because it gives much better visibility to all team members. We use Reviewboard.

• After the feature has received the needed “Ship It” approvals, it is shown to QA. This allows the QA team to know what to expect, how to test it, and to make sure the feature is aligned with the requirements.

• At this stage, the feature is merged back into the main branch. QA always works off the main branch.

The steps above illustrate the most important “human” part of the process. To facilitate it and make development faster, we have introduced numerous tools in the pipeline.


Automation tools

Build automation

Our products currently support at least a dozen platforms and game engines. This means that it’s infeasible to require every developer to build & run his/her code on each one of them. We employ build machines that continuously compile & test the projects across multiple platforms. The coordination is performed with BuildBot and hand-written scripts. Developers can leverage the infrastructure by running builds and tests on their feature branches; a minimal sketch of the scheduler setup follows the list below.

Build machines run 3 distinct types of builds for the products:

• Smoke builds – run after every commit and make sure the product builds and passes the most important tests. Smokes have to be fast, so they only run the most basic sanity checks.

• Nightly builds – run every night and cover all platforms and tests, including performance regression checks.

• Release builds – run when we tag a version for release; essentially the same as a nightly build, but stamped with a release version.
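
The post doesn’t show our actual BuildBot configuration, but a rough sketch of how such schedulers could be wired up in a master.cfg excerpt might look like this (the builder names, branch and timings are made up for illustration):

    # master.cfg excerpt (hypothetical builder names and timings)
    from buildbot.plugins import schedulers, util

    c = BuildmasterConfig = {}

    c['schedulers'] = [
        # Smoke: triggered by every commit, runs only the fast sanity checks
        schedulers.SingleBranchScheduler(
            name="smoke",
            change_filter=util.ChangeFilter(branch="master"),
            treeStableTimer=60,
            builderNames=["smoke-win64", "smoke-linux64"],
        ),
        # Nightly: the full build & test matrix across all platforms,
        # including the performance regression checks
        schedulers.Nightly(
            name="nightly",
            branch="master",
            hour=2, minute=0,
            builderNames=["nightly-all-platforms"],
        ),
        # Force scheduler so developers can run builds and tests on their feature branches
        schedulers.ForceScheduler(
            name="feature-branch",
            builderNames=["smoke-win64", "smoke-linux64", "nightly-all-platforms"],
        ),
    ]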

Test automation

We run different types of tests that cover the functionality of our products:

• Unit tests – multiple internal systems are covered with unit tests to improve their reliability. We use Google Test as a framework for our tests.

• Layout tests – these are more high-level tests that run in an application that internally uses Hummingbird/GT. The tests usually produce a screenshot that is compared to a committed golden master (a minimal sketch of such a comparison is shown after this list). Many tests also include JavaScript logic through the mocha/chai frameworks, which allows us to do more precise checks on specific values – for instance, that a certain animation has reached a certain point after 0.5s.

• Graphics tests – Our rendering library Renoir is extensively tested as a standalone package. It powers all our products and it’s important to make sure it always works reliably in isolation.

• Performance tests – we have a custom tool that runs a test suite on game consoles and checks the performance. If any profiled function in any test has become slower, the build fails (a simplified version of this check is sketched after the list). This is an extremely important aspect, because seemingly subtle changes can have a negative impact on the performance of the software, and the tool makes sure we always improve things. Historic results are kept in a database and allow us to plot changes over longer periods of time.
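
The comparison tooling behind the layout tests is internal, but the golden-master idea itself can be illustrated with a small, purely hypothetical Pillow-based check (the tolerance parameter and file layout are assumptions, not our actual implementation):

    from PIL import Image, ImageChops

    def matches_golden(screenshot_path, golden_path, tolerance=0):
        """Return True if the rendered screenshot matches the committed golden master."""
        actual = Image.open(screenshot_path).convert("RGB")
        golden = Image.open(golden_path).convert("RGB")
        if actual.size != golden.size:
            return False
        diff = ImageChops.difference(actual, golden)
        if diff.getbbox() is None:          # images are pixel-identical
            return True
        # Otherwise allow small per-channel deviations, e.g. from anti-aliasing.
        max_delta = max(hi for _, hi in diff.getextrema())
        return max_delta <= tolerance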
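
The performance tool itself is proprietary, but the regression check it performs can be sketched roughly as follows (the SQLite schema, the 2% noise tolerance and the function names are assumptions for illustration):

    import sqlite3

    TOLERANCE = 1.02  # assumed: allow ~2% measurement noise before calling it a regression

    def check_regressions(db_path, build_id, current_timings):
        """Compare this build's timings against the last recorded ones, then store them."""
        # Assumes a table: timings(build_id INTEGER, function TEXT, elapsed_ms REAL)
        conn = sqlite3.connect(db_path)
        regressions = []
        for function, elapsed_ms in current_timings.items():
            row = conn.execute(
                "SELECT elapsed_ms FROM timings WHERE function = ? "
                "ORDER BY build_id DESC LIMIT 1",
                (function,),
            ).fetchone()
            if row is not None and elapsed_ms > row[0] * TOLERANCE:
                regressions.append((function, row[0], elapsed_ms))
            conn.execute(
                "INSERT INTO timings (build_id, function, elapsed_ms) VALUES (?, ?, ?)",
                (build_id, function, elapsed_ms),
            )
        conn.commit()
        conn.close()
        return regressions  # a non-empty list means the build should fail

    # Example: check_regressions("perf.db", 1234, {"Paint": 3.1, "Layout": 1.8})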

Additional tooling

Along with tests, we have an additional set of tools that are currently run by hand by QA or developers.

• Code coverage – a tool is used that calculates the C++ code coverage of our tests (one possible workflow is sketched after this list). When a developer writes a new feature and tests for it, she can check that she has covered all the new code. QA also routinely runs coverage of all tests and looks for portions of the source that are not exercised.

• Fuzzing – we have a tool that continuously generates fuzzy tests and runs our products on them. In our case, the fuzzer generates random HTML pages. The tool then runs Hummingbird on the random page and monitors it for asserts/crashes. If one is detected, the HTML page, log file, and a complete memory dump are recorded for inspection by a developer (the outer loop of such a fuzzer is sketched after this list). The fuzzer has been very successful in finding defects that are very difficult to encode in a test by hand – for instance, very large and somewhat chaotic mutations of the DOM. We use the same tool to also generate and test our declarative data binding system.

• Static analysis – we periodically run static code analysis tools on the software. So far we’ve had some minor bugs detected by them, but nothing really serious. I believe that is due to the strict development process we enforce.
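
The post doesn’t name the coverage tool we use; as an assumption, a gcov/lcov based workflow for a C++ test suite could look roughly like this:

    import subprocess

    # Assumes the C++ tests were built with gcov instrumentation (--coverage) and already ran.
    subprocess.run(["lcov", "--capture", "--directory", "build",
                    "--output-file", "coverage.info"], check=True)

    # Keep only our own sources, dropping system headers and third-party code.
    subprocess.run(["lcov", "--extract", "coverage.info", "*/src/*",
                    "--output-file", "coverage.filtered.info"], check=True)

    # Produce an HTML report that developers and QA can browse for unexercised code.
    subprocess.run(["genhtml", "coverage.filtered.info",
                    "--output-directory", "coverage_report"], check=True)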
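
The fuzzer mentioned above is an internal tool, but its outer loop is simple to sketch: generate a random HTML page, run the product on it, and archive the inputs when the run ends abnormally. The runner binary name and directory layout below are hypothetical, and memory dump collection is omitted because it is platform-specific:

    import random, shutil, subprocess, time
    from pathlib import Path

    TAGS = ["div", "span", "p", "ul", "li", "table", "tr", "td"]

    def random_html(depth=0, max_depth=6):
        """Build a random, possibly pathological, HTML fragment."""
        if depth >= max_depth or random.random() < 0.3:
            return "x" * random.randint(0, 64)
        children = "".join(random_html(depth + 1, max_depth)
                           for _ in range(random.randint(0, 5)))
        tag = random.choice(TAGS)
        return f"<{tag}>{children}</{tag}>"

    def fuzz_once(runner_exe="hummingbird_test_runner", out_dir="fuzz_crashes"):
        page = Path("fuzz_case.html")
        page.write_text(f"<html><body>{random_html()}</body></html>")
        # Run the product on the generated page; a non-zero exit code signals an assert/crash.
        result = subprocess.run([runner_exe, str(page)], capture_output=True, timeout=30)
        if result.returncode != 0:
            case_dir = Path(out_dir) / f"crash_{int(time.time())}"
            case_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy(page, case_dir / "page.html")
            (case_dir / "log.txt").write_bytes(result.stdout + result.stderr)

    while True:
        fuzz_once()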

Future developments

The next iteration of our internal tools will see better integration between the code coverage and fuzzing tools. So far, the fuzz tests are completely random – this is not ideal, because they tend to exercise the same code paths very often, which makes catching a bug quite unlikely. We plan to try some genetic programming techniques to make the fuzzer smarter and guide it with the code coverage tool. Genetic algorithms have been shown to dramatically improve the effectiveness of fuzzing, and we are eager to try them for ourselves.
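
A rough sketch of the direction we have in mind, with the coverage-feedback loop reduced to its essentials (run_with_coverage and mutate stand in for the real tooling and are purely hypothetical):

    import random

    def coverage_guided_fuzz(run_with_coverage, mutate, seed_pages, iterations=10000):
        """Evolve a corpus of pages, keeping only mutations that reach new code."""
        corpus = list(seed_pages)
        seen = set()
        for page in corpus:
            seen |= run_with_coverage(page)        # set of covered lines/branches
        for _ in range(iterations):
            candidate = mutate(random.choice(corpus))
            covered = run_with_coverage(candidate)
            if covered - seen:                     # new coverage: keep for further mutation
                corpus.append(candidate)
                seen |= covered
        return corpus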
