By: Julian Smith, http://op59.net/, firstname.lastname@example.org.
[Note: this article was written in 2008]
Yabs is a build system which I've written, developed, and used over the last few years.
As the title of this article implies, there are many existing and well-established build systems, and some people may question the need for another one. But I have looked at various existing build systems, and so far I have not seen any that work in the way that I think they should.
The starting point of my wishes for a build system is that I don't want to have to use any limited scripting language or macro system. Building projects is always more complicated than one initially thinks, and what at first may look like a neat scripting or macro facility, invariably turns out to be woefully inadequate.
The upshot of this is that I want to write rules as functions, not as data.
This may be worth expanding on. I think people tend to shy away from writing code these days. The tendency seems to be to write data in such a way that it can be given to some pre-written code in order to accomplish a task. I think XML is an example of this. I've never understood what is so special about XML - it's a data format, nothing more, nothing less, yet everyone seems to be unaccountably excited by it. Ok, so you can write DTDs that ensure that some XML matches a certain prearranged template, but that's hardly rocket science, is it? (On the other hand, getting some of the XML libraries out there to do anything useful may require a brain the size of a planet, but that's hardly a recommendation...)
The problem with data is that it is not Turing-powerful - it's always possible for someone to come up with something that a given data format can't handle. In the context of build systems, this is a real problem, because build systems are often used to perform really hard tasks - handling hundreds or thousands of source files, running different sets of tools to compile/link source files, tracking hidden dependencies such as header files, running tools to generate source, building these tools in the first place, working on multiple very different platforms etc. In addition, build systems have to be 100% reliable, and they must not waste time - we're not all willing to put in a spot of juggling practise every time we run a build.
Put like that, writing a build system would seem to be asking for trouble. A look at most projects' build systems will bear this out - they are almost always a complete mess, poorly understood and unreliable (a measure of this is how soon after a build/run problem someone will do a clean build), and usually frustratingly slow.
Ok, so we've established that a build system has to do a very hard task. Why then do we often expect a project's build system to be constructed using cut-down macro languages, or with a few dialogue boxes?
In all other areas of computing, we solve hard problems by writing computer programmes. There is a reason for this - computer programming has been developed in order to solve hard problems. Thus we have different approaches to programming (object orientation, functional programming), a myriad ways of representing data (lists, sets, maps, in memory or on disc, etc) with different speed/space characteristics for different usage patterns, and a whole culture of experience in how to apply these tools to all manner of difficult problems.
So what are the implications for build systems?
I think the fundamental characteristic of a build system should be a dependency tree: Some things depend on other things. For example object files depend on C files - if a C file changes, then the equivalent object file must be regenerated. There are more subtle dependencies - object files also depend on header files, because if a header file is changed, then any object file whose C file used (even indirectly) that header file, must be regenerated.
Similarly, executables depend on object files and libraries.
More subtly, object files and executables depend on the actual compiler and linker commands that are used to generate them. I'll return to this point later on.
While dependency trees are usually used for low-level things like compilation in traditional systems, it's very common for higher-level things to depart from the dependency tree approach. Thus we may use a dependency tree to build executables, but switch to a separate plain script to build a release candidate with resource files etc. I think this is a mistake - it means that we now have a second system that runs commands and creates output files for the user, which doesn't benefit from the dependency approach to avoid doing work that is not needed, and probably won't have as flexible a system for controlling diagnostics.
Dependency trees are pretty simple things. It's not hard to get them right, so it's actually easy to ensure that we, for example, a) always recompile a file when we need to and b) never recompile a file when we don't need to.
Getting this right means that, at a stroke, we've removed a whole set of problems that plague day-to-day computer programming - we never waste time waiting for a build system to do stuff that doesn't need doing (e.g. recompile something that doesn't need recompiling), and we never have to worry about whether a problem has been caused by the build system failing to recompile something which it should have recompiled (or, worse, worry about whether we need to run the appalling autotools system's configure step again because we may have changed something critical).
So, dependency trees are the heart of build systems. What is the best way to implement a build system's dependency tree?
In Make, one specifies dependencies with rules, e.g.:
foo.o: foo.c
	gcc -o $@ $<
As well as saying that foo.o depends on foo.c, this rule also tells Make how to generate foo.o. As it stands, this rule won't work 100% reliably: if foo.c #includes a header file bar.h, and bar.h is modified, Make will not know to regenerate foo.o. There are a whole host of techniques that allow one to get Make and Gcc to conspire together to generate this sort of extra dependency, but it's not easy to do, and rather messes up the nice simple Makefile syntax - an example of things looking really neat in the simple cases, but of the system not being powerful enough to handle real-world problems. (Make 3.80 introduced a new extension to the macro system that improved things, but it is still practically impossible to get things to work reliably.)
I think this basic approach is a good one - specify dependencies, along with the commands that are used to regenerate dependent files. In addition, Make has the concept of phony targets, which are useful for things that are named but don't actually exist as files, such as tests or releases.
We can generalise the above build-system rule into a function. Yabs rules are functions which take a target, and return the command that is needed to regenerate the target, plus a list of the things that the target depends on. Thus:
def compile_rule( target):
    if target=='foo.o':
        return 'gcc -o foo.o foo.c', 'foo.c'
    return None
The nice thing here is that the rule is bog-standard code. If
more complicated behaviour is required, we can do it just by
writing some more code. E.g. if we want to use different compiler
flags for different object files, and compile any C file, not just foo.c:

def compile_rule( target):
    if target.endswith( '.debug.o'):
        src = target[:-8] + '.c'
        return 'gcc -g -o ' + target + ' ' + src, src
    elif target.endswith( '.release.o'):
        src = target[:-10] + '.c'
        return 'gcc -O2 -o ' + target + ' ' + src, src
    return None
This example is a little crude, but remember that we're doing computer programming here - we can easily write some utility functions that enable the above to be written more elegantly.
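For instance, a small helper along these lines (hypothetical, not part of Yabs itself) factors out the common pattern of mapping a target suffix to a source suffix and a command:

```python
def suffix_rule(target, target_suffix, src_suffix, flags):
    # Return (command, source) if <target> ends with <target_suffix>,
    # else None. Purely illustrative naming; not a Yabs API.
    if not target.endswith(target_suffix):
        return None
    src = target[:-len(target_suffix)] + src_suffix
    return 'gcc ' + flags + ' -o ' + target + ' ' + src, src

def compile_rule(target):
    # Try each variant in turn; because this is plain code, adding
    # another variant is a one-line change.
    return (suffix_rule(target, '.debug.o', '.c', '-g')
            or suffix_rule(target, '.release.o', '.c', '-O2'))
```

Because rules are ordinary functions, this kind of refactoring is just normal programming, with no build-system-specific machinery involved.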
The crucial thing is not to be distracted into hiding away from having to write code. By all means write a set of functions that allow a Make-style specification of rules (in fact, Yabs has exactly this, in the form of yabs2.add_patternrule()), but don't preclude the user from being able to get back to the metal and write rules as raw code.
So, the general approach should be to provide functions that enable rules to be written more easily.
We could go further into how I think build systems should be designed, but instead I think it's probably easier to simply describe how Yabs works, pointing out how it solves various issues along the way.
Yabs is simply a Python module. A build system that uses Yabs is a Python programme that happens to make use of the functions and classes that the Yabs module provides. The user does a build by running this programme. Yabs provides functions that can take care of parsing command-line parameters so, apart from defining and registering rules, the programme simply has to call yabs.make().
The way things work is that the build system defines various rules as python functions, and registers them with Yabs. When a target is to be built, yabs.make() is called with the name of the target. yabs.make() calls each rule in turn until, instead of returning None, one of them returns a command that will build the target plus a list of prerequisites - files that need to be present before the command is run. E.g. if the target is an executable, the command will call the linker, and the prerequisites will be a list of object files.

yabs.make() will then call itself recursively for each prerequisite. Eventually, we end up with targets, such as human-written source files, that don't have any prerequisites, so the recursion ends. Yabs defines a conventional rule for these hand-written files - the rule returns an empty command and no prerequisites if the target already exists as a file.
For each target, once it has built all the required prerequisites, yabs.make() looks at the datestamps of the prerequisites, and compares them with the datestamp of the target. If the target doesn't exist or is older than any of the prerequisites, Yabs runs the command that promises to generate the target.
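The algorithm just described can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the real yabs.make() (which, among other things, reports failures via an error object rather than raising an exception):

```python
import os

def make(target, rules):
    # Minimal sketch of the algorithm described above - not the real
    # yabs.make(). Try each rule in turn until one returns a
    # (command, prerequisites) pair instead of None.
    for rule in rules:
        result = rule(target)
        if result is not None:
            command, prerequisites = result
            break
    else:
        # Conventional rule for hand-written files: an existing file
        # with no matching rule needs no command and no prerequisites.
        if os.path.isfile(target):
            return
        raise Exception('no rule to make target: ' + target)
    if isinstance(prerequisites, str):
        prerequisites = [prerequisites]
    for p in prerequisites:
        make(p, rules)  # recurse: build each prerequisite first
    # Run the command only if the target is missing or out of date.
    if (not os.path.exists(target)
            or any(os.path.getmtime(p) > os.path.getmtime(target)
                   for p in prerequisites)):
        if command:
            os.system(command)  # real Yabs captures and reports errors
```

The recursion bottoms out at source files, and the datestamp comparison ensures a command only runs when its target is genuinely out of date.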
There are various things that can go wrong here. A command could fail (e.g. a compiler error), or maybe no rule will be able to build a particular target. yabs.make() returns an appropriate error object in these cases, which contains full details about the failure.
Hopefully it will be clear that there is nothing particularly complicated going on here, but the design has a number of useful properties, and Yabs has a few extra tricks up its sleeve:
Yabs comes with an LD_PRELOAD library, which automatically gathers information about hidden dependencies - files which the command opens but which may not have been specified explicitly on the command line. This information is used in later builds, forcing rebuilds if these files are changed. For example, this system can be used to detect header files that are directly or indirectly #included by C files.
Interestingly, auto dependencies are implemented simply as internal conventional rules, which look at the contents of the generated dependency files to get a list of prerequisites.
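For illustration, here is one way such a conventional auto-dependency rule could look, assuming the compiler is run with gcc's -MMD option so that building foo.o also writes foo.d listing every #included header. The file naming and details here are assumptions for the sketch, not Yabs's actual conventions:

```python
import os

def extra_prerequisites(target):
    # Return the prerequisites recorded in <target>'s .d file by a
    # previous build, or [] if no such file exists yet.
    depfile = os.path.splitext(target)[0] + '.d'
    if not os.path.isfile(depfile):
        return []
    text = open(depfile).read().replace('\\\n', ' ')
    # The format is 'foo.o: foo.c bar.h ...'; everything after the
    # colon is a prerequisite.
    return text.split(':', 1)[1].split()

def compile_rule(target):
    if not target.endswith('.o'):
        return None
    src = target[:-2] + '.c'
    # -MMD makes gcc write the dependency file as a side effect, so the
    # next build of this target knows about the headers it used.
    return ('gcc -MMD -c -o ' + target + ' ' + src,
            [src] + extra_prerequisites(target))
```

On the first build the dependency file doesn't exist, which is safe: the target doesn't exist either, so it gets built regardless, and subsequent builds see the full header list.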
The command used to build a target foo can be written to a file called foo.autocmds. Later builds will regenerate foo if the rule returns a different command from that in foo.autocmds, even if no prerequisite is newer than foo. This sort of thing is vital if a build system is to be reliable. For example, if one decides to change to a different level of optimisation, gcc -O3 instead of gcc -O2, the build system must rebuild all object files, otherwise a clean build will give different results.
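A minimal sketch of this command-tracking idea (the file naming and details are assumed here, not taken from Yabs's actual implementation):

```python
import os

def command_changed(target, command):
    # Return True if <command> differs from the one recorded when
    # <target> was last built; record the new command for next time.
    # (A real implementation would record the command only after it
    # has run successfully.)
    autocmds = target + '.autocmds'
    old = open(autocmds).read() if os.path.isfile(autocmds) else None
    if old == command:
        return False
    open(autocmds, 'w').write(command)
    return True
```

The build step then becomes "rebuild if any prerequisite is newer, or if command_changed() says so", which is what makes an optimisation-flag change propagate correctly.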
Because rules are functions, it is easy to write code that allows the user to specify rules in a different way from actually writing them directly. For example, yabs2.add_patternrule() allows a Make-style way of specifying rules:

yabs2.add_patternrule( '%.o', '%1.c', 'gcc -o $@ $<')
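To see how such a Make-style layer can sit on top of function rules, here is a sketch in the spirit of yabs2.add_patternrule() - illustrative only, not Yabs's actual code:

```python
rules = []  # the registry of rule functions

def add_patternrule(target_pattern, src_pattern, command):
    # Convert a Make-style pattern like '%.o', '%1.c', 'gcc -o $@ $<'
    # into an ordinary rule function and register it.
    def rule(target):
        prefix, suffix = target_pattern.split('%')
        if not (target.startswith(prefix) and target.endswith(suffix)):
            return None
        stem = target[len(prefix):len(target) - len(suffix)]
        src = src_pattern.replace('%1', stem)
        # Expand Make's automatic variables into the concrete command.
        return command.replace('$@', target).replace('$<', src), src
    rules.append(rule)
```

The convenience layer is just a function that writes functions; users who need more than the pattern syntax can still register a hand-written rule alongside.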
The yabs3 module provides a particular way of specifying executables and libraries, by taking lists of source files, and generating and registering Yabs rules which build executables or libraries from these source files. In addition, yabs3 can build debug or release builds of these executables or libraries, and also allows simple control over compiler/linker flags such as -fPIC, all with 100%-reliable handling of header file dependencies.
Speed is important for a build system - it's incredibly frustrating to have to wait for ages before anything happens.
Despite Python not being the speediest language around, Yabs is fast, because it only looks at things that it needs to look at in order to be correct. So, if you ask it to build foo.exe, it will only look at the datestamps of files that are used to build foo.exe, and not waste time worrying about hidden dependencies for other targets (as Make does). As usual with efficiency, getting the algorithm right is more important than using an inherently fast language.
One thing to bear in mind whenever discussing build system efficiency is that, in order to reliably build a particular target, the build system has to look at the datestamp of every single human-written file that is used to build the target. If it doesn't do this, it is entirely likely that the target will not be rebuilt correctly.
Surprisingly, even though this often means that hundreds if not thousands of files' datestamps have to be read from disc, this takes much less time than one might imagine. Most of the time, the instinct to short-cut such exhaustive checking (e.g. using small per-directory makefiles) is unfounded. For more about this, see the classic paper Recursive Make Considered Harmful.
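It is easy to convince oneself of this with a few lines of Python that time os.stat() over many files; on a warm filesystem cache, thousands of stats typically complete in a small fraction of a second:

```python
import os
import time

def time_stats(paths):
    # Return the wall-clock seconds taken to stat every path in <paths>.
    t0 = time.time()
    for p in paths:
        try:
            os.stat(p)
        except OSError:
            pass  # a missing file costs roughly the same as a present one
    return time.time() - t0
```

Exact numbers depend on the machine and filesystem, but the point stands: the per-file cost of a datestamp check is tiny compared with the cost of a single unnecessary compile.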
Yabs has a facility to mark everything within particular directories as being old, so that it never checks the actual filesystem. E.g. specifying '/usr/include/' will stop Yabs from ever looking at the datestamps of system header files, which is usually safe because these files never change. This saves a certain amount of time, but the saving is smaller than one might expect.
Yabs has no internal state that affects how things are built. If you ask it to build foo.exe, the information about how foo.exe is built is implied by the name foo.exe and the rules that are available. For example, there is no internal "we are doing a debug build" flag.

An example of this is the yabs3 module, which encodes debug/release information in filenames.
This may seem odd. Sure, the filenames get a little unwieldy, but there's method to this madness. Encoding information into the filenames like this means that all information is explicitly available, so one can ask Yabs to build an intermediate target directly, and know that it will be built in the exact same way as when it is an intermediate target on the way to building something larger.
The lack of internal state also means that one can mix debug and release builds. E.g. if an executable is built from foo.c and bar.c, but we need to always use a debug build of bar.c, one could do:

yabs3.add_exe( 'myapp', 'foo.c bar.c,gcc,debug.o')

- i.e. specify a particular bar.o file which is used in both debug and release builds of myapp. This file will be shared between the two builds - doing a release build of myapp followed by a debug build of myapp will only rebuild what differs between the two.
In general, this means that different configurations of top-level targets can reuse intermediate targets, resulting in no unnecessary compilation or linking. Another example of where this can be useful is when the build system has to build tools which are then used to generate/process source files prior to compilation. There is usually no need to make Debug and Release builds use different builds of these tools, so a global Debug/Release state variable is not helpful.
The dependency tree idea can be taken a little further. I think that tests should also be fitted into the dependency tree framework. Tests generally require various test programmes to be built and run with various parameters. There is nothing about these test programmes that makes them different from programmes that are generated outside of the testing framework. Often various individual tests are grouped together, e.g. into precommit tests. Again, one can make a precommit test an item in the dependency tree, such that it depends on each of the individual tests. So running one particular test will build only the files necessary for that test, which can often save lots of time if one is only working in a small area of code.
Similarly, a customer-release of a project depends on various separate executables and generated files being packaged together. If we make a release be another standard item in the dependency tree, we again get all the benefits of being guaranteed correctness while never spending time building anything unnecessary.
To sum up, we can easily propagate all the speed and correctness benefits of the build system into the areas of building releases and running tests.
One thing that has always intrigued me is the way that people talk about configuration separately from building. I think there is a myth that they are distinct things; the world is full of premature categorisation, and this is just another example of it. The truth is that all existing build systems already do some amount of configuration when deciding what source files to compile and link, or whether to make a release or debug build.
There are higher-level configuration decisions, such as whether to build for Windows or for Linux, but separating these into a separate configuration system means that we immediately lose some of the advantages that a simple dependency-tree system gives us. For example, some files (e.g. help files) may be identical in Windows and Linux releases, and we don't want to generate them twice, because this could be slow. Similarly, sometimes we may want to use debug builds of certain C files in releases (e.g. because of a compiler bug); it is not possible for a separate configuration system to specify things to an underlying build system in this amount of detail.