Making Haskell as fast as C: Imperative programming in Haskell

Posted on July 30, 2018

By Henri Verroken

Implementing efficient and fast data structures in Haskell is not straightforward. A functional implementation of an abstract data type is often orders of magnitude slower than an imperative solution that provides the same functionality. This blog post compares several implementations of a concrete and relatively simple abstract data type in terms of execution time. Ultimately, we develop an imperative data structure using native Haskell code that is as fast as its C equivalent.

We will consider a simple abstract data type: a set of integers with two operations:

  1. add: Add an integer to the set, ignoring duplicates as is expected from a regular set.
  2. check: Check whether a given integer is in the set or not, returning a Boolean value.

For brevity, we have not included an operation that removes an integer from the set; the reader can easily add it to any of the proposed solutions.
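As a minimal sketch, this interface can be captured by a record of operations in IO, so that both persistent and mutable implementations fit behind it (the record and its field names are illustrative, not taken from the post's actual code):

    -- Illustrative interface for the integer set. IO accommodates both
    -- persistent and mutable implementations.
    data IntegerSet s = IntegerSet
        { newSet :: Int -> IO s          -- create a set for a given range
        , add    :: s -> Int -> IO s     -- insert, ignoring duplicates
        , check  :: s -> Int -> IO Bool  -- membership test
        }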

We will furthermore assume that the integer set has to scale to a large predefined range of values (e.g. 5 to 15 million) and that the set will be very densely populated.

We will compare several implementations by leveraging the excellent criterion benchmark library. First, we will consider several naive implementations which use off-the-shelf Haskell data structures. Then we will explore the performance of a C implementation using Haskell’s foreign function interface. Lastly, we will try to match the speed of C by porting the imperative C data structure to Haskell. All code is available on GitHub.

Defining the benchmark

As previously discussed, we will consider a densely populated integer set with addition and lookup operations over a certain range. To adequately benchmark all data structures, we sequentially insert and query a predefined array of numbers [1].

The code below shows the general structure of a benchmark for a specific data structure. Generating the list of integers is not included in the benchmark. We have executed the benchmark and plotted the results for various ranges (full code).
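A sketch of that structure, assuming the illustrative IntegerSet record from above and criterion's bench and whnfIO (the wiring here is hypothetical, not the post's exact benchmark code):

    import Control.Monad (foldM, forM_)
    import Criterion.Main (Benchmark, bench, whnfIO)

    -- One benchmark run: sequentially insert a pre-generated list of
    -- numbers into a fresh set, then query each of them.
    benchSet :: IntegerSet s -> Int -> [Int] -> Benchmark
    benchSet ops range xs = bench ("range " ++ show range) $ whnfIO $ do
        s0 <- newSet ops range        -- list generation is not measured
        s  <- foldM (add ops) s0 xs   -- add phase
        forM_ xs (check ops s)        -- check phase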

Off-the-shelf Haskell data structures

The Haskell library ecosystem contains many set-like data structures. The Data.Set module exposes Set, which is based on balanced binary search trees [2], while the Data.HashSet module implements HashSet, which uses a data structure called a hash array mapped trie or HAMT. The containers package even includes the specialized Data.IntSet module, developed especially for storing dense integer sets. Its implementation is based on Patricia tries, a variant of binary tries [3].

All of these data structures are persistent, each with its own performance characteristics. A detailed overview of these data structures would lead us too far – we refer to the documentation and the provided resources – but we can easily check their performance using our benchmark.
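As an example, Data.IntSet can be slotted into the illustrative interface from above like this (a sketch; IS.insert and IS.member are the module's real functions):

    import qualified Data.IntSet as IS

    -- Data.IntSet is persistent, so 'add' simply returns the updated set.
    intSetOps :: IntegerSet IS.IntSet
    intSetOps = IntegerSet
        { newSet = \_range -> return IS.empty  -- the range is irrelevant here
        , add    = \s n -> return (IS.insert n s)
        , check  = \s n -> return (IS.member n s)
        }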

We can see that execution times are acceptable for small set sizes. However, for large set sizes, building and accessing the set becomes unacceptably slow.

Bit vectors in C

The desired abstract data type immediately hints at a very simple solution. Since we know the integer set will be very densely populated, and the range of integers is predefined, we can simply use a bit vector, keeping one bit for each possible number in the set. This way, a set covering a range of 10 million numbers requires only 10 million bits, or about 1.2 MB.

Implementing such a set in C is rather straightforward and requires only a minimal amount of code [4] (full code).
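A sketch of such an implementation (struct and function names are illustrative, and the out-of-range handling of the full version is omitted):

    #include <stdint.h>
    #include <stdlib.h>

    /* A bit vector with one bit per possible value in [0, range). */
    typedef struct {
        uint64_t *words;
    } set_t;

    set_t *set_create(int range) {
        set_t *set = malloc(sizeof *set);
        set->words = calloc((range + 63) / 64, sizeof(uint64_t));
        return set;
    }

    void set_add(set_t *set, int n) {
        set->words[n / 64] |= (uint64_t)1 << (n % 64);  /* set bit n */
    }

    int set_check(set_t *set, int n) {
        return (set->words[n / 64] >> (n % 64)) & 1;    /* test bit n */
    }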

Haskell has an excellent foreign function interface which allows us to compile and call C functions with minimal overhead (full code).
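The bindings could look roughly like this, matching the illustrative C names from the sketch above:

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.Ptr (Ptr)
    import Foreign.C.Types (CInt(..))

    -- Opaque handle to the C-side set structure.
    data CSet

    foreign import ccall unsafe "set_create"
        cSetCreate :: CInt -> IO (Ptr CSet)

    foreign import ccall unsafe "set_add"
        cSetAdd :: Ptr CSet -> CInt -> IO ()

    foreign import ccall unsafe "set_check"
        cSetCheck :: Ptr CSet -> CInt -> IO CInt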

We can now add our C implementation to the benchmarks.

We immediately see that this straightforward imperative solution is orders of magnitude faster than our functional equivalents. Yet it comes at the cost of maintaining C code inside our Haskell application.

Bit vectors in Haskell

Fortunately, the imperative C code can quickly be translated to native Haskell. GHC directly supports allocating, modifying and reading chunks of memory through the GHC.Exts module. Using this module directly can be a bit daunting [5], but the primitive library provides a nice abstraction on top of GHC's built-in byte arrays.

Allocating and using memory can be done inside any PrimMonad, which in practice is specialized to either IO or ST. Operations like newByteArray and writeByteArray interact directly with raw memory. Translating the C code to Haskell is now rather straightforward (full code).
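A sketch of the translation, assuming 64-bit words (names are illustrative, and the full version also handles out-of-range values):

    {-# LANGUAGE ScopedTypeVariables #-}

    import Control.Monad.Primitive (PrimMonad, PrimState, RealWorld)
    import Data.Bits (setBit, testBit)
    import Data.Primitive.ByteArray
    import Data.Word (Word64)

    -- A mutable bit vector with one bit per possible value.
    newtype BitSet s = BitSet (MutableByteArray s)

    new :: PrimMonad m => Int -> m (BitSet (PrimState m))
    new range = do
        let nWords = (range + 63) `div` 64
        arr <- newByteArray (nWords * 8)          -- 8 bytes per Word64
        setByteArray arr 0 nWords (0 :: Word64)   -- clear all bits
        return (BitSet arr)
    {-# SPECIALIZE new :: Int -> IO (BitSet RealWorld) #-}

    add :: PrimMonad m => BitSet (PrimState m) -> Int -> m ()
    add (BitSet arr) n = do
        let w = n `div` 64                        -- word index
            b = n `mod` 64                        -- bit index
        (word :: Word64) <- readByteArray arr w
        writeByteArray arr w (setBit word b)
    {-# SPECIALIZE add :: BitSet RealWorld -> Int -> IO () #-}

    check :: PrimMonad m => BitSet (PrimState m) -> Int -> m Bool
    check (BitSet arr) n = do
        let w = n `div` 64
            b = n `mod` 64
        (word :: Word64) <- readByteArray arr w
        return (testBit word b)
    {-# SPECIALIZE check :: BitSet RealWorld -> Int -> IO Bool #-}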

In the code above we used the SPECIALIZE pragma to instruct GHC to generate specialized implementations of these functions for the IO monad. This ensures that no type class dictionaries are passed around when they are called in IO. Our benchmark results are promising!

Closing the gap

We can see that our previous Haskell implementation competes with C, yet it is still about 2 to 3 times slower than the C implementation. Unfortunately, it is not immediately clear why. To discover the root cause of the difference in execution time we have to take a look at the code that GHC generates.

When compiling Haskell code to an executable binary, GHC transforms it through many intermediate formats, all the way from the Haskell code you write to the machine code that is eventually executed. When a performance deterioration cannot be explained by inspecting the Haskell code itself, we have to inspect either the Core [6] code or the native assembly code. Luckily, the ghc-core tool can be used to inspect both. In our case, comparing the assembly code generated by GCC for our C implementation with the assembly code generated by GHC for our Haskell implementation exposes the root problem. Can you spot it?

GHC has not optimized the division and remainder operations when the second operand is a power of two. It uses the div instruction in the check and add functions, whereas GCC has replaced it with a shift and an and instruction, as div instructions are relatively expensive on most CPUs.

Manually using the bitwise operators from Data.Bits in our Haskell code makes GHC generate the right instructions (full code).
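For example, reusing the BitSet sketch from above, the index computations become shifts and masks:

    import Data.Bits ((.&.), setBit, shiftR)

    -- Same 'add' as before, but GHC now emits shr/and instead of div.
    add :: PrimMonad m => BitSet (PrimState m) -> Int -> m ()
    add (BitSet arr) n = do
        let w = n `shiftR` 6   -- n `div` 64
            b = n .&. 63       -- n `mod` 64
        (word :: Word64) <- readByteArray arr w
        writeByteArray arr w (setBit word b)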

Now, our Haskell implementation is really as fast as C!

Conclusion

There will always be a fundamental mismatch between functional programming and fast imperative data structures. However, Haskell definitely offers enough support for low-level implementations of such data structures, without the need to call into foreign code written in traditional low-level imperative languages.


  1. The array of numbers is randomly generated for a specific range and has the same size as that range. Since duplicates are allowed when generating random numbers, we expect about 63% of all numbers in the range (1 − 1/e ≈ 0.63) to eventually be added to the set.

  2. Nievergelt, J. and Reingold, E.M., 1973. Binary search trees of bounded balance. SIAM journal on Computing, 2(1), pp.33-43.

  3. Okasaki, C. and Gill, A., 1998, September. Fast mergeable integer maps. In Workshop on ML (pp. 77-86).

  4. Adding support for numbers that are out of range can be done in a relatively simple way. The full source code uses a simple hash table with chaining.

  5. Directly using the GHC.Exts module is not only daunting (code), but also does not improve performance (figure).

  6. We will not be inspecting any Core in this blog post. However, it is very readable as it looks a lot like regular Haskell. Run the provided ghc-core command to take a look at it!