Writing tests for Nim libraries with Nimble and unittest
11th July 2025 - Guide, Nim, Programming
Most developers will agree that having automatic tests is a good thing. With good tests we can make changes to our code and check whether these changes cause old bugs to resurface, new bugs to appear, or current workflows to break.
In this tutorial we will have a look at how the default testing setup works in Nimble, the official Nim package manager/build tool which is installed alongside Nim.
nimble init
When you first create a package you typically run the nimble init command. If you selected library or hybrid as the package type during initialisation it creates a folder structure which looks something like this:
.
├── src
│   ├── awesomeproject
│   │   └── submodule.nim
│   └── awesomeproject.nim
├── tests
│   ├── config.nims
│   └── test1.nim
└── awesomeproject.nimble
If you instead chose binary it will only have the .nimble file and the src directory with a single .nim file within it. This is, for better or worse, the default scaffold of a Nim package. In the awesomeproject.nimble file you will find the information given during the init process, along with a single requirement on the version of Nim you used when the package was created, and some combination of srcDir, installExt, and bin depending on the package type you chose.
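For a hybrid package, for example, the generated file might look something like this; the exact metadata and the Nim version requirement will of course reflect what you entered during init and which compiler you ran it with:
# Package
version       = "0.1.0"
author        = "Your Name"
description   = "A new awesome nimble package"
license       = "MIT"
srcDir        = "src"
installExt    = @["nim"]
bin           = @["awesomeproject"]

# Dependencies
requires "nim >= 2.0.0"
A pure library only gets srcDir, while a binary package gets srcDir and bin.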
nimble test
If you’ve created a library or hybrid package you should already have a tests directory, otherwise you can always create your own. The config.nims file in the default structure only contains a single line, switch("path", "$projectDir/../src"), which just tells Nim that when running the tests it should use the sources found in the src folder next to the tests folder, and not a globally installed version of the package. To run the tests we use the nimble test command. The official documentation is a bit sparse on what this actually does:
Nimble offers a pre-defined test task that compiles and runs all files in the tests directory beginning with letter 't' in their filename. Nim flags provided to nimble test will be forwarded to the compiler when building the tests.
It does give us some information though. When we run nimble test, all files in the tests directory starting with the letter t and ending in .nim get compiled and executed. This includes links to files, but it doesn't descend into sub-directories. Just note that a linked file is compiled and run as if it was located in the tests directory itself, so you can't have a sub-directory with its own config.nims and a test, and then link that test into the root of the tests folder. We can also pass flags to the tests, e.g. nimble test -d:ssl to enable SSL. This is useful if you need to pass a system-local path to a resource the tests need, but if a test always requires a switch to run (like the above -d:ssl) you should rather put that switch into config.nims. The documentation also mentions that it is possible to redefine the test task, so that what happens on nimble test is completely up to you. For the purposes of this tutorial though we will focus on the default setup.
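For example, a flag like the -d:ssl above that every run needs can simply live in tests/config.nims next to the line that is already there:
switch("path", "$projectDir/../src")
switch("d", "ssl")  # same effect as passing -d:ssl to nimble test
And if the default behaviour doesn't fit at all, defining a task named test in the .nimble file replaces the built-in one. This is only a rough sketch; the exec line is just an illustration, build and run your tests however you like:
task test, "Runs the test suite":
  exec "nim c -r tests/test1.nim"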
So by default when we run nimble test it will simply compile all the test files in tests and run them. If any test file fails to compile, or if the program exits with a code other than 0, that test is considered a failure. So if we create a simple test just containing
quit 1
that test will be considered a failure. Similarly, if a normal assert fails, or an exception is raised all the way to the global scope, the program will exit with an error code and the test will fail. With this we can already start writing some test scenarios, but if we have a look in the tests/test1.nim file that's generated we can see that it imports a library called unittest, so let's have a look at what we can do with that.
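Before we do, it's worth pointing out that nothing forces us to use a testing library at all. A bare-bones test relying only on doAssert is perfectly valid; the file name tplain.nim below is made up for this example, and add is the procedure from the default generated library:
# tests/tplain.nim
import awesomeproject

doAssert add(5, 5) == 10, "add should sum its two arguments"
Since its name starts with a t, nimble test will pick this file up like any other.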
std/unittest
The unittest library is a part of the standard library. Curiously, the official documentation starts by offering suggestions about what to use instead. But it's what Nimble generates by default, and because of this it is pretty common to use nonetheless. The library also has some pretty nifty features. If we have a look in the auto-generated file we can find a single lonely test; the content of the test depends on whether the project was created as a library or a hybrid. Below is the test for a library:
import unittest
import awesomeproject
test "can add":
  check add(5, 5) == 10
It imports the unittest library, as well as the package the test is a part of (awesomeproject in this case). Then it defines a single test using the test template from the unittest library. This template requires a name and a body for the test. The name is used both as the name to output along with the test result, and to filter which tests to run. In the body of the test we see a single check statement which tests if the call to the add function from the default generated library does what we would expect. check does a similar thing to an assert: it checks that the given statement evaluates to true, and if not the test fails. However, unlike an assert, it is aware of the context of a test. How often haven't you added echo statements to some failing code to get more context as to why it fails, only to delete them once you've figured out what went wrong because they are polluting the output? The unittest library solves this with checkpoints. Checkpoints are simply log messages which are suppressed until a test fails. This way we can have quiet output when everything goes the way we expect, and plenty of output indicating what went wrong when it doesn't. You could of course spend time writing check statements for every small piece of state in your test, but this is time consuming and tedious and probably not worth the time and energy. If we tweak the default test case to now read:
import unittest
import awesomeproject

suite "arithmetic":
  setup:
    let num = 6
  test "can add":
    checkpoint "Number is " & $num
    check add(num, 5) == 10
  test "can sub":
    checkpoint "Number is " & $num
    check add(num, -5) == 1
  test "can div":
    checkpoint "Number is " & $num
    expect DivByZeroDefect:
      checkpoint "Result of div by zero: " & $(num div 0)
    check num div 3 == 2
We see that the can add test fails, and when we run nimble test we get a Nim Output segment which reads:
[Suite] arithmetic
Number is 6
/tmp/awesomeproject/tests/test1.nim(9, 22): Check failed: add(num, 5) == 10
add(num, 5) was 11
[FAILED] can add
[OK] can sub
[OK] can div
Error: execution of an external program failed: '/tmp/awesomeproject/tests/test1'
Here we can see that the failed can add test shows "Number is 6" from the checkpoint, while the successful can sub test doesn't output anything like that because it succeeds. I've also sneaked multiple new elements into our test case. In the can div test case I use checkpoint in conjunction with expect DivByZeroDefect. The expect keyword quite simply runs the given code and expects an exception to be thrown. We might think of exceptions as errors, but in some cases it is correct to throw an exception instead of just carrying on and pretending like nothing happened. With expect we can verify that even when something goes wrong our program does what we expect it to do. I also use a checkpoint inside this block; this is simply to have somewhere to put the num div 0 result, as it needs to be assigned to something. As a nice bonus we will also be able to see the value it returned if it for some reason shouldn't raise an exception. Without this the only message we would get is "Expect Failed, no exception was thrown" which isn't very descriptive.
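As a small aside, expect also accepts more than one exception type, and the block passes if any of them is raised. A minimal sketch, using parseInt from std/strutils, which raises a ValueError on bad input (IOError is only listed to show the syntax):
import std/[unittest, strutils]

test "rejects bad input":
  expect ValueError, IOError:
    discard parseInt("not a number")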
But I've also added a suite "arithmetic", these two new tests, and a setup block. This is another nice feature of unittest. While the default file only has a single test (and it is completely valid to have test files that are just lists of tests), the suite block allows us to organise our tests a bit more. This not only makes the logs pretty, but can also be used to filter which tests to run, as we'll see in a second. The setup block (and its counterpart teardown) is simply a convenient way to define things which should be run for each test. If you have cross-test initialisation you can simply do that inside the suite block without putting it in setup, and if you have cross-suite initialisation you can just put it outside the suite altogether. But if you have some state that needs to be initialised and torn down between tests then setup and teardown offer a way to do this without repeating yourself.
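To illustrate where this earns its keep, here is a hypothetical suite that creates a temporary file before each test and removes it again afterwards; the file name and contents are made up for the example:
import std/[unittest, os]

suite "file handling":
  setup:
    # runs before each test in the suite
    let tmpPath = getTempDir() / "awesomeproject_test.txt"
    writeFile(tmpPath, "hello")
  teardown:
    # runs after each test in the suite
    removeFile(tmpPath)
  test "can read back":
    check readFile(tmpPath) == "hello"
  test "can append":
    writeFile(tmpPath, readFile(tmpPath) & " world")
    check readFile(tmpPath) == "hello world"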
The last part I want to cover about the unittest library is its filtering abilities. When trying to fix a bug you probably don't need to run the entire test suite until the test case for that specific bug is passing. In order to have some more tests to filter by, let's add another suite; this can be done in a separate test file or right after the first suite in the same file:
suite "text handling":
test "concat":
check "hello " & "world" == "hello world"
test "can index":
check "hello"[^1] == 'o'
test "can static index":
when hostOs == "linux":
check hostOs[0] == 'l'
else:
skip()
Right off the bat you probably notice the can static index test, which contains a skip() call. This is not strictly speaking filtering, but it's a useful way to skip tests that require certain hardware or software, or which should only be run under certain circumstances. If we run this test on a Linux machine all these new tests will pass, but if we run it on a Windows machine we will get an output indicating the test was skipped:
[Suite] text handling
[OK] concat
[OK] can index
[SKIPPED] can static index
By default nimble test will simply run every test in every suite. If we, for example, only want to run the "concat" test we can simply run nimble test "concat":
$ nimble test "concat"
Compiling /tmp/awesomeproject/tests/test1 (from package awesomeproject) using c backend
Info: compiling nim package using /home/peter/.nimble/bin/nim
Nim Output [Suite] arithmetic
... [Suite] text handling
... [OK] concat
Success: Execution finished
Success: All tests passed
I mentioned in the beginning that Nimble itself only compiles and runs our test cases, but it also forwards our arguments to the Nim compiler. So in the above, something similar to nim c -r test1 "concat" is actually being run. The unittest library looks at the arguments it is given and determines which tests to run from there. Since the same arguments are used for every test file it doesn't matter if the suites and tests are split across multiple files or kept in one big file. We can also filter by wildcard:
$ nimble test "can *"
Compiling /tmp/awesomeproject/tests/test1 (from package awesomeproject) using c backend
Info: compiling nim package using /home/peter/.nimble/bin/nim
Nim Output [Suite] arithmetic
... Number is 6
... /tmp/awesomeproject/tests/test1.nim(9, 22): Check failed: add(num, 5) == 10
... add(num, 5) was 11
... [FAILED] can add
... [OK] can sub
... [OK] can div
... [Suite] text handling
... [OK] can index
... [OK] can static index
... Error: execution of an external program failed: '/tmp/awesomeproject/tests/test1 'can *''
Here we can see that only tests starting with the word "can" are run; the "concat" test from the "text handling" suite is ignored. We can also run every test in a single suite by appending :: to the suite name:
$ nimble test "text handling::"
Compiling /tmp/awesomeproject/tests/test1 (from package awesomeproject) using c backend
Info: compiling nim package using /home/peter/.nimble/bin/nim
Nim Output [Suite] arithmetic
... [Suite] text handling
... [OK] concat
... [OK] can index
... [OK] can static index
Success: Execution finished
Success: All tests passed
It's also possible to select multiple tests by listing them individually or by combining filters. A test is run if it matches any of the filters:
$ nimble test "can sub" "text handling::con*"
Compiling /tmp/awesomeproject/tests/test1 (from package awesomeproject) using c backend
Info: compiling nim package using /home/peter/.nimble/bin/nim
Nim Output [Suite] arithmetic
... [OK] can sub
... [Suite] text handling
... [OK] concat
Success: Execution finished
Success: All tests passed
Conclusion
Testing is a great tool to help you be more confident that things don't break as you edit your codebase or accept pull requests. Almost any test is better than no tests. But testing can be boring, and digging through different pieces of documentation to figure out how to do it properly can feel like an unnecessary hit to productivity. I hope that this tutorial can work as a simple and accessible way to learn how testing works in Nim and Nimble using the default and built-in tools. As I've touched on a couple of times, pretty much all of this can be changed: you don't have to use Nimble, you can override the test task, you can use a different testing library than unittest (or even none at all), and you can use as many or as few features from the library as suits your needs. But by knowing how to wield the tools that are right there, ready to be used, you are better placed to add at least some tests, rather than telling yourself you'll look into it at some point and never getting around to it.