Optimizing TypeScript

  04 Aug 2021   5 min read

We are big fans of TypeScript: we have been using it for many years in our own projects, and we have built a deep TypeScript integration for both Wallaby and Quokka. In our previous blog post, we looked at the importance of optimizing your tests to run faster. In this blog post, we’re going to dive a little deeper and discover how TypeScript can sometimes be a big culprit in slowing down your test feedback loop, and provide some options for what you can do about it.

One of the best features of TypeScript is that it is a strongly-typed language, which means it can help you catch errors early and prevent you from accidentally using incorrect types. In the context of simply running your tests, this is not necessarily a bad thing. In the context of running your tests while you are in your editor, where you can already see type errors immediately, it is a waste of processing time: your editor is doing work to provide type errors, and now your test runner is too. In addition, feedback from editor type checking is much faster (as it’s not running your tests) and much more ergonomic (shown right next to your code and in problems views, instead of being reported as test errors). Unfortunately, this type checking is sometimes very expensive, which means that when using Wallaby you can really benefit from optimizing how TypeScript is compiled when running your tests.

Before we get started, let’s dive into Wallaby’s architecture to understand when TypeScript compilation occurs. Wallaby uses multiple worker processes to run your tests in parallel, whereas TypeScript compilation is limited to a single process. For most testing frameworks that Wallaby supports, compilation from TypeScript to JavaScript occurs in a single process before any of your tests are run in parallel. For Jest, the behavior is a little different: Wallaby creates multiple worker processes, and in each worker process Jest creates a separate instance of the TypeScript compiler and compiles your entire project again. This is an unfortunate limitation of Jest; the same work is done multiple times, which can add up to a lot of processing.

Now for the fun part… it’s not all doom and gloom. There are a few easy things you can do to optimize TypeScript when running your tests. First, let’s get a baseline of how long TypeScript takes on a medium-sized project.

We have an internal TypeScript project with about 22,000 lines of code, split across 135 files. Using the built-in TypeScript compiler (tsc), it takes an average of 13.37 seconds to compile the project.

The first thing we can consider doing to improve performance is to skip cross-file type checking by setting the isolatedModules compiler option to true. This brings the average compile time down to 9.09 seconds, a 32% reduction (not bad for a quick setting change). There are some TypeScript features that don’t work with this setting, which you may like to read about. In our experience, most projects don’t use these features, but yours may be different. We choose not to use these features in order to benefit from a faster feedback loop.
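If you want to try this, it’s a one-line change in your TypeScript configuration. Here is a minimal tsconfig.json sketch; the target, module and strict values are just illustrative placeholders for whatever your project already uses.

```json
{
  "compilerOptions": {
    // Require that each file can be compiled on its own, without relying on
    // cross-file type information. Some features (e.g. exported const enums)
    // are not supported with this setting.
    "isolatedModules": true,

    // Illustrative placeholders; keep your project's existing settings here.
    "target": "ES2019",
    "module": "CommonJS",
    "strict": true
  }
}
```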

There are other tools that compile TypeScript files in much the same way as the built-in TypeScript compiler does with the isolatedModules compiler option set to true. Babel is a popular tool for compiling TypeScript to JavaScript. It is much faster than the TypeScript compiler because it effectively just discards your TypeScript type information. The average time to compile our project with Babel is 2.26 seconds. This is an 83% reduction compared to TypeScript’s 13.37 seconds. Depending on the TypeScript features you’re using, you may need to configure Babel with a few additional plugins. You may like to read more about configuring Babel for compiling TypeScript in the TypeScript docs.
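As a rough sketch, and assuming a project with no special syntax requirements, a minimal babel.config.json for this approach could look like the following: @babel/preset-typescript strips the types, while @babel/preset-env handles the JavaScript output. Your project may need different or additional presets and plugins.

```json
{
  "presets": [
    "@babel/preset-env",
    "@babel/preset-typescript"
  ]
}
```

Keep in mind that Babel only removes type annotations; it performs no type checking at all, which is exactly why it is so much faster.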

We’ve made some massive gains already to the time it takes to compile our tests, but we’re not quite done yet. There’s one more tool that we can use to compile TypeScript: swc. The average time to compile our project with swc is 0.461 seconds, a 96% reduction from the time it initially took us to compile with TypeScript. We have had some problems using swc; while it works with our projects now, we initially had issues when targeting earlier versions of node. The https://swc.rs website claims that “it is 20x faster than babel on single thread, and 70x faster on 4 core benchmark”. It’s definitely worth checking out.
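For reference, a minimal .swcrc along these lines might look like the sketch below. The parser, target, and module values are assumptions for a plain (non-JSX) TypeScript project compiling to CommonJS; adjust them to match your project and the node versions you need to support.

```json
{
  "jsc": {
    "parser": {
      "syntax": "typescript",
      "tsx": false
    },
    "target": "es2019"
  },
  "module": {
    "type": "commonjs"
  }
}
```

Like Babel, swc only transforms the code and does not type check it.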

So, what do we do for our own projects? We set them up to use swc where possible, and Babel where it’s not, when running our tests both within Wallaby and from the command line / continuous integration (CI) services. Our npm scripts and CI tasks include a step that first compiles with tsc so that we don’t miss out on type errors, and our production builds use tsc as well.
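As a sketch of what this could look like in practice (the script names below are hypothetical, and we’re assuming Jest as the test runner; tsc --noEmit type checks the project without writing any JavaScript):

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "test": "jest",
    "ci": "npm run typecheck && npm run test"
  }
}
```

With a setup like this, the fast swc/Babel path compiles your code for tests, while the typecheck step keeps full type errors visible in CI.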

It’s also worth mentioning that for smaller projects (or projects you don’t expect to grow much), the built-in TypeScript compiler will be just fine. For larger projects (e.g. large mono-repos that share types between projects), you will see an even bigger performance gain than we did by using Babel or swc.

If you are using TypeScript in your project, we hope you benefit from this article.