I was recently reading an excellent post entitled Testing Microservices, the sane way by Cindy Sridharan. The post is rather lengthy and, in my opinion, worth the read. It provides a good amount of detail on the challenges, and the nuggets of wisdom gleaned, from building out an application composed of microservices.
I’ve been spending a fair quantity of time thinking about testing over the past year or so – which makes sense when you look at Vester and these blog posts. The idea of testing, using test-driven development (TDD), and refactoring to make testing easier has, in my opinion, improved my ability to write clean, repeatable code.
The reason I wanted to write about this particular post comes down to this snippet from Cindy:
Writing debuggable code involves being able to ask questions in the future
This is, without a doubt, my Achilles heel when it comes to writing modules and functions. Most of the time, my default process has been to load up an entire environment and either step through code in debug mode or plant enough verbose outputs to catch whatever was plaguing me.
For the microservices world, Cindy states that writing debuggable code involves:
- instrumenting code well enough
- having an understanding of the Observability format of choice (be it metrics or logs or exception trackers or traces or a combination of these) and its pros and cons
- being able to pick the best Observability format given the requirements of the given service, operational quirks of the dependencies and good engineering intuition
I think these are ideals we can all embrace across the infrastructure engineering space, especially when using more and more “infrastructure as code” methods to declare and control our systems.
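To ground that a bit, here’s a minimal, hypothetical sketch of what “instrumenting code well enough” could look like in my PowerShell-flavored corner of the world – the Write-StructuredLog name and log path are my own invention, not anything from Cindy’s post – emitting structured, queryable log entries instead of loose verbose strings:

```powershell
# Hypothetical sketch: emit structured, queryable log entries rather than loose verbose text.
function Write-StructuredLog {
    param (
        [string] $Message,
        [hashtable] $Context = @{}
    )

    # Flatten the event into a single JSON line that a log shipper or parser can pick up later.
    $entry = @{
        Timestamp = (Get-Date).ToString('o')
        Message   = $Message
        Context   = $Context
    }
    $entry | ConvertTo-Json -Compress | Out-File -FilePath '.\automation.log' -Append
}

# Example: record which host a provisioning task touched and how long it took.
Write-StructuredLog -Message 'Provisioned VM' -Context @{ Name = 'web01'; DurationSec = 42 }
```

The point isn’t the specific cmdlets; it’s that the code leaves behind data you can ask questions of later, rather than a wall of verbose text you have to re-run to reproduce.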
Treating Infrastructure Code as Microservices
The other thought that I had was around how similar the world of microservices is to the world of properly creating re-usable functions for controlling infrastructure. It’s been brutally beaten into my skull to avoid writing functions or scripts that do more than one logical action. For example, having multiple if statements in a single piece of code makes testing exponentially more difficult. Rather than testing the true and false states of the code – whether the if condition holds or not – I would have to test four or more states due to the combination of possibilities. This is only exacerbated as more logic is baked into the function; the sketch below shows the simplest case.
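Here’s a deliberately simplified, made-up example of that explosion – the function and parameter names are purely illustrative – where just two independent if statements already leave four input combinations to cover:

```powershell
# Illustrative only: two independent if statements produce four paths a test suite must cover.
function Set-ServiceState {
    [CmdletBinding()]
    param (
        [bool] $IsProduction,
        [bool] $RestartRequired
    )

    if ($IsProduction) {
        # Path taken only for production targets.
        Write-Verbose 'Applying production safety checks.'
    }

    if ($RestartRequired) {
        # Path taken only when a restart is needed.
        Write-Verbose 'Scheduling a service restart.'
    }

    # Tests now need (true,true), (true,false), (false,true), and (false,false).
}
```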
As you write bits of code and form them into re-usable functions, keep this in mind: simple functions that are focused on performing one single task are not only simple building blocks for creating a complex system – similar to our friends building microservices – but also infinitely easier to test. I’ll be the first to admit that I didn’t start writing code this way. My first few years of scripting were all about creating monster-sized walls of text that either worked silently or failed spectacularly.

Originally, I think the idea of creating lots of little single-minded functions was counterintuitive to my brain. After all, little functions result in more code sprawl and potential overhead, right? However, adhering to a proper naming scheme for functions and refactoring chunks of larger code into small bits of focused code has helped me develop a library of “micro tasks” that can be orchestrated together for simple automation workflows.
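As a rough sketch of what that library can look like (every function name below is invented for illustration), the individual micro tasks stay tiny and the orchestration layer just wires them together:

```powershell
# Hypothetical micro tasks: each does one logical action and can be tested on its own.
function Get-ServerInventory {
    # Details omitted; in real life this might read from a CMDB or a config file.
    @( @{ Name = 'web01' }, @{ Name = 'web02' } )
}

function Test-ServerOnline {
    param ([string] $Name)
    Test-Connection -ComputerName $Name -Count 1 -Quiet
}

function Send-Report {
    param ([object[]] $Results)
    $Results | ConvertTo-Json | Out-File -FilePath '.\report.json'
}

# The "orchestration" is just a small workflow composed of the micro tasks.
$results = Get-ServerInventory | ForEach-Object {
    @{ Name = $_.Name; Online = Test-ServerOnline -Name $_.Name }
}
Send-Report -Results $results
```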
Testing Tiny Tatters of Code
These tiny functions also make testing much simpler for several reasons.
First, the inputs and outputs are simple and easy to test. If a function is designed to transform a data type from a raw PowerShell hashtable to a formatted JSON file, it’s not that hard to determine success or failure via testing and debugging. It either worked or it didn’t. I can then perfect this function and plant it directly into other functions, such as a function that submits data to a RESTful endpoint. There’s no need to really worry about code coverage of the data transform code, because it’s already tested as an individual bit of code. So long as I supply the correct input to the function, it’ll return what I need.
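As a concrete (and hypothetical – ConvertTo-JsonFile is my own name, not a built-in cmdlet) sketch of that transform function and how small its test can stay:

```powershell
# Hypothetical transform function: it does one thing, turn a hashtable into a JSON file.
function ConvertTo-JsonFile {
    param (
        [hashtable] $InputObject,
        [string] $Path
    )

    # No API calls, no business rules -- just serialize and write.
    $InputObject | ConvertTo-Json -Depth 5 | Out-File -FilePath $Path
}

# The Pester test is just as small: supply input, inspect the output file.
Describe 'ConvertTo-JsonFile' {
    It 'writes valid JSON for a simple hashtable' {
        $file = Join-Path $TestDrive 'out.json'
        ConvertTo-JsonFile -InputObject @{ Name = 'web01' } -Path $file
        (Get-Content -Path $file -Raw | ConvertFrom-Json).Name | Should -Be 'web01'
    }
}
```

Once that transform is trusted, the function that posts to the RESTful endpoint can simply call it and focus its own tests on the HTTP behavior.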
Second, it’s fairly easy to debug things. If the data transform function is failing, I just have to fix that one function so the output matches what’s expected. If something malforms the payload, I can drill down to that single piece of code and see what’s up. This can be done by debugging that code in VS Code, as an example, or by adding try/catch logic and doing something smart with the errors. Heck, the error logic could even be handed off to yet another small function that lives only to make decisions about error codes.
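A hedged sketch of that hand-off (Resolve-TransformError is an invented name, and it reuses the hypothetical ConvertTo-JsonFile from above) might look like this:

```powershell
# Hypothetical sketch: the caller catches, and a tiny dedicated function decides what to do.
function Resolve-TransformError {
    param ([System.Management.Automation.ErrorRecord] $ErrorRecord)

    # This function lives only to make decisions about errors -- log, retry, alert, etc.
    Write-Warning "Transform failed: $($ErrorRecord.Exception.Message)"
}

try {
    # $payload is assumed to have been built earlier in the workflow.
    ConvertTo-JsonFile -InputObject $payload -Path '.\payload.json'
}
catch {
    Resolve-TransformError -ErrorRecord $_
}
```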
Thoughts
There’s lots of overlap across technological silos. Microservices sparked my interest early on because the concept sounded very similar to how I’ve been approaching orchestration design for infrastructure engineering. Back when I was consulting, I even referred to the creation of these uni-tasks as building “micro services.”
As a bonus action, you can also use fuzzing – providing invalid, unexpected, or random data as inputs – to see just how smart you are when it comes to handling unexpected input in your code. I used to call this the “banana test” because a colleague would typically enter something goofy like the word “banana” when I asked for a numerical input. This made sure that I had proper error handling and input validation in play. 🙂
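For what it’s worth, a banana test is easy to bake into a small function up front – the Set-RetryCount name below is invented for illustration – by validating the input before trusting it:

```powershell
# Hypothetical example: reject a "banana" before it ever reaches the real logic.
function Set-RetryCount {
    param ([string] $Value)

    $parsed = 0
    if (-not [int]::TryParse($Value, [ref] $parsed)) {
        throw "Expected a number for the retry count, got '$Value'."
    }
    return $parsed
}

Set-RetryCount -Value '3'       # returns 3
Set-RetryCount -Value 'banana'  # throws a clear error instead of failing somewhere downstream
```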