As programmers, whenever we embark on writing a piece of code, we risk being accused of reinventing the wheel and succumbing to Not Invented Here syndrome. There are nowadays dozens of libraries, frameworks and other software products for every task imaginable. Yet the answer to buy vs. build is not as clear-cut as it might appear at a glance. The Programmer’s Paradox blog post which made the rounds in programming circles in November points out that effective use of third-party software requires a deep understanding of its inner workings. In the case of software we build ourselves, this understanding comes naturally as we develop and shape the code in response to discoveries about the problem we are solving and the behaviour of the component we created. Experienced software developers can gain important insight into a third-party component from reading the documentation, but ultimately the learning comes from using the software in the wild and seeing it behave in unexpected ways.
A hand-crafted piece of code stands a better chance of being simple and well suited to a specific task. The approach to complexity we have adopted as an industry follows the violence meme: if it’s not solving your problem, you’re not using enough of it. Deployment and management of multiple machines is hard, so we use Docker, which constructs a filesystem from multiple layers of files, each layer being allowed to modify what the previous ones put in place. HTML, JavaScript and CSS work for the web, so let’s also write desktop apps based on those technologies. It often seems easier, perhaps even “simpler”, for some measure of simplicity, to pile on top of what is already out there, add another layer, or adapt a technology wholesale for a different domain. Stripping something down to the bare bones is rarely considered. Many programmers decry the mess this is getting us – and the world, which relies on the technology we build – into. Even the agile/lean development principle of building the simplest thing that will work often steers us towards pre-built products, where we tend to underestimate the effort involved in understanding how they work, and discount the impedance mismatch and the resulting effort required when implementing new features. Moving away from a technical solution adopted earlier is often a tricky proposition, and due to both the sunk cost fallacy and the actual cost of migration it becomes harder as time passes.
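To make the layering concrete, here is a toy sketch (not Docker’s actual implementation, just the idea behind union filesystems like overlayfs): each layer is modelled as a dict mapping path to content, later layers shadow earlier ones, and a whiteout marker lets a layer delete a file that a lower layer put in place. All names and file contents here are made up for illustration.

```python
# Toy model of layered filesystem lookup, in the spirit of the union
# filesystems used by Docker images. Each layer maps path -> content;
# the topmost layer mentioning a path wins, and a special WHITEOUT
# marker hides a file that a lower layer created.

WHITEOUT = object()

def lookup(layers, path):
    """Search layers top-down; the last layer in the list is the topmost."""
    for layer in reversed(layers):
        if path in layer:
            content = layer[path]
            return None if content is WHITEOUT else content
    return None

base = {"/etc/hosts": "127.0.0.1 localhost", "/app/config": "debug=false"}
patch = {"/app/config": "debug=true"}    # overrides the base file
cleanup = {"/etc/hosts": WHITEOUT}       # removes it from the merged view

layers = [base, patch, cleanup]
```

With these three layers, `lookup(layers, "/app/config")` yields the patched value and `/etc/hosts` appears deleted – each layer modifying what the previous ones put in place, exactly the mechanism described above.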
Back in the 1990s, when I was taking my first steps in programming by hacking Windows apps in VB3, I read somewhere that the programming of the future would be nothing like what it seemed at the time. Instead of banging out code, we would be putting together components. That did not seem like much fun, and I sincerely hoped that day would never arrive. Well, here we are now, but with a twist: instead of the puzzle pieces I imagined, we now build our systems out of prefabs the size and complexity of houses. Predicting the operational parameters of a conglomerate of such things is something few software developers are capable of.
Programmers often have their pet technologies; for some it will be the hot new thing they read about last week on Hacker News or Reddit, for others it will be the tool they have successfully used on multiple projects over many years. Even in the latter case, though, the depth of understanding might be missing. Relational databases were, and to a large extent still are, my default solution for any non-trivial persistence. Yet it was only within the last year that I discovered, to my surprise, that read committed is not quite the serialisable I imagined it to be – and that is after many years and many systems which I built or maintained on top of relational databases. Even in the case of technology with decades of prior art, applied to the vanilla CRUD use-case, there are still products that have quirks and exhibit strange behaviour under certain circumstances.
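The gap between the two isolation levels can be shown with a toy simulation (no real database involved; the `ReadCommittedDB` class and the balance values are invented for illustration). Under read committed, each read sees the latest committed value, so a transaction that reads the same row twice can get two different answers – the classic non-repeatable read, which serialisable forbids.

```python
# Toy simulation of a non-repeatable read under READ COMMITTED.
# Not a real database engine; it just models the rule "every read
# returns the latest committed value".

class ReadCommittedDB:
    def __init__(self):
        self.committed = {"balance": 100}

    def read(self, key):
        # Read committed: always observe the newest committed state.
        return self.committed[key]

    def commit(self, key, value):
        self.committed[key] = value

db = ReadCommittedDB()

# Transaction T1 reads the balance twice; transaction T2 commits
# an update in between T1's two reads.
first = db.read("balance")    # T1 sees 100
db.commit("balance", 200)     # T2 commits an update
second = db.read("balance")   # T1 now sees 200 within the same transaction

# Under SERIALIZABLE, both reads inside one transaction would have to
# agree; READ COMMITTED permits this anomaly.
```

This is precisely the kind of quirk that stays invisible until concurrent workloads hit the system in the wild.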
Writing stuff yourself has some things going for it, it turns out. On the other hand, there are certain aspects of software that are hard to get right. Take security: the very first piece of advice given to programmers embarking on building a custom security protocol or encryption algorithm is: don’t do it. Use something that has been tried, tested and proven to work (or rather, something that has resisted attempts to break it for long enough). A lot of apparent complexity in software is not gratuitous, but a result of taking into consideration edge-case scenarios, often ones involving the failure of some component. While writing software from scratch provides a lot of insight and learning opportunities, we are bound to miss something that others have discovered before us and built into the third-party product we discarded as too complex.
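A small example of the kind of edge case a vetted library handles for you: comparing secrets with plain `==` short-circuits on the first differing character, which can leak timing information to an attacker. Python’s standard library provides `hmac.compare_digest`, whose running time does not depend on where the inputs differ. (The token value below is made up for illustration.)

```python
# The "naive" and "safe" ways to compare a secret token. Both return
# the same boolean answers; the difference is that == can exit early
# on the first mismatching character, while hmac.compare_digest runs
# in time independent of the contents being compared.
import hmac

def naive_check(supplied: str, expected: str) -> bool:
    return supplied == expected  # early-exits on first mismatch

def safe_check(supplied: str, expected: str) -> bool:
    return hmac.compare_digest(supplied.encode(), expected.encode())

EXPECTED_TOKEN = "s3cr3t-token"  # hypothetical value for illustration
```

Someone implementing the comparison from scratch would almost certainly write the naive version; the constant-time variant exists because others already discovered the attack and baked the defence into the library.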
There is no win-win here. The systems we build often involve unavoidable complexity. We have thousands of tools at our disposal, and given that most non-trivial problems require a bunch of them, we are dealing with a combinatorial explosion in the solution space. Various combinations of products are mismatched in subtle and non-obvious ways – something we will only discover after we have built the system. Yet attempting to design and build everything from scratch is fraught with peril, as there are many unspecified but implicitly assumed features of the system which we will inevitably miss. The heuristic I strive to use nowadays is to do the due diligence: see what products are out there that are roughly the right shape, and kick the tyres. Eventually, choose one of them and run with it. Every time there is an issue with the existing system, or I struggle to implement a new feature, I then ask myself: what caused it? Would I be better off with some other product? Should I build it myself?
How do I do the due diligence effectively? How do I make the decision to switch? I will post an update as soon as I figure any of those out.