Last week, I was talking with a friend who works in astronomy. He’s on the IT side of large, decade-spanning projects, and he told me that he sometimes has to contact computer museums to get replacement parts for their machines. This surprised me, as I assumed newer technology would automatically be faster, stronger, and therefore better. However, he explained that once a project has started, it becomes almost impossible to change computers and operating systems due to the incompatibility between systems built decades apart. This piqued my curiosity: how could technological progress hinder long-running, well-funded projects? And are there other examples of technological advancements causing issues in specific industries?
Long-term technological compatibility
When working on projects that span decades, maintaining consistency in both hardware and software is crucial. It may seem logical that upgrading to more advanced systems would improve ongoing projects. However, the faster processors and larger memory capacities of new systems often don’t outweigh the risks that these upgrades introduce. Potential software incompatibility or, more likely, human error could jeopardize the entire project. As a result, sticking with older technology is often the safer and more reliable choice.
As previously mentioned, incompatibility is a significant challenge. Systems developed decades apart often run on different computer architectures, programming languages, and hardware, making integration difficult. For example, in astronomy, even a minor disruption in data collection could compromise years of work. A slightly faster or more powerful system simply isn’t worth risking bugs, data corruption, or inconsistencies that could arise from upgrading.
Specialized systems
Another major issue in long-term projects, particularly in industrial or scientific fields, is the uniqueness of the hardware and software used. While a MacBook or Dell laptop may be sufficient for personal use, long-term projects rely heavily on custom-built machines tailored to their specific needs. This is especially true in scientific and experimental research, where systems must be precisely calibrated to work with specific equipment, such as telescopes or sensors.
In these scenarios, upgrading isn’t just about replacing parts or switching programming languages. It often requires a complete redesign from the ground up, which is not only time-consuming and expensive but also risks introducing small deviations that could make new data incompatible with existing data, rendering it useless.
A Widespread Problem
The issue of relying on outdated technology isn’t confined to a single industry. It’s common across sectors where long-term stability and reliability take precedence over rapid innovation. For instance, the airline industry still partially depends on decades-old software systems, though they are gradually transitioning to newer versions. This process, however, is slow and laborious. You can learn more about this transition in the airline industry in this blog.
For airlines, this change is necessary because of growing customer demands: customers want things the current system cannot offer. Such a system is commonly known as a legacy system.

A legacy system is outdated computing software and/or hardware that is still in use. The system still meets the needs it was originally designed for, but it doesn't allow for growth: what a legacy system does now for the company is all it will ever do, and its older technology won't allow it to interact with newer systems.

While these systems meet their original design needs, they can't adapt to future requirements. And because future needs are impossible to predict at design time, every system will eventually become a legacy system as technology advances.
Another example of outdated systems creating problems is the Dutch tax authority, where at least 70% of taxes are processed by systems written in COBOL, a programming language introduced in 1959. COBOL itself isn't unreliable; it's still fast and stable. The problem lies in the shrinking pool of programmers who understand it. As more modern languages like C++, Java, and C# gained popularity, COBOL was left behind, and now very few programmers know how to maintain or update it. The systems work, but they can't keep up with changes in tax policy because there aren't enough experts left to modify the code.
What can be done?
Although there may not be a quick and easy fix for industries like astronomy, airlines, or the Dutch government, there are ways to future-proof new systems. One approach is to rely more on modular system design, where components can be replaced without disrupting the entire system. This design philosophy allows for easier upgrades and ensures that long-term projects can integrate new technologies as they become available, even if they weren’t ready when the system was first conceived.
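To make the idea concrete, here is a minimal sketch of what modular design can look like in code. Python is used purely for illustration, and all the names (Detector, LegacyCcd, ModernCmos) are invented for this example, not taken from any real project. The point is that downstream code depends only on a small, stable interface, so an individual component can be replaced without redesigning the whole system:

```python
from abc import ABC, abstractmethod


class Detector(ABC):
    """Stable interface that the rest of the pipeline depends on."""

    @abstractmethod
    def read_frame(self) -> list[float]:
        """Return one frame of raw measurements."""


class LegacyCcd(Detector):
    """Module wrapping the original detector hardware."""

    def read_frame(self) -> list[float]:
        # A real implementation would call the old hardware driver here.
        return [0.0] * 1024


class ModernCmos(Detector):
    """Drop-in replacement added years later: same contract, new hardware."""

    def read_frame(self) -> list[float]:
        # Same units and frame layout as the legacy module, so downstream
        # data stays comparable across the hardware swap.
        return [0.0] * 1024


def average_signal(detector: Detector) -> float:
    """Pipeline code that never changes when a component is replaced."""
    frame = detector.read_frame()
    return sum(frame) / len(frame)


# Swapping hardware becomes a one-line change at the composition point:
print(average_signal(LegacyCcd()))
print(average_signal(ModernCmos()))
```

Because the interface pins down the contract (units, frame layout), a hardware swap stays a local change rather than a full redesign, which is exactly the property long-term projects need.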
Navigating progress and stability
Technological progress generally leads to greater efficiency and enables more impressive projects. However, it's important to remember that this constant drive for advancement can create challenges for industries and projects that require long-term stability. Upgrading legacy systems is often so complex or risky that the potential benefits don't justify it, forcing organizations to strike a balance between reliability and performance.
Organizations can manage these challenges by planning for the future and considering potential innovations, even if they aren’t currently feasible. While this task is incredibly difficult, it’s necessary to minimize the inevitable complications that come with long-term projects. The tension between progress and compatibility will likely remain an issue across many sectors.
Very interesting blog. Apart from the tax office, I did not know that so many sectors are running on older systems. After reading it, I wondered about the following: if this is a problem across many sectors, is there such a thing as a market for it, and how big is that market? And how do computer museums or companies function in this market? Do you have any idea about this?
I was thinking about this post for a while, and I ended up thinking about hardware obsolescence for quite some time. I wonder how much this would hold up in a more modern context. There is this observation called Moore's law, which describes the steady doubling of computing power, and which is widely expected to run into physical limits eventually.
Once we hit that point, I wonder whether the benefits of replacing older hardware with modern hardware would still outweigh the risks, especially if the older projects are running on decades-old tools. Alternatively, more recent projects could indeed be made more modular. The bulk of my knowledge comes from building home PCs, nothing of industrial or professional scale like astronomy, but I wonder whether we're going to reach a point where this becomes a non-problem.
One of the main messages of your blog is that developing new innovations is not necessarily better than making the effort to maintain established systems.

I can't help but be reminded of the fact that replication studies are as valuable as studies exploring new methods, even though the results might not be as spectacular. Validation and maintenance are essential to prevent unsafe situations.
The solution you propose (modular system design) appeals to me. It could extend products’ life cycles and would probably make it more lucrative for suppliers to have more spare parts available. Also, there would be more shared knowledge, hence fewer super specialized, almost irreplaceable experts. I’m curious to see if we will actually move towards more modular systems (and circular economy)!