I’ve worked in this space for over 30 years, and I’ve seen a lot of well-intentioned modernization projects implode. There’s one thing I know for sure about modernization: legacy tech is rarely addressed proactively. There seem to be two times when legacy technology becomes a priority:
- There’s a new initiative that forces the change.
- Something broke.
Truthfully, it’s usually the latter, but there’s nothing wrong with this. There’s nothing wrong with the mainframe. Most are still around because they’re so reliable. So, when is it essential to take a closer look at your legacy environment?
Get to know your mainframe environment before making costly decisions.
These days, there’s very little mainframe development happening. Most legacy technology activities involve maintaining the status quo, such as performing maintenance or making small adjustments driven by a regulatory change, internal data changes, or changes to the rules that govern the way a business operates.
Whether you think mainframes are outdated or want to utilize them better, it’s good to understand your legacy environment. Most of the people who built and designed your critical PL/I and COBOL applications have retired or moved on. Hopefully, when something breaks, you’ve kept enough subject matter experts or resources on-site to fix it quickly.
However, the real challenge comes when all that legacy expertise leaves without documenting a roadmap of the various systems and how they interact. You could do more damage by trying to fix it than just leaving it broken.
This is where powerful analytical tools come in: tools that help map out what your systems do, how they interact, and how they may be interdependent. The more information you have about your legacy stack, the better your chances of making changes successfully and estimating their actual cost.
The first thing to consider about your legacy environment is your code dependencies. As a programmer, I solve problems by adding new code. I rarely take code away.
The first thing that happens when you decide to migrate code is you’ll tally up the total number of code lines. Let’s say you have 1M lines of code spread out over many programs. However, 30% of that code might be dead. A deep dive into your legacy environment can tell you how many of these 1M lines of code were executed in the last ten years.
Suppose you’re considering migrating, and the company doing the migration charges by the line of code. In that case, that’s 30% of 1M lines of code that you conceivably don’t need or want to migrate, or, more importantly, pay for.
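To make that arithmetic concrete, here is a minimal sketch. Every figure in it is hypothetical: the 1M-line codebase and 30% dead-code share come from the example above, and the $5-per-line vendor rate is an assumption for illustration only; real rates vary widely.

```python
# Back-of-the-envelope estimate of what dead code adds to a per-line
# migration bill. All figures are hypothetical: 1M total lines, 30%
# never executed, and an assumed vendor rate of $5 per line.
total_lines = 1_000_000
dead_percent = 30
rate_per_line = 5  # dollars per line; an illustrative assumption

# Integer math keeps the estimate exact for round inputs like these.
dead_lines = total_lines * dead_percent // 100
wasted_cost = dead_lines * rate_per_line

print(f"Dead lines: {dead_lines:,}")              # 300,000
print(f"Cost to migrate them: ${wasted_cost:,}")  # $1,500,000
```

Even at a modest per-line rate, excluding code that hasn’t executed in a decade can take seven figures off the bill, which is why the runtime-usage analysis is worth doing before signing a migration contract.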
Another important aspect of your legacy environment is the interaction between applications. This complexity is one of the main reasons most people shy away from projects that involve mainframes.
When I describe the complexity and interdependencies of legacy environments, I often liken them to a bowl of spaghetti. You grab one end of a noodle, and sometimes it’s so tangled you can’t find the other end, or, even worse, it breaks. These are the kinds of interactions I’m referring to: shared data files, shared code, shared parameters, shared configurations, shared screens, and more.
If you decide to do a partial migration of your legacy environment, pulling the systems apart is nearly impossible. Who is to say the screen of one application doesn’t pull 100 fields from system A and one field from system B? Peel away that connection, and the field another application needs is gone.
A detailed evaluation of your legacy environment identifies where data is shared, and this is important. Shared data is far more difficult to separate because it can’t simply be duplicated like shared code. If you move forward with the migration, you’ll have to decide which system owns the data, who makes sure the data is up to date, and whose updates override the others’. There are many questions to answer around keeping data in sync, and each one adds cost and complexity to a modernization project.
Manual coding is expensive and time-consuming. Not all code is created equal, and not all code is compatible. By doing a deep dive into your mainframe environment, you can identify code that doesn’t easily transfer from vendor to vendor. This will tell you how much manual effort is required to make the two systems compatible, and therefore how much more expensive the project will be.
Look for this when shopping for a good legacy environment analyzer.
Here are a few things to consider when choosing a tool for legacy analysis:
- Look for completeness of legacy language support. This means making sure not only that COBOL and PL/I are supported, but also assembler and other lower-level development and scripting languages.
- Check for completeness of legacy data support. Not everything on the mainframe is in VSAM or DB2. Make sure that IDMS, Datacom, Adabas, and other mainframe data sources are supported.
- Look beyond the mainframe. Many legacy applications now interact with Java, other non-mainframe languages, Oracle, and other distributed databases. Having the ability to map your entire technology infrastructure is very valuable.
In the end, knowledge is the key to making a sound decision. Legacy technology has a more significant impact on businesses than most people realize. Companies often make modernization decisions at the board level without understanding the full scope of the project. This is why modernization projects take years longer and cost tens of thousands of dollars more than initially anticipated.
Director of Product Evangelism
Don Spoerke is the Director of Product Evangelism at GT Software. He is a 25-year veteran in the enterprise modernization space. Don collaborates with an impressive list of FORTUNE companies to intelligently integrate legacy mainframe assets for new business application initiatives.