EMPOWERING THE ADAPTIVE, INTELLIGENT ENTERPRISE


Learning the Lessons from Past Mainframe Integration Projects

by Steve Craggs | Sep 20, 2018

The API model has become a major consideration for a growing number of companies worldwide. As discussed previously, the API approach has particular attractions for mainframe users. However, past attempts at mainframe integration have typically run into a range of problems, and today there is a much greater understanding of the mainframe-specific issues to take into account before embarking on a business services-based mainframe integration strategy such as API enablement.

A number of the lessons learned reflect directly back to the technology-based considerations just discussed. But one issue, in particular, stands out – that of mainframe business service composition. The idea of a business service is the cornerstone of numerous mainframe integration initiatives and was mentioned in the introduction to this paper, but as a reminder, it refers to the need to provide discrete business functions that can then be accessed externally, for example through APIs. If a phone App needs to be able to get an accurate product price, for instance, then it has to have some mechanism to drive whatever applications and data components make up the ‘get a price’ process on the mainframe.

API Integration Process Flow

A common difficulty stems from a collision between the purist world of the systems architect and the pragmatic needs of operational service quality. Companies looking to open up the mainframe and leverage it across other environments often envisage a pure, clean architecture in which every business activity is packaged as a business service, and all these services are exposed through APIs. This is a great idea, but it can be disastrous if implemented without due consideration. The main issue is that, given the number of mainframe transactions in existence, this approach risks creating a huge number of low-level services, for example ‘get customer details’ or ‘check service history’. This may seem very logical, but in reality it exports design issues to the API developers. An App developer working on a new phone-based digital marketplace wants to drive a ‘product quote’ process; that developer now has to work out which low-level services are needed, and in what order they must be called, to deliver the final price.


Figure A: Excessive granularity requires procedural knowledge for the API developers
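To make Figure A concrete, here is a minimal sketch of what excessive granularity implies for the App developer. The endpoints, response shapes, and discount rule are all invented for illustration; the point is that the ‘product quote’ logic and its call sequence end up living in the client:

```typescript
// Hypothetical low-level APIs, exposed one-to-one with mainframe transactions.
interface Customer { id: string; tier: string; }
interface ServiceHistory { claims: number; }

async function getCustomerDetails(custId: string): Promise<Customer> {
  const res = await fetch(`https://api.example.com/customers/${custId}`);
  return res.json();
}

async function checkServiceHistory(custId: string): Promise<ServiceHistory> {
  const res = await fetch(`https://api.example.com/customers/${custId}/history`);
  return res.json();
}

async function getBasePrice(productId: string): Promise<number> {
  const res = await fetch(`https://api.example.com/products/${productId}/price`);
  return (await res.json()).price;
}

// The 'product quote' process now lives in the phone App: the developer must
// know the right call sequence, reproduce business rules, and pay one network
// round trip per low-level service.
export async function getProductQuote(custId: string, productId: string): Promise<number> {
  const customer = await getCustomerDetails(custId);
  const history = await checkServiceHistory(custId);
  const basePrice = await getBasePrice(productId);
  const discount = customer.tier === "GOLD" && history.claims === 0 ? 0.1 : 0;
  return basePrice * (1 - discount);
}
```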


Contrast this approach with a more considered one, where a higher-level ‘Find Customer Details’ API is implemented instead. Consumption of the API has been de-skilled: the API solution developer no longer needs any knowledge of the internal processes or implementation details behind it.

Figure B: Getting the granularity right insulates the API developers
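Against Figure B’s model, the coarse-grained version reduces the App developer’s work to a single call. Again, the endpoint name and response shape are illustrative assumptions, not a real product’s API:

```typescript
interface Quote { productId: string; price: number; currency: string; }

// One self-describing call; sequencing, business rules, and data formats
// all stay behind the API boundary.
export async function getProductQuote(custId: string, productId: string): Promise<Quote> {
  const res = await fetch(
    `https://api.example.com/quotes?customer=${custId}&product=${productId}`
  );
  return res.json();
}
```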

Mainframe Middleware

Note, however, that the ‘many small services’ approach can work if the right API middleware layer is present. If a company chooses a design where every discrete business operation has a corresponding service, the API middleware can orchestrate the lower-level services offered by the systems of record and still present the API developer with a simple, high-level API.
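The sketch below suggests what that orchestration might look like inside the middleware layer, using Express for the API surface. The callMainframe connector, transaction names, and field names are placeholders; real products would use their own connectors to CICS, IMS, MQ, or similar:

```typescript
import express from "express";

const app = express();

// Placeholder for whatever connector reaches the systems of record
// (CICS or IMS transactions, MQ, etc.); purely illustrative here.
async function callMainframe(tx: string, payload: object): Promise<any> {
  // connector-specific invocation would go here
  return {};
}

// The middleware owns the orchestration: one high-level API out,
// several low-level transaction calls in, invisible to the API consumer.
app.get("/quotes", async (req, res) => {
  const { customer, product } = req.query;
  const details = await callMainframe("GETCUST", { customer });
  const history = await callMainframe("CHKHIST", { customer, claims: true });
  const price = await callMainframe("GETPRICE", { product, tier: details.tier });
  res.json({ productId: product, price: price.amount, currency: "USD" });
});

app.listen(8080);
```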

It turns out that the API middleware is the key to the whole issue. Provided the middleware enables services to be composed into APIs that match the API developers’ skills and needs, it does not really matter whether the packaging of those services (access, orchestration, data formatting, and so on) is carried out by the middleware alone or in combination with other business service initiatives on the mainframe platform, such as BPEL or BPM.

In short, defining the optimal level of granularity:

  • Decouples the API developers from the implementation details of the operation
  • Ensures that mainframe APIs meet the business needs more closely
  • Keeps the number of APIs and related definitions under control
  • Reduces the development effort required
  • Optimizes performance and network load by limiting the trips to and from the mainframe

Mainframe Data Integration

In fact, user experiences with mainframe integration generally show that a good guideline is to avoid imposing too much of the API model on the mainframe environment. As commented in previous blogs, mainframes are different from other platforms: data is often in proprietary formats, XML is almost never used, the skill set is highly specialized, and expectations of performance, scalability, and reliability are much higher. The key to API enablement success in mainframe environments is therefore to implement only those APIs that are required to achieve company goals. The API middleware should handle as much of the packaging and managing of the various systems-of-record components as possible, so that the APIs it presents stay simple and easy to use.
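As one example of the kind of packaging the middleware should absorb, here is a minimal sketch of mapping a fixed-width, copybook-style mainframe record into the JSON an API consumer expects. The record layout is invented for illustration:

```typescript
// Invented record layout; a real one would come from the COBOL copybook.
const LAYOUT = [
  { name: "custId",  start: 0,  len: 8 },
  { name: "surname", start: 8,  len: 20 },
  { name: "tier",    start: 28, len: 4 },
] as const;

// Map one fixed-width record to the JSON shape an API consumer expects.
export function recordToJson(record: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const field of LAYOUT) {
    out[field.name] = record.slice(field.start, field.start + field.len).trim();
  }
  return out;
}

// recordToJson("00001234Smith               GOLD")
//   -> { custId: "00001234", surname: "Smith", tier: "GOLD" }
```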

-Steve Craggs, Lustratus Research

Want to find out more about API integration? Download the complete free eBook from Lustratus Research.