Integration Made Easy With Ivory - GT Software

TRANSCRIPT

Dr. Alex Heublein:

We have a line of products that we call Ivory Service Architect, and it’s really an integration platform for mainframe environments that very quickly allows you to go out and build SOAP- or REST-based interfaces to modern systems, and do that in a way that doesn’t impact what you’re doing on the mainframe, in a way that’s very, very cost-effective and very quick, without writing any code.

So if you look at the way we do things, there are sort of two scenarios I wanted to walk you through. One of them is what we call inbound integration. And so this is: I’ve got a mobile app, or I have a web application, or I have a partner that needs API access into my mainframe, but I need to do it in a very secure, very reliable fashion, and I need to be able to do it very quickly. Well, we have something called Ivory Service Architect, and there’s a run-time environment that we run that allows you to build the REST or SOAP APIs you need, APIs that connect into the mainframe and allow those callers to get information out of the mainframe, to process transactions on those mainframes, and to get the results of those transactions. And one of the nice things about this run-time environment is it can run anywhere. It can run on the mainframe itself. Or we’re seeing a lot of customers saying, “Hey look, I already have a problem with the capacity on my mainframe as it is. I don’t need to be putting anything else on it.” Well, no problem. We can run the Ivory run-time pretty much anywhere: it’s Java-based, it will run on Windows or Linux virtual machines, it’ll run in Docker containers, it’ll run in Azure or AWS, it’ll run as an OpenShift operator. So the deployment options for this run-time are very, very flexible, depending on your need.
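To make the inbound case concrete, here’s a minimal sketch of what a caller might look like. The host name, path, and response shape here are invented purely for illustration; the actual interface is whatever you model for your own APIs.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BalanceClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint exposed by the run-time environment; the real
        // host, path, security scheme, and payload depend on how the API is
        // modeled. Add whatever authentication your setup requires.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://ivory-runtime.example.com/api/accounts/12345/balance"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // By the time this response arrives, the run-time has already driven the
        // mainframe transaction(s) and mapped the result into JSON.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The point is that the caller only ever sees a plain REST call and a JSON payload; everything mainframe-specific happens behind that interface.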

But the challenge you run into with mainframes is that a lot of times, in order to do one seemingly simple thing, like, “I want to go look up this customer’s account balance,” or, “I’m going to look at their reservation or claims history,” or whatever it is, I might have to touch three or four different systems in order to be able to do that. It’s not a simple matter of, “We’ll go look it up in the database and then pull the data back and send it to the caller.” So we have an Ivory integration workflow engine, and this workflow engine allows you to do very, very complex orchestrations of those integrations. So I can say, “Go out and pull some information from this green-screen application, go execute this CICS transaction, go look something up over here in a VSAM data set,” and I can take all of that information back, sequence it the way I want, and then send it back to the caller in the right format that they want to see it in.
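To show the shape of that kind of orchestration, here is a plain-Java sketch of the flow the workflow engine handles declaratively, with no hand-written code. Every method and field name below is hypothetical; it only illustrates touching several back-end systems, branching on what comes back, and assembling one response.

```java
import java.util.Map;

public class CustomerSummaryFlow {

    Map<String, Object> buildSummary(String customerId) {
        // Step 1: pull profile fields from a green-screen application.
        Map<String, String> profile = greenScreenLookup(customerId);

        // Step 2: run a CICS transaction for the current balance.
        Map<String, String> balance = cicsTransaction("ACCTBAL", customerId);

        // Step 3: branch on the data, e.g. only fetch claims history for policy holders.
        Map<String, String> claims = "POLICY".equals(profile.get("type"))
                ? vsamLookup("CLAIMS", customerId)
                : Map.of();

        // Step 4: merge everything into the JSON-friendly shape the caller expects.
        return Map.of("profile", profile, "balance", balance, "claims", claims);
    }

    // Placeholders standing in for the individual back-end integrations.
    private Map<String, String> greenScreenLookup(String id) { return Map.of("type", "POLICY"); }
    private Map<String, String> cicsTransaction(String txn, String id) { return Map.of("amount", "100.00"); }
    private Map<String, String> vsamLookup(String file, String id) { return Map.of("openClaims", "0"); }
}
```

In the product itself this sequencing, branching, and mapping is drawn as a workflow rather than written as code; the sketch just makes the underlying flow visible.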

And that’s really powerful, because all of this decision logic and all of this integration workflow can be really challenging to develop if I have to write a bunch of code to do it. So, in addition to our run-time environment, which sort of plays translator between these legacy platforms and these modern platforms, we also have something that we call Ivory Studio. And what Ivory Studio is, is a development tool, a Windows application; you’ve got components on the mainframe you can drag and drop into it. And basically you build a workflow, and this workflow can go out and touch many different systems, and it can make decisions. If I get this type of data back, then go look up something here. If I get this other type of data back, then go look up something over here. You can put in complex decision trees, you can manipulate the data that comes back, and ultimately that allows you to go out and generate all the necessary integrations that run in that run-time environment. So it’s a true no-code, drag-and-drop environment that lets you build very complex integrations to your mainframe, and you can do it literally in a matter of days rather than weeks or months or years.

And so there’s a real time-to-market advantage here, and real flexibility: not just to build these APIs and these integrations quickly, but also to be able to change them quickly as the world changes around you.

The second situation that we see is what we call outbound integration, and outbound integration is interesting because not only do I need to be able to call into these systems, but what happens if I have one of these applications on my mainframe that needs to call out into the modern world? So let’s say I’m a bank and I need to do a fraud check to make sure that someone’s transaction isn’t fraudulent, or let’s say I’m an insurance company and I need to go out to an external ratings engine to get a rating for this particular customer’s insurance. Well, traditionally, initiating those transactions from the mainframe and calling modern REST or SOAP APIs was very, very cumbersome and very, very difficult. But using the same development studio that I just showed you, we can also deploy those integrations, those outbound integrations, to our run-time environment, and then we’re able to actually go out and generate small, self-contained COBOL or PL/1 code blocks that act as a subroutine.

So a COBOL or PL/1 programmer says, “I’m just going to call this generated subroutine. I’m going to pass it some data, and it’s going to pass me some data back.” When in reality, under the covers, what’s happening is that little subroutine is talking to our run-time environment, and that run-time environment is going out and making SOAP and REST calls to these external providers. So the mainframe developers are none the wiser; they don’t really know that they’re talking to an external system, they think they’re just calling a subroutine. And so what that does is, A, it shields them from having to learn a lot of unfamiliar technologies, but it also speeds time to value, right? Because the last thing I want to do is try to teach legacy developers all of the intricacies of SOAP and REST and JSON and XML formats and protocols. And I don’t want them to have to know any of that, right? It’s great if they do, but I don’t want to have to take that time and the learning curve that’s required to be able to do that. So we generate the self-contained code blocks that go into subroutines, and those legacy developers can keep doing what they do best while still being able to communicate with the outside world.
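On the modern side of that hand-off, the run-time is doing something like the following on the program’s behalf. This is only a sketch: the fraud-check endpoint, fields, and method name are invented for illustration, not the product’s actual implementation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FraudCheckBridge {

    // Sketch of the outbound call made when the generated COBOL/PL/1 subroutine
    // is invoked. The endpoint and JSON fields are hypothetical.
    static String checkTransaction(String account, String amount) throws Exception {
        String json = "{\"account\":\"" + account + "\",\"amount\":\"" + amount + "\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://fraud-provider.example.com/v1/score"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The run-time then maps this JSON back into the fixed-format fields the
        // calling program passed in, so the COBOL or PL/1 side only ever sees
        // familiar data items, never HTTP, JSON, or XML.
        return response.body();
    }
}
```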

So that’s really what our product does. It allows that bi-directional communication into and out of your mainframe. It abstracts all of this out so that the mainframe doesn’t really know it’s talking to the modern world, to modern, distributed applications, and those applications don’t know they’re talking to a mainframe. And that’s great too. So for instance, if I ever wanted to write a mobile application that communicates with my mainframe, well, all that mobile application is doing is, say, calling a REST API. Now, if I decide to migrate the functionality that it’s talking to off of my mainframe, no problem. As far as the caller is concerned, in this case the mobile application, nothing changes, as long as I make sure that the new code I’m writing implements the same REST interface. So I can change out components on the mainframe, I can change how these things are actually implemented on the back end, and I don’t need to make changes to my front-end application. I don’t need to make changes to my mobile app, I don’t need to make changes to my web app, I don’t need to make changes to any partner integrations that I’ve done. And that’s a really, really nice situation to be in.
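A small sketch of that idea, with an invented base URL and path: the front end depends only on the REST contract, so pointing it at a re-platformed service later on is a configuration change, not a code change.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AccountApiCaller {

    // The only thing the front end depends on is the agreed contract:
    // GET {baseUrl}/api/accounts/{id} returning the agreed JSON fields.
    // Whether baseUrl points at the run-time in front of the mainframe or at a
    // replacement service that implements the same interface, this code stays the same.
    private final String baseUrl;

    AccountApiCaller(String baseUrl) { this.baseUrl = baseUrl; }

    String fetchAccount(String id) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/accounts/" + id))
                .header("Accept", "application/json")
                .GET()
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }
}
```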