Innovating in Uncertain Times
Now more than ever, companies are facing constant pressure to innovate. Learn how to do more with your legacy technology.
Uncertain times call for heightened innovation.
Hey everyone, my name is Jennifer Henderson with GT Software. I want to thank everyone for coming to our webinar, Innovating During Uncertain Times. I will be your moderator today, and we’re very lucky to have Dr. Alex Heublein as our presenter. Alex leads the Sales, Solution Architecture and Strategic Alliances team at GT Software. Prior to joining us here, Alex held a wide variety of executive roles, including vice president of cloud services at Hexagon, vice president of global technology services at Oracle, and he was formerly an HP Distinguished Technologist and chief technologist for HP’s consulting and integration business unit.
So Alex, thank you so much for being with us today. I’ll go ahead and turn things over to you.
Dr. Alex Heublein:
Thanks Jennifer. And again, thanks to everyone that’s joined the webinar. We really appreciate your time. We struggled with the title of this webinar, but I think it’s actually a pretty good title. It’s descriptive of what we’re going to be talking about, which is innovating in uncertain times, with a particular focus on legacy systems. What makes this topic interesting is that, obviously, we’re living in very uncertain times right now, and these uncertain times have come upon us with a speed and a rate of change that’s really unprecedented, I think, in just about anyone’s lifetime. So the question is, what do you do? What do you do in uncertain times? What do you do when you’re dealing with a crisis? What separates the great companies from the okay or average companies out there? And how do you innovate in uncertain times?
Because you see a lot of things happening in the macroeconomic world. You see a lot of things happening from a social standpoint. You see a lot of things happening in businesses today that are dramatically altering the course not only of society and the macroeconomic climate, but also of individual businesses specifically; a lot of plans have changed over the last couple of months. So that’s really the topic of this: how do you go about innovating in uncertain times? In addition to working at GT, I also do a fair bit of academic research, and my area of specialization is innovation research. The real question is, what is innovation? How do companies go out and innovate? What do they do in good times? What do they do in difficult times? And one of the things that you find when you do a lot of research on this topic is that truly great companies are the ones that focus on innovation even in difficult times.
They don’t take their eye off the ball when it comes to innovation. Their pace of innovation might slow down a little bit, and the way that they innovate may change during difficult and uncertain times, but they continue to place an emphasis on innovation. And when they come out of tough times, when they come out of crisis or uncertain economic times, they’re far better positioned to compete and to grow their business as a result. So, there are a few things we’re seeing as a result of the global crisis that’s going on right now, a few challenges as they relate to IT and to business as well.
The first is that we’re seeing a lot of what I’ll call refocusing of IT budgets. Some IT budgets are being reduced, and in other cases we’re seeing existing budgets being refocused onto near-term initiatives versus longer-term, capital-intensive projects. So, for instance, we have a lot of higher education customers, and those customers have shifted some of their spend away from initiatives they had planned and refocused it onto the question of how do we take all of our classes and make them online for our students? All those instructor-led classes they had before are now moving online, so we’re working to help our customers do some of that work.
But you see a refocusing of those budgets. The second thing we see is that slowing economic growth and the challenges we’re seeing from a macroeconomic standpoint are actually driving many organizations to postpone a lot of their legacy system replacement initiatives. We’ve talked to several customers over the last two or three weeks that have said, “Yeah, we were planning on retiring our mainframe systems over the next year or two. A lot of that work has been put on hold. We’re just trying to figure out what we do in the interim.” While legacy system replacement initiatives can save you a lot of money over time, the challenge is that they tend to be very capital-intensive, very labor-intensive projects: shifting off of an environment that a lot of customers have been on for many, many years, or in some cases even decades, onto newer technology. So we’re seeing a lot of those initiatives get put on hold or postponed indefinitely at this point.
The third thing we’re seeing is that people are starting to grapple with the shift to this new normal, and I think people have come to the recognition that this isn’t just going to be a little dip or a little blip. There are long-term economic ramifications of what’s happening, and a lot of companies are having to take a new perspective on how they run their business. We’re seeing this obviously in the travel industry with airlines. We’re seeing it in the higher education industry. We’re seeing it in many different industries that we serve, but they’re really having to take a new look at how they actually run their businesses, and as a result, the way that they use their enterprise data, their transaction processing capabilities, and a lot of their legacy systems capabilities is also changing to fit that new normal.
Again, we’ve seen that happen in the higher education world. We have a good number of customers in the travel, transportation, and hospitality sector, so large airlines are changing their business models and the way they do business. We’ve seen it happen in financial institutions, who are changing the way they do things. We’re even seeing it in the healthcare field, where telemedicine is taking off. You’re seeing a lot of different changes, and all those changes have implications for IT and for the legacy systems that underpin a lot of the organizations that we’re seeing. We’re also seeing it happen in the government sector, where, for instance, unemployment benefit systems are just getting hammered. Some of these are mainframe-based unemployment management systems. So many people are applying for unemployment that it’s crashing systems, so on and so forth.
The way that they run things, the way that they deal with their business in general, is having a knock-on effect on the way they run their IT organizations and the way they react to things. That’s particularly true of legacy systems, because one of the challenges with legacy systems is that, by their nature, they can be slow to change and challenging to change in real time to react to real-time conditions. So a lot of companies are saying, “We’ve got to have a better way of doing this. We’ve got to be able to react faster. We need to be able to move more quickly, given that this economic crisis and the health crisis are really impacting our business.”
We’re also seeing a lot of furloughs and contractor layoffs. One of the first things we’re seeing companies cut, and this is true of a lot of our existing customers as well as other prospects we’re talking to out in the marketplace, is contractors. Sometimes that’s leaving them with critical skillset shortages. We were talking to a customer last week that said, “Yeah, we just cut 800 contractors. Unfortunately, some of those contractors were very, very critical to how we run the IT operations that are necessary to run our business.” And now they’re scrambling to backfill those positions with other people within the company, so on and so forth, but it’s left them with some critical skillset shortages that are going to be challenging to solve in the near term.
And then finally, in light of all of that, the demands for innovation and the demands for efficiency aren’t going away. You’re seeing your customers demand more innovation. Partners are demanding more innovation and more efficiency in the way that you run your business. And we’re seeing this again across the board, this focus on how do I not just innovate for my general business, but how do I come up with innovative solutions to deal with the short-term and medium-term challenges this has all presented to us? So, the expectation level is also remaining the same in a lot of cases. It certainly isn’t shrinking in line with the budget cuts, the technology shifts, and a lot of the critical skillset shortages. That expectation of being able to do more with less is there.
So, what are the implications of that? What are the recommendations that we have for customers that have legacy systems and are caught up in the challenges that we’re seeing out in the marketplace? The first one is: look, now is the time to figure out how you do more with the systems you have. That’s the cold, hard reality: at least for the short to medium term, a lot of organizations, IT organizations in particular, are going to have to figure out how to do more with less using a lot of the existing systems they have. Like I said, a lot of the replacement initiatives have been put on hold, so now’s the time to start figuring out how you’re going to do that, and I suspect a lot of you have already started that work.
The second recommendation we have is that companies really need to focus on cost-effective ways to quickly leverage the power of their legacy systems. I don’t have six months to do this. I don’t have a year to figure these things out. I’ve got a week or a month to figure this thing out, right? So, how can I better harness what I have in my legacy systems? How do I open those legacy systems up to customers and partners in this new normal, in this new world that we’re living in, at least for the short and medium term? But how do I do it cost effectively? There’s an old engineering joke, and I went to engineering school for many years: good, fast or cheap, pick any two. Well, everybody wants everything fast. Everybody wants good stuff, but you also have to be able to do it at a cost-effective price given some of the budget cuts out there.
So, how do I achieve good, fast and cheap all at the same time? That’s a tricky thing to do in any discipline. The third recommendation is that to get more efficient and really drive value for the business in the short to medium term, you’ve got to be able to improve and streamline your existing processes, and to do that, you’ve got to have some automation technologies. You’ve got to be able to go out and use automation to make things more efficient. I’ll give you an example. We had a customer not too long ago that had legacy systems that handled all their customer information. When a customer would call in, for instance, let’s say someone got married and wanted to change their last name, someone had to sit on a bunch of green screen terminals and change 12 different systems in order to update that name, which is really, really challenging.
It took a call center agent about 45 minutes to do all this, and then they hoped that it all worked itself out and the name got changed correctly and spelled correctly in every one of those systems. They were actually able to use one of our solutions to go in and do that in literally a matter of seconds versus 45 minutes, without any human intervention at all. Someone could go to their mobile application and say, “I want to change my last name.” They could put in the information that was required, and it would make updates to those legacy systems, all 12 of them, in four or five seconds versus 45 minutes and having to sit on hold on the phone.
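To make the automation idea concrete, here is a minimal Python sketch of fanning one change out across multiple systems of record. The class and method names are purely illustrative stand-ins, not GT Software’s actual API; Ivory generates this kind of integration without hand-written code.

```python
# Hypothetical sketch of automating a name change across several
# legacy systems of record, in the spirit of the story above.
# Everything here is a stub for illustration.

class LegacySystem:
    """Stand-in for one green-screen/mainframe system of record."""
    def __init__(self, name):
        self.name = name
        self.records = {}

    def update_last_name(self, customer_id, new_last_name):
        # A real connector would drive a screen or transaction here.
        self.records[customer_id] = new_last_name
        return True

def change_last_name_everywhere(systems, customer_id, new_last_name):
    """Fan the change out to every system; collect any failures."""
    failures = []
    for system in systems:
        if not system.update_last_name(customer_id, new_last_name):
            failures.append(system.name)
    return failures

# The 12 systems from the example, updated in one automated pass.
systems = [LegacySystem(f"SYS{i:02d}") for i in range(12)]
failed = change_last_name_everywhere(systems, "C-1001", "Garcia")
print(f"{len(systems) - len(failed)} of {len(systems)} systems updated")
```

The point of the sketch is the shape of the automation: one request fans out to every system of record, and failures are reported rather than silently hoped away.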
So, being able to go out and improve those processes, but doing it with advanced automation technologies, that’s really one of the keys. But you’ve got to be able to put that stuff into place quickly. The time to market, the speed, the velocity that’s required in today’s situation is absolutely critical, and it’s more critical than it’s ever been. The other thing we’ve seen is the skillset shortages we talked about on the last slide. We’re seeing those skillset shortages across the board. A lot of people are furloughing or getting rid of their contractors, including their technology and IT contractors, so people are ending up with core IT skillset shortages, particularly with regard to some of their legacy systems.
So, how do I go out and put some very flexible and efficient tooling in place to take the load off of some of those resources, so they’re not having to write as much code or do as much testing, but still give them the ability to go out and unlock the power of their existing legacy systems? Then, finally, one of the things you see when you look at innovation over the long run is that there are small investments you can make in the short term that really drive some good medium- to long-term gains when you come out of this. When the world comes out and we see the light at the end of the tunnel, which I think will happen fairly soon, how do you make sure that you’re on that upswing as the global economy recovers? How do you make sure that you’re making investments now that will help you drive long-term growth within your business, and have IT support that long-term growth, not just now but as we come out of this and there’s an economic recovery?
Those are some of the recommendations we’ve been making to our customers over the last couple of months. The only problem you run into is that you’ve got all these legacy systems, and like I said before, legacy systems can often be tough to change. They’re certainly not known for their speed of change. And there are good reasons for that. There are good reasons that sometimes these things take time: you want to do this methodically, you want to be sure, because these are mission-critical systems. You don’t want to just go changing them willy-nilly and hope everything works out. There are a lot of processes that need to be put into place, and there’s a lot of methodology and rigor that has to go into making changes, but the reality is the business usually doesn’t care about that.
They’re saying, “I need you to go make this change now. The market has changed, my business has changed, the world has changed. I need my IT systems to change, and I need to do it quickly.” So, one of the questions we get a lot is, why is integrating with these mainframes so challenging? Because we see a lot of customers saying, “I need to integrate with this in new and different ways. Why is this thing so difficult?” There are a couple of reasons. One of them is that a lot of the applications are older than I am. I’ll turn 50 in a couple of months, and some of these applications are actually more than 50 years old, which is pretty frightening when you think about it. They work great, they run really well, they’re very secure, they’re very reliable, but in a lot of cases they’re very brittle, and they’re easy to break if you really, really aren’t careful with what you’re doing.
The second reason is that you’ve got some very, very complex data structures on mainframes, and you’ve got a very high degree of tight coupling. These systems are very, very tightly coupled together. They’re very challenging to integrate with, and they’re very challenging to modify in many cases without a lot of downstream impacts. I need solutions that can integrate with this, rather than having to write a bunch of code changes or a lot of different configuration changes into that environment. And then finally, we still see a huge reliance on green screen applications. It never ceases to amaze me, when I go to different businesses and talk to them, how many people are still typing things into 30, 40, 50 year old green screens. There’s nothing inherently wrong with that, but integrating with those green screen applications, which have a lot of business logic built into them, can be really, really challenging.
So the question is, what do you do? Okay, you’ve got all these challenges, you’ve got some difficulties. How do you do it? Well, there is a better way, and I want to talk to you a little bit about what we bring to the table in the hopes that it might be of help to you going forward. We have a line of products that we call Ivory Service Architect, and it’s really an integration platform for mainframe environments that very quickly allows you to go out and build SOAP- or REST-based interfaces to modern systems, and to do that in a way that doesn’t impact what you’re doing on the mainframe, in a way that’s very, very cost effective and very quick, without writing any code.
Now, I’ll be honest with you. I was a software developer for most of my career. I started my career as a software engineer and if I had a dollar for every time somebody told me they had this magic no-code platform and I’d never have to do anything other than configure some stuff, I probably wouldn’t be on this call. I would probably be living on my own island. I’ve heard these pitches many times before. They say it’s no-code, it’s drag-and-drop, you never have to write a line of code, and in most cases that turns out actually not to be the case but one of the reasons I joined GT Software was that I saw this platform and realized actually this is true. We can actually do this without writing any code and without having to break the bank doing it.
So, if you look at the way we do things, there are two scenarios I wanted to walk you through. One of them is what we call inbound integration. This is: I’ve got a mobile app, or I have a web application, or I have a partner that needs API access into my mainframe, but I need to do it in a very secure, very reliable fashion, and I need to be able to do it very quickly. So, you’ve got your mainframe, and you’ve got your modern cloud applications, or whatever it is that needs to interface into these systems. Well, we have something called Ivory Service Architect, and there’s a runtime environment that allows you to build APIs in either REST or SOAP that connect into the mainframe and allow those callers to get information out of the mainframe, to process transactions on those mainframes, and to get the results of those transactions, so on and so forth.
That’s really what our runtime environment does. One of the nice things about this runtime environment is that it can run anywhere. It can run on the mainframe itself, but we’re seeing a lot of customers saying, “Hey look, I already have a problem with the capacity on my mainframe as it is. I don’t need to be putting anything else on it.” Well, no problem, we can run Ivory Runtime pretty much anywhere. It’s Java based. It will run on Windows or Linux virtual machines. It will run in Docker containers. It’ll run in Azure or AWS. It’ll run as an OpenShift operator. So, the deployment options for this runtime are very, very flexible depending on your needs.
But the challenge you run into with mainframes is that a lot of times, in order to do one seemingly simple thing, like looking up a customer’s account balance, or looking at their reservation or claims history, or whatever it is, I might have to touch three or four different systems. It’s not a simple matter of looking it up in the database, pulling the data back and sending it to the caller. So, we have an Ivory Integration Workflow Engine, and this workflow engine allows you to do very, very complex orchestrations of those integrations. I could say, “Go out and pull some information from this green screen application. Go execute this CICS transaction. Go look something up over here in a VSAM dataset.” Whatever it is, I can take all of that information back, sequence it the way I want, and then send it back to the caller in the right format that they want to see it in.
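As a rough sketch of that orchestration pattern, the Python below pulls from three stubbed back ends and assembles one response for the caller. The function names and canned data are hypothetical, and the real workflow engine builds these orchestrations declaratively, with no code; this only illustrates the shape of the result.

```python
# Stubbed back-end lookups standing in for a green-screen scrape,
# a CICS transaction, and a VSAM read. All data here is canned.

def fetch_from_green_screen(customer_id):
    return {"name": "A. Customer"}

def run_cics_transaction(customer_id):
    return {"balance": "1042.17"}

def lookup_vsam_record(customer_id):
    return {"status": "ACTIVE"}

def account_summary(customer_id):
    """Orchestrate several back-end lookups into one response,
    sequenced and reshaped the way the caller wants it."""
    result = {"customer_id": customer_id}
    result.update(fetch_from_green_screen(customer_id))
    result.update(run_cics_transaction(customer_id))
    result.update(lookup_vsam_record(customer_id))
    return result

print(account_summary("C-42"))
```

The caller sees one coherent answer, even though three very different systems were touched to produce it.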
That’s really powerful, because all of this decision logic and all of this integration workflow can be really challenging to develop if I have to write a bunch of code to do it. So, in addition to our runtime environment, which plays translator between these legacy platforms and modern platforms, we also have something that we call Ivory Studio. Ivory Studio is a development tool, a Windows application. You’ve got components on the mainframe that you can drag and drop into it, and basically you build a workflow, and this workflow can go out and touch many different systems. It can make decisions: if I get this type of data back, then go look up something here; if I get this other type of data back, then go look up something over here. You can put in complex decision trees. You can manipulate the data that comes back, and ultimately that allows you to go out and generate all the necessary integrations that run in that runtime environment.
So, it’s a true no-code, drag-and-drop environment that lets you build very complex integrations to your mainframe, and you can do it literally in a matter of days rather than weeks or months or years. And so you get a real time-to-market advantage and the flexibility not just to build these APIs and integrations quickly, but also to change them quickly as the world changes around you. And who knows what will happen over the next six months, or a year, or 18 months? I don’t know. I don’t think anyone knows definitively, but what I do know is that the ability to adapt and change quickly is very, very important when you’re dealing with changing times.
So, that’s our inbound integration model that lets us generate what we need, that will run in that Ivory Runtime environment. The second situation that we see is what we call outbound integration, and outbound integration is interesting because not only do I need to be able to call into these systems, but what happens if I have one of these applications on my mainframe that needs to call out into the modern world?
Let’s say I’m a bank and I need to do a fraud check to make sure that someone’s transaction isn’t fraudulent, or let’s say I’m an insurance company and I need to go out to an external ratings engine to get a rating for a particular customer’s insurance, so on and so forth. Well, traditionally, initiating those transactions from the mainframe and calling modern REST or SOAP APIs was very, very cumbersome and very, very difficult. But using the same development studio concept that I just showed you, we can also deploy those outbound integrations to our runtime environment. We’re then able to actually go out and generate small, self-contained COBOL or PL/1 code blocks that act as a subroutine.
So, a COBOL or PL/1 programmer says, “I’m just going to call this generated subroutine. I’m going to pass it some data. It’s going to pass me some data back.” But in reality, under the covers, what’s happening is that little subroutine is talking to our runtime environment, and that runtime environment is going out and making SOAP and REST calls to these external providers. So, the mainframe developers are none the wiser. They don’t really know that they’re talking to an external system. They think they’re just calling a subroutine. What that does is, A, it shields them from having to learn a lot of unfamiliar technologies, but it also speeds time-to-value, right? Because the last thing I want to do is try to teach legacy developers all of the intricacies of SOAP and REST and JSON and XML formats and protocols, and so on and so forth. I don’t want them to have to know any of that.
It’s great if they do, but I don’t want to have to take that time and the learning curve that’s required to be able to do that. So we generate these self-contained code blocks, they go into subroutines and those legacy developers can keep doing what they do best while still being able to communicate with the outside world. That’s really what we see from an outbound integration standpoint and not only does that give us the ability to do the inbound integration, it also gives us the ability to do the outbound integration and that’s generally a good thing for the customers that we deal with.
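The outbound pattern just described can be sketched roughly as follows. The service and function names here are invented for illustration, and the stub “runtime” stands in for Ivory’s, which actually handles the REST/SOAP call on the legacy program’s behalf; nothing below is the product’s real API.

```python
# The legacy developer sees only a plain subroutine call: data in,
# data out. Under the covers, the call is proxied to an external
# service. Everything here is a stub for illustration.

def external_fraud_service(payload):
    """Stand-in for an external REST fraud-check provider."""
    return {"fraudulent": payload["amount"] > 10000}

def runtime_call(service_name, payload):
    """Stand-in for the runtime that makes the actual REST/SOAP
    call so the legacy program doesn't have to."""
    return external_fraud_service(payload)

def check_fraud_subroutine(account, amount):
    """What the COBOL/PL-1 side would see: a simple subroutine
    taking parameters and returning a flag."""
    result = runtime_call("fraud-check",
                          {"account": account, "amount": amount})
    return "Y" if result["fraudulent"] else "N"

print(check_fraud_subroutine("ACCT-1", 50000))
```

The design point is the indirection: the generated subroutine is the only surface the legacy developer touches, so the protocol details live entirely in the runtime.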
So, that’s really what our product does. It allows that bi-directional communication into and out of your mainframe. It abstracts a lot of this out, so that the mainframe doesn’t really know it’s talking to the modern world or modern applications, [inaudible 00:23:41] applications, and those applications don’t know they’re talking to a mainframe. That’s great, too. For instance, if I ever wanted to write a mobile application that communicates with my mainframe, all that mobile application is doing is calling a REST API. Now, if I decide to migrate the functionality that it’s talking to off of my mainframe, no problem. For the caller, in this case a mobile application, all I have to do is make sure that the new code I’m writing implements the same REST interface.
I can change out components on the mainframe. I can change how these things are actually implemented on the backend, and I don’t need to make changes to my front end applications: I don’t need to make changes to my mobile app, I don’t need to make changes to my web app, I don’t need to make changes to any partner integrations that I’ve done, so on and so forth. And that’s a really, really nice situation to be in, because as I modernize those systems, I can mix and match between the old world and the new world, and neither of those environments is directly aware of the other.
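That decoupling can be sketched as two interchangeable backends behind one stable interface. The class names and the canned balance below are illustrative only; the real contract would be a REST API rather than a Python class.

```python
# Two backends that honor the same contract. The front end depends
# only on the interface, so swapping the implementation (mainframe
# today, migrated service tomorrow) requires no front-end changes.

class MainframeAccounts:
    def get_balance(self, account_id):
        return 250.00  # stub: would call the mainframe-backed API

class MigratedAccounts:
    def get_balance(self, account_id):
        return 250.00  # stub: same contract, new implementation

def render_balance(backend, account_id):
    """Front-end code, unchanged regardless of backend."""
    return f"Balance: {backend.get_balance(account_id):.2f}"

print(render_balance(MainframeAccounts(), "A-1"))
```

Swapping `MainframeAccounts()` for `MigratedAccounts()` changes nothing for the caller, which is exactly the migration story being described.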
So, let’s talk about a couple of examples. There’s a major US airline that is one of our largest customers, and the challenge they had is that a few years ago there was a big consolidation amongst US airlines. We saw a bunch of larger airlines buy or merge with a bunch of smaller airlines, and in some cases they really weren’t that much smaller. In this circumstance, one of the things the FAA wanted during the merger was covered by a regulation that said, “Look, you can’t have two different aircraft maintenance and parts inventory systems.” Aircraft maintenance is one of the most critical safety factors in an airline. And they said, “Look, you’re going to end up with two aircraft maintenance and parts inventory systems. We want you to merge those together.”
They looked at the problem and they said, “Wow, merging these two things together is going to be a nightmare, but what if we made it look like we had one? What if we built an API layer on top of these aircraft maintenance and parts inventory systems that allowed it to appear to the mechanics and to the inventory managers and procurement professionals as though it was one system and they were none the wiser?” So, that’s actually what they did. They very rapidly generated a set of API’s. They connected to multiple mainframe systems in order to be able to power those web, mobile and other front ends so that people could go out and do maintenance on aircraft, they could go manage parts inventory, and it looked like they were dealing with one system. And, they were able to build a modern front end.
So the mechanics, instead of having to drag a 3270 terminal out to the plane to type in, “Okay, I changed this part in an airplane,” now have a handheld mobile tablet where they put that information in directly as they’re working on the aircraft. So, huge cost savings, but it also enabled them to very rapidly meet the regulatory requirements that were put on them. That was huge for them. It’s part of a larger project, but they’re going to save many millions of dollars a year by doing this, and they did it in a fraction of the time that trying to build these integrations by hand would’ve taken.
Another good case study is a large Swiss bank that we deal with. A bank you’ve probably heard of; these guys never let you use their names on public webinars, but suffice it to say it’s a very large Swiss bank. The challenge they were having was that they needed to rapidly implement the ability to go out and check a known criminal and terrorist database if somebody tried to open an account with them … and being a Swiss bank, a lot of people throughout the world try to use those Swiss banks to do all kinds of nefarious things. So, there was a regulation by the government that said, “I need to be able to go out and verify that this new customer isn’t a terrorist or a known criminal or someone …” So, they had to go out to this World-Check system with a set of API calls, but they needed to be able to call it from their existing PL/1-based mainframe core banking system.
That was a huge challenge for them; being able to call out was a big issue. So, they used Ivory Service Architect to develop both SOAP- and REST-based APIs, without writing any code at all, to go out to the World-Check system and verify that a new customer wasn’t on the watch list or the terrorist list. They were also able to make that API accessible to other systems within their organization, so it wasn’t just the mainframe that could access this and initiate outbound calls; other systems within the bank could do it as well.
And so, they were able to meet all of the functional specs that were in the banking regulations ahead of the required timeframe and they did it at a very small fraction of the cost that it would have taken using traditional methods. They were able to do this very, very quickly, very cost effectively and make changes to it very quickly and very cost effectively as well.
So, just a few key takeaways from this. Hopefully you’ve seen some interesting information here, but some of the key takeaways are, and I said this earlier, that in challenging times you have to do more with less. You have to do more with what you have today. It’s hard to sit around and try to justify the budget for doing brand new things, building new systems, migrating off of old systems. Those economic arguments and those business cases get harder and harder to make when you’re in uncertain times. So what do I do in the interim? The time to act on that is now. Going out and taking a look at products and technologies like these, which very rapidly get you to value and very rapidly enable you to innovate, is absolutely critical, and waiting six months won’t do you any good.
The time to start looking at that, if you haven’t already, is right now. The second takeaway is, look, don’t let uncertain times interfere with your ability to innovate. I mean, it’s going to happen to some extent, but it’s really all about how can I continue to innovate and drive innovation not just for IT but for the business as well? How can I do that and keep up that pace the best I can during those uncertain times? And again, if you go look at this academically, you go look at all the research that’s been done on this, what you find is that great companies, companies with long term, high growth prospects, are the ones that are able to continue to drive innovation even during difficult and uncertain times.
And then, the third takeaway is look, I’ve shown you conceptually what we do. We’re able to help organizations build very complex interfaces into legacy systems in a very small fraction of the time and cost that it takes using traditional methods with Ivory Service Architect. You can go to GTSoftware.com/POC and look, we’re willing to prove it to you. A lot of people say, “Yeah, that sounds great on paper, Alex. That’s fantastic, but the reality is I want you to prove it to me.” Well look, we’re willing to come out and prove it to you.
We’re generally able to set up POCs in a matter of days and actually build real, live, running APIs into your mainframes, a lot of times in a couple of days. So, it’s not a six-week POC. It’s not a six-month POC. We literally do these things in a small number of days. If you’re interested in doing that, I would absolutely encourage you to reach out and register on the website. We’ll be in touch with you and we’ll set something up. We’ll set up a demo for you, and then if you like what you see, we’ll set up a POC to actually talk live to your legacy systems and show you how quickly and easily you can build APIs.
So, that’s what we got. I’m going to turn it back over to Jennifer. I think we’ve got a few questions that we want to cover that have come in over the course of the webinar, so Jennifer, I’m going to turn it back over to you and let’s see what we’ve got from the questions.
Great. Thanks, Alex. You do have a couple of questions. The first one is, “There seem to be a lot of products on the market that claim to integrate with legacy systems. What makes you guys different?”
Dr. Alex Heublein:
Yeah, that’s a really good question. There are technologies out there that will integrate with the mainframe. I don’t think it’s really so much of a claim; they will integrate with your mainframe and they do make it easier. There are a couple of things that really distinguish what we do, I think, from some of those other technologies that are out there. The first is that we have a truly no-code, drag-and-drop platform. That not only gets you to market very quickly, but maybe more importantly, it means you don’t have to take the focus off of what your legacy system developers are doing today.
So, it gives them the ability to go out, do these things very quickly, and then get back to what they were doing before somebody asked them to build an integration. And they don’t have to take the time to learn all of those new technologies that they may not be familiar with. There’s also the potential for errors, right? Whenever you’re learning a new technology, you’re bound to make mistakes. You’re bound to have challenges, and when you’re dealing with complex, mission-critical systems, oftentimes you don’t have the time for people to learn, and you don’t have time for people to make mistakes, either.
So, being a completely no-code, drag-and-drop platform not only shrinks time to market and makes you go faster, it also eliminates a lot of the learning curve and the errors associated with that learning curve. And when you’re doing things like aircraft maintenance, or known-terrorist or criminal checks, you’ve got to get it right the first time. You can’t iterate through it three or four times and get it wrong along the way. That’s stuff that’s got to be done right the first time. So, that’s one aspect of it.
I think the second big differentiator is our ability to very quickly and seamlessly generate the outbound calls: being able to initiate those calls from the mainframe to go out and talk to modern distributed systems that have well-known APIs, and to pull that data back into the mainframe without the mainframe developers needing to know a lot about those new technologies. That’s huge, and it’s one of the biggest things we bring to the table. We’ve spent a lot of time and money figuring out how to make that a seamless process, one that’s very, very easy for those legacy system developers to consume.
Awesome, thanks. The other question that we have says, “I have other integration products that I use. How does this fit in with the rest of my integration architecture?”
Dr. Alex Heublein:
Yeah, another great question, and we get that one a lot. We get a lot of people asking us, “So, you’re trying to be an enterprise service bus?” Or, “You’re trying to be a more traditional integration platform?” And what we tell them is no, we’re not. Look, we’re really focused on one thing, and that is mainframe integration. Generally speaking, we work really, really well with a lot of other integration products. So, if you’ve got TIBCO or MuleSoft or other integration platforms in place, all we do is generate SOAP and REST APIs, so we play very, very nicely with those broader integration platforms. We specialize in mainframe integration, and then we integrate with those platforms to become part of that larger ecosystem.
So, you can think of it this way: we’re not trying to be broad here. What we are trying to do is be very, very deep and very, very good at one very specific thing. We’re not trying to be all things to all people. We have a fantastic solution that we’ve been working on for over a decade, so this isn’t a new product we released six months ago or last year. This is a product that we’ve been making continuous improvements to and selling to customers throughout the world over the last 10 years or so. That gives us the ability to focus our investment and our efforts on being the best in the world at one thing, and then having that one thing fit into the broader integration portfolio and ecosystem that many of our customers have in place.
So, we have plenty of customers that had existing integration platforms and they said, “Okay, we’re going to implement you guys for the mainframe part of this, and then that’s going to become part of our larger integration architecture.” And it’s worked out beautifully in those cases.
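Because the output is standard SOAP and REST, an integration platform consumes the generated mainframe service exactly like any other HTTP endpoint. As a trivial sketch — the gateway URL and resource path here are invented for illustration, not real GT Software endpoints:

```python
from urllib.parse import urlencode, urljoin

# Hypothetical base URL of a generated mainframe REST gateway.
BASE = "https://mainframe-gateway.example.com/api/"

def account_lookup_url(account_id):
    """Build the URL any integration platform (TIBCO, MuleSoft, ...) would call."""
    return urljoin(BASE, "accounts") + "?" + urlencode({"id": account_id})

print(account_lookup_url("12345"))
# https://mainframe-gateway.example.com/api/accounts?id=12345
```

From the integration platform’s perspective there is nothing mainframe-specific in that call, which is why the two layers compose cleanly.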
We want to go ahead and thank everyone for joining us.
Dr. Alex Heublein:
Yeah, absolutely. Thank you everyone. Yes, and please go to that URL that’s on the screen. We’d love to show you what we’ve got. We find the vast majority of people we talk to, when they actually see this thing in action and they see how easy it is, they scratch their heads and they ask the question, “Why? Why have I been doing this the hard way?”
So, we very much encourage you to go check out our website. Check out that URL with the POC. Sign up for that and we’ll be in touch with you shortly. All right. Thanks, everyone. Have a great day and stay safe.