How to Navigate the Complexities of Open Banking
Dr. Alex Heublein discusses the open banking hurdles facing companies today. Learn how traditional banks can utilize Open Banking SmartBridge to overcome these obstacles.
Open Banking SmartBridge can help organizations overcome open banking obstacles
Jennifer Henderson:
Thank you everyone for joining. My name is Jennifer Henderson, with GT Software. Today we’ll be talking about what we’ve seen in terms of open banking adoption and the hurdles that companies are navigating. We’ll also introduce our Open Banking SmartBridge and what that means for traditional banks and financial institutions.
Today, I’m joined by Dr. Alex Heublein. Alex leads the sales, solution architecture, and strategic alliances teams at GT Software. Alex, thank you so much for being here. I’ll go ahead and turn things over to you.
Dr. Alex Heublein:
Perfect, thanks Jennifer. Welcome everyone to the webinar. I figured there were a couple things we would do today. One, give you a little bit of an intro in terms of who GT Software is. Talk a little bit about what we’re seeing and what some of the analysts and experts are seeing out in the open banking world. Talk to you a little bit about a platform that we’ve developed to help our customers navigate and integrate with this new changing landscape. Then we can talk about a couple of success stories that we’ve seen throughout the years as we’ve gone through this.
So let’s just start with a little bit of an overview of GT Software. We’re an industry leader in legacy systems modernization and integration. One of the big challenges we see in open banking is most of the large banks out there have core banking systems that are mainframe based. So the question becomes, how do you go out and integrate with those systems to bring them into the broader financial ecosystem and help modernize those applications and integrate with them as well? We spent many, many years working on technology that can do that and SmartBridge is really the latest iteration of that technology.
We’ve got over 2,500 customers worldwide, so we’ve spent a lot of time doing this. We focus on doing one thing well, and that’s helping our customers integrate and modernize those legacy systems. The platform I want to talk to you about today is a no code, drag and drop platform that lets you create these APIs into these legacy systems to fulfill a lot of the open banking challenges and integration transactions. We’re able to do it literally in a matter of days, rather than the weeks or months that you typically see with most integration platforms, where they’re just trying to do it by hand. You can also see some of our customers. Not only do we have customers in large financial services institutions, but we have a wide variety of customers in other industries as well.
Now what are we seeing in open banking? This is a very rapidly changing landscape, but there are a few trends that we’ve seen thus far in the open banking world. One of the trends we’ve seen is that there’s continued growth in things like real-time payment initiatives. So initiatives like FedNow and other real-time payment initiatives are really starting to take off, and we’re seeing more and more banks and financial institutions really get focused on the benefits of being able to implement real-time payments.
The second thing we’ve seen is really just an exponential growth in the number of open banking API calls. It’s particularly true in Europe, for instance, where we’re seeing massive growth: 15%, 20% compound growth month over month in open banking API calls. We’re also seeing a very similar situation in the U.S. with open banking standards like the Financial Data Exchange, or FDX. While the volumes are still relatively small when you look at the overall number of financial services transactions that are done every day, they’re growing exponentially. Transactions that adhere to these open banking standards are really starting to take off, and the growth rate seems to continue unabated.
The third thing we’ve seen, though, is that reliability, scalability and security are just the foundational cornerstones of being able to do these transactions. And what’s interesting is, we’ve seen a lot of banks out there that have been able to put up some relatively scalable, secure open banking APIs, but they’re not particularly reliable. Or where they are particularly reliable, they run into a lot of latency problems in terms of roundtrip transaction times. Those are really table stakes; if you can’t do it reliably, scalably and securely, then you probably shouldn’t be playing in this arena.
On a similar note, we’re also seeing a huge trend toward fintechs and aggregators. A lot of them are startups, and a lot of them have been around for a few years, and many financial institutions are viewing these fintechs both as a threat and an opportunity. They’re a threat in the sense that there’s the potential to get disintermediated from their customers. But there’s also an opportunity in the sense that they can open up a much wider variety of consumers to the goods and services that these financial institutions provide. It’s a bit of a double-edged sword when you look at fintechs and aggregators.
But one thing we have seen is that a lot of the aggregators out there, and some of the fintechs, started out by screen scraping bank websites. If you gave them the credentials to your bank account, or to the many different bank and financial institution accounts that you have, they could pull back all of that data. But the way they did it was literally going out and screen scraping the login process and then pulling the data down. Not a terribly efficient way of doing things, but we’ve talked to customers that say 40% to 50% of all their web traffic is simply screen scrapers running from fintechs and aggregators. So there has to be a better way of doing that.
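To make that inefficiency concrete, here is a small illustrative sketch of the two access patterns: scraping several pages for one number versus reading the same number from one structured API response. Everything in it, the page content, the response shape, the field names, is invented for illustration; it is not any real bank’s or aggregator’s interface.

```python
# Hypothetical sketch of the aggregator pattern: a screen-scraping flow
# (walk several HTML pages, parse strings) vs. one open banking API call.

def balance_via_scraping(session_pages):
    """Simulate scraping: walk several HTML pages to find one number."""
    for page in session_pages:  # login page, menu page, accounts page...
        if "balance" in page:
            # Fragile string parsing -- breaks whenever the HTML changes.
            return float(page.split("balance:")[1].split(";")[0])
    raise ValueError("balance not found in scraped pages")

def balance_via_api(api_response):
    """Simulate reading one structured open banking API response."""
    return api_response["accounts"][0]["balance"]

# Three page fetches vs. one structured call for the same answer.
pages = ["<html>login</html>", "<html>menu</html>", "<html>balance:1024.50;</html>"]
assert balance_via_scraping(pages) == 1024.50
assert balance_via_api({"accounts": [{"balance": 1024.50}]}) == 1024.50
```

The structured call is one request against a stable contract; the scraping path makes several requests and breaks whenever the page layout changes, which is the inefficiency driving that 40% to 50% of web traffic.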
And then finally, we’ve seen that the open banking and real-time payment landscapes are just changing so, so quickly. They’re growing, as we talked about, but there’s also a tremendous amount of change happening. The specifications are changing, the landscape is changing, the economics are changing, the players within the industry are changing. So there’s a tremendous amount of change, and that really implies the need for a lot of flexibility and adaptability.
So what are the implications of those changes? There are a few that we’ve seen, and there are probably more than what I’ll enumerate on this slide, but we’ve seen a few changes. One of them is that line of business executives are increasingly driving open banking innovation. When open banking was first put into place, particularly in a place like the UK, the government came out and issued a mandate to the nine largest banks in the UK. They said, “You all have to open up your systems.” And so initially open banking was kind of viewed as a government and regulatory compliance initiative, so the line of business executives weren’t terribly interested in it. But what we’ve seen is an evolution towards them figuring out that there’s actually a path to monetization when it comes to open banking. There’s the opportunity to go out and create entirely new services for their customers.
So we’ve seen this shift from regulatory compliance to acceptance to actually viewing it as a monetization initiative now. And so the line of business executives within the financial institutions are getting much more involved, and they’re increasingly driving that level of innovation.
The second implication is that IT organizations have to be ready to securely, reliably and scalably open up these core banking systems to potentially many millions more, or even tens of millions more, transactions per day. So how do you do that? How do you go out there and do it in a way that’s going to scale well, that’s going to be reliable, that’s going to have a very low degree of latency? That’s a very interesting challenge for a lot of these IT organizations that are used to these systems only being opened up to their own internal consumers, rather than to others outside their organization. So that’s potentially a significant challenge for IT.
The third implication is that scaling these systems, and making sure that they can process these transactions in a very low latency, very high volume environment, requires a very different architecture than what I’ll call traditional integration. And so that has implications in terms of the technologies and platforms that you implement to ensure that they can scale, and can do so with very low levels of latency.
The fourth thing we’ve seen, and this has particularly become a problem I think in the last four or five years, is that legacy IT skillsets have become a big bottleneck. A lot of the people that know how these core banking systems work, that know how the integration with these mainframe applications works, are retiring. There’s been a big brain drain in the industry. These legacy IT skillsets will almost assuredly become even more of a bottleneck going forward, so the question is, how do you mitigate that? How do you rapidly evolve at the same pace that the industry is rapidly evolving at? And that’s really the fifth implication: IT organizations are going to have to adapt more quickly than ever before in order to ensure that they can keep pace with the very, very rapid changes that are happening in the industry.
Now there’s just one small problem that we run into and that’s the fact that we’re dealing with mainframe legacy systems. A lot of these systems have been around for a very, very long time and what we find is that they’re very, very difficult to integrate with. So why is this mainframe integration so difficult? Why do we see so many challenges with this? Well let’s talk a little bit about that.
The first challenge in integrating with mainframes is that, frankly, a lot of these applications are older than I am. I mean, they were written sometimes 40, 50 years ago. They’ve evolved over time and they’re very, very reliable; they can process amazing transactional loads. But let’s face it, they really weren’t designed for the world that they live in today. The hardware has evolved, but the software has evolved at a much, much slower pace.
The second challenge is that when you start looking at the technical aspect of integrating with mainframes, you find a couple of things. One is that you’re dealing with very complex data structures and they’re unlike a lot of the data structures that we use in what I’ll call more modern platforms. There’s also a very high degree of tight coupling between applications. So there are a lot of application dependencies, there are a lot of data flows and they’re very difficult to ascertain and understand sometimes.
And then finally, what we see is that there’s still a very heavy reliance on what we call green screen interfaces. These are the old 3270 type interfaces that were designed for a human being to do something with. But sometimes these screens are the only way to get at certain types of information within some of these legacy systems. There’s still a pretty significant reliance on these green screens, and integrating with them can really be challenging.
Let’s talk about the solution that we brought to market. This is what we call GT SmartBridge. It’s built on a platform that we’ve had around for a long time, but we’ve made some enhancements and updates to it in order to be able to work very closely with a lot of the open banking, real-time payment, et cetera types of initiatives that are going on now.
Now there are two types of integration I’d like to talk to you about today. One is what we call inbound integration. And so inbound integration is, “Hey, you know what, I’ve got a mainframe and I’ve got some open banking standards I need to create APIs for.” Then fintechs or whoever are going to call in to my mainframe via those standards and get some information back. This might be an account number lookup, this might be an account balance lookup, give me the last 30 days of someone’s transaction history, et cetera. The question is, how do you do that?
What we’ve built with SmartBridge is a runtime environment that basically plays translator between external callers and the core banking systems that are running in the vast majority of the large banks out there. This runtime environment allows you to generate REST or SOAP APIs that can be accessed from outside. The runtime itself runs pretty much anywhere. It can run on the mainframe, it can run off the mainframe on any Windows or Linux VM, and it can run out in things like Red Hat’s OpenShift environment, Docker containers, et cetera. But the point is that it’s a runtime environment that takes care of all of the translation back and forth between the mainframe and the external callers, and enables you to open them up via standardized REST and SOAP APIs.
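As a rough sketch of that translator role, assume the core system speaks in fixed-width records (the shape a COBOL copybook typically describes) while the caller speaks JSON. The field names, widths, and the stand-in core banking call below are all invented for illustration; a real copybook mapping is far richer than this.

```python
# Hedged sketch of the inbound "translator" pattern: JSON request in,
# fixed-width record to the core system, fixed-width reply back, JSON out.

def to_mainframe_record(request: dict) -> str:
    """Pack a JSON-style request into a fixed-width record (widths invented)."""
    return f"{request['account_id']:<10}{request['operation']:<8}"

def from_mainframe_record(record: str) -> dict:
    """Unpack a fixed-width reply into JSON-friendly fields."""
    return {
        "account_id": record[:10].strip(),
        "balance": float(record[10:20].strip()),
        "currency": record[20:23],
    }

def fake_core_banking(record: str) -> str:
    """Stand-in for the CICS/IMS transaction the runtime would actually call."""
    account = record[:10].strip()
    return f"{account:<10}{'1024.50':>10}USD"

request = {"account_id": "ACCT42", "operation": "BALANCE"}
reply = from_mainframe_record(fake_core_banking(to_mainframe_record(request)))
# reply -> {"account_id": "ACCT42", "balance": 1024.5, "currency": "USD"}
```

The point is that the external caller only ever sees the JSON side; all the record packing and unpacking lives in the runtime.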
One of the challenges is that when you’re dealing with mainframes and the applications that are running on them, as we talked about, there’s this idea of tight coupling. A lot of times you might have to go to three or four different systems to get information that you and I would probably think, “Well, how hard could that information be to go get?” Let’s say I’m going to go get someone’s transaction history for the last 30 days. That may not all be in one system, or not all of the data elements that I need to comply with some of these open banking standards might be available in one system. I might have to go to three or four different systems.
To deal with those complexities within our platform, we built something called an integration workflow engine. This integration workflow engine allows you to orchestrate those transactions between multiple systems within a single API call, package up that information, make complex decisions about the information that you’re seeing, and then send that back in a format that the caller can understand. Again, that could be complying with FDX or open banking standards or whatever comes tomorrow; it literally can be any type of caller.
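As a hedged sketch of that orchestration idea, here the three back-end "systems" are stand-in Python functions, and one workflow call consults all three, applies a decision, and packages a single response. The FDX-style field names are illustrative, not taken from the actual specification.

```python
# Illustrative sketch of an integration workflow: one inbound call fans out
# to several back-end systems, merges their answers, applies a decision, and
# returns one response in the caller's format. All three backends are fakes.

def customer_system(account_id):
    return {"name": "J. Smith", "account_id": account_id}

def ledger_system(account_id):
    return {"transactions": [{"amount": -42.0}, {"amount": 100.0}]}

def risk_system(account_id):
    return {"flagged": False}

def transaction_history_workflow(account_id):
    """Orchestrate three systems behind a single API call."""
    profile = customer_system(account_id)
    ledger = ledger_system(account_id)
    risk = risk_system(account_id)
    # Decision step: suppress history for flagged accounts.
    history = [] if risk["flagged"] else ledger["transactions"]
    # Package one FDX-style response (field names are illustrative).
    return {
        "accountId": profile["account_id"],
        "customerName": profile["name"],
        "transactions": history,
    }

response = transaction_history_workflow("ACCT42")
# One call in, three systems consulted, one merged response out.
```

The caller never knows the transaction history was assembled from multiple systems; that is the workflow engine’s job.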
The other thing that we’re working on right now as part of the SmartBridge platform is building pre-built connectors to many of these API interfaces. That’s really one of the big value propositions that you’ll see with SmartBridge going forward: pre-built connectors, so you don’t have to do all of the wiring back and forth between these systems. You’ll see more and more of these pre-built connectors come out over time. Now the challenge then becomes, well, how do I go out and create these APIs? There’s a runtime environment, it does workflow, it does translation back and forth between different data formats, so on and so forth. Sounds great, but now how do I actually make this into something that will work? How do I define these things?
What we built is the SmartBridge Studio. This is sort of a traditional development environment, except that it’s a completely drag and drop, no code environment that allows you to pull the mainframe components that you need into a workflow. You define what you need from those components: which data elements need to get passed back and forth, and what format they need to get passed back and forth in. Then you simply, visually design a workflow that allows the runtime environment to make decisions, to process information, to reach out to different systems. From that studio we’re able to generate everything that’s needed: all of the integration code required to run in that runtime environment.
And so this has a couple of implications, right? One is, it’s very, very easy to build these APIs. The second thing is that it doesn’t require a tremendous amount of mainframe knowledge. Nor does it require a tremendous amount of knowledge about modern standards like REST and SOAP and XML and JSON. We handle almost all of that complexity for you. Big, big game changer. Instead of writing potentially thousands of lines of code integrating with these mainframes, we’re able to generate all the code that’s necessary to deploy into that runtime environment. That gives you the ability not only to create these APIs very quickly, but we talked earlier about how rapidly these standards are evolving, how quickly the industry is changing. This allows you to make extremely rapid modifications to these workflows as the industry demands.
Now the second scenario I’d like to talk to you about is just the reverse of that. It’s what we call outbound integration. In inbound integration, great, there’s an external caller, they call in, they go through the workflows, so on and so forth, and they get a result back. But what happens if I have an application running on my mainframe that needs to make a call out to the outside world, to these modern standards and these modern systems? For instance, I might have an application that I wrote 30 years ago and I’d like to take advantage of a new third-party fraud detection system. How do I do that? How do I make a mainframe initiated call, where this older application that used to use its own internal fraud detection algorithm now wants to call out to an outside provider for fraud detection services? That’s a very, very challenging thing to do from a mainframe. We handle that scenario as well.
What we’re able to do is generate very small, what look like COBOL or PL/1, code blocks. Really just subroutines that act as a proxy back out to the SmartBridge runtime environment. Then they can call the workflows that go out and integrate with these third-party components. And this is fantastic because the COBOL or PL/1 programmers, the traditional mainframe programmers, all they see is a subroutine that they know exactly how to deal with. They know that they’re going to pass the subroutine some information and they’re going to get some information back. They have absolutely no idea how any of this is implemented on the outside.
That’s great for a couple of reasons. One is that it shields legacy developers from having to learn unfamiliar technologies, which can take quite some time, so it really speeds time to value. The other neat thing about it, though, is that I can create these subroutines and have them call out to one fraud detection vendor. And if I decide a year from now, you know what, I found a better fraud detection vendor and I’m going to switch to them, I don’t have to make any changes to those applications running on my mainframe. They still call the same subroutine. It’s just simply calling a different workflow now, one that calls out to the new fraud detection vendor.
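That vendor-swap point can be sketched like this: the "subroutine" the legacy side calls stays fixed, while the workflow behind it gets repointed. Both vendors and their scoring rules are made up for the example.

```python
# Sketch of the outbound proxy pattern: the legacy program always calls the
# same stable interface (check_fraud), and the workflow table decides which
# external vendor actually gets invoked. Swapping vendors is a configuration
# change, not a change to the calling program. Vendor logic is invented.

def vendor_a_score(txn):
    return 0.9 if txn["amount"] > 10_000 else 0.1

def vendor_b_score(txn):
    return 0.8 if txn["amount"] > 5_000 else 0.2

# The "workflow" the proxy routes to; repoint this entry to switch vendors.
ACTIVE_WORKFLOW = {"fraud_check": vendor_a_score}

def check_fraud(txn):
    """Stable interface the COBOL/PL/1 side sees: data in, data out."""
    score = ACTIVE_WORKFLOW["fraud_check"](txn)
    return {"suspicious": score > 0.5, "score": score}

txn = {"amount": 20_000}
assert check_fraud(txn)["suspicious"] is True

# Switch vendors without touching the caller:
ACTIVE_WORKFLOW["fraud_check"] = vendor_b_score
assert check_fraud(txn)["suspicious"] is True  # same call, new vendor
```

The mainframe application only ever knows about `check_fraud`; everything to the right of it can change freely.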
This is a very, very powerful idea. If I can do this, I can truly create outbound integration. I can go inbound or I can go outbound. That gives me a tremendous amount of flexibility in how I deploy my computing resources. It also gives me a tremendous amount of flexibility in terms of the types of standards and the use cases that I can support going forward.
Another capability, and this is actually a real example of a workflow that we put together. This is an inbound integration scenario. But I also wanted to mention the fact that, like I said earlier, we don’t just need to integrate with existing programs. Sometimes the only way to get to these legacy systems is to go through these old green screens. This is an example where we took four different green-screen based systems, over here on the right, created a workflow within SmartBridge for them, and then were able to serve up that information on a standardized webpage. This is an insurance example, but it has, for instance, your information, your dependents, and your claims history. You could imagine a scenario like this in the banking world. We have very, very powerful green screen processing capabilities for those times where the only way to get to the information you need is through those green screen terminal sessions.
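A hedged sketch of that green screen idea: a 3270-style screen is a fixed grid of characters, so a scraping workflow reads fields from known row and column positions on each screen and merges them into one record, the way the insurance example merged four systems. The screen contents and field positions here are invented for illustration.

```python
# Illustrative sketch of green-screen (3270-style) field extraction: each
# legacy screen is a fixed-width character grid, so fields live at known
# row/column positions. Two stand-in screens are merged into one record.

def screen_field(screen: list, row: int, col: int, width: int) -> str:
    """Read a fixed-position field from a screen buffer."""
    return screen[row][col:col + width].strip()

# Stand-in screen buffers (rows of a terminal display, truncated here).
policy_screen = ["POLICY INQUIRY", "POLICY NO: P-1001   STATUS: ACTIVE "]
claims_screen = ["CLAIMS INQUIRY", "OPEN CLAIMS: 02     LAST: 2020-03-01"]

merged = {
    "policy": screen_field(policy_screen, 1, 11, 8),
    "status": screen_field(policy_screen, 1, 28, 7),
    "open_claims": int(screen_field(claims_screen, 1, 13, 2)),
}
# merged -> {"policy": "P-1001", "status": "ACTIVE", "open_claims": 2}
```

The brittle part is exactly what Alex describes: the field positions are the contract, so a platform that manages the screen navigation and extraction for you saves a lot of pain.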
People often ask us, “Where does SmartBridge fit in? Where does it fit into sort of the big picture?” Over here, you’ve got your mainframe and your mainframe assets. You’ve got CICS and IMS systems. You have different databases, different file systems, batch jobs, et cetera. We really sit there as the bridge between those things and a lot of other systems. We can have direct interfaces that call out to fraud detection or anti-money laundering. We can also implement inbound calls directly from things like FDX and UK open banking standards. But we also work with a lot of partners’ products. We can fit very nicely into an API management solution, where our APIs can be managed through that solution. We can fit very nicely into an enterprise service bus, because we’re really just another endpoint. As well as other application integration and analytics platforms.
Really, what you can think of us as is the best in the world at making all of those legacy resources available to the outside world. And then vice versa, enabling those legacy resources to make calls to modern systems in order to enable functionality like we talked about.
Let’s talk about a couple of real world examples here. This was a large French bank, and their challenge was that they were absolutely trying to make sure that they could make these outbound calls that we talked about, to do things like processing real-time payments, detecting fraud, and complying with know your customer guidelines. They needed a real-time solution, and what they were particularly interested in was making sure that they could make calls to FIS’s Clear2Pay system from a legacy COBOL core banking application in order to do real-time payments.
And so the beauty of using a technology like ours is that they were actually the first bank in France to be able to implement a real-time payment within the country. The interesting thing is that they did it without any coding. They moved from proof of concept to production in under two months. Now that would be a pretty amazing statistic in and of itself. But remember what we’re dealing with here. We’re dealing with real-time payments, where real money is moving around, and so it’s tremendous to be able to go from proof of concept to actual production, where they are using it to do real-time payments, in under two months.
The second example is a large Swiss bank. You may have heard of this bank before. They needed to rapidly implement the ability to verify the status of new accounts and new customers against the World-Check system, which really tells you, is this a known terrorist or a known criminal or whatever. They needed to do it with a uniform set of API calls that could be called from existing PL/1 programs in their core banking system. They also wanted it to be callable from anywhere within the organization. Not just their mainframe systems, but other applications could call into these interfaces as well.
Using our solution, they were able to develop both SOAP and REST based versions of these interfaces without writing any code at either the integration layer or the mainframe layer. They were also able, like I said, to make these APIs available to other systems within the bank going forward. It was a single set of callouts to these systems, all done in a very, very standardized way. They were able to meet the challenge and hit the timeframe that the banking regulators required, ahead of the compliance deadline. And they did it at a cost that was literally a fraction of what it would have been using traditional methods of integration. Huge success there as well.
Let’s do a bit of a wrap up here in terms of key takeaways and next steps. I think the first takeaway you should come away from this webinar with is that, look, the open banking landscape is changing rapidly. The time to get ahead is now, not after all the standards are completely worked out and it’s all a very well-known capability. Using a technology like SmartBridge gives you the ability to evolve and expand in a very flexible and adaptable fashion. That’s very, very important because it lets you stay out ahead of the curve without incurring the huge costs of having to go back and rebuild integrations and redo what you’ve done before. You can do all of that very, very quickly with SmartBridge.
The second takeaway is that slow, insecure, or unreliable legacy integration is a very, very big inhibitor to success. You wouldn’t believe the number of people we talk to that say, “I would’ve had this implemented six months ago or a year ago, but the time it’s taking me to go out and build these legacy system integrations, to build these calls into my core banking system, is just killing me. It’s killing the timelines. It’s killing my ability to innovate.” That’s a huge problem for a lot of financial institutions that we talk to.
And then finally, the third key takeaway is that if you want to keep pace with this ever changing landscape, you have to choose a secure, scalable and adaptable integration platform. That’s a key decision point. We believe we’ve brought the right mix of security, scalability and adaptability to a single platform that will allow you to fulfill the needs of your business as an IT organization going into the future. With that, I will turn it back over to Jennifer and we’ll take some questions.
Jennifer Henderson:
Okay. Thanks so much Alex. We’ve actually got a couple of questions from the chatroom, so I’ll go ahead and ask those to you. The first question is: you mentioned legacy skillsets are a growing problem. Can you explain exactly how the Open Banking SmartBridge helps with the legacy skillset shortage?
Dr. Alex Heublein:
Yeah, absolutely Jennifer. You’re right, we’re seeing a huge challenge with these legacy skillsets, right? Like I said earlier, a lot of the people are retiring; there’s been a pretty significant brain drain. Trying to find people that know these technologies is not only becoming increasingly difficult but increasingly expensive. There are a couple of ways that we mitigate that. One of them is the nature of the platform: it’s drag and drop, no code development. This makes it so the people developing these integrations don’t have to be mainframe gurus. They don’t have to be expert COBOL or PL/1 programmers. They literally just have to know how to import things like copybooks, or run through a wizard, to be able to generate these APIs.
And so that takes a lot of the burden off of the legacy developers that are still left. They’re not having to sit around and write a bunch of code to do these things. They can enable someone that’s maybe only a little bit familiar with mainframe technology to go build APIs, and then they can check them and make sure, okay, yes, you called the right program. It can reduce the amount of effort and labor that’s required of those legacy programmers. It can also free up a lot of their time to go work on initiatives that are going to drive value for the business.
Jennifer Henderson:
Okay, great. Thanks. The next question is, why do legacy system integrations end up taking so much longer than expected?
Dr. Alex Heublein:
Yeah, great question. I think there are two reasons for it. One is that these systems are mission critical. I mean, they’re core banking systems, right? These literally run the business. And so if you look historically at the pace of change, you always want to be careful when you’re making changes to systems that run your entire business. I think part of it is cultural: there’s an expectation that the pace of change not only isn’t, but shouldn’t be, quite as fast, because you want to make sure all your ducks are in a row, or your i’s are dotted and t’s crossed, whatever analogy you want to use. So I think there’s a cultural aspect to it.
And again, I think there’s also the aspect to it that it’s just actually technically very difficult. Because you have people that are not used to, for instance, making callouts to SOAP or REST interfaces or dealing with data formats that they’re unfamiliar with like JSON or XML. I think it’s a combination of culture and it’s also a combination of the skillsets that are there and the technologies being involved that people tend to underestimate how long these things are going to take. When in reality, a tool like ours can go out and generate these things literally in a matter of hours or days rather than the weeks or months that we often see them take.
Jennifer Henderson:
Okay, great. The third question is, Alex, do you have any examples of how a line of business executive could use the SmartBridge?
Dr. Alex Heublein:
Yeah, absolutely. We see many, many different use cases. What you find with line of business executives, and this isn’t just true in banking, it’s true in many, many industries, in fact I would argue almost all industries, is that one area they’re really focused on is customer service and customer satisfaction. How do we do more for our customers? How do we create services and enable self-service, particularly in industries like banking? We see a lot of our customers using SmartBridge technology to create interfaces that allow their customers to do more self-service. Not only is it better for their customers, it’s better for them, because they don’t have to get bankers and agents and people on the phone to do a lot of these things. They can empower their users and customers to do a lot of things that traditionally took a lot of manual effort. I think that’s one trend that you see.
Jennifer Henderson:
Awesome, great. Thank you Alex for your time today, and thank you to everyone who was able to join us. If anyone listening has any more questions or wants to know more about open banking, please reach out to us on social media or visit us at gtsoftware.com. Have a great day. Bye.