At NexJ, the pioneer of intelligent customer management with client engagement products designed for the financial services industry, we sought to capture the full addressable market by breaking down the monolith and going API-first. Many organizations make this shift for scalability and connectivity, but the value-add can be far greater. Here is a deeper dive into our implementation journey with Kong.
*This post was written by Jelena Duma, Senior Director of Enterprise Architecture, and originally published on Kong's blog on 07/22/2021.*
Implementation With Zero-Trust
Let's zero in on the most critical feature that Kong offers us: security. Since NexJ applications are built for the financial services industry, zero-trust security is the number one priority. NexJ has very strict API and security standards, some based on Open Web Application Security Project (OWASP) standards. We integrated Kong with our identity provider by using the OpenID Connect (OIDC) plugin so that, for each request, the JWT is validated at the gateway level.
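As a rough sketch, an OIDC integration like this can be declared as a KongPlugin resource. The issuer URL, client ID, and resource name below are placeholders, not our actual configuration:

```yaml
# Hypothetical sketch: enable the OpenID Connect plugin so Kong validates
# the JWT in each request against the identity provider.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: oidc-auth
plugin: openid-connect
config:
  issuer: https://idp.example.com/auth/realms/apps  # placeholder identity provider discovery endpoint
  client_id:
    - kong-gateway                                  # placeholder client registered with the IdP
  auth_methods:
    - bearer                                        # accept and validate JWT bearer tokens at the gateway
```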
We implemented validation of the API key for each tenant or client by using the Key Authentication plugin. We use two-way SSL (mutual TLS) between our microservices, and we hardened cross-origin resource sharing (CORS) by using Kong's CORS plugin. Since our microservices run on Kubernetes, we use the Kong Ingress Controller to route our services and set up the load balancer per cluster. We use Kong's Request Transformer Advanced plugin to transform our requests for health checks.
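In the same declarative style, the API-key validation and CORS hardening can be expressed as KongPlugin resources. This is a hedged sketch; the key name, origins, and methods here are illustrative only:

```yaml
# Hypothetical sketch: per-tenant API-key validation via the key-auth plugin.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: tenant-key-auth
plugin: key-auth
config:
  key_names:
    - apikey                       # placeholder header carrying the tenant's API key
---
# Hypothetical sketch: hardened CORS restricted to known origins and methods.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: hardened-cors
plugin: cors
config:
  origins:
    - https://app.example.com      # placeholder: only trusted origins are allowed
  methods:
    - GET
    - POST
  credentials: true
  max_age: 3600
```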
Each microservice in our infrastructure is built and deployed independently: it runs in its own Docker container within a Kubernetes cluster, which provides the orchestration and the internal network where containers can communicate and make use of their resources. We use all of the important Kubernetes objects, such as the following (a minimal sketch of how they fit together appears after the list):
- Pods and Docker containers
- Master nodes that manage the worker nodes
- Kubernetes services that allow pods to communicate with each other
- Deployments that manage a set of pods
- Ingresses that allow pods to communicate with the network outside of the pods (in our case, Kong)
- ConfigMaps and Secrets for external configuration
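To make the list concrete, here is a minimal, hypothetical manifest for one microservice that ties several of these objects together; all names and images are placeholders:

```yaml
# Hypothetical sketch: a Deployment manages the pods, a ClusterIP Service lets
# other pods reach them, and a ConfigMap/Secret supply external configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contacts-service                  # placeholder microservice name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: contacts-service
  template:
    metadata:
      labels:
        app: contacts-service
    spec:
      containers:
        - name: contacts-service
          image: registry.example.com/contacts-service:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: contacts-config     # external configuration
            - secretRef:
                name: contacts-secrets    # credentials kept out of the image
---
apiVersion: v1
kind: Service
metadata:
  name: contacts-service
spec:
  type: ClusterIP                         # reachable only inside the cluster
  selector:
    app: contacts-service
  ports:
    - port: 80
      targetPort: 8080
```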
We used a declarative approach to set up Kong Gateway and the Kong Ingress Controller, as well as all other resources. We used YAML files to configure pod services and ingress resources, and we used custom resource definitions (CRDs) and Kubernetes-native tooling to configure Kong. This approach is Kubernetes-friendly: configurations can be version-controlled and automated, and rollbacks are simpler and faster.
We used Kubernetes ingress resources to set up Kong's workspaces, routes, services, and consumers. In our case, Kong's workspaces map to our environments, Kong services map to our microservices, Kong's routes map to the endpoints that access our applications, and Kong's consumers are our applications with the tenants that subscribe to them. With a declarative approach through YAML files, we configured plugins for authentication, authorization, request and response transformations, CORS, etc.
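A hypothetical example of this mapping, reusing the placeholder plugin names from the sketches above: an Ingress routes an endpoint to a microservice and attaches the plugins, and a KongConsumer represents a subscribing application or tenant. Hosts, paths, and names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: contacts-route
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: oidc-auth, tenant-key-auth, hardened-cors  # plugins sketched above
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /contacts
            pathType: Prefix
            backend:
              service:
                name: contacts-service
                port:
                  number: 80
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: tenant-a
  annotations:
    kubernetes.io/ingress.class: kong
username: tenant-a
credentials:
  - tenant-a-apikey   # placeholder Secret holding this tenant's key-auth credential
```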
Building Out the Architecture
The architecture of our applications is set up in a standard, containerized way. We followed both Kubernetes and Kong setup best practices so that we can integrate with third-party cloud applications as simply as possible. In our cloud environment, the Kong Ingress Controller and Kong Gateway are set up per Kubernetes cluster. When requests come from outside our cloud, they first reach the web application firewall (WAF) and then go through the load balancer, which is configured by the Kong Ingress Controller.
From the load balancer, requests are distributed over the gateway to the applications. The load balancer is configured by the Kong proxy service. Since all access to the APIs is managed through the gateway and ingress resources, our Kubernetes services are of type ClusterIP, which means they cannot be accessed directly from outside the cluster, a strong security measure.
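As an illustration of this edge setup, the proxy Service in front of Kong Gateway is the only LoadBalancer-type Service, while application Services stay internal. The labels and ports below are assumptions based on Kong's defaults, not our production values:

```yaml
# Hypothetical sketch: the only LoadBalancer-type Service in the cluster,
# so all external traffic enters through the gateway.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
spec:
  type: LoadBalancer        # provisions the cloud load balancer in front of Kong
  selector:
    app: kong-gateway       # placeholder label for the Kong Gateway pods
  ports:
    - name: proxy-tls
      port: 443
      targetPort: 8443      # Kong's default TLS proxy listener
```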
As each of our applications consists of multiple microservices running as containers in pods, they are exposed through Kubernetes services. Each of our environments is in its own Kubernetes cluster. Some of the databases we use are managed services, and some are deployed in our clusters.
Since each environment is in its own cluster, data is not shared between environments, which is one of our critical security requirements. Communication between microservices is done through mutual TLS. We also have cert-manager deployed in the cluster, which manages certificates, their expiration, and their setup.
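A sketch of what such a cert-manager resource can look like; the issuer, duration, and DNS names are placeholder values, not our production settings:

```yaml
# Hypothetical sketch: cert-manager issues and renews the certificates used
# for mutual TLS between microservices.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: contacts-service-mtls
spec:
  secretName: contacts-service-tls   # Secret mounted by the microservice
  duration: 2160h                    # 90 days
  renewBefore: 360h                  # renew 15 days before expiry
  dnsNames:
    - contacts-service.default.svc.cluster.local
  issuerRef:
    name: internal-ca-issuer         # placeholder cluster-internal CA issuer
    kind: ClusterIssuer
  usages:
    - server auth
    - client auth                    # both sides of the mTLS handshake
```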
Looking toward our roadmap, we aim to extend our Kong Gateway setup. We want to expand our usage of the Developer Portal and enable API tracing to increase observability and improve troubleshooting. We are also looking into bringing the Developer Portal to our internal teams so they can subscribe to our own products and services, get a dedicated API key (just as regular clients do) and use it for developing their applications.
This would advance our engineering transformation toward an API-first approach. With the shift toward GitOps and infrastructure as code, we are also trying out Argo CD and automating our pipelines for easy cluster setup.
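As an illustration of that GitOps direction, an Argo CD Application can sync a cluster's declarative configuration (including Kong resources like the sketches above) from a Git repository. The repository URL and paths below are hypothetical:

```yaml
# Hypothetical sketch: Argo CD keeps the cluster in sync with Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/cluster-config.git  # placeholder repo
    targetRevision: main
    path: environments/production
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```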
Finally, we are evaluating Kong Konnect, Kong's recently announced, generally available service connectivity platform. As Konnect becomes Kong's standard SaaS enterprise platform, we are looking into offloading NexJ's operations and maintenance costs by running Kong Gateway and its Postgres database as a managed solution.