Deployment Architecture of the wicked API Portal
The API Portal relies on proven technology to manage traffic: the portal itself either sits behind an HAProxy or, on Kubernetes, leverages standard Ingress Controllers, while the actual API traffic is proxied by the excellent API Gateway Kong by Mashape.
The portal components are implemented to be as lightweight as possible, using node.js.
When deploying on a single Docker host, the "Kickstarter" can create suitable docker-compose.yml files which include an HAProxy. For Kubernetes, the Helm chart utilizes existing Ingress Controllers.
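For illustration, a generated docker-compose.yml might look roughly like the following sketch. The service names, image names and ports are assumptions, not the exact output of the Kickstarter:

```yaml
# Illustrative sketch only -- names, images and ports are assumptions.
version: '2'

services:
  haproxy:
    # Terminates TLS and routes traffic to the portal and the gateway
    image: haproxy:alpine
    ports:
      - "443:443"
    depends_on:
      - portal
      - kong

  portal:
    # The portal UI, a lightweight node.js service
    image: haufelexware/wicked.portal
    environment:
      - PORTAL_API_URL=http://portal-api:3001

  kong:
    # The API Gateway handling the actual API traffic
    image: kong
```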
The API Portal can be deployed to any environment which runs a Docker host. This can be a single VM or a Swarm environment. You can deploy either with a docker-compose file or with an orchestrator such as Kubernetes, leveraging the provided Kubernetes Helm chart.
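In practice, the two deployment routes boil down to a few commands. The Helm repository URL and chart name below are placeholders for illustration, not the official locations:

```shell
# Single Docker host: bring up the stack generated by the Kickstarter
docker-compose -f docker-compose.yml up -d

# Kubernetes: install via the Helm chart
# (repository URL and chart name are illustrative placeholders)
helm repo add wicked https://example.org/wicked-charts
helm install my-portal wicked/wicked
```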
Depending on your high availability requirements and the SLAs you offer your clients, you are free to choose how to deploy the API Portal and Gateway.
Behind the scenes, the API Portal runs as a set of small containers, microservice style. The portal functionality is simple to extend using the Portal API, which is also how the "Mailer", "Chatbot" and "Kong Adapter" work in the background.
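As a rough sketch of this extension pattern, an add-on service authenticates against the Portal API and requests resources over plain HTTP. The endpoint path and header name below are assumptions for illustration, not the documented wicked API:

```javascript
// Sketch of how an extension service (like the Mailer or Chatbot) might
// address the Portal API. The resource path and the 'X-UserId' header
// are hypothetical -- consult the wicked documentation for the real API.
function buildPortalApiRequest(apiUrl, resource, machineUserId) {
  return {
    // Join base URL and resource, tolerating a trailing slash on the base
    url: `${apiUrl.replace(/\/$/, '')}/${resource}`,
    method: 'GET',
    headers: {
      'X-UserId': machineUserId, // hypothetical machine-user header
      'Accept': 'application/json'
    }
  };
}

// Example: an extension asking for its pending events
const req = buildPortalApiRequest(
  'http://portal-api:3001/', 'webhooks/events/mailer', '1');
console.log(req.url); // http://portal-api:3001/webhooks/events/mailer
```

The point is only that extensions are plain HTTP clients of the Portal API; they need no privileged access to the portal's internals.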
Scaling your API Gateway is supported out of the box, and depending on your runtime orchestration, so is setting up high availability, for the Gateway (Kong) as well as the Portal UI, API and Authorization Server.
To achieve this, wicked stores all runtime configuration in a Postgres instance, and sessions and other performance-critical data in a configurable Redis instance.
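These backends are typically wired in via environment variables. The variable names below are assumptions for illustration only; check the wicked deployment documentation for the actual names:

```shell
# Hypothetical environment configuration (variable names are illustrative)
PORTAL_STORAGE_PGHOST=postgres.internal    # Postgres: runtime configuration
PORTAL_STORAGE_PGPORT=5432
PORTAL_SESSION_REDIS_HOST=redis.internal   # Redis: sessions, caching
PORTAL_SESSION_REDIS_PORT=6379
```

Because all state lives in Postgres and Redis, the portal containers themselves stay stateless and can be scaled horizontally.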