Kong is a microservice API gateway: an abstraction layer that manages client-to-microservice communication securely via APIs. It is sometimes called an API gateway, API middleware, or service mesh. Kong became an open-source project in 2015, and its core values are high performance and extensibility.
Kong runs inside Nginx as a Lua application, enabled by the lua-nginx-module.
Why use Kong?
Kong is useful when you need common functionality in front of your actual software, whether it serves web, mobile, or IoT (Internet of Things) clients. It can act as a gateway (or sidecar) for microservice requests while also providing logging, load balancing, authentication, rate limiting, transformations, and more through plugins.
Kong also shortens development time and supports configurable plugins. It has a large community behind it, which helps keep your deployment stable.
Kong offers security plugins such as ACL, CORS, Dynamic SSL, and IP Restriction, as well as useful traffic-control plugins such as rate limiting, request size limiting, and response rate limiting.
Analytics and monitoring plugins, including Prometheus, Datadog, and Runscope, let you visualize, inspect, and monitor traffic.
Transformation plugins such as Request Transformer and Response Transformer modify requests and responses on the fly.
Logging plugins record request and response data over TCP, UDP, HTTP, StatsD, Syslog, and others.
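As a small illustration of the plugin model, a plugin such as rate-limiting can be enabled on a service with a single call to Kong's Admin API. This is a sketch only: it assumes an Admin API listening on localhost:8001 and a previously registered service named example-service (a hypothetical name, not one created in this tutorial).

```shell
# Enable the rate-limiting plugin on the (hypothetical) service "example-service",
# allowing at most 5 requests per minute per client.
curl -i -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=5"
```

Once enabled, Kong answers requests that exceed the limit with HTTP 429 instead of proxying them upstream.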
This tutorial shows how to set up and use Kong. Familiarity with Docker and REST APIs is assumed; refer to their documentation as needed.
How to Install Kong Community Edition
Kong can run in multiple operating environments. The easiest installation is with Docker; follow the instructions below.
Install Kong with Docker
1. Create a docker network for Kong and API server
$ docker network create kong-net
2. Run a database. You can choose Postgres or Cassandra; this tutorial uses Postgres
$ docker run -d --name kong-database \
--network=kong-net \
-p 5555:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:9.6
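If you would rather use Cassandra, the equivalent container (per Kong's installation docs of this era) would look like this; the rest of the tutorial assumes Postgres, so KONG_DATABASE would also have to change accordingly:

```shell
# Cassandra alternative to the Postgres container above
docker run -d --name kong-database \
  --network=kong-net \
  -p 9042:9042 \
  cassandra:3
```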
3. After the database is ready, run the migrations with a temporary Kong container
$ docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
kong:latest kong migrations up
4. Once the migrations are complete, start the Kong container
$ docker run -d --name kong \
--network=kong-net \
-e "KONG_LOG_LEVEL=debug" \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 9000:8000 \
-p 9443:8443 \
-p 9001:8001 \
-p 9444:8444 \
kong:latest
5. Check the Kong instance with the request below
$ curl -i http://localhost:9001
The successful response is
HTTP/1.1 200 OK
Server: openresty/1.13.6.2
Date: Wed, 18 Jul 2018 03:58:57 GMT
Content-Type: application/json
Connection: keep-alive
Access-Control-Allow-Origin: *
Kong is now fully running. The next task is to prepare an API server that contains the service routes and supports REST.
Use Node.js to set up API server routing
$ ls -l
total 48
-rw-r--r--  1 farendev staff    186 Jul 18 11:37 Dockerfile
-rw-r--r--@ 1 farendev staff  31716 Jul 16 10:36 Kong.postman_collection.json
-rw-r--r--  1 farendev staff    100 Jul 18 11:37 README.md
-rw-r--r--  1 farendev staff    878 Jul 18 11:37 index.js
-rw-r--r--  1 farendev staff    307 Jul 18 11:37 package.json
Build the Docker image and run it with the commands below:
$ docker build -t node_kong .
$ docker run -d --name=node_kong --network=kong-net node_kong
Check that all the containers are running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d13586f83e52 node_kong "npm start" 2 minutes ago Up 2 minutes 10000/tcp node_kong
41156cad5c86 kong:latest "/docker-entrypoint.…" 6 days ago Up 6 days 0.0.0.0:9000->8000/tcp, 0.0.0.0:9001->8001/tcp, 0.0.0.0:9443->8443/tcp, 0.0.0.0:9444->8444/tcp kong
f794a0e9506c postgres:9.6 "docker-entrypoint.s…" 6 days ago Up 6 days 0.0.0.0:5555->5432/tcp kong-database
To check the API server, get its container IP on the kong-net Docker network, then enter the kong container's shell and call the API from there.
Execute the command below in your terminal:
$ docker network inspect kong-net
…
…
"Containers": {
    "41156cad5c864af4ad8615c051fac8da7f683238a6c8cc42267f02813f14810f": {
        "Name": "kong",
        "EndpointID": "fe1cec9f6f31a015ab29a100fdd54b609abea11bbfa00f5e9ca67cc6175d7b2f",
        "MacAddress": "02:42:ac:13:00:03",
        "IPv4Address": "172.19.0.3/16",
        "IPv6Address": ""
    },
    "d13586f83e52df8866b9879ba0537d58c21fc1b95978dde0580b017ce1a7b418": {
        "Name": "node_kong",
        "EndpointID": "5677f7588b7daef391cf8cecec6a3ede0155f99f7d86e0e14dd5970ff0570924",
        "MacAddress": "02:42:ac:13:00:04",
        "IPv4Address": "172.169.0.5/16",
        "IPv6Address": ""
    },
    "f794a0e9506c7330f1cc19c5c390f745823c29dd4603e0d727dae4e8a68caa8d": {
        "Name": "kong-database",
        "EndpointID": "51737ca4e2a4b0e30d25db86e197e653a81e6206893588f4dae7b4a0a50e2799",
        "MacAddress": "02:42:ac:13:00:02",
        "IPv4Address": "172.19.0.2/16",
        "IPv6Address": ""
    }
},
…
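Instead of scanning the full `network inspect` output, the container's address can also be extracted directly with a Go template (a convenience; it assumes the container name node_kong and network kong-net used above):

```shell
# Print only the kong-net IPv4 address of the node_kong container
docker inspect -f '{{(index .NetworkSettings.Networks "kong-net").IPAddress}}' node_kong
```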
Note the IP of node_kong in the output above, then curl it from inside the kong container:
$ docker exec -ti kong sh
/ # curl -i 172.169.0.5:10000/api/v1/customers
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 110
ETag: W/"6e-Tf3vAGLC3XH0dFR2pCIzWdG8/5c"
Date: Wed, 18 Jul 2018 10:09:32 GMT
Connection: keep-alive
[{"id":5,"first_name":"Dodol","last_name":"Dargombez"},{"id":6,"first_name":"Nyongot","last_name":"Gonzales"}]
The response above shows that the API server is alive and can serve GET requests to /api/v1/customers.
How to set up Kong as an API gateway to the API server
Now that the Kong engine and the Node.js API service are running, we can register our API with Kong; the image below shows the workflow.
Routes define rules that match client requests and act as entry points into Kong. When a route matches, Kong proxies the request to its associated service, which in turn points to the API server that is ready to serve it.
For example (warning: the IPs may differ on your machine):
The API server is live at http://172.169.0.5:10000/api/v1/customers
We set the route path to /api/v1/customers
We set the service host to http://172.169.0.5:10000 and its path to /api/v1/customers
So when a client sends a request to Kong (in this case Kong lives at localhost:9000) with the route path /api/v1/customers, i.e. http://localhost:9000/api/v1/customers, Kong will proxy it to 172.169.0.5:10000/api/v1/customers
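Once the service and route registrations described in the rest of this section are in place, that mapping can be exercised from the host with a single request (the Host header must match the host rule attached to the route, api.ct.id in this tutorial):

```shell
# Request hits Kong on port 9000; Kong matches the route and
# forwards the request to the upstream API server on port 10000
curl -i -H "Host: api.ct.id" http://localhost:9000/api/v1/customers
```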
To start, import the Postman collection file kong.postman_collection.json from the GitHub repo NodeJS-API-KONG (https://github.com/faren/NodeJS-API-KONG).
Let's see this in practice by taking a look at the imported Postman collection.
For this tutorial, we want to achieve the scenario above:
REST endpoints for customers and clients.
First, register the customers service (using the node_kong IP obtained from the Docker network earlier), then register the routes that match requests to it.
Find collection Kong, folder Services, POST Services — Create:
POST: localhost:9001/services/
Headers: Content-Type:application/json
Body:
{
    "name": "api-v1-customers",
    "url": "http://172.169.0.5:10000/api/v1/customers"
}
Respond:
{
    "host": "172.169.0.5",
    "created_at": 1531989815,
    "connect_timeout": 60000,
    "id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213",
    "protocol": "http",
    "name": "api-v1-customers",
    "read_timeout": 60000,
    "port": 10000,
    "path": null,
    "updated_at": 1531989815,
    "retries": 5,
    "write_timeout": 60000
}
Find collection Kong, folder Services, GET Services — List:
GET: localhost:9001/services/
Respond:
{
    "next": null,
    "data": [
        {
            "host": "172.169.0.5",
            "created_at": 1531989815,
            "connect_timeout": 60000,
            "id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213",
            "protocol": "http",
            "name": "api-v1-customers",
            "read_timeout": 60000,
            "port": 10000,
            "path": null,
            "updated_at": 1531989815,
            "retries": 5,
            "write_timeout": 60000
        }
    ]
}
After creating the customers service, you can create routes for it.
Find collection Kong, folder Routes, POST Routes — Create:
POST: localhost:9001/services/api-v1-customers/routes/
Headers: Content-Type:application/json
Body:
{
    "hosts": ["api.ct.id"],
    "paths": ["/api/v1/customers"]
}
Respond:
{
    "created_at": 1531991052,
    "strip_path": true,
    "hosts": [
        "api.ct.id"
    ],
    "preserve_host": false,
    "regex_priority": 0,
    "updated_at": 1531991052,
    "paths": [
        "/api/v1/customers"
    ],
    "service": {
        "id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213"
    },
    "methods": null,
    "protocols": [
        "http",
        "https"
    ],
    "id": "4d9503c3-d826-43e3-9063-ed434a949173"
}
Find collection Kong, folder Routes, GET Routes -> List:
GET: localhost:9001/routes/
Respond:
{
    "next": null,
    "data": [
        {
            "created_at": 1531991052,
            "strip_path": true,
            "hosts": [
                "api.ct.id"
            ],
            "preserve_host": false,
            "regex_priority": 0,
            "updated_at": 1531991052,
            "paths": [
                "/api/v1/customers"
            ],
            "service": {
                "id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213"
            },
            "methods": null,
            "protocols": [
                "http",
                "https"
            ],
            "id": "4d9503c3-d826-43e3-9063-ed434a949173"
        }
    ]
}
Check that you can access the customers API through Kong (http://localhost:9000/api/v1/customers):
GET: localhost:9000/api/v1/customers
Headers: Host:api.ct.id
Respond:
[
    {
        "id": 5,
        "first_name": "Dodol",
        "last_name": "Dargombez"
    },
    {
        "id": 6,
        "first_name": "Nyongot",
        "last_name": "Gonzales"
    }
]
Conclusion
Kong is an open-source, scalable API layer (also called an API gateway or API middleware) that runs in front of any RESTful API and is extended through plugins, which provide extra functionality and services beyond the core platform.
For better understanding, the image below shows a typical request workflow of an API using Kong:
When Kong is running, every request made to the API hits Kong first and is then proxied to the final API. Between request and response, Kong can execute any installed plugin; you can add many kinds of plugins to empower your APIs, from authentication and security to traffic control and logging. Kong can thus serve effectively as the entry point for every API request.