Training and Deploying Machine Learning Models with the TensorFlow.js JavaScript Library

Deep learning and machine learning algorithms can be executed in JavaScript with TensorFlow.js. In this article, we will introduce how to define, train, and run machine learning models using the TensorFlow.js API.

With a few lines of JavaScript, developers can use pre-trained models for complex tasks such as visual recognition, music generation, or human pose detection. Node.js allows TensorFlow.js to be used in backend JavaScript applications rather than requiring Python.

TensorFlow Javascript

TensorFlow is a popular open-source software library for machine learning applications. Many neural networks and other deep learning algorithms use the TensorFlow library, which was originally a Python library released by Google in November 2015. TensorFlow can use either CPU or GPU-based computation for training and evaluating machine learning models. The library was originally developed to operate on high-performance servers with GPUs.

In May 2017, TensorFlow Lite, a lightweight version of the library for mobile and embedded devices, was released. This was accompanied by MobileNet, a new series of pre-trained deep learning models for vision recognition tasks. MobileNet models were created to perform well in resource-constrained environments such as mobile devices.

TensorFlow.js, which followed TensorFlow Lite, was announced in March 2018. This version of the library was built on an earlier project called deeplearn.js and was designed to run in the browser, with GPU access provided through WebGL. To train, load, and run models, developers use a JavaScript API.

TensorFlow.js is a JavaScript library that can be used to train and deploy machine learning models in the browser and Node.js. TensorFlow.js was recently extended to run on Node.js by using the tfjs-node extension library.

Are you familiar with concepts such as Tensors, Layers, Optimizers, and Loss Functions (or willing to learn them)? TensorFlow.js is a JavaScript library that provides flexible building blocks for neural network programming.
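
To give a flavor of those building blocks, here is a minimal sketch that defines and trains a tiny model with the Layers API (the y = 2x - 1 example data below is purely illustrative):

// Minimal sketch: define and train a tiny model with the Layers API.
const tf = require('@tensorflow/tfjs');

// A single dense layer learning y = 2x - 1
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });

const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

model.fit(xs, ys, { epochs: 250 }).then(() => {
  // Should print a value close to 19
  model.predict(tf.tensor2d([10], [1, 1])).print();
});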

Learn how to use TensorFlow.js code in the browser or Node.js to get started.

Get Set Up

Importing Existing Models Into TensorFlow.js

The TensorFlow.js library can be used to run existing TensorFlow and Keras models. Models must be converted to a new format using this tool before they can be executed. Pre-trained and converted models for image classification, pose detection, and k-nearest neighbors are available on GitHub.

Learn how to convert pre-trained Python models to TensorFlow.js here.

Learn by looking at existing TensorFlow.js code. tfjs-examples provides small code examples that use TensorFlow.js to implement various ML tasks. See it on GitHub.

Loading TensorFlow Libraries

TensorFlow’s JavaScript API is accessible via the core library. The Node.js extension modules register native bindings for CPU or GPU computation but do not expose any additional APIs.

const tf = require('@tensorflow/tfjs')
// Load the binding (CPU computation)
require('@tensorflow/tfjs-node')
// Or load the binding (GPU computation)
require('@tensorflow/tfjs-node-gpu')

Loading TensorFlow Models

TensorFlow.js includes an NPM library (tfjs-models) to make it easier to load pre-trained and converted models for image classification, pose detection, and k-nearest neighbors.

The MobileNet image classification model is a deep neural network that has been trained to recognize 1000 different classes.

The following example code, taken from the project’s README, loads the model.

import * as mobilenet from '@tensorflow-models/mobilenet';

// Load the model.
const model = await mobilenet.load();

One of the first issues I ran into was that this does not work on Node.js.

Error: browserHTTPRequest is not supported outside the web browser.

The mobilenet library is a wrapper around the underlying tf.Model class, according to the source code. When the load() method is invoked, the correct model files are automatically downloaded from an external HTTP address and the TensorFlow model is instantiated.

The Node.js extension does not yet support HTTP requests to retrieve models dynamically. Models must instead be manually loaded from the filesystem.

After reading the library’s source code, I was able to devise a workaround…

Loading Models From a Filesystem

If the MobileNet class is created manually, rather than calling the module’s load method, the auto-generated path variable containing the model’s HTTP address can be overwritten with a local filesystem path. After that, calling the load method on the class instance will invoke the filesystem loader class rather than the browser-based HTTP loader.

// Manually construct the wrapper class (MobileNet v1, alpha 1.0)
const path = "mobilenet/model.json"
const mn = new mobilenet.MobileNet(1, 1);
// Overwrite the auto-generated HTTP address with a local file:// path
mn.path = `file://${path}`
// load() now invokes the filesystem loader instead of the HTTP loader
await mn.load()
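
Once the model is loaded, it can classify image tensors. A minimal sketch, assuming the image has already been decoded into a [224, 224, 3] tensor (for example with an image-decoding package; the zero tensor below is only a placeholder):

// Sketch: classify a decoded image with the locally loaded model.
const imageTensor = tf.zeros([224, 224, 3]); // placeholder for a real decoded image
const predictions = await mn.classify(imageTensor);
// Each prediction contains a className and a probability
predictions.forEach(p => console.log(`${p.className}: ${p.probability}`));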

MobileNet Models

TensorFlow.js models are made up of two file types: a model configuration file in JSON and model weights in binary format. Model weights are frequently sharded into multiple files for better browser caching.

The automatic loading code for MobileNet models retrieves model configuration and weight shards from a public storage bucket at this address.

https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v${version}_${alpha}_${size}/

The URL template parameters refer to the model versions listed here. On that page, the classification accuracy results for each version are also displayed.

According to the source code, the tensorflow-models/mobilenet library can only load MobileNet v1 models.

The HTTP retrieval code loads the model.json file from this location and then recursively retrieves all model weight shards that it references. These files follow the groupX-shard1of1 naming pattern.
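
Knowing this layout, the files can be mirrored to the local filesystem with a short Node.js script. The sketch below assumes MobileNet v1 with alpha 1.0 and image size 224; adjust the URL to the model variant you need:

// Sketch: mirror a MobileNet model's files to a local "mobilenet" directory.
const fs = require('fs');
const https = require('https');

const BASE = 'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_1.0_224/';

function download(file) {
  return new Promise((resolve, reject) => {
    https.get(BASE + file, res => {
      res.pipe(fs.createWriteStream(`mobilenet/${file}`))
        .on('finish', resolve)
        .on('error', reject);
    });
  });
}

async function mirror() {
  fs.mkdirSync('mobilenet', { recursive: true });
  await download('model.json');
  // model.json lists its weight shard files under weightsManifest
  const manifest = JSON.parse(fs.readFileSync('mobilenet/model.json', 'utf8'));
  for (const group of manifest.weightsManifest) {
    for (const file of group.paths) {
      await download(file);
    }
  }
}

mirror();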

How to Use Kong as an Orchestration Microservice API Gateway

Kong is an orchestration microservice API gateway: an abstraction layer that securely manages client-to-microservice communication via APIs. It is sometimes called an API Gateway, API Middleware, or Service Mesh. Kong became an open-source project in 2015, and its core values are high performance and extensibility.

Kong runs inside Nginx as a Lua application, enabled by the lua-nginx-module.

Why use Kong?

If you need common functionality in front of the actual software behind your website, mobile apps, or IoT (Internet of Things) devices, Kong can work as a gateway (sidecar) for microservice requests while also providing logging, load balancing, authentication, rate limiting, transformations, and more through plugins.

Kong also helps you shorten development time and supports configurable plugins. It has a large community supporting its development and keeping it stable.

Kong provides security plugins for layers such as ACL, CORS, dynamic SSL, and IP restriction. It also offers useful traffic-control plugins such as rate limiting, request size limiting, and response rate limiting.

Analytics and monitoring plugins visualize, inspect, and monitor traffic, with integrations including Prometheus, Datadog, and Runscope.

Transformation plugins such as Request Transformer and Response Transformer modify requests and responses on the fly.

Logging plugins record request and response data over TCP, UDP, HTTP, StatsD, Syslog, and others.

This tutorial shows how to set up and use Kong. Basic knowledge of Docker and REST APIs is assumed.

How to Install Kong Community Edition

Kong can run in multiple operating environments. The easiest installation is with Docker; follow the instructions below.

Install Kong with Docker

1. Create a Docker network for Kong and the API server

$ docker network create kong-net

2. Run a database. You can choose Postgres or Cassandra; here we use Postgres.

$ docker run -d --name kong-database \
--network=kong-net \
-p 5555:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:9.6

3. After preparing the database, run the migrations with a Kong container

$ docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
kong:latest kong migrations up

4. After the migrations have completed, start the Kong container

$ docker run -d --name kong \
--network=kong-net \
-e "KONG_LOG_LEVEL=debug" \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 9000:8000 \
-p 9443:8443 \
-p 9001:8001 \
-p 9444:8444 \
kong:latest

5. Use the request command below to check the Kong instance

$ curl -i http://localhost:9001

A successful response looks like this:

HTTP/1.1 200 OK
Server: openresty/1.13.6.2
Date: Wed, 18 Jul 2018 03:58:57 GMT
Content-Type: application/json
Connection: keep-alive
Access-Control-Allow-Origin: *

Kong is now fully running. The next task is to prepare an API server that contains service routes and supports a REST API.

Use Node.js to set up API server routing

Now prepare the API server. For this tutorial we will use Node.js. To keep it simple, clone the code from GitHub: faren-NodeJS-API-KONG. The directory contents should look like this in the terminal:
$ ls -l
total 48
-rw-r--r--  1 farendev  staff    186 Jul 18 11:37 Dockerfile
-rw-r--r--@ 1 farendev  staff  31716 Jul 16 10:36 Kong.postman_collection.json
-rw-r--r--  1 farendev  staff    100 Jul 18 11:37 README.md
-rw-r--r--  1 farendev  staff    878 Jul 18 11:37 index.js
-rw-r--r--  1 farendev  staff    307 Jul 18 11:37 package.json
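
The index.js file is a small Express app. A minimal sketch of such a server is shown below; the customer data is illustrative (matching the responses later in this tutorial), and the repository contains the actual code:

// Sketch: a minimal Express API server like the one used in this tutorial.
const express = require('express');
const app = express();

// Illustrative data matching the responses shown later in this tutorial
const customers = [
  { id: 5, first_name: 'Dodol', last_name: 'Dargombez' },
  { id: 6, first_name: 'Nyongot', last_name: 'Gonzales' },
];

app.get('/api/v1/customers', (req, res) => res.json(customers));

app.listen(10000, () => console.log('API server listening on port 10000'));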

Let’s build a Docker image and run it with the commands below:

$ docker build -t node_kong .
$ docker run -d --name=node_kong --network=kong-net node_kong

Check that all the containers are running with the command below:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d13586f83e52 node_kong "npm start" 2 minutes ago Up 2 minutes 10000/tcp node_kong
41156cad5c86 kong:latest "/docker-entrypoint.…" 6 days ago Up 6 days 0.0.0.0:9000->8000/tcp, 0.0.0.0:9001->8001/tcp, 0.0.0.0:9443->8443/tcp, 0.0.0.0:9444->8444/tcp kong
f794a0e9506c postgres:9.6 "docker-entrypoint.s…" 6 days ago Up 6 days 0.0.0.0:5555->5432/tcp kong-database

Get the container IPs on the kong-net Docker network so we can check the API server by calling its API. You can open a shell in the kong container and check the API from there.

Execute the command below in your terminal:

$ docker network inspect kong-net
…
…
"Containers": {
"41156cad5c864af4ad8615c051fac8da7f683238a6c8cc42267f02813f14810f": {
"Name": "kong",
"EndpointID": "fe1cec9f6f31a015ab29a100fdd54b609abea11bbfa00f5e9ca67cc6175d7b2f",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"d13586f83e52df8866b9879ba0537d58c21fc1b95978dde0580b017ce1a7b418": {
"Name": "node_kong",
"EndpointID": "5677f7588b7daef391cf8cecec6a3ede0155f99f7d86e0e14dd5970ff0570924",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.169.0.5/16",
"IPv6Address": ""
},
"f794a0e9506c7330f1cc19c5c390f745823c29dd4603e0d727dae4e8a68caa8d": {
"Name": "kong-database",
"EndpointID": "51737ca4e2a4b0e30d25db86e197e653a81e6206893588f4dae7b4a0a50e2799",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
}
},
…

Find the IP address of node_kong in the output above (172.169.0.5) and curl it exactly as shown below:

$ docker exec -ti kong sh
/ # curl -i 172.169.0.5:10000/api/v1/customers
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 110
ETag: W/"6e-Tf3vAGLC3XH0dFR2pCIzWdG8/5c"
Date: Wed, 18 Jul 2018 10:09:32 GMT
Connection: keep-alive
[{"id":5,"first_name":"Dodol","last_name":"Dargombez"},{"id":6,"first_name":"Nyongot","last_name":"Gonzales"}]

The response above shows that the API server is alive and can serve GET requests to /api/v1/customers.

How to Set Up Kong as an API Gateway for API Server Routing

With the Kong engine and the Node.js API service complete, we can start registering our API with Kong. The image below shows the workflow.

Routes define rules that match client requests and act as entry points in Kong. When a route is matched, Kong proxies the request to its associated Service, which points at the API server that is ready to serve it.

For example (warning: the IPs may differ on your machine):

The API server is live at http://172.169.0.5:10000/api/v1/customers

We set the route path to /api/v1/customers

And we set the service host to http://172.169.0.5:10000, with the path /api/v1/customers

So, when a client sends a request to Kong (in this case Kong is live at localhost:9000) with the route path /api/v1/customers, that is, the complete client request http://localhost:9000/api/v1/customers, Kong will proxy it to 172.169.0.5:10000/api/v1/customers.
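
The Postman steps below can also be reproduced with curl against the admin API on port 9001. A minimal sketch, using the example IP above:

$ curl -i -X POST localhost:9001/services/ \
  -H "Content-Type: application/json" \
  -d '{"name": "api-v1-customers", "url": "http://172.169.0.5:10000/api/v1/customers"}'
$ curl -i -X POST localhost:9001/services/api-v1-customers/routes/ \
  -H "Content-Type: application/json" \
  -d '{"hosts": ["api.ct.id"], "paths": ["/api/v1/customers"]}'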

To start, import the Postman collection file kong.postman_collection.json from GitHub: NodeJS-API-KONG (https://github.com/faren/NodeJS-API-KONG).

Let’s see this in practice by taking a look at the imported Postman collection:

For this tutorial, we should get results matching the scenario above:

REST endpoints for customers and clients.

First, register the customers service, then register the routes that match requests to it. The node_kong IP comes from the Docker network inspection above.

Find collection Kong, folder Services, POST Services — Create:

POST: localhost:9001/services/

Headers: Content-Type:application/json
Body:
{
    "name": "api-v1-customers",
    "url": "http://172.169.0.5:10000/api/v1/customers"
}

Response:
{
    "host": "172.169.0.5",
    "created_at": 1531989815,
    "connect_timeout": 60000,
    "id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213",
    "protocol": "http",
    "name": "api-v1-customers",
    "read_timeout": 60000,
    "port": 10000,
    "path": null,
    "updated_at": 1531989815,
    "retries": 5,
    "write_timeout": 60000
}

Find collection Kong, folder Services, GET Services — List:

GET: localhost:9001/services/

Response:
{
    "next": null,
    "data": [
        {
            "host": "172.169.0.5",
            "created_at": 1531989815,
            "connect_timeout": 60000,
            "id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213",
            "protocol": "http",
            "name": "api-v1-customers",
            "read_timeout": 60000,
            "port": 10000,
            "path": null,
            "updated_at": 1531989815,
            "retries": 5,
            "write_timeout": 60000
        }
    ]
}

After creating the customers service, you can create routes for it.

Find collection Kong, folder Routes, POST Routes — Create:

POST: localhost:9001/services/api-v1-customers/routes/

Headers: Content-Type:application/json
Body:
{
    "hosts": ["api.ct.id"],
    "paths": ["/api/v1/customers"]
}

Response:
{
    "created_at": 1531991052,
    "strip_path": true,
    "hosts": [
        "api.ct.id"
    ],
    "preserve_host": false,
    "regex_priority": 0,
    "updated_at": 1531991052,
    "paths": [
        "/api/v1/customers"
    ],
    "service": {
        "id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213"
    },
    "methods": null,
    "protocols": [
        "http",
        "https"
    ],
    "id": "4d9503c3-d826-43e3-9063-ed434a949173"
}

Find collection Kong, folder Routes, GET Routes -> List:

GET: localhost:9001/routes/

Response:
{
    "next": null,
    "data": [
        {
            "created_at": 1531991052,
            "strip_path": true,
            "hosts": [
                "api.ct.id"
            ],
            "preserve_host": false,
            "regex_priority": 0,
            "updated_at": 1531991052,
            "paths": [
                "/api/v1/customers"
            ],
            "service": {
                "id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213"
            },
            "methods": null,
            "protocols": [
                "http",
                "https"
            ],
            "id": "4d9503c3-d826-43e3-9063-ed434a949173"
        }
    ]
}

Check that you can access the customers API through Kong (http://localhost:9000/api/v1/customers):

GET: localhost:9000/api/v1/customers

Headers: Host:api.ct.id
Response:
[
    {
        "id": 5,
        "first_name": "Dodol",
        "last_name": "Dargombez"
    },
    {
        "id": 6,
        "first_name": "Nyongot",
        "last_name": "Gonzales"
    }
]

Conclusion

Kong is an open-source, scalable API Layer (API Gateway, or API Middleware), which runs in front of any RESTful API and is extended through Plugins. Kong can provide extra functionality and services beyond the core platform.

For better understanding, the image below shows a typical request workflow of an API using Kong:

When Kong is running, every request made to the API hits Kong first and is then proxied to the final API. Kong can execute any plugin between the request and the response. You can install many types of plugins to empower your APIs; Kong offers plugins for authentication, security, traffic control, logging, and more. Kong can be used effectively as the entry point for every API request.
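
For instance, a plugin can be enabled per service through the admin API. A minimal sketch enabling rate limiting on the service created earlier (the limit of 5 requests per minute is illustrative):

$ curl -i -X POST localhost:9001/services/api-v1-customers/plugins/ \
  -H "Content-Type: application/json" \
  -d '{"name": "rate-limiting", "config": {"minute": 5}}'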

Angular 8 HttpClient – PostgreSQL – Node.js/Express Sequelize CRUD APIs

Sequelize is a promise-based ORM for Node.js v4 and later. In this tutorial, we will show how to send GET/POST/PUT/DELETE requests from an Angular 8 client to PostgreSQL through Node.js/Express REST APIs using the Sequelize ORM.

Related posts:
Node.js/Express RestAPIs CRUD – Sequelize ORM – PostgreSQL
Node.js/Express RestAPIs – Angular 8 HttpClient – Get/Post/Put/Delete requests + Bootstrap 4

Continue reading “Angular 8 HttpClient – PostgreSQL – Node.js/Express Sequelize CRUD APIs”

Angular 9 HttpClient – PostgreSQL – Node.js/Express Sequelize CRUD APIs

Sequelize is a promise-based ORM for Node.js v4 and later. In this tutorial, we will show how to send GET/POST/PUT/DELETE requests from an Angular 9 client to PostgreSQL through Node.js/Express REST APIs using the Sequelize ORM.

Related posts:
Node.js/Express RestAPIs CRUD – Sequelize ORM – PostgreSQL
Node.js/Express RestAPIs – Angular 9 HttpClient – Get/Post/Put/Delete requests + Bootstrap 4

Continue reading “Angular 9 HttpClient – PostgreSQL – Node.js/Express Sequelize CRUD APIs”

Angular 10 HttpClient – PostgreSQL – Node.js/Express Sequelize CRUD APIs – Post/Get/Put/Delete

Sequelize is a promise-based ORM for Node.js v4 and later. In this tutorial, we will show how to send GET/POST/PUT/DELETE requests from an Angular 10 client to PostgreSQL through Node.js/Express REST APIs using the Sequelize ORM.

Related posts:
Node.js/Express RestAPIs CRUD – Sequelize ORM – PostgreSQL
Node.js/Express RestAPIs – Angular 10 HttpClient – Get/Post/Put/Delete requests + Bootstrap 4

Continue reading “Angular 10 HttpClient – PostgreSQL – Node.js/Express Sequelize CRUD APIs – Post/Get/Put/Delete”