Service Networking

Nirmata provides advanced features to help interconnect and network your applications. With Nirmata, you retain full control of your host networking and security. For example, your hosts can be in different zones or subnets. Nirmata only requires that hosts used by service instances with interconnectivity requirements have a Layer 3 connection.

Nirmata uses standard Docker networking capabilities and provides:
  • Service Naming, Registration and Discovery
  • Dynamic load balancing
  • Service Gateway functions
  • Programmable Routing

Each of these features is described below:

Service Naming, Registration and Discovery

Services within an application will often need to communicate with other services in the same application. Traditionally this requires complex multi-host and multi-device configuration, and is dependent on cloud infrastructure.

With Nirmata, application services can easily connect with each other, without requiring code changes or complex configuration. Best of all, the Nirmata solution works on any public or private cloud and fully decouples your application from the underlying cloud infrastructure.

Each Service in Nirmata has a DNS-compliant name. When Nirmata’s Service Networking features are enabled, Nirmata provides seamless registration and discovery for these services. As service instances are deployed, Nirmata automatically tracks the runtime data for each service and populates it in a distributed service registry that is managed across hosts, within the Nirmata Host Agent container.

Enabling Service Networking is easy – it’s a single checkbox!

_images/service-networking-setup.png

Services are named using the following scheme:

<service>.<application>.local

where:
service = the Service Name
application = the Application Name

By default, service names are resolved within an environment. If you want to reach a service in another environment, you can use the following form:

<service>.<environment>.<application>.local

where:
service = the Service name
environment = the Environment name
application = the Application name
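The naming scheme above can be sketched with a small helper. This is illustrative only; the service, environment, and application names used below are hypothetical, not part of the product:

```python
def service_dns_name(service, application, environment=None):
    """Build a Nirmata-style service DNS name.

    Default form:            <service>.<application>.local
    Cross-environment form:  <service>.<environment>.<application>.local
    """
    parts = [service]
    if environment is not None:
        parts.append(environment)
    parts.append(application)
    return ".".join(parts) + ".local"

# Hypothetical names, for illustration:
print(service_dns_name("catalog", "shopme"))             # catalog.shopme.local
print(service_dns_name("catalog", "shopme", "staging"))  # catalog.staging.shopme.local
```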

When Service Networking is enabled, Nirmata will resolve DNS and TCP requests originating from the application container. Only DNS requests that end with the “.local” domain, and TCP requests for declared Service Ports, are handled by Nirmata; all other requests are propagated upstream. Your application services can now simply use well-known names, such as myservice.newapp.local, to interconnect.

Dynamic Load Balancing

In a Microservices style application, each Service is designed to be elastic, and can have several instances running within a single environment.

As discussed above, Nirmata automatically resolves Service names to an IP address and port. Service Discovery also has built-in load balancing to automatically spread requests across available Service Instances.

For example, the orders service can connect to the catalog service using the well-known name catalog.shopme.local. As shown in the CMD shell output, an HTTP/S request can simply be made as “https://catalog.shopme.local” and Nirmata will dynamically resolve the request to an IP address and port for the service. If multiple instances of the catalog service are running, Nirmata will automatically load balance requests across them, and will keep track of instances that are added, deleted, or are unreachable. The service load-balancing is also fully integrated with service health checks, for maximum resiliency.
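The behavior described above can be sketched with a toy round-robin resolver. Nirmata’s actual registry and load balancer are internal to the Host Agent; this sketch only demonstrates the idea of spreading requests across registered instances and dropping instances that go away. All names and addresses are illustrative:

```python
class ServiceResolver:
    """Toy round-robin resolver over registered service instances."""

    def __init__(self):
        self._instances = {}  # service name -> list of (ip, port)
        self._cursors = {}    # service name -> next index to hand out

    def register(self, name, ip, port):
        self._instances.setdefault(name, []).append((ip, port))

    def deregister(self, name, ip, port):
        # e.g. when an instance is deleted or becomes unreachable
        self._instances[name].remove((ip, port))

    def resolve(self, name):
        instances = self._instances[name]
        i = self._cursors.get(name, 0) % len(instances)
        self._cursors[name] = i + 1
        return instances[i]

# Hypothetical instances of the catalog service:
r = ServiceResolver()
r.register("catalog.shopme.local", "10.0.1.5", 8080)
r.register("catalog.shopme.local", "10.0.2.7", 8080)
print(r.resolve("catalog.shopme.local"))  # ('10.0.1.5', 8080)
print(r.resolve("catalog.shopme.local"))  # ('10.0.2.7', 8080)
```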

_images/service-networking-load-balancing.png

Service Gateway

The Nirmata Service Gateway enables routing of client requests, based on HTTP content or DNS names, to application service instances. The Service Gateway is intended to be used as an entry point to a Microservices style application. Most load balancers provide VM- or Host-based load balancing. A Service Gateway solves a slightly different problem: with a Microservices style application, an external client must connect to a backend service. However, requests from the client may need to be routed to different services within an application. A Service Gateway can use information in the HTTP content to determine which backend application service should be targeted. Once the application service is selected, the Service Gateway chooses an available instance and resolves its IP address and port.

For example, in the figure below the application has 3 services and each service has multiple instances. The Service Gateway acts as a single client endpoint for all front-end services. This allows a single client endpoint to dynamically address multiple services, on the same connection, by using HTTP information such as the URL path.

_images/service-gateway-example.png

Here is the corresponding Service Gateway configuration in Nirmata:

_images/service-gateway-setup.png

NOTE: A Service Gateway is not required for load-balancing across services in the same environment. Nirmata already provides service discovery and load-balancing across all services deployed using Nirmata. The Service Gateway is intended for external client applications that are not deployed by Nirmata and need access to your application services.

Deploying a Service Gateway

The Nirmata Service Gateway is deployed as part of your application. You can add a Service Gateway to your Application Blueprint:

_images/service-gateway-add.png

A Service Gateway will typically require public or externally routed IP Addresses. In this case, you can define a new Container Type, for example Gateway, in the Policies section, and then configure a Resource Selection Rule to place Service Gateway instances on Hosts that are created with a public IP address:

_images/service-gateway-resource-selection-rule.png

You can configure a Service Gateway when the application is deployed (in the Create Environment Wizard), or within an Environment’s view. The available options for request routing are discussed below:

TCP Routing

TCP routing allows the mapping of a well-known port to a backend service. For example:

:7000  ---->  catalog:80

This route type is useful for custom TCP protocols, stateful protocols like Websockets, and for web applications that perform redirects to internal endpoints. To configure port-based routing, the Service Gateway’s application blueprint must expose the ports that are routed. The Sticky Sessions and Target URL configuration options are not applicable for TCP routing.

URL Routing

URL routing allows a single client to call multiple backend services using the same HTTP/S connection. For example, a Web UI may require access to different backend application services. When URL Routing is used, each HTTP request is inspected and then routed based on the URL Path.

You can specify a different Target URL for your service. The Nirmata Service Gateway will rewrite your URL as follows:

/route/(.*)  --->   /targetUrl/$1

The path elements after the route will be captured and appended to the Target URL. If the Target URL is empty, the entire URL Path in the HTTP Request will be sent to the backend service.
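The rewrite rule above can be sketched as a small function. This is an illustration of the described behavior, not the gateway’s implementation; the route and target values are hypothetical:

```python
import re

def rewrite_path(path, route, target_url):
    """Rewrite /route/(.*) -> /targetUrl/$1, as described above.

    If target_url is empty, the original URL path is passed through
    unchanged to the backend service.
    """
    if not target_url:
        return path
    m = re.match(r"^/%s/(.*)$" % re.escape(route), path)
    if m:
        return "/%s/%s" % (target_url.strip("/"), m.group(1))
    return path

# Hypothetical route "catalog" with target URL "v1/catalog":
print(rewrite_path("/catalog/items/42", "catalog", "v1/catalog"))
# /v1/catalog/items/42
print(rewrite_path("/catalog/items/42", "catalog", ""))
# /catalog/items/42
```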

DNS Routing

DNS routing allows the mapping of a DNS name to a backend service. For example:

catalog.shopping.com  ---->  catalog:8080

DNS routing is intended to be used for load-balancing HTTP/S and Websocket (WS/S) connections to an available instance of a single service. Nirmata will look at the Host name in the HTTP/S request, resolve the service specified in the route, and connect the client to the service. Once connected, all subsequent requests will be sent to the same service instance and no further request routing is performed.
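The Host-header lookup described above can be sketched as follows. The route table mirrors the catalog.shopping.com example; both the DNS name and the backend target are illustrative:

```python
def route_by_host(host_header, dns_routes):
    """Select a backend service based on the HTTP Host header.

    dns_routes maps a DNS name to a (service, port) target, e.g. the
    route catalog.shopping.com ----> catalog:8080 shown above.
    Returns None when no route matches.
    """
    host = host_header.split(":")[0].lower()  # drop any :port suffix
    return dns_routes.get(host)

# Hypothetical route table:
routes = {"catalog.shopping.com": ("catalog", 8080)}
print(route_by_host("catalog.shopping.com", routes))      # ('catalog', 8080)
print(route_by_host("Catalog.Shopping.com:443", routes))  # ('catalog', 8080)
```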

Sticky Sessions

You can optionally configure a URL or DNS Route to be sticky. This means that the Service Gateway will select the same backend Service Instance for a client. This option is useful when services are not stateless and maintain session state.

When the ‘Sticky’ option is enabled, the Service Gateway adds an HTTP cookie with the service address to the response headers. This cookie is then used when subsequent requests are made from the client. If the service instance specified in the cookie is not available, another instance will be automatically selected.
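The sticky-selection logic can be sketched like this. The cookie name NIRMATA_STICKY and the instance addresses are hypothetical, chosen only to illustrate the fallback behavior described above:

```python
def pick_instance(request_cookies, instances, choose):
    """Sticky selection sketch: prefer the instance named in the cookie;
    fall back to a fresh choice if that instance is no longer available.

    Returns (selected instance, cookies to set on the response).
    The cookie name 'NIRMATA_STICKY' is illustrative, not the
    product's actual cookie name.
    """
    preferred = request_cookies.get("NIRMATA_STICKY")
    if preferred in instances:
        return preferred, {}  # reuse the sticky instance, no new cookie
    selected = choose(instances)
    return selected, {"NIRMATA_STICKY": selected}

# Hypothetical instances; 'choose' stands in for the load balancer:
instances = ["10.0.1.5:8080", "10.0.2.7:8080"]
inst, set_cookie = pick_instance({}, instances, lambda xs: xs[0])
print(inst, set_cookie)   # first request: instance chosen, cookie set
inst2, _ = pick_instance({"NIRMATA_STICKY": inst}, instances, lambda xs: xs[1])
print(inst2 == inst)      # True: subsequent request sticks to it
```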

Managing HTTPS

The Nirmata Service Gateway can be used to terminate, or proxy, SSL.

To configure SSL for the Nirmata Service Gateway, you will need to provide a TLS certificate and a key. The certificate and the key can be uploaded to the gateway container by mounting the directory that contains the certificate and key files. You will also need to specify environment variables that provide the certificate and key file paths to the gateway service.

In Nirmata Service Gateway configuration:

  • Mount the volume. e.g. /usr/share/nirmata/ssl:/usr/share/nirmata/ssl
  • Add the environment variables in the table below
    Environment Variable   Description
    NIRMATA_GW_TLS_CERT    Path to the TLS certificate, e.g. /usr/share/nirmata/ssl/gateway.cer
    NIRMATA_GW_TLS_KEY     Path to the TLS key, e.g. /usr/share/nirmata/ssl/gateway.key
  • Add HTTPS Port

Note: You will need to ensure that the certificate and the key file are placed at the specified location on the host that is running the gateway container.

If SSL proxying is used, the backend Services must be configured with a valid SSL certificate (self-signed and expired certificates will be rejected).

HTTP Redirect

You can optionally redirect all HTTP connections to the HTTPS port. You first need to configure HTTPS as specified above. To redirect HTTP, configure both an HTTP and an HTTPS port in the Service Gateway and then enable the redirection:

_images/service-gateway-redirect-http.png

Programmable Service Routing

When you create an Environment you can set the default routing policy to either Allow All or Deny All service traffic, and then customize which services can communicate to each other. The routing rules can be configured using service names and tags, allowing control over which versions of your application services can communicate with each other.

Nirmata allows you to control routes across services, and also from the Nirmata Gateway Service to other services.

In the example below, we have configured two deny rules: one to block traffic from catalog to orders, and the other to block traffic from the gateway to orders. Note that we have not chosen a tag, but a tag could be used to control traffic to different versions of services in the environment:

_images/programmable-routing.png