Access Components that Provide HTTP Services Through Domain Names

This article is intended for application developers and operations personnel.

This article focuses on how to access components deployed on Kato that provide HTTP services. Kato's gateway is designed to face the public network directly, so it can manage the public traffic for all of an enterprise's services. For externally facing services, the most common way to access an HTTP service is through a domain name: an enterprise usually has only one external IP address, and port resources, especially ports 80 and 443, are very limited, so domain names allow many services to share the same ports.

Next, we will bind a domain name to the Kato gateway and access a deployed component.

Prerequisites

  1. A successfully deployed, accessible component that provides an HTTP service, such as any Demo deployed from source code.
  2. A usable domain name with DNS resolution configured (local resolution is fine for testing; configure proper DNS resolution for production use).
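For local testing, DNS resolution can be simulated with a hosts-file entry. The sketch below is an assumption for illustration: replace 192.168.1.10 with the actual IP address of your Kato gateway node.

```
# /etc/hosts — local testing only; replace 192.168.1.10 with your gateway node IP
192.168.1.10    www.example.com
```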

Operating Procedures

  1. Confirm that the prerequisites are ready. Assume the prepared domain name is www.example.com.

  2. Configure the gateway policy. To make this convenient, there are three entrances for managing and adding policies: the team view, under Gateway / Access Policy Management; the application view, under Gateway Policy Management; and the component management panel, under Port Management. The pages differ in which policies they manage, but adding a policy works the same way in all of them. Adding a policy involves two parts of configuration: the routing rule and the access target. Fill in the domain name www.example.com in the routing rule, select the deployed Demo component as the access target, then confirm and save.

  3. Verify that the policy took effect. Click the newly added policy to initiate a request; if the component page opens normally, the configuration succeeded.
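The same check can be done from the command line. This is a sketch: the 200 status and the gateway IP used with `--resolve` are assumptions for illustration.

```shell
# Request the component through the gateway; expect an HTTP 200 and the Demo page.
curl -i http://www.example.com/

# If public DNS is not yet live, bypass it and point the name at the
# gateway node directly (192.168.1.10 is a placeholder for your gateway IP):
curl -i --resolve www.example.com:80:192.168.1.10 http://www.example.com/
```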

Understand the Principle and More Configuration Parameters

The Kato gateway can be thought of as an ingress controller, built on OpenResty 1.15.8.2. A policy configured by the user is translated into a Kubernetes Ingress resource, which then takes effect automatically in the Kato gateway. How Kato generates the Ingress resources is an internal implementation detail that is transparent to users, so it is not covered here. Instead, the following sections explain the parameters supported when configuring a policy.
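As an illustration of this translation, a plain Kubernetes Ingress equivalent to the example policy might look like the sketch below. The resource name, service name, and port are hypothetical; the resources Kato actually generates will differ.

```yaml
# Sketch only: a generic Ingress expressing "route www.example.com to the Demo component"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-www-example-com   # hypothetical name
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo         # hypothetical service for the Demo component
            port:
              number: 5000     # hypothetical container port
```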

Route Parameters

  • Domain name: the most important routing parameter, and the only one set in the example above. The same domain name may be used in multiple policies, routing requests to different component targets, which serves gray (canary) release scenarios.
  • Request path: under the same domain name, different request paths can be routed to different components.
  • Request header: routing by request header is mainly used in gray release scenarios.
  • HTTPS certificate: selecting a certificate upgrades the current policy to HTTPS, with support for HTTP handling options including HTTP/HTTPS coexistence and forced redirection from HTTP to HTTPS. The certificate must be uploaded in advance under certificate management. The Kato Cloud version currently supports automatic certificate issuance: it matches an existing certificate to the configured domain name and, if none exists, calls a third-party platform to issue one automatically, then binds it.
  • Weight: when all of the routing parameters above are identical across multiple policies, the weight takes effect. Different weights distribute traffic across different components (typically multiple deployed versions of the same service), which suits gray release scenarios.
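As a sketch of how a weighted split can be observed: assume two policies on www.example.com with weights 80 and 20 pointing at two versions of the Demo, and a hypothetical /version endpoint that identifies which version answered.

```shell
# Send 50 requests and count how many each version served.
# /version is a hypothetical endpoint that echoes the component version.
for i in $(seq 1 50); do
  curl -s http://www.example.com/version
  echo
done | sort | uniq -c
# With weights 80/20, the counts should be roughly 4:1.
```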

Proxy Parameter Settings

Proxy parameters are changed by clicking Parameter Settings in the management list after the policy has been added; changes take effect dynamically.

  • Connection timeout: the timeout for establishing a connection with the upstream server, in seconds. Default: 75.

  • Request timeout: the timeout for transmitting the request to the upstream server, in seconds. Default: 60. The timeout applies only between two successive write operations, not to the transmission of the entire request; if the upstream server receives nothing within this time, the connection is closed.

  • Response timeout: the timeout for reading the response from the upstream server, in seconds. Default: 60. The timeout applies only between two successive read operations, not to the transmission of the entire response; if the upstream server transmits nothing within this time, the connection is closed.

  • Upload limit: the maximum size of uploaded content (the request body), in MB. Default: 1. Setting it to 0 removes the limit.

  • Custom request header: once configured, every request sent to the upstream server carries these headers.

  • Back-end response buffer: corresponds to Nginx's proxy_buffering parameter; disabled by default. When disabled, Nginx passes response content from the back end to the client as soon as it is received. When enabled, Nginx first places the back end's response in a buffer and then returns it to the client; receiving and sending proceed concurrently rather than waiting for the entire response before sending.

  • WebSocket: the gateway's WebSocket support builds on HTTP's Upgrade mechanism, which upgrades a connection from HTTP to WebSocket by adding two request headers, Upgrade $http_upgrade and Connection "Upgrade". When WebSocket is checked, the gateway automatically adds these two headers to the current policy.
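The settings above correspond roughly to standard NGINX directives. The mapping below is a sketch based on stock NGINX directive names; Kato applies its own per-policy configuration internally, so treat this as an illustration rather than the exact generated config.

```nginx
proxy_connect_timeout 75s;                  # connection timeout (default 75)
proxy_send_timeout    60s;                  # request timeout (default 60)
proxy_read_timeout    60s;                  # response timeout (default 60)
client_max_body_size  1m;                   # upload limit (default 1 MB; 0 = unlimited)
proxy_set_header X-Example-Header value;    # custom request header (hypothetical name)
proxy_buffering       off;                  # back-end response buffer (off by default)

# WebSocket: the two headers added when the option is checked
proxy_set_header Upgrade    $http_upgrade;
proxy_set_header Connection "Upgrade";
```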

Default Domain Name Mechanism

When a component's HTTP port has external access enabled and no access policy is configured, it is automatically assigned a default domain name. The default domain name is generated as follows:

{port}.{service-alias}.{team-alias}.{default_domain_suffix}
# eg. http://5000.gr6f1ac7.64q1jlfb.17f4cc.grapps.ca

The default_domain_suffix can be specified by the user during cluster installation, or Kato assigns one automatically.
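The generation rule can be illustrated by assembling the example's values by hand:

```shell
# Assemble the default domain from its parts (values taken from the example above)
port=5000
service_alias=gr6f1ac7
team_alias=64q1jlfb
default_domain_suffix=17f4cc.grapps.ca

default_domain="${port}.${service_alias}.${team_alias}.${default_domain_suffix}"
echo "http://${default_domain}"
# → http://5000.gr6f1ac7.64q1jlfb.17f4cc.grapps.ca
```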

Common Problems

Why can't I access the component after configuring the domain name

Inaccessibility usually has one of the following causes: a DNS resolution error; the component is not in a normal running state; the component's configured port does not match the port it actually listens on; or the component does not provide an HTTP service. Troubleshoot in that order.
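The first and last checks can be sketched from the command line; the domain, port, and gateway IP below are assumptions from the earlier example.

```shell
# 1. DNS: does the name resolve, and to the gateway node's IP?
dig +short www.example.com

# 3. Port: inside the component container, confirm the process listens on the
#    configured port (5000 here is the example's port):
#    netstat -tlnp | grep 5000

# 4. Protocol: probe through the gateway; a valid HTTP response confirms the
#    port is actually serving HTTP.
curl -sv -o /dev/null http://www.example.com/
```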

Can the default assigned domain name be modified

The default assigned domain name can be deleted, and the suffix used for default domain names can be changed by modifying the cluster attributes. Modifying an already assigned domain name is not currently supported.

Are Wildcard Domain Policies Supported

Yes. A wildcard domain such as *.example.com can be used directly in the domain name configuration; then visiting either a.example.com or b.example.com routes to the specified component.
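A wildcard policy can be spot-checked without waiting for DNS by pinning the names to the gateway with curl's `--resolve` (192.168.1.10 is a placeholder for the gateway node IP):

```shell
# Both subdomains should match the same *.example.com policy and
# return the bound component's response.
curl -s --resolve a.example.com:80:192.168.1.10 \
     -o /dev/null -w "%{http_code}\n" http://a.example.com/
curl -s --resolve b.example.com:80:192.168.1.10 \
     -o /dev/null -w "%{http_code}\n" http://b.example.com/
```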