Query Request Log Analysis #

INFINI Gateway can track and record every request that passes through it and analyze the requests sent to Elasticsearch, providing insight into request performance and the running status of the service.

Setting a Gateway Router #

To enable the query log analysis of INFINI Gateway, configure the tracing_flow parameter on the router and set a flow to log requests.

router:
  - name: default
    tracing_flow: request_logging
    default_flow: cache_first

In the above configuration, one router named default is defined: the default request flow is cache_first, and the flow used for logging is request_logging.
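The router only takes effect once it is referenced by a gateway entry, that is, the network listener that clients connect to. A minimal sketch is shown below; the entry name es_gateway and the 8000 binding are illustrative (they mirror the startup log of this example) and should be adjusted to your environment:

```yaml
entry:
  - name: es_gateway
    enabled: true
    router: default     # reference the router defined above
    network:
      binding: 0.0.0.0:8000
```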

Defining a Log Flow #

The log processing flow request_logging is defined as follows:

flow:
  - name: request_logging
    filter:
      - request_path_filter:
          must_not: # any match will be filtered
            prefix:
              - /favicon.ico
      - request_header_filter:
          exclude:
          - app: kibana # to filter out Kibana's access log, add `elasticsearch.customHeaders: { "app": "kibana" }` to Kibana's config file `/config/kibana.yml`
      - logging:
          queue_name: request_logging

The above flow uses several filters:

  • The request_path_filter filters out irrelevant /favicon.ico requests.
  • The request_header_filter filters out requests originating from Kibana.
  • The logging filter writes requests to the local disk queue request_logging, from which the pipeline consumes them to create indexes.

Defining a Log Pipeline #

INFINI Gateway uses a pipeline task to asynchronously consume logs and create indexes. The configuration is as follows:

pipeline:
- name: request_logging_index
  auto_start: true
  keep_running: true
  processor:
    - json_indexing:
        index_name: "gateway_requests"
        elasticsearch: "dev"
        input_queue: "request_logging"
        idle_timeout_in_seconds: 1
        worker_size: 1
        bulk_size_in_mb: 10 #in MB

In the above configuration, one processing pipeline named request_logging_index is defined. It consumes the disk queue request_logging, writes to the index gateway_requests in the target cluster dev, uses one worker thread, and submits bulk requests of up to 10 MB.

Defining an Index Cluster #

Configure an index cluster as follows:

elasticsearch:
- name: dev
  enabled: true
  endpoint: https://192.168.3.98:9200 # if your elasticsearch uses https, your gateway should listen on https as well
  basic_auth: # used to discover the full list of cluster nodes and to check elasticsearch's health and version
    username: elastic
    password: pass
  discovery: # automatically discover elasticsearch cluster nodes
    enabled: true
    refresh:
      enabled: true

In the above configuration, one Elasticsearch cluster named dev is defined, and the Elastic module is enabled to handle automatic discovery and configuration of the cluster.

Configuring an Index Template #

Configure an index template for the Elasticsearch cluster. Run the following commands on the dev cluster to create a log index template.
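As a starting point, one minimal template sketch is given below in Kibana Dev Tools format; the index pattern gateway_requests*, the shard settings, and the timestamp field mapping are assumptions to adapt to your own needs:

```
PUT _template/gateway_requests
{
  "index_patterns": ["gateway_requests*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" }
    }
  }
}
```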

Configuring the Index Lifecycle #
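An index lifecycle management (ILM) policy can keep the log index from growing without bound. The sketch below is one hedged example that rolls the index over and eventually deletes old data; the policy name and all thresholds are assumptions to tune for your retention requirements:

```
PUT _ilm/policy/gateway_requests
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```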

Importing the Dashboard #

Download the latest dashboard INFINI-Gateway-7.9.2-2021-01-15.ndjson.zip for Kibana 7.9 and import it into the Kibana instance of the dev cluster (Stack Management > Saved Objects > Import).

Starting the Gateway #

Start the gateway.

➜ ./bin/gateway
   ___   _   _____  __  __    __  _       
  / _ \ /_\ /__   \/__\/ / /\ \ \/_\ /\_/\
 / /_\///_\\  / /\/_\  \ \/  \/ //_\\\_ _/
/ /_\\/  _  \/ / //__   \  /\  /  _  \/ \ 
\____/\_/ \_/\/  \__/    \/  \/\_/ \_/\_/ 

[GATEWAY] A light-weight, powerful and high-performance elasticsearch gateway.
[GATEWAY] 1.0.0_SNAPSHOT, a17be4c, Wed Feb 3 00:12:02 2021 +0800, medcl, add extra retry for bulk_indexing
[02-03 13:51:35] [INF] [instance.go:24] workspace: data/gateway/nodes/0
[02-03 13:51:35] [INF] [api.go:255] api server listen at: http://0.0.0.0:2900
[02-03 13:51:35] [INF] [runner.go:59] pipeline: request_logging_index started with 1 instances
[02-03 13:51:35] [INF] [entry.go:267] entry [es_gateway] listen at: http://0.0.0.0:8000
[02-03 13:51:35] [INF] [app.go:297] gateway now started.

Modifying Application Configuration #

In any application that connects to Elasticsearch (such as Beats, Logstash, or Kibana), replace the Elasticsearch address with the gateway address. Assume that the gateway IP address is 192.168.3.98. Modify the Kibana configuration as follows:

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://192.168.3.98:8000"]
elasticsearch.customHeaders: { "app": "kibana" }

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

Save the configuration and restart Kibana.

Checking the Results #

All requests that access Elasticsearch through the gateway can be monitored.
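To quickly verify that log indexing works, you can also query the gateway_requests index directly in Kibana Dev Tools; a growing count indicates that requests are being recorded:

```
GET gateway_requests/_count
```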