Studying and analysing ARender performance with the ELK stack is now possible!
The ARender Rendition Server returns information on its usage, such as the loading time of a document and the type of the opened document. This information is stored in a log file, and these logs can be analysed with the ELK stack.
The ELK stack is composed of three applications: Elasticsearch, Logstash and Kibana.
In Kibana, a dashboard presents graphical information on the rendition performance.
On the image below, the first number is the average loading time of a document. The two others are the minimum and maximum loading times. The pie chart on the right displays the number of good (green) and bad (red) responses of the rendition. The other pie chart shows the number of opened documents by document type. Finally, the line chart represents the average loading time by document type.
On the image below, the first two charts analyse the different exceptions raised by the rendition. The line chart displays the average loading time for each of these exceptions. Finally, the last vertical bar chart presents the average loading time by Java method.
Finally, on the last image, the line chart displays the average loading time by day. The first vertical bar chart shows the number of loaded documents by month, and the last one presents the average loading time by month for each document type.
To install the ELK stack, Java 7 or later is required. To check the Java version, open a shell prompt and enter java -version. If the installed version is older than 7, install a newer one.
Verify that the JAVA_HOME variable is set among the environment variables:
If JAVA_HOME is not set, add it:
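On Linux, this can be sketched as follows (the JDK path used here is only an example and depends on the distribution):

```shell
# Print JAVA_HOME, or a warning when the variable is empty/unset
echo "${JAVA_HOME:-JAVA_HOME is not set}"

# Set it for the current shell; the JDK path below is an example
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

# Persist the setting for future shells
echo 'export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64' >> ~/.bashrc
```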
Open a shell prompt and enter the following commands:
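The exact commands are not reproduced here; on Linux, the installation amounts to downloading and unpacking the archive. A sketch with placeholder names (the target folder /opt/ELK/ is an example):

```shell
# Download the archive (copy the real URL from the Elasticsearch downloads page)
wget -O elasticsearch-{Version}.zip "<URL from the downloads page>"
# Unpack it to the chosen folder
unzip elasticsearch-{Version}.zip -d /opt/ELK/
```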
Download the zip file from https://www.elastic.co/downloads/elasticsearch.
Unzip the archive to a folder of your choice, for example C:\Users\{name}.
Open the elasticsearch.yml file located in .\elasticsearch-{Version}\config\ with a text editor such as Notepad++ and modify the following lines:
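The original lines to change are not shown here; as an illustration, the settings most often adjusted in elasticsearch.yml are the cluster name and the bind address. The values below are example placeholders:

```yaml
# Example elasticsearch.yml settings (values are placeholders)
cluster.name: arender-monitoring
network.host: 127.0.0.1
http.port: 9200
```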
Save the modifications.
To install Elasticsearch as a service, open a shell prompt and enter:
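With the zip distribution of Elasticsearch 1.x/2.x on Windows, the service is registered through the bundled service script; a sketch, assuming the archive was unpacked under C:\Users\{name}:

```shell
cd C:\Users\{name}\elasticsearch-{Version}\bin
service.bat install
```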
Elasticsearch is now installed as a service. To start it, open the Windows Services application, select the Elasticsearch service and click Start.
The head plugin is a web interface for Elasticsearch. To install it on Linux, open a shell prompt and enter sudo elasticsearch-{Version}/bin/plugin install mobz/elasticsearch-head.
For Windows, the plugin is available as a zip file at https://github.com/mobz/elasticsearch-head.
Unzip it into ./elasticsearch-{Version}/plugins and rename the folder to "head".
Restart Elasticsearch. The elasticsearch-head plugin is then available at the URL {host}:9200/_plugin/head.
It is possible to add index templates in Elasticsearch. The following template creates, for every analysed field of string type, a non-analysed copy suffixed with .raw. These .raw fields keep the whole value, so a string field can be used without being split on spaces.
Add the template from the "Any Request" tab of the elasticsearch-head plugin:
PUT http://{host}:9200/_template/{name_of_template}
{
  "template": "*",
  "settings": {
    "index.number_of_shards": 5
  },
  "mappings": {
    "_default_": {
      "_all": { "enabled": true },
      "dynamic_templates": [{
        "string_fields": {
          "match": "*",
          "match_mapping_type": "string",
          "mapping": {
            "type": "string",
            "index": "analyzed",
            "omit_norms": true,
            "fields": {
              "raw": {
                "type": "string",
                "index": "not_analyzed",
                "ignore_above": 256
              }
            }
          }
        }
      }]
    }
  }
}
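Alternatively, the template can be loaded without the head plugin, for example with curl; a sketch, assuming the template JSON has been saved to a file named template.json (the template name rendition_template is an example):

```shell
curl -XPUT "http://{host}:9200/_template/rendition_template" -d @template.json
```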
Open a shell prompt and enter:
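As for Elasticsearch, the Linux installation of Logstash boils down to downloading and unpacking the archive; a sketch with placeholder names (the target folder /opt/ELK/ is an example):

```shell
wget -O logstash-{Version}.zip "<URL from the downloads page>"
unzip logstash-{Version}.zip -d /opt/ELK/
```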
Download the zip file from https://www.elastic.co/downloads/logstash.
Unzip the archive to a folder of your choice, for example C:\Users\{name}.
Create a configuration file myfile.conf, with a text editor such as Notepad++, in the folder ./logstash-{Version}/bin.
This file is composed of three parts: input, filter and output.
input {
file {
path => "{Path to the log file}"
type => "syslog"
start_position => "beginning"
}
}
filter {
# Grok splits the "message" field into several fields
# Caution: for versions older than 3.0.6, the specific spaces
# are not the same on every platform:
# choose SEPARATORSPE for Linux
# and SEPARATORESP for Windows
# For versions 3.0.6 and later,
# choose SEPARATOR and add a space after the right brace
grok{
match => {
# "message" => "…" must be on a single line
"message" => "%{NAME}%{TIMESTAMP_ISO8601:dateStart} %{SEPARATOR} %{NAMEEND}%{TIMESTAMP_ISO8601:dateEnd} %{SEPARATOR} %{NAME}%{BASE10NUM:time}%{USERNAME:unite} %{SEPARATOR} %{NAME}%{METHOD:method} %{SEPARATOR} %{NAME}%{MIMETYPE:mimeType} %{SEPARATORSPE}%{NAME}%{DOCID:documentId} %{SEPARATORSPE}%{NAME}%{USERNAME:result} %{SEPARATOR} %{NAME}%{EXCEPTION:exception} %{SEPARATORSPE}%{NAME}%{GREEDYDATA:args}"}
# patterns_dir gives the path of the pattern.conf file
# This file gathers the regex of each new field
patterns_dir => ["{path to the file pattern.conf}"]
}
# date converts a field to the date type
date {
locale => "fr"
match => ["dateStart", "YYYY-MM-dd HH:mm:ss,SSS"]
target => "@timestamp"
}
# mutate converts the type of a field
mutate {
convert => { "time" => "integer" }
}
}
output {
elasticsearch {
hosts => ["**{host}**:9200"]
action => "index"
# The index name must not contain uppercase letters
index => "rendition-perfs"
}
stdout { codec => rubydebug }
}
NAME [a-zA-Z]*=
NAMEEND [a-zA-Z]*\s=
SEPARATOR \|\|
METHOD [a-zA-Z][^|]*
MIMETYPE [a-zA-Z][^|]*
DOCID [a-zA-Z][^|]*
EXCEPTION .*[^|]
# Windows version of the specific separator
SEPARATORESP \|\|
# Linux version of the specific separator
SEPARATORSPE \|\|Â\xA0
NAMEREQUEST [a-zA-Z\/?=]*
LOADINGTIME [0-9]*
SEPARATORAND &
NAMEVERSION [a-zA-Z=]*
VERSION [0-9a-zA-Z.-]*
To launch Logstash, open a shell prompt and enter: ./logstash-{Version}/bin/logstash -f myfile.conf
Open a shell prompt and enter:
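As a sketch (the Linux x64 tarball is assumed, to match the path used below to launch Kibana):

```shell
wget -O kibana-{Version}-linux-x64.tar.gz "<URL from the downloads page>"
tar -xzf kibana-{Version}-linux-x64.tar.gz -C /opt/ELK/
```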
To launch Kibana: nohup /opt/ELK/kibana-{Version}-linux-x64/bin/kibana &
Download the zip file from https://www.elastic.co/downloads/kibana.
Unzip the archive, for example into C:\Users\{name}\.
In the file .\kibana-{Version}\config\kibana.yml, configure the address of Elasticsearch.
To launch Kibana, run kibana.bat.
To add an index in Kibana, open the URL {host}:5601. In the Settings > Indices tab, tick "Index contains time-based events", enter the name of the Elasticsearch index and click "Create".
In the Settings > Objects tab, click "Import". Select the visualizations file (visualisations_rendition.json), then do the same with the dashboard file (rendition.json).
In the "Dashboard" tab, click "Load Saved Dashboard" and select the dashboard name.