HelloELK is a repository that provides a hands-on introduction to the Elastic Stack (formerly the ELK Stack) using a simple configuration file.
Here, we will create a simple pipeline where input is typed by the user on STDIN and the resulting events are shown in Kibana.
The Elastic Stack requires Java 8 or above. Make sure it is in place by executing the following command in CMD:
java -version
You should get output similar to the following:
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
If Java is not installed or configured properly, please follow the steps mentioned here.
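If the java command is not recognized even though a JDK is installed, the environment variables may need to be set. A minimal sketch for Windows CMD, assuming a typical JDK 8 install path (adjust it to your actual installation):
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_121"
After this, add %JAVA_HOME%\bin to the PATH environment variable and reopen CMD so that java -version works.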
Head over to Elastic’s downloads section and download Elasticsearch, Logstash and Kibana. For reference, this walkthrough uses a Windows environment with version 6.5.1 of the stack, as seen in the Elasticsearch output below.
Unzip the packages and we’re good to go.
Start Elasticsearch by running the following from its bin directory:
elasticsearch
After the “started” message appears in the Elasticsearch console, we can proceed to the next step.
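To verify that Elasticsearch is running, open http://localhost:9200 in a browser or query it from the command line (9200 is the default HTTP port):
curl http://localhost:9200
You should get a response similar to the following: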
{
  "name" : "udlVS2N",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "64onb1QsRy-CZKLT5B9-rw",
  "version" : {
    "number" : "6.5.1",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "8c58350",
    "build_date" : "2018-11-16T02:22:42.182257Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Create a file named logstash.conf inside the Logstash directory, so that it can be referenced as ../logstash.conf from the bin directory.
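A minimal sketch of what such a configuration could look like is shown below. This is an assumption for illustration, not necessarily the repository's exact file; in particular, the GROK pattern (a log level followed by a message) and the index name helloelk are hypothetical:
input {
  stdin { }
}

filter {
  grok {
    # Hypothetical pattern: a log level followed by the rest of the line
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "helloelk"            # hypothetical index name
  }
  stdout { codec => rubydebug }    # also print parsed events to the console
}
Then, from the Logstash bin directory, run: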
logstash -f ../logstash.conf
This will start the Logstash pipeline: INPUT (STDIN) -> FILTER (GROK) -> OUTPUT (Elasticsearch).
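Once the pipeline is running, type a line that fits the sketched pattern into the console, for example INFO Hello ELK, and press Enter; the GROK filter extracts the level and msg fields and the event is sent to Elasticsearch.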
Start Kibana by running the following from its bin directory:
kibana
Once Kibana is up, open http://localhost:5601 in a browser and create an index pattern matching the Logstash index; the events typed on STDIN will then appear in the Discover tab.
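To confirm that the events reached Elasticsearch independently of Kibana, you can also query the index directly; the index name helloelk below is the hypothetical one from the configuration sketch above:
curl "http://localhost:9200/helloelk/_search?pretty"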
We are now done with the ELK Stack Hello World! We have successfully set up the Elastic Stack and created a Logstash pipeline.
For Kibana visualizations and Dashboards, read here.
For advanced pipelines, read here.
Read more about Logstash filters here.
Read more about GROK filter here.
For Java-based GROK filters, refer to this. It will come in handy while parsing logs for APM (Application Performance Monitoring); an illustrative pattern follows this list.
For more GROK filters, refer to this.
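As an illustration only (the pattern and field names below are assumptions, not taken from this repository), a GROK filter for a typical Java application log line might look like:
filter {
  grok {
    # Would parse a line such as:
    # 2018-11-20 10:15:30,123 ERROR [main] com.example.OrderService - Failed to process order
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{JAVACLASS:class} - %{GREEDYDATA:msg}"
    }
  }
}
This would extract timestamp, level, thread, class and msg as separate fields, which is the kind of parsing that is useful for APM-style log analysis.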