being edited at the moment, will be formatted soon
elk-hole
elasticsearch, logstash and kibana configuration for pi-hole visualization
elk-hole provides the relevant files and configuration to easily visualize pi-hole/dnsmasq statistics via the popular elastic stack.
show, search, filter and customize pi-hole statistics ... the elk way
requirements:
working installation of:
- logstash (currently tested up to version "7.1.0")
- elasticsearch (currently tested up to version "7.1.0")
- kibana (currently tested up to version "7.1.0")
- filebeat on pi-hole (tested with "1.3.1" & "7.1.1")
-> installation of the elk stack - refer to https://www.elastic.co/ for details.
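Before going further, it is worth confirming that the stack is actually up. A quick sanity check, assuming a default installation with systemd and the standard ports (adjust host names and ports if your setup differs):

# on the elk host
curl -s http://localhost:9200       # elasticsearch should answer with its name and version
sudo systemctl status logstash      # should be active (running)
sudo systemctl status kibana        # should be active (running)

# on the pi-hole host
filebeat version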
this repo provides the relevant files and configuration for sending the pi-hole logs via filebeat directly to logstash/elasticsearch. We will then visualize the logs in kibana with a custom dashboard.
The result will look like the dashboard screenshots included with this project (one default and one alternative view).
HOW TO USE

LOGSTASH HOST
- copy "/conf.d/20-dns-syslog.conf" to your logstash folder (usually /etc/logstash/)
  - if there are other files in this folder, make sure their input/output/filter sections do not also match our filebeat dns logs, as those files may be processed earlier. For testing purposes you can name your conf files like so:
      /conf.d/20-dns-syslog.conf
      /conf.d/30-other1.conf
      /conf.d/40-other2.conf
    This makes sure that /conf.d/20-dns-syslog.conf is processed first.
- customize ELASTICSEARCHHOST:PORT in the output section at the bottom of the file (see the sketch after this list)
- copy "dns" to /etc/logstash/patterns/ (create the folder if it does not exist)
- restart logstash
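For orientation, the overall shape of 20-dns-syslog.conf is roughly the following. This is only a sketch, not the shipped file: the real file contains the actual grok filtering, the port has to match the filebeat output configured below, and the exact index name is an assumption based on the index pattern used later in kibana.

input {
  beats {
    port => 5141
  }
}

filter {
  # the shipped file parses the dnsmasq lines here,
  # using the grok patterns from the "dns" file in /etc/logstash/patterns/
}

output {
  elasticsearch {
    hosts => ["ELASTICSEARCHHOST:PORT"]            # <- the line to customize, e.g. "elastic01:9200"
    index => "logstash-syslog-dns-%{+YYYY.MM.dd}"  # assumed naming; must match the kibana index pattern
  }
}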
PI-HOLE
- copy the repo's /etc/filebeat/filebeat.yml to the filebeat installation on the pi-hole instance
- customize LOGSTASHHOST:5141 to match your logstash hostname/ip (see the sketch after this list)
- restart filebeat
- copy 99-pihole-log-facility.conf to /etc/dnsmasq.d/
- this is very important: restart pi-hole and ensure filebeat is sending logs to logstash before proceeding
- you can verify this on the filebeat instance with:
  filebeat test output
  it should say "ok" on every step. Again: the following steps will not work correctly if sending data to logstash is not successful!
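The parts of filebeat.yml that matter here look roughly like this in 7.x syntax. This is a sketch, not the shipped file; the log path is only the usual pi-hole default, and the sole value you should need to change is the logstash host:

filebeat.inputs:
- type: log
  paths:
    - /var/log/pihole.log        # pi-hole's dnsmasq log (assumed default location)

output.logstash:
  hosts: ["LOGSTASHHOST:5141"]   # <- change this to your logstash hostname/ip

After editing, filebeat test config and the filebeat test output check from above should both succeed.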
KIBANA HOST (CAN BE THE SAME AS LOGSTASH AND ELASTICSEARCH)
- create the index pattern: Management -> Index patterns -> Create index pattern
- type logstash-syslog-dns - it should find one index
- click next step and select @timestamp as the time filter field
- click Create index pattern
- once the index pattern is created, verify that 79 fields are listed
- click the curved arrows on the top left
- import the json/elk-hole *.json that matches your version into kibana: management - saved objects - import
- optionally select the correct index pattern: logstash-syslog-dns*
- delete any existing template matching our index name:
  DELETE /_template/logstash-syslog-dns*
- import the template: paste the content of logstash-syslog-dns-index.template_ELK7.x.json into kibana's dev tools console and click the green triangle in the upper right of the pasted content (first line). Output should be:
  {
    "acknowledged" : true
  }
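To double-check that the template and the index actually exist, the following standard elasticsearch requests can be run in the same dev tools console (they are not specific to this repo):

GET /_template/logstash-syslog-dns*
GET /_cat/indices/logstash-syslog-dns*?v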
You should then be able to see your new dashboard and visualizations.
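If the dashboard stays empty, a simple end-to-end test is to generate a few queries through the pi-hole and then check that the document count grows. The IP below is a placeholder for your pi-hole:

# on any client that uses the pi-hole as resolver
dig example.com @192.168.1.2
dig doubleclick.net @192.168.1.2

# then, in kibana's dev tools console, the count should increase
GET /logstash-syslog-dns*/_count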