1. copy ```/conf.d/20-dns-syslog.conf``` to your logstash folder (usually ```/etc/logstash/```)
If you have other files in this folder, make sure to edit their input/output/filter sections so that they do not also match our filebeat DNS logs (files processed earlier could otherwise consume them). For testing purposes you can name your conf files like so:
```
...
...
/conf.d/40-other2.conf
```
This makes sure that ```/conf.d/20-dns-syslog.conf``` is processed at the beginning.
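If other pipeline files should ignore the DNS events entirely, a conditional guard can be used in their filter/output sections. A minimal sketch, assuming the DNS events carry a ```dns``` tag (adjust the condition to whatever your input actually sets):
```
filter {
  # only run these filters for events that are NOT our filebeat dns logs
  if "dns" not in [tags] {
    # ... other filters here ...
  }
}
```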
2. using vim or nano, open ```20-dns-syslog.conf``` and change ```ELASTICSEARCHHOST:PORT``` in the output section at the bottom of the file to match your environment; if elasticsearch is running on the same system as logstash, ```127.0.0.1:9200``` should work
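After editing, the output section should look roughly like this (a sketch, assuming elasticsearch runs locally on the default port):
```
output {
  elasticsearch {
    # replace with your elasticsearch host:port
    hosts => ["127.0.0.1:9200"]
  }
}
```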
3. copy ```dns``` to ```/etc/logstash/patterns/``` (create the folder if it does not exist)
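The patterns folder is what a grok filter reads its custom pattern definitions from; schematically it is wired up like this (```DNSQUERY``` is a hypothetical pattern name for illustration — the real definitions live in the ```dns``` file):
```
filter {
  grok {
    # load custom pattern definitions from the dns file
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{DNSQUERY}" }  # hypothetical pattern name
  }
}
```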
4. restart logstash
### PI-HOLE
5. copy ```/etc/filebeat/filebeat.yml``` to your filebeat installation on the pi-hole instance
6. customize ```LOGSTASHHOST:5141``` to match your logstash hostname/ip
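The relevant part of ```filebeat.yml``` should look roughly like this after editing (the host below is a placeholder; ```5141``` is the port the shipped logstash input listens on):
```
output.logstash:
  # replace with your logstash hostname/ip
  hosts: ["192.0.2.10:5141"]
```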
7. restart filebeat
8. copy ```99-pihole-log-facility.conf``` to ```/etc/dnsmasq.d/```
9. this is very important: restart pi-hole and ensure filebeat is sending logs to logstash before proceeding
10. You can verify this by:
11. at your filebeat instance, run:
```filebeat test output```
it should say ```ok``` on every step.
12. again: the following steps will not work correctly if sending data to logstash is not successful!
### KIBANA HOST (CAN BE THE SAME AS LOGSTASH AND ELASTICSEARCH)
13. create the index pattern:
```Management -> Index patterns -> Create index pattern```
14. type ```logstash-syslog-dns*``` - it should find one index
15. click next step and select ```@timestamp```
16. Create index pattern
17. Once the index is created, verify that 79 fields are listed
18. click the curved arrows on the top right
19. import the ```json/elk-hole *.json``` suitable for your version into kibana: ```Management -> Saved Objects -> Import```
20. optionally select the correct index pattern: ```logstash-syslog-dns*```
21. delete any existing template matching our index name:
```DELETE /_template/logstash-syslog-dns*```
22. import the template: paste the content of ```logstash-syslog-dns-index.template_ELK7.x.json``` into kibana's dev tools console
23. click the green triangle in the upper right of the pasted content (first line). Output should be:
```
{
  "acknowledged" : true
}
```
24. as a precaution, restart the whole ELK stack
```
systemctl restart logstash
systemctl restart elasticsearch
systemctl restart kibana
```
You should then be able to see your new dashboard and visualizations.