Now that our check has passed, we should be streaming NetFlow into our Elastic stack. You'll notice in your topology that three hosts are connected at each site. When these hosts booted, they began generating traffic to different websites to simulate a user. When we initially launched the containers on the telemetry host, Filebeat loaded a set of pre-built dashboards into Kibana.
Navigate to your telemetry host's IP address on port 5601 - https://ip-address:5601. You can log in as the elastic user with the password we set earlier.
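If the login page doesn't load, you can first verify from a shell that Kibana is up. This is only a quick check and assumes the lab's self-signed certificate (hence the -k flag):

# Query Kibana's status API on the telemetry host
curl -k -u elastic:$ELASTIC_PASSWORD https://ip-address:5601/api/status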
Select Dashboards from the menu and search for “netflow”. Select any of the available dashboards. Below are a few examples of the visualizations you get by simply turning on NetFlow and pointing it at a running instance of Elastic.
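If the netflow dashboards don't appear, they can be re-imported manually. This assumes the Filebeat configuration shown later in this section, which already carries the Kibana connection details:

# Re-load the pre-built Kibana dashboards that ship with Filebeat
filebeat setup --dashboards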
Navigate back to your virtual desktop and VS Code instance. You'll notice the telemetry folder contains two templates. These templates can be modified and sent to the telemetry node using an Ansible playbook. The Filebeat template, for example, enables the netflow module and points it at Kibana and Elasticsearch:
# Filebeat template for the telemetry node
filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml

# Enable the netflow module and listen for NetFlow/IPFIX on UDP 2055
filebeat.modules:
  - module: netflow
    log:
      enabled: true
      var.netflow_host: "0.0.0.0"
      var.netflow_port: 2055

# Kibana connection used to load the pre-built dashboards
setup.kibana:
  host: ${KIBANA_HOST}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}

setup.dashboards:
  enabled: true

# Ship events to Elasticsearch over TLS
output.elasticsearch:
  hosts: ${ELASTIC_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  ssl.enabled: true
  ssl.certificate_authorities: "certs/ca/ca.crt"
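The template references environment variables (KIBANA_HOST, ELASTIC_HOSTS, ELASTIC_USER, ELASTIC_PASSWORD) that are resolved wherever Filebeat runs. If you want to sanity-check a modified template by hand on the telemetry node before pushing it, Filebeat's built-in test subcommands are handy; the values below are placeholders, so substitute your own:

# Assumes filebeat is on the PATH and the variables the template references are exported
export KIBANA_HOST="https://ip-address:5601"
export ELASTIC_HOSTS="https://ip-address:9200"
export ELASTIC_USER="elastic"
export ELASTIC_PASSWORD="your-password"

filebeat test config -c filebeat.yml   # validate the configuration syntax
filebeat test output -c filebeat.yml   # confirm Filebeat can reach Elasticsearch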
Once you've modified a template, push it to the telemetry node by running the playbook against your production inventory:

ansible-playbook ciscops.mdd.telemetry_update_elastic -i inventory_prod
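After the playbook runs, you can confirm that NetFlow records are landing in Elasticsearch with a quick count of the documents produced by the netflow module. This is only a sketch; adjust the host, credentials, and CA path to match your environment:

# Count netflow documents in the Filebeat indices
curl --cacert certs/ca/ca.crt -u elastic:$ELASTIC_PASSWORD \
  "https://ip-address:9200/filebeat-*/_count?q=event.module:netflow"

A non-zero count means flows from the site routers are being parsed and indexed.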