Vladyslav Diadenko
Sep 15, 2024

When I first started working with the Elastic Stack, one of the biggest challenges was implementing an Elastic cluster with secure communication between its components. The official Elastic documentation, linked throughout this article, helped me significantly in that process.

For this lab, I aim to provide a full tutorial on how to implement an Elasticsearch cluster with five Elasticsearch nodes, Kibana, Metricbeat, and Filebeat for cluster health monitoring. Additionally, I’ll include a brief section on implementing FortiDragon pipelines to parse FortiGate firewall syslogs and set up dashboards for monitoring. This tutorial will serve as a foundation to build a more complex lab for a future SIEM solution. The Fleet Server, as part of Kibana (integrated on the same host), will also be installed.

Note that no automation tools are included in this tutorial. I have already provided a Docker Compose setup for single-node installations. In this guide, I’ll walk you through each step manually. This material is not meant to be exhaustive or the only “correct” guide, but simply a sharing of my own experience and knowledge. Let’s begin by defining our topology.

All SSL certificates used in this lab will be self-signed. It’s not necessary to strictly follow this cluster topology; it’s intended to demonstrate how to assign specific roles to each node. You could, for example, create a few nodes without altering their roles, in which case each node would hold all roles. More information about node roles can be found here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html

The main OS for the VMs in this lab will be Debian 11. For the master nodes, I will allocate 2 CPUs and 4GB of RAM, while for the hot (data) nodes, I will assign 4 CPUs and 16GB of RAM. For a production environment, you will need more resources, as outlined here: https://www.elastic.co/guide/en/cloud-enterprise/current/ece-hardware-prereq.html

The host OS I am using is Pop!_OS with Cockpit as the web-based graphical interface for virtual machine management (https://cockpit-project.org/running). However, you can also use the default Virtual Machine Manager, or if you’re on Windows, you can use VirtualBox. For this lab, I will create a network with the range 172.20.20.0/24 using the Cockpit web interface.

As an alternative, the virsh tool can be used to define, create, and start the network.

cat <<EOF > elastic-lab.xml
<network>
  <name>elastic-lab</name>
  <bridge name='virbr1'/>
  <forward mode='nat'/>
  <ip address='172.20.20.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='172.20.20.100' end='172.20.20.150'/>
    </dhcp>
  </ip>
</network>
EOF

sudo virsh net-define elastic-lab.xml
sudo virsh net-start elastic-lab
sudo virsh net-autostart elastic-lab

As mentioned earlier, you can define the same network using VirtualBox on a Windows or Linux host without any issues. For the initial VM creation, I will be using Vagrant. Don’t worry if you’re not familiar with Vagrant; you can instead spend a few minutes installing the VMs manually.

Vagrant.configure("2") do |config|

  config.vm.define "es_m1" do |es_m1|
    es_m1.vm.box = "debian/bullseye64"
    es_m1.vm.hostname = "es-m1"
    es_m1.ssh.insert_key = false
    es_m1.nfs.verify_installed = false
    es_m1.vm.synced_folder '.', '/vagrant', disabled: true

    es_m1.vm.provider :libvirt do |libvirt|
      libvirt.management_network_name = 'elastic-lab'
      libvirt.management_network_address = '172.20.20.0/24'
      libvirt.cpus = 2
      libvirt.memory = 4096
    end

    es_m1.vm.provision "shell", privileged: true, inline: <<-SHELL
      echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
      sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
      systemctl restart sshd
    SHELL
  end

  config.vm.define "es_m2" do |es_m2|
    es_m2.vm.box = "debian/bullseye64"
    es_m2.vm.hostname = "es-m2"
    es_m2.ssh.insert_key = false
    es_m2.nfs.verify_installed = false
    es_m2.vm.synced_folder '.', '/vagrant', disabled: true

    es_m2.vm.provider :libvirt do |libvirt|
      libvirt.management_network_name = 'elastic-lab'
      libvirt.management_network_address = '172.20.20.0/24'
      libvirt.cpus = 2
      libvirt.memory = 4096
    end

    es_m2.vm.provision "shell", privileged: true, inline: <<-SHELL
      echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
      sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
      systemctl restart sshd
    SHELL
  end

  config.vm.define "es_m3" do |es_m3|
    es_m3.vm.box = "debian/bullseye64"
    es_m3.vm.hostname = "es-m3"
    es_m3.ssh.insert_key = false
    es_m3.nfs.verify_installed = false
    es_m3.vm.synced_folder '.', '/vagrant', disabled: true

    es_m3.vm.provider :libvirt do |libvirt|
      libvirt.management_network_name = 'elastic-lab'
      libvirt.management_network_address = '172.20.20.0/24'
      libvirt.cpus = 2
      libvirt.memory = 4096
    end

    es_m3.vm.provision "shell", privileged: true, inline: <<-SHELL
      echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
      sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
      systemctl restart sshd
    SHELL
  end

  config.vm.define "es_h1" do |es_h1|
    es_h1.vm.box = "debian/bullseye64"
    es_h1.vm.hostname = "es-h1"
    es_h1.ssh.insert_key = false
    es_h1.nfs.verify_installed = false
    es_h1.vm.synced_folder '.', '/vagrant', disabled: true

    es_h1.vm.provider :libvirt do |libvirt|
      libvirt.management_network_name = 'elastic-lab'
      libvirt.management_network_address = '172.20.20.0/24'
      libvirt.cpus = 4
      libvirt.memory = 16384
    end

    es_h1.vm.provision "shell", privileged: true, inline: <<-SHELL
      echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
      sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
      systemctl restart sshd
    SHELL
  end

  config.vm.define "es_h2" do |es_h2|
    es_h2.vm.box = "debian/bullseye64"
    es_h2.vm.hostname = "es-h2"
    es_h2.ssh.insert_key = false
    es_h2.nfs.verify_installed = false
    es_h2.vm.synced_folder '.', '/vagrant', disabled: true

    es_h2.vm.provider :libvirt do |libvirt|
      libvirt.management_network_name = 'elastic-lab'
      libvirt.management_network_address = '172.20.20.0/24'
      libvirt.cpus = 4
      libvirt.memory = 16384
    end

    es_h2.vm.provision "shell", privileged: true, inline: <<-SHELL
      echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
      sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
      systemctl restart sshd
    SHELL
  end

  config.vm.define "kibana" do |kibana|
    kibana.vm.box = "debian/bullseye64"
    kibana.vm.hostname = "kibana"
    kibana.ssh.insert_key = false
    kibana.nfs.verify_installed = false
    kibana.vm.synced_folder '.', '/vagrant', disabled: true

    kibana.vm.provider :libvirt do |libvirt|
      libvirt.management_network_name = 'elastic-lab'
      libvirt.management_network_address = '172.20.20.0/24'
      libvirt.cpus = 4
      libvirt.memory = 8192
    end

    kibana.vm.provision "shell", privileged: true, inline: <<-SHELL
      echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
      sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
      systemctl restart sshd
    SHELL
  end

end

As you can see, I have allowed root login via SSH on all cluster nodes for lab purposes only:

echo 'PermitRootLogin yes' | sudo tee -a /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd

In my lab, the addressing is as follows:

  • Kibana: 172.20.20.10
  • Elasticsearch master node 1: 172.20.20.11
  • Elasticsearch master node 2: 172.20.20.12
  • Elasticsearch master node 3: 172.20.20.13
  • Elasticsearch hot data node 1: 172.20.20.14
  • Elasticsearch hot data node 2: 172.20.20.15

We assume that your network is configured and you have access to the Internet. Now, let’s install the required packages. On each Elasticsearch cluster node (using version 8.14.3 in this lab), run the following commands:

sudo apt update
sudo apt install -y curl sshpass jq default-jre unzip apt-transport-https gpg
ELASTIC_VERSION="8.14.3"
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update
sudo apt-get install -y "elasticsearch=$ELASTIC_VERSION"
sudo apt-get install -y "metricbeat=$ELASTIC_VERSION"
sudo apt-get install -y "filebeat=$ELASTIC_VERSION"

Next, let’s install the required service on the Kibana host by running the following commands:

sudo apt update
sudo apt install -y curl unzip apt-transport-https gpg
ELASTIC_VERSION="8.14.3"
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update
sudo apt install -y "kibana=$ELASTIC_VERSION"

After you have installed all the defined services, create a directory on the Kibana host for the SSL certificates. Run the following command:

sudo mkdir /etc/kibana/certs/

Now, let’s use the es-m1 host to generate all SSL/TLS certificates. To make things faster and easier, we’ll define the certificate password in a variable (only for lab environment use):

CERT_PASS=changeme123

On the es-m1 node, let’s start by generating the CA certificate:

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca -s -out /usr/share/elasticsearch/elastic-stack-ca.p12 --pass $CERT_PASS

Next, generate the certificate for the transport layer:

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /usr/share/elasticsearch/elastic-stack-ca.p12 -s -out /usr/share/elasticsearch/elastic-certificates.p12 --ca-pass $CERT_PASS --pass $CERT_PASS

Next, we will create the instances.yml file, which will define the instances for which certificates are generated. Each instance is identified by its name and associated IP addresses. This file is used when generating certificates to ensure each node has the correct information for secure communication. Here’s the command to create the instances.yml file:

sudo bash -c 'cat <<EOF > /usr/share/elasticsearch/instances.yml
instances:
  - name: es-m1
    ip:
      - 127.0.0.1
      - 172.20.20.11
  - name: es-m2
    ip:
      - 127.0.0.1
      - 172.20.20.12
  - name: es-m3
    ip:
      - 127.0.0.1
      - 172.20.20.13
  - name: es-h1
    ip:
      - 127.0.0.1
      - 172.20.20.14
  - name: es-h2
    ip:
      - 127.0.0.1
      - 172.20.20.15
  - name: kibana
    ip:
      - 127.0.0.1
      - 172.20.20.10
  - name: fleet-server
    ip:
      - 127.0.0.1
      - 172.20.20.10
  - name: logstash
    ip:
      - 127.0.0.1
      - 172.20.20.16
EOF'

If you plan to use DNS names, add the dns key-value pair as shown in the example for Kibana:

instances:
  - name: kibana
    dns:
      - kibana.local
    ip:
      - 127.0.0.1
      - 172.20.20.10

Generate the instance certificates. This command creates SSL/TLS certificates for all the instances defined in the instances.yml file, signing them with the previously generated CA:

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert -s -out /usr/share/elasticsearch/certs.zip --in /usr/share/elasticsearch/instances.yml --ca /usr/share/elasticsearch/elastic-stack-ca.p12 --pass $CERT_PASS --ca-pass $CERT_PASS

Unzip the generated certificates (certs.zip file) into the specified directory:

sudo mkdir -p /usr/share/elasticsearch/certs
sudo unzip -o /usr/share/elasticsearch/certs.zip -d /usr/share/elasticsearch/certs

Convert the Kibana .p12 certificate to PEM format (private key):

sudo openssl pkcs12 -in /usr/share/elasticsearch/certs/kibana/kibana.p12 -nocerts -out /usr/share/elasticsearch/certs/kibana/kibana.key -nodes -passin pass:$CERT_PASS

Convert the Kibana .p12 certificate to PEM format (public certificate):

sudo openssl pkcs12 -in /usr/share/elasticsearch/certs/kibana/kibana.p12 -clcerts -nokeys -out /usr/share/elasticsearch/certs/kibana/kibana.crt -passin pass:$CERT_PASS

Convert Fleet Server .p12 to PEM format (private key):

sudo openssl pkcs12 -in /usr/share/elasticsearch/certs/fleet-server/fleet-server.p12 -nocerts -out /usr/share/elasticsearch/certs/fleet-server/fleet-server.key -nodes -passin pass:$CERT_PASS

Convert Fleet Server .p12 to PEM format (public certificate):

sudo openssl pkcs12 -in /usr/share/elasticsearch/certs/fleet-server/fleet-server.p12 -clcerts -nokeys -out /usr/share/elasticsearch/certs/fleet-server/fleet-server.crt -passin pass:$CERT_PASS

Generate elasticsearch-ca.pem:

sudo openssl pkcs12 -in /usr/share/elasticsearch/elastic-stack-ca.p12 -clcerts -nokeys -passin pass:$CERT_PASS \
| awk '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/' \
| sudo tee /usr/share/elasticsearch/elasticsearch-ca.pem > /dev/null

Remove all auto-generated certificates in the /etc/elasticsearch/certs directory:

sudo rm -f /etc/elasticsearch/certs/*

Copy the generated certificates to the Elasticsearch node’s certs folder. Let’s start with master node 1:

sudo cp /usr/share/elasticsearch/elastic-certificates.p12 /etc/elasticsearch/certs/
sudo cp /usr/share/elasticsearch/elasticsearch-ca.pem /etc/elasticsearch/certs/
sudo cp /usr/share/elasticsearch/certs/es-m1/es-m1.p12 /etc/elasticsearch/certs/http.p12
sudo cp /usr/share/elasticsearch/elastic-stack-ca.p12 /etc/elasticsearch/certs/

From master 1, send the generated certificates to the other nodes in the cluster. Use the following commands to copy the necessary certificates to each node:

REMOTE_VM_PASS="your_password_here"

# For Elasticsearch master 2
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elastic-certificates.p12 root@172.20.20.12:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elastic-stack-ca.p12 root@172.20.20.12:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elasticsearch-ca.pem root@172.20.20.12:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/certs/es-m2/es-m2.p12 root@172.20.20.12:/etc/elasticsearch/certs/http.p12

# For Elasticsearch master 3
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elastic-certificates.p12 root@172.20.20.13:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elastic-stack-ca.p12 root@172.20.20.13:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elasticsearch-ca.pem root@172.20.20.13:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/certs/es-m3/es-m3.p12 root@172.20.20.13:/etc/elasticsearch/certs/http.p12

# For Elasticsearch hot data node 1
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elastic-certificates.p12 root@172.20.20.14:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elastic-stack-ca.p12 root@172.20.20.14:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elasticsearch-ca.pem root@172.20.20.14:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/certs/es-h1/es-h1.p12 root@172.20.20.14:/etc/elasticsearch/certs/http.p12

# For Elasticsearch hot data node 2
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elastic-certificates.p12 root@172.20.20.15:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elastic-stack-ca.p12 root@172.20.20.15:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elasticsearch-ca.pem root@172.20.20.15:/etc/elasticsearch/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/certs/es-h2/es-h2.p12 root@172.20.20.15:/etc/elasticsearch/certs/http.p12

# For Fleet Server on the Kibana host
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/certs/fleet-server/fleet-server.crt root@172.20.20.10:/etc/kibana/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/certs/fleet-server/fleet-server.key root@172.20.20.10:/etc/kibana/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/elasticsearch-ca.pem root@172.20.20.10:/etc/kibana/certs/

# For Kibana certificates
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/certs/kibana/kibana.crt root@172.20.20.10:/etc/kibana/certs/
sshpass -p $REMOTE_VM_PASS scp -o StrictHostKeyChecking=no /usr/share/elasticsearch/certs/kibana/kibana.key root@172.20.20.10:/etc/kibana/certs/
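If you prefer, the copies to the remaining Elasticsearch nodes can be collapsed into a small loop. This is just a sketch, assuming the same file layout and the REMOTE_VM_PASS variable defined above; the Kibana and Fleet Server files are still copied separately as shown:

# Sketch: copy the shared CA/transport files and each node's own HTTP certificate
declare -A NODES=( [es-m2]=172.20.20.12 [es-m3]=172.20.20.13 [es-h1]=172.20.20.14 [es-h2]=172.20.20.15 )

for NAME in "${!NODES[@]}"; do
  IP="${NODES[$NAME]}"
  for FILE in elastic-certificates.p12 elastic-stack-ca.p12 elasticsearch-ca.pem; do
    sshpass -p "$REMOTE_VM_PASS" scp -o StrictHostKeyChecking=no \
      "/usr/share/elasticsearch/$FILE" "root@$IP:/etc/elasticsearch/certs/"
  done
  # Each node's own certificate is renamed to http.p12 on the destination
  sshpass -p "$REMOTE_VM_PASS" scp -o StrictHostKeyChecking=no \
    "/usr/share/elasticsearch/certs/$NAME/$NAME.p12" "root@$IP:/etc/elasticsearch/certs/http.p12"
done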

On each Elasticsearch node in the cluster, change the permissions for the certificate files:

sudo chown root:elasticsearch /etc/elasticsearch/certs/elasticsearch-ca.pem
sudo chmod 0644 /etc/elasticsearch/certs/elasticsearch-ca.pem

sudo chown root:elasticsearch /etc/elasticsearch/certs/elastic-certificates.p12
sudo chmod 0644 /etc/elasticsearch/certs/elastic-certificates.p12

sudo chown root:elasticsearch /etc/elasticsearch/certs/http.p12
sudo chmod 0644 /etc/elasticsearch/certs/http.p12

On each Elasticsearch cluster node, add the certificate password to the Elasticsearch keystore:

CERT_PASS="changeme123"
echo "$CERT_PASS" | sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password -f --stdin
echo "$CERT_PASS" | sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password -f --stdin
echo "$CERT_PASS" | sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password -f --stdin

Import the CA keystore into the HTTP keystore on each Elasticsearch cluster node:

CERT_PASS="changeme123"
sudo keytool -importkeystore \
-destkeystore /etc/elasticsearch/certs/http.p12 \
-srckeystore /etc/elasticsearch/certs/elastic-stack-ca.p12 \
-srcstoretype PKCS12 \
-deststorepass "$CERT_PASS" \
-srcstorepass "$CERT_PASS" \
-noprompt

Next, you need to edit the configuration file located at /etc/elasticsearch/elasticsearch.yml on each Elasticsearch node (a sample snippet for es-m1 follows the list):

  • cluster.name: The cluster name must be the same across all Elasticsearch nodes.
  • node.name: Assign a unique name to each node, such as es-m1, es-m2, etc.
  • network.host: For each Elasticsearch node, specify its own VM IP address.
  • http.port: The port should be set to 9200.
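For example, on es-m1 these settings might look like the snippet below. Note that my-application is just the sample cluster name from the default config file; whatever name you pick must be identical on every node:

cluster.name: my-application
node.name: es-m1
network.host: 172.20.20.11
http.port: 9200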

For each node, the discovery.seed_hosts configuration will look like this:

discovery.seed_hosts:
  - 172.20.20.11:9300
  - 172.20.20.12:9300
  - 172.20.20.13:9300
  - 172.20.20.14:9300
  - 172.20.20.15:9300

Change the cluster.initial_master_nodes setting. If the security auto-configuration already added this setting elsewhere in the file, edit that entry instead of adding a duplicate. The configuration should look like this:

cluster.initial_master_nodes:
  - es-m1

For xpack.security.transport.ssl, update the keystore and truststore paths, and check the verification_mode as follows:

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/elastic-certificates.p12
  truststore.path: certs/elastic-certificates.p12

Make sure to uncomment the following parameters at the end of the configuration file:

http.host: 0.0.0.0
transport.host: 0.0.0.0

Start and run the service on all Elasticsearch hosts:

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

Go back to the section where we started editing /etc/elasticsearch/elasticsearch.yml and repeat the instructions for each Elasticsearch node.

You may also come across examples of adding new Elasticsearch hosts to the cluster by creating an enrollment token. While you can use this method, be aware that it will overwrite the /etc/elasticsearch/certs directory and the keystores. Whether or not you generate an enrollment token to add a new node, you will still need to manually edit the elasticsearch.yml config file.

When adding new nodes via token enrollment, you might encounter an error like:

ERROR: Skipping security auto configuration because it appears that the node is not starting up for the first time. The node might already be part of a cluster and this auto setup utility is designed to configure Security for new clusters only., with exit code 80

This happens because the configuration file on the new nodes should remain unchanged. However, you can still manually edit it if an error occurs during token enrollment. The enrollment token only updates a few fields in the new node’s configuration file. So whether you use the enrollment token or not, you will need to go back to the section in this article where we started editing the /etc/elasticsearch/elasticsearch.yml file.

Here’s an example of a successful token enrollment:
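A sketch of the flow, assuming an 8.x package installation and a new node that has not yet been started for the first time:

# On an existing cluster node: create an enrollment token for a new Elasticsearch node
sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node

# On the new node, before its first start: apply the token
sudo /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <PASTE_TOKEN_HERE>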

During the Elasticsearch cluster setup, you might encounter other errors, such as missing configuration or incorrect keystore passwords. To monitor the logs and troubleshoot issues during the setup, tail the cluster log file, which is named after your cluster.name (my-application here):

sudo tail -f /var/log/elasticsearch/my-application.log

After all nodes are added to the cluster, I will reset the password for the default elastic superuser. The password can be reset from any cluster node.

sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i

You can use the following commands to debug the Elasticsearch cluster’s health (you can either ignore certificate verification with -k or provide a path to the .pem file):

curl -k -u elastic https://ANY_CLUSTER_NODE_IP:9200/_cat/nodes?pretty
curl -k -u elastic https://ANY_CLUSTER_NODE_IP:9200/_cluster/health?pretty
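If you prefer to verify the certificate instead, point curl at the CA file that was copied to every node (sudo is needed because /etc/elasticsearch is not world-readable):

sudo curl --cacert /etc/elasticsearch/certs/elasticsearch-ca.pem -u elastic "https://ANY_CLUSTER_NODE_IP:9200/_cluster/health?pretty"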

You can also navigate to https://172.20.20.11:9200 in your browser to check the Elasticsearch status.

Here is the Elasticsearch cluster I’ve created. At this point, all nodes hold all possible roles (the node.role column in _cat/nodes shows the full set for each node). Let’s change this configuration to assign specific roles to each node. Keep in mind that you can skip this step if you don’t want to configure roles.

Changing Elasticsearch Master Node Roles

I will start with es-m1, then proceed to es-m2 and es-m3. The following set of commands configures the master node roles; repeat these steps on the master nodes only.

First, stop the Elasticsearch service:

sudo systemctl stop elasticsearch

Edit the configuration file /etc/elasticsearch/elasticsearch.yml and add the following at the end:

node.roles: ["master", "remote_cluster_client"]

Run this command to remove any on-disk shard data that is no longer allowed by the node’s new roles:

sudo /usr/share/elasticsearch/bin/elasticsearch-node repurpose

Start the Elasticsearch service again:

sudo systemctl start elasticsearch

Repeat these steps for all master nodes in the cluster.

Changing Elasticsearch Data Node Roles

Next, for the data (hot) nodes, starting with es-h1 and then applying the same to es-h2, edit the configuration file /etc/elasticsearch/elasticsearch.yml and add the following:

node.roles: ["data_hot", "data_content", "transform", "ingest"]

Then restart the Elasticsearch service:

sudo systemctl restart elasticsearch

After the restart, each node reports only its assigned roles.
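You can verify the role assignment from any node; the node.role column lists abbreviated roles (for example, m for master, h for data_hot, s for data_content):

curl -k -u elastic "https://172.20.20.11:9200/_cat/nodes?v&h=ip,name,node.role,master"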

Log in to the Kibana host and change the permissions for the certificates that were copied earlier:

sudo chown root:kibana /etc/kibana/certs/elasticsearch-ca.pem
sudo chmod 0660 /etc/kibana/certs/elasticsearch-ca.pem

sudo chown root:kibana /etc/kibana/certs/kibana.crt
sudo chmod 0660 /etc/kibana/certs/kibana.crt

sudo chown root:kibana /etc/kibana/certs/kibana.key
sudo chmod 0660 /etc/kibana/certs/kibana.key

sudo chmod 660 /etc/kibana/certs/fleet-server.*
sudo chown root:root /etc/kibana/certs/fleet-server.*

From the es-m1 node, I will generate a token for Kibana. To speed up the execution of the curl commands, let's define the password in a variable. Generate and copy the token:

ELASTIC_PASS='changeme123'
ANY_ES_NODE_IP='172.20.20.14'
api_response=$(curl -s -X POST --cacert /usr/share/elasticsearch/elasticsearch-ca.pem -u "elastic:$ELASTIC_PASS" "https://$ANY_ES_NODE_IP:9200/_security/service/elastic/kibana/credential/token/kibana_token")
token_value=$(echo "$api_response" | jq -r '.token.value')
echo $token_value

Create the Kibana keystore on the Kibana host by running the following command:

sudo /usr/share/kibana/bin/kibana-keystore create

Next, submit the generated token from es-m1 on the Kibana host. Use the token you generated earlier:

sudo /usr/share/kibana/bin/kibana-keystore add elasticsearch.serviceAccountToken
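If you copied the token into a shell variable on the Kibana host (token_value, as generated on es-m1 above), it can also be added non-interactively; this assumes the keystore was created as shown:

echo "$token_value" | sudo /usr/share/kibana/bin/kibana-keystore add elasticsearch.serviceAccountToken --stdin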

Generate the encryption keys for Kibana by running the following command:

sudo /usr/share/kibana/bin/kibana-encryption-keys generate

After generating the encryption keys, copy the last three lines, which contain the following keys:

xpack.encryptedSavedObjects.encryptionKey: *******
xpack.reporting.encryptionKey: ******
xpack.security.encryptionKey: ******

Add the generated encryption keys to the end of the /etc/kibana/kibana.yml file. Once the keys are added, go back to the start of the configuration file and modify the following parameters:

server.port: 5601
server.host: "172.20.20.10"
server.publicBaseUrl: "https://172.20.20.10:5601"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key
elasticsearch.hosts: [ "https://172.20.20.11:9200", "https://172.20.20.12:9200", "https://172.20.20.13:9200" ]
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/elasticsearch-ca.pem" ]

After making these changes, enable and start the Kibana service:

sudo systemctl daemon-reload
sudo systemctl enable kibana
sudo systemctl start kibana

If any errors occur, you can check the Kibana log file for troubleshooting:

tail -f /var/log/kibana/kibana.log

Wait a few minutes, then visit https://172.20.20.10:5601 and log in with the elastic account. To enable X-Pack monitoring and check the cluster nodes:

  1. Go to the main menu and navigate to Stack Monitoring.
  2. Click on Or, set up with self-monitoring.
  3. Then, click Turn on monitoring.

Next, let’s set up the Fleet Server to manage Elastic Agents and integrations.

Important Note: When adding a Fleet Server, make sure to configure the Elasticsearch output first. Another available output option is Logstash, but this is only accessible in the paid version or with a free trial.

For example, if you’re collecting data from hosts and managing them from a central point while implementing log parsing via custom Logstash pipelines, you must set up the Elasticsearch output for the Fleet Server first. This way, metrics and logs will be collected from the Fleet Server and sent to Elasticsearch, not Logstash.

If you’re using the paid version or a free trial, you can enable Logstash output per policy. For instance, a specific policy for Linux servers can be configured to send logs to Logstash for parsing via custom pipelines (enabled in the policy).

To add the Fleet Server:

  1. Navigate to the main menu.
  2. Click on Fleet and then Settings.
  3. Submit the Fleet Server host information (in this lab, https://172.20.20.10:8220).

After you’ve entered the required information, click Generate Fleet Server Policy. An installation script will appear — copy it for later use, or you can generate a new one if needed.

Next, navigate to the Output section in the Fleet settings and edit the existing default output.

Let’s prepare some necessary information for the configuration. We need the Elasticsearch CA trusted fingerprint. To get this, run the following command on any Elasticsearch node:

sudo openssl x509 -fingerprint -sha256 -noout -in /etc/elasticsearch/certs/elasticsearch-ca.pem | awk -F"=" {' print $2 '} | sed s/://g

In the Advanced YAML configuration section, we need to insert the elasticsearch-ca.pem certificate. Be careful with the YAML syntax. Use the example below, but replace it with your own generated certificate. First, copy this to your editor:

ssl:
  certificate_authorities:
  - |
    -----BEGIN CERTIFICATE-----
    [Your-Generated-Certificate-Here]
    -----END CERTIFICATE-----

To get the content of your elasticsearch-ca.pem certificate, run the following command:

cat /etc/elasticsearch/certs/elasticsearch-ca.pem

Paste the content into the Advanced YAML configuration section as shown in the example above. Note that the BEGIN CERTIFICATE and END CERTIFICATE lines, along with the certificate body, must be indented with 4 spaces.

As output hosts, I set all three Elasticsearch master nodes. If you add a new output, make sure to select:

  • Make this output the default for agent integrations
  • Make this output the default for agent monitoring

Next, use the previously generated installation script, or go to Agents, click Add Fleet Server, then select Advanced. At stage 4, click Generate service token. Proceed to stage 5 and copy the entire script.

I will separate this script into two steps. First, let’s install the Fleet Server on the Kibana host.

curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.14.3-linux-x86_64.tar.gz
tar xzvf elastic-agent-8.14.3-linux-x86_64.tar.gz
cd elastic-agent-8.14.3-linux-x86_64

After changing the directory, run the following command to install the Fleet Server:

sudo ./elastic-agent install \
--url=https://172.20.20.10:8220 \
--fleet-server-es=https://172.20.20.11:9200 \
--fleet-server-es=https://172.20.20.12:9200 \
--fleet-server-es=https://172.20.20.13:9200 \
--fleet-server-service-token=YOUR_GENERATED_TOKEN_HERE \
--fleet-server-policy=fleet-server-policy \
--fleet-server-es-ca-trusted-fingerprint=YOUR_FINGERPRINT_HERE \
--certificate-authorities=/etc/kibana/certs/elasticsearch-ca.pem \
--fleet-server-es-ca=/etc/kibana/certs/elasticsearch-ca.pem \
--fleet-server-cert=/etc/kibana/certs/fleet-server.crt \
--fleet-server-cert-key=/etc/kibana/certs/fleet-server.key \
--force

The Fleet Server has been successfully enrolled.

You will also see the Fleet Server appear in Kibana. It will start reporting metrics, which is a good indicator that communication between the Fleet Server and the Elasticsearch cluster is active and secured.

Elastic Agents

To add Elastic Agents to hosts, navigate to Fleet > Agents and click Add Agent. You can start by creating a basic policy where the agent will be added, and later move it to another policy. Alternatively, you can create a specific policy first — for example, for Windows hosts.

Before generating the Elastic Agent script, you can configure policies, such as one for Windows hosts. Choose the platform where the agent will be installed. After downloading the agent to the host, the main installation commands for Linux and Windows are:

# Linux
sudo ./elastic-agent install --url=https://FLEET_KIBANA_SERVER_IP:8220 --enrollment-token=GENERATED_TOKEN --certificate-authorities=/PATH/TO/elasticsearch-ca.pem --force

# Windows (run from an elevated prompt)
certutil -addstore -f "Root" C:\PATH\TO\elasticsearch-ca.pem
.\elastic-agent.exe install --url=https://FLEET_KIBANA_SERVER_IP:8220 --enrollment-token=GENERATED_TOKEN --certificate-authorities=C:\PATH\TO\elasticsearch-ca.pem --force

Elasticsearch Cluster Monitoring

Earlier, we enabled basic X-Pack monitoring for the Elasticsearch cluster in Kibana, but it’s recommended to use Metricbeat. Let’s first generate API keys for the Beats (you can use a certificate or ignore verification):

curl -s -X POST "https://172.20.20.11:9200/_security/api_key" \
-u elastic \
--header "Content-Type: application/json" \
--data '{"name": "beats-key", "role_descriptors": {}}' \
-k | jq -r '"\(.id):\(.api_key)"'

You can generate an API key for each Elasticsearch node where Metricbeat (and also Filebeat) will be installed, or use a single key for all nodes.

Next, open the /etc/metricbeat/metricbeat.yml file and navigate to the output.elasticsearch section. Edit the following fields (a consolidated example of the section follows the list):

  • hosts: Add the list of Elasticsearch master nodes (with port 9200).
  • Uncomment protocol: "https".
  • Uncomment api_key and set it to the one you generated earlier.
  • Under api_key, add the following:
    ssl:
      enabled: true
      ca_trusted_fingerprint: "FINGERPRINT"
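Put together, the section might look like this (BEATS_API_KEY is the id:api_key pair returned by the earlier request):

output.elasticsearch:
  hosts: ["https://172.20.20.11:9200", "https://172.20.20.12:9200", "https://172.20.20.13:9200"]
  protocol: "https"
  api_key: "BEATS_API_KEY"
  ssl:
    enabled: true
    ca_trusted_fingerprint: "FINGERPRINT"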

To get the fingerprint, run this command on any Elasticsearch node:

sudo openssl x509 -fingerprint -sha256 -noout -in /etc/elasticsearch/certs/elasticsearch-ca.pem | awk -F"=" {' print $2 '} | sed s/://g

Next, navigate to the setup.kibana section in the /etc/metricbeat/metricbeat.yml file and set the host:

setup.kibana:
  host: "https://172.20.20.10:5601"

Save the configuration and exit. Then, run the following command to enable the Elasticsearch X-Pack module:

sudo metricbeat modules enable elasticsearch-xpack

Navigate to the /etc/metricbeat/modules.d/elasticsearch-xpack.yml file and apply the following configuration:

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["https://HOST_IP:9200"] # CURRENT ELASTICSEARCH HOST IP ADDRESS WHERE YOU ARE CHANGING THE CONFIG
  api_key: "BEATS_API_KEY"
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/elasticsearch-ca.pem"]

Once the configuration is set, enable and start the Metricbeat service:

sudo systemctl enable metricbeat
sudo systemctl start metricbeat

Repeat these steps for each Elasticsearch cluster node. After completing the configuration on all nodes, navigate to Kibana Stack Monitoring and:

  1. Click the “Enter setup mode” button in the top right corner.
  2. Then, click “Disable self monitoring.”
  3. Finally, click “Exit setup mode.”

The Metricbeat configuration is complete, and it is working properly.

Filebeat

Next, we will configure Filebeat to collect Elasticsearch cluster logs in a central location. Start by editing the /etc/filebeat/filebeat.yml configuration file (a consolidated example of the edited sections follows the list):

  1. Search for the filebeat.inputs section and change enabled: false to true.
  2. In the setup.kibana section, uncomment the host and change the value to:
     host: "https://172.20.20.10:5601"
  3. Search for the output.elasticsearch section and set the hosts to:
     hosts: ["https://172.20.20.11:9200", "https://172.20.20.12:9200", "https://172.20.20.13:9200"]
  4. Add the api_key and the following SSL configuration:
     ssl:
       enabled: true
       ca_trusted_fingerprint: "FINGERPRINT"
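Put together, the edited sections of /etc/filebeat/filebeat.yml might look like this:

setup.kibana:
  host: "https://172.20.20.10:5601"

output.elasticsearch:
  hosts: ["https://172.20.20.11:9200", "https://172.20.20.12:9200", "https://172.20.20.13:9200"]
  api_key: "BEATS_API_KEY"
  ssl:
    enabled: true
    ca_trusted_fingerprint: "FINGERPRINT"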

To enable the Elasticsearch module on each node, run:

sudo filebeat modules enable elasticsearch

Next, edit the /etc/filebeat/modules.d/elasticsearch.yml configuration file, modifying the first lines for the server logs as shown:

- module: elasticsearch
  # Server log
  server:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
      - /var/log/elasticsearch/*.log
      - /var/log/elasticsearch/*_server.json
  ...

Finally, enable and start the Filebeat service:

sudo systemctl enable filebeat
sudo systemctl start filebeat

After setup, you will be able to see logs for each cluster node in Stack Monitoring.

Ensure that Index Lifecycle Management (ILM) is properly configured for Filebeat and Metricbeat indices to manage the retention and lifecycle of your logs and metrics data.
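As a quick check (a sketch; policy names can differ depending on your Beats setup), you can list the ILM policies and look for lifecycle errors from any Elasticsearch node:

# List all ILM policies (the Beats create their own by default)
curl -k -u elastic "https://172.20.20.11:9200/_ilm/policy?pretty"

# Show only indices/data streams whose lifecycle is currently in an error state
curl -k -u elastic "https://172.20.20.11:9200/_all/_ilm/explain?only_errors=true&pretty"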

Custom UDP Logs Integration (FortiGate firewall syslogs)

A quick and efficient method to import necessary templates, pipelines, and ILM policies is using the approach provided by FortiDragon (https://github.com/enotspe/fortinet-2-elasticsearch).

To use this method, follow the instructions in the repository and run the load.sh script. However, I ran into an issue where the script only works over HTTP, not HTTPS with self-signed certificates, so make sure to add the -k flag to all curl calls in the script to skip SSL verification (see the sketch below).
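One way to do that in bulk is a simple sed pass over the script; this is a sketch that assumes every curl invocation in load.sh is written as a plain "curl " command, so review the result before running it:

# Add -k after every curl call in the FortiDragon load.sh script
sed -i 's/curl /curl -k /g' load.sh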

How to configure the integration, upload dashboards, and set up pipelines is already documented in the FortiDragon project. You can also use Logstash, but that is a more complex setup and requires more time.
