Elastic Stack in Kubernetes

Maciej
14 min read · May 17, 2020


Introduction

While building an on-premises Kubernetes cluster, I set up a log and metrics monitoring environment with the Elastic Stack. Since Elastic Cloud on Kubernetes (ECK), the official Kubernetes Operator/CRDs for the Elastic Stack, had reached version 1.0, I built the environment with ECK.

Overall configuration overview

To organize all of the resources involved, here is a summary diagram of the overall configuration.

Building steps

Follow the steps below.

  1. Create the namespace
  2. Install ECK
  3. Elasticsearch
  4. Kibana
  5. Filebeat
  6. Metricbeat
  7. Auditbeat

1. Creating the namespace

Create a namespace named elastic-monitoring, because I want to keep the monitoring stack isolated from default and kube-system.

kind: Namespace
apiVersion: v1
metadata:
  name: elastic-monitoring
  labels:
    name: elastic-monitoring

Create the namespace by applying the manifest above.

$ kubectl apply -f elastic-namespace.yaml 
namespace/elastic-monitoring created

2. Installing ECK

Next, install ECK. This creates the Elasticsearch, Kibana, and ApmServer CRDs together with the operator.

$ curl -OL https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 93520 100 93520 0 0 94865 0 --:--:-- --:--:-- --:--:-- 94847

$ kubectl apply -f all-in-one.yaml
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
namespace/elastic-system created
statefulset.apps/elastic-operator created
serviceaccount/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
service/elastic-webhook-server created
secret/elastic-webhook-server-cert created

$ kubectl get po -n elastic-system
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 1 3m13s
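
If the operator pod does not reach Running, its logs are the first place to look. For example:

$ kubectl logs elastic-operator-0 -n elastic-system | tail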

3. Elasticsearch

Create an Elasticsearch cluster.

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitoring-elasticsearch
  namespace: elastic-monitoring
spec:
  version: 7.6.0
  nodeSets:
    - name: master-data
      count: 3
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 50Gi
            storageClassName: standard

Apply the above YAML.

$ kubectl apply -f elasticsearch.yaml 
elasticsearch.elasticsearch.k8s.elastic.co/monitoring-elasticsearch created

$ kubectl get po -n elastic-monitoring
NAME READY STATUS RESTARTS AGE
monitoring-elasticsearch-es-master-data-0 1/1 Running 0 53s
monitoring-elasticsearch-es-master-data-1 1/1 Running 0 53s
monitoring-elasticsearch-es-master-data-2 1/1 Running 0 52s

$ kubectl get es -n elastic-monitoring
NAME HEALTH NODES VERSION PHASE AGE
monitoring-elasticsearch green 3 7.6.0 Ready 79s
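
To sanity-check the cluster directly, you can port-forward the HTTP Service that ECK created and call the health API. This is just a quick check: the elastic user's password comes from the Secret shown in the Kibana step below, and -k is needed because the certificate is self-signed.

$ kubectl port-forward service/monitoring-elasticsearch-es-http 9200 -n elastic-monitoring &
$ PW=$(kubectl get secret monitoring-elasticsearch-es-elastic-user -n elastic-monitoring -o=jsonpath='{.data.elastic}' | base64 --decode)
$ curl -sk -u "elastic:${PW}" https://localhost:9200/_cluster/health?pretty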

4. Kibana

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitoring-kibana
  namespace: elastic-monitoring
spec:
  version: 7.6.0
  count: 1
  elasticsearchRef:
    name: monitoring-elasticsearch

Apply the above YAML.

$ kubectl apply -f kibana.yaml 
kibana.kibana.k8s.elastic.co/monitoring-kibana created

$ kubectl get kibana -n elastic-monitoring
NAME HEALTH NODES VERSION AGE
monitoring-kibana green 1 7.6.0 117s

Get the login password for the elastic user.

$ kubectl get secret monitoring-elasticsearch-es-elastic-user -n elastic-monitoring -o=jsonpath='{.data.elastic}' | base64 --decode; echo
s2gqmsd5vxbknqlqpvsjmztg

Set up port forwarding to Kibana.

$ kubectl port-forward service/monitoring-kibana-kb-http 5601 -n elastic-monitoring
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
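
With the port forward in place, Kibana is reachable at https://localhost:5601 (the browser will warn about the self-signed certificate); log in as elastic with the password retrieved above. The same can be checked from the CLI via Kibana's status endpoint, reusing the PW variable from the earlier health check:

$ curl -sk -u "elastic:${PW}" https://localhost:5601/api/status | head -c 300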

5. Filebeat

Install Filebeat for log collection. The setup is based on the Kubernetes manifest that Elastic provides officially.

Download the base YAML.

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.6/deploy/kubernetes/filebeat-kubernetes.yaml

Change the namespace

By default kube-system is specified, so change it to elastic-monitoring.

sed -e 's/namespace: kube-system/namespace: elastic-monitoring/g' filebeat-kubernetes.yaml > filebeat.yaml

Specify the Elasticsearch/Kibana hosts, add authentication, and reference the secrets

ECK has already created the Services and Secrets for Elasticsearch and Kibana, so point Filebeat at the following (a quick way to verify the names follows the list).

elasticsearch

  • Service: monitoring-elasticsearch-es-http
  • Secret: monitoring-elasticsearch-es-http-certs-public

kibana

  • Service: monitoring-kibana-kb-http
  • Secret: monitoring-kibana-kb-http-certs-public
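
These names can be verified directly, for example:

$ kubectl get svc,secret -n elastic-monitoring | grep -E 'es-http|kb-http'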

DaemonSet

env:
  - name: ELASTICSEARCH_HOST
    value: monitoring-elasticsearch-es-http
  - name: ELASTICSEARCH_PORT
    value: "9200"
  - name: ELASTICSEARCH_USERNAME
    value: elastic
  - name: ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        key: elastic
        name: monitoring-elasticsearch-es-elastic-user
  - name: KIBANA_HOST
    value: monitoring-kibana-kb-http
volumeMounts:
  - name: es-certs
    mountPath: /mnt/elastic/tls.crt
    readOnly: true
    subPath: tls.crt
  - name: kb-certs
    mountPath: /mnt/kibana/tls.crt
    readOnly: true
    subPath: tls.crt
volumes:
  - name: es-certs
    secret:
      secretName: monitoring-elasticsearch-es-http-certs-public
  - name: kb-certs
    secret:
      secretName: monitoring-kibana-kb-http-certs-public

ConfigMap

data:
  filebeat.yml: |-
    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt
    setup.dashboards.enabled: true
    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt

Add tolerations to also collect from the master nodes

Since I want to collect data from the master nodes as well, I add a toleration.

DaemonSet

spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule

Autodiscover

Filebeat has an autodiscover feature for Kubernetes, so enable it.

ConfigMap

As the comments in the default manifest suggest, remove (comment out) the filebeat.inputs section and uncomment the filebeat.autodiscover section. An example of per-pod hint annotations follows the config below.

filebeat.yml: |-
  # filebeat.inputs:
  # - type: container
  #   paths:
  #     - /var/log/containers/*.log
  #   processors:
  #     - add_kubernetes_metadata:
  #         host: ${NODE_NAME}
  #         matchers:
  #         - logs_path:
  #             logs_path: "/var/log/containers/"
  # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
  filebeat.autodiscover:
    providers:
      - type: kubernetes
        node: ${NODE_NAME}
        hints.enabled: true
        hints.default_config:
          type: container
          paths:
            - /var/log/containers/*${data.kubernetes.container.id}.log
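
With hints.enabled: true, individual workloads can steer Filebeat through pod annotations. A minimal sketch (the nginx pod and module choice here are illustrative, not part of the original setup):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-hints-demo
  annotations:
    # Ask Filebeat's hints-based autodiscover to parse this pod's logs with the nginx module
    co.elastic.logs/module: nginx
    co.elastic.logs/fileset.stdout: access
    co.elastic.logs/fileset.stderr: error
spec:
  containers:
    - name: nginx
      image: nginx:1.17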

Adding modules

Add the system module with the syslog and auth filesets.

ConfigMap

filebeat.yml: |-
  filebeat.config:
    modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
  filebeat.modules:
    - module: system
      syslog:
        enabled: true
        var.paths: ["/var/log/messages"]
        var.convert_timezone: true
      auth:
        enabled: true
        var.paths: ["/var/log/secure"]
        var.convert_timezone: true

Timezone adaptation

If left as is, timestamps are recorded in the UTC timezone, so mount the host's /etc/localtime into the container.

DaemonSet

spec:
  volumeMounts:
    - name: localtime
      mountPath: /etc/localtime
      readOnly: true
  volumes:
    - name: localtime
      hostPath:
        path: /etc/localtime
        type: File

Here's the full YAML with all the changes so far:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: elastic-monitoring
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # filebeat.inputs:
    # - type: container
    #   paths:
    #     - /var/log/containers/*.log
    #   processors:
    #     - add_kubernetes_metadata:
    #         host: ${NODE_NAME}
    #         matchers:
    #         - logs_path:
    #             logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:
      - add_locale: ~

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt

    setup.dashboards.enabled: true
    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt

    filebeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false

    filebeat.modules:
      - module: system
        syslog:
          enabled: true
          var.paths: ["/var/log/messages"]
          var.convert_timezone: true
        auth:
          enabled: true
          var.paths: ["/var/log/secure"]
          var.convert_timezone: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elastic-monitoring
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.6.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: monitoring-elasticsearch-es-http
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: elastic
                  name: monitoring-elasticsearch-es-elastic-user
            - name: KIBANA_HOST
              value: monitoring-kibana-kb-http
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: es-certs
              mountPath: /mnt/elastic/tls.crt
              readOnly: true
              subPath: tls.crt
            - name: kb-certs
              mountPath: /mnt/kibana/tls.crt
              readOnly: true
              subPath: tls.crt
            - name: localtime
              mountPath: /etc/localtime
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
        - name: es-certs
          secret:
            secretName: monitoring-elasticsearch-es-http-certs-public
        - name: kb-certs
          secret:
            secretName: monitoring-kibana-kb-http-certs-public
        - name: localtime
          hostPath:
            path: /etc/localtime
            type: File
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: elastic-monitoring
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elastic-monitoring
  labels:
    k8s-app: filebeat
---

Apply the above YAML.

$ kubectl apply -f filebeat.yaml 
configmap/filebeat-config created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
$ kubectl get po -n elastic-monitoring | grep filebeat
filebeat-4gqlc 1/1 Running 3 83s
filebeat-h2zh2 1/1 Running 3 83s
filebeat-lmb4f 1/1 Running 3 83s
filebeat-ngfrx 1/1 Running 3 83s
filebeat-ngnwt 1/1 Running 3 83s
filebeat-pmjdh 1/1 Running 0 83s
filebeat-tk4g6 1/1 Running 0 83s
filebeat-xwxv6 1/1 Running 0 83s
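
To confirm that logs are actually arriving, list the Filebeat indices through the Elasticsearch port-forward from step 3 (reusing the PW variable from the earlier health check):

$ curl -sk -u "elastic:${PW}" 'https://localhost:9200/_cat/indices/filebeat-*?v'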

6. Metricbeat

Metricbeat requires kube-state-metrics as a prerequisite, so that is installed as well. The setup is based on the Kubernetes manifest that Elastic provides officially.

Download the base YAML.

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.6/deploy/kubernetes/metricbeat-kubernetes.yaml

Then make the following changes.

kube-state-metrics

Clone kube-state-metrics from GitHub.

git clone https://github.com/kubernetes/kube-state-metrics.git

Copy the sample manifest.

cp -Rp kube-state-metrics/examples/standard/ elastic-kube-state-metrics

Modify the namespace in the four copied files:

  • change namespace: kube-system to namespace: elastic-monitoring

$ sed -i -e 's/namespace: kube-system/namespace: elastic-monitoring/g' elastic-kube-state-metrics/*.yaml

Rename kube-state-metrics

If left as is and you later install Prometheus (for example via kube-prometheus), the cluster-scoped ClusterRole and ClusterRoleBinding, which are not namespaced, would share the same kube-state-metrics names, so deleting either stack would break the other.

Rename kube-state-metrics to avoid the conflict:

  • change kube-state-metrics to elastic-kube-state-metrics

However, the container image name must remain kube-state-metrics, so revert it in the deployment:

$ sed -i -e 's/kube-state-metrics/elastic-kube-state-metrics/g' elastic-kube-state-metrics/*.yaml
$ sed -i -e 's/quay.io\/coreos\/elastic-kube-state-metrics/quay.io\/coreos\/kube-state-metrics/g' elastic-kube-state-metrics/deployment.yaml

After the changes, apply the whole folder to build kube-state-metrics in the elastic-monitoring namespace.

$ kubectl apply -f elastic-kube-state-metrics/
clusterrolebinding.rbac.authorization.k8s.io/elastic-kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/elastic-kube-state-metrics created
deployment.apps/elastic-kube-state-metrics created
serviceaccount/elastic-kube-state-metrics created
service/elastic-kube-state-metrics created

Confirm that kube-state-metrics was created.

$ kubectl get po -n elastic-monitoring | grep kube-state
elastic-kube-state-metrics-547876f486-7v892 1/1 Running 0 60s
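
A quick way to verify that kube-state-metrics is serving data is to port-forward its Service and fetch a few metric lines (it listens on port 8080, the same port Metricbeat is pointed at later):

$ kubectl port-forward service/elastic-kube-state-metrics 8080 -n elastic-monitoring &
$ curl -s http://localhost:8080/metrics | head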

Change the namespace

The procedure is the same as for Filebeat, so only the command is shown:

$ sed -e 's/namespace: kube-system/namespace: elastic-monitoring/g' metricbeat-kubernetes.yaml > metricbeat.yaml

Specify the Elasticsearch/Kibana hosts, add authentication and secret references

Omitted because it is the same as for Filebeat.

Add tolerations to also collect from the master nodes

Omitted because it is the same as for Filebeat.

Autodiscover

Omitted because it is almost the same as for Filebeat.

Add settings to the DaemonSet's kubernetes.yml

As it stands, pod metrics cannot be collected from the kubelet, so make the following change.

ConfigMap, before the change (DaemonSet side only)

kubernetes.yml: |-
  - module: kubernetes
    metricsets:
      - node
      - system
      - pod
      - container
      - volume
    period: 10s
    host: ${NODE_NAME}
    hosts: ["localhost:10255"]

ConfigMap, after the change (DaemonSet side only)

kubernetes.yml: |-
  - module: kubernetes
    metricsets:
      - node
      - system
      - pod
      - container
      - volume
    period: 10s
    host: ${NODE_NAME}
    hosts: ["https://${HOSTNAME}:10250"]
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    ssl.verification_mode: "none"

The latest manifest on GitHub already includes this fix.

kube-state-metrics name change

Since kube-state-metrics has been renamed, correct the host that Metricbeat points at as well.

Before

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
data:
  kubernetes.yml: |-
    ...
      hosts: ["kube-state-metrics:8080"]

After

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
data:
  kubernetes.yml: |-
    ...
      hosts: ["elastic-kube-state-metrics:8080"]

Here's the full YAML with all the changes so far:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt

    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt

    setup.dashboards:
      enabled: true

    xpack.monitoring.enabled: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
        - drop_event.when.regexp:
            system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${HOSTNAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
      # If using Red Hat OpenShift remove the previous hosts entry and
      # uncomment these settings:
      #hosts: ["https://${HOSTNAME}:10250"]
      #bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      #ssl.certificate_authorities:
      #  - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10249"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: metricbeat
          image: docker.elastic.co/beats/metricbeat:7.6.0
          args: [
            "-c", "/etc/metricbeat.yml",
            "-e",
            "-system.hostfs=/hostfs",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: monitoring-elasticsearch-es-http
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              # value: changeme
              valueFrom:
                secretKeyRef:
                  key: elastic
                  name: monitoring-elasticsearch-es-elastic-user
            - name: KIBANA_HOST
              value: monitoring-kibana-kb-http
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/metricbeat.yml
              readOnly: true
              subPath: metricbeat.yml
            - name: modules
              mountPath: /usr/share/metricbeat/modules.d
              readOnly: true
            - name: dockersock
              mountPath: /var/run/docker.sock
            - name: proc
              mountPath: /hostfs/proc
              readOnly: true
            - name: cgroup
              mountPath: /hostfs/sys/fs/cgroup
              readOnly: true
            - name: es-certs
              mountPath: /mnt/elastic/tls.crt
              readOnly: true
              subPath: tls.crt
            - name: kb-certs
              mountPath: /mnt/kibana/tls.crt
              readOnly: true
              subPath: tls.crt
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
        - name: config
          configMap:
            defaultMode: 0600
            name: metricbeat-daemonset-config
        - name: modules
          configMap:
            defaultMode: 0600
            name: metricbeat-daemonset-modules
        - name: data
          hostPath:
            path: /var/lib/metricbeat-data
            type: DirectoryOrCreate
        - name: es-certs
          secret:
            secretName: monitoring-elasticsearch-es-http-certs-public
        - name: kb-certs
          secret:
            secretName: monitoring-kibana-kb-http-certs-public
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt

    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt

    setup.dashboards:
      enabled: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
data:
  # This module requires `kube-state-metrics` up and running under `kube-system` namespace
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
        - state_cronjob
        - state_resourcequota
        # Uncomment this to get k8s events:
        #- event
      period: 10s
      host: ${NODE_NAME}
      hosts: ["elastic-kube-state-metrics:8080"]
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: metricbeat
          image: docker.elastic.co/beats/metricbeat:7.6.0
          args: [
            "-c", "/etc/metricbeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: monitoring-elasticsearch-es-http
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              # value: changeme
              valueFrom:
                secretKeyRef:
                  key: elastic
                  name: monitoring-elasticsearch-es-elastic-user
            - name: KIBANA_HOST
              value: monitoring-kibana-kb-http
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/metricbeat.yml
              readOnly: true
              subPath: metricbeat.yml
            - name: modules
              mountPath: /usr/share/metricbeat/modules.d
              readOnly: true
            - name: es-certs
              mountPath: /mnt/elastic/tls.crt
              readOnly: true
              subPath: tls.crt
            - name: kb-certs
              mountPath: /mnt/kibana/tls.crt
              readOnly: true
              subPath: tls.crt
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: metricbeat-deployment-config
        - name: modules
          configMap:
            defaultMode: 0600
            name: metricbeat-deployment-modules
        - name: es-certs
          secret:
            secretName: monitoring-elasticsearch-es-http-certs-public
        - name: kb-certs
          secret:
            secretName: monitoring-kibana-kb-http-certs-public
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
  - kind: ServiceAccount
    name: metricbeat
    namespace: elastic-monitoring
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - namespaces
      - events
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - statefulsets
      - deployments
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - ""
    resources:
      - nodes/stats
    verbs:
      - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
---

Apply the above YAML.

$ kubectl apply -f metricbeat.yaml 
configmap/metricbeat-daemonset-config created
configmap/metricbeat-daemonset-modules created
daemonset.apps/metricbeat created
configmap/metricbeat-deployment-config created
configmap/metricbeat-deployment-modules created
deployment.apps/metricbeat created
clusterrolebinding.rbac.authorization.k8s.io/metricbeat created
clusterrole.rbac.authorization.k8s.io/metricbeat created
serviceaccount/metricbeat created

$ kubectl get po -n elastic-monitoring | grep metricbeat
metricbeat-57jpz 1/1 Running 0 30s
metricbeat-67b75b56b5-4r9jn 1/1 Running 0 30s
metricbeat-8kmg7 1/1 Running 0 30s
metricbeat-fwfmn 1/1 Running 0 30s
metricbeat-jckss 1/1 Running 0 30s
metricbeat-r9vkj 1/1 Running 0 30s
metricbeat-rrm69 1/1 Running 0 30s
metricbeat-sx5b8 1/1 Running 0 30s
metricbeat-wq498 1/1 Running 0 30s
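
As with Filebeat, the arrival of data can be confirmed from the index list (metricbeat-* is the default index pattern, and PW is the variable from the earlier health check):

$ curl -sk -u "elastic:${PW}" 'https://localhost:9200/_cat/indices/metricbeat-*?v'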

7. Auditbeat

Auditbeat is included because it can be used for file integrity checking. (The following is quoted from the Auditbeat documentation.)
Auditbeat Docker images can be used on Kubernetes to check files integrity.

$ curl -L -O https://raw.githubusercontent.com/elastic/beats/7.6/deploy/kubernetes/auditbeat-kubernetes.yaml
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4288 100 4288 0 0 9329 0 --:--:-- --:--:-- --:--:-- 9342
$ sed -e 's/namespace: kube-system/namespace: elastic-monitoring/g' auditbeat-kubernetes.yaml > auditbeat.yaml

The namespace, authentication, and toleration changes are the same as those described for Filebeat, so they are omitted here.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: auditbeat-config
  namespace: elastic-monitoring
  labels:
    k8s-app: auditbeat
data:
  auditbeat.yml: |-
    auditbeat.config.modules:
      # Mounted `auditbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt

    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt

    setup.dashboards:
      enabled: true

    xpack.monitoring.enabled: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: auditbeat-daemonset-modules
  namespace: elastic-monitoring
  labels:
    k8s-app: auditbeat
data:
  system.yml: |-
    - module: file_integrity
      paths:
        - /hostfs/bin
        - /hostfs/usr/bin
        - /hostfs/sbin
        - /hostfs/usr/sbin
        - /hostfs/etc
      exclude_files:
        - '(?i)\.sw[nop]$'
        - '~$'
        - '/\.git($|/)'
      scan_at_start: true
      scan_rate_per_sec: 50 MiB
      max_file_size: 100 MiB
      hash_types: [sha1]
      recursive: true
---
# Deploy an Auditbeat instance per node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: auditbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: auditbeat
spec:
  selector:
    matchLabels:
      k8s-app: auditbeat
  template:
    metadata:
      labels:
        k8s-app: auditbeat
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: auditbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: auditbeat
          image: docker.elastic.co/beats/auditbeat:7.6.0
          args: [
            "-c", "/etc/auditbeat.yml"
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: monitoring-elasticsearch-es-http
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: elastic
                  name: monitoring-elasticsearch-es-elastic-user
            - name: KIBANA_HOST
              value: monitoring-kibana-kb-http
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/auditbeat.yml
              readOnly: true
              subPath: auditbeat.yml
            - name: modules
              mountPath: /usr/share/auditbeat/modules.d
              readOnly: true
            - name: bin
              mountPath: /hostfs/bin
              readOnly: true
            - name: sbin
              mountPath: /hostfs/sbin
              readOnly: true
            - name: usrbin
              mountPath: /hostfs/usr/bin
              readOnly: true
            - name: usrsbin
              mountPath: /hostfs/usr/sbin
              readOnly: true
            - name: etc
              mountPath: /hostfs/etc
              readOnly: true
            - name: es-certs
              mountPath: /mnt/elastic/tls.crt
              readOnly: true
              subPath: tls.crt
            - name: kb-certs
              mountPath: /mnt/kibana/tls.crt
              readOnly: true
              subPath: tls.crt
      volumes:
        - name: bin
          hostPath:
            path: /bin
        - name: usrbin
          hostPath:
            path: /usr/bin
        - name: sbin
          hostPath:
            path: /sbin
        - name: usrsbin
          hostPath:
            path: /usr/sbin
        - name: etc
          hostPath:
            path: /etc
        - name: config
          configMap:
            defaultMode: 0600
            name: auditbeat-config
        - name: modules
          configMap:
            defaultMode: 0600
            name: auditbeat-daemonset-modules
        - name: data
          hostPath:
            path: /var/lib/auditbeat-data
            type: DirectoryOrCreate
        - name: es-certs
          secret:
            secretName: monitoring-elasticsearch-es-http-certs-public
        - name: kb-certs
          secret:
            secretName: monitoring-kibana-kb-http-certs-public
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auditbeat
subjects:
  - kind: ServiceAccount
    name: auditbeat
    namespace: elastic-monitoring
roleRef:
  kind: ClusterRole
  name: auditbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: auditbeat
  labels:
    k8s-app: auditbeat
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - namespaces
      - pods
    verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: auditbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: auditbeat
---

Apply the above YAML.

$ kubectl apply -f auditbeat.yaml 
configmap/auditbeat-config unchanged
configmap/auditbeat-daemonset-modules unchanged
daemonset.apps/auditbeat created
clusterrolebinding.rbac.authorization.k8s.io/auditbeat created
clusterrole.rbac.authorization.k8s.io/auditbeat created
serviceaccount/auditbeat created

$ kubectl get po -n elastic-monitoring | grep audit
auditbeat-5s6rh 1/1 Running 0 33s
auditbeat-6xrkc 1/1 Running 0 33s
auditbeat-846pz 1/1 Running 0 33s
auditbeat-8szhp 1/1 Running 0 33s
auditbeat-9kqsf 1/1 Running 0 33s
auditbeat-njf45 1/1 Running 0 33s
auditbeat-v7swg 1/1 Running 0 33s
auditbeat-vx4hv 1/1 Running 0 33s
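
As a final check that file integrity events are flowing, count the events from the file_integrity module through the same Elasticsearch port-forward (a sketch reusing the PW variable from earlier):

$ curl -sk -u "elastic:${PW}" 'https://localhost:9200/auditbeat-*/_count?pretty' -H 'Content-Type: application/json' -d '{"query":{"term":{"event.module":"file_integrity"}}}'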
