Installing Filebeat on Unix/Linux

Filebeat is a client for shipping logs to Logstash; it works in conjunction with the ELK stack.

The setup:

  1. 192.168.13.100 – the Filebeat client.
  2. 192.168.13.195 – the ELK stack.

Four shippers are available (unless something has changed with a newer release):

  • Packetbeat – analyzes network packet data.
  • Filebeat – ships log data in real time.
  • Topbeat – provides insight into infrastructure data.
  • Metricbeat – ships metrics to Elasticsearch.

Installing Filebeat on Unix/Linux

Before you start installing Filebeat, I would recommend reading the ELK installation guide:

Installing ELK (ElasticSearch/Logstash/Kibana) on Unix/Linux

since this utility is pointless without that stack.

Installing Filebeat on CentOS/Fedora/RHEL

-=== METHOD 1: use the repository ===-

Download and install the public signing key:

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Open (create) the repo file:

# vim /etc/yum.repos.d/filebeat.repo

and add the following:

[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Then install the package with:

# yum install filebeat -y
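
To confirm the package landed, you can query the RPM database (a quick sanity check; the exact version will depend on what the repo currently serves):

# rpm -qi filebeat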

-=== METHOD 2: use the RPM package ===-

# cd /usr/local/src && curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.1-x86_64.rpm
# rpm -vi filebeat-5.4.1-x86_64.rpm

-=== METHOD 3: use the prebuilt archive ===-

Download and unpack the generic tar.gz archive — see the sketch in the "Installing Filebeat on other Unix/Linux OSes" section below. Configuration and startup are covered further down.

Installing Filebeat on Debian/Ubuntu

-=== METHOD 1: use the repository ===-

Download and install the public signing key:

# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -

On Debian, you may need to install the apt-transport-https package before installing the utility:

# apt-get install apt-transport-https -y

Add the repository:

# echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-5.x.list

Then install the package with:

# apt-get update && apt-get install filebeat -y

-=== METHOD 2: use the DEB package ===-

# cd /usr/local/src && curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.1-amd64.deb
# dpkg -i filebeat-5.4.1-amd64.deb

-=== METHOD 3: use the prebuilt archive ===-

Download and unpack the generic tar.gz archive — see the sketch in the "Installing Filebeat on other Unix/Linux OSes" section below. Configuration and startup are covered further down.

Installing Filebeat on Mac OS X

-=== METHOD 1: use Homebrew ===-

First, install Homebrew (see: Installing HOMEBREW on Mac OS X), then search for the package:

$ brew search filebeat

To install it:

$ brew install filebeat
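
Assuming the Homebrew formula ships a service definition (Beats formulas usually do), you should also be able to manage Filebeat as a background service:

$ brew services start filebeat
$ brew services stop filebeat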

-=== METHOD 2: use the prebuilt archive ===-

$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.1-darwin-x86_64.tar.gz
$ tar xzvf filebeat-5.4.1-darwin-x86_64.tar.gz

Configuration and startup are covered below.

Installing Filebeat on other Unix/Linux OSes

-=== METHOD 1: use Docker ===-

I have not yet needed to run Filebeat in Docker. As soon as I do, I will update this article.

-=== METHOD 2: use the prebuilt archive ===-
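
A minimal sketch, assuming the generic 64-bit Linux build of the same 5.4.1 release (pick the tarball matching your architecture on the Elastic downloads page):

# cd /usr/local/src && curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.1-linux-x86_64.tar.gz
# tar xzvf filebeat-5.4.1-linux-x86_64.tar.gz && cd filebeat-5.4.1-linux-x86_64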

Configuring Filebeat on Unix/Linux

I described key generation in the ELK article (linked above) and explained how to do it there. But so that you don't have to open it and hunt for the steps, I will repeat them here.

If you don't have DNS, you will have to add the IP address of your ELK server to the certificate's subjectAltName (SAN). That is what lets your servers ship logs over a verified connection, and it requires generating a key and certificate.

You need to create a new SSL certificate. First, edit the file:

# vim /etc/pki/tls/openssl.cnf

In the [ v3_ca ] section, add:

subjectAltName = IP: 192.168.13.195

where 192.168.13.195 is the IP address of your Logstash server.
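
The relevant fragment of openssl.cnf would then look like this (a sketch; the other [ v3_ca ] defaults stay as they are):

[ v3_ca ]
subjectAltName = IP: 192.168.13.195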

……….METHOD 1: use an IP……….

Generate the certificate:

# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt

In my case that is 192.168.13.195!
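
To double-check that the SAN actually made it into the certificate, you can inspect it with standard openssl tooling:

# openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'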

……….METHOD 2: use a domain name……….

# openssl req -config /etc/pki/tls/openssl.cnf -subj '/CN=domain_name/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt

where:

domain_name – your domain name!

NOTE: ALL OF THE STEPS ABOVE MUST BE PERFORMED ON THE LOGSTASH SIDE!
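
For reference, the matching beats input on the Logstash side might look like this (a sketch using this article's port and certificate paths; adjust to your setup):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}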

For Filebeat to use an encrypted channel (TLS/SSL), copy the generated certificate to the client machine where Filebeat will run and ship logs to the Logstash server.

Copy the SSL certificate from the server to the client:

# scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.13.100:/etc/pki/tls/certs/

PS: On using SCP, see: How to copy data via SCP in Linux.

Open the Filebeat config file itself:

# vim /etc/filebeat/filebeat.yml

and bring it to the following form:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
# Only one output may be enabled at a time; since we ship through Logstash
# here, the Elasticsearch output stays commented out.
#output.elasticsearch:
#  hosts: ["192.168.13.195:9200"]
output.logstash:
  hosts: ["192.168.13.195:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

Since I use a single server for the entire ELK stack, I specified 192.168.13.195; change the parameters to match your setup.
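
Before going further, it is worth validating the config file; Filebeat 5.x still accepts the -configtest flag for this:

# filebeat -e -c /etc/filebeat/filebeat.yml -configtest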

Here is a more detailed example:

filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Magento log prospector
    -
      paths:
        - /var/www/html/my_domain.org/var/log/*.log
      encoding: plain
      input_type: log
      exclude_files: ["exception.log"]
      document_type: magento
      scan_frequency: 1s
      multiline:
        # The regexp pattern that has to be matched. This pattern matches the ISO timestamp at the start of a Magento log entry
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\+[0-9]{2}:[0-9]{2} [A-Z]+ \([0-9]\):.*'
        negate: true
        match: after
        max_lines: 500
        timeout: 5s
    # Magento exception log prospector
    -
      paths:
        - /var/www/html/my_domain.org/var/log/exception.log
      encoding: plain
      input_type: log
      document_type: magento_exception
      scan_frequency: 1s
      multiline:
        # The regexp pattern that has to be matched. This pattern matches the ISO timestamp at the start of a Magento exception log entry
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\+[0-9]{2}:[0-9]{2} [A-Z]+ \([0-9]\):.*'
        negate: true
        match: after
        max_lines: 500
        timeout: 5s
    # php-fpm slow log prospector
    -
      paths:
        - /var/log/php-fpm/*slow.log
      encoding: plain
      input_type: log
      document_type: phpfpm_slow
      scan_frequency: 1s
      multiline:
        # The regexp pattern that has to be matched. This pattern matches the [dd-Mon-yyyy hh:mm:ss] timestamp at the start of a php-fpm slow log entry
        pattern: '^\[[0-9]{2}-[A-Za-z]{3}-[0-9]{4} [0-9:]{8}\].*'
        negate: true
        match: after
        max_lines: 500
        timeout: 5s


    # Services Log Prospector
    -
      paths:
        - /var/log/httpd/*-error.log
        - /var/log/nginx/*-error.log
        - /var/log/php-fpm/*error.log
        #- c:\programdata\elasticsearch\logs\*

      # Configure the file encoding for reading files with international characters
      # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
      # Some sample encodings:
      #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
      #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
      encoding: plain

      # Type of the files. Based on this the way the file is read is decided.
      # The different types cannot be mixed in one prospector
      #
      # Possible options are:
      # * log: Reads every line of the log file (default)
      # * stdin: Reads the standard in
      input_type: log

      # Exclude lines. A list of regular expressions to match. It drops the lines that are
      # matching any regular expression from the list. The include_lines is called before
      # exclude_lines. By default, no lines are dropped.
      # exclude_lines: ["^DBG"]

      # Include lines. A list of regular expressions to match. It exports the lines that are
      # matching any regular expression from the list. The include_lines is called before
      # exclude_lines. By default, all the lines are exported.
      # include_lines: ["^ERR", "^WARN"]

      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      exclude_files: [".gz$"]

      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files for filtering
      #fields:
      #  level: debug
      #  review: 1

      # Set to true to store the additional fields as top level fields instead
      # of under the "fields" sub-dictionary. In case of name conflicts with the
      # fields added by Filebeat itself, the custom fields overwrite the default
      # fields.
      #fields_under_root: false

      # Ignore files which were modified more than the defined timespan in the past
      # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
      #ignore_older: 24h

      # Type to be published in the 'type' field. For Elasticsearch output,
      # the type defines the document type these entries should be stored
      # in. Default: log
      document_type: os_log

      # Scan frequency in seconds.
      # How often these files should be checked for changes. In case it is set
      # to 0s, it is done as often as possible. Default: 10s
      scan_frequency: 1s

      # Defines the buffer size every harvester uses when fetching the file
      #harvester_buffer_size: 16384

      # Maximum number of bytes a single log event can have
      # All bytes after max_bytes are discarded and not sent. The default is 10MB.
      # This is especially useful for multiline log messages which can get large.
      #max_bytes: 10485760

      # Multiline can be used for log messages spanning multiple lines. This is common
      # for Java stack traces or C line continuations
      #multiline:

        # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
        #pattern: ^\[

        # Defines if the pattern set under pattern should be negated or not. Default is false.
        #negate: false

        # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
        # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
        # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
        #match: after

        # The maximum number of lines that are combined into one event.
        # In case there are more than max_lines, the additional lines are discarded.
        # Default is 500
        #max_lines: 500

        # After the defined timeout, a multiline event is sent even if no new pattern was found to start a new event
        # Default is 5s.
        #timeout: 5s

      # Setting tail_files to true means filebeat starts reading new files at the end
      # instead of the beginning. If this is used in combination with log rotation
      # this can mean that the first entries of a new file are skipped.
      #tail_files: false

      # Backoff values define how aggressively filebeat crawls new files for updates.
      # The default values can be used in most cases. Backoff defines how long filebeat waits
      # before checking a file again after EOF is reached. Default is 1s, which means the file
      # is checked every second for newly added lines. This leads to near real-time crawling.
      # Every time a new line appears, backoff is reset to the initial value.
      #backoff: 1s

      # Max backoff defines the maximum backoff time. After having backed off multiple times
      # from checking the files, the waiting time will never exceed max_backoff, independent of the
      # backoff factor. With it set to 10s, in the worst case a new line added to a log
      # file after multiple backoffs takes a maximum of 10s to be read.
      #max_backoff: 10s

      # The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
      # the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
      # The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
      #backoff_factor: 2

      # This option closes a file, as soon as the file name changes.
      # This config option is recommended on windows only. Filebeat keeps the files it's reading open. This can cause
      # issues when the file is removed, as the file will not be fully removed until also Filebeat closes
      # the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
      # same name can be created. Turning this feature on the other hand can lead to loss of data
      # on rotate files. It can happen that after file rotation the beginning of the new
      # file is skipped, as the reading starts at the end. We recommend to leave this option on false
      # but lower the ignore_older value to release files faster.
      #force_close_files: false

    # Additional prospector
    #-
      # Configuration to use stdin input
      #input_type: stdin

  # General filebeat configuration options
  #
  # Event count spool threshold - forces network flush if exceeded
  spool_size: 1

  # Defines how often the spooler is flushed. After idle_timeout the spooler is
  # flushed even if spool_size has not been reached.
  #idle_timeout: 5s

  # Name of the registry file. Per default it is put in the current working
  # directory. In case the working directory is changed after when running
  # filebeat again, indexing starts from the beginning again.
  registry_file: /var/lib/filebeat/registry

  # Full Path to directory with additional prospector configuration files. Each file must end with .yml
  # These config files must have the full filebeat config part inside, but only
  # the prospector part is processed. All global options like spool_size are ignored.
  # The config_dir MUST point to a different directory than the one the main filebeat config file is in.
  #config_dir:

###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["logstash_server:5044"]

    # Number of workers per Logstash host.
    #worker: 1

    # Set gzip compression level.
    #compression_level: 3

    # Optionally load balance the events between the Logstash hosts
    #loadbalance: true

    # Optional index name. The default index name depends on each beat.
    # For Packetbeat, the default is set to packetbeat, for Topbeat
    # to topbeat and for Filebeat to filebeat.
    #index: filebeat

    # Optional TLS. By default it is off.
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/tls/certs/logstash-forwarder.crt"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []


  ### File as output
  #file:
    # Path to the directory where to save the generated files. The option is mandatory.
    #path: "/tmp/filebeat"

    # Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
    #filename: filebeat

    # Maximum size in kilobytes of each file. When this size is reached, the files are
    # rotated. The default value is 10 MB.
    #rotate_every_kb: 10000

    # Maximum number of files under path. When this number of files is reached, the
    # oldest file is deleted and the rest are shifted from last to first. The default
    # is 7 files.
    #number_of_files: 7


  ### Console output
  # console:
    # Pretty print json event
    #pretty: false


############################# Shipper #########################################

shipper:
  # The name of the shipper that publishes the network data. It can be used to group
  # all the transactions sent by a single shipper in the web interface.
  # If this option is not defined, the hostname is used.
  #name:

  # The tags of the shipper are included in their own field with each
  # transaction published. Tags make it easy to group servers by different
  # logical properties.
  #tags: ["service-X", "web-tier"]

  # Uncomment the following if you want to ignore transactions created
  # by the server on which the shipper is installed. This option is useful
  # to remove duplicates if shippers are installed on multiple servers.
  #ignore_outgoing: true

  # How often (in seconds) shippers are publishing their IPs to the topology map.
  # The default is 10 seconds.
  #refresh_topology_freq: 10

  # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
  # All the IPs will be deleted afterwards. Note that the value must be higher than
  # refresh_topology_freq. The default is 15 seconds.
  #topology_expire: 15

  # Internal queue size for single events in processing pipeline
  #queue_size: 1000

  # Configure local GeoIP database support.
  # If no paths are configured, geoip is disabled.
  #geoip:
    #paths:
    #  - "/usr/share/GeoIP/GeoLiteCity.dat"
    #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"


############################# Logging #########################################

# There are three options for the log output: syslog, file, stderr.
# On Windows systems, the log files are per default sent to the file output,
# on all other systems per default to syslog.
logging:

  # Send all logging output to syslog. On Windows default is false, otherwise
  # default is true.
  #to_syslog: true

  # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
  # limit is reached.
  #to_files: false

  # To enable logging to files, to_files option has to be set to true
  files:
    # The directory where the log files will be written to.
    #path: /var/log/mybeat

    # The name of the files where the logs are written to.
    #name: mybeat

    # Configure log file size limit. If limit is reached, log file will be
    # automatically rotated
    rotateeverybytes: 10485760 # = 10MB

    # Number of rotated log files to keep. Oldest files will be deleted first.
    #keepfiles: 7

  # Enable debug output for selected components. To enable all selectors use ["*"]
  # Other available selectors are beat, publish, service
  # Multiple selectors can be chained.
  #selectors: [ ]

  # Sets log level. The default log level is error.
  # Available log levels are: critical, error, warning, info, debug
  #level: error

 

Something like that.

Starting Filebeat on Unix/Linux

Before starting (if you used METHOD 1 or 2), check which init system you are running, SysV init or systemd (if PID 1 is systemd, you are on systemd; init means SysV):

# ps -p 1

Starting filebeat with SysV init

To start/stop the filebeat service, use:

$ sudo -i service filebeat start
$ sudo -i service filebeat stop

If filebeat fails to start for any reason, it prints the cause to STDOUT. Log files can be found in the /var/log/filebeat/ directory.

Don't forget to open the port in iptables!
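
For example, on the ELK server you could allow the Beats port like so (a sketch assuming Logstash listens on TCP 5044; persist the rule with your distro's usual mechanism):

# iptables -A INPUT -p tcp --dport 5044 -j ACCEPT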

Use the update-rc.d command to add the service to startup on Debian/Ubuntu:

# update-rc.d filebeat defaults 95 10

Use the chkconfig command to add the service to startup on RHEL/CentOS:

# chkconfig --add filebeat

To check the result:

# chkconfig --list filebeat

Starting filebeat with systemd

To add filebeat to system startup, use:

# /bin/systemctl daemon-reload
# systemctl enable filebeat.service

To start/stop the filebeat service, use:

# systemctl start filebeat.service
# systemctl stop filebeat.service

To follow the log output, use:

# journalctl -f

To show logs specifically for filebeat:

# journalctl --unit filebeat

To show filebeat service log entries starting from a given time:

# journalctl --unit filebeat --since "2017-12-10 15:12:33"

Then we move on to testing.

Starting filebeat on Mac OS X

# chown root filebeat.yml 
# ./filebeat -e -c filebeat.yml -d "publish"

That's it!

Testing filebeat in Unix/Linux

If the ELK stack installation went well, Filebeat should be collecting the clients' logs and shipping them to the ELK server. Logstash will load the Filebeat data into Elasticsearch under an index of the form filebeat-YYYY.MM.DD.

Go to the ELK server and verify that Elasticsearch is receiving the data by querying the Filebeat index:

# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
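
If data is arriving, the response will contain hits with your log lines, roughly of this shape (a sample; your values will differ):

{
  "hits" : {
    "total" : 42,
    "hits" : [ { "_index" : "filebeat-2017.12.10", "_type" : "log", ... } ]
  }
}

If total is 0, no data has arrived yet: check that the filebeat service is running on the client and that port 5044 is reachable.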

Loading the Kibana dashboards

Elastic provides several sample Kibana dashboards and Beats index templates. Download the dashboards to use them with the Filebeat index:

# cd /usr/local/src && curl -L -O https://artifacts.elastic.co/downloads/beats/beats-dashboards/beats-dashboards-5.4.1.zip

Unpack the archive:

$ unzip beats-dashboards-*.zip

Load the sample dashboards, visualizations, and Beats index templates into Elasticsearch:

$ bash beats-dashboards-*/load.sh

Well, that's all for now. If any additions come up, I will definitely write them in.

This concludes the article "Installing Filebeat on Unix/Linux".
