
How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04


The author selected the Internet Archive to receive a donation as part of the Write for DOnations program.

Introduction

The Elastic Stack — formerly known as the ELK Stack — is a collection of open-source software produced by Elastic that allows you to search, analyze, and visualize logs generated from any source in any format, a practice known as centralized logging. Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it lets you identify issues that span multiple servers by correlating their logs during a specific time frame.

The Elastic Stack has four main components:

• Elasticsearch: a distributed RESTful search engine which stores all of the collected data.

• Logstash: the data processing component of the Elastic Stack which sends incoming data to Elasticsearch.

• Kibana: a web interface for searching and visualizing logs.

• Beats: lightweight, single-purpose data shippers that can send data from hundreds or thousands of machines to either Logstash or Elasticsearch.

In this tutorial you will install the Elastic Stack on an Ubuntu 18.04 server. You will learn how to install all of the components of the Elastic Stack — including Filebeat, a Beat used for forwarding and centralizing logs and files — and configure them to gather and visualize system logs. Additionally, because Kibana is normally only available on the localhost, we will use Nginx to proxy it so it will be accessible over a web browser. We will install all of these components on a single server, which we will refer to as our Elastic Stack server.

Note: When installing the Elastic Stack, you must use the same version across the entire stack. In this tutorial we will install the latest versions of the entire stack, which are, at the time of this writing, Elasticsearch 6.4.3, Kibana 6.4.3, Logstash 6.4.3, and Filebeat 6.4.3.

Prerequisites

To complete this tutorial, you will need the following:

• One Ubuntu 18.04 server set up by following our Initial Server Setup Guide for Ubuntu 18.04, including a non-root user with sudo privileges and a firewall configured with ufw. The amount of CPU, RAM, and storage that your Elastic Stack server will require depends on the volume of logs you intend to gather. For this tutorial, we will be using a VPS with the following specifications for our Elastic Stack server:

  • OS: Ubuntu 18.04
  • RAM: 4GB
  • CPU: 2
• Java 8 — which is required by Elasticsearch and Logstash — installed on your server. Note that Java 9 is not supported. To install this, follow the “Installing the Oracle JDK” section of our guide on how to install Java 8 on Ubuntu 18.04.

• Nginx installed on your server, which we will configure later in this guide as a reverse proxy for Kibana. Follow our guide on How To Install Nginx on Ubuntu 18.04 to set this up.

Additionally, because the Elastic Stack is used to access valuable information about your server that you would not want unauthorized users to access, it is important to keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged.

However, because you will ultimately make changes to your Nginx server block over the course of this guide, it likely makes more sense to complete the How To Secure Nginx with Let's Encrypt on Ubuntu 18.04 guide at the end of this tutorial's second step. With that in mind, if you plan to configure Let's Encrypt on your server, you will need the following in place before doing so:

• A fully qualified domain name (FQDN). This tutorial will use example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

• Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.

  • An A record with example.com pointing to your server's public IP address.
  • An A record with www.example.com pointing to your server's public IP address.

Step 1 — Installing and Configuring Elasticsearch

The Elastic Stack components are not available in Ubuntu's default package repositories. They can, however, be installed with APT after adding Elastic's package source list.

All of the Elastic Stack's packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Packages which have been authenticated using the key will be considered trusted by your package manager. In this step, you will import the Elasticsearch public GPG key and add the Elastic package source list in order to install Elasticsearch.

To begin, run the following command to import the Elasticsearch public GPG key into APT:

      • wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, add the Elastic source list to the sources.list.d directory, where APT will look for new sources:

      • echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

Next, update your package lists so APT will read the new Elastic source:
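
• sudo apt update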

Then install Elasticsearch with this command:

      • sudo apt install elasticsearch

Once Elasticsearch is finished installing, use your preferred text editor to edit Elasticsearch's main configuration file, elasticsearch.yml. Here, we'll use nano:

      • sudo nano /etc/elasticsearch/elasticsearch.yml

Note: Elasticsearch's configuration file is in YAML format, which means that indentation is very important! Be sure that you do not add any extra spaces as you edit this file.

Elasticsearch listens for traffic from everywhere on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through the REST API. Find the line that specifies network.host, uncomment it, and replace its value with localhost so it looks like this:

      /etc/elasticsearch/elasticsearch.yml

      . . .
      network.host: localhost
      . . .
      

Save and close elasticsearch.yml by pressing CTRL+X, followed by Y and then ENTER if you're using nano. Then, start the Elasticsearch service with systemctl:

      • sudo systemctl start elasticsearch

Next, run the following command to enable Elasticsearch to start up every time your server boots:

      • sudo systemctl enable elasticsearch

You can test whether your Elasticsearch service is running by sending an HTTP request:

      • curl -X GET "localhost:9200"

You will see a response showing some basic information about your local node, similar to this:

      Output

      { "name" : "ZlJ0k2h", "cluster_name" : "elasticsearch", "cluster_uuid" : "beJf9oPSTbecP7_i8pRVCw", "version" : { "number" : "6.4.2", "build_flavor" : "default", "build_type" : "deb", "build_hash" : "04711c2", "build_date" : "2018-09-26T13:34:09.098244Z", "build_snapshot" : false, "lucene_version" : "7.4.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "tagline" : "You Know, for Search" }

Now that Elasticsearch is up and running, let's install Kibana, the next component of the Elastic Stack.

Step 2 — Installing and Configuring the Kibana Dashboard

According to the official documentation, you should install Kibana only after installing Elasticsearch. Installing in this order ensures that the components each product depends on are correctly in place.

Because you already added the Elastic package source in the previous step, you can just install the remaining components of the Elastic Stack using apt:
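
• sudo apt install kibana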

Then enable and start the Kibana service:

      • sudo systemctl enable kibana
      • sudo systemctl start kibana

Because Kibana is configured to only listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose, which should already be installed on your server.

First, use the openssl command to create an administrative Kibana user which you'll use to access the Kibana web interface. As an example, we will name this account kibanaadmin, but to ensure greater security we recommend that you choose a non-standard name for your user that would be difficult to guess.

The following command will create the administrative Kibana user and password, and store them in the htpasswd.users file. You will configure Nginx to require this username and password and read this file momentarily:

      • echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Enter and confirm a password at the prompt. Remember or take note of this login, as you will need it to access the Kibana web interface.

Next, we will create an Nginx server block file. As an example, we will refer to this file as example.com, although you may find it helpful to give yours a more descriptive name. For instance, if you have a FQDN and DNS records set up for this server, you could name this file after your FQDN:

      • sudo nano /etc/nginx/sites-available/example.com

Add the following code block into the file, being sure to update example.com to match your server's FQDN or public IP address. This code configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Additionally, it configures Nginx to read the htpasswd.users file and require basic authentication.

Note that if you followed the prerequisite Nginx tutorial through to the end, you may have already created this file and populated it with some content. In that case, delete all the existing content in the file before adding the following:

      /etc/nginx/sites-available/example.com

      
      server {
          listen 80;
      
          server_name example.com;
      
          auth_basic "Restricted Access";
          auth_basic_user_file /etc/nginx/htpasswd.users;
      
          location / {
              proxy_pass http://localhost:5601;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
          }
      }
      

When you're finished, save and close the file.

Next, enable the new configuration by creating a symbolic link to the sites-enabled directory. If you already created a server block file with the same name in the Nginx prerequisite, you do not need to run this command:

      • sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com

Then check the configuration for syntax errors:
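
• sudo nginx -t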

If any errors are reported in your output, go back and double-check that the content you placed in your configuration file was added correctly. Once you see syntax is ok in the output, go ahead and restart the Nginx service:

      • sudo systemctl restart nginx

If you followed the initial server setup guide, you should have a UFW firewall enabled. To allow connections to Nginx, we can adjust the rules by typing:

      • sudo ufw allow 'Nginx Full'

Note: If you followed the prerequisite Nginx tutorial, you may have created a UFW rule allowing the Nginx HTTP profile through the firewall. Because the Nginx Full profile allows both HTTP and HTTPS traffic through the firewall, you can safely delete the rule you created in the prerequisite tutorial. Do so with the following command:

      • sudo ufw delete allow 'Nginx HTTP'

Kibana is now accessible via your FQDN or the public IP address of your Elastic Stack server. You can check the Kibana server's status page by navigating to the following address and entering your login credentials when prompted:

• http://your_server_ip/status

This status page displays information about the server's resource usage and lists the installed plugins.

Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. You can follow this tutorial now to obtain a free SSL certificate for Nginx on Ubuntu 18.04. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.

Now that the Kibana dashboard is configured, let's install the next component: Logstash.

Step 3 — Installing and Configuring Logstash

Although it's possible for Beats to send data directly to the Elasticsearch database, we recommend using Logstash to process the data first. This will allow you to collect data from different sources, transform it into a common format, and export it to another database.

Install Logstash with this command:

      • sudo apt install logstash

After installing Logstash, you can move on to configuring it. Logstash's configuration files are written in the JSON format and reside in the /etc/logstash/conf.d directory. As you configure it, it's helpful to think of Logstash as a pipeline which takes in data at one end, processes it in one way or another, and sends it out to its destination (in this case, the destination is Elasticsearch). A Logstash pipeline has two required elements, input and output, and one optional element, filter. The input plugins consume data from a source, the filter plugins process the data, and the output plugins write the data to a destination.

Create a configuration file called 02-beats-input.conf where you will set up your Filebeat input:

      • sudo nano /etc/logstash/conf.d/02-beats-input.conf

Insert the following input configuration. This specifies a beats input that will listen on TCP port 5044.

      /etc/logstash/conf.d/02-beats-input.conf

      
      input {
        beats {
          port => 5044
        }
      }
      

Save and close the file. Next, create a configuration file called 10-syslog-filter.conf, where we will add a filter for system logs, also known as syslogs:

      • sudo nano /etc/logstash/conf.d/10-syslog-filter.conf

Insert the following syslog filter configuration. This example system logs configuration was taken from official Elastic documentation. This filter is used to parse incoming system logs to make them structured and usable by the predefined Kibana dashboards:

      /etc/logstash/conf.d/10-syslog-filter.conf

      
      filter {
        if [fileset][module] == "system" {
          if [fileset][name] == "auth" {
            grok {
              match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
              pattern_definitions => {
                "GREEDYMULTILINE"=> "(.|\n)*"
              }
              remove_field => "message"
            }
            date {
              match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
            }
            geoip {
              source => "[system][auth][ssh][ip]"
              target => "[system][auth][ssh][geoip]"
            }
          }
          else if [fileset][name] == "syslog" {
            grok {
              match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
              pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
              remove_field => "message"
            }
            date {
              match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
            }
          }
        }
      }
      

Save and close the file when finished.

Lastly, create a configuration file called 30-elasticsearch-output.conf:

      • sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

Insert the following output configuration. Essentially, this output configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the Beat used. The Beat used in this tutorial is Filebeat:

      /etc/logstash/conf.d/30-elasticsearch-output.conf

      
      output {
        elasticsearch {
          hosts => ["localhost:9200"]
          manage_template => false
          index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        }
      }
      

Save and close the file.

If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they're sorted between the input and the output configuration, meaning that the file names should begin with a two-digit number between 02 and 30.

Test your Logstash configuration with this command:

      • sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

If there are no syntax errors, your output will display Configuration OK after a few seconds. If you don't see this in your output, check for any errors that appear in your output and update your configuration to correct them.

If your configuration test is successful, start and enable Logstash to put the configuration changes into effect:

      • sudo systemctl start logstash
      • sudo systemctl enable logstash

Now that Logstash is running correctly and is fully configured, let's install Filebeat.

Step 4 — Installing and Configuring Filebeat

The Elastic Stack uses several lightweight data shippers called Beats to collect data from various sources and transport them to Logstash or Elasticsearch. Here are the Beats that are currently available from Elastic:

• Filebeat: collects and ships log files.

• Metricbeat: collects metrics from your systems and services.

• Packetbeat: collects and analyzes network data.

• Winlogbeat: collects Windows event logs.

• Auditbeat: collects Linux audit framework data and monitors file integrity.

• Heartbeat: monitors services for their availability with active probing.

In this tutorial, we will use Filebeat to forward local logs to our Elastic Stack.

Install Filebeat using apt:

      • sudo apt install filebeat

Next, configure Filebeat to connect to Logstash. Here, we will modify the example configuration file that comes with Filebeat.

Open the Filebeat configuration file:

      • sudo nano /etc/filebeat/filebeat.yml

Note: As with Elasticsearch, Filebeat's configuration file is in YAML format. This means that proper indentation is crucial, so be sure to use the same number of spaces that are indicated in these instructions.

Filebeat supports numerous outputs, but you'll usually only send events directly to Elasticsearch or to Logstash for additional processing. In this tutorial, we'll use Logstash to perform additional processing on the data collected by Filebeat. Filebeat will not need to send any data directly to Elasticsearch, so let's disable that output. To do so, find the output.elasticsearch section and comment out the following lines by preceding them with a #:

      /etc/filebeat/filebeat.yml

      
      ...
      #output.elasticsearch:
        # Array of hosts to connect to.
        #hosts: ["localhost:9200"]
      ...
      

Then, configure the output.logstash section. Uncomment the lines output.logstash: and hosts: ["localhost:5044"] by removing the #. This will configure Filebeat to connect to Logstash on your Elastic Stack server at port 5044, the port for which we specified a Logstash input earlier:

      /etc/filebeat/filebeat.yml

      
      output.logstash:
        # The Logstash hosts
        hosts: ["localhost:5044"]
      

Save and close the file.

The functionality of Filebeat can be extended with Filebeat modules. In this tutorial we will use the system module, which collects and parses logs created by the system logging service of common Linux distributions.

Let's enable it:

      • sudo filebeat modules enable system

You can see a list of enabled and disabled modules by running:

      • sudo filebeat modules list

You will see a list similar to the following:

      Output

      Enabled: system Disabled: apache2 auditd elasticsearch icinga iis kafka kibana logstash mongodb mysql nginx osquery postgresql redis traefik

By default, Filebeat is configured to use the default paths for the syslog and authorization logs. In the case of this tutorial, you do not need to change anything in the configuration. You can see the parameters of the module in the /etc/filebeat/modules.d/system.yml configuration file.
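
For reference, the enabled system module file typically looks something like the following minimal sketch (the exact comments and defaults may vary slightly between Filebeat versions; leaving var.paths commented out keeps the distribution's default log locations):

/etc/filebeat/modules.d/system.yml


- module: system
  # Syslog
  syslog:
    enabled: true
    # If var.paths is left commented out, Filebeat uses the default
    # syslog path for this operating system.
    #var.paths:

  # Authorization logs
  auth:
    enabled: true
    #var.paths:
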

Next, load the Elasticsearch index template. An Elasticsearch index is a collection of documents that have similar characteristics. Indexes are identified with a name, which is used to refer to the index when performing various operations within it. The index template will be applied automatically when a new index is created.

To load the template, use the following command:

      • sudo filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

      Output

      Loaded index template

Filebeat comes packaged with sample Kibana dashboards that allow you to visualize Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana.

As the dashboards load, Filebeat connects to Elasticsearch to check version information. To load dashboards when Logstash is enabled, you need to disable the Logstash output and enable the Elasticsearch output:

      • sudo filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

You will see output that looks like this:

      Output

      2018-09-10T08:39:15.844Z INFO instance/beat.go:273 Setup Beat: filebeat; Version: 6.4.2 2018-09-10T08:39:15.845Z INFO elasticsearch/client.go:163 Elasticsearch url: http://localhost:9200 2018-09-10T08:39:15.845Z INFO pipeline/module.go:98 Beat name: elk 2018-09-10T08:39:15.845Z INFO elasticsearch/client.go:163 Elasticsearch url: http://localhost:9200 2018-09-10T08:39:15.849Z INFO elasticsearch/client.go:708 Connected to Elasticsearch version 6.4.2 2018-09-10T08:39:15.856Z INFO template/load.go:129 Template already exists and will not be overwritten. Loaded index template Loading dashboards (Kibana must be running and reachable) 2018-09-10T08:39:15.857Z INFO elasticsearch/client.go:163 Elasticsearch url: http://localhost:9200 2018-09-10T08:39:15.865Z INFO elasticsearch/client.go:708 Connected to Elasticsearch version 6.4.2 2018-09-10T08:39:15.865Z INFO kibana/client.go:113 Kibana url: http://localhost:5601 2018-09-10T08:39:45.357Z INFO instance/beat.go:659 Kibana dashboards successfully loaded. Loaded dashboards 2018-09-10T08:39:45.358Z INFO elasticsearch/client.go:163 Elasticsearch url: http://localhost:9200 2018-09-10T08:39:45.361Z INFO elasticsearch/client.go:708 Connected to Elasticsearch version 6.4.2 2018-09-10T08:39:45.361Z INFO kibana/client.go:113 Kibana url: http://localhost:5601 2018-09-10T08:39:45.455Z WARN fileset/modules.go:388 X-Pack Machine Learning is not enabled Loaded machine learning job configurations

Now you can start and enable Filebeat:

      • sudo systemctl start filebeat
      • sudo systemctl enable filebeat

If you've set up your Elastic Stack correctly, Filebeat will begin shipping your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.

To verify that Elasticsearch is indeed receiving this data, query the Filebeat index with this command:

      • curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

You will see output that looks similar to this:

      Output

      ... { "took" : 32, "timed_out" : false, "_shards" : { "total" : 3, "successful" : 3, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : 1641, "max_score" : 1.0, "hits" : [ { "_index" : "filebeat-6.4.2-2018.10.10", "_type" : "doc", "_id" : "H_bZ62UBB4D0uxFRu_h3", "_score" : 1.0, "_source" : { "@version" : "1", "message" : "Oct 10 06:22:36 elk systemd[1]: Reached target Local File Systems (Pre).", "@timestamp" : "2018-10-10T08:43:56.969Z", "host" : { "name" : "elk" }, "source" : "/var/log/syslog", "input" : { "type" : "log" }, "tags" : [ "beats_input_codec_plain_applied" ], "offset" : 296, "prospector" : { "type" : "log" }, "beat" : { "version" : "6.4.2", "hostname" : "elk", "name" : "elk" } } }, ...

If your output shows 0 total hits, Elasticsearch is not loading any logs under the index you searched for, and you will need to review your setup for errors. If you received the expected output, continue to the next step, in which we will see how to navigate through some of Kibana's dashboards.
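
If you do need to troubleshoot, a reasonable place to start (assuming the service names used earlier in this tutorial) is confirming that each service is running, scanning the Logstash journal for errors, and listing the indices that Elasticsearch currently holds:

• sudo systemctl status elasticsearch logstash filebeat
• sudo journalctl -u logstash --since "10 minutes ago"
• curl -X GET 'localhost:9200/_cat/indices?v'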

Step 5 — Exploring the Kibana Dashboards

Let's take a look at Kibana, the web interface that we installed earlier.

In a web browser, go to the FQDN or public IP address of your Elastic Stack server. After entering the login credentials you defined in Step 2, you will see the Kibana homepage:

Click the Discover link in the left-hand navigation bar. On the Discover page, select the predefined filebeat-* index pattern to see the Filebeat data. By default, this will show you all of the log data over the last 15 minutes. You will see a histogram with log events, and some log messages like the ones below:

Here, you can search and browse through your logs and also customize your dashboard. At this point, though, there won't be much in there because you are only gathering syslogs from your Elastic Stack server.

Use the left-hand panel to navigate to the Dashboard page and search for the Filebeat System dashboards. Once there, you can browse the sample dashboards that come with Filebeat's system module.

For example, you can view detailed stats based on your syslog messages:

You can also view which users have used the sudo command and when:

Kibana has many other features, such as graphing and filtering, so feel free to explore.

Conclusion

In this tutorial, you learned how to install and configure the Elastic Stack to collect and analyze system logs. Remember that you can send just about any type of log or indexed data to Logstash using Beats, but the data becomes even more useful if it is parsed and structured with a Logstash filter, as this transforms the data into a consistent format that can be read easily by Elasticsearch.

By Justin Ellingwood and Vadym Kalsin




      How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on CentOS 7


      The author selected Software in the Public Interest to receive a donation as part of the Write for DOnations program.

      Introduction

      The Elastic Stack — formerly known as the ELK Stack — is a collection of open-source software produced by Elastic which allows you to search, analyze, and visualize logs generated from any source in any format, a practice known as centralized logging. Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It’s also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

      The Elastic Stack has four main components:

      • Elasticsearch: a distributed RESTful search engine which stores all of the collected data.
      • Logstash: the data processing component of the Elastic Stack which sends incoming data to Elasticsearch.
      • Kibana: a web interface for searching and visualizing logs.
      • Beats: lightweight, single-purpose data shippers that can send data from hundreds or thousands of machines to either Logstash or Elasticsearch.

      In this tutorial, you will install the Elastic Stack on a CentOS 7 server. You will learn how to install all of the components of the Elastic Stack — including Filebeat, a Beat used for forwarding and centralizing logs and files — and configure them to gather and visualize system logs. Additionally, because Kibana is normally only available on the localhost, you will use Nginx to proxy it so it will be accessible over a web browser. At the end of this tutorial, you will have all of these components installed on a single server, referred to as the Elastic Stack server.

      Note: When installing the Elastic Stack, you should use the same version across the entire stack. This tutorial uses the latest versions of each component, which are, at the time of this writing, Elasticsearch 6.5.2, Kibana 6.5.2, Logstash 6.5.2, and Filebeat 6.5.2.

      Prerequisites

      To complete this tutorial, you will need the following:

      • One CentOS 7 server set up by following Initial Server Setup with CentOS 7, including a non-root user with sudo privileges and a firewall. The amount of CPU, RAM, and storage that your Elastic Stack server will require depends on the volume of logs that you intend to gather. For this tutorial, you will be using a VPS with the following specifications for our Elastic Stack server:

        • OS: CentOS 7.5
        • RAM: 4GB
        • CPU: 2
      • Java 8 — which is required by Elasticsearch and Logstash — installed on your server. Note that Java 9 is not supported. To install this, follow the “Install OpenJDK 8 JRE” section of our guide on how to install Java on CentOS.

      • Nginx installed on your server, which you will configure later in this guide as a reverse proxy for Kibana. Follow our guide on How To Install Nginx on CentOS 7 to set this up.

      Additionally, because the Elastic Stack is used to access valuable information about your server that you would not want unauthorized users to access, it’s important that you keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged. Because you will ultimately make changes to your Nginx server block over the course of this guide, we suggest putting this security in place by completing the Let’s Encrypt on CentOS 7 guide immediately after this tutorial’s second step.

      If you do plan to configure Let’s Encrypt on your server, you will need the following in place before doing so:

      • A fully qualified domain name (FQDN). This tutorial will use example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.

        • An A record with example.com pointing to your server’s public IP address.
        • An A record with www.example.com pointing to your server’s public IP address.

      Step 1 — Installing and Configuring Elasticsearch

      The Elastic Stack components are not available through the package manager by default, but you can install them with yum by adding Elastic’s package repository.

      All of the Elastic Stack’s packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Packages which have been authenticated using the key will be considered trusted by your package manager. In this step, you will import the Elasticsearch public GPG key and add the Elastic repository in order to install Elasticsearch.

      Run the following command to download and install the Elasticsearch public signing key:

      • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

      Next, add the Elastic repository. Use your preferred text editor to create the file elasticsearch.repo in the /etc/yum.repos.d/ directory. Here, we’ll use the vi text editor:

      • sudo vi /etc/yum.repos.d/elasticsearch.repo

      To provide yum with the information it needs to download and install the components of the Elastic Stack, enter insert mode by pressing i and add the following lines to the file.

      /etc/yum.repos.d/elasticsearch.repo

      [elasticsearch-6.x]
      name=Elasticsearch repository for 6.x packages
      baseurl=https://artifacts.elastic.co/packages/6.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=1
      autorefresh=1
      type=rpm-md
      

      Here you have included the human-readable name of the repo, the baseurl of the repo’s data directory, and the gpgkey required to verify Elastic packages.

      When you’re finished, press ESC to leave insert mode, then :wq and ENTER to save and exit the file. To learn more about the text editor vi and its successor vim, check out our Installing and Using the Vim Text Editor on a Cloud Server tutorial.

      With the repo added, you can now install the Elastic Stack. According to the official documentation, you should install Elasticsearch before the other components. Installing in this order ensures that the components each product depends on are correctly in place.

      Install Elasticsearch with the following command:

      • sudo yum install elasticsearch

      Once Elasticsearch is finished installing, open its main configuration file, elasticsearch.yml, in your editor:

• sudo vi /etc/elasticsearch/elasticsearch.yml
      

      Note: Elasticsearch’s configuration file is in YAML format, which means that indentation is very important! Be sure that you do not add any extra spaces as you edit this file.

      Elasticsearch listens for traffic from everywhere on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through the REST API. Find the line that specifies network.host, uncomment it, and replace its value with localhost so it looks like this:

      /etc/elasticsearch/elasticsearch.yml

      . . .
      network.host: localhost
      . . .
      

      Save and close elasticsearch.yml. Then, start the Elasticsearch service with systemctl:

      • sudo systemctl start elasticsearch

      Next, run the following command to enable Elasticsearch to start up every time your server boots:

      • sudo systemctl enable elasticsearch

      You can test whether your Elasticsearch service is running by sending an HTTP request:

      • curl -X GET "localhost:9200"

      You will see a response showing some basic information about your local node, similar to this:

      Output

      { "name" : "8oSCBFJ", "cluster_name" : "elasticsearch", "cluster_uuid" : "1Nf9ZymBQaOWKpMRBfisog", "version" : { "number" : "6.5.2", "build_flavor" : "default", "build_type" : "rpm", "build_hash" : "9434bed", "build_date" : "2018-11-29T23:58:20.891072Z", "build_snapshot" : false, "lucene_version" : "7.5.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "tagline" : "You Know, for Search" }

      Now that Elasticsearch is up and running, let’s install Kibana, the next component of the Elastic Stack.

      Step 2 — Installing and Configuring the Kibana Dashboard

      According to the installation order in the official documentation, you should install Kibana as the next component after Elasticsearch. After setting Kibana up, we will be able to use its interface to search through and visualize the data that Elasticsearch stores.

      Because you already added the Elastic repository in the previous step, you can just install the remaining components of the Elastic Stack using yum:
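
• sudo yum install kibana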

      Then enable and start the Kibana service:

      • sudo systemctl enable kibana
      • sudo systemctl start kibana

      Because Kibana is configured to only listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose, which should already be installed on your server.

      First, use the openssl command to create an administrative Kibana user which you'll use to access the Kibana web interface. As an example, we will name this account kibanaadmin, but to ensure greater security we recommend that you choose a non-standard name for your user that would be difficult to guess.

      The following command will create the administrative Kibana user and password, and store them in the htpasswd.users file. You will configure Nginx to require this username and password and read this file momentarily:

      • echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

      Enter and confirm a password at the prompt. Remember or take note of this login, as you will need it to access the Kibana web interface.

      Next, we will create an Nginx server block file. As an example, we will refer to this file as example.com.conf, although you may find it helpful to give yours a more descriptive name. For instance, if you have a FQDN and DNS records set up for this server, you could name this file after your FQDN:

      • sudo vi /etc/nginx/conf.d/example.com.conf

      Add the following code block into the file, being sure to update example.com and www.example.com to match your server's FQDN or public IP address. This code configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Additionally, it configures Nginx to read the htpasswd.users file and require basic authentication.

      Note that if you followed the prerequisite Nginx tutorial through to the end, you may have already created this file and populated it with some content. In that case, delete all the existing content in the file before adding the following:

/etc/nginx/conf.d/example.com.conf

      server {
          listen 80;
      
          server_name example.com www.example.com;
      
          auth_basic "Restricted Access";
          auth_basic_user_file /etc/nginx/htpasswd.users;
      
          location / {
              proxy_pass http://localhost:5601;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
          }
      }
      

      When you're finished, save and close the file.

      Then check the configuration for syntax errors:
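
• sudo nginx -t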

      If any errors are reported in your output, go back and double check that the content you placed in your configuration file was added correctly. Once you see syntax is ok in the output, go ahead and restart the Nginx service:

      • sudo systemctl restart nginx

      By default, SELinux security policy is set to be enforced. Run the following command to allow Nginx to access the proxied service:

      • sudo setsebool httpd_can_network_connect 1 -P

      You can learn more about SELinux in the tutorial An Introduction to SELinux on CentOS 7.

      Kibana is now accessible via your FQDN or the public IP address of your Elastic Stack server. You can check the Kibana server's status page by navigating to the following address and entering your login credentials when prompted:

      http://your_server_ip/status
      

      This status page displays information about the server’s resource usage and lists the installed plugins.

Kibana status page

      Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. You can follow this tutorial now to obtain a free SSL certificate for Nginx on CentOS 7. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.

      Now that the Kibana dashboard is configured, let's install the next component: Logstash.

      Step 3 — Installing and Configuring Logstash

      Although it's possible for Beats to send data directly to the Elasticsearch database, we recommend using Logstash to process the data first. This will allow you to collect data from different sources, transform it into a common format, and export it to another database.

      Install Logstash with this command:

      • sudo yum install logstash

      After installing Logstash, you can move on to configuring it. Logstash's configuration files are written in the JSON format and reside in the /etc/logstash/conf.d directory. As you configure it, it's helpful to think of Logstash as a pipeline which takes in data at one end, processes it in one way or another, and sends it out to its destination (in this case, the destination being Elasticsearch). A Logstash pipeline has two required elements, input and output, and one optional element, filter. The input plugins consume data from a source, the filter plugins process the data, and the output plugins write the data to a destination.

      Logstash pipeline

      Create a configuration file called 02-beats-input.conf where you will set up your Filebeat input:

      • sudo vi /etc/logstash/conf.d/02-beats-input.conf

      Insert the following input configuration. This specifies a beats input that will listen on TCP port 5044.

      /etc/logstash/conf.d/02-beats-input.conf

      input {
        beats {
          port => 5044
        }
      }
      

      Save and close the file. Next, create a configuration file called 10-syslog-filter.conf, which will add a filter for system logs, also known as syslogs:

      • sudo vi /etc/logstash/conf.d/10-syslog-filter.conf

      Insert the following syslog filter configuration. This example system logs configuration was taken from official Elastic documentation. This filter is used to parse incoming system logs to make them structured and usable by the predefined Kibana dashboards:

      /etc/logstash/conf.d/10-syslog-filter.conf

      filter {
        if [fileset][module] == "system" {
          if [fileset][name] == "auth" {
            grok {
              match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                        "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
              pattern_definitions => {
                "GREEDYMULTILINE"=> "(.|\n)*"
              }
              remove_field => "message"
            }
            date {
              match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
            }
            geoip {
              source => "[system][auth][ssh][ip]"
              target => "[system][auth][ssh][geoip]"
            }
          }
          else if [fileset][name] == "syslog" {
            grok {
              match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
              pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
              remove_field => "message"
            }
            date {
              match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
            }
          }
        }
      }
      

      Save and close the file when finished.

      Lastly, create a configuration file called 30-elasticsearch-output.conf:

      • sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

      Insert the following output configuration. This output configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the Beat used. The Beat used in this tutorial is Filebeat:

      /etc/logstash/conf.d/30-elasticsearch-output.conf

      output {
        elasticsearch {
          hosts => ["localhost:9200"]
          manage_template => false
          index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        }
      }
      

      Save and close the file.

      If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they're sorted between the input and the output configuration, meaning that the file names should begin with a two-digit number between 02 and 30.

      Test your Logstash configuration with this command:

      • sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

If there are no syntax errors, your output will display Configuration OK after a few seconds. If you don't see this in your output, check for any errors that appear in your output and update your configuration to correct them.

      If your configuration test is successful, start and enable Logstash to put the configuration changes into effect:

      • sudo systemctl start logstash
      • sudo systemctl enable logstash

      Now that Logstash is running correctly and is fully configured, let's install Filebeat.

      Step 4 — Installing and Configuring Filebeat

      The Elastic Stack uses several lightweight data shippers called Beats to collect data from various sources and transport them to Logstash or Elasticsearch. Here are the Beats that are currently available from Elastic:

      • Filebeat: collects and ships log files.
      • Metricbeat: collects metrics from your systems and services.
      • Packetbeat: collects and analyzes network data.
      • Winlogbeat: collects Windows event logs.
      • Auditbeat: collects Linux audit framework data and monitors file integrity.
      • Heartbeat: monitors services for their availability with active probing.

      In this tutorial, we will use Filebeat to forward local logs to our Elastic Stack.

      Install Filebeat using yum:

      • sudo yum install filebeat

      Next, configure Filebeat to connect to Logstash. Here, we will modify the example configuration file that comes with Filebeat.

      Open the Filebeat configuration file:

      • sudo vi /etc/filebeat/filebeat.yml

      Note: As with Elasticsearch, Filebeat's configuration file is in YAML format. This means that proper indentation is crucial, so be sure to use the same number of spaces that are indicated in these instructions.

      Filebeat supports numerous outputs, but you’ll usually only send events directly to Elasticsearch or to Logstash for additional processing. In this tutorial, we'll use Logstash to perform additional processing on the data collected by Filebeat. Filebeat will not need to send any data directly to Elasticsearch, so let's disable that output. To do so, find the output.elasticsearch section and comment out the following lines by preceding them with a #:

      /etc/filebeat/filebeat.yml

      ...
      #output.elasticsearch:
        # Array of hosts to connect to.
        #hosts: ["localhost:9200"]
      ...
      

      Then, configure the output.logstash section. Uncomment the lines output.logstash: and hosts: ["localhost:5044"] by removing the #. This will configure Filebeat to connect to Logstash on your Elastic Stack server at port 5044, the port for which we specified a Logstash input earlier:

      /etc/filebeat/filebeat.yml

      output.logstash:
        # The Logstash hosts
        hosts: ["localhost:5044"]
      

      Save and close the file.

      You can now extend the functionality of Filebeat with Filebeat modules. In this tutorial, you will use the system module, which collects and parses logs created by the system logging service of common Linux distributions.

      Let's enable it:

      • sudo filebeat modules enable system

      You can see a list of enabled and disabled modules by running:

      • sudo filebeat modules list

      You will see a list similar to the following:

      Output

      Enabled: system Disabled: apache2 auditd elasticsearch haproxy icinga iis kafka kibana logstash mongodb mysql nginx osquery postgresql redis suricata traefik

      By default, Filebeat is configured to use default paths for the syslog and authorization logs. In the case of this tutorial, you do not need to change anything in the configuration. You can see the parameters of the module in the /etc/filebeat/modules.d/system.yml configuration file.

      Next, load the index template into Elasticsearch. An Elasticsearch index is a collection of documents that have similar characteristics. Indexes are identified with a name, which is used to refer to the index when performing various operations within it. The index template will be automatically applied when a new index is created.
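
If you would like to see which indices already exist on your node at any point, you can query Elasticsearch's cat indices API (this check is optional and purely informational):

• curl -X GET 'localhost:9200/_cat/indices?v'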

      To load the template, use the following command:

      • sudo filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

      This will give the following output:

      Output

      Loaded index template

      Filebeat comes packaged with sample Kibana dashboards that allow you to visualize Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana.

      As the dashboards load, Filebeat connects to Elasticsearch to check version information. To load dashboards when Logstash is enabled, you need to manually disable the Logstash output and enable Elasticsearch output:

      • sudo filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

      You will see output that looks like this:

      Output

      . . . 2018-12-05T21:23:33.806Z INFO elasticsearch/client.go:163 Elasticsearch url: http://localhost:9200 2018-12-05T21:23:33.811Z INFO elasticsearch/client.go:712 Connected to Elasticsearch version 6.5.2 2018-12-05T21:23:33.815Z INFO template/load.go:129 Template already exists and will not be overwritten. Loaded index template Loading dashboards (Kibana must be running and reachable) 2018-12-05T21:23:33.816Z INFO elasticsearch/client.go:163 Elasticsearch url: http://localhost:9200 2018-12-05T21:23:33.819Z INFO elasticsearch/client.go:712 Connected to Elasticsearch version 6.5.2 2018-12-05T21:23:33.819Z INFO kibana/client.go:118 Kibana url: http://localhost:5601 2018-12-05T21:24:03.981Z INFO instance/beat.go:717 Kibana dashboards successfully loaded. Loaded dashboards 2018-12-05T21:24:03.982Z INFO elasticsearch/client.go:163 Elasticsearch url: http://localhost:9200 2018-12-05T21:24:03.984Z INFO elasticsearch/client.go:712 Connected to Elasticsearch version 6.5.2 2018-12-05T21:24:03.984Z INFO kibana/client.go:118 Kibana url: http://localhost:5601 2018-12-05T21:24:04.043Z WARN fileset/modules.go:388 X-Pack Machine Learning is not enabled 2018-12-05T21:24:04.080Z WARN fileset/modules.go:388 X-Pack Machine Learning is not enabled Loaded machine learning job configurations

      Now you can start and enable Filebeat:

      • sudo systemctl start filebeat
      • sudo systemctl enable filebeat

      If you've set up your Elastic Stack correctly, Filebeat will begin shipping your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.

      To verify that Elasticsearch is indeed receiving this data, query the Filebeat index with this command:

      • curl -X GET 'http://localhost:9200/filebeat-*/_search?pretty'

      You will see an output that looks similar to this:

      Output

      { "took" : 1, "timed_out" : false, "_shards" : { "total" : 3, "successful" : 3, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : 3225, "max_score" : 1.0, "hits" : [ { "_index" : "filebeat-6.5.2-2018.12.05", "_type" : "doc", "_id" : "vf5GgGcB_g3p-PRo_QOw", "_score" : 1.0, "_source" : { "@timestamp" : "2018-12-05T19:00:34.000Z", "source" : "/var/log/secure", "meta" : { "cloud" : { . . .

      If your output shows 0 total hits, Elasticsearch is not loading any logs under the index you searched for, and you will need to review your setup for errors. If you received the expected output, continue to the next step, in which you'll become familiar with some of Kibana's dashboards.
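
      If you do need to troubleshoot, a quick first check is to list the indices Elasticsearch knows about; a missing filebeat-* index usually points at the Filebeat-to-Logstash or Logstash-to-Elasticsearch hop:

      • curl -X GET 'http://localhost:9200/_cat/indices?v'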

      Step 5 — Exploring Kibana Dashboards

      Let's look at Kibana, the web interface that we installed earlier.

      In a web browser, go to the FQDN or public IP address of your Elastic Stack server. After entering the login credentials you defined in Step 2, you will see the Kibana homepage:

      Kibana Homepage

      Click the Discover link in the left-hand navigation bar. On the Discover page, select the predefined filebeat-* index pattern to see Filebeat data. By default, this will show you all of the log data over the last 15 minutes. You will see a histogram with log events, and some log messages below:

      Discover page

      Here, you can search and browse through your logs and also customize your dashboard. At this point, though, there won't be much in there because you are only gathering syslogs from your Elastic Stack server.

      Use the left-hand panel to navigate to the Dashboard page and search for Filebeat System. Once there, you can browse the sample dashboards that come with Filebeat's system module.

      For example, you can view detailed stats based on your syslog messages:

      Syslog Dashboard

      You can also view which users have used the sudo command and when:

      Sudo Dashboard

      Kibana has many other features, such as graphing and filtering, so feel free to explore.

      Conclusion

      In this tutorial, you installed and configured the Elastic Stack to collect and analyze system logs. Remember that you can send just about any type of log or indexed data to Logstash using Beats, but the data becomes even more useful if it is parsed and structured with a Logstash filter, as this transforms the data into a consistent format that can be read easily by Elasticsearch.




      How to Set Up an Elasticsearch, Fluentd and Kibana (EFK) Logging Stack on Kubernetes


      Introduction

      When running multiple services and applications on a Kubernetes cluster, a centralized, cluster-level logging stack can help you quickly sort through and analyze the heavy volume of log data produced by your Pods. One popular centralized logging solution is the Elasticsearch, Fluentd, and Kibana (EFK) stack.

      Elasticsearch is a real-time, distributed, and scalable search engine which allows for full-text and structured search, as well as analytics. It is commonly used to index and search through large volumes of log data, but can also be used to search many different kinds of documents.

      Elasticsearch is commonly deployed alongside Kibana, a powerful data visualization frontend and dashboard for Elasticsearch. Kibana allows you to explore your Elasticsearch log data through a web interface, and build dashboards and queries to quickly answer questions and gain insight into your Kubernetes applications.

      In this tutorial we’ll use Fluentd to collect, transform, and ship log data to the Elasticsearch backend. Fluentd is a popular open-source data collector that we’ll set up on our Kubernetes nodes to tail container log files, filter and transform the log data, and deliver it to the Elasticsearch cluster, where it will be indexed and stored.

      We’ll begin by configuring and launching a scalable Elasticsearch cluster, and then create the Kibana Kubernetes Service and Deployment. To conclude, we’ll set up Fluentd as a DaemonSet so it runs on every Kubernetes worker node.

      Prerequisites

      Before you begin with this guide, ensure you have the following available to you:

      • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled

        • Ensure your cluster has enough resources available to roll out the EFK stack, and if not scale your cluster by adding worker nodes. We’ll be deploying a 3-Pod Elasticsearch cluster (you can scale this down to 1 if necessary), as well as a single Kibana Pod. Every worker node will also run a Fluentd Pod. The cluster in this guide consists of 3 worker nodes and a managed control plane.
      • The kubectl command-line tool installed on your local machine, configured to connect to your cluster. You can read more about installing kubectl in the official documentation.

      Once you have these components set up, you’re ready to begin with this guide.

      Step 1 — Creating a Namespace

      Before we roll out an Elasticsearch cluster, we’ll first create a Namespace into which we’ll install all of our logging instrumentation. Kubernetes lets you separate objects running in your cluster using a “virtual cluster” abstraction called Namespaces. In this guide, we’ll create a kube-logging namespace into which we’ll install the EFK stack components. This Namespace will also allow us to quickly clean up and remove the logging stack without any loss of function to the Kubernetes cluster.

      To begin, first investigate the existing Namespaces in your cluster using kubectl:

      kubectl get namespaces
      

      You should see the following three initial Namespaces, which come preinstalled with your Kubernetes cluster:

      Output

      NAME          STATUS    AGE
      default       Active    5m
      kube-system   Active    5m
      kube-public   Active    5m

      The default Namespace houses objects that are created without specifying a Namespace. The kube-system Namespace contains objects created and used by the Kubernetes system, like kube-dns, kube-proxy, and kubernetes-dashboard. It’s good practice to keep this Namespace clean and not pollute it with your application and instrumentation workloads.

      The kube-public Namespace is another automatically created Namespace that can be used to store objects you’d like to be readable and accessible throughout the whole cluster, even to unauthenticated users.

      To create the kube-logging Namespace, first open and edit a file called kube-logging.yaml using your favorite editor, such as nano:

      • nano kube-logging.yaml

      Inside your editor, paste the following Namespace object YAML:

      kube-logging.yaml

      kind: Namespace
      apiVersion: v1
      metadata:
        name: kube-logging
      

      Then, save and close the file.

      Here, we specify the Kubernetes object's kind as a Namespace object. To learn more about Namespace objects, consult the Namespaces Walkthrough in the official Kubernetes documentation. We also specify the Kubernetes API version used to create the object (v1), and give it a name, kube-logging.

      Once you've created the kube-logging.yaml Namespace object file, create the Namespace using kubectl create with the -f filename flag:

      • kubectl create -f kube-logging.yaml

      You should see the following output:

      Output

      namespace/kube-logging created

      You can then confirm that the Namespace was successfully created by listing the Namespaces again:

      • kubectl get namespaces

      At this point, you should see the new kube-logging Namespace:

      Output

      NAME           STATUS    AGE
      default        Active    23m
      kube-logging   Active    1m
      kube-public    Active    23m
      kube-system    Active    23m

      We can now deploy an Elasticsearch cluster into this isolated logging Namespace.

      Step 2 — Creating the Elasticsearch StatefulSet

      Now that we've created a Namespace to house our logging stack, we can begin rolling out its various components. We'll first begin by deploying a 3-node Elasticsearch cluster.

      In this guide, we use 3 Elasticsearch Pods to avoid the "split-brain" issue that can occur in highly available, multi-node clusters. At a high level, "split-brain" arises when one or more nodes can't communicate with the others and several "split" masters get elected. To learn more, consult “Avoiding split brain.”

      One key takeaway is that you should set the discovery.zen.minimum_master_nodes Elasticsearch parameter to N/2 + 1 (rounding down in the case of fractional numbers), where N is the number of master-eligible nodes in your Elasticsearch cluster. For our 3-node cluster, this means that we'll set this value to 2. That way, if one node gets disconnected from the cluster temporarily, the other two nodes can elect a new master and the cluster can continue functioning while the last node attempts to rejoin. It's important to keep this parameter in mind when scaling your Elasticsearch cluster.

      Creating the Headless Service

      To start, we'll create a headless Kubernetes service called elasticsearch that will define a DNS domain for the 3 Pods. A headless service does not perform load balancing or have a static IP; to learn more about headless services, consult the official Kubernetes documentation.

      Open a file called elasticsearch_svc.yaml using your favorite editor:

      • nano elasticsearch_svc.yaml

      Paste in the following Kubernetes service YAML:

      elasticsearch_svc.yaml

      kind: Service
      apiVersion: v1
      metadata:
        name: elasticsearch
        namespace: kube-logging
        labels:
          app: elasticsearch
      spec:
        selector:
          app: elasticsearch
        clusterIP: None
        ports:
          - port: 9200
            name: rest
          - port: 9300
            name: inter-node
      

      Then, save and close the file.

      We define a Service called elasticsearch in the kube-logging Namespace, and give it the app: elasticsearch label. We then set the .spec.selector to app: elasticsearch so that the Service selects Pods with the app: elasticsearch label. When we associate our Elasticsearch StatefulSet with this Service, the Service will return DNS A records that point to Elasticsearch Pods with the app: elasticsearch label.

      We then set clusterIP: None, which renders the service headless. Finally, we define ports 9200 and 9300 which are used to interact with the REST API, and for inter-node communication, respectively.

      Create the service using kubectl:

      • kubectl create -f elasticsearch_svc.yaml

      You should see the following output:

      Output

      service/elasticsearch created

      Finally, double-check that the service was successfully created using kubectl get:

      kubectl get services --namespace=kube-logging
      

      You should see the following:

      Output

      NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
      elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   26s

      Now that we've set up our headless service and a stable .elasticsearch.kube-logging.svc.cluster.local domain for our Pods, we can go ahead and create the StatefulSet.

      Creating the StatefulSet

      A Kubernetes StatefulSet allows you to assign a stable identity to Pods and grant them stable, persistent storage. Elasticsearch requires stable storage to persist data across Pod rescheduling and restarts. To learn more about the StatefulSet workload, consult the Statefulsets page from the Kubernetes docs.

      Open a file called elasticsearch_statefulset.yaml in your favorite editor:

      • nano elasticsearch_statefulset.yaml

      We will move through the StatefulSet object definition section by section, pasting blocks into this file.

      Begin by pasting in the following block:

      elasticsearch_statefulset.yaml

      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: es-cluster
        namespace: kube-logging
      spec:
        serviceName: elasticsearch
        replicas: 3
        selector:
          matchLabels:
            app: elasticsearch
        template:
          metadata:
            labels:
              app: elasticsearch
      

      In this block, we define a StatefulSet called es-cluster in the kube-logging namespace. We then associate it with our previously created elasticsearch Service using the serviceName field. This ensures that each Pod in the StatefulSet will be accessible using the following DNS address: es-cluster-[0,1,2].elasticsearch.kube-logging.svc.cluster.local, where [0,1,2] corresponds to the Pod's assigned integer ordinal.

      We specify 3 replicas (Pods) and set the matchLabels selector to app: elasticsearch, which we then mirror in the .spec.template.metadata section. The .spec.selector.matchLabels and .spec.template.metadata.labels fields must match.

      We can now move on to the object spec. Paste in the following block of YAML immediately below the preceding block:

      elasticsearch_statefulset.yaml

      . . .
          spec:
            containers:
            - name: elasticsearch
              image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
              resources:
                  limits:
                    cpu: 1000m
                  requests:
                    cpu: 100m
              ports:
              - containerPort: 9200
                name: rest
                protocol: TCP
              - containerPort: 9300
                name: inter-node
                protocol: TCP
              volumeMounts:
              - name: data
                mountPath: /usr/share/elasticsearch/data
              env:
                - name: cluster.name
                  value: k8s-logs
                - name: node.name
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: discovery.zen.ping.unicast.hosts
                  value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
                - name: discovery.zen.minimum_master_nodes
                  value: "2"
                - name: ES_JAVA_OPTS
                  value: "-Xms512m -Xmx512m"
      

      Here we define the Pods in the StatefulSet. We name the containers elasticsearch and choose the docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3 Docker image. At this point, you may modify this image tag to correspond to your own internal Elasticsearch image, or a different version. Note that for the purposes of this guide, only Elasticsearch 6.4.3 has been tested.

      The -oss suffix ensures that we use the open-source version of Elasticsearch. If you'd like to use the default version containing X-Pack (which includes a free license), omit the -oss suffix. Note that you will have to modify the steps in this guide slightly to account for the added authentication provided by X-Pack.

      We then use the resources field to specify that the container needs at least 0.1 vCPU guaranteed to it, and can burst up to 1 vCPU (which limits the Pod's resource usage when performing an initial large ingest or dealing with a load spike). You should modify these values depending on your anticipated load and available resources. To learn more about resource requests and limits, consult the official Kubernetes Documentation.

      We then open and name ports 9200 and 9300 for REST API and inter-node communication, respectively. We specify a volumeMount called data that will mount the PersistentVolume named data to the container at the path /usr/share/elasticsearch/data. We will define the VolumeClaims for this StatefulSet in a later YAML block.

      Finally, we set some environment variables in the container:

      • cluster.name: The Elasticsearch cluster's name, which in this guide is k8s-logs.
      • node.name: The node's name, which we set to the .metadata.name field using valueFrom. This will resolve to es-cluster-[0,1,2], depending on the node's assigned ordinal.
      • discovery.zen.ping.unicast.hosts: This field sets the discovery method used to connect nodes to each other within an Elasticsearch cluster. We use unicast discovery, which specifies a static list of hosts for our cluster. In this guide, thanks to the headless service we configured earlier, our Pods have domains of the form es-cluster-[0,1,2].elasticsearch.kube-logging.svc.cluster.local, so we set this variable accordingly. Using local namespace Kubernetes DNS resolution, we can shorten this to es-cluster-[0,1,2].elasticsearch. To learn more about Elasticsearch discovery, consult the official Elasticsearch documentation.
      • discovery.zen.minimum_master_nodes: We set this to (N/2) + 1, where N is the number of master-eligible nodes in our cluster. In this guide we have 3 Elasticsearch nodes, so we set this value to 2 (rounding down to the nearest integer). To learn more about this parameter, consult the official Elasticsearch documentation.
      • ES_JAVA_OPTS: Here we set this to -Xms512m -Xmx512m which tells the JVM to use a minimum and maximum heap size of 512 MB. You should tune these parameters depending on your cluster's resource availability and needs. To learn more, consult Setting the heap size.

      The next block we'll paste in looks as follows:

      elasticsearch_statefulset.yaml

      . . .
            initContainers:
            - name: fix-permissions
              image: busybox
              command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
              securityContext:
                privileged: true
              volumeMounts:
              - name: data
                mountPath: /usr/share/elasticsearch/data
            - name: increase-vm-max-map
              image: busybox
              command: ["sysctl", "-w", "vm.max_map_count=262144"]
              securityContext:
                privileged: true
            - name: increase-fd-ulimit
              image: busybox
              command: ["sh", "-c", "ulimit -n 65536"]
              securityContext:
                privileged: true
      

      In this block, we define several Init Containers that run before the main elasticsearch app container. These Init Containers each run to completion in the order they are defined. To learn more about Init Containers, consult the official Kubernetes Documentation.

      The first, named fix-permissions, runs a chown command to change the owner and group of the Elasticsearch data directory to 1000:1000, the Elasticsearch user's UID and GID. By default Kubernetes mounts the data directory as root, which renders it inaccessible to Elasticsearch. To learn more about this step, consult Elasticsearch's “Notes for production use and defaults.”

      The second, named increase-vm-max-map, runs a command to increase the operating system's limits on mmap counts, which by default may be too low, resulting in out of memory errors. To learn more about this step, consult the official Elasticsearch documentation.

      The next Init Container to run is increase-fd-ulimit, which runs the ulimit command to increase the maximum number of open file descriptors. To learn more about this step, consult the “Notes for Production Use and Defaults” from the official Elasticsearch documentation.

      Note: The Elasticsearch Notes for Production Use also mentions disabling swapping for performance reasons. Depending on your Kubernetes installation or provider, swapping may already be disabled. To check this, exec into a running container and run cat /proc/swaps to list active swap devices. If you see nothing there, swap is disabled.
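
      Once the Pods from this step are running, that check could look like the following, using the es-cluster-0 Pod name from this guide:

      • kubectl exec es-cluster-0 --namespace=kube-logging -- cat /proc/swaps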

      Now that we've defined our main app container and the Init Containers that run before it to tune the container OS, we can add the final piece to our StatefulSet object definition file: the volumeClaimTemplates.

      Paste in the following volumeClaimTemplate block:

      elasticsearch_statefulset.yaml

      . . .
        volumeClaimTemplates:
        - metadata:
            name: data
            labels:
              app: elasticsearch
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: do-block-storage
            resources:
              requests:
                storage: 100Gi
      

      In this block, we define the StatefulSet's volumeClaimTemplates. Kubernetes will use this to create PersistentVolumes for the Pods. In the block above, we name it data (which is the name we refer to in the volumeMounts defined previously), and give it the same app: elasticsearch label as our StatefulSet.

      We then specify its access mode as ReadWriteOnce, which means that it can only be mounted as read-write by a single node. We define the storage class as do-block-storage in this guide since we use a DigitalOcean Kubernetes cluster for demonstration purposes. You should change this value depending on where you are running your Kubernetes cluster. To learn more, consult the Persistent Volume documentation.

      Finally, we specify that we'd like each PersistentVolume to be 100GiB in size. You should adjust this value depending on your production needs.
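
      If you're unsure which storage classes your cluster provides, you can list them with kubectl and use one of those names for storageClassName (do-block-storage is specific to DigitalOcean):

      • kubectl get storageclass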

      The complete StatefulSet spec should look something like this:

      elasticsearch_statefulset.yaml

      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: es-cluster
        namespace: kube-logging
      spec:
        serviceName: elasticsearch
        replicas: 3
        selector:
          matchLabels:
            app: elasticsearch
        template:
          metadata:
            labels:
              app: elasticsearch
          spec:
            containers:
            - name: elasticsearch
              image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
              resources:
                  limits:
                    cpu: 1000m
                  requests:
                    cpu: 100m
              ports:
              - containerPort: 9200
                name: rest
                protocol: TCP
              - containerPort: 9300
                name: inter-node
                protocol: TCP
              volumeMounts:
              - name: data
                mountPath: /usr/share/elasticsearch/data
              env:
                - name: cluster.name
                  value: k8s-logs
                - name: node.name
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: discovery.zen.ping.unicast.hosts
                  value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
                - name: discovery.zen.minimum_master_nodes
                  value: "2"
                - name: ES_JAVA_OPTS
                  value: "-Xms512m -Xmx512m"
            initContainers:
            - name: fix-permissions
              image: busybox
              command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
              securityContext:
                privileged: true
              volumeMounts:
              - name: data
                mountPath: /usr/share/elasticsearch/data
            - name: increase-vm-max-map
              image: busybox
              command: ["sysctl", "-w", "vm.max_map_count=262144"]
              securityContext:
                privileged: true
            - name: increase-fd-ulimit
              image: busybox
              command: ["sh", "-c", "ulimit -n 65536"]
              securityContext:
                privileged: true
        volumeClaimTemplates:
        - metadata:
            name: data
            labels:
              app: elasticsearch
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: do-block-storage
            resources:
              requests:
                storage: 100Gi
      

      Once you're satisfied with your Elasticsearch configuration, save and close the file.

      Now, deploy the StatefulSet using kubectl:

      • kubectl create -f elasticsearch_statefulset.yaml

      You should see the following output:

      Output

      statefulset.apps/es-cluster created

      You can monitor the StatefulSet as it is rolled out using kubectl rollout status:

      • kubectl rollout status sts/es-cluster --namespace=kube-logging

      You should see the following output as the cluster is rolled out:

      Output

      Waiting for 3 pods to be ready...
      Waiting for 2 pods to be ready...
      Waiting for 1 pods to be ready...
      partitioned roll out complete: 3 new pods have been updated...

      Once all the Pods have been deployed, you can check that your Elasticsearch cluster is functioning correctly by performing a request against the REST API.

      To do so, first forward the local port 9200 to the port 9200 on one of the Elasticsearch nodes (es-cluster-0) using kubectl port-forward:

      • kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging

      Then, in a separate terminal window, perform a curl request against the REST API:

      • curl http://localhost:9200/_cluster/state?pretty

      You should see the following output:

      Output

      { "cluster_name" : "k8s-logs", "compressed_size_in_bytes" : 348, "cluster_uuid" : "QD06dK7CQgids-GQZooNVw", "version" : 3, "state_uuid" : "mjNIWXAzQVuxNNOQ7xR-qg", "master_node" : "IdM5B7cUQWqFgIHXBp0JDg", "blocks" : { }, "nodes" : { "u7DoTpMmSCixOoictzHItA" : { "name" : "es-cluster-1", "ephemeral_id" : "ZlBflnXKRMC4RvEACHIVdg", "transport_address" : "10.244.8.2:9300", "attributes" : { } }, "IdM5B7cUQWqFgIHXBp0JDg" : { "name" : "es-cluster-0", "ephemeral_id" : "JTk1FDdFQuWbSFAtBxdxAQ", "transport_address" : "10.244.44.3:9300", "attributes" : { } }, "R8E7xcSUSbGbgrhAdyAKmQ" : { "name" : "es-cluster-2", "ephemeral_id" : "9wv6ke71Qqy9vk2LgJTqaA", "transport_address" : "10.244.40.4:9300", "attributes" : { } } }, ...

      This indicates that our Elasticsearch cluster k8s-logs has successfully been created with 3 nodes: es-cluster-0, es-cluster-1, and es-cluster-2. The current master node is es-cluster-0.
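
      With the port-forward still running, you can also request a condensed health summary; a "green" status and "number_of_nodes" : 3 confirm the same thing at a glance:

      • curl http://localhost:9200/_cluster/health?pretty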

      Now that your Elasticsearch cluster is up and running, you can move on to setting up a Kibana frontend for it.

      Step 3 — Creating the Kibana Deployment and Service

      To launch Kibana on Kubernetes, we'll create a Service called kibana, and a Deployment consisting of one Pod replica. You can scale the number of replicas depending on your production needs, and optionally specify a LoadBalancer type for the Service to load balance requests across the Deployment pods.
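
      For reference, exposing Kibana externally would mainly involve setting the Service type; the following is a sketch only and not part of this guide's spec, which keeps the default ClusterIP type behind a port-forward:

      apiVersion: v1
      kind: Service
      metadata:
        name: kibana
        namespace: kube-logging
      spec:
        type: LoadBalancer   # provisions a cloud load balancer, if your provider supports it
        ports:
        - port: 5601
        selector:
          app: kibana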

      This time, we'll create the Service and Deployment in the same file. Open up a file called kibana.yaml in your favorite editor:

      • nano kibana.yaml

      Paste in the following service spec:

      kibana.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: kibana
        namespace: kube-logging
        labels:
          app: kibana
      spec:
        ports:
        - port: 5601
        selector:
          app: kibana
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kibana
        namespace: kube-logging
        labels:
          app: kibana
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: kibana
        template:
          metadata:
            labels:
              app: kibana
          spec:
            containers:
            - name: kibana
              image: docker.elastic.co/kibana/kibana-oss:6.4.3
              resources:
                limits:
                  cpu: 1000m
                requests:
                  cpu: 100m
              env:
                - name: ELASTICSEARCH_URL
                  value: http://elasticsearch:9200
              ports:
              - containerPort: 5601
      

      Then, save and close the file.

      In this spec we've defined a Service called kibana in the kube-logging Namespace and given it the app: kibana label.

      We've also specified that it should be accessible on port 5601 and use the app: kibana label to select the Service's target Pods.

      In the Deployment spec, we define a Deployment called kibana and specify that we'd like 1 Pod replica.

      We use the docker.elastic.co/kibana/kibana-oss:6.4.3 image. At this point you may substitute your own private or public Kibana image to use. We once again use the -oss suffix to specify that we'd like the open-source version.

      We specify that we'd like at the very least 0.1 vCPU guaranteed to the Pod, bursting up to a limit of 1 vCPU. You may change these parameters depending on your anticipated load and available resources.

      Next, we use the ELASTICSEARCH_URL environment variable to set the endpoint and port for the Elasticsearch cluster. Using Kubernetes DNS, this endpoint corresponds to its Service name elasticsearch. This domain will resolve to a list of IP addresses for the 3 Elasticsearch Pods. To learn more about Kubernetes DNS, consult DNS for Services and Pods.

      Finally, we set Kibana's container port to 5601, to which the kibana Service will forward requests.

      Once you're satisfied with your Kibana configuration, you can roll out the Service and Deployment using kubectl:

      • kubectl create -f kibana.yaml

      You should see the following output:

      Output

      service/kibana created
      deployment.apps/kibana created

      You can check that the rollout succeeded by running the following command:

      • kubectl rollout status deployment/kibana --namespace=kube-logging

      You should see the following output:

      Output

      deployment "kibana" successfully rolled out

      To access the Kibana interface, we'll once again forward a local port, this time to the Pod running Kibana. Grab the Kibana Pod details using kubectl get:

      • kubectl get pods --namespace=kube-logging

      Output

      NAME                      READY     STATUS    RESTARTS   AGE
      es-cluster-0              1/1       Running   0          55m
      es-cluster-1              1/1       Running   0          54m
      es-cluster-2              1/1       Running   0          54m
      kibana-6c9fb4b5b7-plbg2   1/1       Running   0          4m27s

      Here we observe that our Kibana Pod is called kibana-6c9fb4b5b7-plbg2.

      Forward the local port 5601 to port 5601 on this Pod:

      • kubectl port-forward kibana-6c9fb4b5b7-plbg2 5601:5601 --namespace=kube-logging

      You should see the following output:

      Output

      Forwarding from 127.0.0.1:5601 -> 5601
      Forwarding from [::1]:5601 -> 5601

      Now, in your web browser, visit the following URL:

      http://localhost:5601
      

      If you see the following Kibana welcome page, you've successfully deployed Kibana into your Kubernetes cluster:

      Kibana Welcome Screen
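
      If you'd also like a command-line check, Kibana exposes a status endpoint that you can curl through the same port-forward (optional):

      • curl http://localhost:5601/api/status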

      You can now move on to rolling out the final component of the EFK stack: the log collector, Fluentd.

      Step 4 — Creating the Fluentd DaemonSet

      In this guide, we'll set up Fluentd as a DaemonSet, which is a Kubernetes workload type that runs a copy of a given Pod on each Node in the Kubernetes cluster. Using this DaemonSet controller, we'll roll out a Fluentd logging agent Pod on every node in our cluster. To learn more about this logging architecture, consult “Using a node logging agent” from the official Kubernetes docs.

      In Kubernetes, containerized applications that log to stdout and stderr have their log streams captured and redirected to JSON files on the nodes. The Fluentd Pod will tail these log files, filter log events, transform the log data, and ship it off to the Elasticsearch logging backend we deployed in Step 2.

      In addition to container logs, the Fluentd agent will tail Kubernetes system component logs like kubelet, kube-proxy, and Docker logs. To see a full list of sources tailed by the Fluentd logging agent, consult the kubernetes.conf file used to configure the logging agent. To learn more about logging in Kubernetes clusters, consult “Logging at the node level” from the official Kubernetes documentation.

      Begin by opening a file called fluentd.yaml in your favorite text editor:

      • nano fluentd.yaml

      Once again, we'll paste in the Kubernetes object definitions block by block, providing context as we go along. In this guide, we use the Fluentd DaemonSet spec provided by the Fluentd maintainers. Another helpful resource provided by the Fluentd maintainers is Kubernetes Logging with Fluentd.

      First, paste in the following ServiceAccount definition:

      fluentd.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: fluentd
        namespace: kube-logging
        labels:
          app: fluentd
      

      Here, we create a Service Account called fluentd that the Fluentd Pods will use to access the Kubernetes API. We create it in the kube-logging Namespace and once again give it the label app: fluentd. To learn more about Service Accounts in Kubernetes, consult Configure Service Accounts for Pods in the official Kubernetes docs.

      Next, paste in the following ClusterRole block:

      fluentd.yaml

      . . .
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: fluentd
        labels:
          app: fluentd
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - namespaces
        verbs:
        - get
        - list
        - watch
      

      Here we define a ClusterRole called fluentd to which we grant the get, list, and watch permissions on the pods and namespaces objects. ClusterRoles allow you to grant access to cluster-scoped Kubernetes resources like Nodes. To learn more about Role-Based Access Control and Cluster Roles, consult Using RBAC Authorization from the official Kubernetes documentation.

      Now, paste in the following ClusterRoleBinding block:

      fluentd.yaml

      . . .
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: fluentd
      roleRef:
        kind: ClusterRole
        name: fluentd
        apiGroup: rbac.authorization.k8s.io
      subjects:
      - kind: ServiceAccount
        name: fluentd
        namespace: kube-logging
      

      In this block, we define a ClusterRoleBinding called fluentd which binds the fluentd ClusterRole to the fluentd Service Account. This grants the fluentd ServiceAccount the permissions listed in the fluentd Cluster Role.
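
      Once these objects are created later in this step, you can optionally verify the binding by impersonating the Service Account with kubectl auth can-i (assuming your own user is allowed to impersonate service accounts):

      • kubectl auth can-i list pods --as=system:serviceaccount:kube-logging:fluentd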

      At this point we can begin pasting in the actual DaemonSet spec:

      fluentd.yaml

      . . .
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: fluentd
        namespace: kube-logging
        labels:
          app: fluentd
      

      Here, we define a DaemonSet called fluentd in the kube-logging Namespace and give it the app: fluentd label.

      Next, paste in the following section:

      fluentd.yaml

      . . .
      spec:
        selector:
          matchLabels:
            app: fluentd
        template:
          metadata:
            labels:
              app: fluentd
          spec:
            serviceAccount: fluentd
            serviceAccountName: fluentd
            tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
            containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch
              env:
                - name:  FLUENT_ELASTICSEARCH_HOST
                  value: "elasticsearch.kube-logging.svc.cluster.local"
                - name:  FLUENT_ELASTICSEARCH_PORT
                  value: "9200"
                - name: FLUENT_ELASTICSEARCH_SCHEME
                  value: "http"
                - name: FLUENT_UID
                  value: "0"
      

      Here, we match the app: fluentd label defined in .metadata.labels and then assign the DaemonSet the fluentd Service Account. We also use the app: fluentd selector so that this DaemonSet manages the Pods carrying that label.

      Next, we define a NoSchedule toleration to match the equivalent taint on Kubernetes master nodes. This will ensure that the DaemonSet also gets rolled out to the Kubernetes masters. If you don't want to run a Fluentd Pod on your master nodes, remove this toleration. To learn more about Kubernetes taints and tolerations, consult “Taints and Tolerations” from the official Kubernetes docs.

      Next, we begin defining the Pod container, which we call fluentd.

      We use the official v0.12 Debian image provided by the Fluentd maintainers. If you'd like to use your own private or public Fluentd image, or use a different image version, modify the image tag in the container spec. The Dockerfile and contents of this image are available in Fluentd's fluentd-kubernetes-daemonset Github repo.

      Next, we configure Fluentd using some environment variables:

      • FLUENT_ELASTICSEARCH_HOST: We set this to the Elasticsearch headless Service address defined earlier: elasticsearch.kube-logging.svc.cluster.local. This will resolve to a list of IP addresses for the 3 Elasticsearch Pods. The actual Elasticsearch host will most likely be the first IP address returned in this list. To distribute logs across the cluster, you will need to modify the configuration for Fluentd’s Elasticsearch Output plugin. To learn more about this plugin, consult Elasticsearch Output Plugin.
      • FLUENT_ELASTICSEARCH_PORT: We set this to the Elasticsearch port we configured earlier, 9200.
      • FLUENT_ELASTICSEARCH_SCHEME: We set this to http.
      • FLUENT_UID: We set this to 0 (superuser) so that Fluentd can access the files in /var/log.

      Finally, paste in the following section:

      fluentd.yaml

      . . .
              resources:
                limits:
                  memory: 512Mi
                requests:
                  cpu: 100m
                  memory: 200Mi
              volumeMounts:
              - name: varlog
                mountPath: /var/log
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
                readOnly: true
            terminationGracePeriodSeconds: 30
            volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: varlibdockercontainers
              hostPath:
                path: /var/lib/docker/containers
      

      Here we specify a 512 MiB memory limit on the Fluentd Pod, and guarantee it 0.1 vCPU and 200 MiB of memory. You can tune these resource limits and requests depending on your anticipated log volume and available resources.

      Next, we mount the /var/log and /var/lib/docker/containers host paths into the container using the varlog and varlibdockercontainers volumeMounts. These volumes are defined at the end of the block.

      The final parameter we define in this block is terminationGracePeriodSeconds, which gives Fluentd 30 seconds to shut down gracefully upon receiving a SIGTERM signal. After 30 seconds, the containers are sent a SIGKILL signal. The default value for terminationGracePeriodSeconds is 30s, so in most cases this parameter can be omitted. To learn more about gracefully terminating Kubernetes workloads, consult Google's “Kubernetes best practices: terminating with grace.”

      The entire Fluentd spec should look something like this:

      fluentd.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: fluentd
        namespace: kube-logging
        labels:
          app: fluentd
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: fluentd
        labels:
          app: fluentd
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - namespaces
        verbs:
        - get
        - list
        - watch
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: fluentd
      roleRef:
        kind: ClusterRole
        name: fluentd
        apiGroup: rbac.authorization.k8s.io
      subjects:
      - kind: ServiceAccount
        name: fluentd
        namespace: kube-logging
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: fluentd
        namespace: kube-logging
        labels:
          app: fluentd
      spec:
        selector:
          matchLabels:
            app: fluentd
        template:
          metadata:
            labels:
              app: fluentd
          spec:
            serviceAccount: fluentd
            serviceAccountName: fluentd
            tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
            containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch
              env:
                - name:  FLUENT_ELASTICSEARCH_HOST
                  value: "elasticsearch.kube-logging.svc.cluster.local"
                - name:  FLUENT_ELASTICSEARCH_PORT
                  value: "9200"
                - name: FLUENT_ELASTICSEARCH_SCHEME
                  value: "http"
                - name: FLUENT_UID
                  value: "0"
              resources:
                limits:
                  memory: 512Mi
                requests:
                  cpu: 100m
                  memory: 200Mi
              volumeMounts:
              - name: varlog
                mountPath: /var/log
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
                readOnly: true
            terminationGracePeriodSeconds: 30
            volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: varlibdockercontainers
              hostPath:
                path: /var/lib/docker/containers
      

      Once you've finished configuring the Fluentd DaemonSet, save and close the file.

      Now, roll out the DaemonSet using kubectl:

      • kubectl create -f fluentd.yaml

      You should see the following output:

      Output

      serviceaccount/fluentd created
      clusterrole.rbac.authorization.k8s.io/fluentd created
      clusterrolebinding.rbac.authorization.k8s.io/fluentd created
      daemonset.extensions/fluentd created

      Verify that your DaemonSet rolled out successfully using kubectl:

      • kubectl get ds --namespace=kube-logging

      You should see the following status output:

      Output

      NAME      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
      fluentd   3         3         3         3            3           <none>          58s

      This indicates that there are 3 fluentd Pods running, which corresponds to the number of nodes in our Kubernetes cluster.

      We can now check Kibana to verify that log data is being properly collected and shipped to Elasticsearch.
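
      Optionally, before opening Kibana you can confirm that log data is reaching Elasticsearch by re-running the port-forward from Step 2 and listing the indices in another terminal; the Fluentd image used here writes to logstash-* indices by default:

      • kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging
      • curl http://localhost:9200/_cat/indices?v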

      With the kubectl port-forward still open, navigate to http://localhost:5601.

      Click on Discover in the left-hand navigation menu.

      You should see the following configuration window:

      Kibana Index Pattern Configuration

      This allows you to define the Elasticsearch indices you'd like to explore in Kibana. To learn more, consult Defining your index patterns in the official Kibana docs. For now, we'll just use the logstash-* wildcard pattern to capture all the log data in our Elasticsearch cluster. Enter logstash-* in the text box and click on Next step.

      You'll then be brought to the following page:

      Kibana Index Pattern Settings

      This allows you to configure which field Kibana will use to filter log data by time. In the dropdown, select the @timestamp field, and hit Create index pattern.

      Now, hit Discover in the left hand navigation menu.

      You should see a histogram graph and some recent log entries:

      Kibana Incoming Logs

      At this point you've successfully configured and rolled out the EFK stack on your Kubernetes cluster. To learn how to use Kibana to analyze your log data, consult the Kibana User Guide.

      In the next optional section, we'll deploy a simple counter Pod that prints numbers to stdout, and find its logs in Kibana.

      Step 5 (Optional) — Testing Container Logging

      To demonstrate a basic Kibana use case of exploring the latest logs for a given Pod, we'll deploy a minimal counter Pod that prints sequential numbers to stdout.

      Let's begin by creating the Pod. Open up a file called counter.yaml in your favorite editor:

      • nano counter.yaml

      Then, paste in the following Pod spec:

      counter.yaml

      apiVersion: v1
      kind: Pod
      metadata:
        name: counter
      spec:
        containers:
        - name: count
          image: busybox
          args: [/bin/sh, -c,
                  'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
      

      Save and close the file.

      This is a minimal Pod called counter that runs a while loop, printing numbers sequentially.

      Deploy the counter Pod using kubectl:

      • kubectl create -f counter.yaml

      Once the Pod has been created and is running, navigate back to your Kibana dashboard.
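
      Before switching to Kibana, you can also confirm the Pod is emitting output directly with kubectl (optional):

      • kubectl logs counter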

      From the Discover page, in the search bar enter kubernetes.pod_name:counter. This filters the log data for Pods named counter.

      You should then see a list of log entries for the counter Pod:

      Counter Logs in Kibana

      You can click into any of the log entries to see additional metadata like the container name, Kubernetes node, Namespace, and more.

      Conclusion

      In this guide we've demonstrated how to set up and configure Elasticsearch, Fluentd, and Kibana on a Kubernetes cluster. We've used a minimal logging architecture that consists of a single logging agent Pod running on each Kubernetes worker node.

      Before deploying this logging stack into your production Kubernetes cluster, it’s best to tune the resource requirements and limits as indicated throughout this guide. You may also want to use the X-Pack enabled image with built-in monitoring and security.

      The logging architecture we’ve used here consists of 3 Elasticsearch Pods, a single Kibana Pod (not load-balanced), and a set of Fluentd Pods rolled out as a DaemonSet. You may wish to scale this setup depending on your production use case. To learn more about scaling your Elasticsearch and Kibana stack, consult Scaling Elasticsearch.

      Kubernetes also allows for more complex logging agent architectures that may better suit your use case. To learn more, consult Logging Architecture from the Kubernetes docs.


