
      How To Serve Flask Applications with uWSGI and Nginx on Ubuntu 18.04


      Introduction

      In this guide, you will build a Python application using the Flask microframework on Ubuntu 18.04. The bulk of this article will be about how to set up the uWSGI application server, how to launch the application, and how to configure Nginx to act as a front-end reverse proxy.

      Prerequisites

      Before starting this guide, you should have:

      • A server with Ubuntu 18.04 installed and a non-root user with sudo privileges. Follow our initial server setup guide for guidance.
      • Nginx installed, following Steps 1 and 2 of How To Install Nginx on Ubuntu 18.04.
      • A domain name configured to point to your server. You can purchase one on Namecheap or get one for free on Freenom. You can learn how to point domains to DigitalOcean by following the relevant documentation on domains and DNS. Be sure to create the following DNS records:

        • An A record with your_domain pointing to your server's public IP address.
        • An A record with www.your_domain pointing to your server's public IP address.
      • Familiarity with uWSGI, our application server, and the WSGI specification. This discussion of definitions and concepts covers both in detail, and a bare-bones example of the interface follows this list.
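
      At its core, the WSGI spec is just a calling convention: the server invokes a callable with the request environment and a start_response function. As a frame of reference, here is a minimal, framework-free WSGI app that any WSGI server (uWSGI included) could serve:

      def application(environ, start_response):
          # environ is a dict describing the incoming request; start_response
          # sends the status line and response headers back to the server
          start_response('200 OK', [('Content-Type', 'text/plain')])
          return [b'Hello from a bare WSGI app\n']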

      Step 1 — Installing the Components from the Ubuntu Repositories

      Our first step will be to install all of the pieces that we need from the Ubuntu repositories. We will install pip, the Python package manager, to manage our Python components. We will also get the Python development files necessary to build uWSGI.

      First, let's update the local package index and install the packages that will allow us to build our Python environment. These will include python3-pip, along with a few more packages and development tools necessary for a robust programming environment:

      • sudo apt update
      • sudo apt install python3-pip python3-dev build-essential libssl-dev libffi-dev python3-setuptools

      With these packages in place, let's move on to creating a virtual environment for our project.

      Step 2 — Creating a Python Virtual Environment

      Next, we'll set up a virtual environment in order to isolate our Flask application from the other Python files on the system.

      Start by installing the python3-venv package, which will install the venv module:

      • sudo apt install python3-venv

      Next, let's make a parent directory for our Flask project. Move into the directory after you create it:

      • mkdir ~/myproject
      • cd ~/myproject

      Create a virtual environment to store your Flask project's Python requirements by typing:

      • python3.6 -m venv myprojectenv

      This will install a local copy of Python and pip into a directory called myprojectenv within your project directory.

      Before installing applications within the virtual environment, you need to activate it. Do so by typing:

      • source myprojectenv/bin/activate

      Your prompt will change to indicate that you are now operating within the virtual environment. It will look like this: (myprojectenv)user@host:~/myproject$.
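
      If you want a check beyond the prompt, you can ask Python itself which environment it is running from; inside the virtual environment, sys.prefix points at the environment directory rather than the system installation:

      import sys
      # prints something like /home/sammy/myproject/myprojectenv while activated
      print(sys.prefix)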

      Step 3 — Setting Up a Flask Application

      Now that you are in your virtual environment, you can install Flask and uWSGI and get started on designing your application.

      First, let's install wheel with the local instance of pip to ensure that our packages will install even if they are missing wheel archives:

      • pip install wheel

      Note: Regardless of which version of Python you are using, when the virtual environment is activated, you should use the pip command (not pip3).

      Next, let's install Flask and uWSGI:

      • pip install uwsgi flask

      Creating a Sample App

      Now that you have Flask available, you can create a simple application. Flask is a microframework. It does not include many of the tools that more full-featured frameworks might. It exists, mainly, as a module that you can import into your projects to assist you in initializing a web application.

      While your application might be more complex, we'll create our Flask app in a single file, called myproject.py:

      • nano ~/myproject/myproject.py

      The application code will live in this file. It will import Flask and instantiate a Flask object. You can use this to define the functions that should be run when a specific route is requested:

      ~/myproject/myproject.py

      from flask import Flask
      app = Flask(__name__)
      
      @app.route("/")
      def hello():
          return "<h1 style='color:blue'>Hello There!</h1>"
      
      if __name__ == "__main__":
          app.run(host='0.0.0.0')
      

      This basically defines what content to present when the root domain is accessed. Save and close the file when you're finished.
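
      To illustrate how additional pages would work, a second route with a URL parameter could be added to the same file. The /greet route here is purely illustrative and is not used elsewhere in this guide:

      @app.route("/greet/<name>")
      def greet(name):
          # Flask passes the <name> segment of the URL in as an argument
          return "<h1>Hello, {}!</h1>".format(name)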

      If you followed the initial server setup guide, you should have a UFW firewall enabled. To test the application, you need to allow access to port 5000:

      • sudo ufw allow 5000

      Now, you can test your Flask app by typing:

      • python myproject.py

      You will see output like the following, including a helpful warning reminding you not to use this server setup in production:

      Output

      * Serving Flask app "myproject" (lazy loading)
      * Environment: production
        WARNING: Do not use the development server in a production environment.
        Use a production WSGI server instead.
      * Debug mode: off
      * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

      Visit your server's IP address followed by :5000 in your web browser:

      http://your_server_ip:5000
      

      You should see something like this:

      Flask sample app

      When you are finished, hit CTRL-C in your terminal window to stop the Flask development server.

      Creating the WSGI Entry Point

      Next, let's create a file that will serve as the entry point for our application. This will tell our uWSGI server how to interact with it.

      Let's call the file wsgi.py:

      • nano ~/myproject/wsgi.py

      In this file, let's import the Flask instance from our application and then run it:

      ~/myproject/wsgi.py

      from myproject import app
      
      if __name__ == "__main__":
          app.run()
      

      Save and close the file when you are finished.

      Step 4 — Configuring uWSGI

      Your application is now written with an entry point established. We can now move on to configuring uWSGI.

      Testing uWSGI Serving

      Let's test to make sure that uWSGI can serve our application.

      We can do this by simply passing it the name of our entry point. This is constructed from the name of the module (minus the .py extension) plus the name of the callable within the application. In our case, this is wsgi:app.
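
      In other words, wsgi:app means "import app from the wsgi module". If you want to confirm that this import resolves before involving uWSGI, a quick check from the project directory (with the virtual environment active) looks like this:

      # mirrors what uWSGI does internally with "wsgi:app"
      from wsgi import app
      print(app)  # should print something like: <Flask 'myproject'>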

      Let's also specify the socket so that it will be started on a publicly available interface, as well as the protocol, so that it will use HTTP instead of the uwsgi binary protocol. We'll use the same port number, 5000, that we opened earlier:

      • uwsgi --socket 0.0.0.0:5000 --protocol=http -w wsgi:app

      Visit your server's IP address with :5000 appended to the end in your web browser again:

      http://your_server_ip:5000
      

      You should see your application's output again:

      Flask sample app

      When you have confirmed that it's functioning properly, press CTRL-C in your terminal window.

      We're now done with our virtual environment, so we can deactivate it:

      • deactivate

      Any Python commands will now use the system's Python environment again.

      Creating a uWSGI Configuration File

      You have tested that uWSGI is able to serve your application, but ultimately you will want something more robust for long-term usage. You can create a uWSGI configuration file with the relevant options for this.

      Let's place that file in our project directory and call it myproject.ini:

      • nano ~/myproject/myproject.ini

      Inside, we'll start with the [uwsgi] header so that uWSGI knows to apply the settings. We'll specify two things: the module itself, by referring to the wsgi.py file minus the extension, and the callable within the file, app:

      ~/myproject/myproject.ini

      [uwsgi]
      module = wsgi:app
      

      Next, let's tell uWSGI to start up in master mode and spawn five worker processes to serve actual requests:

      ~/myproject/myproject.ini

      [uwsgi]
      module = wsgi:app
      
      master = true
      processes = 5
      

      When you were testing, you exposed uWSGI on a network port. However, you're going to be using Nginx to handle actual client connections, which will then pass requests to uWSGI. Since these components are operating on the same computer, a Unix socket is preferable because it is faster and more secure. Let's call the socket myproject.sock and place it in this directory.

      Let's also change the permissions on the socket. We'll be giving the Nginx group ownership of the uWSGI process later on, so we need to make sure the group owner of the socket can read information from it and write to it. We'll also clean up the socket when the process stops by adding the vacuum option:

      ~/myproject/myproject.ini

      [uwsgi]
      module = wsgi:app
      
      master = true
      processes = 5
      
      socket = myproject.sock
      chmod-socket = 660
      vacuum = true
      

      The last thing we'll do is set the die-on-term option. This can help ensure that the init system and uWSGI have the same assumptions about what each process signal means. Setting this aligns the two system components, implementing the expected behavior:

      ~/myproject/myproject.ini

      [uwsgi]
      module = wsgi:app
      
      master = true
      processes = 5
      
      socket = myproject.sock
      chmod-socket = 660
      vacuum = true
      
      die-on-term = true
      

      You may have noticed that we did not specify a protocol like we did from the command line. That is because, by default, uWSGI speaks using the uwsgi protocol, a fast binary protocol designed to communicate with other servers. Nginx can speak this protocol natively, so it's better to use this than to force communication over HTTP.

      When you are finished, save and close the file.

      Step 5 — Creating a systemd Unit File

      Next, let's create the systemd service unit file. Creating a systemd unit file will allow Ubuntu's init system to automatically start uWSGI and serve the Flask application whenever the server boots.

      Create a unit file ending in .service within the /etc/systemd/system directory to begin:

      • sudo nano /etc/systemd/system/myproject.service

      Inside, we'll start with the [Unit] section, which is used to specify metadata and dependencies. Let's put a description of our service here and tell the init system to only start this after the networking target has been reached:

      /etc/systemd/system/myproject.service

      [Unit]
      Description=uWSGI instance to serve myproject
      After=network.target
      

      Next, let's open up the [Service] section. This will specify the user and group that we want the process to run under. Let's give our regular user account ownership of the process since it owns all of the relevant files. Let's also give group ownership to the www-data group so that Nginx can communicate easily with the uWSGI processes. Remember to replace the username here with your username:

      /etc/systemd/system/myproject.service

      [Unit]
      Description=uWSGI instance to serve myproject
      After=network.target
      
      [Service]
      User=sammy
      Group=www-data
      

      Next, let's map out the working directory and set the PATH environment variable so that the init system knows that the executables for the process are located within our virtual environment. Let's also specify the command to start the service. Systemd requires that we give the full path to the uWSGI executable, which is installed within our virtual environment. We will pass the name of the .ini configuration file we created in our project directory.

      Remember to replace the username and project paths with your own information:

      /etc/systemd/system/myproject.service

      [Unit]
      Description=uWSGI instance to serve myproject
      After=network.target
      
      [Service]
      User=sammy
      Group=www-data
      WorkingDirectory=/home/sammy/myproject
      Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
      ExecStart=/home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
      

      Finally, let's add an [Install] section. This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:

      /etc/systemd/system/myproject.service

      [Unit]
      Description=uWSGI instance to serve myproject
      After=network.target
      
      [Service]
      User=sammy
      Group=www-data
      WorkingDirectory=/home/sammy/myproject
      Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
      ExecStart=/home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
      
      [Install]
      WantedBy=multi-user.target
      

      With that, our systemd service file is complete. Save and close it now.

      We can now start the uWSGI service we created and enable it so that it starts at boot:

      • sudo systemctl start myproject
      • sudo systemctl enable myproject

      Let's check the status:

      • sudo systemctl status myproject

      You should see output like this:

      Output

      ● myproject.service - uWSGI instance to serve myproject
         Loaded: loaded (/etc/systemd/system/myproject.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2018-07-13 14:28:39 UTC; 46s ago
       Main PID: 30360 (uwsgi)
          Tasks: 6 (limit: 1153)
         CGroup: /system.slice/myproject.service
                 ├─30360 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
                 ├─30378 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
                 ├─30379 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
                 ├─30380 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
                 ├─30381 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
                 └─30382 /home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini

      If you see any errors, be sure to resolve them before continuing with the tutorial.

      Step 6 — Configuring Nginx to Proxy Requests

      Our uWSGI application server should now be up and running, waiting for requests on the socket file in the project directory. Let's configure Nginx to pass web requests to that socket using the uwsgi protocol.

      Begin by creating a new server block configuration file in Nginx's sites-available directory. Let's call this myproject to keep in line with the rest of the guide:

      • sudo nano /etc/nginx/sites-available/myproject

      Open up a server block and tell Nginx to listen on the default port 80. Let's also tell it to use this block for requests for our server's domain name:

      /etc/nginx/sites-available/myproject

      server {
          listen 80;
          server_name your_domain www.your_domain;
      }
      

      Next, let's add a location block that matches every request. Within this block, we'll include the uwsgi_params file, which specifies some general uWSGI parameters that need to be set. We'll then pass the requests to the socket we defined using the uwsgi_pass directive:

      /etc/nginx/sites-available/myproject

      server {
          listen 80;
          server_name your_domain www.your_domain;
      
          location / {
              include uwsgi_params;
              uwsgi_pass unix:/home/sammy/myproject/myproject.sock;
          }
      }
      

      Save and close the file when you're finished.

      To enable the Nginx server block configuration you've just created, link the file to the sites-enabled directory:

      • sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled

      With the file in that directory, we can test for syntax errors by typing:

      • sudo nginx -t

      If this returns without indicating any issues, restart the Nginx process to read the new configuration:

      • sudo systemctl restart nginx

      Finally, let's adjust the firewall again. We no longer need access through port 5000, so we can remove that rule. We can then allow full access to the Nginx server:

      • sudo ufw delete allow 5000
      • sudo ufw allow 'Nginx Full'

      You should now be able to navigate to your server's domain name in your web browser:

      http://your_domain
      

      You should see your application's output:

      Flask sample app

      If you encounter any errors, try checking the following:

      • sudo less /var/log/nginx/error.log: checks the Nginx error logs.
      • sudo less /var/log/nginx/access.log: checks the Nginx access logs.
      • sudo journalctl -u nginx: checks the Nginx process logs.
      • sudo journalctl -u myproject: checks your Flask app's uWSGI logs.

      Step 7 — Securing the Application

      To ensure that traffic to your server remains secure, let's get an SSL certificate for your domain. There are multiple ways to do this, including getting a free certificate from Let's Encrypt, generating a self-signed certificate, or buying one from another provider and configuring Nginx to use it by following Steps 2 through 6 of How to Create a Self-signed SSL Certificate for Nginx in Ubuntu 18.04. We will go with option one for the sake of expediency.

      First, add the Certbot Ubuntu repository:

      • sudo add-apt-repository ppa:certbot/certbot

      You'll need to press ENTER to accept.

      Next, install Certbot's Nginx package with apt:

      • sudo apt install python-certbot-nginx

      Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the configuration whenever necessary. To use this plugin, type the following:

      • sudo certbot --nginx -d your_domain -d www.your_domain

      This runs certbot with the --nginx plugin, using -d to specify the names we'd like the certificate to be valid for.

      If this is your first time running certbot, you will be prompted to enter an email address and agree to the terms of service. After doing so, certbot will communicate with the Let's Encrypt server, then run a challenge to verify that you control the domain you're requesting a certificate for.

      If that's successful, certbot will ask how you'd like to configure your HTTPS settings.

      Output

      Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
      -------------------------------------------------------------------------------
      1: No redirect - Make no further changes to the webserver configuration.
      2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
      new sites, or if you're confident your site works on HTTPS. You can undo this
      change by editing your web server's configuration.
      -------------------------------------------------------------------------------
      Select the appropriate number [1-2] then [enter] (press 'c' to cancel):

      Select your choice then hit ENTER. The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored:

      Output

      IMPORTANT NOTES:
       - Congratulations! Your certificate and chain have been saved at:
         /etc/letsencrypt/live/your_domain/fullchain.pem
         Your key file has been saved at:
         /etc/letsencrypt/live/your_domain/privkey.pem
         Your cert will expire on 2018-07-23. To obtain a new or tweaked version
         of this certificate in the future, simply run certbot again with the
         "certonly" option. To non-interactively renew *all* of your
         certificates, run "certbot renew"
       - Your account credentials have been saved in your Certbot configuration
         directory at /etc/letsencrypt. You should make a secure backup of this
         folder now. This configuration directory will also contain certificates
         and private keys obtained by Certbot so making regular backups of this
         folder is ideal.
       - If you like Certbot, please consider supporting our work by:
         Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
         Donating to EFF:                    https://eff.org/donate-le

      If you followed the Nginx installation instructions in the prerequisites, you will no longer need the redundant HTTP profile allowance:

      • sudo ufw delete allow 'Nginx HTTP'

      To verify the configuration, navigate once again to your domain, using https://:

      https://your_domain
      

      You should see your application output once again, along with your browser's security indicator, which should indicate that the site is secured.

      Conclusion

      In this guide, you created and secured a simple Flask application within a Python virtual environment. You created a WSGI entry point so that any WSGI-capable application server can interface with it, and then configured the uWSGI app server to provide this function. Afterwards, you created a systemd service file to automatically launch the application server on boot. You also created an Nginx server block that passes web client traffic to the application server, relaying external requests, and secured traffic to your server with Let's Encrypt.

      Flask is a very simple but extremely flexible framework meant to provide your applications with functionality without being too restrictive about structure and design. You can use the general stack described in this guide to serve the Flask applications that you design.




      How To Set Up the Eclipse Theia Cloud IDE Platform on DigitalOcean Kubernetes


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      With developer tools moving to the cloud, creation and adoption of cloud IDE (Integrated Development Environment) platforms is growing. Cloud IDEs are accessible from every type of modern device through web browsers, and they offer numerous advantages for real-time collaboration scenarios. Working in a cloud IDE provides a unified development and testing environment for you and your team, while minimizing platform incompatibilities. Because they are natively based on cloud technologies, they are able to make use of the cluster to achieve tasks, which can greatly exceed the power and reliability of a single development computer.

      Eclipse Theia is an extensible cloud IDE running on a remote server and accessible from a web browser. Visually, it’s designed to look and behave similarly to Microsoft Visual Studio Code, which means that it supports many programming languages, has a flexible layout, and has an integrated terminal. What separates Eclipse Theia from other cloud IDE software is its extensibility; it can be modified using custom extensions, which allow you to craft a cloud IDE suited to your needs.

      In this tutorial, you will set up the default version of the Eclipse Theia cloud IDE platform on your DigitalOcean Kubernetes cluster and expose it at your domain, secured with Let’s Encrypt certificates and requiring the visitor to authenticate. In the end, you’ll have Eclipse Theia running on your Kubernetes cluster available via HTTPS and requiring the visitor to log in.

      Prerequisites

      • A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. Instructions on how to configure kubectl are shown in the Connect to your Cluster step when you create your cluster. To create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete Steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

      • The Nginx Ingress Controller and Cert Manager installed on your cluster using Helm in order to expose Eclipse Theia using Ingress Resources. To do this, follow How to Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm.

      • A fully registered domain name to host Eclipse Theia. This tutorial will use theia.your_domain throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

      Step 1 — Installing and Exposing Eclipse Theia

      To begin, you'll install Eclipse Theia to your DigitalOcean Kubernetes cluster. Then, you will expose it at your desired domain using an Nginx Ingress.

      Since you created two example deployments and an Ingress Resource as part of the prerequisites, you can freely delete them by running the following commands:

      • kubectl delete -f hello-kubernetes-ingress.yaml
      • kubectl delete -f hello-kubernetes-first.yaml
      • kubectl delete -f hello-kubernetes-second.yaml

      For this tutorial, you'll store the deployment configuration on your local machine, in a file named eclipse-theia.yaml. Create and open it using the following command:

      • nano eclipse-theia.yaml

      Add the following lines to the file:

      eclipse-theia.yaml

      apiVersion: v1
      kind: Namespace
      metadata:
        name: theia
      ---
      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      metadata:
        name: theia-next
        namespace: theia
        annotations:
          kubernetes.io/ingress.class: nginx
      spec:
        rules:
        - host: theia.your_domain
          http:
            paths:
            - backend:
                serviceName: theia-next
                servicePort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: theia-next
        namespace: theia
      spec:
        ports:
        - port: 80
          targetPort: 3000
        selector:
          app: theia-next
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          app: theia-next
        name: theia-next
        namespace: theia
      spec:
        selector:
          matchLabels:
            app: theia-next
        replicas: 1
        template:
          metadata:
            labels:
              app: theia-next
          spec:
            containers:
            - image: theiaide/theia:next
              imagePullPolicy: Always
              name: theia-next
              ports:
              - containerPort: 3000
      

      This configuration defines a Namespace, a Deployment, a Service, and an Ingress. The Namespace is called theia and will contain all Kubernetes objects related to Eclipse Theia, separated from the rest of the cluster. The Deployment consists of one instance of the theiaide/theia:next Docker image with the port 3000 exposed on the container. The Service looks for the Deployment and remaps the container port to the usual HTTP port, 80, allowing in-cluster access to Eclipse Theia.

      The Ingress contains a rule to serve the Service at port 80 externally at your desired domain. In its annotations, you specify that the Nginx Ingress Controller should be used for request processing. Remember to replace theia.your_domain with your desired domain that you've pointed to your cluster's Load Balancer.

      Save and exit the file.

      Then, create the configuration in Kubernetes by running the following command:

      • kubectl create -f eclipse-theia.yaml

      You’ll see the following output:

      Output

      namespace/theia created
      ingress.networking.k8s.io/theia-next created
      service/theia-next created
      deployment.apps/theia-next created

      You can watch the Eclipse Theia pod creation by running:

      • kubectl get pods -w -n theia

      The output will look like this:

      Output

      NAME                          READY   STATUS              RESTARTS   AGE
      theia-next-847d8c8b49-jt9bc   0/1     ContainerCreating   0          22s

      After some time, the status will turn to Running, which means you've successfully installed Eclipse Theia to your cluster.
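
      If you would rather verify the Deployment from code than with kubectl, a minimal sketch using the official kubernetes Python client (an extra dependency, installable with pip install kubernetes, that reads the same kubeconfig kubectl uses) could look like this:

      from kubernetes import client, config

      config.load_kube_config()  # loads the kubectl default kubeconfig
      apps = client.AppsV1Api()
      for deployment in apps.list_namespaced_deployment("theia").items:
          # expect: theia-next 1 replica(s) ready
          print(deployment.metadata.name,
                deployment.status.ready_replicas, "replica(s) ready")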

      Navigate to your domain in your browser. You’ll see the default Eclipse Theia editor GUI.

      The default Eclipse Theia editor GUI

      You’ve deployed Eclipse Theia to your DigitalOcean Kubernetes cluster and exposed it at your desired domain with an Ingress. Next, you’ll secure access to your Eclipse Theia deployment by enabling login authentication.

      Step 2 — Enabling Login Authentication For Your Domain

      In this step, you’ll enable username and password authentication for your Eclipse Theia deployment. You’ll achieve this by first curating a list of valid login combinations using the htpasswd utility. Then, you’ll create a Kubernetes secret containing that list and configure the Ingress to authenticate visitors according to it. In the end, your domain will only be accessible when the visitor inputs a valid username and password combination. This will prevent guests and other unwanted visitors from accessing Eclipse Theia.

      The htpasswd utility comes from the Apache web server and is used for creating files that store lists of login combinations. The format of htpasswd files is one username:hashed_password combination per line, which is the format the Nginx Ingress Controller expects the list to conform to.
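
      If you prefer generating entries programmatically, the passlib package (an assumption here, not something this tutorial installs; pip install passlib) can write the same username:hashed_password format:

      from passlib.apache import HtpasswdFile

      # creates ./auth; HtpasswdFile("auth") would load an existing file instead
      ht = HtpasswdFile("auth", new=True)
      ht.set_password("sammy", "a_strong_password")  # placeholder credentials
      ht.save()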

      Start by installing htpasswd on your system by running the following command:

      • sudo apt install apache2-utils -y

      You’ll store the list in a file called auth. Create it by running:

      This file needs to be named auth because the Nginx Ingress Controller expects the secret to contain a key called data.auth. If it’s missing, the controller will return HTTP 503 Service Unavailable status.

      Add a username and password combination to auth by running the following command:

      • htpasswd auth username

      Remember to replace username with your desired username. You’ll be asked for an accompanying password and the combination will be added into the auth file. You can repeat this command for as many users as you wish to add.

      Note: If the system you are working on does not have htpasswd installed, you can use a Dockerized version instead.

      You’ll need to have Docker installed on your machine. For instructions on how to do so, visit the official docs.

      Run the following command to run a dockerized version:

      • docker run --rm -it httpd htpasswd -n <username>

      Remember to replace <username> with the username you want to use. You’ll be asked for a password. The hashed login combination will be written out on the console, and you’ll need to manually add it to the end of the auth file. Repeat this process for as many logins as you wish to add.

      When you are done, create a new secret in Kubernetes with the contents of the file by running the following command:

      • kubectl create secret generic theia-basic-auth --from-file=auth -n theia

      You can see the secret with:

      • kubectl get secret theia-basic-auth -o yaml -n theia

      The output will look like:

      Output

      apiVersion: v1
      data:
        auth: c2FtbXk6JGFwcjEkVFMuSDdyRWwkaFNSNWxPbkc0OEhncmpGZVFyMzEyLgo=
      kind: Secret
      metadata:
        creationTimestamp: "..."
        name: theia-basic-auth
        namespace: theia
        resourceVersion: "10900"
        selfLink: /api/v1/namespaces/theia/secrets/theia-basic-auth
        uid: 050767b9-8823-4fd3-b498-5f11074f768b
      type: Opaque
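
      The auth value is simply the file's contents, base64-encoded. You can confirm this by decoding it with Python's standard library:

      import base64

      encoded = "c2FtbXk6JGFwcjEkVFMuSDdyRWwkaFNSNWxPbkc0OEhncmpGZVFyMzEyLgo="
      # prints the original username:hashed_password line, e.g. sammy:$apr1$...
      print(base64.b64decode(encoded).decode())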

      Next, you’ll need to edit the Ingress to make it use the secret. Open the deployment configuration for editing:

      Add the highlighted lines to your file:

      eclipse-theia.yaml

      apiVersion: v1
      kind: Namespace
      metadata:
        name: theia
      ---
      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      metadata:
        name: theia-next
        namespace: theia
        annotations:
          kubernetes.io/ingress.class: nginx
          nginx.ingress.kubernetes.io/auth-type: basic
          nginx.ingress.kubernetes.io/auth-secret: theia-basic-auth
          nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - Eclipse Theia'
      spec:
        rules:
        - host: theia.your_domain
          http:
            paths:
            - backend:
                serviceName: theia-next
                servicePort: 80
      ...
      

      First, in the auth-type annotation, you specify that the authentication type is basic. This means that Nginx will require the user to type in a username and password. Then, in auth-secret, you specify that the secret that contains the list of valid combinations is theia-basic-auth, which you’ve just created. The remaining auth-realm annotation specifies a message that will be shown to the user as an explanation of why authentication is required. You can change the message contained in this field to your liking.

      Save and close the file.

      To propagate the changes to your cluster, run the following command:

      • kubectl apply -f eclipse-theia.yaml

      You’ll see the output:

      Output

      namespace/theia unchanged
      ingress.networking.k8s.io/theia-next configured
      service/theia-next unchanged
      deployment.apps/theia-next unchanged

      Navigate to your domain in your browser, where you’ll now be asked to log in.
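
      You can check the same behavior from code with the requests package (an assumption; install it with pip if needed). Without credentials the Ingress should answer 401, and with a valid combination it should answer 200; the username and password below are placeholders:

      import requests

      url = "http://theia.your_domain"
      print(requests.get(url).status_code)  # 401: authentication required
      print(requests.get(url, auth=("sammy", "your_password")).status_code)  # 200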

      You’ve enabled basic login authentication on your Ingress by configuring it to use the secret containing the hashed username and password combinations. In the next step, you’ll secure access further by adding TLS certificates, so that the traffic between you and your Eclipse Theia deployment stays encrypted.

      Step 3 — Applying Let’s Encrypt HTTPS Certificates

      Next you will secure your Eclipse Theia installation by applying Let’s Encrypt certificates to your Ingress, which Cert-Manager will automatically provision. After completing this step, your Eclipse Theia installation will be accessible via HTTPS.

      Open eclipse-theia.yaml for editing:

      • nano eclipse-theia.yaml

      Add the highlighted lines to your file, making sure to replace the placeholder domain with your own:

      eclipse-theia.yaml

      apiVersion: v1
      kind: Namespace
      metadata:
        name: theia
      ---
      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      metadata:
        name: theia-next
        namespace: theia
        annotations:
          kubernetes.io/ingress.class: nginx
          nginx.ingress.kubernetes.io/auth-type: basic
          nginx.ingress.kubernetes.io/auth-secret: theia-basic-auth
          nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - Eclipse Theia'
          cert-manager.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - theia.your_domain
          secretName: theia-prod
        rules:
        - host: theia.your_domain
          http:
            paths:
            - backend:
                serviceName: theia-next
                servicePort: 80
      ...
      

      First, you specify the letsencrypt-prod ClusterIssuer you created as part of the prerequisites as the issuer that will be used to provision certificates for this Ingress. Then, in the tls section, you specify the exact domain that should be secured, as well as a name for a secret that will be holding those certificates.

      Save and exit the file.

      Apply the changes to your cluster by running the following command:

      • kubectl apply -f eclipse-theia.yaml

      The output will look like:

      Output

      namespace/theia unchanged
      ingress.networking.k8s.io/theia-next configured
      service/theia-next unchanged
      deployment.apps/theia-next unchanged

      It will take a few minutes for the certificates to be provisioned and fully applied. You can track the progress by observing the output of the following command:

      • kubectl describe certificate theia-prod -n theia

      When it finishes, the end of the output will look similar to this:

      Output

      ...
      Events:
        Type    Reason        Age   From          Message
        ----    ------        ----  ----          -------
        Normal  GeneratedKey  42m   cert-manager  Generated a new private key
        Normal  Requested     42m   cert-manager  Created new CertificateRequest resource "theia-prod-3785736528"
        Normal  Issued        42m   cert-manager  Certificate issued successfully

      Refresh your domain in your browser. You’ll see a green padlock shown on the leftmost side of the address bar signifying that the connection is secure.
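
      You can also inspect the served certificate directly. This standard-library sketch connects to your domain and prints the certificate's expiry date; replace theia.your_domain with your own domain:

      import socket
      import ssl

      hostname = "theia.your_domain"
      context = ssl.create_default_context()
      with socket.create_connection((hostname, 443)) as sock:
          with context.wrap_socket(sock, server_hostname=hostname) as tls:
              # e.g. {'notAfter': 'Jul 23 12:00:00 2020 GMT', ...}
              print(tls.getpeercert()["notAfter"])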

      You’ve configured the Ingress to use Let’s Encrypt certificates thus making your Eclipse Theia deployment more secure. Now you can review the default Eclipse Theia user interface.

      Step 4 — Using the Eclipse Theia Interface

      In this section, you’ll explore some of the features of the Eclipse Theia interface.

      On the left-hand side of the IDE, there is a vertical row of four buttons opening the most commonly used features in a side panel.

      Eclipse Theia GUI - Sidepanel

      This bar is customizable so you can move these views to a different order or remove them from the bar. By default, the first view opens the Explorer panel that provides tree-like navigation of the project’s structure. You can manage your folders and files here—creating, deleting, moving, and renaming them as necessary.

      After creating a new file through the File menu, you'll see an empty file open in a new tab. Once saved, you can view the file's name in the Explorer side panel. To create folders, right click on the Explorer sidebar and click New Folder. You can expand a folder by clicking on its name, and you can move files and folders to upper parts of the hierarchy by dragging and dropping them onto a new location.

      Eclipse Theia GUI - New Folder

      The next two options provide access to search and replace functionality. Following them, the next view shows the source control systems your project may be using, such as Git.

      The final view is the debugger option, which provides all the common actions for debugging in the panel. You can save debugging configurations in the launch.json file.

      Debugger View with launch.json open

      The central part of the GUI is your editor, which you can separate by tabs for your code editing. You can change your editing view to a grid system or to side-by-side files. Like all modern IDEs, Eclipse Theia supports syntax highlighting for your code.

      Editor Grid View

      You can gain access to a terminal by typing CTRL+SHIFT+`, or by clicking on Terminal in the upper menu, and selecting New Terminal. The terminal will open in a lower panel and its working directory will be set to the project’s workspace, which contains the files and folders shown in the Explorer side panel.

      Terminal open

      You’ve explored a high-level overview of the Eclipse Theia interface and reviewed some of the most commonly used features.

      Conclusion

      You now have Eclipse Theia, a versatile cloud IDE, installed on your DigitalOcean Kubernetes cluster. You’ve secured it with a free Let’s Encrypt TLS certificate and set up the instance to require a login from the visitor. You can work on your source code and documents with it individually or collaborate with your team. You can also try building your own version of Eclipse Theia if you need additional functionality. For further information on how to do that, visit the Theia docs.




      What is Linode Longview


      Updated by Linode

      Contributed by
      Linode

      Our guide to installing and using Linode Longview.

      Longview is Linode’s system data graphing service. It tracks metrics for CPU, memory, and network bandwidth, both aggregate and per-process, and it provides real-time graphs that can help expose performance problems.

      The Longview client is open source and provides an agent that can be installed on any Linux distribution, including systems not hosted by Linode. However, Linode only offers technical support for CentOS, Debian, and Ubuntu.

      Note

      Longview for Cloud Manager is still being actively developed to reach parity with Linode’s Classic Manager. This guide will be updated as development work continues. See the Cloud Manager’s changelog for the latest information on Cloud Manager releases.

      Note

      Longview does not currently support CentOS 8.

      In this Guide:

      This guide provides an overview of Linode Longview. You will learn how to:

      • Install the Longview client.
      • Interpret Longview's data and graphs.
      • Troubleshoot the Longview client.
      • Uninstall the Longview client.

      Before you Begin

      • In order to monitor and visualize a Linode’s system statistics, you will need to install the Longview agent on your Linode. Have your Linode’s IP address available in order to SSH into the machine and install the Longview agent.

      Install Linode Longview

      In this section, you will create a Longview Client instance in the Linode Cloud Manager and then install the Longview agent on an existing Linode. These steps will enable you to gather and visualize important system statistics for the corresponding Linode.

      Add the Longview Client

      1. Log into the Linode Cloud Manager and click on the Longview link in the sidebar.

      2. Viewing the Longview Clients page, click on the Add a Client link on the top right-hand corner of the page. This will create a Longview Client instance.

        Linode Cloud Manager Longview Clients Page

      3. An entry will appear displaying your Longview Client instance along with its auto-generated label, its current status, installation instructions, and API key. Its status will display as Waiting for data, since you have not yet installed the Longview agent on a running Linode.

        Note

        The displayed curl command will be used in the next section to install the Longview agent on the desired Linode. The long string appended to the url https://lv.linode.com/ is your Longview Client instance’s GUID (globally unique identifier).

        Linode Cloud Manager Longview Clients list

      Install the Longview Agent

      1. Install the Longview agent on the Linode whose system you’d like to monitor and visualize. Open a terminal on your local computer and log into your Linode over SSH. Replace the IP address with your own Linode’s IP address.

        ssh example_user@192.0.2.0
        
      2. Change to the root user.

        su - root
        
      3. Switch back to the Linode Cloud Manager in your browser, copy the Longview Client instance’s curl command, and paste it into your Terminal window. Press Enter to execute the command. The installation will take a few minutes to complete.

        Note

        Ensure you replace the example curl command below with your own Longview Client instance’s GUID.

        curl -s https://lv.linode.com/05AC7F6F-3B10-4039-9DEE09B0CC382A3D | sudo bash
        
      4. Once the installation is complete, verify that the Longview agent is running:

        sudo systemctl status longview
        

        You should see a similar output:

        CentOS:

          
            ● longview.service - SYSV: Longview statistics gathering
              Loaded: loaded (/etc/rc.d/init.d/longview; bad; vendor preset: disabled)
              Active: active (running) since Tue 2019-12-10 22:35:11 UTC; 40s ago
                Docs: man:systemd-sysv-generator(8)
              CGroup: /system.slice/longview.service
                      └─12202 linode-longview
        
        Dec 10 22:35:11 li322-60.members.linode.com systemd[1]: Starting SYSV: Longview statistics gathering...
        Dec 10 22:35:11 li322-60.members.linode.com longview[12198]: Starting longview: [  OK  ]
        Dec 10 22:35:11 li322-60.members.linode.com systemd[1]: Started SYSV: Longview statistics gathering.
        
        

        Debian or Ubuntu:

          
        ● longview.service - LSB: Longview Monitoring Agent
           Loaded: loaded (/etc/init.d/longview; generated; vendor preset: enabled)
           Active: active (running) since Mon 2019-12-09 21:55:39 UTC; 2s ago
             Docs: man:systemd-sysv-generator(8)
          Process: 2997 ExecStart=/etc/init.d/longview start (code=exited, status=0/SUCCESS)
            Tasks: 1 (limit: 4915)
           CGroup: /system.slice/longview.service
                   └─3001 linode-longview
               
        

        If the Longview agent is not running, start it with the following command:

        sudo systemctl start longview
        

        Your output should resemble the example output above.

      5. Switch back to the Linode Cloud Manager’s Longview Clients page in your browser and observe your Longview client’s quick view metrics and graph.

        Note

        It can take several minutes for data to load and display in the Cloud Manager but once it does, you’ll see the graphs and charts populating with your Linode’s metrics.

        Linode Cloud Manager Longview Clients Overview metrics

      Manually Install the Longview Agent with yum or apt

      It’s also possible to manually install Longview for CentOS, Debian, and Ubuntu. You should only need to manually install it if the instructions in the previous section failed.

      1. Before completing the steps below, ensure you have added a Longview Client instance using the Cloud Manager.

      2. Add a configuration file to store the repository information for the Longview agent:

        CentOS:

        Using the text editor of your choice, like nano, create a .repo file and copy the contents of the example file below. Replace REV in the repository URL with your CentOS version (e.g., 8). If unsure, you can find your CentOS version number with cat /etc/redhat-release.

        /etc/yum.repos.d/longview.repo
        
        [longview]
        name=Longview Repo
        baseurl=https://yum-longview.linode.com/centos/REV/noarch/
        enabled=1
        gpgcheck=1

        Debian or Ubuntu:

        Find the codename of the distribution running on your Linode.

        root@localhost:~# lsb_release -sc
        stretch
        

        Using the text editor of your choice, like nano, create a custom sources file that includes Longview’s Debian repository and the Debian distribution codename. In the command below, replace stretch with the output of the previous step.

        /etc/apt/sources.list.d/longview.list
        
        deb http://apt-longview.linode.com/ stretch main
      3. Download the repository’s GPG key and import or move it to the correct location:

        CentOS:

        sudo curl -O https://yum-longview.linode.com/linode.key
        sudo rpm --import linode.key
        

        Debian or Ubuntu:

        sudo curl -O https://apt-longview.linode.com/linode.gpg
        sudo mv linode.gpg /etc/apt/trusted.gpg.d/linode.gpg
        
      4. Create a directory for the API key:

        sudo mkdir /etc/linode/
        
      5. Copy the API key from the Installation tab of your Longview client’s detailed view in the Linode Cloud Manager. Put the key into a file, replacing the key in the command below with your own.

        echo '266096EE-CDBA-0EBB-23D067749E27B9ED' | sudo tee /etc/linode/longview.key
        
      6. Install Longview:

        CentOS:

        sudo yum install linode-longview
        

        Debian or Ubuntu:

        sudo apt-get update
        sudo apt-get install linode-longview
        
      7. Once the installation is complete, verify that the Longview agent is running:

        sudo systemctl status longview
        

        You should see a similar output:

        CentOS:

          
        ● longview.service - SYSV: Longview statistics gathering
           Loaded: loaded (/etc/rc.d/init.d/longview; bad; vendor preset: disabled)
           Active: active (running) since Tue 2019-12-10 22:35:11 UTC; 40s ago
             Docs: man:systemd-sysv-generator(8)
           CGroup: /system.slice/longview.service
                   └─12202 linode-longview
        
        Dec 10 22:35:11 li322-60.members.linode.com systemd[1]: Starting SYSV: Longview statistics gathering...
        Dec 10 22:35:11 li322-60.members.linode.com longview[12198]: Starting longview: [  OK  ]
        Dec 10 22:35:11 li322-60.members.linode.com systemd[1]: Started SYSV: Longview statistics gathering.
            
        

        Debian or Ubuntu:

          
        ● longview.service - LSB: Longview Monitoring Agent
           Loaded: loaded (/etc/init.d/longview; generated; vendor preset: enabled)
           Active: active (running) since Mon 2019-12-09 21:55:39 UTC; 2s ago
             Docs: man:systemd-sysv-generator(8)
          Process: 2997 ExecStart=/etc/init.d/longview start (code=exited, status=0/SUCCESS)
            Tasks: 1 (limit: 4915)
           CGroup: /system.slice/longview.service
                   └─3001 linode-longview
              
        

        If the Longview client is not running, start it with the following command:

        sudo systemctl start longview
        

        Your output should resemble the example output above.

      8. Switch back to the Linode Cloud Manager’s Longview Clients page in your browser and observe your Longview client’s quick view metrics and graph.

        Note

        It can take several minutes for data to load and display in the Cloud Manager but once it does, you’ll see the graphs and charts populating with your Linode’s metrics.

        Linode Cloud Manager Longview Clients Overview metrics

      Longview’s Data Explained

      This section will provide an overview of the data and graphs available to you in the Longview Client’s detailed view.

      Access your Longview Client’s Detailed View

      1. To view a Longview Client’s detailed graphs and metrics, log into the Linode Cloud Manager and click on the Longview link in the sidebar.

        Access Longview in the Cloud Manager

      2. Viewing the Longview Clients listing page, click on the View Details button corresponding to the client whose Linode’s system statistics you’d like to view.

        View details for your Longview Client instance

      3. You will be brought to your Longview client’s Overview tab where you can view all the data and graphs corresponding to your Linode.

        To learn more about the Data available in a Longview Client’s Overview page, see the Overview section.

      4. From here you can click on any of your Longview Client instance’s tabs to view more related information.

        Note

        If your Linode has NGINX, Apache, or MySQL installed you will see a corresponding tab appear containing related system data.

      Overview

      The Overview tab shows all of your system’s most important statistics in one place. You can hover your cursor over any of your graphs in the Resource Allocation History section to view details about specific data points.

      Cloud Manager Longview Client's overview page

      1. Basic information about the system, including the operating system name and version, processor speed, uptime, and available updates. This area also includes your system’s top active processes.
      2. The time resolution for the graphs displayed in the Resource Allocation History section. The available options are Past 30 Minutes and Past 12 hours.
      3. Percentage of CPU time spent in wait (on disk), in user space, and in kernel space.
      4. Total amount of RAM being used, as well as the amount of memory in cache, in buffers, and in swap.
      5. Amount of network data that has been transferred to and from your system.
      6. Disk I/O. This is the amount of data being read from, or written to, the system’s disk storage.
      7. Average CPU load.
      8. Listening network services along with their related process, owner, protocol, port, and IP.
      9. A list of current active connections to the Linode.

      Installation

      The Installation tab provides quick instructions on how to install the Longview agent on your Linode and also displays the Longview client instance’s API key.

      Longview's installation tab

      Longview Plan Details

      Longview Free updates every 5 minutes and provides twelve hours of data history. Longview Pro gives you data resolution at 60 second intervals, and you can view a complete history of your Linode's data instead of only the previous twelve hours.

      Note

      Longview Pro is not yet available in the Linode Cloud Manager. Longview for Cloud Manager is still being actively developed to reach parity with Linode’s Classic Manager. This guide will be updated as development work continues. See the Cloud Manager’s changelog for the latest information on Cloud Manager releases.

      Troubleshooting

      If you’re experiencing problems with the Longview client, follow these steps to help determine the cause.

      Basic Diagnostics

      Ensure that:

      1. Your system is fully updated.

        Note

        Longview requires Perl 5.8 or later.

      2. The Longview client is running. You can verify with one of the two commands below, depending on your distribution’s initialization system:

        CentOS, Debian, and Ubuntu

        sudo systemctl status longview   # For distributions with systemd.
        

        Other Distributions

        sudo service longview status     # For distributions without systemd.
        

        If the Longview client is not running, start it with one of the following commands, depending on your distribution’s init system:

        CentOS, Debian, and Ubuntu

        sudo systemctl start longview
        

        Other Distributions

        sudo service longview start
        

        If the service fails to start, check Longview’s log for errors. The log file is located in /var/log/linode/longview.log.

      Debug Mode

      Restart the Longview client in debug mode for increased logging verbosity.

      1. First stop the Longview client:

        CentOS, Debian, and Ubuntu

        sudo systemctl stop longview   # For distributions with systemd.
        

        Other Distributions

        sudo service longview stop     # For distributions without systemd.
        
      2. Then restart Longview with the debug flag:

        sudo /etc/init.d/longview debug
        
      3. When you’re finished collecting information, repeat the first two steps to stop Longview and restart it again without the debug flag.

        If Longview does not close properly, find the process ID and kill the process:

        ps aux | grep longview
        sudo kill $PID
        

      Firewall Rules

      If your Linode has a firewall, it must allow communication with Longview’s aggregation host at longview.linode.com (IPv4: 96.126.119.66). You can view your firewall rules with one of the commands below, depending on the firewall controller used by your Linux distribution:

      firewalld

      sudo firewall-cmd --list-all
      

      iptables

      sudo iptables -S
      

      ufw

      sudo ufw show added
      

      If the output of those commands shows no rules for the Longview domain (or for 96.126.119.66, the Longview domain's IP address), you must add them. A sample iptables rule that allows outbound HTTPS traffic to Longview would be the following:

      iptables -A OUTPUT -p tcp --dport 443 -d longview.linode.com -j ACCEPT
      

      Note

      If you use iptables, you should also make sure to persist any of your firewall rule changes. Otherwise, your changes will not be enforced if your Linode is rebooted. Review the iptables-persistent section of our iptables guide for help with this.
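
      Whichever firewall you use, a quick way to confirm that the Linode can reach the aggregation host at all is a short Python connectivity test:

      import socket

      try:
          # Longview posts its data to longview.linode.com over HTTPS (port 443)
          socket.create_connection(("longview.linode.com", 443), timeout=5).close()
          print("outbound connection OK")
      except OSError as err:
          print("blocked or unreachable:", err)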

      Verify API key

      The API key given in the Linode Cloud Manager should match that on your system in /etc/linode/longview.key.

      1. In the Linode Cloud Manager, the API key is located in the Installation tab of your Longview Client instance’s detailed view.

      2. SSH into your Linode. The Longview key is located at /etc/linode/longview.key. Use cat to view the contents of that file and compare it to what’s shown in the Linode Cloud Manager:

        cat /etc/linode/longview.key
        

        The two should be the same. If they are not, paste the key from the Linode Cloud Manager into longview.key, overwriting anything already there.

      Cloned Keys

      If you clone a Linode which has Longview installed, you may encounter the following error:

        
      Multiple clients appear to be posting data with this API key. Please check your clients' configuration.
      
      

      This is caused by both Linodes posting data using the same Longview key. To resolve it:

      1. Uninstall the Longview agent on the cloned system.

        CentOS:

        sudo yum remove linode-longview
        

        Debian or Ubuntu:

        sudo apt-get remove linode-longview
        

        Other Distributions:

        sudo rm -rf /opt/linode/longview
        
      2. Add a new Linode Longview Client instance. This will create a new Longview API key independent from the system which it was cloned from.

        Note

        The GUID provided in the Longview Client’s installation URL is not the same as the Longview API key.

      3. Install the Longview Agent on the cloned Linode.

      Contact Support

      If you still need assistance after performing these checks, please open a support ticket.

      Uninstall the Longview Client

      1. Log into the Linode Cloud Manager and click on the Longview link in the sidebar.

        Access Longview in the Cloud Manager

      2. Click the ellipsis button corresponding to the Longview Client instance you’d like to remove and select delete.

        Delete your Longview Client

      3. Next, remove the Longview agent from the operating system you want to stop monitoring. SSH into your Linode.

        ssh example_user@192.0.2.0
        
      4. Remove the linode-longview package with the command appropriate for your Linux distribution.

        CentOS:

        sudo yum remove linode-longview
        

        Debian or Ubuntu:

        sudo apt-get remove linode-longview
        

        Other Distributions:

        sudo rm -rf /opt/linode/longview
        



