
      How To Display Data from the DigitalOcean API with Django


      The author selected the Mozilla Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      As demand for full-stack development continues to grow, web frameworks are making development workflows less cumbersome and more efficient; Django is one of those frameworks. Django has been used in major websites such as Mozilla, Pinterest, and Instagram. Unlike Flask, a minimalist micro-framework, the Django PyPI package includes everything you need for full-stack development; there is no need to set up a separate database or control panel for development.

      One common use case for Django is to display information from APIs (such as Instagram posts or GitHub repositories) in your own websites and web apps. While this is possible with other frameworks, Django’s “batteries included” philosophy means there will be less hassle and fewer packages required to achieve the same result.

      In this tutorial you will build a Django project that displays your DigitalOcean account’s Droplet information using the DigitalOcean v2 API. Specifically, you will create a website that displays a table of Droplets listing each of their IP addresses, IDs, hosting regions, and resources. Your website will use Bulma CSS to style the page so you can focus on development while still having something nice to look at in the end.

      Once you complete this tutorial, you will have a Django project that can produce a webpage that looks like this:

      Template with Table of Droplet Data

      Prerequisites

      Before you begin this guide you’ll need the following:

      • A DigitalOcean account with at least one Droplet, and a personal access token. Make sure to record the token in a safe place; you’ll need it later on in this tutorial.
      • Familiarity with making requests to APIs. For a comprehensive tutorial on working with APIs, take a look at How to Use Web APIs in Python 3.
      • A local Python virtual environment for maintaining dependencies. In this tutorial we’ll use the name do_django_api for our project directory and env for our virtual environment (one way to set this up is sketched after this list).
      • Familiarity with Django’s template logic for rendering pages with API data.
      • Familiarity with Django’s view logic for handling data received from the API and giving it to a template for rendering.
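
      If you have not set up the project directory and virtual environment yet, one way to do it (a sketch assuming Python 3 and its built-in venv module are available) is:

      • mkdir do_django_api && cd do_django_api
      • python3 -m venv env
      • source env/bin/activate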

      Step 1 — Making a Basic Django Project

      From within the virtual environment env, install Django:

      • pip install django

      Now you can start a Django project and run some initial setup commands.

      Use django-admin startproject <name> to create a subdirectory in the project folder named after your Django project, then switch to that directory.

      • django-admin startproject do_django_project
      • cd do_django_project

      Once it’s created, inside this subdirectory, you will find manage.py, which is the usual way to interact with Django and run your project. Use migrate to update Django’s development database:

      • python3 manage.py migrate

      You’ll see output that looks like this as the database updates:

      Output

      Operations to perform:
        Apply all migrations: admin, auth, contenttypes, sessions
      Running migrations:
        Applying contenttypes.0001_initial... OK
        Applying auth.0001_initial... OK
        Applying admin.0001_initial... OK
        Applying admin.0002_logentry_remove_auto_add... OK
        Applying admin.0003_logentry_add_action_flag_choices... OK
        Applying contenttypes.0002_remove_content_type_name... OK
        Applying auth.0002_alter_permission_name_max_length... OK
        Applying auth.0003_alter_user_email_max_length... OK
        Applying auth.0004_alter_user_username_opts... OK
        Applying auth.0005_alter_user_last_login_null... OK
        Applying auth.0006_require_contenttypes_0002... OK
        Applying auth.0007_alter_validators_add_error_messages... OK
        Applying auth.0008_alter_user_username_max_length... OK
        Applying auth.0009_alter_user_last_name_max_length... OK
        Applying auth.0010_alter_group_name_max_length... OK
        Applying auth.0011_update_proxy_permissions... OK
        Applying sessions.0001_initial... OK

      Next, use the runserver command to run the project so you can test it out:

      • python3 manage.py runserver

      The output will look like this as the server starts:

      Output

      Watching for file changes with StatReloader
      Performing system checks...

      System check identified no issues (0 silenced).
      September 22, 2019 - 22:57:07
      Django version 2.2.5, using settings 'do_django_project.settings'
      Starting development server at http://127.0.0.1:8000/
      Quit the server with CONTROL-C.

      You now have a basic Django project and a development server running. To view your running development server, visit 127.0.0.1:8000 in a browser. It will display the Django startup page:

      Generic Django Start-Page

      Next you’ll create a Django app and configure your project to run a view from that app so you’ll see something more interesting than the default page.

      Step 2 — Making a Basic Django App

      In this step, you’ll create the skeleton of the app that will hold your Droplet results. You’ll come back to this app later once you’ve set up the API call to populate it with data.

      Make sure you’re in the do_django_project directory, and create a Django app using the following command:

      • python3 manage.py startapp display_droplets

      Now you need to add the new app to INSTALLED_APPS in the settings.py file so Django will recognize it. settings.py is a Django configuration file located inside another subdirectory of the Django project; that subdirectory has the same name as the project folder (do_django_project). Django created both folders for you. Switch to the do_django_project directory:

      • cd do_django_project

      Edit settings.py in the editor of your choice:

      Add your new app to the INSTALLED_APPS section of the file:

      do_django_api/do_django_project/do_django_project/settings.py

      INSTALLED_APPS = [
          'django.contrib.admin',
          'django.contrib.auth',
          'django.contrib.contenttypes',
          'django.contrib.sessions',
          'django.contrib.messages',
          'django.contrib.staticfiles',
          # The new app
          'display_droplets',
      ]
      

      Save and close the file when you’re done.

      The GetDroplets View

      Next you’ll create GetDroplets, a class-based view, inside the display_droplets app’s views.py file. This view will render the template that displays Droplet data from the API, passing the data in as context. context is a dictionary used to take data from Python code and send it to an HTML template so it can be displayed in a web page.

      Switch to the display_droplets directory:

      • cd ..
      • cd display_droplets

      Open views.py for editing:

      Add the following code to the file:

      do_django_api/do_django_project/display_droplets/views.py

      from django.views.generic import TemplateView
      
      class GetDroplets(TemplateView):
          template_name = 'droplets.html'
          def get_context_data(self, *args, **kwargs):
              pass
      

      Save and close the file.

      Later you will flesh out this view and create the droplets.html file, but first let’s configure urls.py to call this view when you visit the development server root URL (127.0.0.1:8000).

      Switch back to the do_django_project directory:

      • cd ..
      • cd do_django_project

      Open urls.py for editing:

      Add an import statement for GetDroplets, then add an additional path to urlpatterns that will point to the new view.

      do_django_api/do_django_project/do_django_project/urls.py

      from django.contrib import admin
      from django.urls import path
      from display_droplets.views import GetDroplets
      
      urlpatterns = [
          path('admin/', admin.site.urls),
          path('', GetDroplets.as_view(template_name='droplets.html'), name='Droplet View'),
      ]
      

      If you want to make your own custom paths, the first parameter is the URL pattern (such as the admin/ in example.com/admin), the second parameter is the view to call to produce the web page, and the third is just a name for the path.
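
      For example, if you wanted the Droplet table to live at a /droplets/ URL instead of the site root, a mapping like the following would work (the droplets/ prefix is a hypothetical choice, not something used elsewhere in this tutorial):

      path('droplets/', GetDroplets.as_view(template_name='droplets.html'), name='Droplet View'),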

      Save and close the file.

      Droplets Template

      Next you’ll be working with templates. Templates are HTML files that Django uses to create web pages. In this case, you’ll use a template to construct an HTML page that displays the API data.

      Switch back to the display_droplets directory:

      • cd ..
      • cd display_droplets

      Inside this directory, create a template folder and switch to that directory:

      • mkdir templates
      • cd templates

      Create droplets.html and open it for editing:

      To avoid having to write any sort of CSS for this project, we’ll use Bulma CSS because it’s a free and lightweight CSS framework that allows you to create clean-looking web pages just by adding a few class attributes to the HTML.

      Now let’s create a template with a basic navigation bar. Add the following code to the droplets.html file.

      do_django_api/do_django_project/display_droplets/templates/droplets.html

      <!DOCTYPE html>
      <html lang="en">
      <head>
          <meta charset="UTF-8">
          <title>DigitalOcean Droplets</title>
          <link crossorigin="anonymous"
                href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.4/css/bulma.min.css"
                integrity="sha256-8B1OaG0zT7uYA572S2xOxWACq9NXYPQ+U5kHPV1bJN4="
                rel="stylesheet"/>
          <link rel="shortcut icon" type="image/png" href="https://assets.digitalocean.com/logos/favicon.png"/>
      </head>
      <body>
      <nav aria-label="main navigation" class="navbar is-light" role="navigation">
          <div class="navbar-brand">
              <div class="navbar-item">
                  <img alt="DigitalOcean" src="https://assets.digitalocean.com/logos/DO_Logo_icon_blue.png"
                       style="margin-right: 0.5em;">Droplets
              </div>
          </div>
      </nav>
      </body>
      </html>
      

      Save and close the file.

      This code imports Bulma into boilerplate HTML and creates a nav bar displaying “Droplets.”

      Refresh your browser tab to view the changes you made to the template.

      Template with Basic Header

      So far you haven’t touched anything related to APIs; you’ve created a foundation for the project. Next you’ll put this page to good use by making an API call and presenting the Droplet data.

      Step 3 — Making The API Call

      In this step, you’ll set up an API call and send the Droplet data as context to the template to display in a table.

      Getting Droplet Data

      Navigate back to the display_droplets app directory:

      Install the requests library so you can talk to the API:

      • pip install requests

      The requests library enables your code to request data from APIs and add headers (additional data sent along with our request).

      Next, you’ll create a services.py file, which is where you’ll make the API call. The function in it will use requests to talk to https://api.digitalocean.com/v2/droplets and append each Droplet in the returned JSON to a list.

      Open services.py for editing:

      Add the following code to the file:

      do_django_api/do_django_project/display_droplets/services.py

      import os
      import requests
      
      def get_droplets():
          url = 'https://api.digitalocean.com/v2/droplets'
          r = requests.get(url, headers={'Authorization':'Bearer %s' % 'access_token'})
          droplets = r.json()
          droplet_list = []
          for i in range(len(droplets['droplets'])):
              droplet_list.append(droplets['droplets'][i])
          return droplet_list
      

      Inside the get_droplets function, two things occur: a request is made and the data is parsed. url contains the URL that requests Droplet data from the DigitalOcean API. r stores the response.

      requests.get takes two arguments here: url and headers. If you wanted data from a different API, you’d replace the url value with the appropriate URL. headers sends DigitalOcean your access token, so it knows that you’re allowed to make the request and which account the request is being made for.

      droplets contains the same information as r, but converted from JSON, the format the API responds with, into a dictionary that is easy to use in a for loop.

      The next three lines create an empty list, droplet_list. Then a for loop iterates over droplets['droplets'] and appends each Droplet to the list. All of the information taken from the API and stored in droplets can be found in DigitalOcean’s Developer Docs.
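
      For reference, each entry in droplet_list is a dictionary of Droplet attributes. An abbreviated, illustrative sketch of the fields this tutorial uses (the values are placeholders, and many fields are omitted) looks like this:

      {
          "name": "example-droplet",
          "id": 12345,
          "memory": 1024,
          "vcpus": 1,
          "disk": 25,
          "region": {"name": "New York 3"},
          "networks": {"v4": [{"ip_address": "203.0.113.10"}]}
      }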

      Note: Don’t forget to replace access_token with your access token. Also, keep it safe and never publish that token online.

      Save and close the file.

      Protecting Your Access Token

      You should always hide your access token, but if someone else ever wants to run your project, you need an easy way for them to add their own access token without editing Python code. python-dotenv is a solution: variables are kept in a .env file that can be conveniently edited.

      Navigate back to the do_django_project directory:

      To start working with environment variables, install python-dotenv:

      • pip install python-dotenv

      Once it’s installed, you need to configure Django to handle environment variables, so you can reference them in code. To do that, you need to add a few lines of code to manage.py and wsgi.py.

      Open manage.py for editing:

      Add the following code:

      do_django_api/do_django_project/manage.py

      
      """Django's command-line utility for administrative tasks."""
      import os
      import sys
      import dotenv
      
      def main():
          os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'do_django_project.settings')
          try:
              from django.core.management import execute_from_command_line
          except ImportError as exc:
              raise ImportError(
                  "Couldn't import Django. Are you sure it's installed and "
                  "available on your PYTHONPATH environment variable? Did you "
                  "forget to activate a virtual environment?"
              ) from exc
          execute_from_command_line(sys.argv)
      
      dotenv.load_dotenv(
          os.path.join(os.path.dirname(__file__), '.env')
      )

      if __name__ == '__main__':
          main()
      

      Adding this in manage.py means that, when you issue commands to Django in development, load_dotenv runs first and Django can read environment variables from your .env file.

      Save and close the file.

      If you ever need to handle environment variables in your production projects, you can do that from the wsgi.py file. Change to the do_django_project directory:

      And open wsgi.py for editing:

      Add the following code to wsgi.py:

      do_django_api/do_django_project/do_django_project/wsgi.py

      
      import os
      import dotenv
      
      from django.core.wsgi import get_wsgi_application
      
      os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'do_django_project.settings')
      
      dotenv.load_dotenv(
          os.path.join(os.path.dirname(os.path.dirname(__file__)), '.env')
      )
      
      application = get_wsgi_application()
      

      This code snippet has an additional os.path.dirname() because wsgi.py needs to look two directories back to find the .env file: wsgi.py lives in the inner do_django_project configuration folder, while .env lives one level up in the outer do_django_project project folder. This snippet is not the same as the one used for manage.py.

      Save and close the file.

      Now you can use an environment variable in services.py instead of your access token. Switch back to the display_droplets directory:

      • cd ..
      • cd display_droplets

      Open services.py for editing:

      Now replace your access token with an environment variable:

      do_django_api/do_django_project/display_droplets/services.py

      import os
      import requests
      
      def get_droplets():
          url = "https://api.digitalocean.com/v2/droplets"
          r = requests.get(url, headers={'Authorization':'Bearer %s' % os.getenv('DO_ACCESS_TOKEN')})
          droplets = r.json()
          droplet_list = []
          for i in range(len(droplets['droplets'])):
              droplet_list.append(droplets['droplets'][i])
          return droplet_list
      

      Save and close the file.

      The next step is to create a .env file. Switch back to the do_django_project directory:

      Create a .env file and open the file for editing:

      In .env, add your token as the variable DO_ACCESS_TOKEN:

      do_django_api/do_django_project/.env

      DO_ACCESS_TOKEN=access_token
      

      Save and close the file.

      Note: Add .env to your .gitignore file so it is never included in your commits.

      The API connection is now set up and configured, and you’ve protected your access token as well. It’s time to present the information you retrieved to the user.
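
      If you'd like to sanity-check the API call before wiring it into a view, one option is to open the Django shell and call the function directly (this assumes your .env file is in place and your account has at least one Droplet):

      • python3 manage.py shell

      Then, inside the shell, run:

      from display_droplets.services import get_droplets
      get_droplets()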

      Step 4 — Handling Droplet Data in Views and Templates

      Now that you can make API calls, you need to send the Droplet data to the template for rendering. Let’s return to the stub of GetDroplets that you created earlier in views.py. In that view you’ll send droplet_list as context to the droplets.html template.

      Switch to the display_droplets directory:

      Open views.py for editing:

      Add the following code to views.py:

      do_django_api/do_django_project/display_droplets/views.py

      from django.shortcuts import render
      from django.views.generic import TemplateView
      from .services import get_droplets
      
      class GetDroplets(TemplateView):
          template_name = 'droplets.html'
          def get_context_data(self, *args, **kwargs):
              context = {
                  'droplets' : get_droplets(),
              }
              return context
      

      Information sent to the droplets.html template is handled via the context dictionary. This is why droplets acts as a key and the list returned from get_droplets() acts as its value.

      Save and close the file.
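
      If you later want to pass more data to the template, you can add keys to the same dictionary. For example (page_title is a hypothetical extra key, not used elsewhere in this tutorial):

      context = {
          'droplets': get_droplets(),
          'page_title': 'My Droplets',
      }

      You could then reference {{ page_title }} anywhere in droplets.html.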

      Presenting the Data in the Template

      Inside the droplets.html template you’ll create a table and populate it with the droplet data.

      Switch to the templates directory:

      Open droplets.html for editing:

      Add the following code after the nav element in droplets.html:

      do_django_api/do_django_project/display_droplets/templates/droplets.html

      <table class="table is-fullwidth is-striped is-bordered">
          <thead>
          <tr>
              <th>Name</th>
              <th>IPv4 Address(es)</th>
              <th>Id</th>
              <th>Region</th>
              <th>Memory</th>
              <th>CPUs</th>
              <th>Disk Size</th>
          </tr>
          </thead>
          <tbody>
          {% for droplet in droplets %}
          <tr>
              <th>{{ droplet.name }}</th>
              {% for ip in droplet.networks.v4 %}
              <td>{{ ip.ip_address }}</td>
              {% endfor %}
              <td>{{ droplet.id }}</td>
              <td>{{ droplet.region.name }}</td>
              <td>{{ droplet.memory }}</td>
              <td>{{ droplet.vcpus }}</td>
              <td>{{ droplet.disk }}</td>
          </tr>
          {% endfor %}
          </tbody>
      </table>
      

      {% for droplet in droplets %} ... {% endfor %} is a loop that iterates through the list of Droplets retrieved from views.py. Each Droplet is inserted into a table row. The various {{ droplet.<attribute> }} tags retrieve that attribute for each Droplet in the loop and insert it into a table cell.

      Save and close the file.

      Refresh your browser and you will see a list of Droplets.

      Template with Table of Droplet Data

      You can now handle the DigitalOcean API inside your Django projects. You’ve taken the data retrieved from the API and plugged it into the template you created earlier, to display the information in a readable and flexible manner.

      Conclusion

      In this article you built a Django project that can display Droplet information from the DigitalOcean API with Bulma CSS styling. You’ve learned three important skills by following this tutorial:

      • How to handle API requests in Python using the requests library.
      • How to display API data in a Django project using view and template logic.
      • How to safely handle your API tokens using dotenv in Django.

      Now that you’ve gotten an introduction to handling APIs in Django, you can create a project of your own using either another feature from the DigitalOcean API or another API altogether. You can also check out other Django tutorials or a similar tutorial with the React framework.






      How To Create an API Gateway Using Ambassador on DigitalOcean Kubernetes


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Ambassador is an API Gateway for cloud-native applications that routes traffic between heterogeneous services and maintains decentralized workflows. It acts as a single point of entry and supports tasks like service discovery, configuration management, routing rules, and rate limiting. It also provides great flexibility and ease of configuration for your services.

      Envoy is an open-source service proxy designed for cloud-native applications. In Kubernetes, Ambassador can be used to install and manage the Envoy configuration. Ambassador supports zero-downtime configuration changes and integrates with other features like authentication, service discovery, and service meshes.

      In this tutorial, you'll set up an Ambassador API Gateway on a Kubernetes cluster using Helm and configure it to route incoming traffic to various services based on routing rules. You will configure these rules to route traffic based on the hostname or the path to the relevant services.

      Prerequisites

      Before you begin this guide, you'll need the following:

      • A DigitalOcean Kubernetes cluster with kubectl configured. To create a Kubernetes cluster on DigitalOcean, see our Kubernetes Quickstart guide.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. Complete Steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

      • A fully qualified domain name with at least two A records configured. Throughout this tutorial we will use svc1.seu-domínio, svc2.seu-domínio, and svc3.seu-domínio (where seu-domínio stands in for your own domain). You can follow the DNS Quickstart guide to set up your records on DigitalOcean.

      Step 1 — Installing Ambassador

      In this section, you'll install Ambassador on your Kubernetes cluster. Ambassador can be installed either with a Helm chart or by passing a YAML configuration file to the kubectl command.

      Note: DigitalOcean Kubernetes has RBAC enabled by default, so when using a YAML configuration file for the installation you need to make sure you use the RBAC-enabled one. You can find more details on deploying Ambassador to Kubernetes via YAML in the Ambassador documentation.

      For the purposes of this tutorial, you'll use a Helm chart to install Ambassador on your cluster. After completing the prerequisites, you will have Helm installed on your cluster.

      To start, run the following command to install Ambassador via Helm:

      • helm upgrade --install --wait ambassador stable/ambassador

      You will see output similar to the following:

      Output

      Release "ambassador" does not exist. Installing it now. NAME: ambassador LAST DEPLOYED: Tue Jun 18 02:15:00 2019 NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==> v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE ambassador 3/3 3 3 2m39s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE ambassador-7d55c468cb-4gpq9 1/1 Running 0 2m38s ambassador-7d55c468cb-jr9zr 1/1 Running 0 2m38s ambassador-7d55c468cb-zhm7l 1/1 Running 0 2m38s ==> v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ambassador LoadBalancer 10.245.183.114 139.59.52.164 80:30001/TCP,443:31557/TCP 2m40s ambassador-admins ClusterIP 10.245.46.43 <none> 8877/TCP 2m41s ==> v1/ServiceAccount NAME SECRETS AGE ambassador 1 2m43s ==> v1beta1/ClusterRole NAME AGE ambassador 2m41s ==> v1beta1/ClusterRoleBinding NAME AGE ambassador 2m41s ==> v1beta1/CustomResourceDefinition NAME AGE authservices.getambassador.io 2m42s consulresolvers.getambassador.io 2m41s kubernetesendpointresolvers.getambassador.io 2m42s kubernetesserviceresolvers.getambassador.io 2m43s mappings.getambassador.io 2m41s modules.getambassador.io 2m41s ratelimitservices.getambassador.io 2m42s tcpmappings.getambassador.io 2m41s tlscontexts.getambassador.io 2m42s tracingservices.getambassador.io 2m43s . . .

      This will create an Ambassador Deployment, a Service, and a Load Balancer with your Kubernetes cluster nodes attached. You will need the Load Balancer's IP address to map it to your domain's A records.

      To get the IP address of your Ambassador Load Balancer, run the following:

      • kubectl get svc --namespace default ambassador

      You will see output similar to:

      Output

      NAME         TYPE           CLUSTER-IP          EXTERNAL-IP       PORT(S)                      AGE
      ambassador   LoadBalancer   IP-do-seu-cluster   seu-endereço-IP   80:30001/TCP,443:31557/TCP   8m4s

      Note the external IP seu-endereço-IP from this step, and map the domains svc1.seu-domínio, svc2.seu-domínio, and svc3.seu-domínio (through your domain provider) to point to this IP address.

      You can enable HTTPS with your Load Balancer on DigitalOcean using the steps in How to Configure SSL Termination. It is recommended to configure TLS termination through the Load Balancer. Another way to set up TLS termination is to use Ambassador's TLS Support.

      You've installed Ambassador on your Kubernetes cluster using Helm, which created an Ambassador Deployment with three replicas in the default namespace. This also created a Load Balancer with a public IP to route all traffic toward the API Gateway. Next, you'll create Kubernetes Deployments for three different services that you'll use to test this API Gateway.

      Step 2 — Setting Up Web Server Deployments

      In this section, you'll create three Deployments to run three different web server containers. You'll create YAML files with Kubernetes Deployment definitions for the three different web server containers and deploy them using kubectl.

      Open your preferred text editor to create your first Deployment, for an Nginx web server:

      Enter the following YAML configuration into your file:

      svc1-deploy.yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: svc1
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
            name: svc1
        strategy:
          type: RollingUpdate
        template:
          metadata:
            labels:
              app: nginx
              name: svc1
          spec:
            containers:
            - name: nginx
              image: nginx:latest
              ports:
              - name: http
                containerPort: 80
      

      Here you defined a Kubernetes Deployment named svc1, using the nginx:latest container image, deployed with 1 replica. The Deployment is defined to expose port 80 within the cluster.

      Save and close the file.

      Next, run the following command to apply this configuration:

      • kubectl apply -f svc1-deploy.yaml

      You will see output confirming the creation:

      Output

      deployment.extensions/svc1 created

      Now, create a second web server Deployment. Open a file named svc2-deploy.yaml with:

      Enter the following YAML configuration into the file:

      svc2-deploy.yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: svc2
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: httpd
            name: svc2
        strategy:
          type: RollingUpdate
        template:
          metadata:
            labels:
              app: httpd
              name: svc2
          spec:
            containers:
            - name: httpd
              image: httpd:latest
              ports:
              - name: http
                containerPort: 80
      

      Here you defined a Kubernetes Deployment named svc2, using the httpd container image, deployed with 1 replica.

      Save and close the file.

      Run the following command to apply this configuration:

      • kubectl apply -f svc2-deploy.yaml

      You will see this output:

      Output

      deployment.extensions/svc2 created

      Finally, for the third Deployment, create and open the svc3-deploy.yaml file:

      Add the following lines to the file:

      svc3-deploy.yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: svc3
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: httpbin
            name: svc3
        strategy:
          type: RollingUpdate
        template:
          metadata:
            labels:
              app: httpbin
              name: svc3
          spec:
            containers:
            - name: httpbin
              image: kennethreitz/httpbin:latest
              ports:
              - name: http
                containerPort: 80
      

      Here you defined a Kubernetes Deployment named svc3, using the httpbin container image, deployed with 1 replica.

      Save and close the file.

      Finally, run the following command to apply it:

      • kubectl apply -f svc3-deploy.yaml

      And you will see the following output:

      Output

      deployment.extensions/svc3 created

      You have deployed three web server containers using Kubernetes Deployments. In the next step, you will expose these Deployments to internet traffic.
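
      If you want to confirm that all three Deployments are running before you continue, one quick check is:

      • kubectl get deployments

      You should see svc1, svc2, and svc3 listed, each with one available replica.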

      Step 3 — Routing Traffic to Your Services

      In this section, you'll expose your web applications to the internet by creating Kubernetes Services with Ambassador annotations that configure rules for routing traffic to them. Annotations in Kubernetes are a way to add metadata to objects. Ambassador uses these annotation values from the Services to configure its routing rules.

      As a reminder, you need to have your domains (for example: svc1.seu-domínio, svc2.seu-domínio, and svc3.seu-domínio) mapped to the public IP of the Load Balancer in their DNS records.

      Define a Kubernetes Service for the svc1 Deployment with Ambassador annotations by creating and opening this file:

      Note: The mapping name must be unique for each Ambassador annotation block. The mapping acts as an identifier for each annotation block and, if repeated, it will override the older annotation block.

      svc1-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: svc1
        annotations:
          getambassador.io/config: |
            ---
            apiVersion: ambassador/v1
            kind: Mapping
            name: svc1-service_mapping
            host: svc1.seu-domínio
            prefix: /
            service: svc1:80
      spec:
        selector:
          app: nginx
          name: svc1
        ports:
        - name: http
          protocol: TCP
          port: 80
      

      In this YAML, you defined a Kubernetes Service svc1 and used Ambassador annotations to map the hostname svc1.seu-domínio to this Service.

      Save and exit the svc1-service.yaml file, then run the following to apply this configuration:

      • kubectl apply -f svc1-service.yaml

      You will see this output:

      Output

      service/svc1 created

      Create your second Kubernetes Service, for the svc2 Deployment, with Ambassador annotations. This is another example of host-based routing with Ambassador:

      Add the following configuration to the file:

      svc2-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: svc2
        annotations:
          getambassador.io/config: |
            ---
            apiVersion: ambassador/v1
            kind: Mapping
            name: svc2-service_mapping
            host: svc2.seu-domínio
            prefix: /
            service: svc2:80
      spec:
        selector:
          app: httpd
          name: svc2
        ports:
        - name: http
          protocol: TCP
          port: 80
      

      Save this as svc2-service.yaml. Here, you defined another Kubernetes Service using Ambassador annotations to route traffic to svc2 whenever Ambassador receives a request with the host header value svc2.seu-domínio. This host-based routing will therefore let you send a request to the svc2.seu-domínio subdomain, which will forward the traffic to the svc2 Service and serve your request from the httpd web server.

      To create this Service, run the following:

      • kubectl apply -f svc2-service.yaml

      You will see the following output:

      Output

      service/svc2 created
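
      Optionally, once the DNS record for svc2.seu-domínio has propagated, you can sanity-check the host-based rule with curl:

      • curl -I http://svc2.seu-domínio

      A 200 response served by Envoy indicates that Ambassador matched the svc2.seu-domínio host and forwarded the request to the httpd container.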

      Create a third Kubernetes Service, for your svc3 Deployment, and serve it via the svc2.seu-domínio/bin path. This will set up path-based routing for Ambassador:

      svc3-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: svc3
      spec:
        selector:
          app: httpbin
          name: svc3
        ports:
        - name: http
          protocol: TCP
          port: 80
      

      Save this as svc3-service.yaml and run the following to apply the configuration:

      • kubectl apply -f svc3-service.yaml

      Your output will be:

      Output

      service/svc3 created

      Edit svc2-service.yaml to append a second Ambassador annotation block that routes /bin to the svc3 Service:

      svc2-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: svc2
        annotations:
          getambassador.io/config: |
            ---
            apiVersion: ambassador/v1
            kind: Mapping
            name: svc2-service_mapping
            host: svc2.seu-domínio
            prefix: /
            service: svc2:80
            ---
            apiVersion: ambassador/v1
            kind: Mapping
            name: svc3-service_mapping
            host: svc2.seu-domínio
            prefix: /bin
            service: svc3:80
      spec:
        selector:
          app: httpd
          name: svc2
        ports:
        - name: http
          protocol: TCP
          port: 80
      

      You added a second Ambassador annotation block to configure paths beginning with /bin to map to your svc3 Kubernetes Service. To route requests for svc2.seu-domínio/bin to svc3, you added the second annotation block here with the host value svc2.seu-domínio, which is the same in both blocks. Path-based routing will therefore let you send a request to svc2.seu-domínio/bin, which will be received by the svc3 Service and served by the httpbin application in this tutorial.

      Now run the following to apply the changes:

      • kubectl apply -f svc2-service.yaml

      You will see this output:

      Output

      service/svc2 configured
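
      As an optional check of the path-based rule, a request to the /bin prefix on the svc2.seu-domínio host should now be answered by the httpbin application rather than the httpd server:

      • curl -I http://svc2.seu-domínio/bin

      A 200 response here indicates that Ambassador matched the /bin prefix and routed the request to svc3.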

      You have created Kubernetes Services for the three Deployments and added host-based and path-based routing rules with Ambassador annotations. Next, you'll add advanced configuration to these Services to set up routing, redirects, and custom headers.

      Step 4 — Advanced Ambassador Configurations for Routing

      In this section, you will configure the Services with additional Ambassador annotations to modify headers and configure redirects.

      Run curl against the svc1.seu-domínio domain and check the response headers:

      • curl -I svc1.seu-domínio

      Your output will be similar to the following:

      Output

      HTTP/1.1 200 OK
      server: envoy
      date: Mon, 17 Jun 2019 21:41:00 GMT
      content-type: text/html
      content-length: 612
      last-modified: Tue, 21 May 2019 14:23:57 GMT
      etag: "5ce409fd-264"
      accept-ranges: bytes
      x-envoy-upstream-service-time: 0

      This output shows the headers received from the Service routed through Ambassador. You'll add custom headers to your Service response using Ambassador annotations and then validate the output for the newly added headers.

      To add custom headers to your Service response, remove the x-envoy-upstream-service-time header from the response and add a new response header, x-geo-location: Brazil, for svc1. (You can change this header according to your requirements.)

      Edit the svc1-service.yaml file:

      Update the annotation with the following highlighted lines:

      svc1-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: svc1
        annotations:
          getambassador.io/config: |
            ---
            apiVersion: ambassador/v1
            kind: Mapping
            name: svc1-service_mapping
            host: svc1.seu-domínio
            prefix: /
            remove_response_headers:
            - x-envoy-upstream-service-time
            add_response_headers:
              x-geo-location: Brazil
            service: svc1:80
      spec:
        selector:
          app: nginx
          name: svc1
        ports:
        - name: http
          protocol: TCP
          port: 80
      

      Here you modified the svc1 Service to remove x-envoy-upstream-service-time and added the x-geo-location: Brazil header to the HTTP response.

      Apply the changes you made:

      • kubectl apply -f svc1-service.yaml

      You will see the following output:

      Output

      service/svc1 configured

      Now run curl to validate the updated headers in the Service response:

      • curl -I svc1.seu-domínio

      The output will be similar to this:

      Output

      HTTP/1.1 200 OK
      server: envoy
      date: Mon, 17 Jun 2019 21:45:26 GMT
      content-type: text/html
      content-length: 612
      last-modified: Tue, 21 May 2019 14:23:57 GMT
      etag: "5ce409fd-264"
      accept-ranges: bytes
      x-geo-location: Brazil

      Now edit svc3-service.yaml to redirect requests for your svc3.seu-domínio hostname to the svc2.seu-domínio/bin path:

      Append the Ambassador annotation block as shown in the following YAML, and save it:

      svc3-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: svc3
        annotations:
          getambassador.io/config: |
            ---
            apiVersion: ambassador/v1
            kind:  Mapping
            name:  redirect_mapping
            host: svc3.seu-domínio
            prefix: /
            service: svc2.seu-domínio
            host_redirect: true
            path_redirect: /bin
      spec:
        selector:
          app: httpbin
          name: svc3
        ports:
        - name: http
          protocol: TCP
          port: 80
      

      You added host_redirect: true to configure a 301 redirect response from svc3 to svc2.seu-domínio/bin for the svc3.seu-domínio hostname. The host_redirect parameter sends a 301 redirect response to the client. If it is not set, requests will receive HTTP 200 responses instead of HTTP 301 responses.

      Now run the following command to apply these changes:

      • kubectl apply -f svc3-service.yaml

      You will see output similar to:

      Output

      service/svc3 configured

      Now you can check the response for svc3.seu-domínio using curl:

      • curl -I svc3.seu-domínio

      Your output will be similar to the following:

      Output

      HTTP/1.1 301 Moved Permanently
      location: http://svc2.seu-domínio/bin
      date: Mon, 17 Jun 2019 21:52:05 GMT
      server: envoy
      transfer-encoding: chunked

      The output shows the HTTP headers of the response to the request to svc3.seu-domínio, confirming that setting host_redirect: true in the Service annotation correctly returned the HTTP status code 301 Moved Permanently.

      You've configured the Service with Ambassador annotations to modify HTTP headers and set up redirects. Next, you'll add global configuration to the Ambassador API Gateway Service.

      Step 5 — Setting Up Global Ambassador Configurations

      In this section, you'll edit the Ambassador Service to add global GZIP compression configuration. GZIP compression reduces the size of HTTP assets and lowers network bandwidth requirements, leading to faster response times for web clients. This configuration affects all traffic routed through the Ambassador API Gateway. Similarly, you can set up other global modules with Ambassador, which let you enable special behaviors for the service at a global level. These global configurations can be applied using annotations on the Ambassador Service. You can refer to Ambassador's Global Configuration documentation for more information.

      The following kubectl edit command will open the default editor, which is vim. To use nano instead, for example, you can set the KUBE_EDITOR environment variable to nano:

      • export KUBE_EDITOR="nano"

      Edit the Ambassador Service:

      • kubectl edit service ambassador

      Now add the highlighted lines to a new annotation block for GZIP compression:

      Editing Ambassador Service

      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          getambassador.io/config: |
            ---
            apiVersion: ambassador/v1
            kind: Module
            name: ambassador
            config:
              service_port: 8080
            ---
            apiVersion: ambassador/v0
            kind:  Module
            name:  ambassador
            config:
              gzip:
                memory_level: 5
                min_content_length: 256
                compression_level: BEST
                compression_strategy: DEFAULT
                content_type:
                - application/javascript
                - application/json
                - text/html
                - text/plain
                disable_on_etag_header: false
                remove_accept_encoding_header: false
        creationTimestamp: "2019-06-17T20:45:04Z"
        labels:
          app.kubernetes.io/instance: ambassador
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: ambassador
          helm.sh/chart: ambassador-2.8.2
        name: ambassador
        namespace: default
        resourceVersion: "2153"
        . . .
      

      You added the Ambassador annotation block to your Ambassador Service and configured GZIP globally for the API Gateway. Here you included configuration to control the amount of internal memory used with memory_level, which can be a value from 1 to 9. With compression_level set to BEST, you ensure a higher compression ratio at the cost of higher latency. With min_content_length, you set the minimum response length for compression to 256 bytes. For content_type, you specifically included a set of media types (formerly MIME types) that produce compression. Finally, you left the last two settings set to false to allow compression.

      You can read more about GZIP compression on Envoy's GZIP page.

      Any changes to this Service apply as global configurations for the API Gateway.

      After you exit the editor, you will see output similar to the following:

      Output

      service/ambassador edited

      Check svc1.seu-domínio using curl and look for the content-encoding header with the value gzip:

      • curl --compressed -i http://svc1.seu-domínio

      The output will be similar to this:

      Output

      HTTP/1.1 200 OK
      server: envoy
      date: Mon, 17 Jun 2019 22:25:35 GMT
      content-type: text/html
      last-modified: Tue, 21 May 2019 14:23:57 GMT
      accept-ranges: bytes
      x-geo-location: Brazil
      vary: Accept-Encoding
      content-encoding: gzip
      transfer-encoding: chunked

      <!DOCTYPE html>
      <html>
      <head>
      <title>Welcome to nginx!</title>
      <style>
          body {
              width: 35em;
              margin: 0 auto;
              font-family: Tahoma, Verdana, Arial, sans-serif;
          }
      </style>
      </head>
      <body>
      <h1>Welcome to nginx!</h1>
      <p>If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.</p>

      <p>For online documentation and support please refer to
      <a href="http://nginx.org/">nginx.org</a>.<br/>
      Commercial support is available at
      <a href="http://nginx.com/">nginx.com</a>.</p>

      <p><em>Thank you for using nginx.</em></p>
      </body>
      </html>

      Here you can see the default Nginx HTML page with its response headers, showing that the content-encoding of the received response is gzip.

      You've added global configuration to Ambassador to enable GZIP compression for responses with selected content types across the API Gateway.

      Conclusion

      You have successfully set up an API Gateway for your Kubernetes cluster using Ambassador. Now you can expose your applications using host-based and path-based routing, custom headers, and global GZIP compression.

      For more information about Ambassador annotations and configuration parameters, read the official Ambassador documentation.




      Deploy and Manage a Cluster with Linode Kubernetes Engine and the Linode API – A Tutorial


      Updated by Linode Contributed by Linode

      Note

      Linode Kubernetes Engine (LKE) is currently in Private Beta, and you may not have access to LKE through the Cloud Manager or other tools. To request access to the Private Beta, sign up here. Beta access awards you $100/month in free credits for the duration of the beta, which is automatically applied to your account when an LKE cluster is in use. Additionally, you will have access to the Linode Green Light community, a new program connecting beta users with our product and engineering teams.

      Additionally, because LKE is in Beta, there may be breaking changes to how you access and manage LKE. This guide will be updated to reflect these changes if and when they occur.

      What is the Linode Kubernetes Engine (LKE)?

      The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes. When you deploy an LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes. Your LKE cluster’s Master node runs the Kubernetes control plane processes – including the API, scheduler, and resource controllers.

      Additional LKE features

      • etcd Backups: A snapshot of your cluster’s metadata is backed up continuously, so your cluster is automatically restored in the event of a failure.
      • High Availability: All of your control plane components are monitored and will automatically recover if they fail.

      You can deploy an LKE cluster through several Linode-provided interfaces, such as the Cloud Manager or the Linode API (the method used in this guide). These interfaces can be used to create, delete, and update the structural elements of your cluster, including:

      • The number of nodes that make up a cluster’s node pools.
      • The region where your node pools are deployed.
      • The hardware resources for each node in your node pools.
      • The Kubernetes version deployed to your cluster’s Master node and worker nodes.

      The Kubernetes API and kubectl are the primary ways you will interact with your LKE cluster once it’s been created. These tools can be used to manage your Kubernetes workloads: deploy applications, create services, configure storage and networking, define controllers, and inspect and secure the cluster.

      In this Guide

      This guide will cover how to use the Linode API to:

      • Create an LKE cluster
      • Connect kubectl to your LKE cluster
      • Inspect your LKE cluster

      Before You Begin

      1. Familiarize yourself with the Linode Kubernetes Engine service. This information will help you understand the benefits and limitations of LKE.

      2. Create an API Token. You will need this to access the LKE service.

      3. Install kubectl on your computer. You will use kubectl to interact with your cluster once it’s deployed.

      4. If you are new to Kubernetes, refer to our A Beginner’s Guide to Kubernetes series to learn about general Kubernetes concepts. This guide assumes a general understanding of core Kubernetes concepts.

      Enable Network Helper

      In order to use the Linode Kubernetes Engine, you will need to have Network Helper enabled globally on your account. Network Helper is a Linode-provided service that automatically sets a static network configuration for your Linode when it boots. To enable this global account setting, follow these instructions.

      If you don’t want to use Network Helper on some Linodes that are not part of your LKE clusters, the service can also be disabled on a per-Linode basis; see instructions here.

      Note

      If you have already deployed an LKE cluster and did not enable Network Helper, you can add a new node pool with the same type, size, and count as your initial node pool. Once your new node pool is ready, you can then delete the original node pool.
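
      A sketch of how adding a node pool might look via the API (this assumes the node pools endpoint; substitute your own cluster ID, Linode type, and node count) is:

      curl -H "Content-Type: application/json" \
           -H "Authorization: Bearer $TOKEN" \
           -X POST -d '{ "type": "g6-standard-2", "count": 3 }' \
           https://api.linode.com/v4/lke/clusters/12345/pools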

      Install kubectl

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest kubectl release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        


      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Create an LKE Cluster

      Required Parameters:

      • region: The data center region where your cluster will be deployed. Currently, us-central is the only available region for LKE clusters.
      • label: A human readable name to identify your cluster. This must be unique. If no label is provided, one will be assigned automatically. Labels must start with an alpha [a-z][A-Z] character, must only consist of alphanumeric characters and dashes, and must not contain two dashes in a row.
      • node_pools: The collections of Linodes that serve as the worker nodes in your LKE cluster.
      • version: The desired version of Kubernetes for this cluster.

      1. To create an LKE Cluster, send a POST request to the /lke/clusters endpoint. The example below displays all possible request body parameters. Note that tags is an optional parameter.

        curl -H "Content-Type: application/json" 
             -H "Authorization: Bearer $TOKEN" 
             -X POST -d '{
                "label": "cluster12345",
                "region": "us-central",
                "version": "1.16",
                "tags": ["ecomm", "blogs"],
                "node_pools": [
                  { "type": "g6-standard-2", "count": 2},
                  { "type": "g6-standard-4", "count": 3}
                ]
             }' https://api.linode.com/v4/lke/clusters
        

        You will receive a response similar to:

          
        {"version": "1.16", "updated": "2019-08-02T17:17:49", "region": "us-central", "tags": ["ecomm", "blogs"], "label": "cluster12345", "id": 456, "created": "2019-22-02T17:17:49"}%
            
        
      2. Make note of your cluster’s ID, as you will need it to continue to interact with your cluster in the next sections. In the example above, the cluster’s ID is "id": 456. You can also access your cluster’s ID by listing all LKE Clusters on your account, or by requesting a single cluster directly by its ID, as sketched after this list.

        Note

        Each Linode account has a limit to the number of Linode resources they can deploy. This includes services, like Linodes, NodeBalancers, Block Storage, etc. If you run into issues deploying the number of nodes you designate for a given cluster’s node pool, you may have run into a limit on the number of resources allowed on your account. Contact Linode Support if you believe this may be the case.
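
      If you want to look up a single cluster directly, a quick sketch of that request (this assumes the single-cluster endpoint; replace 456 with your own cluster ID) is:

      curl -H "Authorization: Bearer $TOKEN" \
          https://api.linode.com/v4/lke/clusters/456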

      Connect to your LKE Cluster

      Now that your LKE cluster is created, you can access and manage your cluster using kubectl on your computer. This will give you the ability to interact with the Kubernetes API, and to create and manage Kubernetes objects in your cluster.

      To communicate with your LKE cluster, kubectl requires a copy of your cluster’s kubeconfig. In this section, you will access the contents of your kubeconfig using the Linode API and then set up kubectl to communicate with your LKE cluster.

      1. Access your LKE cluster’s kubeconfig file by sending a GET request to the /lke/clusters/{clusterId}/kubeconfig endpoint. Ensure you replace 12345 with your cluster’s ID that you recorded in the previous section:

        curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/lke/clusters/12345/kubeconfig
        

        The API returns a base64 encoded string (a useful format for automated pipelines) representing your kubeconfig. Your output will resemble the following:

          
        {"kubeconfig": "YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhvcml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VONVJFTkRRV0pEWjBGM1NVSkJaMGxDUVVSQlRrSm5hM0ZvYTJsSE9YY3dRa0ZSYzBaQlJFRldUVkpOZDBWUldVUldVVkZFUlhkd2NtUlhTbXdLWTIwMWJHUkhWbnBOUWpSWVJGUkZOVTFFWjNkTmFrVXpUVlJqTVUxV2IxaEVWRWsx ... 0TFMwdExRbz0K"}%
        
        
      2. Copy the kubeconfig field’s value from the response body, since you will need it in the next step.

        Note

        Make sure you only copy the long string inside the quotes following "kubeconfig": in your output. Do not copy the curly braces or anything outside of them. You will receive an error if you use the full output in later steps.

      3. Save the base64 kubeconfig to an environment variable:

        KUBE_VAR='YXBpVmVyc2lvbjogdjEK ... 0TFMwdExRbz0K'
        
      4. Navigate to your computer’s ~/.kube directory. This is where kubectl looks for kubeconfig files, by default.

        cd ~/.kube
        
      5. Create a directory called configs within ~/.kube. You can use this directory to store your kubeconfig files.

        mkdir configs
        cd configs
        
      6. Decode the contents of $KUBE_VAR and save it to a new YAML file (base64 -D is the macOS flag; on Linux, use base64 -d instead):

        echo $KUBE_VAR | base64 -D > cluster12345-config.yaml
        

        Note

        The YAML file that you decode to (cluster12345-config.yaml here) can have any name of your choosing.

      7. Add the kubeconfig file to your $KUBECONFIG environment variable.

        export KUBECONFIG=cluster12345-config.yaml
        
      8. Verify that your cluster is selected as kubectl’s current context:

        kubectl config get-contexts
        
      9. View the contents of the configuration:

        kubectl config view
        


      10. View all nodes in your LKE cluster using kubectl:

        kubectl get nodes
        

        Your output will resemble the following example, but will vary depending on your own cluster’s configurations.

          
        NAME                      STATUS   ROLES  AGE     VERSION
        lke166-193-5d44703cd092   Ready    <none>   2d22h   v1.14.0
        lke166-194-5d44703cd780   Ready    <none>   2d22h   v1.14.0
        lke166-195-5d44703cd691   Ready    <none>   2d22h   v1.14.0
        lke166-196-5d44703cd432   Ready    <none>   2d22h   v1.14.0
        lke166-197-5d44703cd211   Ready    <none>   2d22h   v1.14.0
        
        

        Now that you are connected to your LKE cluster, you can begin using kubectl to deploy applications, inspect and manage cluster resources, and view logs.
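
        If you prefer to script the fetch-and-decode steps above, the following is a minimal sketch that combines them into a single pipeline. It assumes jq is installed and $TOKEN is exported; the cluster ID 12345 and the output filename are placeholders for your own values:

        # Fetch the kubeconfig, extract the base64 string with jq, and decode it.
        # (base64 -D is the macOS decode flag; on most Linux systems use base64 -d.)
        curl -s -H "Authorization: Bearer $TOKEN" \
            https://api.linode.com/v4/lke/clusters/12345/kubeconfig \
            | jq -r '.kubeconfig' \
            | base64 -D > ~/.kube/configs/cluster12345-config.yaml

        # Point kubectl at the decoded file for the current shell session.
        export KUBECONFIG=~/.kube/configs/cluster12345-config.yaml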

      Persist the Kubeconfig Context

      If you create a new terminal window, it will not have access to the context that you specified using the previous instructions. This context information can be made persistent between new terminals by setting the KUBECONFIG environment variable in your shell’s configuration file.

      Note

      These instructions persist the context for users of the Bash shell; the steps will be similar for other shells.

      1. Open your Bash profile (e.g. ~/.bash_profile) in the text editor of your choice and add your configuration file’s path to the KUBECONFIG environment variable.

        If an export KUBECONFIG line is already present in the file, append to the end of this line as follows; if it is not present, add this line to the end of your file:

        export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config:$HOME/.kube/configs/cluster12345-config.yaml
        

        Note

        Replace the $HOME/.kube/configs/cluster12345-config.yaml path in the line above with the path to the file you decoded to in the previous section.

      2. Close your terminal window and open a new window to receive the changes to the $KUBECONFIG variable.

      3. Use the config get-contexts command for kubectl to view the available cluster contexts:

        kubectl config get-contexts
        

        You should see output similar to the following:

          
        CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
        *         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
        
        
      4. If your context is not already selected (denoted by an asterisk in the CURRENT column), switch to it using the config use-context command. Supply the full name of the context (including the authorized user and the cluster):

        kubectl config use-context kubernetes-admin@kubernetes
        

        You should see output like the following:

          
        Switched to context "kubernetes-admin@kubernetes".
        
        
      5. You are now ready to interact with your cluster using kubectl. You can test the ability to interact with the cluster by retrieving a list of Pods in the kube-system namespace:

        kubectl get pods -n kube-system
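
        As an alternative to extending the KUBECONFIG variable in your shell profile, you can merge the new file into your default kubeconfig. The following is a minimal sketch; it assumes ~/.kube/config already exists and that cluster12345-config.yaml is the file you decoded earlier:

        # Back up the current default kubeconfig before overwriting it.
        cp ~/.kube/config ~/.kube/config.bak

        # Merge both files and write the flattened result back to the default path.
        KUBECONFIG=~/.kube/config:~/.kube/configs/cluster12345-config.yaml \
            kubectl config view --flatten > /tmp/merged-kubeconfig
        mv /tmp/merged-kubeconfig ~/.kube/config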
        

      Inspect your LKE Cluster

      Once you have created an LKE Cluster, you can access information about its structural configuration using the Linode API.

      List LKE Clusters

      To view a list of all your LKE clusters, send a GET request to the /lke/clusters endpoint.

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/lke/clusters
      

      The returned response body will display the number of clusters deployed to your account and general details about your LKE clusters:

        
      {"results": 2, "data": [{"updated": "2019-08-02T17:17:49", "region": "us-central", "id": 456, "version": "1.16", "label": "cluster-12345", "created": "2019-08-02T17:17:49", "tags": ["ecomm", "blogs"]}, {"updated": "2019-08-05T17:00:04", "region": "us-central", "id": 789, "version": "1.16", "label": "cluster-56789", "created": "2019-08-05T17:00:04", "tags": ["ecomm", "marketing"]}], "pages": 1, "page": 1}%
      
      

      View an LKE Cluster

      You can use the Linode API to access details about an individual LKE cluster. You will need your cluster’s ID to access information about this resource. If you don’t know your cluster’s ID, see the List LKE Clusters section.

      Required Parameters    Description
      clusterId              The ID of the LKE cluster to look up.

      To view your LKE cluster, send a GET request to the /lke/clusters/{clusterId} endpoint. In this example, ensure you replace 12345 with your cluster’s ID:

      curl -H "Authorization: Bearer $TOKEN" 
              https://api.linode.com/v4/lke/clusters/12345
      

      Your output will resemble the following:

        
      {"created": "2019-08-02T17:17:49", "updated": "2019-08-02T17:17:49", "version": "1.16", "tags": ["ecomm", "blogs"], "label": "cluster-12345", "id": 456, "region": "us-central"}%
      
      

      List a Cluster’s Node Pools

      A node pool consists of one or more Linodes (worker nodes). Each node in the pool has the same plan type. Your LKE cluster can have several node pools. Each pool is assigned its own plan type and number of nodes. To view a list of an LKE cluster’s node pools, you need your cluster’s ID. If you don’t know your cluster’s ID, see the List LKE Clusters section.

      Required Parameters    Description
      clusterId              The ID of the LKE cluster to look up.

      To list your cluster’s node pools, send a GET request to the /lke/clusters/{clusterId}/pools endpoint. In this example, replace 12345 with your cluster’s ID:

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/lke/clusters/12345/pools
      

      The response body includes each node pool’s ID, Linode plan type, and node count, as well as each node’s individual ID and status.

        
      {"pages": 1, "page": 1, "data": [{"count": 2, "id": 193, "type": "g6-standard-2", "linodes": [{"id": "13841932", "status": "ready "}, {"id": "13841933", "status": "ready"}]}, {"count": 3, "id": 194, "type": "g6-standard-4", "linodes": [{"id": "13841934", "status": "ready"}, {"id": "13841935", "status": "ready"}, {"id": "13841932", "status": "ready"}]}], "results": 2}%
      
      

      View a Node Pool

      You can use the Linode API to access details about a specific node pool in an LKE cluster. You will need your cluster’s ID and node pool ID to access information about this resource. To retrieve your cluster’s ID, see the List LKE Clusters section. To find a node pool’s ID, see the List a Cluster’s Node Pools section.

      Required Parameters    Description
      clusterId              The ID of the LKE cluster to look up.
      poolId                 The ID of the LKE node pool to look up.

      To view a specific node pool, send a GET request to the /lke/clusters/{clusterId}/pools/{poolId} endpoint. In this example, replace 12345 with your cluster’s ID and 456 with the node pool’s ID:

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/lke/clusters/12345/pools/456
      

      The response body provides the number of nodes in the node pool, the node pool’s ID, and its plan type. It also includes information about each individual node in the pool, including each Linode’s ID and status.

        
      {"count": 2, "id": 193, "type": "g6-standard-2", "linodes": [{"id": "13841932", "status": "ready"}, {"id": "13841933", "status": "ready"}]}%
      
      

      Note

      If desired, you can use your node pool’s Linode ID(s) to get more details about each node in the pool. Send a GET request to the /linode/instances/{linodeId} endpoint. In this example, ensure you replace 13841932 with your Linode’s ID.

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/linode/instances/13841932
      

      Although you have access to your cluster’s nodes, it is recommended that you only interact with them via Linode’s LKE interfaces (such as the LKE endpoints in Linode’s API, or the Kubernetes section of the Linode Cloud Manager), or via the Kubernetes API and kubectl.

      Modify your LKE Cluster

      Once an LKE cluster is created, you can modify two aspects of it: the cluster’s label, and the cluster’s node pools. In this section you will learn how to modify each of these parts of your cluster.

      Update your LKE Cluster Label

      Required Parameters    Description
      clusterId              The ID of the LKE cluster to look up.

      To update your LKE cluster’s label, send a PUT request to the /lke/clusters/{clusterId} endpoint. In this example, ensure you replace 12345 with your cluster’s ID:

      curl -H "Content-Type: application/json" 
              -H "Authorization: Bearer $TOKEN" 
              -X PUT -d '{
              "label": "updated-cluster-name"
              }' https://api.linode.com/v4/lke/clusters/12345
      

      The response body will display the updated cluster label:

        
      {"created": "2019-08-02T17:17:49", "updated": "2019-08-05T19:11:19", "version": "1.16", "tags": ["ecomm", "blogs"], "label": "updated-cluster-name", "id": 456, "region": "us-central"}%
      
      

      Add a Node Pool to your LKE Cluster

      A node pool consists of one or more Linodes (worker nodes). Each node in a pool has the same plan type and is configured identically. Your LKE cluster can have several node pools, each with its own plan type and number of nodes.

      You will need your cluster’s ID in order to add a node pool to it. If you don’t know your cluster’s ID, see the List LKE Clusters section.

      Required Parameters    Description
      clusterId              The ID of the LKE cluster to look up.
      type                   The Linode plan type to use for all the nodes in the pool. Linode plans designate the type of hardware resources applied to your instance.
      count                  The number of nodes to include in the node pool. Each node will have the same plan type.

      To add a node pool to an existing LKE cluster, send a POST request to the /lke/clusters/{clusterId}/pools endpoint. The request body must include the type and count parameters. In the URL of this example, ensure you replace 12345 with your own cluster’s ID:

      curl -H "Content-Type: application/json" 
              -H "Authorization: Bearer $TOKEN" 
              -X POST -d '{
              "type": "g6-standard-1",
              "count": 5
              }' https://api.linode.com/v4/lke/clusters/12345/pools
      

      The response body will resemble the following:

        
      {"count": 5, "id": 196, "type": "g6-standard-1", "linodes": [{"id": "13841945", "status": "ready"}, {"id": "13841946", "status": "ready"}, {"id": "13841947", "status": "ready"}, {"id": "13841948", "status": "ready"}, {"id": "13841949", "status": "ready"}]}%
      
      

      Note

      Each Linode account has a limit on the number of Linode resources it can deploy, including services like Linodes, NodeBalancers, and Block Storage. If you run into issues deploying the number of nodes you designate for a cluster’s node pool, you may have reached this limit. Contact Linode Support if you believe this is the case.

      Resize your LKE Node Pool

      You can resize an LKE cluster’s node pool to increase or decrease its number of nodes. You will need your cluster’s ID and the node pool’s ID in order to resize it. If you don’t know your cluster’s ID, see the List LKE Clusters section. If you don’t know your node pool’s ID, see the List a Cluster’s Node Pools section.

      Note

      You cannot modify an existing node pool’s plan type. If you would like your LKE cluster to use a different node pool plan type, you can add a new node pool to your cluster with the same number of nodes to replace the current node pool. You can then delete the node pool that is no longer needed.
      Required Parameters    Description
      clusterId              The ID of the LKE cluster to look up.
      poolId                 The ID of the LKE node pool to look up.
      count                  The number of Linodes in the node pool.

      To update your node pool’s node count, send a PUT request to the /lke/clusters/{clusterId}/pools/{poolId} endpoint. In the URL of this example, replace 12345 with your cluster’s ID and 196 with your node pool’s ID:

      curl -H "Content-Type: application/json" 
          -H "Authorization: Bearer $TOKEN" 
          -X PUT -d '{
              "type": "g6-standard-4",
              "count": 6
          }' https://api.linode.com/v4/lke/clusters/12345/pools/196
      

      Note

      Each Linode account has a limit on the number of Linode resources it can deploy, including services like Linodes, NodeBalancers, and Block Storage. If you run into issues deploying the number of nodes you designate for a cluster’s node pool, you may have reached this limit. Contact Linode Support if you believe this is the case.
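
      Newly added nodes can take several minutes to provision after a resize. The following is a minimal sketch of a polling loop that waits until every node in the pool reports a ready status; it assumes jq is installed and $TOKEN is exported, and uses 12345 and 196 as placeholder cluster and pool IDs:

      # Poll the node pool every 15 seconds until all of its nodes are ready.
      until curl -s -H "Authorization: Bearer $TOKEN" \
              https://api.linode.com/v4/lke/clusters/12345/pools/196 \
              | jq -e 'all(.linodes[]; .status == "ready")' > /dev/null; do
          echo "Waiting for nodes to become ready..."
          sleep 15
      done
      echo "All nodes in the pool are ready."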

      Delete a Node Pool from an LKE Cluster

      When you delete a node pool, you also delete the Linodes (nodes) in it and the routes to them. The Pods running on those nodes are evicted and rescheduled. If you assigned Pods to the deleted nodes (for example, with a node selector), those Pods may remain unschedulable if no other node in the cluster satisfies the selector.

      Required Parameters    Description
      clusterId              The ID of the LKE cluster to look up.
      poolId                 The ID of the LKE node pool to look up.

      To delete a node pool from an LKE cluster, send a DELETE request to the /lke/clusters/{clusterId}/pools/{poolId} endpoint. In the URL of this example, replace 12345 with your cluster’s ID and 196 with your node pool’s ID:

      Caution

      This step is permanent and will result in the loss of data.

      curl -H "Authorization: Bearer $TOKEN" 
          -X DELETE 
          https://api.linode.com/v4/lke/clusters/12345/pools/196
      

      Delete an LKE Cluster

      Deleting an LKE cluster will delete the Master node, all worker nodes, and all NodeBalancers created by the cluster. However, it will not delete any Volumes created by the LKE cluster.

      To delete an LKE Cluster, send a DELETE request to the /lke/clusters/{clusterId} endpoint. In the URL of this example, replace 12345 with your cluster’s ID:

      Caution

      This step is permanent and will result in the loss of data.

      curl -H "Authorization: Bearer $TOKEN" 
          -X DELETE 
          https://api.linode.com/v4/lke/clusters/12345
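
      Once the deletion completes, requesting the cluster again should return a 404 Not Found response. A minimal sketch that prints only the HTTP status code, assuming $TOKEN is exported and 12345 is the deleted cluster's ID:

      # Prints 404 once the cluster no longer exists.
      curl -s -o /dev/null -w "%{http_code}\n" \
          -H "Authorization: Bearer $TOKEN" \
          https://api.linode.com/v4/lke/clusters/12345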
      

      Where to Go From Here?

      Now that you have created an LKE cluster, you can start deploying workloads to it. Review Linode’s Kubernetes guides for further help.


      This guide is published under a CC BY-ND 4.0 license.


