      How To Display Data from the DigitalOcean API with React


      The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Over the last few years, open-source web frameworks have greatly simplified the process of coding an application. React, for example, has only added to the popularity of JavaScript by making the language more accessible to new developers and increasing the productivity of seasoned developers. Created by Facebook, React allows developers to quickly create high-end user interfaces for highly scalable web applications by supporting features such as declarative views, state management, and client-side rendering, each of which can greatly reduce the complexity of building an app in JavaScript.

      You can leverage frameworks like React to load and display data from the DigitalOcean API, through which you can manage your Droplets and other products within the DigitalOcean cloud using HTTP requests. Although one can fetch data from an API with many other JavaScript frameworks, React provides useful benefits like lifecycles and local state management that make it particularly well-suited for the job. With React, the data retrieved from the API is added to the local state when the application starts and can go through various lifecycles as components mount and dismount. At any point, you can retrieve the data from your local state and display it accordingly.

      In this tutorial, you will create a simple React application that interacts with the DigitalOcean API v2 to make calls and retrieve information about your Droplets. Your app will display a list containing your current Droplets and their details, like name, region, and technical specifications, and you will use the front-end framework Bootstrap to style your application.

      Once you have finished this tutorial, you will have a basic interface displaying a list of your DigitalOcean Droplets, styled to look like the following:

      The final version of your React Application

      Prerequisites

      Before you begin this guide, you’ll need a DigitalOcean account and at least one Droplet set up, in addition to the following:

      Step 1 — Creating a Basic React Application

      In this first step, you’ll create a basic React application using the Create React App package from npm. This package automatically installs and configures the essential dependencies needed to run React, like the module bundler Webpack and the JavaScript compiler Babel. After installing, you’ll run the Create React App package using the package runner npx, which comes pre-installed with Node.js.

      To install Create React App and create the first version of your application, run the following command, replacing my-app with the name you want to give to your application:

      • npx create-react-app my-app

      After the installation is complete, move into the new project directory and start running the application using these commands:

      • cd my-app
      • npm start

      The npm start command starts a local development server provided by Create React App, which occupies the command prompt in your terminal. To continue with the tutorial, open up a new terminal window and navigate back to the project directory before proceeding to the next step.

      You now have the first version of your React application running in development mode, which you can view by opening http://localhost:3000 in a web browser. At this point, your app will only display the welcome screen from Create React App:

      The first version of your React application

      Now that you have installed and created the first version of your React application, you can add a table component to your app that will eventually hold the data from the DigitalOcean API.

      Step 2 — Creating a Component to Show the Droplet Data

      In this step, you will create the first component that displays information about your Droplets. This component will be a table that lists all of your Droplets and their corresponding details.

      The DigitalOcean API documentation states that you can retrieve a list containing all of your Droplets by sending a request to the following endpoint using cURL: https://api.digitalocean.com/v2/droplets. Using the output from this request, you can create a table component containing id, name, region, memory, vcpus, and disk for each Droplet. Later on in this tutorial, you'll insert the data retrieved from the API into the table component.
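
      For reference, a raw request to this endpoint with cURL looks like the following, where your_access_token is a placeholder for the Personal Access Token you'll store as an environment variable later in this tutorial:

      • curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer your_access_token" "https://api.digitalocean.com/v2/droplets"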

      To define a clear structure for your application, create a new directory called components inside the src directory where you'll store all the code you write. Create a new file called Table.js inside the src/components directory and open it with nano or a text editor of your choice:

      • mkdir src/components
      • nano src/components/Table.js

      Define the table component by adding the following code to the file:

      src/components/Table.js

      import React from 'react';
      
      const Table = () => {
        return (
          <table>
            <thead>
              <tr>
                <th>Id</th>
                <th>Name</th>
                <th>Region</th>
                <th>Memory</th>
                <th>CPUs</th>
                <th>Disk Size</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
              </tr>
            </tbody>
          </table>
        );
      }
      
      export default Table
      

      The code block above imports the React framework and defines a new component called Table, which consists of a table with a heading and a body.

      When you have added these lines of code, save and exit the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

      Now that you have created the table component, it is time to include this component in your application. You'll do this by importing the component into the entry point of the application, which is in the file src/App.js. Open this file with the following command:

      • nano src/App.js

      Next, remove the boilerplate code that displays the Create React App welcome message in src/App.js. In the following code block, the lines to remove are the logo and App.css imports and the entire header element:

      src/App.js

      import React, { Component } from 'react';
      import logo from './logo.svg';
      import './App.css';
      
      class App extends Component {
        render() {
          return (
            <div className="App">
              <header className="App-header">
                <img src={logo} className="App-logo" alt="logo" />
                <p>
                  Edit <code>src/App.js</code> and save to reload.
                </p>
                <a
                  className="App-link"
                  href="https://reactjs.org"
                  target="_blank"
                  rel="noopener noreferrer"
                >
                  Learn React
                </a>
              </header>
            </div>
          );
        }
      }
      
      export default App;
      

      After removing the lines that displayed the welcome message, include the table component inside this same file by adding the Table import and the <Table /> element shown in the following code block:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      
      class App extends Component {
        render() {
          return (
            <div className="App">
              <Table />
            </div>
          );
        }
      }
      
      export default App;
      

      If you access http://localhost:3000 in your web browser again, your application will now display a basic table with table heads:

      The React application with a basic table

      In this step, you have created a table component and included it in the entry point of your application. Next, you will set up a connection to the DigitalOcean API, which you'll use to retrieve the data that this table will display.

      Step 3 — Securing Your API Credentials

      Setting up a connection to the DigitalOcean API consists of several actions, starting with safely storing your Personal Access Token as an environment variable. This can be done by using dotenv, a package that allows you to store sensitive information in a .env file that your application can later access from the environment.

      Use npm to install the dotenv package:

      • npm install dotenv

      After installing dotenv, create an environment file called .env in the root directory of your application by executing this command:

      • nano .env

      Add the following into .env, which contains your Personal Access Token and the URL for the DigitalOcean API:

      .env

      DO_API_URL=https://api.digitalocean.com/v2
      DO_ACCESS_TOKEN=YOUR_API_KEY
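
      As a quick sanity check, the following illustrative snippet (not part of the application itself) shows how dotenv loads these values into process.env when its config method runs:

      require('dotenv').config()

      // Prints https://api.digitalocean.com/v2 if the .env file was loaded correctly
      console.log(process.env.DO_API_URL)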
      

      To ensure this sensitive data doesn't get committed to a repository, add it to your .gitignore file with the following command:

      • echo ".env" >> .gitignore

      You have now created a safe and simple configuration file for your environment variables, which will provide your application with the information it needs to send requests to the DigitalOcean API. To ensure your API credentials aren't visible on the client side, you will next set up a proxy server to forward requests and responses between your application server and the DigitalOcean API.

      Install the middleware http-proxy-middleware by executing the following command:

      • npm install http-proxy-middleware

      After installing this, the next step is to set up your proxy. Create the setupProxy.js file in the src directory:

      • nano src/setupProxy.js

      Inside this file, add the following code to set up the proxy server:

      src/setupProxy.js

      const proxy = require('http-proxy-middleware')
      
      module.exports = function(app) {
      
        require('dotenv').config()
      
        const apiUrl = process.env.DO_API_URL
        const apiToken = process.env.DO_ACCESS_TOKEN
        const headers  = {
          "Content-Type": "application/json",
          "Authorization": "Bearer " + apiToken
        }
      
        // define http-proxy-middleware
        let DOProxy = proxy({
          target: apiUrl,
          changeOrigin: true,
          pathRewrite: {
            '^/api/': '/'
          },
          headers: headers,
        })
      
        // define the route and map the proxy
        app.use('/api', DOProxy)
      
      };
      

      In the preceding code block, const apiUrl = sets the URL for the DigitalOcean API as the proxy target, and const apiToken = loads your Personal Access Token into the request headers. The app.use('/api', DOProxy) statement mounts the proxy at /api rather than / so that it does not interfere with the application server, while the pathRewrite option strips the /api prefix before forwarding: a request to http://localhost:3000/api/droplets is forwarded to https://api.digitalocean.com/v2/droplets.

      You've now successfully created a proxy server that will send all API requests made from your React application to the DigitalOcean API. This proxy server will make sure your Personal Access Token, which is safely stored as an environment variable, isn't exposed on the client side. Next, you will create the actual requests to retrieve your Droplet data for your application.

      Step 4 — Making API Calls to DigitalOcean

      Now that your display component is ready and the connection details to DigitalOcean are stored and secured through a proxy server, you can start retrieving data from the DigitalOcean API. First, add the following constructor to the class App in src/App.js, just after the class declaration:

      src/App.js

      import React, { Component } from 'react';
      ...
      class App extends Component {
        constructor(props) {
          super(props);
          this.state = {
            droplets: []
          }
        }
      
          render() {
      ...
      

      These lines of code define a constructor method in your class component, which in React initializes the local state by assigning an object to this.state. In this case, the state holds your Droplets: the droplets array starts out empty, making it possible to fill it later with the results of the API call.

      In order to display your current Droplets, you'll need to fetch this information from the DigitalOcean API. Using the JavaScript Fetch API, you'll send a request to the DigitalOcean API and update the droplets state with the data you retrieve. You can do this inside the componentDidMount lifecycle method by adding the following lines of code after the constructor:

      src/App.js

      class App extends Component {
        constructor(props) {
          super(props);
          this.state = {
            droplets: []
          }
        }
      
        componentDidMount() {
          fetch('http://localhost:3000/api/droplets')
          .then(res => res.json())
          .then(json => json.droplets)
          .then(droplets => this.setState({ 'droplets': droplets }))
        }
      ...
      

      With your Droplet data stored in the state, it's time to retrieve it within the render function of your application and to pass this data as a prop to the table component. Add the droplets prop to the Table element in App.js, as shown in the following code block:

      src/App.js

      ...
      class App extends Component {
        render() {
          return (
            <div className="App">
              <Table droplets={ this.state.droplets } />
            </div>
          );
        }
      }
      ...
      

      You have now created the functionality to retrieve data from the API, but you still need to make this data accessible via a web browser. In the next step, you will accomplish this by displaying your Droplet data in your table component.

      Step 5 — Displaying Droplet Data in Your Table Component

      Now that you have transferred the Droplet data to the table component, you can iterate over this data to render the table rows. But since the application makes the request to the API only after App.js has mounted, the property value for droplets will be empty at first. Therefore, you also need to add code to make sure droplets isn't empty before you try to display the data. To do this, update the component to accept the droplets prop and add the following lines to the tbody section of Table.js:

      src/components/Table.js

      const Table = ({ droplets }) => {
        return (
          <table>
            <thead>
              <tr>
                <th>Id</th>
                <th>Name</th>
                <th>Region</th>
                <th>Memory</th>
                <th>CPUs</th>
                <th>Disk Size</th>
              </tr>
            </thead>
            <tbody>
              { (droplets.length > 0) ? droplets.map( (droplet) => {
                return (
                  <tr key={ droplet.id }>
                    <td>{ droplet.id }</td>
                    <td>{ droplet.name }</td>
                    <td>{ droplet.region.slug }</td>
                    <td>{ droplet.memory }</td>
                    <td>{ droplet.vcpus }</td>
                    <td>{ droplet.disk }</td>
                  </tr>
                )
              }) : <tr><td colSpan="6">Loading...</td></tr> }
            </tbody>
          </table>
        );
      }
      

      With the addition of the preceding code, your application will display a Loading... placeholder message while no Droplet data is present. When the DigitalOcean API does return Droplet data, your application will render a table row for each Droplet, with columns for each data type, and display the result in your web browser:

      The React Application with Droplet data

      Note: If your web browser displays an error at http://localhost:3000, press CTRL+C in the terminal that is running your development server to stop your application. Run the following command to restart your application:

      • npm start

      In this step, you have modified the table component of your application to display your Droplet data in a web browser and added a placeholder message for when there are no Droplets found. Next, you will use a front-end web framework to style your data to make it more visually appealing and easier to read.

      Step 6 — Styling Your Table Component Using Bootstrap

      Your table is now populated with data, but the information is not displayed in the most appealing manner. To fix this, you can style your application by adding Bootstrap to your project. Bootstrap is an open-source styling and component library that lets you add responsive styling to a project with CSS templates.

      Install Bootstrap with npm using the following command:

      • npm install bootstrap

      After Bootstrap has finished installing, import its CSS file into your project by adding the following import to src/App.js:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      import 'bootstrap/dist/css/bootstrap.min.css';
      
      class App extends Component {
      ...
      

      Now that you have imported the CSS, apply the Bootstrap styling to your table component by adding the class table to the <table> tag in src/components/Table.js.

      src/components/Table.js

      import React from 'react';
      
      const Table = ({ droplets }) => {
        return (
          <table className="table">
            <thead>
      ...
      

      Next, finish styling your application by placing a header above your table with a title and the DigitalOcean logo. Click on Download Logos in the Brand Assets section of DigitalOcean's Press page to download a set of logos, pick your favorite from the SVG directory (this tutorial uses DO_Logo_icon_blue.svg), and add it to your project by copying the logo file into a new directory called assets within the src directory of your project. After adding the logo, import it into the header by adding the following lines to src/App.js:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      import 'bootstrap/dist/css/bootstrap.min.css';
      import logo from './assets/DO_Logo_icon_blue.svg';
      
      class App extends Component {
      ...
        render() {
          return (
            <div className="App">
            <nav className="navbar navbar-light bg-light">
              <a className="navbar-brand" href="./">
                <img src={logo} alt="logo" width="40" /> My Droplets
              </a>
              </nav>
              <Table droplets={ this.state.droplets } />
            </div>
          );
        }
      }
      
      export default App;
      

      In the preceding code block, the Bootstrap classes on the nav element style your header as a light-themed navigation bar.

      Now that you have imported Bootstrap and applied its styling to your application, your data will show up in your web browser with an organized and legible display:

      The final version of your React Application

      Conclusion

      In this article, you've created a basic React application that fetches data from the DigitalOcean API through a secured proxy server and displays it with Bootstrap styling. Now that you are familiar with the React framework, you can apply the concepts you learned here to more complicated applications, such as the one found in How To Build a Modern Web Application to Manage Customer Information with Django and React on Ubuntu 18.04. If you want to find out what other actions are possible with the DigitalOcean API, have a look at the API documentation on DigitalOcean's website.




      How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes


      Introduction

      Kubernetes Ingresses allow you to flexibly route traffic from outside your Kubernetes cluster to Services inside of your cluster. This is accomplished using Ingress Resources, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement the rules by load balancing traffic and routing it to the appropriate backend Services. Popular Ingress Controllers include Nginx, Contour, HAProxy, and Traefik. Ingresses provide a more efficient and flexible alternative to setting up multiple LoadBalancer services, each of which uses its own dedicated Load Balancer.

      In this guide, we’ll set up the Kubernetes-maintained Nginx Ingress Controller, and create some Ingress Resources to route traffic to several dummy backend services. Once we’ve set up the Ingress, we’ll install cert-manager into our cluster to manage and provision TLS certificates for encrypting HTTP traffic to the Ingress.

      Prerequisites

      Before you begin with this guide, you should have the following available to you:

      • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled
      • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
      • A domain name and DNS A records which you can point to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.
      • The Helm package manager installed on your local machine and Tiller installed on your cluster, as detailed in How To Install Software on Kubernetes Clusters with the Helm Package Manager.
      • The wget command-line utility installed on your local machine. You can install wget using the package manager built into your operating system.

      Once you have these components set up, you’re ready to begin with this guide.

      Step 1 — Setting Up Dummy Backend Services

      Before we deploy the Ingress Controller, we’ll first create and roll out two dummy echo Services to which we’ll route external traffic using the Ingress. The echo Services will run the hashicorp/http-echo web server container, which returns a page containing a text string passed in when the web server is launched. To learn more about http-echo, consult its GitHub Repo, and to learn more about Kubernetes Services, consult Services from the official Kubernetes docs.

      On your local machine, create and edit a file called echo1.yaml using nano or your favorite editor:

      • nano echo1.yaml

      Paste in the following Service and Deployment manifest:

      echo1.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: echo1
      spec:
        ports:
        - port: 80
          targetPort: 5678
        selector:
          app: echo1
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo1
      spec:
        selector:
          matchLabels:
            app: echo1
        replicas: 2
        template:
          metadata:
            labels:
              app: echo1
          spec:
            containers:
            - name: echo1
              image: hashicorp/http-echo
              args:
              - "-text=echo1"
              ports:
              - containerPort: 5678
      

      In this file, we define a Service called echo1 which routes traffic to Pods with the app: echo1 label selector. It accepts TCP traffic on port 80 and routes it to port 5678, http-echo's default port.

      We then define a Deployment, also called echo1, which manages Pods with the app: echo1 Label Selector. We specify that the Deployment should have 2 Pod replicas, and that the Pods should start a container called echo1 running the hashicorp/http-echo image. We pass in the text parameter and set it to echo1, so that the http-echo web server returns echo1. Finally, we open port 5678 on the Pod container.

      Once you're satisfied with your dummy Service and Deployment manifest, save and close the file.

      Then, create the Kubernetes resources using kubectl create with the -f flag, specifying the file you just saved as a parameter:

      • kubectl create -f echo1.yaml

      You should see the following output:

      Output

      service/echo1 created
      deployment.apps/echo1 created

      Verify that the Service started correctly by confirming that it has a ClusterIP, the internal IP on which the Service is exposed:

      • kubectl get svc echo1

      You should see the following output:

      Output

      NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      echo1   ClusterIP   10.245.222.129   <none>        80/TCP    60s

      This indicates that the echo1 Service is now available internally at 10.245.222.129 on port 80. It will forward traffic to containerPort 5678 on the Pods it selects.
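
      If you'd like to confirm this from inside the cluster, one option (a quick, optional sanity check) is to start a temporary Pod and request the ClusterIP directly, substituting the ClusterIP from your own output:

      • kubectl run tmp --rm -i --tty --image=busybox --restart=Never -- wget -qO- http://10.245.222.129

      This should print echo1 and then delete the temporary Pod.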

      Now that the echo1 Service is up and running, repeat this process for the echo2 Service.

      Create and open a file called echo2.yaml:

      • nano echo2.yaml

      echo2.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: echo2
      spec:
        ports:
        - port: 80
          targetPort: 5678
        selector:
          app: echo2
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo2
      spec:
        selector:
          matchLabels:
            app: echo2
        replicas: 1
        template:
          metadata:
            labels:
              app: echo2
          spec:
            containers:
            - name: echo2
              image: hashicorp/http-echo
              args:
              - "-text=echo2"
              ports:
              - containerPort: 5678
      

      Here, we essentially use the same Service and Deployment manifest as above, but name and relabel the Service and Deployment echo2. In addition, to provide some variety, we create only 1 Pod replica. We ensure that we set the text parameter to echo2 so that the web server returns the text echo2.

      Save and close the file, and create the Kubernetes resources using kubectl:

      • kubectl create -f echo2.yaml

      You should see the following output:

      Output

      service/echo2 created
      deployment.apps/echo2 created

      Once again, verify that the Service is up and running:

      • kubectl get svc

      You should see both the echo1 and echo2 Services with assigned ClusterIPs:

      Output

      NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      echo1        ClusterIP   10.245.222.129   <none>        80/TCP    6m6s
      echo2        ClusterIP   10.245.128.224   <none>        80/TCP    6m3s
      kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP   4d21h

      Now that our dummy echo web services are up and running, we can move on to rolling out the Nginx Ingress Controller.

      Step 2 — Setting Up the Kubernetes Nginx Ingress Controller

      In this step, we'll roll out the Kubernetes-maintained Nginx Ingress Controller. Note that there are several Nginx Ingress Controllers; the Kubernetes community maintains the one used in this guide and Nginx Inc. maintains kubernetes-ingress. The instructions in this tutorial are based on those from the official Kubernetes Nginx Ingress Controller Installation Guide.

      The Nginx Ingress Controller consists of a Pod that runs the Nginx web server and watches the Kubernetes Control Plane for new and updated Ingress Resource objects. An Ingress Resource is essentially a list of traffic routing rules for backend Services. For example, an Ingress rule can specify that HTTP traffic arriving at the path /web1 should be directed towards the web1 backend web server. Using Ingress Resources, you can also perform host-based routing: for example, routing requests that hit web1.your_domain.com to the backend Kubernetes Service web1.
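
      As a sketch of what such a path-based rule could look like (illustrative only; the web1 Service is hypothetical and not part of this tutorial), an Ingress Resource implementing the /web1 rule might read:

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: web1-ingress
      spec:
        rules:
        - http:
            paths:
            - path: /web1
              backend:
                serviceName: web1
                servicePort: 80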

      In this case, because we’re deploying the Ingress Controller to a DigitalOcean Kubernetes cluster, the Controller will create a LoadBalancer Service that spins up a DigitalOcean Load Balancer to which all external traffic will be directed. This Load Balancer will route external traffic to the Ingress Controller Pod running Nginx, which then forwards traffic to the appropriate backend Services.

      We'll begin by first creating the Kubernetes resources required by the Nginx Ingress Controller. These consist of ConfigMaps containing the Controller's configuration, Role-based Access Control (RBAC) Roles to grant the Controller access to the Kubernetes API, and the actual Ingress Controller Deployment. To see a full list of these required resources, consult the manifest from the Kubernetes Nginx Ingress Controller’s GitHub repo.

      To create these mandatory resources, use kubectl apply and the -f flag to specify the manifest file hosted on GitHub:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

      We use apply instead of create here so that in the future we can incrementally apply changes to the Ingress Controller objects instead of completely overwriting them. To learn more about apply, consult Managing Resources from the official Kubernetes docs.

      You should see the following output:

      Output

      namespace/ingress-nginx created
      configmap/nginx-configuration created
      configmap/tcp-services created
      configmap/udp-services created
      serviceaccount/nginx-ingress-serviceaccount created
      clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
      role.rbac.authorization.k8s.io/nginx-ingress-role created
      rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
      clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
      deployment.extensions/nginx-ingress-controller created

      This output also serves as a convenient summary of all the Ingress Controller objects created from the mandatory.yaml manifest.

      Next, we'll create the Ingress Controller LoadBalancer Service, which will create a DigitalOcean Load Balancer that will load balance and route HTTP and HTTPS traffic to the Ingress Controller Pod deployed in the previous command.

      To create the LoadBalancer Service, once again kubectl apply a manifest file containing the Service definition:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

      You should see the following output:

      Output

      service/ingress-nginx created

      Now, confirm that the DigitalOcean Load Balancer was successfully created by fetching the Service details with kubectl:

      • kubectl get svc --namespace=ingress-nginx

      You should see an external IP address, corresponding to the IP address of the DigitalOcean Load Balancer:

      Output

      NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
      ingress-nginx   LoadBalancer   10.245.247.67   203.0.113.0   80:32486/TCP,443:32096/TCP   20h

      Note down the Load Balancer's external IP address, as you'll need it in a later step.

      This load balancer receives traffic on HTTP and HTTPS ports 80 and 443, and forwards it to the Ingress Controller Pod. The Ingress Controller will then route the traffic to the appropriate backend Service.

      We can now point our DNS records at this external Load Balancer and create some Ingress Resources to implement traffic routing rules.

      Step 3 — Creating the Ingress Resource

      Let's begin by creating a minimal Ingress Resource to route traffic directed at a given subdomain to a corresponding backend Service.

      In this guide, we'll use the test domain example.com. You should substitute this with the domain name you own.

      We'll first create a simple rule to route traffic directed at echo1.example.com to the echo1 backend service and traffic directed at echo2.example.com to the echo2 backend service.

      Begin by opening up a file called echo_ingress.yaml in your favorite editor:

      • nano echo_ingress.yaml

      Paste in the following ingress definition:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
      spec:
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      When you've finished editing your Ingress rules, save and close the file.

      Here, we've specified that we'd like to create an Ingress Resource called echo-ingress, and route traffic based on the Host header. An HTTP request Host header specifies the domain name of the target server. To learn more about Host request headers, consult the Mozilla Developer Network definition page. Requests with host echo1.example.com will be directed to the echo1 backend set up in Step 1, and requests with host echo2.example.com will be directed to the echo2 backend.

      You can now create the Ingress using kubectl:

      • kubectl apply -f echo_ingress.yaml

      You'll see the following output confirming the Ingress creation:

      Output

      ingress.extensions/echo-ingress created

      To test the Ingress, navigate to your DNS management service and create A records for echo1.example.com and echo2.example.com pointing to the DigitalOcean Load Balancer's external IP. The Load Balancer's external IP is the external IP address for the ingress-nginx Service, which we fetched in the previous step. If you are using DigitalOcean to manage your domain's DNS records, consult How to Manage DNS Records to learn how to create A records.
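
      Alternatively, while you wait for DNS changes to propagate, you can exercise the routing rules directly (an optional check) by sending a request to the Load Balancer's external IP with an explicit Host header, substituting your own external IP for 203.0.113.0:

      • curl -H "Host: echo1.example.com" http://203.0.113.0

      If the Ingress is routing correctly, this returns echo1 even before the A records exist.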

      Once you've created the necessary echo1.example.com and echo2.example.com DNS records, you can test the Ingress Controller and Resource you've created using the curl command line utility.

      From your local machine, curl the echo1 Service:

      • curl echo1.example.com

      You should get the following response from the echo1 service:

      Output

      echo1

      This confirms that your request to echo1.example.com is being correctly routed through the Nginx ingress to the echo1 backend Service.

      Now, perform the same test for the echo2 Service:

      • curl echo2.example.com

      You should get the following response from the echo2 Service:

      Output

      echo2

      This confirms that your request to echo2.example.com is being correctly routed through the Nginx ingress to the echo2 backend Service.

      At this point, you've successfully set up a basic Nginx Ingress to perform virtual host-based routing. In the next step, we'll install cert-manager using Helm to provision TLS certificates for our Ingress and enable the more secure HTTPS protocol.

      Step 4 — Installing and Configuring Cert-Manager

      In this step, we'll use Helm to install cert-manager into our cluster. cert-manager is a Kubernetes service that provisions TLS certificates from Let's Encrypt and other certificate authorities and manages their lifecycles. Certificates can be requested and configured by annotating Ingress Resources with the certmanager.k8s.io/issuer annotation, appending a tls section to the Ingress spec, and configuring one or more Issuers to specify your preferred certificate authority. To learn more about Issuer objects, consult the official cert-manager documentation on Issuers.

      We'll first begin by using Helm to install cert-manager into our cluster:

      • helm install --name cert-manager --namespace kube-system --version v0.4.1 stable/cert-manager

      You should see the following output:

      Output

      . . .
      NOTES:
      cert-manager has been deployed successfully!
      
      In order to begin issuing certificates, you will need to set up a ClusterIssuer or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
      
      More information on the different types of issuers and how to configure them can be found in our documentation:
      
      https://cert-manager.readthedocs.io/en/latest/reference/issuers.html
      
      For information on how to configure cert-manager to automatically provision Certificates for Ingress resources, take a look at the `ingress-shim` documentation:
      
      https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html

      This indicates that the cert-manager installation was successful.

      Before we begin issuing certificates for our Ingress hosts, we need to create an Issuer, which specifies the certificate authority from which signed x509 certificates can be obtained. In this guide, we'll use the Let's Encrypt certificate authority, which provides free TLS certificates and offers both a staging server for testing your certificate configuration, and a production server for rolling out verifiable TLS certificates.

      Let's create a test Issuer to make sure the certificate provisioning mechanism is functioning correctly. Open a file named staging_issuer.yaml in your favorite text editor:

      • nano staging_issuer.yaml

      Paste in the following ClusterIssuer manifest:

      staging_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-staging
      spec:
        acme:
          # The ACME server URL
          server: https://acme-staging-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address_here
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-staging
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      Here we specify that we'd like to create a ClusterIssuer object called letsencrypt-staging, and use the Let's Encrypt staging server. We'll later use the production server to roll out our certificates, but the production server may rate-limit requests made against it, so for testing purposes it's best to use the staging URL.

      We then specify an email address to register with ACME, and create a Kubernetes Secret called letsencrypt-staging to store the ACME account's private key. We also enable the HTTP-01 challenge mechanism. To learn more about these parameters, consult the official cert-manager documentation on Issuers.

      Roll out the ClusterIssuer using kubectl:

      • kubectl create -f staging_issuer.yaml

      You should see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-staging created

      Now that we've created our Let's Encrypt staging Issuer, we're ready to modify the Ingress Resource we created above and enable TLS encryption for the echo1.example.com and echo2.example.com paths.

      Open up echo_ingress.yaml once again in your favorite editor:

      • nano echo_ingress.yaml

      Add the following to the Ingress Resource manifest:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        annotations:  
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-staging
      spec:
        tls:
        - hosts:
          - echo1.example.com
          - echo2.example.com
          secretName: letsencrypt-staging
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      Here we add some annotations to specify the ingress.class, which determines the Ingress Controller that should be used to implement the Ingress Rules. In addition, we define the cluster-issuer to be letsencrypt-staging, the certificate Issuer we just created.

      Finally, we add a tls block to specify the hosts for which we want to acquire certificates, and the Secret in which cert-manager will store the issued certificate and its private key.

      When you're done making changes, save and close the file.

      We'll now update the existing Ingress Resource using kubectl apply:

      • kubectl apply -f echo_ingress.yaml

      You should see the following output:

      Output

      ingress.extensions/echo-ingress configured

      You can use kubectl describe to track the state of the Ingress changes you've just applied:

      • kubectl describe ingress

      Output

      Events:
        Type    Reason             Age               From                      Message
        ----    ------             ----              ----                      -------
        Normal  CREATE             14m               nginx-ingress-controller  Ingress default/echo-ingress
        Normal  UPDATE             1m (x2 over 13m)  nginx-ingress-controller  Ingress default/echo-ingress
        Normal  CreateCertificate  1m                cert-manager              Successfully created Certificate "letsencrypt-staging"

      Once the certificate has been successfully created, you can run an additional describe on it to further confirm its successful creation:

      • kubectl describe certificate

      You should see the following output in the Events section:

      Output

      Events:
        Type    Reason          Age  From          Message
        ----    ------          ---- ----          -------
        Normal  CreateOrder     50s  cert-manager  Created new ACME order, attempting validation...
        Normal  DomainVerified  15s  cert-manager  Domain "echo2.example.com" verified with "http-01" validation
        Normal  DomainVerified  3s   cert-manager  Domain "echo1.example.com" verified with "http-01" validation
        Normal  IssueCert       3s   cert-manager  Issuing certificate...
        Normal  CertObtained    1s   cert-manager  Obtained certificate from ACME server
        Normal  CertIssued      1s   cert-manager  Certificate issued successfully

      This confirms that the TLS certificate was successfully issued and HTTPS encryption is now active for the two domains configured.

      We're now ready to send a request to a backend echo server to test that HTTPS is functioning correctly.

      Run the following wget command to send a request to echo1.example.com and print the response headers to STDOUT:

      • wget --save-headers -O- echo1.example.com

      You should see the following output:

      Output

      URL transformed to HTTPS due to an HSTS policy
      --2018-12-11 14:38:24--  https://echo1.example.com/
      Resolving echo1.example.com (echo1.example.com)... 203.0.113.0
      Connecting to echo1.example.com (echo1.example.com)|203.0.113.0|:443... connected.
      ERROR: cannot verify echo1.example.com's certificate, issued by ‘CN=Fake LE Intermediate X1’:
        Unable to locally verify the issuer's authority.
      To connect to echo1.example.com insecurely, use `--no-check-certificate'.

      This indicates that HTTPS has successfully been enabled, but the certificate cannot be verified as it's a fake temporary certificate issued by the Let's Encrypt staging server.
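
      If you'd like to complete the request against the staging certificate anyway, you can pass wget the flag it suggests; this skips certificate verification and is only appropriate for testing:

      • wget --save-headers -O- --no-check-certificate echo1.example.com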

      Now that we've tested that everything works using this temporary fake certificate, we can roll out production certificates for the two hosts echo1.example.com and echo2.example.com.

      Step 5 — Rolling Out Production Issuer

      In this step we’ll modify the procedure used to provision staging certificates, and generate a valid, verifiable production certificate for our Ingress hosts.

      To begin, we'll first create a production certificate ClusterIssuer.

      Open a file called prod_issuer.yaml in your favorite editor:

      • nano prod_issuer.yaml

      Paste in the following manifest:

      prod_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          # The ACME server URL
          server: https://acme-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address_here
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-prod
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      Note the different ACME server URL, and the letsencrypt-prod secret key name.

      When you're done editing, save and close the file.

      Now, roll out this Issuer using kubectl:

      • kubectl create -f prod_issuer.yaml

      You should see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-prod created

      Update echo_ingress.yaml to use this new Issuer:

      • nano echo_ingress.yaml

      Make the following changes to the file:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        annotations:  
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - echo1.example.com
          - echo2.example.com
          secretName: letsencrypt-prod
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      Here, we update both the ClusterIssuer and secret key to letsencrypt-prod.

      Once you're satisfied with your changes, save and close the file.

      Roll out the changes using kubectl apply:

      • kubectl apply -f echo_ingress.yaml

      Output

      ingress.extensions/echo-ingress configured

      Wait a couple of minutes for the Let's Encrypt production server to issue the certificate. You can track its progress using kubectl describe on the certificate object:

      • kubectl describe certificate letsencrypt-prod

      Once you see the following output, the certificate has been issued successfully:

      Output

      Events:
        Type    Reason          Age    From          Message
        ----    ------          ----   ----          -------
        Normal  CreateOrder     4m4s   cert-manager  Created new ACME order, attempting validation...
        Normal  DomainVerified  3m30s  cert-manager  Domain "echo2.example.com" verified with "http-01" validation
        Normal  DomainVerified  3m18s  cert-manager  Domain "echo1.example.com" verified with "http-01" validation
        Normal  IssueCert       3m18s  cert-manager  Issuing certificate...
        Normal  CertObtained    3m16s  cert-manager  Obtained certificate from ACME server
        Normal  CertIssued      3m16s  cert-manager  Certificate issued successfully

      We'll now perform a test using curl to verify that HTTPS is working correctly:

      • curl echo1.example.com

      You should see the following:

      Output

      <html>
      <head><title>308 Permanent Redirect</title></head>
      <body>
      <center><h1>308 Permanent Redirect</h1></center>
      <hr><center>nginx/1.15.6</center>
      </body>
      </html>

      This indicates that HTTP requests are being redirected to use HTTPS.

      Run curl on https://echo1.example.com:

      • curl https://echo1.example.com

      You should now see the following output:

      Output

      echo1

      You can run the previous command with the verbose -v flag to dig deeper into the certificate handshake and to verify the certificate information.
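
      For example:

      • curl -v https://echo1.example.com

      The verbose output includes the TLS handshake and the certificate's subject, issuer, and validity dates.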

      At this point, you've successfully configured HTTPS using a Let's Encrypt certificate for your Nginx Ingress.

      Conclusion

      In this guide, you set up an Nginx Ingress to load balance and route external requests to backend Services inside of your Kubernetes cluster. You also secured the Ingress by installing the cert-manager certificate provisioner and setting up a Let's Encrypt certificate for two host paths.

      There are many alternatives to the Nginx Ingress Controller. To learn more, consult Ingress controllers from the official Kubernetes documentation.




      How To Create an Image of Your Linux Environment and Launch It On DigitalOcean


      Introduction

      DigitalOcean's Custom Images feature allows you to bring your custom Linux and Unix-like virtual disk images from an on-premises environment or another cloud platform to DigitalOcean and use them to launch DigitalOcean Droplets.

      As described in the Custom Images documentation, the following image types are natively supported by the Custom Images upload tool:

      Although ISO-format images aren't officially supported, you can learn how to create and upload a compatible image using VirtualBox by following the tutorial How to Create a DigitalOcean Droplet from an Ubuntu ISO Format Image.

      If you don't already have a compatible image to upload to DigitalOcean, you can create and compress a disk image of your Unix-like or Linux system, provided it has the prerequisite software and drivers installed.

      We'll begin by ensuring that your image meets the Custom Images requirements. To do this, we'll configure the system and install some prerequisite software. Then, we'll create the image using the dd command-line utility and compress it using gzip. Following that, we'll upload this compressed image file to DigitalOcean Spaces, from which we can import it as a Custom Image. Finally, we'll boot up a Droplet using the uploaded image.

      Prerequisites

      If possible, you should use an image provided by DigitalOcean as a base, or an official distribution-provided cloud image like Ubuntu Cloud. You can then install software and applications on top of this base image to bake a new image, using tools like Packer and VirtualBox. Many cloud providers and virtualization environments also provide tools to export virtual disks to one of the compatible formats listed above, so, where possible, you should use these to simplify the import process. In the cases where you need to manually create a disk image of your system, you can follow the instructions in this guide. Note that these instructions have only been tested with an Ubuntu 18.04 system, and steps may vary depending on your server's operating system and configuration.

      Before you begin with this tutorial, you should have the following available to you:

      • A Linux or Unix-like system that meets all of the requirements listed in the Custom Images product documentation. For example, your boot disk must have:

        • A max size of 100GB
        • An MBR or GPT partition table with a grub bootloader
        • VirtIO drivers installed
      • A non-root user with administrative privileges available to you on the system you're imaging. To create a new user and grant it administrative privileges on Ubuntu 18.04, follow our Initial Server Setup with Ubuntu 18.04 tutorial. To learn how to do this on Debian 9, consult Initial Server Setup with Debian 9.

      • An additional storage device used to store the disk image created in this guide, preferably as large as the disk being copied. This can be an attached block storage volume, an external USB drive, additional disk space, etc.

      • A DigitalOcean Space and the s3cmd file transfer utility configured for use with your Space. To learn how to create a Space, consult the Spaces Quickstart. To learn how to configure s3cmd for use with your Space, consult the s3cmd 2.x Setup Guide.

      Step 1 — Installing Cloud-Init and Enabling SSH

      To begin, we'll install the cloud-init initialization package. cloud-init is a set of scripts that run at boot to configure certain cloud instance properties like the default locale, hostname, SSH keys, and network devices.

      The steps for installing cloud-init will vary depending on the operating system you have installed. In general, the cloud-init package should be available in your OS's package manager, so if you're not using a Debian-based distribution, you should substitute apt in the following steps with your distribution-specific package manager command.

      Installing cloud-init

      In this guide, we'll use an Ubuntu 18.04 server, so we'll use apt to download and install the cloud-init package. Note that cloud-init may already be installed on your system (some Linux distributions install cloud-init by default). To check, log in to your server and run the following command:

      • cloud-init

      If you see the following output, cloud-init is already installed on your server and you can continue on to configuring it for use with DigitalOcean:

      Output

      usage: /usr/bin/cloud-init [-h] [--version] [--file FILES] [--debug] [--force]
                                 {init,modules,single,query,dhclient-hook,features,analyze,devel,collect-logs,clean,status}
                                 ...
      /usr/bin/cloud-init: error: the following arguments are required: subcommand

      If, instead, you see the following, you need to install cloud-init:

      Output

      cloud-init: command not found

      To install cloud-init, update your package index and then install the package using apt:

      • sudo apt update
      • sudo apt install cloud-init

      Now that we've installed cloud-init, we'll configure it for use with DigitalOcean by ensuring that it uses the ConfigDrive datasource. The cloud-init datasource dictates how cloud-init will search for and update instance configuration and metadata. DigitalOcean Droplets use the ConfigDrive datasource, so we'll check that it comes first in the list of datasources that cloud-init searches whenever the Droplet boots.
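
      For reference, cloud-init reads this search order from a datasource_list setting under /etc/cloud/cloud.cfg.d/. On Ubuntu, the dpkg-reconfigure step below writes your selection to the 90_dpkg.cfg file in that directory; after NoCloud is deselected, it should contain something like the following (illustrative; the exact list depends on what you select):

      # /etc/cloud/cloud.cfg.d/90_dpkg.cfg (managed by dpkg-reconfigure cloud-init)
      datasource_list: [ ConfigDrive ]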

      Reconfiguring cloud-init

      By default, on Ubuntu 18.04, cloud-init configures itself to use the NoCloud datasource first. This will cause problems when running the image on DigitalOcean, so we need to reconfigure cloud-init to use the ConfigDrive datasource and to ensure that cloud-init reruns when the image is launched on DigitalOcean.

      From the command line, navigate to the /etc/cloud/cloud.cfg.d directory:

      • cd /etc/cloud/cloud.cfg.d

      Use the ls command to list the cloud-init config files present in the directory:

      • ls

      Output

      05_logging.cfg 50-curtin-networking.cfg 90_dpkg.cfg curtin-preserve-sources.cfg README

      Depending on your installation, some of these files may not be present. If present, delete the 50-curtin-networking.cfg file, which configures networking interfaces for your Ubuntu server. When the image is launched on DigitalOcean, cloud-init will run and reconfigure these interfaces automatically, so this file is not necessary. If this file is not deleted, the DigitalOcean Droplet created from this Ubuntu image will have its interfaces misconfigured and won't be accessible from the internet:

      • sudo rm 50-curtin-networking.cfg

      Next, we'll run dpkg-reconfigure cloud-init to remove the NoCloud datasource, ensuring that cloud-init searches for and finds the ConfigDrive datasource used on DigitalOcean:

      • sudo dpkg-reconfigure cloud-init

      You should see the following graphical menu:

      The NoCloud datasource is initially highlighted. Press SPACE to unselect it, then hit ENTER.

      Finally, navigate to /etc/netplan:

      • cd /etc/netplan

      Remove the 50-cloud-init.yaml file, which was generated from the cloud-init networking file we removed previously:

      • sudo rm 50-cloud-init.yaml

      The final step is ensuring that we clean up configuration from the initial cloud-init run so that it reruns when the image is launched on DigitalOcean.

      To do this, run cloud-init clean:

      • sudo cloud-init clean

      At this point you've installed and configured cloud-init for use with DigitalOcean. You can now move on to enabling SSH access to your Droplet.

      Enable SSH Access

      Once you've installed and configured cloud-init, the next step is to ensure that you have a non-root admin user and password available to you on your machine, as described in the prerequisites. This step is essential for diagnosing any errors that may arise after you've uploaded your image and launched your Droplet. If a preexisting network configuration or a bad cloud-init configuration renders your Droplet inaccessible over the network, you can use this user in combination with the DigitalOcean Droplet Console to access your system and diagnose any problems that may have surfaced.

      Once you've set up your non-root administrative user, the final step is to ensure that you have an SSH server installed and running. SSH often comes preinstalled on many popular Linux distributions. The procedure for checking whether a process is running will vary depending on your server's operating system. If you aren't sure how to do this, consult your OS's documentation on service management. On Ubuntu, you can verify that SSH is up and running using this command:

      • sudo systemctl status ssh

      You should see the following output:

      Output

      ● ssh.service - OpenBSD Secure Shell server
         Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2018-10-22 19:59:38 UTC; 8 days 1h ago
           Docs: man:sshd(8)
                 man:sshd_config(5)
        Process: 1092 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
       Main PID: 1115 (sshd)
          Tasks: 1 (limit: 4915)
         Memory: 9.7M
         CGroup: /system.slice/ssh.service
                 └─1115 /usr/sbin/sshd -D

      If SSH isn't running, you can install it using apt (on Debian-based distributions):

      • sudo apt install openssh-server

      By default, the SSH server will start on boot unless it's configured otherwise. This is desirable when running the system in the cloud, as DigitalOcean can automatically copy in your public key and grant immediate SSH access to your Droplet after it's created.
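
      If you'd like to double-check that the service is set to start at boot, you can query systemd (this optional check assumes a systemd-based Ubuntu release):

      • sudo systemctl is-enabled ssh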

      Once you've created a non-root administrative user, enabled SSH, and installed cloud-init, you're ready to move on to creating an image of your boot disk.

      Step 2 — Creating a Disk Image

      In this step, we'll create a RAW format disk image using the dd command-line utility and compress it using gzip. We'll then upload the image to DigitalOcean Spaces using s3cmd.

      To begin, log in to your server and inspect the block device arrangement for your system using lsblk:
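
      • lsblk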

      You should see something like the following:

      Output

      NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      loop0     7:0    0 12.7M  1 loop /snap/amazon-ssm-agent/495
      loop1     7:1    0 87.9M  1 loop /snap/core/5328
      vda     252:0    0   25G  0 disk
      └─vda1  252:1    0   25G  0 part /
      vdb     252:16   0  420K  1 disk

      In this case, we notice that our main boot disk is /dev/vda, a 25GB disk, and the primary partition, mounted at /, is /dev/vda1. In most cases the disk containing the partition mounted at / will be the source disk to image. We're going to use dd to create an image of /dev/vda.

      At this point, you should decide where you want to store the disk image. One option is to attach another block storage device, preferably one as large as the disk you're going to image. You can then save the image to this attached temporary disk and upload it to DigitalOcean Spaces.

      If you have physical access to the server, you can add an additional drive to the machine or attach another storage device, like an external USB drive.

      Another option, which we'll demonstrate in this guide, is copying the image over SSH to a local machine, from which you can then upload it to Spaces.

      Whichever method you choose, verify that the storage device you're saving the compressed image to has enough free space. If the disk you're imaging is mostly empty, you can expect the compressed image file to be significantly smaller than the original disk.
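
      For example, you can check the free space available on the target device with df; the /mnt/tmp_disk path here is just the example mount point used in Option 1 below:

      • df -h /mnt/tmp_disk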

      Warning: Before running the following dd command, ensure that any critical applications have been stopped and your system is as quiet as possible. Copying an actively-used disk may result in some corrupted files, so be sure to halt any data-intensive operations and shut down as many running applications as possible.

      Option 1: Creating the Image Locally

      The syntax for the dd command we'll execute is as follows:

      • sudo dd if=/dev/vda bs=4M conv=sparse | pv -s 25G | gzip > /mnt/tmp_disk/ubuntu.gz

      Here, we're selecting /dev/vda as the input disk to image, and setting the input/output block size to 4MB (from the default of 512 bytes). This generally speeds things up a bit. In addition, we're using the conv=sparse flag to minimize the size of the output file by skipping over empty space. To learn more about dd's parameters, consult its manpage.

      Next, we pipe the output to the pv pipe viewer utility so that we can track the progress of the transfer visually (this pipe is optional, and requires installing pv with your package manager). Since we know the size of the initial disk (in this case it's 25GB), we add -s 25G to the pv pipe to get an estimate of when the transfer will complete.
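
      For example, on Debian-based distributions you can install pv with apt:

      • sudo apt install pv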

      We then pipe it all to gzip, and save it in a file called ubuntu.gz on the temporary block storage volume we've attached to the server. Replace /mnt/tmp_disk with the path to the external storage device you've attached to your server.

      Option 2: Creating the Image over SSH

      Instead of provisioning additional storage for your remote machine, you can also execute the copy over SSH if you have enough disk space available on your local machine. Note that, depending on the bandwidth available to you, this can be slow, and you may incur additional costs for the data transfer over the network.

      To copy and compress the disk over SSH, execute the following command on your local machine:

      • ssh remote_user@your_server_ip "sudo dd if=/dev/vda bs=4M conv=sparse | gzip -1 -" | dd of=ubuntu.gz

      In this case, we're SSHing into our remote server, executing the dd command there, and piping the output to gzip. We then transfer the gzip output over the network and save it locally as ubuntu.gz. Ensure that you have the dd utility available on your local machine before running this command:
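
      • which dd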

      Output

      /bin/dd

      Create the compressed image file using either of the methods above. This may take several hours, depending on the size of the disk you're imaging and the method you're using to create the image.

      Once you've created the compressed image file, you can move on to uploading it to your DigitalOcean Spaces using s3cmd.

      Step 3 — Uploading the Image to Spaces and Custom Images

      As described in the prerequisites, you should have s3cmd installed and configured for use with your DigitalOcean Space on the machine containing your compressed image.

      Locate the compressed image file, and upload it to your Space using s3cmd:

      Note: You should replace your_space_name with your Space's name and not its URL. For example, if your Space's URL is https://example-space-name.nyc3.digitaloceanspaces.com, then your Space's name is example-space-name.

      • s3cmd put /path_to_image/ubuntu.gz s3://your_space_name

      When the upload completes, navigate to your Space using the DigitalOcean Control Panel, and locate the image in the list of files. We'll make the image temporarily publicly accessible so that Custom Images can access it and save a copy.

      At the right-hand side of the image listing, click the More drop-down menu, then click Manage Permissions:

      Then, click the radio button next to Public, and hit Update to make the image publicly accessible.

      Warning: Your image will be temporarily publicly accessible to anyone with your Space's path during this process. If you'd like to avoid making your image temporarily public, you can create your Custom Image using the DigitalOcean API. Be sure to set your image back to Private using the above procedure after it has successfully been transferred to Custom Images.
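
      As a rough sketch, creating a Custom Image through the API is a POST request against the /v2/images endpoint, as in the following curl command. The DO_TOKEN variable (holding a personal access token), the image name, and the your_spaces_image_url placeholder are illustrative, not part of this tutorial's setup:

      • curl -X POST "https://api.digitalocean.com/v2/images" \
          -H "Authorization: Bearer $DO_TOKEN" \
          -H "Content-Type: application/json" \
          -d '{"name": "ubuntu-18-04-custom", "url": "your_spaces_image_url", "region": "nyc3", "distribution": "Ubuntu", "description": "Ubuntu 18.04 disk image"}'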

      Fetch the Spaces URL for your image by hovering over the image name in the Control Panel, and hit Copy URL in the window that appears.

      Now, navigate to Images in the left-hand navigation bar, and then to Custom Images.

      From here, upload your image using this URL, as detailed in the Custom Images product documentation.

      You can then create a Droplet from this image. Note that you need to add an SSH key to the Droplet when you create it. To learn how to do this, consult How to Add SSH Keys to Droplets.

      Once your Droplet boots up, if you can SSH into it, you've successfully launched your Custom Image as a DigitalOcean Droplet.

      Debugging

      If you attempt to SSH into your Droplet and can't connect, ensure that your image meets the listed requirements and has both cloud-init and SSH installed and configured correctly. If you still can't access the Droplet, you can try using the DigitalOcean Droplet Console and the non-root user you created earlier to explore the system and debug your networking, cloud-init, and SSH configurations. Another way to debug your image is to use a virtualization tool like VirtualBox to boot your disk image inside a virtual machine, and debug your system's configuration from within the VM.
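
      For example, one way to get the image into VirtualBox is to decompress it and convert the RAW file into a VDI disk that VirtualBox can boot. This is a sketch that assumes you name the decompressed image ubuntu.img:

      • gunzip -c ubuntu.gz > ubuntu.img
      • VBoxManage convertfromraw ubuntu.img ubuntu.vdi --format VDI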

      Conclusion

      In this guide, you learned how to create a disk image of an Ubuntu 18.04 system using the dd command-line utility and upload it to DigitalOcean as a Custom Image from which you can launch Droplets.

      The steps in this guide may vary depending on your operating system, existing hardware, and kernel configuration, but, in general, images created from popular Linux distributions should work using this method. Be sure to carefully follow the cloud-init installation and configuration steps, and ensure that your system meets all the requirements listed in the prerequisites section above.

      To learn more about Custom Images, consult the Custom Images product documentation.

      By Hanif Jetha


