
      How To Set Up an NFS Mount on Ubuntu 20.04


      Introduction

      NFS, or Network File System, is a distributed file system protocol that allows you to mount remote directories on your server. This lets you manage storage space in a different location and write to that space from multiple clients. NFS provides a relatively standard and performant way to access remote systems over a network and works well in situations where the shared resources must be accessed regularly.

      In this guide, we’ll go over how to install the software needed for NFS functionality on Ubuntu 20.04, configure two NFS mounts on a server and client, and mount and unmount the remote shares.

      Prerequisites

      We will use two servers in this tutorial, with one sharing part of its filesystem with the other. To follow along, you will need:

      • Two Ubuntu 20.04 servers. Each of these should have a non-root user with sudo privileges, a firewall set up with UFW, and private networking, if it’s available to you.

        • For assistance setting up a non-root user with sudo privileges and a firewall, follow our Initial Server Setup with Ubuntu 20.04 guide.
        • If you’re using DigitalOcean Droplets for your server and client, you can read more about setting up a private network in our documentation on How to Create a VPC.

      Throughout this tutorial, we refer to the server that shares its directories as the host and the server that mounts these directories as the client. You will need to know the IP address for both. Be sure to use the private network address, if available.

      Throughout this tutorial we will refer to these IP addresses by the placeholders host_ip and client_ip. Please substitute as needed.

      Step 1 — Downloading and Installing the Components

      We’ll begin by installing the necessary components on each server.

      On the Host

      On the host server, install the nfs-kernel-server package, which will allow you to share your directories. Since this is the first operation that you’re performing with apt in this session, refresh your local package index before the installation:

      • sudo apt update
      • sudo apt install nfs-kernel-server

      Once these packages are installed, switch to the client server.

      On the Client

      On the client server, we need to install a package called nfs-common, which provides NFS functionality without including any server components. Again, refresh the local package index prior to installation to ensure that you have up-to-date information:

      • sudo apt update
      • sudo apt install nfs-common

      Now that both servers have the necessary packages, we can start configuring them.

      Step 2 — Creating the Share Directories on the Host

      We’re going to share two separate directories, with different configuration settings, in order to illustrate two key ways that NFS mounts can be configured with respect to superuser access.

      Superusers can do anything anywhere on their system. However, NFS-mounted directories are not part of the system on which they are mounted, so by default, the NFS server refuses to perform operations that require superuser privileges. This default restriction means that superusers on the client cannot write files as root, reassign ownership, or perform any other superuser tasks on the NFS mount.

      Sometimes, however, there are trusted users on the client system who need to perform these actions on the mounted file system but who have no need for superuser access on the host. You can configure the NFS server to allow this, although it introduces an element of risk, as such a user could gain root access to the entire host system.

      Example 1: Exporting a General Purpose Mount

      In the first example, we’ll create a general-purpose NFS mount that uses default NFS behavior to make it difficult for a user with root privileges on the client machine to interact with the host using those client superuser privileges. You might use something like this to store files which were uploaded using a content management system or to create space for users to easily share project files.

      First, make the share directory:

      • sudo mkdir /var/nfs/general -p

      Since we’re creating it with sudo, the directory is owned by the host’s root user. You can confirm this by listing the directory:

      • ls -la /var/nfs/general

      Output

      drwxr-xr-x 2 root root 4096 May 14 18:36 .

      NFS will translate any root operations on the client to the nobody:nogroup credentials as a security measure. Therefore, we need to change the directory ownership to match those credentials.

      • sudo chown nobody:nogroup /var/nfs/general

      You’re now ready to export this directory.

      Example 2: Exporting the Home Directory

      In our second example, the goal is to make user home directories stored on the host available on client servers, while allowing trusted administrators of those client servers the access they need to conveniently manage users.

      To do this, we’ll export the /home directory. Since it already exists, we don’t need to create it. We won’t change the permissions, either. If we did, it could lead to a range of issues for anyone with a home directory on the host machine.

      Step 3 — Configuring the NFS Exports on the Host Server

      Next, we’ll dive into the NFS configuration file to set up the sharing of these resources.

      On the host machine, open the /etc/exports file in your text editor with root privileges:

      • sudo nano /etc/exports

      The file has comments showing the general structure of each configuration line. The syntax is as follows:

      /etc/exports

      directory_to_share    client(share_option1,...,share_optionN)
      

      We’ll need to create a line for each of the directories that we plan to share. Be sure to change the client_ip placeholder shown here to your actual IP address:

      /etc/exports

      /var/nfs/general    client_ip(rw,sync,no_subtree_check)
      /home               client_ip(rw,sync,no_root_squash,no_subtree_check)
      

      Here, we’re using the same configuration options for both directories with the exception of no_root_squash. Let’s take a look at what each of these options means:

      • rw: This option gives the client computer both read and write access to the volume.
      • sync: This option forces NFS to write changes to disk before replying. This results in a more stable and consistent environment since the reply reflects the actual state of the remote volume. However, it also reduces the speed of file operations.
      • no_subtree_check: This option prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
      • no_root_squash: By default, NFS translates requests from a remote root user into a non-privileged user on the server. This is intended as a security feature that prevents a root account on the client from using the file system of the host as root. no_root_squash disables this behavior for certain shares.

      When you are finished making your changes, save and close the file. Then, to make the shares available to the clients that you configured, restart the NFS server with the following command:

      • sudo systemctl restart nfs-kernel-server
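      If you want to confirm which directories the server is now exporting, and with which options, the exportfs utility can print the active export table. A quick, optional check might look like this:

      • sudo exportfs -v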

      Before you can actually use the new shares, however, you’ll need to be sure that traffic to the shares is permitted by firewall rules.

      Step 4 — Adjusting the Firewall on the Host

      First, let’s check the firewall status to see if it’s enabled and, if so, to see what’s currently permitted:

      • sudo ufw status

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)

      On our system, only SSH traffic is being allowed through, so we’ll need to add a rule for NFS traffic.

      With many applications, you can use sudo ufw app list and enable them by name, but nfs is not one of those. However, because ufw also checks /etc/services for the port and protocol of a service, we can still add NFS by name. Best practice recommends that you enable the most restrictive rule that will still allow the traffic you want to permit, so rather than enabling traffic from just anywhere, we’ll be specific.
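      If you’re curious where that name comes from, you can grep the services database yourself; the nfs entry should map the service name to port 2049:

      • grep nfs /etc/services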

      Use the following command to open port 2049 on the host, being sure to substitute your client IP address:

      • sudo ufw allow from client_ip to any port nfs

      You can verify the change by typing:

      • sudo ufw status

      You should see traffic allowed to port 2049 in the output:

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      2049                       ALLOW       203.0.113.24
      OpenSSH (v6)               ALLOW       Anywhere (v6)

      This confirms that UFW will only allow NFS traffic on port 2049 from our client machine.

      Step 5 — Creating Mount Points and Mounting Directories on the Client

      Now that the host server is configured and serving its shares, we’ll prepare our client.

      In order to make the remote shares available on the client, we need to mount the directories on the host that we want to share to empty directories on the client.

      Note: If there are files and directories in your mount point, they will become hidden as soon as you mount the NFS share. To avoid the loss of important files, be sure that any directory you mount to is empty.

      We’ll create two directories for our mounts:

      • sudo mkdir -p /nfs/general
      • sudo mkdir -p /nfs/home

      Now that we have a location to put the remote shares and we’ve opened the firewall, we can mount the shares using the IP address of our host server:

      • sudo mount host_ip:/var/nfs/general /nfs/general
      • sudo mount host_ip:/home /nfs/home

      These commands will mount the shares from the host computer onto the client machine. You can double-check that they mounted successfully in several ways, such as with the mount or findmnt command, but df -h provides a more readable output:

      • df -h

      Output

      Filesystem                       Size  Used Avail Use% Mounted on
      udev                             474M     0  474M   0% /dev
      tmpfs                             99M  936K   98M   1% /run
      /dev/vda1                         25G  1.8G   23G   8% /
      tmpfs                            491M     0  491M   0% /dev/shm
      tmpfs                            5.0M     0  5.0M   0% /run/lock
      tmpfs                            491M     0  491M   0% /sys/fs/cgroup
      /dev/vda15                       105M  3.9M  101M   4% /boot/efi
      tmpfs                             99M     0   99M   0% /run/user/1000
      10.132.212.247:/var/nfs/general   25G  1.8G   23G   8% /nfs/general
      10.132.212.247:/home              25G  1.8G   23G   8% /nfs/home

      Both of the shares we mounted appear at the bottom. Because they were mounted from the same file system, they show the same disk usage. To see how much space is actually being used under each mount point, use the disk usage command du and the path of the mount. The -s flag provides a summary of usage rather than displaying the usage for every file, and the -h flag prints human-readable output.

      For example:

      • du -sh /nfs/home

      Output

      36K /nfs/home

      This shows us that the contents of the entire home directory use only 36K of the available space.

      Step 6 — Testing NFS Access

      Next, let’s test access to the shares by writing something to each of them.

      Example 1: The General Purpose Share

      First, write a test file to the /var/nfs/general share:

      • sudo touch /nfs/general/general.test

      Then, check its ownership:

      • ls -l /nfs/general/general.test

      Output

      -rw-r--r-- 1 nobody nogroup 0 Aug 1 13:31 /nfs/general/general.test

      Because we mounted this volume without changing NFS’s default behavior and created the file as the client machine’s root user via the sudo command, ownership of the file defaults to nobody:nogroup. Client superusers won’t be able to perform typical administrative actions, like changing the owner of a file or creating a new directory for a group of users, on this NFS-mounted share.
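      If you’d like to see that restriction in action, one optional check is to try changing the file’s owner from the client. Because the request is squashed to nobody, the server should refuse it with an error along the lines of “Operation not permitted”:

      • sudo chown root:root /nfs/general/general.test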

      Example 2: The Home Directory Share

      To compare the permissions of the General Purpose share with the Home Directory share, create a file in /nfs/home the same way:

      • sudo touch /nfs/home/home.test

      Then look at the ownership of the file:

      • ls -l /nfs/home/home.test

      Output

      -rw-r--r-- 1 root root 0 Aug 1 13:32 /nfs/home/home.test

      We created home.test as root using the sudo command, exactly the same way we created the general.test file. However, in this case it is owned by root because we overrode the default behavior when we specified the no_root_squash option on this mount. This allows our root users on the client machine to act as root and makes the administration of user accounts much more convenient. At the same time, it means we don’t have to give these users root access on the host.
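      As a quick point of comparison, the same kind of ownership change should succeed on this share, since root on the client is not squashed here:

      • sudo chown nobody:nogroup /nfs/home/home.test
      • ls -l /nfs/home/home.test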

      Step 7 — Mounting the Remote NFS Directories at Boot

      We can mount the remote NFS shares automatically at boot by adding them to the /etc/fstab file on the client.

      Open this file with root privileges in your text editor:

      • sudo nano /etc/fstab

      At the bottom of the file, add a line for each of our shares. They will look like this:

      /etc/fstab

      . . .
      host_ip:/var/nfs/general    /nfs/general   nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
      host_ip:/home               /nfs/home      nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
      
      

      Note: You can find more information about the options we are specifying here in the NFS man page. You can access this by running the following command:

      • man nfs

      The client will automatically mount the remote partitions at boot, although it may take a few moments to establish the connection and for the shares to be available.
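      If you’d like to test the fstab entries without rebooting, one option is to have mount process /etc/fstab directly. Since the shares were already mounted by hand earlier, unmount them first if you want to see these new entries take effect:

      • sudo mount -a
      • df -h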

      Step 8 — Unmounting an NFS Remote Share

      If you no longer want the remote directory to be mounted on your system, you can unmount it by moving out of the share’s directory structure and unmounting, like this:

      • cd ~
      • sudo umount /nfs/home
      • sudo umount /nfs/general

      Take note that the command is named umount not unmount as you may expect.

      This will unmount the remote shares, leaving only your local storage accessible:

      • df -h

      Output

      Filesystem      Size  Used Avail Use% Mounted on
      udev            474M     0  474M   0% /dev
      tmpfs            99M  936K   98M   1% /run
      /dev/vda1        25G  1.8G   23G   8% /
      tmpfs           491M     0  491M   0% /dev/shm
      tmpfs           5.0M     0  5.0M   0% /run/lock
      tmpfs           491M     0  491M   0% /sys/fs/cgroup
      /dev/vda15      105M  3.9M  101M   4% /boot/efi
      tmpfs            99M     0   99M   0% /run/user/1000

      If you also want to prevent them from being remounted on the next reboot, edit /etc/fstab and either delete the line or comment it out by placing a # character at the beginning of the line. You can also prevent automatic mounting while keeping the entry available for manual mounts by replacing the auto option with noauto.
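      For example, a commented-out entry and a manual-only (noauto) entry might look like this, assuming the same options used earlier:

      /etc/fstab

      # host_ip:/var/nfs/general    /nfs/general   nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
      host_ip:/home               /nfs/home      nfs noauto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0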

      Conclusion

      In this tutorial, we created an NFS host and illustrated some key NFS behaviors by creating two different NFS mounts, which we shared with an NFS client.

      If you’re looking to implement NFS in production, it’s important to note that the protocol itself is not encrypted. In cases where you’re sharing over a private network, this may not be a problem. In other cases, a VPN or some other type of encrypted tunnel will be necessary to protect your data.




      How To Set Up ReadWriteMany (RWX) Persistent Volumes with NFS on DigitalOcean Kubernetes


      Introduction

      Because of the distributed and dynamic nature of containers, managing and configuring storage statically has become a difficult problem on Kubernetes, since workloads are now able to move from one virtual machine (VM) to another in a matter of seconds. To address this, Kubernetes manages volumes with a system of Persistent Volumes (PV), API objects that represent a storage configuration/volume, and PersistentVolumeClaims (PVC), a request for storage to be satisfied by a Persistent Volume. Additionally, Container Storage Interface (CSI) drivers can help automate and manage the handling and provisioning of storage for containerized workloads. These drivers are responsible for provisioning, mounting, unmounting, removing, and snapshotting volumes.

      The digitalocean-csi integrates a Kubernetes cluster with the DigitalOcean Block Storage product. A developer can use it to dynamically provision Block Storage volumes for containerized applications in Kubernetes. However, applications can sometimes require data to be persisted and shared across multiple Droplets. DigitalOcean’s default Block Storage CSI solution is unable to support mounting one block storage volume to many Droplets simultaneously. This means that it is a ReadWriteOnce (RWO) solution, since the volume is restricted to one node. The Network File System (NFS) protocol, on the other hand, does support exporting the same share to many consumers. This is called ReadWriteMany (RWX), because many nodes can mount the volume as read-write. We can therefore use an NFS server within our cluster to provide storage that leverages the reliable backing of DigitalOcean Block Storage with the flexibility of NFS shares.

      In this tutorial, you will configure dynamic provisioning for NFS volumes within a DigitalOcean Kubernetes (DOKS) cluster, in which the exports are stored on DigitalOcean Block Storage volumes. You will then deploy multiple instances of a demo Nginx application and test the data sharing between each instance.

      Prerequisites

      Before you begin this guide, you’ll need the following:

      • The kubectl command-line interface installed on your local machine. You can read more about installing and configuring kubectl in the official documentation.

      • A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. To create a Kubernetes cluster on DigitalOcean, see our Kubernetes Quickstart. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete Steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

      Note: Starting with Helm version 3.0, Tiller no longer needs to be installed for Helm to work. If you are using the latest version of Helm, see the Helm installation documentation for instructions.

      Step 1 — Deploying the NFS Server with Helm

      To deploy the NFS server, you will use a Helm chart. Deploying a Helm chart is an automated solution that is faster and less error-prone than creating the NFS server deployment by hand.

      First, make sure that the default chart repository stable is available to you by adding the repo:

      • helm repo add stable https://kubernetes-charts.storage.googleapis.com/

      Next, pull the metadata for the repository you just added. This will ensure that the Helm client is up to date:

      • helm repo update

      To verify access to the stable repo, perform a search of the charts:

      • helm search repo stable

      This will give you a list of the available charts, similar to the following:

      Output

      NAME                            CHART VERSION   APP VERSION     DESCRIPTION
      stable/acs-engine-autoscaler    2.2.2           2.1.1           DEPRECATED Scales worker nodes within agent pools
      stable/aerospike                0.3.2           v4.5.0.5        A Helm chart for Aerospike in Kubernetes
      stable/airflow                  5.2.4           1.10.4          Airflow is a platform to programmatically autho...
      stable/ambassador               5.3.0           0.86.1          A Helm chart for Datawire Ambassador
      ...

      This result means that your Helm client is running and up to date.

      Now that Helm is set up, install the nfs-server-provisioner Helm chart to set up the NFS server. If you’d like to examine the contents of the chart, take a look at its documentation on GitHub.

      When you deploy the Helm chart, you will set a few variables for your NFS server to further specify the configuration for your application. You can also investigate other configuration options and tweak them to fit the application’s needs.

      To install the Helm chart, use the following command:

      • helm install nfs-server stable/nfs-server-provisioner --set persistence.enabled=true,persistence.storageClass=do-block-storage,persistence.size=200Gi

      This command provisions an NFS server with the following configuration options:

      • Adds a persistent volume for the NFS server with the --set flag. This ensures that all shared NFS data persists across pod restarts.
      • Uses the do-block-storage storage class for the persistent storage.
      • Provisions a total of 200Gi for the NFS server, which it can split into exports.

      Note: The persistence.size option will determine the total capacity of all the NFS volumes you can provision. At the time of this publication, only DOKS version 1.16.2-do.3 and later support volume expansion, so resizing this volume will be a manual task if you are on an earlier version. If that is the case, make sure to set this size with your future needs in mind.

      Once this command has finished, you will get output similar to the following:

      Output

      NAME: nfs-server
      LAST DEPLOYED: Thu Feb 13 19:30:07 2020
      NAMESPACE: default
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
      NOTES:
      The NFS Provisioner service has now been installed.

      A storage class named 'nfs' has now been created and is available to provision dynamic volumes.

      You can use this storageclass by creating a PersistentVolumeClaim with the correct storageClassName attribute. For example:

        ---
        kind: PersistentVolumeClaim
        apiVersion: v1
        metadata:
          name: test-dynamic-volume-claim
        spec:
          storageClassName: "nfs"
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Mi

      To see the NFS server you provisioned, run the following command:

      • kubectl get pods

      This will show the following:

      Output

      NAME                                  READY   STATUS    RESTARTS   AGE
      nfs-server-nfs-server-provisioner-0   1/1     Running   0          11m

      Next, check the storageclass you created:

      • kubectl get storageclass

      You will get output similar to the following:

      Output

      NAME                         PROVISIONER                                       AGE
      do-block-storage (default)   dobs.csi.digitalocean.com                         90m
      nfs                          cluster.local/nfs-server-nfs-server-provisioner   3m

      You now have an NFS server running, as well as a storageclass that you can use for dynamic provisioning of volumes. Next, you can create a deployment that will use this storage and share it across multiple instances.

      Step 2 — Deploying an Application Using a Shared PersistentVolumeClaim

      In this step, you will create an example deployment on your DOKS cluster in order to test your storage setup. This will be an Nginx web server app named web.

      To deploy this application, first write the YAML file to specify the deployment. Open an nginx-test.yaml file with your text editor; this tutorial will use nano:

      • nano nginx-test.yaml

      In this file, add the following lines to define the deployment with a PersistentVolumeClaim named nfs-data:

      nginx-test.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          app: web
        name: web
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: web
        strategy: {}
        template:
          metadata:
            creationTimestamp: null
            labels:
              app: web
          spec:
            containers:
            - image: nginx:latest
              name: nginx
              resources: {}
              volumeMounts:
              - mountPath: /data
                name: data
            volumes:
            - name: data
              persistentVolumeClaim:
                claimName: nfs-data
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nfs-data
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 2Gi
        storageClassName: nfs
      

      Save the file and exit the text editor.

      This deployment is configured to use the accompanying PersistentVolumeClaim nfs-data and to mount it at /data.

      In the PVC definition, you will find that the storageClassName is set to nfs. This tells the cluster to satisfy this storage using the rules of the nfs storageClass that you created in the previous step. The new PersistentVolumeClaim will be processed, and then an NFS share will be provisioned to satisfy the claim in the form of a Persistent Volume. The pod will attempt to mount that PVC once it has been provisioned. Once it has finished mounting, you will verify the ReadWriteMany (RWX) functionality.

      Run the deployment with the following command:

      • kubectl apply -f nginx-test.yaml

      This will give the following output:

      Output

      deployment.apps/web created
      persistentvolumeclaim/nfs-data created
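      If you’d like to watch the claim being fulfilled by the nfs storage class, an optional check is to inspect the PersistentVolumeClaim and the dynamically provisioned PersistentVolume; the claim should report a STATUS of Bound once the NFS share is ready:

      • kubectl get pvc nfs-data
      • kubectl get pv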

      Next, check to see the web pod spinning up:

      • kubectl get pods

      This will show the following:

      Output

      NAME                                  READY   STATUS    RESTARTS   AGE
      nfs-server-nfs-server-provisioner-0   1/1     Running   0          23m
      web-64965fc79f-b5v7w                  1/1     Running   0          4m

      Now that the example deployment is running, you can scale it out to three instances using the kubectl scale command:

      • kubectl scale deployment web --replicas=3

      This will give the following output:

      Output

      deployment.extensions/web scaled

      Now run the kubectl get command again:

      • kubectl get pods

      You will find the scaled-up instances of the deployment:

      Output

      NAME                                  READY   STATUS    RESTARTS   AGE
      nfs-server-nfs-server-provisioner-0   1/1     Running   0          24m
      web-64965fc79f-q9626                  1/1     Running   0          5m
      web-64965fc79f-qgd2w                  1/1     Running   0          17s
      web-64965fc79f-wcjxv                  1/1     Running   0          17s

      You now have three instances of your Nginx deployment that are connected to the same Persistent Volume. In the next step, you will make sure that they can share data with each other.

      Step 3 — Validating NFS Data Sharing

      For the final step, you will validate that data is shared across all of the instances that are mounted to the NFS share. To do this, you will create a file in the /data directory in one of the pods, then verify that the file exists in another pod’s /data directory.

      To validate this, you will use the kubectl exec command. This command lets you specify a pod and run a command inside that pod. To learn more about inspecting resources using kubectl, take a look at our kubectl Cheat Sheet.

      To create a file named hello_world within one of your web pods, use kubectl exec to pass along the touch command. Note that the number after web in the pod name will be different for you, so be sure to replace the highlighted pod name with one of your own pods, which you found as the output of kubectl get pods in the last step.

      • kubectl exec web-64965fc79f-q9626 -- touch /data/hello_world

      Then change the name of the pod and use the ls command to list the files in the /data directory of a different pod:

      • kubectl exec web-64965fc79f-qgd2w -- ls /data

      Your output will show the file you created within the first pod:

      Output

      hello_world

      This shows that all of the pods share data using NFS and that your setup is working properly.
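      As an additional check, you could also write real content from one pod and read it back from another. The pod names below are the ones from the earlier kubectl get pods output, so substitute your own:

      • kubectl exec web-64965fc79f-q9626 -- sh -c 'echo "hello from NFS" > /data/hello_world'
      • kubectl exec web-64965fc79f-wcjxv -- cat /data/hello_world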

      Conclusion

      In this tutorial, you created an NFS server that was backed by DigitalOcean Block Storage. The NFS server then used that block storage to provision and export NFS shares to workloads in an RWX-compatible protocol. In doing this, you were able to work around a technical limitation of DigitalOcean Block Storage and share the same PVC data across many pods. By following this tutorial, your DOKS cluster is now set up to accommodate a much wider set of deployment use cases.

      If you’d like to learn more about Kubernetes, check out our Kubernetes for Full-Stack Developers curriculum, or explore the product documentation for DigitalOcean Kubernetes.


