
      February 2020

      Understanding Generators in JavaScript

      The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.


      In ECMAScript 2015, generators were introduced to the JavaScript language. A generator is a process that can be paused and resumed and can yield multiple values. A generator in JavaScript consists of a generator function, which returns an iterable Generator object.

      Generators can maintain state, providing an efficient way to make iterators, and are capable of dealing with infinite data streams, which can be used to implement infinite scroll on the frontend of a web application, to operate on sound wave data, and more. Additionally, when used with Promises, generators can mimic the async/await functionality, which allows us to deal with asynchronous code in a more straightforward and readable manner. Although async/await is a more prevalent way to deal with common, simple asynchronous use cases, like fetching data from an API, generators have more advanced features that make learning how to use them worthwhile.

      In this article, we’ll cover how to create generator functions, how to iterate over Generator objects, the difference between yield and return inside a generator, and other aspects of working with generators.

      Generator Functions

      A generator function is a function that returns a Generator object, and is defined by the function keyword followed by an asterisk (*), as shown in the following:

      // Generator function declaration
      function* generatorFunction() {}

      Occasionally, you will see the asterisk next to the function name, as opposed to the function keyword, such as function *generatorFunction(). This works the same, but function* is a more widely accepted syntax.

      Generator functions can also be defined in an expression, like regular functions:

      // Generator function expression
      const generatorFunction = function*() {}

      Generators can even be the methods of an object or class:

      // Generator as the method of an object
      const generatorObj = {
        *generatorMethod() {},
      // Generator as the method of a class
      class GeneratorClass {
        *generatorMethod() {}

      The examples throughout this article will use the generator function declaration syntax.

      Note: Unlike regular functions, generators cannot be constructed with the new keyword, nor can they be used in conjunction with arrow functions.

      Now that you know how to declare generator functions, let’s look at the iterable Generator objects that they return.

      Generator Objects

      Traditionally, functions in JavaScript run to completion, and calling a function will return a value when it arrives at the return keyword. If the return keyword is omitted, a function will implicitly return undefined.
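For example, a function whose body never reaches a return hands back undefined (the function name here is just illustrative):

```javascript
// A function without a return statement implicitly returns undefined
function logGreeting(name) {
  console.log(`Hello, ${name}!`)
}

const result = logGreeting('Generator')
// result is undefined, even though the function ran to completion
```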

      In the following code, for example, we declare a sum() function that returns a value that is the sum of two integer arguments:

      // A regular function that sums two values
      function sum(a, b) {
        return a + b
      }

      Calling the function returns a value that is the sum of the arguments:

      const value = sum(5, 6) // 11

      A generator function, however, does not return a value immediately, and instead returns an iterable Generator object. In the following example, we declare a function and give it a single return value, like a standard function:

      // Declare a generator function with a single return value
      function* generatorFunction() {
        return 'Hello, Generator!'
      }

      When we invoke the generator function, it will return the Generator object, which we can assign to a variable:

      // Assign the Generator object to generator
      const generator = generatorFunction()

      If this were a regular function, we would expect generator to give us the string returned in the function. However, what we actually get is an object in a suspended state. Calling generator will therefore give output similar to the following:


      generatorFunction {<suspended>}
        __proto__: Generator
        [[GeneratorLocation]]: VM272:1
        [[GeneratorStatus]]: "suspended"
        [[GeneratorFunction]]: ƒ* generatorFunction()
        [[GeneratorReceiver]]: Window
        [[Scopes]]: Scopes[3]

      The Generator object returned by the function is an iterator. An iterator is an object that has a next() method available, which is used for iterating through a sequence of values. The next() method returns an object with value and done properties. value represents the returned value, and done indicates whether the iterator has run through all its values or not.

      Knowing this, let’s call next() on our generator and get the current value and state of the iterator:

      // Call the next method on the Generator object
      generator.next()

      This will give the following output:


      {value: "Hello, Generator!", done: true}

      The value returned from calling next() is Hello, Generator!, and the state of done is true, because this value came from a return that closed out the iterator. Since the iterator is done, the generator function’s status will change from suspended to closed. Calling generator again will give the following:


      generatorFunction {<closed>}

      As of right now, we’ve only demonstrated how a generator function can be a more complex way to get the return value of a function. But generator functions also have unique features that distinguish them from normal functions. In the next section, we’ll learn about the yield operator and see how a generator can pause and resume execution.

      yield Operators

      Generators introduce a new keyword to JavaScript: yield. yield can pause a generator function and return the value that follows yield, providing a lightweight way to iterate through values.

      In this example, we’ll pause the generator function three times with different values, and return a value at the end. Then we will assign our Generator object to the generator variable.

      // Create a generator function with multiple yields
      function* generatorFunction() {
        yield 'Neo'
        yield 'Morpheus'
        yield 'Trinity'

        return 'The Oracle'
      }

      const generator = generatorFunction()

      Now, when we call next() on the generator function, it will pause every time it encounters yield. done will be set to false after each yield, indicating that the generator has not finished. Once it encounters a return, or there are no more yields encountered in the function, done will flip to true, and the generator will be finished.

      Use the next() method four times in a row:

      // Call next four times
      generator.next()
      generator.next()
      generator.next()
      generator.next()

      These will give the following four lines of output in order:


      {value: "Neo", done: false}
      {value: "Morpheus", done: false}
      {value: "Trinity", done: false}
      {value: "The Oracle", done: true}

      Note that a generator does not require a return; if omitted, the last iteration will return {value: undefined, done: true}, as will any subsequent calls to next() after a generator has completed.
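To see this in action, here is a small sketch (namesOnly is an illustrative name) of a generator that yields once and has no return:

```javascript
// A generator with no return: the final next() reports undefined
function* namesOnly() {
  yield 'Neo'
}

const gen = namesOnly()

const first = gen.next() // {value: 'Neo', done: false}
const last = gen.next()  // {value: undefined, done: true}
const after = gen.next() // {value: undefined, done: true}
```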

      Iterating Over a Generator

      Using the next() method, we manually iterated through the Generator object, receiving all the value and done properties of the full object. However, just like Array, Map, and Set, a Generator follows the iteration protocol, and can be iterated through with for...of:

      // Iterate over Generator object
      for (const value of generator) {
        console.log(value)
      }

      This will return the following:


      Neo
      Morpheus
      Trinity

      The spread operator can also be used to assign the values of a Generator to an array.

      // Create an array from the values of a Generator object
      const values = [...generator]

      This will give the following array:


      (3) ["Neo", "Morpheus", "Trinity"]

      Note that neither spread nor for...of will include the return value in the generated values (in this case, it would have been 'The Oracle').

      Note: While both of these methods are effective for working with finite generators, if a generator is dealing with an infinite data stream, it won’t be possible to use spread or for...of directly without creating an infinite loop.
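As a sketch of one way around this, you can bound the iteration yourself with an explicit break, since breaking out of for...of also closes the generator (naturals here is a hypothetical infinite generator):

```javascript
// A hypothetical infinite generator of natural numbers
function* naturals() {
  let n = 0
  while (true) {
    yield n++
  }
}

// An explicit break bounds the iteration and closes the generator
const firstThree = []
for (const value of naturals()) {
  if (firstThree.length === 3) break
  firstThree.push(value)
}
// firstThree is [0, 1, 2]
```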

      Closing a Generator

      As we’ve seen, a generator can have its done property set to true and its status set to closed by iterating through all its values. There are two additional ways to immediately cancel a generator: with the return() method, and with the throw() method.

      With return(), the generator can be terminated at any point, just as if a return statement had been in the function body. You can pass an argument into return(), or leave it blank for an undefined value.

      To demonstrate return(), we’ll create a generator with a few yield values but no return in the function definition:

      function* generatorFunction() {
        yield 'Neo'
        yield 'Morpheus'
        yield 'Trinity'
      }

      const generator = generatorFunction()

      The first next() will give us 'Neo', with done set to false. If we invoke a return() method on the Generator object right after that, we’ll now get the passed value and done set to true. Any additional call to next() will give the default completed generator response with an undefined value.

      To demonstrate this, run the following three methods on generator:
      generator.next()
      generator.return('There is no spoon!')
      generator.next()

      This will give the three following results:


      {value: "Neo", done: false}
      {value: "There is no spoon!", done: true}
      {value: undefined, done: true}

      The return() method forced the Generator object to complete and to ignore any other yield keywords. This is particularly useful in asynchronous programming when you need to make functions cancelable, such as interrupting a web request when a user wants to perform a different action, as it is not possible to directly cancel a Promise.

      If the body of a generator function has a way to catch and deal with errors, you can use the throw() method to throw an error into the generator. This starts up the generator, throws the error in, and terminates the generator.

      To demonstrate this, we will put a try...catch inside the generator function body and log an error if one is found:

      // Define a generator function with a try...catch
      function* generatorFunction() {
        try {
          yield 'Neo'
          yield 'Morpheus'
        } catch (error) {
          console.log(error)
        }
      }

      // Invoke the generator and throw an error
      const generator = generatorFunction()

      Now, we will run the next() method, followed by throw():
      generator.next()
      generator.throw(new Error('Agent Smith!'))

      This will give the following output:


      {value: "Neo", done: false}
      Error: Agent Smith!
      {value: undefined, done: true}

      Using throw(), we injected an error into the generator, which was caught by the try...catch and logged to the console.
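Conversely, if the generator body has no try...catch, the error thrown with throw() propagates back out to the calling code, and the generator still terminates. A minimal sketch (unguarded is an illustrative name):

```javascript
// Without a try...catch in the generator body, throw() surfaces at the call site
function* unguarded() {
  yield 'Neo'
}

const gen = unguarded()
gen.next()

let caught = null
try {
  gen.throw(new Error('Agent Smith!'))
} catch (error) {
  caught = error.message // the error propagates to the caller
}

// The generator is now closed
const final = gen.next() // {value: undefined, done: true}
```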

      Generator Object Methods and States

      The following table shows a list of methods that can be used on Generator objects:

      Method      Description
      next()      Returns the next value in a generator
      return()    Returns a value in a generator and finishes the generator
      throw()     Throws an error and finishes the generator

      The next table lists the possible states of a Generator object:

      Status      Description
      suspended   Generator has halted execution but has not terminated
      closed      Generator has terminated by either encountering an error, returning, or iterating through all values

      yield Delegation

      In addition to the regular yield operator, generators can also use the yield* expression to delegate further values to another generator. When the yield* is encountered within a generator, it will go inside the delegated generator and begin iterating through all the yields until that generator is closed. This can be used to separate different generator functions to semantically organize your code, while still having all their yields be iterable in the right order.

      To demonstrate, we can create two generator functions, one of which will delegate to the other with yield*:

      // Generator function that will be delegated to
      function* delegate() {
        yield 3
        yield 4
      }

      // Outer generator function
      function* begin() {
        yield 1
        yield 2
        yield* delegate()
      }

      Next, let’s iterate through the begin() generator function:

      // Iterate through the outer generator
      const generator = begin()

      for (const value of generator) {
        console.log(value)
      }

      This will give the following values in the order they are generated:


      1
      2
      3
      4

      The outer generator yielded the values 1 and 2, then delegated to the other generator with yield*, which yielded 3 and 4.

      yield* can also delegate to any object that is iterable, such as an Array or a Map. Yield delegation can be helpful in organizing code, since any function within a generator that wanted to use yield would also have to be a generator.
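For instance, here is a short sketch (mixed is an illustrative name) of a generator delegating to an Array and a String, both of which are iterable:

```javascript
// yield* can delegate to any iterable, not just generators
function* mixed() {
  yield* [1, 2] // delegate to an Array
  yield* 'ab'   // delegate to a String, one character at a time
  yield 3
}

const values = [...mixed()] // [1, 2, 'a', 'b', 3]
```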

      Infinite Data Streams

      One of the useful aspects of generators is the ability to work with infinite data streams and collections. This can be demonstrated by creating an infinite loop inside a generator function that increments a number by one.

      In the following code block, we define this generator function and then initiate the generator:

      // Define a generator function that increments by one
      function* incrementer() {
        let i = 0

        while (true) {
          yield i++
        }
      }

      // Initiate the generator
      const counter = incrementer()

      Now, iterate through the values using next():

      // Iterate through the values
      counter.next()
      counter.next()
      counter.next()
      counter.next()

      This will give the following output:


      {value: 0, done: false}
      {value: 1, done: false}
      {value: 2, done: false}
      {value: 3, done: false}

      The function returns successive values in the infinite loop while the done property remains false, ensuring that it will not finish.

      With generators, you don’t have to worry about creating an infinite loop, because you can halt and resume execution at will. However, you still must be cautious with how you invoke the generator. If you use spread or for...of on an infinite data stream, you will still be iterating over an infinite loop all at once, which will cause the environment to crash.
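One common safeguard, sketched here under the assumption that you only ever need a finite prefix of the stream, is a take() helper generator (our own name, not a built-in) that stops after n values:

```javascript
// A helper generator that yields at most n values from any iterable,
// making infinite generators safe to spread
function* take(n, iterable) {
  let taken = 0
  for (const value of iterable) {
    if (taken++ >= n) return
    yield value
  }
}

// An infinite generator, as in the section above
function* incrementer() {
  let i = 0
  while (true) {
    yield i++
  }
}

const firstFive = [...take(5, incrementer())] // [0, 1, 2, 3, 4]
```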

      For a more complex example of an infinite data stream, we can create a Fibonacci generator function. The Fibonacci sequence, which continuously adds the two previous values together, can be written using an infinite loop within a generator as follows:

      // Create a fibonacci generator function
      function* fibonacci() {
        let prev = 0
        let next = 1

        yield prev
        yield next

        // Add previous and next values and yield them forever
        while (true) {
          const newVal = next + prev

          yield newVal

          prev = next
          next = newVal
        }
      }

      To test this out, we can loop through a finite number and print the Fibonacci sequence to the console.

      // Print the first 10 values of fibonacci
      const fib = fibonacci()

      for (let i = 0; i < 10; i++) {
        console.log(fib.next().value)
      }

      This will give the following:


      0
      1
      1
      2
      3
      5
      8
      13
      21
      34

      The ability to work with infinite data sets is one part of what makes generators so powerful. This can be useful for examples like implementing infinite scroll on the frontend of a web application.

      Passing Values in Generators

      Throughout this article, we’ve used generators as iterators, and we’ve yielded values in each iteration. In addition to producing values, generators can also consume values from next(). In this case, the yield expression will take on the value passed in.

      It’s important to note that the first next() that is called will not pass a value, but will only start the generator. To demonstrate this, we can log the value of yield and call next() a few times with some values.

      // Log the values passed into the generator through next()
      function* generatorFunction() {
        console.log(yield)
        console.log(yield)

        return 'The end'
      }

      const generator = generatorFunction()

      generator.next()
      generator.next(100)
      generator.next(200)

      This will give the following output:


      100
      200
      {value: "The end", done: true}

      It is also possible to seed the generator with an initial value. In the following example, we’ll make a for loop and pass each value into the next() method, but pass an argument to the initial function as well:

      // Define a generator function that multiplies incoming values by ten
      function* generatorFunction(value) {
        while (true) {
          value = yield value * 10
        }
      }

      // Initiate a generator and seed it with an initial value
      const generator = generatorFunction(0)

      for (let i = 0; i < 5; i++) {
        console.log(generator.next(i).value)
      }

      We’ll retrieve the value from next() and yield a new value to the next iteration, which is the previous value times ten. This will give the following:


      0
      10
      20
      30
      40

      Another way to deal with starting up a generator is to wrap the generator in a function that will always call next() once before doing anything else.
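As a sketch of that pattern (primed and makeCollector are our own illustrative names, not built-ins), the wrapper calls next() once up front, so callers can pass a value on their very first call:

```javascript
// A wrapper that primes a generator by calling next() once,
// so the first call from outside can already pass a value
function primed(generatorFunction) {
  return function(...args) {
    const generator = generatorFunction(...args)
    generator.next() // advance to the first yield
    return generator
  }
}

// A collector that stores every value passed in through next()
const makeCollector = primed(function*() {
  const received = []
  while (true) {
    received.push(yield received)
  }
})

const collector = makeCollector()
const collected = collector.next('a').value // ['a'] — no manual priming call needed
```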

      async/await with Generators

      An asynchronous function is a type of function available in ES6+ JavaScript that makes working with asynchronous data easier to understand by making it appear synchronous. Generators have a more extensive array of capabilities than asynchronous functions, but are capable of replicating similar behavior. Implementing asynchronous programming in this way can increase the flexibility of your code.

      In this section, we will demonstrate an example of reproducing async/await with generators.

      Let’s build an asynchronous function that uses the Fetch API to get data from the JSONPlaceholder API (which provides example JSON data for testing purposes) and logs the response to the console.

      Start out by defining an asynchronous function called getUsers that fetches data from the API and returns an array of objects, then call getUsers:

      const getUsers = async function() {
        const response = await fetch('https://jsonplaceholder.typicode.com/users')
        const json = await response.json()

        return json
      }

      // Call the getUsers function and log the response
      getUsers().then(response => console.log(response))

      This will give JSON data similar to the following:


      [
        {id: 1, name: "Leanne Graham" ...},
        {id: 2, name: "Ervin Howell" ...},
        {id: 3, name: "Clementine Bauch" ...},
        {id: 4, name: "Patricia Lebsack" ...},
        {id: 5, name: "Chelsey Dietrich" ...},
        ...
      ]

      Using generators, we can create something almost identical that does not use the async/await keywords. Instead, it will use a new function we create, and yield values instead of awaiting promises.

      In the following code block, we define a function called getUsers that uses our new asyncAlt function (which we will write later on) to mimic async/await.

      const getUsers = asyncAlt(function*() {
        const response = yield fetch('https://jsonplaceholder.typicode.com/users')
        const json = yield response.json()

        return json
      })

      // Invoking the function
      getUsers().then(response => console.log(response))

      As we can see, it looks almost identical to the async/await implementation, except that there is a generator function being passed in that yields values.

      Now we can create an asyncAlt function that resembles an asynchronous function. asyncAlt has a generator function as a parameter, which is our function that yields the promises that fetch returns. asyncAlt returns a function itself, and resolves every promise it finds until the last one:

      // Define a function named asyncAlt that takes a generator function as an argument
      function asyncAlt(generatorFunction) {
        // Return a function
        return function() {
          // Create and assign the generator object
          const generator = generatorFunction()

          // Define a function that accepts the next iteration of the generator
          function resolve(next) {
            // If the generator is closed and there are no more values to yield,
            // resolve the last value
            if (next.done) {
              return Promise.resolve(next.value)
            }

            // If there are still values to yield, they are promises and
            // must be resolved.
            return Promise.resolve(next.value).then(response => {
              return resolve(generator.next(response))
            })
          }

          // Begin resolving promises
          return resolve(generator.next())
        }
      }

      This will give the same output as the async/await version:


      [
        {id: 1, name: "Leanne Graham" ...},
        {id: 2, name: "Ervin Howell" ...},
        {id: 3, name: "Clementine Bauch" ...},
        {id: 4, name: "Patricia Lebsack" ...},
        {id: 5, name: "Chelsey Dietrich" ...},
        ...
      ]

      Note that this implementation is for demonstrating how generators can be used in place of async/await, and is not a production-ready design. It does not have error handling set up, nor does it have the ability to pass parameters into the yielded values. Though this method can add flexibility to your code, often async/await will be a better choice, since it abstracts implementation details away and lets you focus on writing productive code.
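As a sketch of how error handling could be added (asyncAltWithErrors is our own illustrative name), rejected promises can be forwarded back into the generator with throw(), where an ordinary try...catch can recover:

```javascript
// A variant of the pattern above that forwards rejections into the
// generator, so a try...catch inside it can handle them
function asyncAltWithErrors(generatorFunction) {
  return function(...args) {
    const generator = generatorFunction(...args)

    function step(next) {
      if (next.done) {
        return Promise.resolve(next.value)
      }
      return Promise.resolve(next.value).then(
        response => step(generator.next(response)),
        error => step(generator.throw(error)) // re-enter at the failed yield
      )
    }

    return step(generator.next())
  }
}

// Usage: the rejection surfaces inside the generator's try...catch
const run = asyncAltWithErrors(function*() {
  try {
    yield Promise.reject(new Error('network down'))
    return 'unreachable'
  } catch (error) {
    return `recovered: ${error.message}`
  }
})
```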


      Conclusion

      Generators are processes that can halt and resume execution. They are a powerful, versatile feature of JavaScript, although they are not commonly used. In this tutorial, we learned about generator functions and generator objects, methods available to generators, the yield and yield* operators, and generators used with finite and infinite data sets. We also explored one way to implement asynchronous code without nested callbacks or long promise chains.

      If you would like to learn more about JavaScript syntax, take a look at our Understanding This, Bind, Call, and Apply in JavaScript and Understanding Map and Set Objects in JavaScript tutorials.


      Recommended Steps to Secure a DigitalOcean Kubernetes Cluster

      The author selected Open Sourcing Mental Illness to receive a donation as part of the Write for DOnations program.


      Kubernetes, the open-source container orchestration platform, is steadily becoming the preferred solution for automating, scaling, and managing high-availability clusters. As a result of its increasing popularity, Kubernetes security has become more and more relevant.

      Considering the moving parts involved in Kubernetes and the variety of deployment scenarios, securing Kubernetes can sometimes be complex. Because of this, the objective of this article is to provide a solid security foundation for a DigitalOcean Kubernetes (DOKS) cluster. Note that this tutorial covers basic security measures for Kubernetes, and is meant to be a starting point rather than an exhaustive guide. For additional steps, see the official Kubernetes documentation.

      In this guide, you will take basic steps to secure your DigitalOcean Kubernetes cluster. You will configure secure local authentication with TLS/SSL certificates, grant permissions to local users with Role-based access controls (RBAC), grant permissions to Kubernetes applications and deployments with service accounts, and set up resource limits with the ResourceQuota and LimitRange admission controllers.


      Prerequisites

      In order to complete this tutorial you will need:

      • A DigitalOcean Kubernetes (DOKS) managed cluster with 3 Standard nodes configured with at least 2 GB RAM and 1 vCPU each. For detailed instructions on how to create a DOKS cluster, read our Kubernetes Quickstart guide. This tutorial uses DOKS version 1.16.2-do.1.
      • A local client configured to manage the DOKS cluster, with a cluster configuration file downloaded from the DigitalOcean Control Panel and saved as ~/.kube/config. For detailed instructions on how to configure remote DOKS management, read our guide How to Connect to a DigitalOcean Kubernetes Cluster. In particular, you will need:
        • The kubectl command-line interface installed on your local machine. You can read more about installing and configuring kubectl in its official documentation. This tutorial will use kubectl version 1.17.0-00.
        • The official DigitalOcean command-line tool, doctl. For instructions on how to install this, see the doctl GitHub page. This tutorial will use doctl version 1.36.0.

      Step 1 — Enabling Remote User Authentication

      After completing the prerequisites, you will end up with one Kubernetes superuser that authenticates through a predefined DigitalOcean bearer token. However, sharing those credentials is not a good security practice, since this account can cause large-scale and possibly destructive changes to your cluster. To mitigate this possibility, you can set up additional users to be authenticated from their respective local clients.

      In this section, you will authenticate new users to the remote DOKS cluster from local clients using secure SSL/TLS certificates. This will be a three-step process: First, you will create Certificate Signing Requests (CSR) for each user, then you will approve those certificates directly in the cluster through kubectl. Finally, you will build each user a kubeconfig file with the appropriate certificates. For more information regarding additional authentication methods supported by Kubernetes, refer to the Kubernetes authentication documentation.

      Creating Certificate Signing Requests for New Users

      Before starting, check the DOKS cluster connection from the local machine configured during the prerequisites:

      • kubectl cluster-info

      Depending on your configuration, the output will be similar to this one:


      Kubernetes master is running at https://...
      CoreDNS is running at https://...

      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

      This means that you are connected to the DOKS cluster.

      Next, create a local folder for the client’s certificates. For the purpose of this guide, ~/certs will be used to store all certificates:

      • mkdir ~/certs

      In this tutorial, we will authorize a new user called sammy to access the cluster. Feel free to change this to a user of your choice. Using the SSL and TLS library OpenSSL, generate a new private key for your user using the following command:

      • openssl genrsa -out ~/certs/sammy.key 4096

      The -out flag will make the output file ~/certs/sammy.key, and 4096 sets the key as 4096-bit. For more information on OpenSSL, see our OpenSSL Essentials guide.

      Now, create a certificate signing request configuration file. Open the following file with a text editor (for this tutorial, we will use nano):

      • nano ~/certs/sammy.csr.cnf

      Add the following content into the sammy.csr.cnf file to specify in the subject the desired username as common name (CN), and the group as organization (O):


      [ req ]
      default_bits = 2048
      prompt = no
      default_md = sha256
      distinguished_name = dn

      [ dn ]
      CN = sammy
      O = developers

      [ v3_ext ]
      extendedKeyUsage=serverAuth,clientAuth

      The certificate signing request configuration file contains all the necessary information: the user’s identity and the proper usage parameters. The last parameter, extendedKeyUsage=serverAuth,clientAuth, will allow users to authenticate their local clients with the DOKS cluster using the certificate once it’s signed.

      Next, create the sammy certificate signing request:

      • openssl req -config ~/certs/sammy.csr.cnf -new -key ~/certs/sammy.key -nodes -out ~/certs/sammy.csr

      The -config flag lets you specify the configuration file for the CSR, and -new signals that you are creating a new CSR for the key specified by -key.

      You can check your certificate signing request by running the following command:

      • openssl req -in ~/certs/sammy.csr -noout -text

      Here you pass in the CSR with -in and use -text to print out the certificate request in text.

      The output will show the certificate request, the beginning of which will look like this:


      Certificate Request: Data: Version: 1 (0x0) Subject: CN = sammy, O = developers Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (4096 bit) ...

      Repeat the same procedure to create CSRs for any additional users. Once you have all certificate signing requests saved in the administrator’s ~/certs folder, proceed with the next step to approve them.

      Managing Certificate Signing Requests with the Kubernetes API

      You can either approve or deny TLS certificate requests sent to the Kubernetes API by using the kubectl command-line tool. This gives you the ability to ensure that the requested access is appropriate for the given user. In this section, you will send the certificate request for sammy and approve it.

      To send a CSR to the DOKS cluster use the following command:

      cat <<EOF | kubectl apply -f -
      apiVersion: certificates.k8s.io/v1beta1
      kind: CertificateSigningRequest
      metadata:
        name: sammy-authentication
      spec:
        groups:
        - system:authenticated
        request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n')
        usages:
        - digital signature
        - key encipherment
        - server auth
        - client auth
      EOF

      Using a Bash here document, this command uses cat to pass the certificate request to kubectl apply.

      Let’s take a closer look at the certificate request:

      • name: sammy-authentication creates a metadata identifier, in this case called sammy-authentication.
      • request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n') sends the sammy.csr certificate signing request to the cluster encoded as Base64, with newlines stripped.
      • server auth and client auth specify the intended usage of the certificate. In this case, the purpose is user authentication.

      The output will look similar to this:

      certificatesigningrequest.certificates.k8s.io/sammy-authentication created

      You can check the certificate signing request status using the command:

      • kubectl get csr

      Depending on your cluster configuration, the output will be similar to this:


      NAME                   AGE   REQUESTOR       CONDITION
      sammy-authentication   37s   your_DO_email   Pending

      Next, approve the CSR by using the command:

      • kubectl certificate approve sammy-authentication

      You will get a message confirming the operation:

      certificatesigningrequest.certificates.k8s.io/sammy-authentication approved

      Note: As an administrator you can also deny a CSR by using the command kubectl certificate deny sammy-authentication. For more information about managing TLS certificates, please read Kubernetes official documentation.

      Now that the CSR is approved, you can download the signed certificate to the local machine by running:

      • kubectl get csr sammy-authentication -o jsonpath='{.status.certificate}' | base64 --decode > ~/certs/sammy.crt

      This command decodes the Base64 certificate for proper usage by kubectl, then saves it as ~/certs/sammy.crt.

      With the sammy signed certificate in hand, you can now build the user’s kubeconfig file.

      Building the Remote User’s Kubeconfig

      Next, you will create a specific kubeconfig file for the sammy user. This will give you more control over the user’s access to your cluster.

      The first step in building a new kubeconfig is making a copy of the current kubeconfig file. For the purpose of this guide, the new kubeconfig file will be called config-sammy:

      • cp ~/.kube/config ~/.kube/config-sammy

      Next, edit the new file:

      • nano ~/.kube/config-sammy

      Keep the first eight lines of this file, as they contain the necessary information for the SSL/TLS connection with the cluster. Then, starting from the user parameter, replace the text so that the file looks similar to this:


      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: certificate_data
        name: do-nyc1-do-cluster
      contexts:
      - context:
          cluster: do-nyc1-do-cluster
          user: sammy
        name: do-nyc1-do-cluster
      current-context: do-nyc1-do-cluster
      kind: Config
      preferences: {}
      users:
      - name: sammy
        user:
          client-certificate: /home/your_local_user/certs/sammy.crt
          client-key: /home/your_local_user/certs/sammy.key

      Note: For both client-certificate and client-key, use the absolute path to their corresponding certificate location. Otherwise, kubectl will produce an error.

      Save and exit the file.

      You can test the new user connection using kubectl cluster-info:

      • kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy cluster-info

      You will see an error similar to this:


To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Error from server (Forbidden): services is forbidden: User "sammy" cannot list resource "services" in API group "" in the namespace "kube-system"

      This error is expected because the user sammy has no authorization to list any resource on the cluster yet. Granting authorization to users will be covered in the next step. For now, the output is confirming that the SSL/TLS connection was successful and the sammy authentication credentials were accepted by the Kubernetes API.

      Step 2 — Authorizing Users Through Role Based Access Control (RBAC)

Once a user is authenticated, the API determines their permissions using the built-in Kubernetes Role-Based Access Control (RBAC) model. RBAC is an effective method of restricting user rights based on the roles assigned to a user. From a security point of view, RBAC allows setting fine-grained permissions to limit users from accessing sensitive data or executing superuser-level commands. For more detailed information regarding user roles, refer to the Kubernetes RBAC documentation.

      In this step, you will use kubectl to assign the predefined role edit to the user sammy in the default namespace. In a production environment, you may want to use custom roles and/or custom role bindings.

      Granting Permissions

      In Kubernetes, granting permissions means assigning the desired role to a user. Assign edit permissions to the user sammy in the default namespace using the following command:

      • kubectl create rolebinding sammy-edit-role --clusterrole=edit --user=sammy --namespace=default

      This will give output similar to the following:

rolebinding.rbac.authorization.k8s.io/sammy-edit-role created

      Let’s analyze this command in more detail:

      • create rolebinding sammy-edit-role creates a new role binding, in this case called sammy-edit-role.
      • --clusterrole=edit assigns the predefined role edit at a global scope (cluster role).
      • --user=sammy specifies what user to bind the role to.
      • --namespace=default grants the user role permissions within the specified namespace, in this case default.
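The same binding can also be expressed declaratively. The following manifest is a sketch of what kubectl create rolebinding generates under the hood, using the rbac.authorization.k8s.io/v1 API:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sammy-edit-role
  namespace: default
subjects:
- kind: User
  name: sammy
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

Applying a file like this with kubectl apply -f is equivalent to the imperative command above, and keeps the binding under version control.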

Next, verify the user's permissions by checking whether sammy can list pods in the default namespace. RBAC authorization is working as expected if the command returns yes.

      • kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy auth can-i get pods

You will get the following output:

yes
Now that you have assigned permissions to sammy, you can practice revoking those permissions in the next section.

      Revoking Permissions

      Revoking permissions in Kubernetes is done by removing the user role binding.

      For this tutorial, delete the edit role from the user sammy by running the following command:

      • kubectl delete rolebinding sammy-edit-role

      You will get the following output:

rolebinding.rbac.authorization.k8s.io "sammy-edit-role" deleted

Verify that the user's permissions were revoked as expected by listing the pods in the default namespace:

• kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy --namespace=default get pods

      You will receive the following error:


      Error from server (Forbidden): pods is forbidden: User "sammy" cannot list resource "pods" in API group "" in the namespace "default"

      This shows that the authorization has been revoked.

From a security standpoint, the Kubernetes authorization model gives cluster administrators the flexibility to change users' rights on demand as required. Moreover, role-based access control is not limited to physical users; you can also grant and revoke permissions for cluster services, as you will learn in the next section.

      For more information about RBAC authorization and how to create custom roles, please read the official documentation.

      Step 3 — Managing Application Permissions with Service Accounts

As mentioned in the previous section, RBAC authorization mechanisms extend beyond human users. Non-human cluster users, such as applications, services, and processes running inside pods, authenticate with the API server using what Kubernetes calls service accounts (SAs). When a pod is created within a namespace, you can either let it use the default service account or define a service account of your choice. The ability to assign individual SAs to applications and processes gives administrators the freedom to grant or revoke permissions as required. Moreover, assigning specific SAs to production-critical applications is considered a security best practice. Since service accounts are used for authentication, and thus for RBAC authorization checks, cluster administrators can contain security threats by changing service account access rights and isolating the offending process.

      To demonstrate service accounts, this tutorial will use an Nginx web server as a sample application.

      Before assigning a particular SA to your application, you need to create the SA. Create a new service account called nginx-sa in the default namespace:

      • kubectl create sa nginx-sa

      You will get:


      serviceaccount/nginx-sa created

Verify that the service account was created by running the following:

• kubectl get sa

This will give you a list of your service accounts:


NAME       SECRETS   AGE
default    1         22h
nginx-sa   1         80s

      Now you will assign a role to the nginx-sa service account. For this example, grant nginx-sa the same permissions as the sammy user:

• kubectl create rolebinding nginx-sa-edit \
    --clusterrole=edit \
    --serviceaccount=default:nginx-sa \
    --namespace=default

      Running this will yield the following:

rolebinding.rbac.authorization.k8s.io/nginx-sa-edit created

      This command uses the same format as for the user sammy, except for the --serviceaccount=default:nginx-sa flag, where you assign the nginx-sa service account in the default namespace.

Check that the role binding was successful using this command:

• kubectl get rolebinding nginx-sa-edit

This will give the following output:


NAME            AGE
nginx-sa-edit   23s

      Once you’ve confirmed that the role binding for the service account was successfully configured, you can assign the service account to an application. Assigning a particular service account to an application will allow you to manage its access rights in real-time and therefore enhance cluster security.

      For the purpose of this tutorial, an nginx pod will serve as the sample application. Create the new pod and specify the nginx-sa service account with the following command:

      • kubectl run nginx --image=nginx --port 80 --serviceaccount="nginx-sa"

The first portion of the command creates a new pod running an Nginx web server on port 80, and the last portion, --serviceaccount="nginx-sa", indicates that this pod should use the nginx-sa service account rather than the default SA.

      This will give you output similar to the following:


      deployment.apps/nginx created

      Verify that the new application is using the service account by using kubectl describe:

      • kubectl describe deployment nginx

      This will output a lengthy description of the deployment parameters. Under the Pod Template section, you will see output similar to this:


...
Pod Template:
  Labels:           run=nginx
  Service Account:  nginx-sa
...
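Alternatively, the service account can be set declaratively in a pod manifest via the serviceAccountName field; a minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  serviceAccountName: nginx-sa
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

This is the form you would commit to version control when managing workloads declaratively.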

      In this section, you created the nginx-sa service account in the default namespace and assigned it to the nginx webserver. Now you can control nginx permissions in real-time by changing its role as needed. You can also group applications by assigning the same service account to each one and then make bulk changes to permissions. Finally, you could isolate critical applications by assigning them a unique SA.

      Summing up, the idea behind assigning roles to your applications/deployments is to fine-tune permissions. In real-world production environments, you may have several deployments requiring different permissions ranging from read-only to full administrative privileges. Using RBAC brings you the flexibility to restrict the access to the cluster as needed.

      Next, you will set up admission controllers to control resources and safeguard against resource starvation attacks.

      Step 4 — Setting Up Admission Controllers

      Kubernetes admission controllers are optional plug-ins that are compiled into the kube-apiserver binary to broaden security options. Admission controllers intercept requests after they pass the authentication and authorization phase. Once the request is intercepted, admission controllers execute the specified code just before the request is applied.

      While the outcome of either an authentication or authorization check is a boolean that allows or denies the request, admission controllers can be much more diverse. Admission controllers can validate requests in the same manner as authentication, but can also mutate or change the requests and modify objects before they are admitted.

In this step, you will use the ResourceQuota and LimitRange admission controllers to protect your cluster by mutating requests that could contribute to a resource starvation or Denial-of-Service attack. The ResourceQuota admission controller allows administrators to restrict computing resources, storage resources, and the quantity of any object within a namespace, while the LimitRange admission controller limits the amount of resources used by individual containers. Using these two admission controllers together will protect your cluster from attacks that render your resources unavailable.

      To demonstrate how ResourceQuota works, you will implement a few restrictions in the default namespace. Start by creating a new ResourceQuota object file:

      • nano resource-quota-default.yaml

      Add in the following object definition to set constraints for resource consumption in the default namespace. You can adjust the values as needed depending on your nodes’ physical resources:


apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-default
spec:
  hard:
    pods: "2"
    requests.cpu: "500m"
    requests.memory: 1Gi
    limits.cpu: "1000m"
    limits.memory: 2Gi
    configmaps: "5"
    persistentvolumeclaims: "2"
    replicationcontrollers: "10"
    secrets: "3"
    services: "4"
    services.loadbalancers: "2"

This definition uses the hard keyword to set hard constraints, such as the maximum number of pods, ConfigMaps, PersistentVolumeClaims, ReplicationControllers, Secrets, Services, and LoadBalancers. It also sets constraints on compute resources:

      • requests.cpu, which sets the maximum CPU value of requests in milliCPU, or one thousandth of a CPU core.
      • requests.memory, which sets the maximum memory value of requests in bytes.
      • limits.cpu, which sets the maximum CPU value of limits in milliCPUs.
      • limits.memory, which sets the maximum memory value of limits in bytes.

      Save and exit the file.
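If the milliCPU and binary-suffix notation above is unfamiliar, the following Python sketch (illustrative only, not the actual Kubernetes quantity parser) shows how those strings map to base units:

```python
# Convert Kubernetes-style resource quantity strings to base units.
def parse_cpu(quantity: str) -> float:
    """CPU cores: '500m' -> 0.5, '2' -> 2.0 (m = milliCPU, 1/1000 core)."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """Bytes for binary suffixes: '1Gi' -> 1073741824, '100Mi' -> 104857600."""
    binary = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    for suffix, factor in binary.items():
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    return int(quantity)  # plain bytes

print(parse_cpu("500m"))    # 0.5
print(parse_memory("1Gi"))  # 1073741824
```

So requests.cpu: "500m" caps the namespace's total CPU requests at half a core, and requests.memory: 1Gi caps total memory requests at one gibibyte.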

      Now, create the object in the namespace running the following command:

      • kubectl create -f resource-quota-default.yaml --namespace=default

      This will yield the following:


      resourcequota/resource-quota-default created

      Notice that you are using the -f flag to indicate to Kubernetes the location of the ResourceQuota file and the --namespace flag to specify which namespace will be updated.

      Once the object has been created, your ResourceQuota will be active. You can check the default namespace quotas with describe quota:

      • kubectl describe quota --namespace=default

      The output will look similar to this, with the hard limits you set in the resource-quota-default.yaml file:


Name:                   resource-quota-default
Namespace:              default
Resource                Used  Hard
--------                ----  ----
configmaps              0     5
limits.cpu              0     1
limits.memory           0     2Gi
persistentvolumeclaims  0     2
pods                    1     2
replicationcontrollers  0     10
requests.cpu            0     500m
requests.memory         0     1Gi
secrets                 2     3
services                1     4
services.loadbalancers  0     2

ResourceQuotas are expressed in absolute units, so adding additional nodes will not automatically increase the values defined here. If more nodes are added, you will need to manually edit the values to keep them proportional to the added capacity. ResourceQuotas can be modified as often as you need, and can be deleted like any other object with kubectl delete.

      If you need to modify a particular ResourceQuota, update the corresponding .yaml file and apply the changes using the following command:

      • kubectl apply -f resource-quota-default.yaml --namespace=default

      For more information regarding the ResourceQuota Admission Controller, refer to the official documentation.

      Now that your ResourceQuota is set up, you will move on to configuring the LimitRange Admission Controller. Similar to how the ResourceQuota enforces limits on namespaces, the LimitRange enforces the limitations declared by validating and mutating containers.

      In a similar way to before, start by creating the object file:

      • nano limit-range-default.yaml

      Now, you can use the LimitRange object to restrict resource usage as needed. Add the following content as an example of a typical use case:


apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-default
spec:
  limits:
  - max:
      cpu: "400m"
      memory: "1Gi"
    min:
      cpu: "100m"
      memory: "100Mi"
    default:
      cpu: "250m"
      memory: "800Mi"
    defaultRequest:
      cpu: "150m"
      memory: "256Mi"
    type: Container

The sample values used in limit-range-default.yaml restrict container memory to a maximum of 1Gi and CPU usage to a maximum of 400m, a Kubernetes metric equivalent to 400 milliCPU, meaning the container is limited to using a bit less than half of a CPU core.

      Next, deploy the object to the API server using the following command:

      • kubectl create -f limit-range-default.yaml --namespace=default

      This will give the following output:


      limitrange/limit-range-default created

      Now you can check the new limits with following command:

      • kubectl describe limits --namespace=default

      Your output will look similar to this:


Name:       limit-range-default
Namespace:  default
Type        Resource  Min    Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---   ---------------  -------------  -----------------------
Container   cpu       100m   400m  150m             250m           -
Container   memory    100Mi  1Gi   256Mi            800Mi          -

      To see LimitRanger in action, deploy a standard nginx container with the following command:

      • kubectl run nginx --image=nginx --port=80 --restart=Never

      This will give the following output:


      pod/nginx created

      Check how the admission controller mutated the container by running the following command:

      • kubectl get pod nginx -o yaml

      This will give many lines of output. Look in the container specification section to find the resource limits specified in the LimitRange Admission Controller:


...
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      limits:
        cpu: 250m
        memory: 800Mi
      requests:
        cpu: 150m
        memory: 256Mi
...

This is the same result as if you had manually declared the resources and requests in the container specification.
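The mutation the admission controller performed can be approximated in a few lines of Python. This is an illustrative sketch of the defaulting logic, not Kubernetes source code, and the default values are taken from limit-range-default.yaml above:

```python
# Defaults as declared in limit-range-default.yaml.
DEFAULT_REQUEST = {"cpu": "150m", "memory": "256Mi"}
DEFAULT_LIMIT = {"cpu": "250m", "memory": "800Mi"}

def apply_limit_range(container: dict) -> dict:
    """Fill in missing requests/limits, as the LimitRange plugin does."""
    resources = container.setdefault("resources", {})
    resources.setdefault("requests", dict(DEFAULT_REQUEST))
    resources.setdefault("limits", dict(DEFAULT_LIMIT))
    return container

# A container spec submitted without any resources section...
nginx = {"name": "nginx", "image": "nginx"}
# ...comes back with the LimitRange defaults applied.
mutated = apply_limit_range(nginx)
print(mutated["resources"]["limits"])  # {'cpu': '250m', 'memory': '800Mi'}
```

Containers that declare their own requests or limits keep them; only missing fields are defaulted (and the real controller additionally validates them against min and max).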

In this step, you used the ResourceQuota and LimitRange admission controllers to protect against malicious attacks on your cluster's resources. For more information about the LimitRange admission controller, read the official documentation.


Conclusion

Throughout this guide, you configured a basic Kubernetes security template. This established user authentication and authorization, application privileges, and cluster resource protection. Combining all the suggestions covered in this article, you will have a solid foundation for a production Kubernetes cluster deployment. From there, you can start hardening individual aspects of your cluster depending on your scenario.

      If you would like to learn more about Kubernetes, check out our Kubernetes resource page, or follow our Kubernetes for Full-Stack Developers self-guided course.


How To Install Anaconda on Ubuntu 18.04 [Quickstart]


An open-source platform designed for data science and machine learning workflows, Anaconda is a package manager, an environment manager, and a distribution of the Python and R programming languages.

This tutorial will walk you through installing Anaconda on an Ubuntu 18.04 server. For a more detailed version of this tutorial, with fuller explanations of each step, please refer to How To Install the Anaconda Python Distribution on Ubuntu 18.04.

Step 1 — Retrieving the Latest Version of Anaconda

From a web browser, go to the Anaconda distribution page, available via the following link:

Find the latest Linux version and copy the link to the installer bash script.

Step 2 — Downloading the Anaconda Bash Script

Logged into your Ubuntu 18.04 server as a non-root user with sudo privileges, move into the /tmp directory and use curl to download the link you copied from the Anaconda website:

      • cd /tmp
      • curl -O

Step 3 — Verifying the Data Integrity of the Installer

Verify the integrity of the installer with cryptographic hash verification through the SHA-256 checksum:

      • sha256sum
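What sha256sum computes can be reproduced with Python's hashlib; a small sketch (the file path is a placeholder for the downloaded installer script):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the checksum published on the Anaconda site.
```

Either way, the digest you compute locally must match the one published for the release; a mismatch means the download is corrupted or has been tampered with.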



Step 4 — Running the Anaconda Script

      • bash

You will see the following output. Review the license agreement by pressing ENTER until you reach the end.


Welcome to Anaconda3 2019.03

In order to continue the installation process, please review the license
agreement.
Please, press ENTER to continue
>>>
...
Do you approve the license terms? [yes|no]

When you get to the end of the license, type yes if you agree to the license to complete the installation.

Step 5 — Completing the Installation Process

Once you accept the license, you will be prompted to choose the location of the installation. You can press ENTER to accept the default location, or specify a different location.


Anaconda3 will now be installed into this location:
/home/sammy/anaconda3

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/home/sammy/anaconda3] >>>

At this point, the installation will begin. Note that the installation process takes some time.

Step 6 — Selecting Options

Once the installation is complete, you will receive the following output:


...
installation finished.
Do you wish the installer to prepend the Anaconda3 install location
to PATH in your /home/sammy/.bashrc ? [yes|no]
[no] >>>

It is recommended that you type yes to use the conda command.

Step 7 — Activating the Installation

You can now activate the installation with the following command:

• source ~/.bashrc

Step 8 — Testing the Installation

Use the conda command to test the installation and activation:

• conda list

You will receive output of all the packages you have available through the Anaconda installation.

Step 9 — Setting Up Anaconda Environments

You can create Anaconda environments with the conda create command. For example, a Python 3 environment named my_env can be created with the following command:

      • conda create --name my_env python=3

Activate the new environment like so:

• conda activate my_env

Your command prompt prefix will change to reflect that you are in an active Anaconda environment, and you are now ready to begin working on a project.

Related Tutorials

Here are links to more detailed tutorials that are related to this guide:
