

      How To Build a Custom Terraform Module


      Terraform modules encapsulate distinct logical components of your infrastructure by grouping their resources together. You can reuse them later with optional customizations, without repeating the resource definitions each time you need them, which is beneficial to large and complex projects. You can customize module instances using input variables you define, as well as extract information from them using outputs. Aside from creating your own custom modules, you can also use the pre-made modules published publicly on the Terraform Registry. Developers can use and customize them using inputs, just like the modules you create, but their source code is stored in and pulled from the cloud.

      In this tutorial, you’ll create a Terraform module that will set up multiple Droplets behind a Load Balancer for redundancy. You’ll also use the for_each and count looping features of the HashiCorp Configuration Language (HCL) to deploy multiple customized instances of the module at the same time.


      Note: This tutorial has specifically been tested with Terraform 0.13.

      Module Structure and Benefits

      In this section, you’ll learn what benefits modules bring, where they are usually placed in the project, and how they should be structured.

      Custom Terraform modules are created to encapsulate connected components that are used and deployed together frequently in bigger projects. They are self-contained, bundling only the resources, variables, and providers they need.

      Modules are typically stored in a central folder in the root of the project, each in its respective subfolder underneath. In order to retain a clean separation between modules, always architect them to have a single purpose and make sure they never contain submodules.

      It is useful to create modules from your resource schemes when you find yourself repeating them with infrequent customizations. Packaging a single resource as a module can be superfluous, and doing so gradually erodes the simplicity of the overall architecture.

      For small development and test projects, incorporating modules is not necessary, because they do not bring much improvement in those cases. Modules, with their ability for customization, are the building blocks of complex projects. Developers use modules for larger projects because of the significant advantage of avoiding code duplication. Modules also offer the benefit that definitions only need to be modified in one place; the change is then propagated through the rest of the infrastructure.

      Next you’ll define, use, and customize modules in your Terraform projects.

      Creating a Module

      In this section, you’ll define multiple Droplets and a Load Balancer as Terraform resources and package them into a module. You’ll also make the resulting module customizable using module inputs.

      You’ll store the module in a directory named droplet-lb, under a directory called modules. Assuming you are in the terraform-modules directory you created as part of the prerequisites, create both at once by running:

      • mkdir -p modules/droplet-lb

      The -p argument instructs mkdir to create all directories in the supplied path.

      Navigate to it:

      • cd modules/droplet-lb

      As was noted in the previous section, modules contain the resources and variables they use. Starting from Terraform 0.13, they must also include definitions of the providers they use. Modules do not require any special configuration to note that the code represents a module, as Terraform regards every directory containing HCL code as a module, even the root directory of the project.

      Variables defined in a module are exposed as its inputs and can be used in resource definitions to customize them. The module you’ll create will have two inputs: the number of Droplets to create and the name of their group. Create and open for editing the file in which you’ll store the variables:

      Add the following lines:

      variable "droplet_count" {}
      variable "group_name" {}

      Save and close the file.

      You’ll store the Droplet definition in its own file. Create and open it for editing:

      Add the following lines:

      resource "digitalocean_droplet" "droplets" {
        count  = var.droplet_count
        image  = "ubuntu-18-04-x64"
        name   = "${var.group_name}-${count.index}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }

      For the count parameter, which specifies how many instances of a resource to create, you pass in the droplet_count variable. Its value will be specified when the module is called from the main project code. The name of each of the deployed Droplets will be different, which you achieve by appending the index of the current Droplet to the supplied group name. Deployment of the Droplets will be in the fra1 region and they will run Ubuntu 18.04.
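      As a side illustration (not part of the module’s HCL), the naming scheme that count.index produces can be sketched in Python, using the example values from this tutorial:

```python
# Sketch of how Terraform expands "${var.group_name}-${count.index}"
# when count = var.droplet_count; "group1" and 3 are example values.
group_name = "group1"
droplet_count = 3

names = [f"{group_name}-{i}" for i in range(droplet_count)]
print(names)  # ['group1-0', 'group1-1', 'group1-2']
```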

      When you are done, save and close the file.

      With the Droplets now defined, you can move on to creating the Load Balancer. You’ll store its resource definition in a separate file. Create and open it for editing by running:

      Add its resource definition:

      resource "digitalocean_loadbalancer" "www-lb" {
        name   = "lb-${var.group_name}"
        region = "fra1"

        forwarding_rule {
          entry_port     = 80
          entry_protocol = "http"

          target_port     = 80
          target_protocol = "http"
        }

        healthcheck {
          port     = 22
          protocol = "tcp"
        }

        droplet_ids = [
          for droplet in digitalocean_droplet.droplets:
        ]
      }

      You define the Load Balancer with the group name as part of its name in order to make it distinguishable. You deploy it in the fra1 region together with the Droplets. The next two blocks specify the forwarding and health check ports and protocols.

      The droplet_ids parameter takes in the IDs of the Droplets that should be managed by the Load Balancer. Since there are multiple Droplets, and their count is not known in advance, you use a for loop to traverse the collection of Droplets (digitalocean_droplet.droplets) and take their IDs. You surround the for loop with brackets ([]) so that the resulting collection is a list.
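      HCL’s for expression behaves much like a list comprehension. As an analogy only (the dictionaries below are hypothetical stand-ins for digitalocean_droplet.droplets), the same transformation in Python would be:

```python
# Hypothetical droplet records; in Terraform these are resource instances
droplets = [{"id": 101}, {"id": 102}, {"id": 103}]

# Collect the id of every droplet into a list
droplet_ids = [droplet["id"] for droplet in droplets]
print(droplet_ids)  # [101, 102, 103]
```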

      You’ve now defined the Droplets, Load Balancer, and variables for your module. Next, you need to define the provider requirements, specifying which providers the module uses, including their version and where they are located. Since Terraform 0.13, modules must explicitly define the sources of non-HashiCorp-maintained providers they use, because they do not inherit them from the parent project.

      You’ll store the provider requirements in their own file. Create it for editing by running:

      Add the following lines to require the digitalocean provider:

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
          }
        }

        required_version = ">= 0.13"
      }

      Save and close the file when you’re done. The droplet-lb module now requires the digitalocean provider.

      Modules also support outputs, which you can use to extract internal information about the state of their resources. You’ll define an output that exposes the IP address of the Load Balancer and store it in its own file. Create it for editing:

      Add the following definition:

      output "lb_ip" {
        value = digitalocean_loadbalancer.www-lb.ip
      }

      This output retrieves the IP address of the Load Balancer. Save and close the file.

      The droplet-lb module is now functionally complete and ready for deployment. You’ll call it from the main code, which you’ll store in the root of the project. First, navigate there by moving up two directory levels:

      • cd ../..

      Then, create and open for editing the file in which you’ll use the module:

      Add the following lines:

      module "groups" {
        source = "./modules/droplet-lb"

        droplet_count = 3
        group_name    = "group1"
      }

      output "loadbalancer-ip" {
        value = module.groups.lb_ip
      }

      In this declaration you invoke the droplet-lb module located in the directory specified as source. You configure the inputs it provides, droplet_count and group_name; the latter is set to group1 so you’ll later be able to discern between instances.

      Since the Load Balancer IP output is defined in a module, it won’t automatically be shown when you apply the project. The solution is to create another output (loadbalancer-ip) that retrieves its value. Save and close the file when you’re done.

      Initialize the module by running:

      • terraform init

      The output will look like this:


      Initializing modules...
      - droplet-lb in modules/droplet-lb

      Initializing the backend...

      Initializing provider plugins...
      - Using previously-installed digitalocean/digitalocean v1.22.2

      Terraform has been successfully initialized!

      You may now begin working with Terraform. Try running "terraform plan" to see
      any changes that are required for your infrastructure. All Terraform commands
      should now work.

      If you ever set or change modules or backend configuration for Terraform,
      rerun this command to reinitialize your working directory. If you forget, other
      commands will detect it and remind you to do so if necessary.

      You can try planning the project to see what actions Terraform would take by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to this:


      ...
      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # module.groups.digitalocean_droplet.droplets[0] will be created
        + resource "digitalocean_droplet" "droplets" {
      ...
            + name = "group1-0"
      ...
          }

        # module.groups.digitalocean_droplet.droplets[1] will be created
        + resource "digitalocean_droplet" "droplets" {
      ...
            + name = "group1-1"
      ...
          }

        # module.groups.digitalocean_droplet.droplets[2] will be created
        + resource "digitalocean_droplet" "droplets" {
      ...
            + name = "group1-2"
      ...
          }

        # module.groups.digitalocean_loadbalancer.www-lb will be created
        + resource "digitalocean_loadbalancer" "www-lb" {
      ...
            + name = "group1-lb"
      ...
          }

      Plan: 4 to add, 0 to change, 0 to destroy.
      ...

      This output details that Terraform would create three Droplets, named group1-0, group1-1, and group1-2, and would also create a Load Balancer called group1-lb, which will manage the traffic to and from the three Droplets.

      You can try applying the project to the cloud by running:

      • terraform apply -var "do_token=${DO_PAT}"

      Enter yes when prompted. The output will show all the actions taken, as well as the IP address of the Load Balancer:


      module.groups.digitalocean_droplet.droplets[1]: Creating...
      module.groups.digitalocean_droplet.droplets[0]: Creating...
      module.groups.digitalocean_droplet.droplets[2]: Creating...
      ...

      Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

      Outputs:

      loadbalancer-ip = ip_address

      Because you’ll modify the configuration significantly in the next step, destroy the deployed resources by running:

      • terraform destroy -var "do_token=${DO_PAT}"

      Enter yes when prompted. The output will end in:


      ...

      Destroy complete! Resources: 4 destroyed.

      In this step, you’ve created a module containing a customizable number of Droplets and a Load Balancer that is automatically configured to manage their incoming and outgoing traffic. You’ll now deploy multiple instances of the module from the same code using for_each and count.

      Deploying Multiple Module Instances

      In this section, you’ll use count and for_each to deploy the droplet-lb module multiple times, with customizations.

      Using count

      One way to deploy multiple instances of the same module at once is to pass the desired number to the count parameter, which is automatically available to every module. Open the main code for editing:

      Modify it to look like this:

      module "groups" {
        source = "./modules/droplet-lb"

        count  = 3

        droplet_count = 3
        group_name    = "group1-${count.index}"
      }

      By setting count to 3, you instruct Terraform to deploy the module three times, each with a different group name. When you’re done, save and close the file.

      Plan the deployment by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be long, and will look like this:


      ...
      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # module.groups[0].digitalocean_droplet.droplets[0] will be created
      ...
        # module.groups[0].digitalocean_droplet.droplets[1] will be created
      ...
        # module.groups[0].digitalocean_droplet.droplets[2] will be created
      ...
        # module.groups[0].digitalocean_loadbalancer.www-lb will be created
      ...
        # module.groups[1].digitalocean_droplet.droplets[0] will be created
      ...
        # module.groups[1].digitalocean_droplet.droplets[1] will be created
      ...
        # module.groups[1].digitalocean_droplet.droplets[2] will be created
      ...
        # module.groups[1].digitalocean_loadbalancer.www-lb will be created
      ...
        # module.groups[2].digitalocean_droplet.droplets[0] will be created
      ...
        # module.groups[2].digitalocean_droplet.droplets[1] will be created
      ...
        # module.groups[2].digitalocean_droplet.droplets[2] will be created
      ...
        # module.groups[2].digitalocean_loadbalancer.www-lb will be created
      ...

      Plan: 12 to add, 0 to change, 0 to destroy.
      ...

      Terraform details in the output that each of the three module instances would have three Droplets and a Load Balancer associated with them.

      Using for_each

      You can use for_each for modules when you require more complex instance customization, or when the number of instances depends on third-party data (often presented as maps) and is not known while writing the code.

      You’ll now define a map that pairs group names to Droplet counts and deploy instances of droplet-lb according to it. Open the main code for editing by running:

      Modify the file to make it look like this:

      variable "group_counts" {
        type    = map
        default = {
          "group1" = 1
          "group2" = 3
        }
      }

      module "groups" {
        source   = "./modules/droplet-lb"
        for_each = var.group_counts

        droplet_count = each.value
        group_name    = each.key
      }

      You first define a map called group_counts that specifies how many Droplets a given group should have. Then, you invoke the module droplet-lb, specifying that the for_each loop should operate on var.group_counts, the map you just defined. droplet_count takes each.value, the value of the current pair, which is the Droplet count for the current group. group_name receives each.key, the name of the group.
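      The iteration for_each performs over the map is analogous to looping over a dictionary’s items. As an analogy only, here is a Python sketch of how each.key and each.value line up with the module inputs:

```python
# The same map as in the HCL example
group_counts = {"group1": 1, "group2": 3}

# One "module instance" per map entry:
# each.key -> group_name, each.value -> droplet_count
instances = {
    key: {"group_name": key, "droplet_count": value}
    for key, value in group_counts.items()
}
print(instances)
```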

      Save and close the file when you’re done.

      Plan the configuration by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will detail the actions Terraform would take to create the two groups with their Droplets and Load Balancers:


      ...
      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # module.groups["group1"].digitalocean_droplet.droplets[0] will be created
      ...
        # module.groups["group1"].digitalocean_loadbalancer.www-lb will be created
      ...
        # module.groups["group2"].digitalocean_droplet.droplets[0] will be created
      ...
        # module.groups["group2"].digitalocean_droplet.droplets[1] will be created
      ...
        # module.groups["group2"].digitalocean_droplet.droplets[2] will be created
      ...
        # module.groups["group2"].digitalocean_loadbalancer.www-lb will be created
      ...

      In this step, you’ve used count and for_each to deploy multiple customized instances of the same module, from the same code.


      In this tutorial you’ve created and deployed Terraform modules. You’ve used modules to group logically linked resources together and customized them in order to deploy multiple different instances from a central code definition. You’ve also used outputs to show attributes of resources contained in the module.

      If you would like to learn more about Terraform, check out our How To Manage Infrastructure with Terraform series.


      Module Design Pattern in JavaScript


      Part of the Series:
      JavaScript Design Patterns

      Every developer strives to write maintainable, readable, and reusable code. Code structuring becomes more important as applications become larger. Design patterns prove crucial to solving this challenge by providing an organizational structure for common problems in a particular circumstance.

      The design pattern below is only one of many useful patterns that can help you level up as a JavaScript developer. For the full set, see JavaScript Design Patterns.

      The module is the most prevalently used design pattern in JavaScript for keeping particular pieces of code independent of other components. This provides loose coupling to support well-structured code.

      For those who are familiar with object-oriented languages, modules are JavaScript “classes”. One of the many advantages of classes is encapsulation: protecting states and behaviors from being accessed by other classes. The module pattern allows for public and private (plus the lesser-known protected and privileged) access levels.

      Modules should be Immediately-Invoked Function Expressions (IIFE) to allow for private scopes, that is, a closure that protects variables and methods (however, it will return an object instead of a function). This is what it looks like:

      (function() {
          // declare private variables and/or functions

          return {
              // declare public variables and/or functions
          };
      })();

      Here we instantiate the private variables and/or functions before returning the object that we want to expose. Code outside of our closure is unable to access these private variables since it is not in the same scope. Let’s take a more concrete implementation:

      var HTMLChanger = (function() {
          var contents = 'contents';

          var changeHTML = function() {
              var element = document.getElementById('attribute-to-change');
              element.innerHTML = contents;
          };

          return {
              callChangeHTML: function() {
                  changeHTML();
                  console.log(contents);
              }
          };
      })();

      HTMLChanger.callChangeHTML();       // Outputs: 'contents'
      console.log(HTMLChanger.contents);  // undefined

      Notice that callChangeHTML binds to the returned object and can be referenced within the HTMLChanger namespace. However, when outside the module, contents cannot be referenced.
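      The same public/private split can be sketched with a closure in Python; this is purely an analogy to the JavaScript pattern above, and all names here are illustrative:

```python
def html_changer():
    # "Private" state: reachable only through the returned functions
    contents = "contents"

    def call_change_html():
        # Public function that closes over the private variable
        print(contents)

    # Only this mapping is exposed to callers
    return {"call_change_html": call_change_html}

changer = html_changer()
changer["call_change_html"]()   # prints: contents
print(changer.get("contents"))  # None: the private variable is not exposed
```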

      Revealing Module Pattern

      A variation of the module pattern is called the Revealing Module Pattern. The purpose is to maintain encapsulation and reveal certain variables and methods returned in an object literal. The direct implementation looks like this:

      var Exposer = (function() {
          var privateVariable = 10;

          var privateMethod = function() {
              console.log('Inside a private method!');
              privateVariable++;
          };

          var methodToExpose = function() {
              console.log('This is a method I want to expose!');
          };

          var otherMethodIWantToExpose = function() {
              privateMethod();
          };

          return {
              first: methodToExpose,
              second: otherMethodIWantToExpose
          };
      })();

      Exposer.first();        // Output: This is a method I want to expose!
      Exposer.second();       // Output: Inside a private method!
      Exposer.methodToExpose; // undefined

      Although this looks much cleaner, an obvious disadvantage is that you are unable to reference the private methods. This can pose unit testing challenges. Similarly, the public behaviors are non-overridable.


      How To Use the subprocess Module to Run External Programs in Python 3

      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.


      Python 3 includes the subprocess module for running external programs and reading their outputs in your Python code.

      You might find subprocess useful if you want to use another program on your computer from within your Python code. For example, you might want to invoke git from your Python code to retrieve files in your project that are tracked in git version control. Since any program you can access on your computer can be controlled through subprocess, the examples shown here apply to any external program you might want to invoke from your Python code.

      subprocess includes several classes and functions, but in this tutorial we’ll cover one of subprocess’s most useful functions: We’ll review its different uses and main keyword arguments.

      Prerequisites

      To get the most out of this tutorial, it is recommended to have some familiarity with programming in Python 3. You can review these tutorials for the necessary background information:

      Running an External Program

      You can use the function to run an external program from your Python code. But first, you need to import the subprocess and sys modules into your program:

      import subprocess
      import sys

      result =[sys.executable, "-c", "print('ocean')"])

      If you run this, you will receive output like the following:


      ocean
      Let’s review this example:

      • sys.executable is the absolute path to the Python executable that your program was originally invoked with. For example, sys.executable might be a path like /usr/local/bin/python.
      • receives a list of strings comprising the components of the command we are trying to run. Since the first string we pass is sys.executable, we are instructing to run a new Python program.
      • The -c component is a python command line option that allows you to pass a string with an entire Python program to execute. In this case, we pass a program that prints the string ocean.

      You can think of each entry in the list that we pass to as being separated by a space. For example, [sys.executable, "-c", "print('ocean')"] translates roughly to /usr/local/bin/python -c "print('ocean')". Note that subprocess automatically quotes the components of the command before trying to run them on the underlying operating system, so that, for example, you can pass a filename that contains spaces.
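      If you want to see an approximation of the flat command string that a component list corresponds to, Python’s shlex.join (available since Python 3.8) can render it. This is only an illustration, not what subprocess literally executes:

```python
import shlex
import sys

parts = [sys.executable, "-c", "print('ocean')"]

# shlex.join quotes each component the way a POSIX shell would expect,
# so components with spaces or quotes stay intact
print(shlex.join(parts))
```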

      Warning: never pass untrusted input to Since has the ability to run arbitrary commands on your computer, malicious actors can use it to manipulate your computer in unexpected ways.

      Capturing Output From an External Program

      Now that we can invoke an external program using, let’s see how we can capture that program’s output. For example, this process could be useful if we wanted to use git ls-files to output all of your files currently stored under version control.

      Note: The examples in this section require Python 3.7 or higher. In particular, the capture_output and text keyword arguments were added in Python 3.7 when it was released in June 2018.

      Let’s add to our previous example:

      import subprocess
      import sys

      result =
          [sys.executable, "-c", "print('ocean')"], capture_output=True, text=True
      )
      print("stdout:", result.stdout)
      print("stderr:", result.stderr)

      If we run this code, we’ll receive output like the following:


      stdout: ocean

      stderr:

      This example is mostly the same as the one in the first section: we are still running a subprocess to print ocean. Importantly, though, we pass the capture_output=True and text=True keyword arguments to returns a subprocess.CompletedProcess object that is bound to result. The subprocess.CompletedProcess object includes details about the external program’s exit code and its output. capture_output=True ensures that result.stdout and result.stderr are filled in with the corresponding output from the external program. By default, result.stdout and result.stderr are bound as bytes, but the text=True keyword argument tells Python to decode the bytes into strings instead.

      In the output section, stdout is ocean (plus the trailing newline that print implicitly adds), and we have no stderr.

      Let’s try an example that produces a non-empty value for stderr:

      import subprocess
      import sys

      result =
          [sys.executable, "-c", "raise ValueError('oops')"], capture_output=True, text=True
      )
      print("stdout:", result.stdout)
      print("stderr:", result.stderr)

      If we run this code, we receive output like the following:


      stdout:
      stderr: Traceback (most recent call last):
        File "<string>", line 1, in <module>
      ValueError: oops

      This code runs a Python subprocess that immediately raises a ValueError. When we inspect the final result, we see nothing in stdout and a Traceback of our ValueError in stderr. This is because, by default, Python writes the Traceback of an unhandled exception to stderr.

      Raising an Exception on a Bad Exit Code

      Sometimes it’s useful to raise an exception if a program we run exits with a bad exit code. Programs that exit with a zero code are considered successful, but programs that exit with a non-zero code are considered to have encountered an error. As an example, this pattern could be useful if we wanted to raise an exception in the event that we run git ls-files in a directory that isn’t actually a git repository.

      We can use the check=True keyword argument to have an exception raised if the external program returns a non-zero exit code:

      import subprocess
      import sys

      result =[sys.executable, "-c", "raise ValueError('oops')"], check=True)

      If we run this code, we receive output like the following:


      Traceback (most recent call last):
        File "<string>", line 1, in <module>
      ValueError: oops
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File "/usr/local/lib/python3.8/", line 512, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-c', "raise ValueError('oops')"]' returned non-zero exit status 1.

      This output shows that we ran a subprocess that raised an error, which is printed to stderr in our terminal. Then dutifully raised a subprocess.CalledProcessError on our behalf in our main Python program.

      Alternatively, the subprocess module also includes the subprocess.CompletedProcess.check_returncode method, which we can invoke for similar effect:

      import subprocess
      import sys

      result =[sys.executable, "-c", "raise ValueError('oops')"])
      result.check_returncode()

      If we run this code, we’ll receive:


      Traceback (most recent call last):
        File "<string>", line 1, in <module>
      ValueError: oops
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File "/usr/local/lib/python3.8/", line 444, in check_returncode
          raise CalledProcessError(self.returncode, self.args, self.stdout,
      subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-c', "raise ValueError('oops')"]' returned non-zero exit status 1.

      Since we didn’t pass check=True to, we successfully bound a subprocess.CompletedProcess instance to result, even though our program exited with a non-zero code. Calling result.check_returncode(), however, raises a subprocess.CalledProcessError, because it detects that the completed process exited with a bad code.

      Using timeout to Exit Programs Early

 includes the timeout argument to allow you to stop an external program if it is taking too long to execute:

      import subprocess
      import sys

      result =[sys.executable, "-c", "import time; time.sleep(2)"], timeout=1)

      If we run this code, we’ll receive output like the following:


      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File "/usr/local/lib/python3.8/", line 491, in run
          stdout, stderr = process.communicate(input, timeout=timeout)
        File "/usr/local/lib/python3.8/", line 1024, in communicate
          stdout, stderr = self._communicate(input, endtime, timeout)
        File "/usr/local/lib/python3.8/", line 1892, in _communicate
          self.wait(timeout=self._remaining_time(endtime))
        File "/usr/local/lib/python3.8/", line 1079, in wait
          return self._wait(timeout=timeout)
        File "/usr/local/lib/python3.8/", line 1796, in _wait
          raise TimeoutExpired(self.args, timeout)
      subprocess.TimeoutExpired: Command '['/usr/local/bin/python', '-c', 'import time; time.sleep(2)']' timed out after 0.9997982999999522 seconds

      The subprocess we tried to run was using the time.sleep function to sleep for 2 seconds. However, we passed the timeout=1 keyword argument to to time out our subprocess after 1 second. This explains why our call to ultimately raised a subprocess.TimeoutExpired exception.

      Note that the timeout keyword argument to is approximate. Python will make a best effort to stop the subprocess after the number of seconds given by timeout, but it won’t necessarily be exact.

      Passing Input to Programs

      Sometimes programs expect input to be passed to them via stdin.

      The input keyword argument to allows you to pass data to the stdin of the subprocess. For example:

      import subprocess
      import sys

      result =
          [sys.executable, "-c", "import sys; print("], input=b"underwater"
      )

      After running this code, we’ll receive output like the following:


      underwater
      In this case, we passed the bytes underwater to input. Our target subprocess used sys.stdin to read the passed-in stdin (underwater) and printed it out in our output.

      The input keyword argument can be useful if you want to chain multiple calls together, passing the output of one program as the input to another.


      The subprocess module is a powerful part of the Python standard library that lets you run external programs and inspect their outputs easily. In this tutorial, you have learned to use to control external programs, pass input to them, parse their output, and check their return codes.

      The subprocess module offers additional classes and utilities that we did not cover in this tutorial. Now that you have a baseline, you can use the subprocess module’s documentation to learn more about the other available classes and utilities.
