
      How To Use docker exec to Run Commands in a Docker Container


      Introduction

      Docker is a containerization tool that helps developers create and manage portable, consistent Linux containers.

      When developing or deploying containers you’ll often need to look inside a running container to inspect its current state or debug a problem. To this end, Docker provides the docker exec command to run programs in containers that are already running.

      In this tutorial we will learn about the docker exec command and how to use it to run commands and get an interactive shell in a running Docker container.

      Prerequisites

      This tutorial assumes you already have Docker installed, and your user has permission to run docker. If you need to run docker as the root user, please remember to prepend sudo to the commands in this tutorial.

      For more information on using Docker without sudo access, please see the Executing the Docker Command Without Sudo section of our How To Install Docker tutorial.

      Starting a Test Container

      To use the docker exec command, you will need a running Docker container. If you don’t already have a container, start a test container with the following docker run command:

      • docker run -d --name container-name alpine watch "date >> /var/log/date.log"

      This command creates a new Docker container from the official alpine image. This is a popular Linux container image that uses Alpine Linux, a lightweight, minimal Linux distribution.

      We use the -d flag to detach the container from our terminal and run it in the background. --name container-name will name the container container-name. You could choose any name you like here, or leave this off entirely to have Docker automatically generate a unique name for the new container.

      Next we have alpine, which specifies the image we want to use for the container.

      And finally we have watch "date >> /var/log/date.log". This is the command we want to run in the container. watch will repeatedly run the command you give it, every two seconds by default. The command that watch will run in this case is date >> /var/log/date.log. date prints the current date and time, like this:

      Output

      • Fri Jul 23 14:57:05 UTC 2021

      The >> /var/log/date.log portion of the command redirects the output from date and appends it to the file /var/log/date.log. Every two seconds a new line will be appended to the file, and after a few seconds it will look something like this:

      Output

      Fri Jul 23 15:00:26 UTC 2021
      Fri Jul 23 15:00:28 UTC 2021
      Fri Jul 23 15:00:30 UTC 2021
      Fri Jul 23 15:00:32 UTC 2021
      Fri Jul 23 15:00:34 UTC 2021

      In the next step we’ll learn how to find the names of Docker containers. This will be useful if you already have a container you’re targeting, but you’re not sure what its name is.

      Finding the Name of a Docker Container

      We’ll need to provide docker exec with the name (or container ID) of the container we want to work with. We can find this information using the docker ps command:

      • docker ps

      This command lists all of the Docker containers running on the server, and provides some high-level information about them:

      Output

      CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS     NAMES
      76aded7112d4   alpine    "watch 'date >> /var…"   11 seconds ago   Up 10 seconds             container-name

      In this example, the container ID is 76aded7112d4 and the name is container-name. You may use either one to tell docker exec which container to use.

      If you’d like to rename your container, use the docker rename command:

      • docker rename container-name new-name

      Next, we’ll run through several examples of using docker exec to execute commands in a running Docker container.

      Running an Interactive Shell in a Docker Container

      If you need to start an interactive shell inside a Docker Container, perhaps to explore the filesystem or debug running processes, use docker exec with the -i and -t flags.

      The -i flag keeps input open to the container, and the -t flag creates a pseudo-terminal that the shell can attach to. These flags can be combined like this:

      • docker exec -it container-name sh

      This will run the sh shell in the specified container, giving you a basic shell prompt. To exit back out of the container, type exit then press ENTER.

      If your container image includes a more advanced shell such as bash, you could replace sh with bash above.

      Running a Non-interactive Command in a Docker Container

      If you need to run a command inside a running Docker container, but don’t need any interactivity, use the docker exec command without any flags:

      • docker exec container-name tail /var/log/date.log

      This command will run tail /var/log/date.log on the container-name container, and output the results. By default the tail command will print out the last ten lines of a file. If you’re running the demo container we set up in the first section, you will see something like this:

      Output

      Mon Jul 26 14:39:33 UTC 2021
      Mon Jul 26 14:39:35 UTC 2021
      Mon Jul 26 14:39:37 UTC 2021
      Mon Jul 26 14:39:39 UTC 2021
      Mon Jul 26 14:39:41 UTC 2021
      Mon Jul 26 14:39:43 UTC 2021
      Mon Jul 26 14:39:45 UTC 2021
      Mon Jul 26 14:39:47 UTC 2021
      Mon Jul 26 14:39:49 UTC 2021
      Mon Jul 26 14:39:51 UTC 2021

      This is essentially the same as opening up an interactive shell for the Docker container (as done in the previous step with docker exec -it container-name sh) and then running the tail /var/log/date.log command. However, rather than opening a shell, running the command, and then closing the shell, docker exec returns the same output in a single step, without allocating a pseudo-terminal.
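
      tail's default of printing a file's last ten lines can be pictured with a simple slice. This Python sketch (a hypothetical helper, not part of Docker or of the real tail command) mirrors that behavior:

```python
def tail(lines, n=10):
    """Return the last n lines of a sequence, mirroring the
    default behavior of the `tail` command (last ten lines)."""
    return lines[-n:]

log = ["line {}".format(i) for i in range(1, 16)]  # 15 lines of input
last_ten = tail(log)
print(last_ten[0], "...", last_ten[-1])  # line 6 ... line 15
```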

      Running Commands in an Alternate Directory in a Docker Container

      To run a command in a certain directory of your container, use the --workdir flag to specify the directory:

      • docker exec --workdir /tmp container-name pwd

      This example command sets the /tmp directory as the working directory, then runs the pwd command, which prints out the present working directory:

      Output

      /tmp

      The pwd command has confirmed that the working directory is /tmp.

      Running Commands as a Different User in a Docker Container

      To run a command as a different user inside your container, add the --user flag:

      • docker exec --user guest container-name whoami

      This will use the guest user to run the whoami command in the container. The whoami command prints out the current user’s username:

      Output

      guest

      The whoami command confirms that the container’s current user is guest.

      Passing Environment Variables into a Docker Container

      Sometimes you need to pass environment variables into a container along with the command to run. The -e flag lets you specify an environment variable:

      • docker exec -e TEST=sammy container-name env

      This command sets the TEST environment variable to equal sammy, then runs the env command inside the container. The env command then prints out all the environment variables:

      Output

      PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      HOSTNAME=76aded7112d4
      TEST=sammy
      HOME=/root

      The TEST variable is set to sammy.

      To set multiple variables, repeat the -e flag for each one:

      • docker exec -e TEST=sammy -e ENVIRONMENT=prod container-name env

      If you’d like to pass in a file full of environment variables you can do that with the --env-file flag.

      First, make the file with a text editor. We’ll open a new file with nano here, but you can use any editor you’re comfortable with:

      • nano .env

      We’re using .env as the filename, as that’s a popular standard for using these sorts of files to manage information outside of version control.

      Write your KEY=value variables into the file, one per line, like the following:

      .env

      TEST=sammy
      ENVIRONMENT=prod
      

      Save and close the file. In nano, press CTRL+O then ENTER to save, then CTRL+X to exit.

      Now run the docker exec command, specifying the correct filename after --env-file:

      • docker exec --env-file .env container-name env

      Output

      PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      HOSTNAME=76aded7112d4
      TEST=sammy
      ENVIRONMENT=prod
      HOME=/root

      The two variables in the file are set.

      You may specify multiple files by using multiple --env-file flags. If the same variable appears in more than one file, the value from the file listed last in the command wins.
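
      That override rule behaves like an ordered dictionary merge. This Python sketch (a hypothetical helper, not Docker’s actual implementation) illustrates the precedence:

```python
def merge_env_files(*file_contents):
    """Merge KEY=value lines from several env files; a variable
    from a later file overrides the same variable from an earlier one."""
    env = {}
    for content in file_contents:
        for line in content.splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                env[key] = value
    return env

merged = merge_env_files("TEST=sammy\nENVIRONMENT=prod", "ENVIRONMENT=dev")
print(merged)  # {'TEST': 'sammy', 'ENVIRONMENT': 'dev'}
```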

      Common Errors

      When using the docker exec command, you may encounter a few common errors:

      Error: No such container: container-name
      

      The No such container error means the specified container does not exist, and may indicate a misspelled container name. Use docker ps to list out your running containers and double-check the name.

      Error response from daemon: Container 2a94aae70ea5dc92a12e30b13d0613dd6ca5919174d73e62e29cb0f79db6e4ab is not running
      

      This not running message means that the container exists, but it is stopped. You can start the container with docker start container-name.

      Error response from daemon: Container container-name is paused, unpause the container before exec
      

      The Container is paused error explains the problem fairly well. You need to unpause the container with docker unpause container-name before proceeding.

      Conclusion

      In this tutorial we learned how to execute commands in a running Docker container, along with some command line options available when doing so.

      For more information on Docker in general, please see our Docker tag page, which has links to Docker tutorials, Docker-related Q&A pages, and more.

      For help with installing Docker, take a look at How To Install and Use Docker on Ubuntu 20.04.




      How To Run a PHP Job Multiple Times in a Minute with Crontab on Ubuntu 20.04


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      In Linux, you can use the versatile crontab tool to process long-running tasks in the background at specific times. While the cron daemon is great for running repetitive tasks, it has one limitation: you can only execute tasks at a minimum time interval of 1 minute.

      However, in many applications, to avoid a poor user experience it’s preferable for jobs to execute more frequently. For instance, if you’re using the job-queue model to schedule file-processing tasks on your website, a significant wait will have a negative impact on end users.

      Another scenario is an application that uses the job-queue model either to send text messages or emails to clients once they’ve completed a certain task in an application (for example, sending money to a recipient). If users have to wait a minute before the delivery of a confirmation message, they might think that the transaction failed and try to repeat the same transaction.

      To overcome these challenges, you can program a PHP script that loops and processes tasks repeatedly for 60 seconds while it waits for the crontab daemon to call it again at the next minute. Once the crontab daemon calls the PHP script for the first time, it can execute tasks at a time interval that matches the logic of your application without keeping the user waiting.

      In this guide, you will create a sample cron_jobs database on an Ubuntu 20.04 server. Then, you’ll set up a tasks table and a script that executes the jobs in your table in intervals of 5 seconds using the PHP while(...){...} loop and sleep() functions.

      Prerequisites

      To complete this tutorial, you require the following:

      Step 1 — Setting Up a Database

      In this step, you’ll create a sample database and table. First, SSH to your server and log in to MySQL as root:

      • sudo mysql -u root -p

      Enter your root password for the MySQL server and press ENTER to proceed. Then, run the following command to create a cron_jobs database.

      • CREATE DATABASE cron_jobs;

      Create a non-root user for the database. You’ll need the credentials of this user to connect to the cron_jobs database from PHP. Remember to replace EXAMPLE_PASSWORD with a strong value:

      • CREATE USER 'cron_jobs_user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'EXAMPLE_PASSWORD';
      • GRANT ALL PRIVILEGES ON cron_jobs.* TO 'cron_jobs_user'@'localhost';
      • FLUSH PRIVILEGES;

      Next, switch to the cron_jobs database:

      • USE cron_jobs;

      Output

      Database changed

      Once you’ve selected the database, create a tasks table. In this table, you’ll insert some tasks that will be automatically executed by a cron job. Since the minimum time interval for running a cron job is 1 minute, you’ll later code a PHP script that overrides this setting and instead executes the jobs at intervals of 5 seconds.

      For now, create your tasks table:

      • CREATE TABLE tasks (
      • task_id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      • task_name VARCHAR(50),
      • queued_at DATETIME,
      • completed_at DATETIME,
      • is_processed CHAR(1)
      • ) ENGINE = InnoDB;

      Insert three records into the tasks table. Use the MySQL NOW() function in the queued_at column to record the current date and time when the tasks are queued. For the completed_at column, use the MySQL CURDATE() function to set a default time of 00:00:00. Later, as tasks complete, your script will update this column:

      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 1', NOW(), CURDATE(), 'N');
      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 2', NOW(), CURDATE(), 'N');
      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 3', NOW(), CURDATE(), 'N');

      Confirm the output after running each INSERT command:

      Output

      Query OK, 1 row affected (0.00 sec) ...

      Make sure the data is in place by running a SELECT statement against the tasks table:

      • SELECT task_id, task_name, queued_at, completed_at, is_processed FROM tasks;

      You will find a list of all tasks:

      Output

      +---------+-----------+---------------------+---------------------+--------------+
      | task_id | task_name | queued_at           | completed_at        | is_processed |
      +---------+-----------+---------------------+---------------------+--------------+
      |       1 | TASK 1    | 2021-03-06 06:27:19 | 2021-03-06 00:00:00 | N            |
      |       2 | TASK 2    | 2021-03-06 06:27:28 | 2021-03-06 00:00:00 | N            |
      |       3 | TASK 3    | 2021-03-06 06:27:36 | 2021-03-06 00:00:00 | N            |
      +---------+-----------+---------------------+---------------------+--------------+
      3 rows in set (0.00 sec)

      The time for the completed_at column is still set to 00:00:00; this column will be updated once the tasks are processed by the PHP script that you will create next.

      Exit from the MySQL command-line interface:

      • EXIT;

      Output

      Bye

      Your cron_jobs database and tasks table are now in place and you can now create a PHP script that processes the jobs.

      Step 2 — Creating a PHP Script that Runs Tasks After 5 Seconds

      In this step, you’ll create a script that uses a combination of the PHP while(...){...} loop and the sleep() function to run tasks every 5 seconds.

      Open a new /var/www/html/tasks.php file in the root directory of your web server using nano:

      • sudo nano /var/www/html/tasks.php

      Next, create a new try { block after a <?php tag and declare the database variables that you created in Step 1. Remember to replace EXAMPLE_PASSWORD with the actual password for your database user:

      /var/www/html/tasks.php

      <?php
      try {
          $db_name = 'cron_jobs';
          $db_user = 'cron_jobs_user';
          $db_password = 'EXAMPLE_PASSWORD';
          $db_host = 'localhost';
      

      Next, declare a new PDO (PHP Data Object) class and set the attribute ERRMODE_EXCEPTION to catch any PDO errors. Also, set ATTR_EMULATE_PREPARES to false so that prepared statements are handled natively by the MySQL engine rather than emulated by PDO. Prepared statements let you send the SQL query and the data separately, which enhances security and reduces the chances of an SQL injection attack:

      /var/www/html/tasks.php

      
          $pdo = new PDO("mysql:host=" . $db_host . ";dbname=" . $db_name, $db_user, $db_password);
          $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);  
          $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);       
      

      Then, create a new variable named $loop_expiry_time and set it to the current time plus 60 seconds. Next, open a PHP while (time() < $loop_expiry_time) { statement. The idea here is to create a loop that runs until the current time (time()) reaches the value of $loop_expiry_time:

      /var/www/html/tasks.php

             
          $loop_expiry_time = time() + 60;
      
          while (time() < $loop_expiry_time) { 
      

      Next, declare a prepared SQL statement that retrieves unprocessed jobs from the tasks table:

      /var/www/html/tasks.php

         
              $data = [];
              $sql  = "select 
                       task_id
                       from tasks
                       where is_processed = :is_processed
                       ";
      

      Execute the SQL command and fetch all rows from the tasks table that have the column is_processed set to N. This means the rows are not processed:

      /var/www/html/tasks.php

        
              $data['is_processed'] = 'N';
      
              $stmt = $pdo->prepare($sql);
              $stmt->execute($data);
      

      Next, loop through the retrieved rows using a PHP while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {...} statement and create another SQL statement. This time, the SQL command updates the is_processed and completed_at columns for each task processed. This ensures that you don’t process tasks more than once:

      /var/www/html/tasks.php

        
              while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { 
                  $data_update   = [];         
                  $sql_update    = "update tasks set 
                                    is_processed  = :is_processed,
                                    completed_at  = :completed_at
                                    where task_id = :task_id                                 
                                    ";
      
                  $data_update   = [                
                                   'is_processed' => 'Y',                          
                                   'completed_at' => date("Y-m-d H:i:s"),
                                   'task_id'      => $row['task_id']                         
                                   ];
                  $stmt = $pdo->prepare($sql_update);
                  $stmt->execute($data_update);
              }
      

      Note: If you have a large queue to be processed (for example, 100,000 records per second), you might consider queueing jobs in a Redis Server since it is faster than MySQL when it comes to implementing the job-queue model. Nevertheless, this guide will process a smaller dataset.

      Before you close the first PHP while (time() < $loop_expiry_time) { statement, include a sleep(5); statement to pause job execution for 5 seconds and free up your server resources.

      You may change the 5 seconds period depending on your business logic and how fast you want tasks to execute. For instance, if you would like the tasks to be processed 3 times in a minute, set this value to 20 seconds.

      Remember to catch any PDO error messages inside a } catch (PDOException $ex) { echo $ex->getMessage(); } block:

      /var/www/html/tasks.php

                       sleep(5); 
      
              }       
      
      } catch (PDOException $ex) {
          echo $ex->getMessage(); 
      }
      

      Your complete tasks.php file will be as follows:

      /var/www/html/tasks.php

      <?php
      try {
          $db_name = 'cron_jobs';
          $db_user = 'cron_jobs_user';
          $db_password = 'EXAMPLE_PASSWORD';
          $db_host = 'localhost';

          $pdo = new PDO("mysql:host=" . $db_host . ";dbname=" . $db_name, $db_user, $db_password);
          $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
          $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
      
          $loop_expiry_time = time() + 60;
      
          while (time() < $loop_expiry_time) { 
              $data = [];
              $sql  = "select 
                       task_id
                       from tasks
                       where is_processed = :is_processed
                       ";
      
              $data['is_processed'] = 'N';
      
              $stmt = $pdo->prepare($sql);
              $stmt->execute($data);
      
              while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { 
                  $data_update   = [];         
                  $sql_update    = "update tasks set 
                                    is_processed  = :is_processed,
                                    completed_at  = :completed_at
                                    where task_id = :task_id                                 
                                    ";
      
                  $data_update   = [                
                                   'is_processed' => 'Y',                          
                                   'completed_at' => date("Y-m-d H:i:s"),
                                   'task_id'      => $row['task_id']                         
                                   ];
                  $stmt = $pdo->prepare($sql_update);
                  $stmt->execute($data_update);
              }   
      
              sleep(5); 
      
              }       
      
      } catch (PDOException $ex) {
          echo $ex->getMessage(); 
      }
      

      Save the file by pressing CTRL + X, Y then ENTER.
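
      The prepared-statement pattern that tasks.php relies on is not PHP-specific. Here is a self-contained sketch of the same idea in Python, using the standard-library sqlite3 module instead of MySQL so it runs without a database server; the placeholders ship the data separately from the SQL text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (task_id INTEGER PRIMARY KEY, is_processed TEXT)")
conn.execute("INSERT INTO tasks (is_processed) VALUES (?)", ("N",))

# The ? placeholder keeps the value out of the SQL string itself,
# so input can never be interpreted as SQL.
rows = conn.execute(
    "SELECT task_id FROM tasks WHERE is_processed = ?", ("N",)
).fetchall()
print(rows)  # [(1,)] -- one unprocessed task

# Mark the task processed, as the tasks.php update statement does:
conn.execute(
    "UPDATE tasks SET is_processed = ? WHERE task_id = ?", ("Y", rows[0][0])
)
remaining = conn.execute(
    "SELECT task_id FROM tasks WHERE is_processed = ?", ("N",)
).fetchall()
print(remaining)  # [] -- nothing left to process
```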

      Once you’ve finished coding the logic in the /var/www/html/tasks.php file, you’ll schedule the crontab daemon to execute the file every minute in the next step.

      Step 3 — Scheduling the PHP Script to Run After 1 Minute

      In Linux, you can schedule jobs to run automatically after a stipulated time by entering a command into the crontab file. In this step, you will instruct the crontab daemon to run your /var/www/html/tasks.php script every minute. So, open the /etc/crontab file using nano:

      • sudo nano /etc/crontab

      Then add the following toward the end of the file to fetch http://localhost/tasks.php every minute:

      /etc/crontab

      ...
      * * * * * root /usr/bin/wget -O - http://localhost/tasks.php
      

      Save and close the file.

      This guide assumes that you have a basic knowledge of how cron jobs work. Consider reading our guide on How to Use Cron to Automate Tasks on Ubuntu.

      As indicated earlier, although the cron daemon runs the tasks.php file every minute, once the file has been executed for the first time, it will keep looping through the open tasks for another 60 seconds. By the time the loop expires, the cron daemon executes the file again and the process continues.
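
      The loop-plus-sleep pattern at the heart of tasks.php can be sketched in a few lines. This Python version is for illustration only (the tutorial’s implementation is PHP); it accepts injectable clock and pause functions so the 60-second window can be simulated instantly:

```python
import time

def run_window(process_batch, window=60, interval=5,
               clock=time.time, pause=time.sleep):
    """Call process_batch repeatedly until `window` seconds have
    elapsed, pausing `interval` seconds between batches -- the same
    shape as the while/sleep loop in tasks.php."""
    expiry = clock() + window
    runs = 0
    while clock() < expiry:
        process_batch()
        runs += 1
        pause(interval)
    return runs

# Simulate the clock so the 60-second demo finishes instantly:
now = [0]
batches = run_window(lambda: None,
                     clock=lambda: now[0],
                     pause=lambda s: now.__setitem__(0, now[0] + s))
print(batches)  # 12 batches: one every 5 seconds across 60 seconds
```

With interval=20 instead, the same window yields three batches per minute, matching the tuning advice above.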

      After updating and closing the /etc/crontab file, the crontab daemon should begin executing the MySQL tasks that you inserted in the tasks table immediately. To confirm whether everything is working as expected, you’ll query your cron_jobs database next.

      Step 4 — Confirming Job Execution

      In this step, you will open your database one more time to check whether the tasks.php file is processing queued jobs when executed automatically by the crontab.

      Log back in to your MySQL server as root:

      Then, enter your MySQL server’s root password and hit ENTER to proceed. Then, switch to the database:

      Output

      Database changed

      Run a SELECT statement against the tasks table:

      • SELECT task_id, task_name, queued_at, completed_at, is_processed FROM tasks;

      You will receive output similar to the following. In the completed_at column, tasks have been processed at intervals of 5 seconds. Also, the tasks have been marked as completed since the is_processed column is now set to Y, which means YES.

      Output

      +---------+-----------+---------------------+---------------------+--------------+
      | task_id | task_name | queued_at           | completed_at        | is_processed |
      +---------+-----------+---------------------+---------------------+--------------+
      |       1 | TASK 1    | 2021-03-06 06:27:19 | 2021-03-06 06:30:01 | Y            |
      |       2 | TASK 2    | 2021-03-06 06:27:28 | 2021-03-06 06:30:06 | Y            |
      |       3 | TASK 3    | 2021-03-06 06:27:36 | 2021-03-06 06:30:11 | Y            |
      +---------+-----------+---------------------+---------------------+--------------+
      3 rows in set (0.00 sec)

      This confirms that your PHP script is working as expected; you have run tasks in a shorter time interval by overriding the limitation of the 1 minute time period set by the crontab daemon.

      Conclusion

      In this guide, you’ve set up a sample database on an Ubuntu 20.04 server, created jobs in a table, and run them at intervals of 5 seconds using the PHP while(...){...} loop and sleep() function. Use the logic in this tutorial the next time you implement a job-queue-based application where tasks need to run multiple times within a 1 minute period.

      For more PHP tutorials, check out our PHP topic page.




      How To Use subprocess to Run External Programs in Python 3


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Python 3 includes the subprocess module for running external programs and reading their outputs in your Python code.

      You might find subprocess useful if you want to use another program on your computer from within your Python code. For example, you might want to invoke git from within your Python code to retrieve files in your project that are tracked in git version control. Since any program you can access on your computer can be controlled by subprocess, the examples shown here will be applicable to any external program you might want to invoke from your Python code.

      subprocess includes several classes and functions, but in this tutorial we’ll cover one of subprocess’s most useful functions: subprocess.run. We’ll review its different uses and main keyword arguments.

      Prerequisites

      To get the most out of this tutorial, it is recommended to have some familiarity with programming in Python 3. You can review these tutorials for the necessary background information:

      Running an External Program

      You can use the subprocess.run function to run an external program from your Python code. First, though, you need to import the subprocess and sys modules into your program:

      import subprocess
      import sys
      
      result = subprocess.run([sys.executable, "-c", "print('ocean')"])
      

      If you run this, you will receive output like the following:

      Output

      ocean

      Let’s review this example:

      • sys.executable is the absolute path to the Python executable that your program was originally invoked with. For example, sys.executable might be a path like /usr/local/bin/python.
      • subprocess.run is given a list of strings consisting of the components of the command we are trying to run. Since the first string we pass is sys.executable, we are instructing subprocess.run to execute a new Python program.
      • The -c component is a Python command-line option that allows you to pass a string containing an entire Python program to execute. In our case, we pass a program that prints the string ocean.

      You can think of each entry in the list that we pass to subprocess.run as being separated by a space. For example, [sys.executable, "-c", "print('ocean')"] translates roughly to /usr/local/bin/python -c "print('ocean')". Note that subprocess passes each component to the operating system as a separate argument, without involving a shell, so that, for example, you can pass a filename that has spaces in it.
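
      To see that list elements travel as whole arguments, here is a small variation on the example above: the child process prints its first argument, which arrives intact even though it contains a space (stdout=subprocess.PIPE captures the output and also works on Python versions before 3.7):

```python
import subprocess
import sys

# "hello world" is a single list element, so the child receives it
# as one argument -- no shell quoting is involved.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "hello world"],
    stdout=subprocess.PIPE,
)
print(result.stdout)  # b'hello world\n'
```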

      Warning: Never pass untrusted input to subprocess.run. Since subprocess.run has the ability to perform arbitrary commands on your computer, malicious actors can use it to manipulate your computer in unexpected ways.

      Capturing Output From an External Program

      Now that we can invoke an external program using subprocess.run, let’s see how we can capture output from that program. For example, this process could be useful if we wanted to use git ls-files to output all your files currently stored under version control.

      Note: The examples shown in this section require Python 3.7 or higher. In particular, the capture_output and text keyword arguments were added in Python 3.7 when it was released in June 2018.

      Let’s add to our previous example:

      import subprocess
      import sys
      
      result = subprocess.run(
          [sys.executable, "-c", "print('ocean')"], capture_output=True, text=True
      )
      print("stdout:", result.stdout)
      print("stderr:", result.stderr)
      

      If we run this code, we’ll receive output like the following:

      Output

      stdout: ocean

      stderr:

      This example is largely the same as the one introduced in the first section: we are still running a subprocess to print ocean. Importantly, however, we pass the capture_output=True and text=True keyword arguments to subprocess.run.

      subprocess.run returns a subprocess.CompletedProcess object that is bound to result. The subprocess.CompletedProcess object includes details about the external program’s exit code and its output. capture_output=True ensures that result.stdout and result.stderr are filled in with the corresponding output from the external program. By default, result.stdout and result.stderr are bound as bytes, but the text=True keyword argument instructs Python to instead decode the bytes into strings.

      In the output section, stdout is ocean (plus the trailing newline that print adds implicitly), and we have no stderr.
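
      The difference text=True makes is easiest to see by running the same command with and without it:

```python
import subprocess
import sys

command = [sys.executable, "-c", "print('ocean')"]

as_bytes = subprocess.run(command, capture_output=True)
as_text = subprocess.run(command, capture_output=True, text=True)

print(repr(as_bytes.stdout))  # b'ocean\n' -- raw bytes
print(repr(as_text.stdout))   # 'ocean\n' -- decoded to str
```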

      Let’s try an example that produces a non-empty value for stderr:

      import subprocess
      import sys
      
      result = subprocess.run(
          [sys.executable, "-c", "raise ValueError('oops')"], capture_output=True, text=True
      )
      print("stdout:", result.stdout)
      print("stderr:", result.stderr)
      

      If we run this code, we receive output like the following:

      Output

      stdout: 
      stderr: Traceback (most recent call last):
        File "<string>", line 1, in <module>
      ValueError: oops

      This code runs a Python subprocess that immediately raises a ValueError. When we inspect the final result, we see nothing in stdout and a Traceback of our ValueError in stderr. This is because by default Python writes the Traceback of the unhandled exception to stderr.

      Raising an Exception on a Bad Exit Code

      Sometimes it’s useful to raise an exception if a program we run exits with a bad exit code. Programs that exit with a zero code are considered successful, but programs that exit with a non-zero code are considered to have encountered an error. As an example, this pattern could be useful if we wanted to raise an exception in the event that we run git ls-files in a directory that wasn’t actually a git repository.

      We can use the check=True keyword argument to subprocess.run to have an exception raised if the external program returns a non-zero exit code:

      import subprocess
      import sys
      
      result = subprocess.run([sys.executable, "-c", "raise ValueError('oops')"], check=True)
      

      If we run this code, we receive output like the following:

      Output

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: oops
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-c', "raise ValueError('oops')"]' returned non-zero exit status 1.

      This output shows that we ran a subprocess that raised an error, which is printed in stderr in our terminal. Then subprocess.run dutifully raised a subprocess.CalledProcessError on our behalf in our main Python program.
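In practice, you will often want to catch this exception rather than let it end your program. A minimal sketch, combining check=True with capture_output=True so the exception object carries the subprocess's output:

```python
import subprocess
import sys

try:
    subprocess.run(
        [sys.executable, "-c", "raise ValueError('oops')"],
        check=True,
        capture_output=True,
        text=True,
    )
except subprocess.CalledProcessError as error:
    # The exception object carries the exit code and any captured output.
    exit_code = error.returncode
    error_output = error.stderr
    print("exit code:", exit_code)
```

Catching the exception this way lets you log or react to the failing program's stderr instead of crashing.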

      Alternatively, the subprocess module also includes the subprocess.CompletedProcess.check_returncode method, which we can invoke for similar effect:

      import subprocess
      import sys
      
      result = subprocess.run([sys.executable, "-c", "raise ValueError('oops')"])
      result.check_returncode()
      

      If we run this code, we’ll receive:

      Output

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: oops
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/subprocess.py", line 444, in check_returncode
    raise CalledProcessError(self.returncode, self.args, self.stdout,
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-c', "raise ValueError('oops')"]' returned non-zero exit status 1.

      Since we didn’t pass check=True to subprocess.run, we successfully bound a subprocess.CompletedProcess instance to result even though our program exited with a non-zero code. Calling result.check_returncode(), however, raises a subprocess.CalledProcessError because it detects the completed process exited with a bad code.

      Using timeout to Exit Programs Early

      subprocess.run includes the timeout argument to allow you to stop an external program if it is taking too long to execute:

      import subprocess
      import sys
      
      result = subprocess.run([sys.executable, "-c", "import time; time.sleep(2)"], timeout=1)
      

      If we run this code, we’ll receive output like the following:

      Output

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/subprocess.py", line 491, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/usr/local/lib/python3.8/subprocess.py", line 1024, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/local/lib/python3.8/subprocess.py", line 1892, in _communicate
    self.wait(timeout=self._remaining_time(endtime))
  File "/usr/local/lib/python3.8/subprocess.py", line 1079, in wait
    return self._wait(timeout=timeout)
  File "/usr/local/lib/python3.8/subprocess.py", line 1796, in _wait
    raise TimeoutExpired(self.args, timeout)
subprocess.TimeoutExpired: Command '['/usr/local/bin/python', '-c', 'import time; time.sleep(2)']' timed out after 0.9997982999999522 seconds

      The subprocess we tried to run used the time.sleep function to sleep for 2 seconds. However, we passed the timeout=1 keyword argument to subprocess.run to time out our subprocess after 1 second. This explains why our call to subprocess.run ultimately raised a subprocess.TimeoutExpired exception.

      Note that the timeout keyword argument to subprocess.run is approximate. Python will make a best effort to kill the subprocess after the timeout number of seconds, but it won’t necessarily be exact.
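Since an expired timeout surfaces as an exception, you will usually want to catch subprocess.TimeoutExpired so a slow subprocess doesn't crash your program. A minimal sketch:

```python
import subprocess
import sys

try:
    subprocess.run(
        [sys.executable, "-c", "import time; time.sleep(2)"], timeout=1
    )
    timed_out = False
except subprocess.TimeoutExpired:
    # Python kills the subprocess and raises once the timeout elapses.
    timed_out = True
print("timed out:", timed_out)
```

From here you could retry the command, fall back to a default, or report the failure.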

      Passing Input to Programs

      Sometimes programs expect input to be passed to them via stdin.

      The input keyword argument to subprocess.run allows you to pass data to the stdin of the subprocess. For example:

      import subprocess
      import sys
      
      result = subprocess.run(
          [sys.executable, "-c", "import sys; print(sys.stdin.read())"], input=b"underwater"
      )
      

      We’ll receive output like the following after running this code:

      Output

      underwater

In this case, we passed the bytes underwater to input. Our target subprocess used sys.stdin to read the stdin passed to it (underwater) and printed it back out in our output.

The input keyword argument can be useful if you want to chain multiple subprocess.run calls together, passing the output of one program as the input to another.
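As an illustration of that chaining pattern (using two small Python one-liners as stand-ins for real programs), you might feed the stdout of one run into the input of the next:

```python
import subprocess
import sys

# First program: emit some text on stdout.
first = subprocess.run(
    [sys.executable, "-c", "print('underwater')"],
    capture_output=True,
)

# Second program: uppercase whatever arrives on stdin.
# first.stdout is bytes, which is exactly what input expects by default.
second = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper(), end='')"],
    input=first.stdout,
    capture_output=True,
)
print(second.stdout.decode())
```

Because capture_output=True stores the first program's stdout as bytes on the result, it can be handed directly to the next call's input without any conversion.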

      Conclusion

      The subprocess module is a powerful part of the Python standard library that lets you run external programs and inspect their outputs easily. In this tutorial, you have learned to use subprocess.run to control external programs, pass input to them, parse their output, and check their return codes.

      The subprocess module exposes additional classes and utilities that we did not cover in this tutorial. Now that you have a baseline, you can use the subprocess module’s documentation to learn more about other available classes and utilities.


