
      November 2020

      How To Write Unit Tests in Go


      The author selected the FreeBSD Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      A unit test is a function that tests a specific piece of code from a program or package. The job of unit tests is to check the correctness of an application, and first-class support for writing them is built into the Go toolchain.

      In this tutorial, you will create a small program and then run a series of tests on your code using Go’s testing package and the go test command. Once you complete the tutorial, you will have a working unit-testing suite that includes a table-based unit test, a coverage test, a benchmark, and a documented example.

      Prerequisites

      To complete this tutorial, you’ll need the following:

      Note: This tutorial uses Go Modules, which is a package management system introduced in Go version 1.11. Go Modules are intended to replace the $GOPATH and became the default option starting with Go version 1.13. For a more comprehensive overview of the differences between Go Modules and the $GOPATH, consider reading this official blog post from the Go core team.

      This tutorial was tested using Go version 1.14.

      Step 1 — Creating a Sample Program to Unit Test

      Before you can write any unit tests, you need some code for your tests to analyze. In this step, you will build a small program that sums two integers. In the subsequent steps, you will use go test to test the program.

      First, create a new directory called math:

      • mkdir math

      Move inside the new directory:

      • cd math

      This will be the root directory for your program, and you will run all remaining commands from here.

      Now, using nano or your preferred text editor, create a new file called math.go:

      • nano math.go

      Add the following code:

      ./math/math.go

      package math
      
      // Add is our function that sums two integers
      func Add(x, y int) (res int) {
          return x + y
      }
      
      // Subtract subtracts two integers
      func Subtract(x, y int) (res int) {
          return x - y
      }
      

      Here you are creating two functions called Add and Subtract. Each function accepts two integers and returns either their sum (func Add) or their difference (func Subtract).

      Save and close the file.

      In this step, you wrote some code in Go. Now, in the following steps, you will write some unit tests to ensure that your code functions properly.

      Step 2 — Writing Unit Tests in Go

      In this step, you will write your first test in Go. Writing tests in Go requires a test file, and this test file must always end with _test.go. By convention, Go testing files are always located in the same folder, or package, where the code they are testing resides. These files are not built by the compiler when you run the go build command, so you needn’t worry about them ending up in deployments.

      And as with everything in Go, the language is opinionated about testing. The Go language provides a minimal yet complete package called testing that developers use alongside the go test command. The testing package provides some useful conventions, such as coverage tests and benchmarks, which you will now explore.

      Use your editor to create and open a new file called math_test.go:

      • nano math_test.go

      A test function in Go includes this signature: func TestXxxx(t *testing.T). This means that all test functions must start with the word Test, followed by a suffix whose first word is capitalized. Test functions in Go receive only one parameter, and in this case, it’s a pointer of type testing.T. This type contains useful methods that you will need to output results, log errors to the screen, and signal failures, like the t.Errorf() method.

      Add the following code to math_test.go:

      ./math/math_test.go

      
      package math
      
      import "testing"
      
      func TestAdd(t *testing.T) {
      
          got := Add(4, 6)
          want := 10
      
          if got != want {
              t.Errorf("got %d, wanted %d", got, want)
          }
      }
      

      First, you declare the name of the package that you want to test—math. Then you import the testing package itself, which makes available the testing.T type and the other types and methods exported by the package. The code and testing logic are contained in the TestAdd function.

      To summarize, the following are characteristics of a test in Go:

      • The first and only parameter must be t *testing.T
      • The testing function begins with the word Test followed by a word or phrase that starts with a capital letter (the convention is to use the name of the method under test, e.g., TestAdd)
      • The test calls t.Error or t.Fail to indicate a failure (this tutorial calls t.Errorf, which logs a formatted message before marking the test as failed, whereas t.Fail fails silently)
      • You can use t.Log to provide non-failing debug information (both t.Error and t.Log are illustrated in the sketch after this list)
      • Tests are saved in files using this naming convention: foo_test.go, such as math_test.go.
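      To illustrate the t.Error and t.Log points, here is a minimal sketch (not part of this tutorial’s test file) showing how the two methods are typically used together:

      func TestAddWithLogging(t *testing.T) {
          t.Log("calling Add(1, 2)") // non-failing debug output, shown when running go test -v
          if got := Add(1, 2); got != 3 {
              t.Error("Add(1, 2) did not return 3") // logs the message and marks the test as failed
          }
      }

      Unlike t.Fatal, t.Error lets the test function continue running, so a single test can report several failures at once.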

      Save and then close the file.

      In this step, you wrote your first test in Go. In the next step, you will begin using go test to test your code.

      Step 3 — Testing Your Go Code Using the go test command

      In this step, you will test your code. go test is a powerful subcommand that helps you automate your tests. go test accepts different flags that can configure the tests you wish to run, how much verbosity the tests return, and more.

      From your project’s root directory, run your first test:

      • go test

      You will receive the following output:

      Output

      PASS
      ok      ./math  0.988s

      PASS means the code is working as expected. When a test fails, you will see FAIL.

      The go test subcommand only looks for files with the _test.go suffix. go test then scans those file(s) for special functions including func TestXxx and several others that we will cover in later steps. go test then generates a temporary main package that calls these functions in the proper way, builds and runs them, reports the results, and finally cleans everything up.

      The basic go test output is probably sufficient for this little program, but there will be times when you’ll wish to see what tests are running and how long each takes. Adding the -v flag increases verbosity. Rerun your test with the new flag:

      • go test -v

      You will see the following output:

      Output

      === RUN   TestAdd
      --- PASS: TestAdd (0.00s)
      PASS
      ok      ./math  1.410s

      In this step, you ran a basic unit test using the go test subcommand. In the next step, you will write a more complex, table-driven unit test.

      Step 4 — Writing Table-Driven Tests in Go

      A table-driven test is like a basic unit test except that it maintains a table of different values and results. The testing suite iterates over these values and submits them to the test code. Using this approach, you can test several combinations of inputs and their respective outputs.

      You will now replace your unit test with a table of structs, whose fields include the necessary two arguments (two integers) and the expected result (their sum) for your Add function.

      Reopen math_test.go:

      • nano math_test.go

      Delete all the code in the file and add the following table-driven unit test instead:

      ./math/math_test.go

      
      package math
      
      import "testing"
      
      // arg1 and arg2 are the function's arguments, and expected is the result we expect
      type addTest struct {
          arg1, arg2, expected int
      }
      
      var addTests = []addTest{
          {2, 3, 5},
          {4, 8, 12},
          {6, 9, 15},
          {3, 10, 13},
      }
      
      
      func TestAdd(t *testing.T) {
      
          for _, test := range addTests {
              if output := Add(test.arg1, test.arg2); output != test.expected {
                  t.Errorf("Output %d not equal to expected %d", output, test.expected)
              }
          }
      }
      
      

      Here you are defining a struct, populating a table of structs that include the arguments and expected results for your Add function, and then writing a new TestAdd function. In this new function, you iterate over the table, run the arguments, compare the outputs to each expected result, and then report any errors, should they occur.

      Save and close the file.

      Now run the test with the -v flag:

      • go test -v

      You will see the same output as before:

      Output

      === RUN   TestAdd
      --- PASS: TestAdd (0.00s)
      PASS
      ok      ./math  1.712s

      With each iteration of the loop, the code tests the value calculated by the Add function against an expected value.
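      If you want each table entry to be reported individually, the testing package also supports subtests through the t.Run method. Here is a minimal sketch of the same loop rewritten with named subtests (it assumes fmt has been added to the import list):

      func TestAdd(t *testing.T) {
          for _, test := range addTests {
              name := fmt.Sprintf("%d+%d", test.arg1, test.arg2) // e.g. "2+3"
              t.Run(name, func(t *testing.T) {
                  if output := Add(test.arg1, test.arg2); output != test.expected {
                      t.Errorf("Output %d not equal to expected %d", output, test.expected)
                  }
              })
          }
      }

      With this variant, go test -v prints a separate PASS or FAIL line per table entry, which makes it easier to see exactly which combination failed.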

      In this step, you wrote a table-driven test. In the next step, you will write a coverage test instead.

      Step 5 — Writing Coverage Tests in Go

      In this step, you will write a coverage test in Go. When writing tests, it is often important to know how much of your actual code the tests cover. This is generally referred to as coverage. This is also why you have not written a test for your Subtract function — so you can view an incomplete coverage report.

      Run the following command to calculate the coverage for your current unit test:

      • go test -coverprofile=coverage.out

      You will receive the following output:

      Output

      PASS
      coverage: 50.0% of statements
      ok      ./math  2.073s

      Go saved this coverage data in the file coverage.out. Now you can present the results in a web browser.

      Run the following command:

      • go tool cover -html=coverage.out

      A web browser will open, and your results will render:

      go unit testing coverage.out

      The green text indicates coverage, whereas the red text indicates the opposite.
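      If you wanted to bring the package to full coverage, one approach (a sketch; this tutorial deliberately leaves Subtract untested so the report stays incomplete) would be to add a matching test for Subtract to math_test.go:

      func TestSubtract(t *testing.T) {
          got := Subtract(10, 4)
          want := 6
      
          if got != want {
              t.Errorf("got %d, wanted %d", got, want)
          }
      }

      Rerunning go test -coverprofile=coverage.out would then report 100.0% of statements.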

      In this step, you tested the coverage of your table-driven unit test. In the next step, you will benchmark your function.

      Step 6 — Writing Benchmarks in Go

      In this step, you will write a benchmark test in Go. Benchmarking measures the performance of a function or program. This allows you to compare implementations and to understand the impact of the changes you make to your code. Using that information, you can reveal parts of your Go source code worth optimizing.

      In Go, functions that take the form func BenchmarkXxx(*testing.B) are considered benchmarks. go test will execute these benchmarks when you provide the -bench flag. Benchmarks are run sequentially.

      Let’s add a benchmark to our unit test.

      Open math_test.go:

      Now add a benchmark function using the func BenchmarkXxx(*testing.B) syntax:

      ./math/math_test.go

      ...
      func BenchmarkAdd(b *testing.B) {
          for i := 0; i < b.N; i++ {
              Add(4, 6)
          }
      }
      

      The benchmark function must run the target code b.N times, where N is an integer adjusted by the testing framework. During benchmark execution, b.N is increased until the benchmark function lasts long enough to be timed reliably. The -bench flag accepts its arguments in the form of a regular expression.
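      If a benchmark ever needs expensive one-time setup, the testing package lets you exclude it from the measurement with b.ResetTimer. Here is a minimal sketch (purely illustrative; you don’t need to add it for this tutorial):

      func BenchmarkAddTable(b *testing.B) {
          // any expensive setup would go here, outside the measured loop
          b.ResetTimer() // restart the timer so setup time is not measured
          for i := 0; i < b.N; i++ {
              for _, test := range addTests {
                  Add(test.arg1, test.arg2)
              }
          }
      }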

      Save and close the file.

      Now let’s use go test again to run our benchmark:

      • go test -bench=.

      The . will match every benchmark function in a file.

      You can also declare benchmark functions explicitly:

      • go test -bench=BenchmarkAdd

      Run either command, and you will see an output like this:

      Output

      goos: windows
      goarch: amd64
      pkg: math
      BenchmarkAdd-4    1000000000    1.07 ns/op
      PASS
      ok      ./math  2.074s

      The resulting output means that the loop ran 1,000,000,000 times at a speed of 1.07 nanoseconds per loop.

      Note: Try not to benchmark your Go code on a busy system being used for other purposes, or else you will interfere with the benchmarking process and get inaccurate results.

      You have now added a benchmark to your growing unit test. In the next and final step, you will add examples to your documentation, which go test will also evaluate.

      Step 7 — Documenting Your Go Code with Examples

      In this step, you will document your Go code with examples and then test those examples. Go is very focused on proper documentation, and example code adds another dimension both to documentation and testing. Examples are based on existing methods and functions. Your examples should show users how to use a specific piece of code. Example functions are the third type of function treated specially by the go test subcommand.

      To begin, reopen math_test.go:

      • nano math_test.go

      Now add the following code. This will add the fmt package to the import list and your example function at the end of the file:

      ./math/math_test.go

      
      package math
      
      import (
          "fmt"
          "testing"
      )
      
      // arg1 and arg2 are the function's arguments, and expected is the result we expect
      type addTest struct {
          arg1, arg2, expected int
      }
      
      var addTests = []addTest{
          {2, 3, 5},
          {4, 8, 12},
          {6, 9, 15},
          {3, 10, 13},
      }
      
      func TestAdd(t *testing.T) {
      
          for _, test := range addTests {
              if output := Add(test.arg1, test.arg2); output != test.expected {
                t.Errorf("Output %d not equal to expected %d", output, test.expected)
              }
          }
      }
      
      func BenchmarkAdd(b *testing.B) {
          for i := 0; i < b.N; i++ {
              Add(4, 6)
          }
      }
      
      
      func ExampleAdd() {
          fmt.Println(Add(4, 6))
          // Output: 10
      }
      
      

      The Output: line is used to specify and document the expected output.

      Note: The comparison ignores leading and trailing space.
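      If you also wanted to document the Subtract function, a parallel example would follow the same pattern (a sketch; optional for this tutorial):

      func ExampleSubtract() {
          fmt.Println(Subtract(10, 4))
          // Output: 6
      }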

      Save and close the file.

      Now rerun your unit test:

      • go test -v

      You will see an updated output like this:

      Output

      === RUN   TestAdd
      --- PASS: TestAdd (0.00s)
      === RUN   ExampleAdd
      --- PASS: ExampleAdd (0.00s)
      PASS
      ok      ./math  0.442s

      Your examples are now also tested. This feature improves your documentation and also makes your unit test more robust.

      Conclusion

      In this tutorial, you created a small program and then wrote a basic unit test to check its functionality. You then rewrote your unit test as a table-based unit test and then added a coverage test, a benchmark, and a documented example.

      Taking time to write adequate unit tests is useful to you as a programmer because it improves your confidence that the code or program you have written will continue to work as expected. The testing package in Go provides you with considerable unit-testing capabilities. To learn more, refer to Go’s official documentation.

      And if you wish to learn more about programming in Go, visit our tutorial series / free ebook, How To Code in Go.




      How To Protect Sensitive Data in Terraform


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Terraform provides automation to provision your infrastructure in the cloud. To do this, Terraform authenticates with cloud providers (and other providers) to deploy the resources and perform the planned actions. However, the information Terraform needs for authentication is very valuable and generally sensitive; you should always keep it secret, since it unlocks access to your services. For example, you can consider API keys or passwords for database users as sensitive data.

      If a malicious third party were to acquire the sensitive information, they would be able to breach the security systems by presenting themselves as a known trusted user. In turn, they would be able to modify, delete, and replace the resources and services that are available under the scope of the obtained keys. To prevent this from happening, it is essential to properly secure your project and safeguard its state file, which stores all the project secrets.

      By default, Terraform stores the state file locally in the form of unencrypted JSON, allowing anyone with access to the project files to read the secrets. While a solution to this is to restrict access to the files on disk, another option is to store the state remotely in a backend that encrypts the data automatically, such as DigitalOcean Spaces.

      In this tutorial, you’ll hide sensitive data in outputs during execution and store your state in a secure cloud object storage, which encrypts data at rest. You’ll use DigitalOcean Spaces in this tutorial as your cloud object storage. You’ll also use tfmask, which is an open source program written in Go that dynamically censors values in the Terraform execution log output.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. Instructions can be found in How to Generate a Personal Access Token.
      • Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-sensitive, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.
      • A DigitalOcean Space with API keys (access and secret). To learn how to create a DigitalOcean Space and API keys, see How To Create a DigitalOcean Space and API Key.

      Note: This tutorial has specifically been tested with Terraform 0.13.

      Marking Outputs as sensitive

      In this step, you’ll hide outputs in code by setting their sensitive parameter to true. This is useful when secret values are part of the Terraform output that you’re storing indefinitely, or you need to share the output logs beyond your team for analysis.

      Assuming you are in the terraform-sensitive directory, which you created as part of the prerequisites, you’ll define a Droplet and an output showing its IP address. You’ll store it in a file named droplets.tf, so create and open it for editing by running:

      • nano droplets.tf

      Add the following lines:

      terraform-sensitive/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-1"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      output "droplet_ip_address" {
        value = digitalocean_droplet.web.ipv4_address
      }
      

      This code will deploy a Droplet called web-1 in the fra1 region, running Ubuntu 18.04 with 1GB RAM and one CPU core. Here you’ve defined a droplet_ip_address output, whose value you’ll receive in the Terraform log.

      To deploy this Droplet, execute the code by running the following command:

      • terraform apply -var "do_token=${DO_PAT}"

      The actions Terraform will take will be the following:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-1"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.

      ...

      Enter yes when prompted. You’ll receive the following output:

      Output

      digitalocean_droplet.web: Creating...
      ...
      digitalocean_droplet.web: Creation complete after 33s [id=216255733]

      Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

      Outputs:

      droplet_ip_address = your_droplet_ip_address

      You will find the IP address in the output. If you’re sharing this output with others, or if it will be publicly available because of automated deployment processes, it’s important to take action to hide this data in the output.

      To censor it, you’ll need to set the sensitive attribute of the droplet_ip_address output to true.

      Open droplets.tf for editing:

      • nano droplets.tf

      Add the highlighted line:

      terraform-sensitive/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-1"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      output "droplet_ip_address" {
        value = digitalocean_droplet.web.ipv4_address
        sensitive = true
      }
      

      Save and close the file when you’re done.

      Apply the project again by running:

      • terraform apply -var "do_token=${DO_PAT}"

      The output will be:

      Output

      digitalocean_droplet.web: Refreshing state... [id=216255733]

      Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

      Outputs:

      droplet_ip_address = <sensitive>

      You’ve now explicitly censored the IP address—the value of the output. Censoring outputs is useful in situations where the Terraform logs will be in a public space, or when you want values to remain hidden without deleting them from the code. You’ll also want to censor outputs that contain passwords and API tokens, as they are sensitive information as well.
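      For instance, if your project also managed a database cluster (a hypothetical resource, shown only to illustrate the pattern), its password output would be marked the same way:

      # hypothetical resource output; adjust the resource name to your project
      output "database_password" {
        value     = digitalocean_database_cluster.example.password
        sensitive = true
      }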

      You’ve now hidden the values of the defined outputs by marking them as sensitive. In the next step, you’ll configure Terraform to store your project’s state in the encrypted cloud, instead of locally.

      Storing State in an Encrypted Remote Backend

      The state file stores all information about your deployed infrastructure, containing all its internal relationships and secrets. By default, it’s stored in plaintext, locally on the disk. Storing it remotely, in the cloud, provides a higher level of security. If the cloud storage service supports encryption at rest, it will store the state file in an encrypted state at all times, so that potential attackers won’t be able to gather information from it. Storing the state file encrypted remotely is different from marking outputs as sensitive: it changes how Terraform stores the data, not what it displays, so all the secrets in the state remain securely stored in the cloud.

      You’ll now configure your project to store the state file in a DigitalOcean Space. As a result it will be encrypted at rest and protected with TLS in transit.

      By default, the Terraform state file is called terraform.tfstate and is located in the root of every initialized directory. You can view its contents by running:

      • cat terraform.tfstate

      The contents of the file will be similar to this:

      {
        "version": 4,
        "terraform_version": "0.13.1",
        "serial": 3,
        "lineage": "926017f6-d7be-e1fa-99e4-f2a988026ed4",
        "outputs": {
          "droplet_ip_address": {
            "value": "...",
            "type": "string",
            "sensitive": true
          }
        },
        "resources": [
          {
            "mode": "managed",
            "type": "digitalocean_droplet",
            "name": "web",
            "provider": "provider["registry.terraform.io/digitalocean/digitalocean"]",
            "instances": [
              {
                "schema_version": 1,
                "attributes": {
                  "backups": false,
                  "created_at": "...",
                  "disk": 25,
                  "id": "216255733",
                  "image": "ubuntu-18-04-x64",
                  "ipv4_address": "...",
                  "ipv4_address_private": "10.135.0.3",
                  "ipv6": false,
                  "ipv6_address": "",
                  "ipv6_address_private": null,
                  "locked": false,
                  "memory": 1024,
                  "monitoring": false,
                  "name": "web-1",
                  "price_hourly": 0.00744,
                  "price_monthly": 5,
                  "private_networking": true,
                  "region": "fra1",
                  "resize_disk": true,
                  "size": "s-1vcpu-1gb",
                  "ssh_keys": null,
                  "status": "active",
                  "tags": [],
                  "urn": "do:droplet:216255733",
                  "user_data": null,
                  "vcpus": 1,
                  "volume_ids": [],
                  "vpc_uuid": "fc52519c-dc84-11e8-8b13-3cfdfea9f160"
                },
                "private": "..."
              }
            ]
          }
        ]
      }
      
      

      The state file contains all the resources you’ve deployed, as well as all outputs and their computed values. Gaining access to this file is enough to compromise the entire deployed infrastructure. To prevent that from happening, you can store it encrypted in the cloud.

      Terraform supports multiple backends, which are storage and retrieval mechanisms for the state. Examples are: local for local storage, pg for the Postgres database, and s3 for S3-compatible storage, which you’ll use to connect to your Space.

      The back-end configuration is specified under the main terraform block, which is currently in provider.tf. Open it for editing by running:

      • nano provider.tf

      Add the following lines:

      terraform-sensitive/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = "1.22.2"
          }
        }
      
        backend "s3" {
          key      = "state/terraform.tfstate"
          bucket   = "your_space_name"
          region   = "us-west-1"
          endpoint = "https://spaces_endpoint"
          skip_region_validation      = true
          skip_credentials_validation = true
          skip_metadata_api_check     = true
        }
      }
      
      variable "do_token" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      

      The s3 back-end block first specifies the key, which is the location of the Terraform state file on the Space. Passing in state/terraform.tfstate means that you will store it as terraform.tfstate under the state directory.

      The endpoint parameter tells Terraform where the Space is located and bucket defines the exact Space to connect to. The skip_region_validation, skip_credentials_validation, and skip_metadata_api_check parameters disable validations and checks that are not applicable to DigitalOcean Spaces. Note that region must be set to a conforming AWS value (such as us-west-1), which has no reference to Spaces.

      Remember to put in your bucket name and the Spaces endpoint, including the region, which you can find in the Settings tab of your Space. When you are done customizing the endpoint, save and close the file.

      Next, put the access and secret keys for your Space in environment variables, so you’ll be able to reference them later. Run the following commands, replacing the highlighted placeholders with your key values:

      • export SPACE_ACCESS_KEY="your_space_access_key"
      • export SPACE_SECRET_KEY="your_space_secret_key"

      Then, configure Terraform to use the Space as its backend by running:

      • terraform init -backend-config "access_key=$SPACE_ACCESS_KEY" -backend-config "secret_key=$SPACE_SECRET_KEY"

      The -backend-config argument provides a way to set back-end parameters at runtime, which you are using here to set the Space keys. You’ll be asked if you wish to copy the existing state to the cloud, or start anew:

      Output

      Initializing the backend...
      Do you want to copy existing state to the new backend?
        Pre-existing state was found while migrating the previous "local" backend to the
        newly configured "s3" backend. No existing state was found in the newly
        configured "s3" backend. Do you want to copy this state to the new "s3"
        backend? Enter "yes" to copy and "no" to start with an empty state.

      Enter yes when prompted. The rest of the output will be the following:

      Output

      Successfully configured the backend "s3"! Terraform will automatically
      use this backend unless the backend configuration changes.

      Initializing provider plugins...
      - Using previously-installed digitalocean/digitalocean v1.22.2

      Terraform has been successfully initialized!

      You may now begin working with Terraform. Try running "terraform plan" to see
      any changes that are required for your infrastructure. All Terraform commands
      should now work.

      If you ever set or change modules or backend configuration for Terraform,
      rerun this command to reinitialize your working directory. If you forget, other
      commands will detect it and remind you to do so if necessary.

      Your project will now store its state in your Space. If you receive an error, double-check that you’ve provided the correct keys, endpoint, and bucket name.

      Your project is now storing state in your Space. The local state file has been emptied, which you can check by showing its contents:

      • cat terraform.tfstate

      There will be no output, as expected.

      You can try modifying the Droplet definition and applying it to check that the state is still being correctly managed.

      Open droplets.tf for editing:

      • nano droplets.tf

      Modify the highlighted lines:

      terraform-sensitive/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "test-droplet"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      output "droplet_ip_address" {
        value = digitalocean_droplet.web.ipv4_address
        sensitive = false
      }
      

      Save and close the file, then apply the project by running:

      • terraform apply -var "do_token=${DO_PAT}"

      You will receive the following output:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        ~ update in-place

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be updated in-place
        ~ resource "digitalocean_droplet" "web" {
              backups              = false
              created_at           = "2020-11-11T18:43:03Z"
              disk                 = 25
              id                   = "216419273"
              image                = "ubuntu-18-04-x64"
              ipv4_address         = "159.89.21.92"
              ipv4_address_private = "10.135.0.4"
              ipv6                 = false
              locked               = false
              memory               = 1024
              monitoring           = false
            ~ name                 = "web-1" -> "test-droplet"
              price_hourly         = 0.00744
              price_monthly        = 5
              private_networking   = true
              region               = "fra1"
              resize_disk          = true
              size                 = "s-1vcpu-1gb"
              status               = "active"
              tags                 = []
              urn                  = "do:droplet:216419273"
              vcpus                = 1
              volume_ids           = []
              vpc_uuid             = "fc52519c-dc84-11e8-8b13-3cfdfea9f160"
          }

      Plan: 0 to add, 1 to change, 0 to destroy.

      ...

      Enter yes when prompted, and Terraform will apply the new configuration to the existing Droplet, meaning that it’s correctly communicating with the Space its state is stored on:

      Output

      digitalocean_droplet.web: Modifying... [id=216419273]
      digitalocean_droplet.web: Still modifying... [id=216419273, 10s elapsed]
      digitalocean_droplet.web: Modifications complete after 12s [id=216419273]

      Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

      Outputs:

      droplet_ip_address = your_droplet_ip_address

      You’ve configured the s3 backend for your project, so that you’re storing the state encrypted in the cloud, in a DigitalOcean Space. In the next step, you’ll use tfmask, a tool that will dynamically censor all sensitive outputs and information in Terraform logs.

      Using tfmask in CI/CD Environments

      In this section, you’ll download tfmask and use it to dynamically censor sensitive data in the whole output log Terraform generates when executing a command. It will censor the variables and parameters whose values are matched by a regular expression that you provide.

      Dynamically matching their names is possible when they follow a pattern (for example, contain the word password or secret). The advantage of using tfmask over only marking the outputs as sensitive is that it also censors matched parts of the resource declarations that Terraform prints out while executing. It’s imperative to hide them when the execution logs may be public, such as in automated CI/CD environments, which often list execution logs publicly.

      Compiled binaries of tfmask are available at its releases page on GitHub. For Linux, run the following command to download it:

      • sudo curl -L https://github.com/cloudposse/tfmask/releases/download/0.7.0/tfmask_linux_amd64 -o /usr/bin/tfmask

      Mark it as executable by running:

      • sudo chmod +x /usr/bin/tfmask

      tfmask works on the outputs of terraform plan and terraform apply by masking the values of all variables whose names are matched by a regular expression that you specify. You supply the expression and the character that will replace the actual values through the TFMASK_VALUES_REGEX and TFMASK_CHAR environment variables, respectively.

      You’ll now use tfmask to censor the name and ipv4_address of the Droplet that Terraform would deploy. First, you’ll need to set the mentioned environment variables by running:

      • export TFMASK_CHAR="*"
      • export TFMASK_VALUES_REGEX="(?i)^.*(ipv4_address|name).*$"

      This regular expression will match all lines containing ipv4_address or name, regardless of case.

      To make Terraform plan an action for your Droplet, open its definition for editing:

      • nano droplets.tf

      Modify the Droplet’s name:

      terraform-sensitive/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      output "droplet_ip_address" {
        value = digitalocean_droplet.web.ipv4_address
        sensitive = false
      }
      

      Save and close the file.

      Because you’ve changed an attribute of the Droplet, Terraform will show its full definition in its output. Plan the configuration, but pipe it to tfmask to censor variables according to the regular expression:

      • terraform plan -var "do_token=${DO_PAT}" | tfmask

      You’ll receive output similar to the following:

      Output

      ...
      digitalocean_droplet.web: Refreshing state... [id=216419273]

      ------------------------------------------------------------------------

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        ~ update in-place

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be updated in-place
        ~ resource "digitalocean_droplet" "web" {
              backups              = false
              created_at           = "2020-11-11T18:43:03Z"
              disk                 = 25
              id                   = "216419273"
              image                = "ubuntu-18-04-x64"
              ipv4_address         = "************"
              ipv4_address_private = "**********"
              ipv6                 = false
              locked               = false
              memory               = 1024
              monitoring           = false
            ~ name                 = "**********************************"
              price_hourly         = 0.00744
              price_monthly        = 5
              private_networking   = true
              region               = "fra1"
              resize_disk          = true
              size                 = "s-1vcpu-1gb"
              status               = "active"
              tags                 = []
              urn                  = "do:droplet:216419273"
              vcpus                = 1
              volume_ids           = []
              vpc_uuid             = "fc52519c-dc84-11e8-8b13-3cfdfea9f160"
          }

      Plan: 0 to add, 1 to change, 0 to destroy.

      ...

      Note that tfmask has censored the values for name, ipv4_address, and ipv4_address_private using the character you specified in the TFMASK_CHAR environment variable, because they match the regular expression.

      Censoring values this way in the Terraform logs is very useful for CI/CD, where the logs may be publicly available. The benefit of tfmask is that you have full control over what variables to censor (using the regular expression). You can also specify keywords that you want to censor which may not currently exist, but that you anticipate using in the future, as shown in the example below.
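      For example, a broader pattern (with hypothetical keywords; adjust it to your own variable names) could censor credential-like values in advance:

      • export TFMASK_VALUES_REGEX="(?i)^.*(ipv4_address|name|password|secret|token).*$"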

      You can destroy the deployed resources by running the following command and entering yes when prompted:

      • terraform destroy -var "do_token=${DO_PAT}"

      Conclusion

      In this article, you’ve worked with a couple of ways to hide and secure sensitive data in your Terraform project. The first measure, using sensitive to hide values from the outputs, is useful when only logs are accessible, but the values themselves stay present in the state stored on disk.

      To remedy that, you can opt to store the state file remotely, which you’ve achieved with DigitalOcean Spaces. This allows you to make use of encryption at rest. You also used tfmask, a tool that censors values of variables—matched using a regular expression—during terraform plan and terraform apply.

      You can also check out HashiCorp Vault to store secrets and secret data. It can be integrated with Terraform to inject secrets into resource definitions, so you’ll be able to connect your project with your existing Vault workflow. You may want to check out our tutorial on How To Build a Hashicorp Vault Server Using Packer and Terraform on DigitalOcean.

      For more on using Terraform, read other articles in our How To Manage Infrastructure with Terraform series.




      How To Deploy a Django App on App Platform


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Django is a powerful web framework that allows you to deploy your Python applications or websites. Django includes many features such as authentication, a custom database ORM (Object-Relational Mapper), and an extensible plugin architecture. Django simplifies the complexities of web development, allowing you to focus on writing code.

      In this tutorial, you’ll configure a Django project and deploy it to DigitalOcean’s App Platform using GitHub.

      Prerequisites

      To complete this tutorial, you’ll need:

      Step 1 — Creating a Python Virtual Environment for your Project

      Before you get started, you need to set up your Python developer environment. You will install your Python requirements within a virtual environment for easier management.

      First, create a directory in your home directory that you can use to store all of your virtual environments:

      • mkdir ~/.venvs

      Now create your virtual environment using Python:

      • python3 -m venv ~/.venvs/django

      This will create a directory called django within your .venvs directory. Inside, it will install a local version of Python and a local version of pip. You can use this to install and configure an isolated Python environment for your project.

      Before you install your project’s Python requirements, you need to activate the virtual environment. You can do that by typing:

      • source ~/.venvs/django/bin/activate

      Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (django)user@host:~$.

      With your virtual environment active, install Django, Gunicorn, whitenoise, dj-database-url, and the psycopg2 PostgreSQL adaptor with the local instance of pip:

      • pip install django gunicorn psycopg2-binary dj-database-url whitenoise

      Note: When the virtual environment is activated (when your prompt has (django) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment’s copy of the tool is always named pip, regardless of the Python version.

      These packages do the following:

      • django - Installs the Django framework and libraries
      • gunicorn - A WSGI HTTP server for deploying Django
      • dj-database-url - A Django tool for parsing a database URL
      • psycopg2 - A PostgreSQL adapter that allows Django to connect to a PostgreSQL database
      • whitenoise - A tool that allows your Django app to serve its own static files

      Now that you have these packages installed, you will need to save these requirements and their dependencies so App Platform can install them later. You can do this using pip and saving the information to a requirements.txt file:

      • pip freeze > requirements.txt

      You should now have all of the software needed to start a Django project. You are almost ready to deploy.

      Step 2 — Creating the Django Project

      Create your project using the django-admin tool that was installed when you installed Django:

      • django-admin startproject django_app

      At this point, the newly created django_app directory will have the following content:

      • manage.py: A Django project management script.
      • django_app/: The Django project package. This should contain the __init__.py, settings.py, urls.py, asgi.py, and wsgi.py files.

      This directory will be the root directory of your project and will be what you upload to GitHub. Navigate into this directory with the command:

      • cd django_app

      Let’s adjust some settings before deployment.

      Adjusting the Project Settings

      Now that you’ve created a Django project, you’ll need to modify the settings to ensure it will run properly in App Platform. Open the settings file in your text editor:

      • nano django_app/settings.py

      Let’s examine our configuration one step at a time.

      Reading Environment Variables

      First, you need to add the os import statement to be able to read environment variables:

      django_app/settings.py

      import os
      

      Setting the Secret Key

      Next, you need to modify the SECRET_KEY directive. This is set by Django on the initial project creation and will have a randomly generated default value. It is unsafe to keep this hardcoded value in the code once it’s pushed to GitHub, so you should either read this from an environment variable or generate it when the application is started. To do this, add the following import statement at the top of the settings file:

      django_app/settings.py

      from django.core.management.utils import get_random_secret_key
      

      Now modify the SECRET_KEY directive to read in the value from the environment variable DJANGO_SECRET_KEY or generate the key if it does not find said environment variable:

      django_app/settings.py

      SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", get_random_secret_key())
      

      Warning: If you don’t set this environment variable, then every time the app is re-deployed, this key will change. This can have adverse effects on cookies and will require users to log in again every time this key changes. You can generate a key using an online password generator.

      Setting Allowed Hosts

      Now locate the ALLOWED_HOSTS directive. This defines a list of the server’s addresses or domain names that may be used to connect to the Django instance. Any incoming request with a Host header that is not in this list will raise an exception. Django requires that you set this to prevent a certain class of security vulnerability.

      In the square brackets, list the IP addresses or domain names that are associated with your Django server. Each item should be listed in quotations, with entries separated by a comma. If you wish to allow requests for an entire domain and any subdomains, prepend a period to the beginning of the entry.
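      For example, a hardcoded list (using placeholder values; this tutorial reads the hosts from an environment variable instead, as described below) would look like this:

      # '.example.com' matches example.com and any of its subdomains
      ALLOWED_HOSTS = ['.example.com', '203.0.113.5']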

      App Platform supplies you with a custom URL as a default and then allows you to set a custom domain after you have deployed the application. Since you won’t know this custom URL until you have deployed the application, you should attempt to read the ALLOWED_HOSTS from an environment variable, so App Platform can inject this into your app when it launches.

      We’ll cover this process more in-depth in a later section. But for now, modify your ALLOWED_HOSTS directive to attempt to read the hosts from an environment variable. The environment variable can be set to either a single host or a comma-delimited list:

      django_app/settings.py

      ALLOWED_HOSTS = os.getenv("DJANGO_ALLOWED_HOSTS", "127.0.0.1,localhost").split(",")
      

      Setting the Debug Directive

      Next you should modify the DEBUG directive so that you can toggle this by setting an environment variable:

      django_app/settings.py

      DEBUG = os.getenv("DEBUG", "False") == "True"
      

      Here you used the getenv method to check for an environment variable named DEBUG. If this variable isn’t found, the value defaults to False for safety. Since environment variables will be read in as strings from App Platform, be sure to make a comparison to ensure that your variable is evaluated correctly.

      Setting the Development Mode

      Now create a new directive named DEVELOPMENT_MODE that will also be set as an environment variable. This is a helper variable that you will use to determine when to connect to your Postgres database and when to connect to a local SQLite database for testing. You’ll use this variable later when setting up the database connection:

      django_app/settings.py

      DEVELOPMENT_MODE = os.getenv("DEVELOPMENT_MODE", "False") == "True"
      

      Configuring Database Access

      Next, find the section that configures database access. It will start with DATABASES. The configuration in the file is for a SQLite database. App Platform allows you to create a PostgreSQL database for your project, so you need to adjust the settings to be able to connect to it.

      Warning: If you don’t change these settings and continue with the SQLite DB, your database will be erased after every new deployment. App Platform doesn’t maintain the disk when re-deploying applications, and your data will be lost.

      Change the settings with your PostgreSQL database information. You’ll read in the database connection information and credentials from an environment variable, DATABASE_URL, that will be provided by App Platform. Use the psycopg2 adapter you installed with pip to have Django access a PostgreSQL database, and the dj-database-url package to get all of the necessary information from the database connection URL.

      To facilitate development of your application locally, you’ll also use an if statement here to determine if DEVELOPMENT_MODE is set to True and, accordingly, which database should be accessed. By default, this will be set to False, and Django will attempt to connect to a PostgreSQL database. You also don’t want Django attempting to make a database connection to the PostgreSQL database when collecting the static files, so you’ll write an if statement to examine the command that was executed and skip the database connection if the command given was collectstatic. App Platform will automatically collect static files when the app is deployed.

      First, import the sys module, so you can determine the command that was passed to manage.py, and the dj_database_url module, to be able to parse the URL passed in:

      django_app/settings.py

      . . .
      import os
      import sys
      import dj_database_url
      

      Next, remove the current DATABASES directive block and replace it with this:

      django_app/settings.py

      if DEVELOPMENT_MODE is True:
          DATABASES = {
              "default": {
                  "ENGINE": "django.db.backends.sqlite3",
                  "NAME": os.path.join(BASE_DIR, "db.sqlite3"),
              }
          }
      elif len(sys.argv) > 1 and sys.argv[1] != 'collectstatic':
          if os.getenv("DATABASE_URL", None) is None:
              raise Exception("DATABASE_URL environment variable not defined")
          DATABASES = {
              "default": dj_database_url.parse(os.environ.get("DATABASE_URL")),
          }
      
      

      Next, move down to the bottom of the file and add a setting indicating where the static files should be placed. When your Django app is deployed to App Platform, python manage.py collectstatic will be run automatically. Set the route to match the STATIC_URL directive in the settings file:

      django_app/settings.py

      . . .
      STATIC_URL = "/static/"
      STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
      

      Note: If you plan on storing static files in other locations outside of your individual Django-app static files, you will need to add an additional directive to your settings file. This directive will specify where to find these files. Be aware that these directories cannot share the same name as your STATIC_ROOT:

      django_app/settings.py

      . . .
      STATIC_URL = "/static/"
      STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
      STATICFILES_DIRS = (os.path.join(BASE_DIR, "static"),)
      

      Reviewing the Completed settings.py File

      Your completed file will look like this:

      from django.core.management.utils import get_random_secret_key
      from pathlib import Path
      import os
      import sys
      import dj_database_url
      
      # Build paths inside the project like this: BASE_DIR / 'subdir'.
      BASE_DIR = Path(__file__).resolve().parent.parent
      
      
      # Quick-start development settings - unsuitable for production
      # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
      
      # SECURITY WARNING: keep the secret key used in production secret!
      SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", get_random_secret_key())
      
      # SECURITY WARNING: don't run with debug turned on in production!
      DEBUG = os.getenv("DEBUG", "False") == "True"
      
      ALLOWED_HOSTS = os.getenv("DJANGO_ALLOWED_HOSTS", "127.0.0.1,localhost").split(",")
      
      
      # Application definition
      
      INSTALLED_APPS = [
          'django.contrib.admin',
          'django.contrib.auth',
          'django.contrib.contenttypes',
          'django.contrib.sessions',
          'django.contrib.messages',
          'django.contrib.staticfiles',
      ]
      
      MIDDLEWARE = [
          'django.middleware.security.SecurityMiddleware',
          'django.contrib.sessions.middleware.SessionMiddleware',
          'django.middleware.common.CommonMiddleware',
          'django.middleware.csrf.CsrfViewMiddleware',
          'django.contrib.auth.middleware.AuthenticationMiddleware',
          'django.contrib.messages.middleware.MessageMiddleware',
          'django.middleware.clickjacking.XFrameOptionsMiddleware',
          "whitenoise.middleware.WhiteNoiseMiddleware",
      ]
      
      ROOT_URLCONF = 'django_app.urls'
      
      TEMPLATES = [
          {
              'BACKEND': 'django.template.backends.django.DjangoTemplates',
              'DIRS': [],
              'APP_DIRS': True,
              'OPTIONS': {
                  'context_processors': [
                      'django.template.context_processors.debug',
                      'django.template.context_processors.request',
                      'django.contrib.auth.context_processors.auth',
                      'django.contrib.messages.context_processors.messages',
                  ],
              },
          },
      ]
      
      WSGI_APPLICATION = 'django_app.wsgi.application'
      
      
      # Database
      # https://docs.djangoproject.com/en/3.1/ref/settings/#databases
      DEVELOPMENT_MODE = os.getenv("DEVELOPMENT_MODE", "False") == "True"
      
      if DEVELOPMENT_MODE is True:
          DATABASES = {
              "default": {
                  "ENGINE": "django.db.backends.sqlite3",
                  "NAME": os.path.join(BASE_DIR, "db.sqlite3"),
              }
          }
      elif len(sys.argv) > 1 and sys.argv[1] != 'collectstatic':
          if os.getenv("DATABASE_URL", None) is None:
              raise Exception("DATABASE_URL environment variable not defined")
          DATABASES = {
              "default": dj_database_url.parse(os.environ.get("DATABASE_URL")),
          }
      
      
      # Password validation
      # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
      
      AUTH_PASSWORD_VALIDATORS = [
          {
              'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
          },
          {
              'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
          },
          {
              'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
          },
          {
              'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
          },
      ]
      
      
      # Internationalization
      # https://docs.djangoproject.com/en/3.1/topics/i18n/
      
      LANGUAGE_CODE = 'en-us'
      
      TIME_ZONE = 'UTC'
      
      USE_I18N = True
      
      USE_L10N = True
      
      USE_TZ = True
      
      
      # Static files (CSS, JavaScript, Images)
      # https://docs.djangoproject.com/en/3.1/howto/static-files/
      
      STATIC_URL = "/static/"
      STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
      STATICFILES_DIRS = (os.path.join(BASE_DIR, "static"),)
      STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
      

      Save and close settings.py.

      You’ve now finished configuring your Django app to run on App Platform. Next, you’ll push the app to GitHub and deploy it to App Platform.

      Step 3 — Pushing the Site to GitHub

      DigitalOcean App Platform deploys your code from GitHub repositories, so the first thing you’ll need to do is get your site in a git repository and then push that repository to GitHub.

      First, initialize your Django project as a git repository:

      • git init

      When you work on your Django app locally, certain files get added that are unnecessary for deployment. Let’s exclude those files by adding them to Git’s ignore list. Create a new file called .gitignore:

      • nano .gitignore

      Now add the following code to the file:

      .gitignore

      db.sqlite3
      *.pyc
      

      Save and close the file.

      Now execute the following command to add files to your repository:

      • git add django_app/ manage.py requirements.txt static/

      Make your initial commit:

      • git commit -m "Initial Django App"

      Your files will commit:

      Output

      [master (root-commit) eda5d36] Initial Django App
       8 files changed, 238 insertions(+)
       create mode 100644 django_app/__init__.py
       create mode 100644 django_app/asgi.py
       create mode 100644 django_app/settings.py
       create mode 100644 django_app/urls.py
       create mode 100644 django_app/wsgi.py
       create mode 100644 manage.py
       create mode 100644 requirements.txt
       create mode 100644 static/README.md

      Open your browser and navigate to GitHub, log in with your profile, and create a new repository called django-app. Create an empty repository without a README or license file.

      Once you’ve created the repository, return to the command line to push your local files to GitHub.

      First, add GitHub as a remote repository:

      • git remote add origin https://github.com/your_username/django-app

      Next, rename the default branch to main, to match what GitHub expects:

      • git branch -M main

      Finally, push your main branch to GitHub’s main branch:

      • git push -u origin main

      Your files will transfer:

      Output

      Enumerating objects: 12, done.
      Counting objects: 100% (12/12), done.
      Delta compression using up to 8 threads
      Compressing objects: 100% (9/9), done.
      Writing objects: 100% (12/12), 3.98 KiB | 150.00 KiB/s, done.
      Total 12 (delta 1), reused 0 (delta 0)
      remote: Resolving deltas: 100% (1/1), done.
      To github.com:yourUsername/django-app.git
       * [new branch]      main -> main
      Branch 'main' set up to track remote branch 'main' from 'origin'.

      Enter your GitHub credentials when prompted to push your code.

      Your code is now on GitHub and accessible through a web browser. Now you will deploy your site to DigitalOcean’s App Platform.

      Step 4 — Deploying to DigitalOcean with App Platform

      Once the code is pushed, visit https://cloud.digitalocean.com/apps and click Launch Your App. You’ll be prompted to connect your GitHub account:

      Connect GitHub account

      Connect your account and allow DigitalOcean to access your repositories. You can choose to let DigitalOcean have access to all of your repositories or just to the ones you wish to deploy.

      Allow repository access

      Click Install and Authorize. You’ll be returned to your DigitalOcean dashboard to continue creating your app.

      Once your GitHub account is connected, select the your_account/django-app repository and click Next.

      Choose your repository

      Next, provide your app’s name, choose a region, and ensure the main branch is selected. Then ensure that Autodeploy code changes is checked. Click Next to continue.

      Choose a name, region, and branch

      DigitalOcean will detect that your project is a Python app and will automatically populate a partial run command.

      Python Application detected and partial run command populated

      Click the Edit link next to the Build and Run commands to complete the run command. Your completed run command needs to reference your project’s WSGI file. In this example, this is at django_app.wsgi, so your completed run command should be gunicorn --worker-tmp-dir /dev/shm django_app.wsgi.

      Completing the run command

      Next, you need to define the environment variables you declared in your project’s settings. App Platform has a concept of App-Wide Variables, which are environment variables that are provided by App Platform, such as APP_URL and APP_DOMAIN. The platform also maintains Component-Specific Variables, which are variables that are exported from your components. These will be useful for determining your APP_DOMAIN beforehand so you can properly set DJANGO_ALLOWED_HOSTS. You will also use these variables to copy configuration settings from your database.

      To read more about these different variables, consult the App Platform Environment Variable Documentation.

      For your Django app to function, you need to set the following environment variables like so:

      • DJANGO_ALLOWED_HOSTS -> ${APP_DOMAIN}
        • This takes the randomly generated URL that App Platform provides and supplies it to your app
      • DATABASE_URL -> ${<NAME_OF_YOUR_DATABASE>.DATABASE_URL}
        • In this case, we’ll name our database db in the next step, so this should be ${db.DATABASE_URL}
      • DEBUG -> True
        • Set this to True for now to verify your app is functioning and set to False when it’s time for this app to be in production
      • DJANGO_SECRET_KEY -> <A RANDOM SECRET KEY>
        • You can either allow your app to generate this at every launch or pick a strong password with at least 32 characters to use as this key. Using a secure password generator is a good option, or you can generate the key with Django itself, as shown after this list
        • Don’t forget to click the Encrypt check box to ensure that your credentials are encrypted for safety

      Set environment variables

      To set up your database, click the Add a Database button. You will be presented with the option of selecting a small development database or integrating with a managed database elsewhere. For this deployment, select the development database and ensure that the name of the database is db. Once you’ve verified this, click the Add Database button.

      Add a database

      Click Next, and you’ll be directed to the Finalize and Launch screen where you’ll choose your plan. Be sure to select the appropriate plan to fit your needs, whether in Basic App or Professional App and then click Launch App at the bottom. Your app will build and deploy:

      App building and deploying

      Once the build process completes, the interface will show you a healthy site. Now you need to access your app’s console through the Console tab and perform the Django first launch tasks by running the following commands:

      • python manage.py migrate - This will perform your initial database migrations.
      • python manage.py createsuperuser - This will prompt you for some information to create an administrative user.

      Perform initial Django tasks

      Once you are done with that, click on the link to your app provided by App Platform:

      Click on app link

      This link should take you to the standard initial Django page.

      Initial Django Page

      And now you have a Django app deployed to App Platform. Any changes that you make and push to GitHub will be automatically deployed.

      Step 5 — Deploying Your Static Files

      Now that you’ve deployed your app, you may notice that your static files aren’t being loaded if you have DEBUG set to False. Django doesn’t serve static files in production and instead wants you to deploy them using a web server or CDN. Luckily, App Platform can do just that. App Platform provides free static asset serving if you are running a service alongside it, as you are doing with your app. So you’re going to deploy your same Django app but as a static site this time.

      Once your app is deployed, add a static site component from the Components tab in your app.

      Add Static Site

      Select the same GitHub repository as your deployed Django service. Click Next to continue.

      Select GitHub Repo

      Next, provide your app’s name and ensure the main branch is selected. Click Next to continue.

      Set Static Site Name and Branch

      Your component will be detected as a Service, so you’ll want to change the type to Static Site. Essentially, Django gathers your static files into one directory, and App Platform serves them. Set the route to match the STATIC_URL directive in your settings file. Since the directive is /static/, set the route to /static. Finally, your static files will be collected into the Output Directory of your app; to match the STATIC_ROOT setting in settings.py, set Output Directory to staticfiles.

      Static Site Settings

      Click Next, and you’ll be directed to the Finalize and Launch screen. Static files paired with a service are served for free, so you won’t see any change in your bill. Click Launch App to deploy your static files. Now, if you have DEBUG set to False, you’ll see your static files properly displayed. Note that this tutorial’s only static asset is a README.md file. You will likely have more than this.

      Conclusion

      In this tutorial, you set up a Django project and deployed it using DigitalOcean’s App Platform. Any changes you commit and push to your repository will be re-deployed, so you can now expand your application. You can find the example code for this tutorial in the DigitalOcean Sample Images Repository.

      The example in this tutorial is a minimal Django project. Your app might have more applications and features, but the deployment process will be the same.


