
      How To Set Up a Gatsby Project with TypeScript


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      TypeScript is a superset of JavaScript that adds optional static typing at build time, which cuts down on debugging runtime errors. It has grown into a powerful alternative to JavaScript over the years. At the same time, Gatsby has emerged as a useful front-end framework for creating static websites. TypeScript’s static-typing abilities go well with a static-site generator like Gatsby, and Gatsby has built-in support for coding in TypeScript.

      In this tutorial, you’re going to use Gatsby’s built-in capabilities to configure a Gatsby project for TypeScript. After this tutorial, you will have learned how to integrate TypeScript into your Gatsby project.

      Prerequisites

      • You will need to have both Node and npm installed in order to run a development environment and handle TypeScript- or Gatsby-related packages, respectively. This tutorial was tested with Node.js version 14.13.0 and npm version 6.14.8. To install on macOS or Ubuntu 18.04, follow the steps in How to Install Node.js and Create a Local Development Environment on macOS or the Installing Using a PPA section of How To Install Node.js on Ubuntu 18.04.
      • To create a new Gatsby project, you will need the Gatsby CLI command line tool installed on your computer. To set this up, follow Step 1 in How to Set Up Your First Gatsby Site. This step will also show you how to create a new Gatsby project with the gatsby new command.
      • You will need some familiarity with GraphQL queries and using GraphiQL to query for local image data. If you’d like a refresher on the query sandbox in GraphiQL, read How to Handle Images with GraphQL and the Gatsby Image API.
      • You will need sufficient knowledge of JavaScript, especially ES6+ syntax such as destructuring and imports/exports. You can find more information on these topics in Understanding Destructuring, Rest Parameters, and Spread Syntax in JavaScript and Understanding Modules and Import and Export Statements in JavaScript.
      • Since Gatsby is a React-based framework, you will be refactoring and creating components in this tutorial. You can learn more about this in How to Create Custom Components in React.
      • Additionally, you will need TypeScript installed on your machine. To do this, refer to the official TypeScript website. If you are using an editor besides Visual Studio Code, you may need to go through a few extra steps to make sure you have TypeScript performing type-checks at build time and showing any errors. For example, if you’re using Atom, you’ll need to install the atom-typescript package to be able to achieve a true TypeScript experience. If you would like to download TypeScript only for your project, do so after the Gatsby project folder has been set up.

      Step 1 — Creating a New Gatsby Site and Removing Boilerplate

      To get started, you’re going to create your Gatsby site and make sure that you can run a server and view the site. After that, you will remove some unused boilerplate files and code. This will set your project up for edits in later steps.

      Open your computer’s console/terminal and run the following command:

      • gatsby new gatsby-typescript-tutorial

      This will take a few seconds to run as it sets up the necessary boilerplate files and folders for the Gatsby site. After it is finished, cd into the project’s directory:

      • cd gatsby-typescript-tutorial

      To make sure the site’s development environment can start properly, run the following command:

      • gatsby develop

      After a few seconds, you will receive the following message in the console:

      Output

      ...
      You can now view gatsby-starter-default in the browser.

        http://localhost:8000

      Usually, the default port is :8000, but you can change this by running gatsby develop -p another_number instead.

      Head over to your preferred browser and type http://localhost:8000 in the address bar to find the site. It will look like this:

      Gatsby Default Starter Site

      Next, you’ll remove all unnecessary files. This includes gatsby-node.js, gatsby-browser.js, and gatsby-ssr.js:

      • rm gatsby-node.js
      • rm gatsby-browser.js
      • rm gatsby-ssr.js

      Next, to finish setup, you’re going to remove some boilerplate code from your project’s index page. In your project’s root directory, head to the src directory, followed by pages and then open the index.js file.

      For this tutorial, you are only going to work with an <Image /> component, so you can delete code related to the <Link /> component, along with the h1 and p elements. Your file will then look like the following:

      gatsby-typescript-tutorial/src/pages/index.js

      import React from "react"
      
      import Layout from "../components/layout"
      import Image from "../components/image"
      import SEO from "../components/seo"
      
      const IndexPage = () => (
        <Layout>
          <SEO title="Home" />
          <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}>
            <Image />
          </div>
        </Layout>
      )
      
      export default IndexPage
      

      Save and close the file.

      Now that you’ve created your project and completed some initial setup, you are ready to install the necessary plugins.

      Step 2 — Installing Dependencies

      In order to set up support for TypeScript in Gatsby, you’ll need some additional plugins and dependencies, which you will install in this step.

      The gatsby-plugin-typescript plugin already comes with a newly created Gatsby site. Unless you want to change any of its default options, you don’t have to add this plugin to your gatsby-config.js file explicitly. This Gatsby plugin makes writing .ts and .tsx files in TypeScript possible.
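      If you ever do want to override the plugin’s defaults, a minimal sketch of an explicit configuration might look like the following; the options shown (isTSX, jsxPragma, and allExtensions) are passed through to Babel’s TypeScript preset, and the values here are examples rather than requirements:

      gatsby-typescript-tutorial/gatsby-config.js

      module.exports = {
        plugins: [
          // Only needed when changing the plugin's default options
          {
            resolve: `gatsby-plugin-typescript`,
            options: {
              isTSX: true, // defaults to false
              jsxPragma: `jsx`, // defaults to "React"
              allExtensions: true, // defaults to false
            },
          },
        ],
      }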

      Since your app can read TypeScript files, you will now change Gatsby’s JavaScript files to a TypeScript file extension. In particular, change header.js, image.js, layout.js, and seo.js in src/components and index.js in src/pages to header.tsx, image.tsx, layout.tsx, seo.tsx, and index.tsx:

      • mv src/components/header.js src/components/header.tsx
      • mv src/components/image.js src/components/image.tsx
      • mv src/components/layout.js src/components/layout.tsx
      • mv src/components/seo.js src/components/seo.tsx
      • mv src/pages/index.js src/pages/index.tsx

      You are using the mv command to rename each file to the name given as its second argument. The .tsx file extension is used because these files now contain JSX.

      There is one important caveat about the gatsby-plugin-typescript plugin, however: it doesn’t include type-checking at build time (a core function of TypeScript). If you’re using VS Code, this won’t be an issue because TypeScript is a supported language in Visual Studio Code. But if you’re using another editor, like Atom, you will need to do some extra configuration to achieve a full TypeScript development experience.

      Since Gatsby is a React-based framework, adding some additional React-related typing is also recommended. To add type-checking for types specific to React, run the following command:

      • npm install @types/react

      To add type-checking for types related to the React DOM, use this command:

      • npm install @types/react-dom

      Now that you’ve become familiar with the plugin gatsby-plugin-typescript, you are ready to configure your Gatsby site for TypeScript in the next step.

      Step 3 — Configuring TypeScript for Gatsby with the tsconfig.json File

      In this step, you will create a tsconfig.json file. A tsconfig.json file has two primary purposes: establishing the root directory of the TypeScript project (include) and overriding the TypeScript compiler’s default configurations (compilerOptions). There are a couple of ways to create this file. If you have the tsc command line tool installed with npm, you could create a new tsconfig file with tsc --init. But the file is then populated with many default options and comments.

      Instead, create a new file at the root of your directory (gatsby-typescript-tutorial/) and name it tsconfig.json.

      Next, create an object with two properties, compilerOptions and include, populated with the following code:

      gatsby-typescript-tutorial/tsconfig.json

       {
        "compilerOptions": {
          "module": "commonjs",
          "target": "es6",
          "jsx": "preserve",
          "lib": ["dom", "es2015", "es2017"],
          "strict": true,
          "noEmit": true,
          "isolatedModules": true,
          "esModuleInterop": true,
          "skipLibCheck": true,
          "noUnusedLocals": true,
          "noUnusedParameters": true,
          "removeComments": false
        },
        "include": ["./src/**/*"]
      }
      

      Note:
      This configuration is partially based on the gatsby-starter-typescript-plus starter.

      Save this file and close it when you are done.

      The include property points to an array of filenames or paths that the compiler knows to convert from TypeScript to JavaScript.

      Here is a brief explanation of each option used in compilerOptions:

      • module - Sets the module system for the project; commonjs is used by default.
      • target - Depending on what version of JavaScript you’re using, this option determines which features to downlevel and which to leave alone. This can be helpful if your project is deployed to older environments vs. newer environments.
      • jsx - Setting for how JSX is treated in .tsx files. The preserve option leaves the JSX unchanged.
      • lib - An array of specified type-definitions of different JS libraries/APIs (dom, es2015, etc.).
      • strict - When set to true, this enables TypeScript’s type-checking abilities at build-time.
      • noEmit - Since Gatsby already uses Babel to compile your code to readable JavaScript, you change this option to true to leave TypeScript out of the compilation step, so it performs type-checking only.
      • isolatedModules - By choosing Babel as your compiler/transpiler, you are opting for compilation one file at a time, which may cause potential problems at runtime. Setting this option to true allows TypeScript to warn you if you are about to run into this problem.
      • esModuleInterop - Enabling this option allows your use of CommonJS (the module system you set) and ES modules (importing and exporting custom variables and functions) to work better together, and allows namespace objects for all imports.
      • noUnusedLocals and noUnusedParameters - Enabling these two options makes TypeScript report errors if you create an unused local variable or parameter.
      • removeComments - Setting this to false (or not setting it at all) allows there to be comments present after any TypeScript files have been converted to JavaScript.

      You can learn more about these different options and many more by visiting TypeScript’s reference guide for tsconfig.

      Now that TypeScript is configured for Gatsby, you are going to complete your TypeScript integration by refactoring some of your boilerplate files in src/components and src/pages.

      Step 4 — Refactoring seo.tsx for TypeScript

      In this step, you’re going to add some TypeScript syntax to the seo.tsx file. This step goes in depth to explain some concepts of TypeScript; the next step will show how to refactor other boilerplate code in a more abbreviated manner.

      One feature of TypeScript is its flexibility with its syntax. If you don’t want to add typing to your variables explicitly, you don’t have to. Gatsby believes that adopting TypeScript in your workflow “can and should be incremental”, and so this step will concentrate on three core TypeScript concepts:

      • basic types
      • defining types and interfaces
      • working with build-time errors

      Basic Types in TypeScript

      TypeScript supports basic datatypes including boolean, number, and string. The major syntactical difference with TypeScript, compared to JavaScript, is that variables can now be defined with an associated type.

      For example, the following code block shows how to assign the basic types:

      let num: number;
      num = 0
      
      let str: string;
      str = "TypeScript & Gatsby"
      
      let typeScriptIsAwesome: boolean;
      typeScriptIsAwesome = true;
      

      In this code, num must be a number, str must be a string, and typeScriptIsAwesome must be a boolean.
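      If a value doesn’t match its declared type, the TypeScript compiler reports an error at build time. For example:

      let num: number;
      num = "zero" // Error: Type 'string' is not assignable to type 'number'.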

      Now you will examine the defaultProps and propTypes declarations in the seo.tsx file, found in the src/components directory. Open the file in your editor and look for the following highlighted lines:

      gatsby-typescript-tutorial/src/components/seo.tsx

      ...
      import React from "react"
      import PropTypes from "prop-types"
      import { Helmet } from "react-helmet"
      import { useStaticQuery, graphql } from "gatsby"
      
      ...
            ].concat(meta)}
          />
        )
      }
      
      
      SEO.defaultProps = {
        lang: `en`,
        meta: [],
        description: ``,
      }
      
      SEO.propTypes = {
        description: PropTypes.string,
        lang: PropTypes.string,
        meta: PropTypes.arrayOf(PropTypes.object),
        title: PropTypes.string.isRequired,
      }
      
      export default SEO
      

      By default, a Gatsby site’s SEO component comes with a weak typing system using PropTypes. The defaultProps and propTypes are explicitly declared, using the imported PropTypes class. For example, the meta prop of the propTypes object is declared as an array of objects via PropTypes.arrayOf(PropTypes.object). Some props are marked as required (isRequired) while others are not, implying they are optional.

      Since you are using TypeScript, you will be replacing this typing system. Go ahead and delete defaultProps and propTypes (along with the import statement for the PropTypes at the top of the file). Your file will look like the following:

      gatsby-typescript-tutorial/src/components/seo.tsx

       ...
      import React from "react"
      import { Helmet } from "react-helmet"
      import { useStaticQuery, graphql } from "gatsby"
      
      
      ...
            ].concat(meta)}
          />
        )
      }
      
      export default SEO
      

      Now that you’ve removed the default typing, you’ll write out the type aliases with TypeScript.

      Defining TypeScript Interfaces

      In TypeScript, an interface is used to define the “shape” of a custom type. These are used to represent the value type of complex pieces of data like React components and function parameters. In the seo.tsx file, you’re going to build an interface to replace the defaultProps and propTypes definitions that were deleted.

      Add the following lines:

      gatsby-typescript-tutorial/src/components/seo.tsx

       ...
      import React from "react"
      import { Helmet } from "react-helmet"
      import { useStaticQuery, graphql } from "gatsby"
      
      interface SEOProps {
        description?: string,
        lang?: string,
        meta?: Array<{name: string, content: string}>,
        title: string
      }
      
      ...
      
      
      

      The interface SEOProps accomplishes what SEO.propTypes did by setting the data type associated with each of the properties, and by marking the optional ones with the ? character.

      Typing a Function

      Just like in JavaScript, functions play an important role in TypeScript applications. You can even type functions by specifying the datatype of the arguments passed into them. In the seo.tsx file, you will now work on the defined SEO function component. Under where the interface for SEOProps was defined, you’re going to explicitly declare the type of the SEO component’s arguments by annotating its destructured parameter with SEOProps:

      Add the following code:

      gatsby-typescript-tutorial/src/components/seo.tsx

      ...
      interface SEOProps {
        description?: string,
        lang?: string,
        meta?: Array<{name: string, content: string}>,
        title: string
      }
      
      function SEO({ description='', lang='en', meta=[], title }: SEOProps) {
        ...
      }
      

      Here you set defaults for the SEO function arguments so that they adhere to the interface, and added the interface with : SEOProps. Remember that you at least have to include the title in the list of arguments passed to the SEO component because it was defined as a required property in the SEOProps interface you defined earlier.
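      TypeScript will now enforce this at every place the component is used. Here is a small sketch; the exact error wording can vary slightly between TypeScript versions:

      // Compiles: title is provided, and the optional props fall back to their defaults.
      const valid = <SEO title="Home" />

      // Build-time error: Property 'title' is missing in type '{}'
      // but required in type 'SEOProps'.
      // const invalid = <SEO />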

      Lastly, you can revise the metaDescription and defaultTitle constant declarations by setting their type, which is string in this case:

      gatsby-typescript-tutorial/src/components/seo.tsx

       ...
      function SEO({ description='', lang='en', meta=[], title }: SEOProps) {
        const { site } = useStaticQuery(
          graphql`
            query {
              site {
                siteMetadata {
                  title
                  description
                  author
                }
              }
            }
          `
        )
      
        const metaDescription: string = description || site.siteMetadata.description
        const defaultTitle: string = site.siteMetadata?.title
      ...
      

      Another type in TypeScript is the any type. For situations where you’re dealing with a variable whose type is unclear or difficult to define, use any as a last resort to avoid any build-time errors.

      An example of using the any type is when dealing with data fetched from a third-party, like an API request or a GraphQL query. In the seo.tsx file, where the destructured site property is defined with a GraphQL static query, set its type to any:

      gatsby-typescript-tutorial/src/components/seo.tsx

      ...
      interface SEOProps {
        description?: string,
        lang?: string,
        meta?: Array<{name: string, content: string}>,
        title: string
      }
      
      function SEO({ description='', lang='en', meta=[], title }: SEOProps) {
        const { site }: any = useStaticQuery(
          graphql`
            query {
              site {
                siteMetadata {
                  title
                  description
                  author
                }
              }
            }
          `
        )
        ...
      }
      

      Save and exit the file.

      It’s important to always keep the defined values consistent with their type. Otherwise, you will see build-time errors appear via the TypeScript compiler.

      Build-Time Errors

      It will be helpful to become accustomed to the errors TypeScript will catch and report at build time. The idea is that TypeScript catches these errors, mostly type-related, at build time, which cuts down on the amount of debugging at runtime in the long run.

      One example of a build-time error occurring is when you declare a variable of one type but assign it a value of another type. If you were to change the value of one of the keyword arguments passed to the SEO component to a value of a different type, the TypeScript compiler would detect the inconsistency and report the error. The following is an image of what this looks like in VS Code:

      A build-time error in VSCode when the description variable is set to a number.

      The error says Type 'number' is not assignable to type 'string'. This is because, when you set up your interface, you said the description property would be of type string. The value 0 is of type number. If you change the value of description back into a “string”, the error message will go away.
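      You can reproduce this outside the editor as well. For instance, passing a number for description where the component is rendered, such as in index.tsx, would fail the same way (a sketch):

      <SEO title="Home" description={0} />
      // Error: Type 'number' is not assignable to type 'string'.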

      Step 5 — Refactoring the Rest of the Boilerplate

      Lastly, you will refactor the remaining boilerplate files with TypeScript: layout.tsx, image.tsx, and header.tsx. Like seo.tsx, these component files are located in the src/components directory.

      Open src/components/layout.tsx. Towards the bottom is the Layout.propTypes definition. Delete the following lines:

      gatsby-typescript-tutorial/src/components/layout.tsx

       import React from "react"
      import PropTypes from "prop-types"
      import { useStaticQuery, graphql } from "gatsby"
      ...
      
      Layout.propTypes = {
        children: PropTypes.node.isRequired,
      }
      
      export default Layout
      

      The children prop is declared with type node per the PropTypes class, and it’s required. Since the children of the layout could be anything from simple text to React child components, use ReactNode as the associated type by importing it near the top and adding it to an interface.

      Add the following lines:

      gatsby-typescript-tutorial/src/components/layout.tsx

      ...
      import React, { ReactNode } from "react"
      import { useStaticQuery, graphql } from "gatsby"
      
      import Header from "./header"
      import "./layout.css"
      
      interface LayoutProps {
        children: ReactNode
      }
      
      const Layout = ({ children }: LayoutProps) => {
        ...
      

      Next, add a type to the data variable that stores a GraphQL query that fetches site title data. Since this query object is coming from a third-party entity like GraphQL, give data an any type. Lastly, add the string type to the siteTitle variable that works with that data:

      gatsby-typescript-tutorial/src/components/layout.tsx

      ...
      const Layout = ({ children }: LayoutProps) => {
        const data: any = useStaticQuery(graphql`
          query MyQuery {
            site {
              siteMetadata {
                title
              }
            }
          }
        `)

        const siteTitle: string = data.site.siteMetadata?.title || `Title`

        return (
          <>
            <Header siteTitle={siteTitle} />
            <div
      ...
      

      Save and close the file.

      Next, open the src/components/image.tsx file.

      Here you are dealing with a similar situation as layout.tsx. There is a data variable that stores a GraphQL query that could have an any type. The image fluid data that is passed into the fluid attribute of the <Img /> component could be separated from the return statement into its own variable. It’s also a complex variable like data, so give this an any type as well:

      gatsby-typescript-tutorial/src/components/image.tsx

      ...
      const Image = () => {
        const data: any = useStaticQuery(graphql`
          query {
            placeholderImage: file(relativePath: { eq: "gatsby-astronaut.png" }) {
              childImageSharp {
                fluid(maxWidth: 300) {
                  ...GatsbyImageSharpFluid
                }
              }
            }
          }
        `)
      
        if (!data?.placeholderImage?.childImageSharp?.fluid) {
          return <div>Picture not found</div>
        }
      
        const imageFluid: any = data.placeholderImage.childImageSharp.fluid
      
        return <Img fluid={imageFluid} />
      }
      
      export default Image
      

      Save and close the file.

      Now open the src/components/header.tsx file. This file also comes with predefined prop types, using the PropTypes class. As you did in seo.tsx and layout.tsx, replace Header.defaultProps and Header.propTypes with an interface using the same prop names:

      gatsby-typescript-tutorial/src/components/header.tsx

      import { Link } from "gatsby"
      import React from "react"
      
      interface HeaderProps {
        siteTitle: string
      }
      
      const Header = ({ siteTitle }: HeaderProps) => (
        <header
          style={{
            background: `rebeccapurple`,
            marginBottom: `1.45rem`,
          }}
        >
          <div
            style={{
              margin: `0 auto`,
              maxWidth: 960,
              padding: `1.45rem 1.0875rem`,
            }}
          >
            <h1 style={{ margin: 0 }}>
              <Link
                to="/"
                style={{
                  color: `white`,
                  textDecoration: `none`,
                }}
              >
                {siteTitle}
              </Link>
            </h1>
          </div>
        </header>
      )
      
      export default Header
      

      Save and close the file.

      With your files refactored for TypeScript, you can now restart the server to make sure everything is working. Run the following command:

      • gatsby develop

      When you navigate to localhost:8000, your browser will render the following:

      Gatsby Default Development page

      Conclusion

      TypeScript’s static-typing capabilities go a long way in keeping debugging at a minimum. It’s also a great language for Gatsby sites, since it’s supported by default. Gatsby itself is a useful front-end tool for creating static sites, such as landing pages.

      You now have two popular tools at your disposal. To learn more about TypeScript and all you can do with it, head over to the official TypeScript handbook.




      How To Deploy Multiple Environments in Your Terraform Project Without Duplicating Code


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Terraform offers advanced features that become increasingly useful as your project grows in size and complexity. It’s possible to alleviate the cost of maintaining complex infrastructure definitions for multiple environments by structuring your code to minimize repetitions and introducing tool-assisted workflows for easier testing and deployment.

      Terraform associates a state with a backend, which determines where and how state is stored and retrieved. Every state has only one backend and is tied to an infrastructure configuration. Certain backends, such as local or s3, may contain multiple states. In that case, each pairing of a state with the infrastructure configuration in the backend describes a workspace. Workspaces allow you to deploy multiple distinct instances of the same infrastructure configuration without storing them in separate backends.
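      For example, here is a minimal sketch of an s3 backend that can hold several workspace states; the bucket name and key are hypothetical placeholders:

      terraform {
        backend "s3" {
          bucket = "your-terraform-state-bucket" # hypothetical bucket name
          key    = "project/terraform.tfstate"
          region = "us-east-1"
        }
      }

      With a backend like this, the state of each non-default workspace is stored under its own prefix in the same bucket.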

      In this tutorial, you’ll first deploy multiple infrastructure instances using different workspaces. You’ll then deploy a stateful resource, which, in this tutorial, will be a DigitalOcean Volume. Finally, you’ll reference pre-made modules from the Terraform Registry, which you can use to supplement your own.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions for this in the How to Generate a Personal Access Token tutorial.
      • Terraform installed on your local machine and a project set up with the DO provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-advanced, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Deploying Multiple Infrastructure Instances Using Workspaces

      Multiple workspaces are useful when you want to deploy or test a modified version of your main infrastructure without creating a separate project and setting up authentication keys again. Once you have developed and tested a feature using the separate state, you can incorporate the new code into the main workspace and possibly delete the additional state. When you init a Terraform project, regardless of backend, Terraform creates a workspace called default. It is always present and you can never delete it.

      However, multiple workspaces are not a suitable solution for creating multiple environments, such as for staging and production. This is because workspaces only track the state; they do not store the code or its modifications.

      Since workspaces do not track the actual code, you should manage the code separation between multiple workspaces at the version control (VCS) level by matching them to their infrastructure variants. How you achieve this depends on the VCS tool itself; for example, in Git, branches would be a fitting abstraction. To make it easier to manage the code for multiple environments, you can break them up into reusable modules, so that you avoid repeating similar code for each environment.
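      As a sketch of that idea, each environment’s configuration could call the same reusable module with different inputs instead of repeating resource definitions; the local module path and the droplet_size variable here are hypothetical:

      module "web" {
        source = "./modules/droplet" # hypothetical reusable module
        name   = "web-${terraform.workspace}"
        size   = var.droplet_size # for example, smaller in staging than in production
      }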

      Deploying Resources in Workspaces

      You’ll now create a project that deploys a Droplet, which you’ll apply from multiple workspaces.

      You’ll store the Droplet definition in a file called droplets.tf.

      Assuming you’re in the terraform-advanced directory, create and open it for editing by running:

      • nano droplets.tf

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      This definition will create a Droplet running Ubuntu 18.04 with one CPU core and 1 GB RAM in the fra1 region. Its name will contain the name of the current workspace it is deployed from. When you’re done, save and close the file.

      Apply the project for Terraform to run its actions with:

      • terraform apply -var "do_token=${DO_PAT}"

      Your output will be similar to the following:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-default"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.
      ...

      Enter yes when prompted to deploy the Droplet in the default workspace.

      The name of the Droplet will be web-default, because the workspace you start with is called default. You can list the workspaces to confirm that it’s the only one available:

      • terraform workspace list

      You’ll receive the following output:

      Output

      * default

      The asterisk (*) means that you currently have that workspace selected.

      Create and switch to a new workspace called testing, which you’ll use to deploy a different Droplet, by running workspace new:

      • terraform workspace new testing

      You’ll have output similar to:

      Output

      Created and switched to workspace "testing"!

      You're now on a new, empty workspace. Workspaces isolate their state,
      so if you run "terraform plan" Terraform will not see any existing state
      for this configuration.

      Plan the deployment of the Droplet again by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to the previous run:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-testing"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.
      ...

      Notice that Terraform plans to deploy a Droplet called web-testing, which it has named differently from web-default. This is because the default and testing workspaces have separate states and have no knowledge of each other’s resources—even though they stem from the same code.

      To confirm that you’re in the testing workspace, output the current one you’re in with workspace show:

      • terraform workspace show

      The output will be the name of the current workspace:

      Output

      testing

      To delete a workspace, you first need to destroy all its deployed resources. Then, if it’s active, you need to switch to another one using workspace select. Since the testing workspace here is empty, you can switch to default right away:

      • terraform workspace select default

      You’ll receive output of Terraform confirming the switch:

      Output

      Switched to workspace "default".

      You can then delete it by running workspace delete:

      • terraform workspace delete testing

      Terraform will then perform the deletion:

      Output

      Deleted workspace "testing"!

      You can destroy the Droplet you’ve deployed in the default workspace by running:

      • terraform destroy -var "do_token=${DO_PAT}"

      Enter yes when prompted to finish the process.

      In this section, you’ve worked in multiple Terraform workspaces. In the next section, you’ll deploy a stateful resource.

      Deploying Stateful Resources

      Stateless resources do not store data, so you can create and replace them quickly, because they are not unique. Stateful resources, on the other hand, contain data that is unique or not simply re-creatable; therefore, they require persistent data storage.

      Since you may end up destroying such resources, or because multiple resources may require their data, it’s best to store that data in a separate entity, such as DigitalOcean Volumes.

      Volumes are objects that you can attach to Droplets (servers), but are separate from them, and provide additional storage space. In this step, you’ll define the Volume and connect it to a Droplet in droplets.tf.

      Open it for editing:

      • nano droplets.tf

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      resource "digitalocean_volume" "volume" {
        region                  = "fra1"
        name                    = "new-volume"
        size                    = 10
        initial_filesystem_type = "ext4"
        description             = "New Volume for Droplet"
      }
      
      resource "digitalocean_volume_attachment" "volume_attachment" {
        droplet_id = digitalocean_droplet.web.id
        volume_id  = digitalocean_volume.volume.id
      }
      

      Here you define two new resources, the Volume itself and a Volume attachment. The Volume will be 10GB, formatted as ext4, called new-volume, and located in the same region as the Droplet. To connect the Volume to the Droplet, since they are separate entities, you define a Volume attachment object. volume_attachment takes the Droplet and Volume IDs and instructs the DigitalOcean cloud to make the Volume available to the Droplet as a disk device.

      When you’re done, save and close the file.

      Plan this configuration by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The actions that Terraform will plan will be the following:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-default"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

        # digitalocean_volume.volume will be created
        + resource "digitalocean_volume" "volume" {
            + description             = "New Volume for Droplet"
            + droplet_ids             = (known after apply)
            + filesystem_label        = (known after apply)
            + filesystem_type         = (known after apply)
            + id                      = (known after apply)
            + initial_filesystem_type = "ext4"
            + name                    = "new-volume"
            + region                  = "fra1"
            + size                    = 10
            + urn                     = (known after apply)
          }

        # digitalocean_volume_attachment.volume_attachment will be created
        + resource "digitalocean_volume_attachment" "volume_attachment" {
            + droplet_id = (known after apply)
            + id         = (known after apply)
            + volume_id  = (known after apply)
          }

      Plan: 3 to add, 0 to change, 0 to destroy.
      ...

      The output details that Terraform would create a Droplet, a Volume, and a Volume attachment, which connects the Volume to the Droplet.

      You’ve now defined and connected a Volume (a stateful resource) to a Droplet. In the next section, you’ll review public, pre-made Terraform modules that you can incorporate in your project.

      Referencing Pre-made Modules

      Aside from creating your own custom modules for your projects, you can also use pre-made modules and providers from other developers, which are publicly available at Terraform Registry.

      In the modules section you can search the database of available modules and sort by provider in order to find the module with the functionality you need. Once you’ve found one, you can read its description, which lists the inputs and outputs the module provides, as well as its external module and provider dependencies.

      Terraform Registry - SSH key Module

      You’ll now add the DigitalOcean SSH key module to your project. You’ll store the code separately from existing definitions in a file called ssh-key.tf. Create and open it for editing by running:

      • nano ssh-key.tf

      Add the following lines:

      ssh-key.tf

      module "ssh-key" {
        source         = "clouddrove/ssh-key/digitalocean"
        key_path       = "~/.ssh/id_rsa.pub"
        key_name       = "new-ssh-key"
        enable_ssh_key = true
      }
      

      This code defines an instance of the clouddrove/ssh-key/digitalocean module from the registry and sets some of the parameters it offers. It adds a public SSH key to your account by reading it from the ~/.ssh/id_rsa.pub file.

      When you’re done, save and close the file.

      Before you plan this code, you must download the referenced module by running:

      • terraform init

      You’ll receive output similar to the following:

      Output

      Initializing modules...
      Downloading clouddrove/ssh-key/digitalocean 0.13.0 for ssh-key...
      - ssh-key in .terraform/modules/ssh-key

      Initializing the backend...

      Initializing provider plugins...
      - Using previously-installed digitalocean/digitalocean v1.22.2

      Terraform has been successfully initialized!
      ...

      You can now plan the code for the changes:

      • terraform plan -var "do_token=${DO_PAT}"

      You’ll receive output similar to this:

      Output

      Refreshing Terraform state in-memory prior to plan...
      The refreshed state will be used to calculate this plan, but will not be
      persisted to local or remote state storage.

      ------------------------------------------------------------------------

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

      ...

        # module.ssh-key.digitalocean_ssh_key.default[0] will be created
        + resource "digitalocean_ssh_key" "default" {
            + fingerprint = (known after apply)
            + id          = (known after apply)
            + name        = "devops"
            + public_key  = "ssh-rsa ... demo@clouddrove"
          }

      Plan: 4 to add, 0 to change, 0 to destroy.
      ...

      The output shows that you would create the SSH key resource, which means that you downloaded and invoked the module from your code.
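      If you want to use values the module computes, you can reference its outputs from your own configuration. The output name below is hypothetical; check the module’s registry page for the names it actually exposes:

      output "ssh_key_fingerprint" {
        # "fingerprint" is a hypothetical output name on the module instance
        value = module.ssh-key.fingerprint
      }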

      Conclusion

      Bigger projects can make use of some advanced features Terraform offers to help reduce complexity and make maintenance easier. Workspaces allow you to test new additions to your code without touching the stable main deployments. You can also couple workspaces with a version control system to track code changes. Using pre-made modules can also shorten development time, but may incur additional expenses or time in the future if the module becomes obsolete.

      For further resources on using Terraform, check out our How To Manage Infrastructure With Terraform series.




      How To Harden the Security of Your Production Django Project


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Developing a Django application can be a quick and clean experience, because its approach is flexible and scalable. Django also offers a variety of security-oriented settings that can help you seamlessly prepare your project for production. However, when it comes to production deployment, there are several ways to further secure your project. Restructuring your project by breaking up your settings will allow you to easily set up different configurations based on the environment. Leveraging dotenv for hiding environment variables or confidential settings will ensure you don’t release any details about your project that may compromise it.

      While implementing these different strategies and features might seem time-consuming at first, developing a practical workflow will allow you to deploy releases of your project without compromising on security, or your productivity.

      In this tutorial, you will leverage a security-oriented workflow for your Django development by implementing and configuring environment-based settings, dotenv, and Django’s built-in security settings. These features all complement each other and will result in a version of your Django project that is ready for different approaches you may take to deployment.

      Prerequisites

      Before you begin this guide you’ll need the following:

      Note: If you’re using an existing Django project, you may have different requirements. This tutorial suggests a particular project structure, however, you can also use each of the sections of this tutorial individually as needed.

      Step 1 — Restructuring Django’s Settings

      In this first step, you’ll start by rearranging your settings.py file into environment-specific configurations. This is a good practice when you need to move a project between different environments, for example, development and production. This arrangement will mean less reconfiguration for different environments; instead, you’ll use an environment variable to switch between configurations, which we’ll discuss later in the tutorial.

      Create a new directory called settings in your project’s sub-directory:

      • mkdir testsite/testsite/settings

      (As per the prerequisites we’re using testsite, but you can substitute your project’s name in here.)

      This directory will replace your current settings.py configuration file; all of your environment-based settings will be in separate files contained in this folder.

      In your new settings folder, create three Python files:

      • cd testsite/testsite/settings
      • touch base.py development.py production.py

      The development.py file will contain settings you’ll normally use during development. And production.py will contain settings for use on a production server. You should keep these separate because the production configuration will use settings that will not work in a development environment; for example, forcing the use of HTTPS, adding headers, and using a production database.

      The base.py settings file will contain settings that development.py and production.py will inherit from. This is to reduce redundancy and to help keep your code cleaner. These Python files will be replacing settings.py, so you’ll now remove settings.py to avoid confusing Django.

      While still in your settings directory, rename settings.py to base.py with the following command:

      • mv ../settings.py base.py

      You’ve just completed the outline of your new environment-based settings directory. Your project won’t understand your new configuration yet, so next, you’ll fix this.

      Step 2 — Using python-dotenv

      Currently Django will not recognize your new settings directory or its internal files. So, before you continue working with your environment-based settings, you need to make Django work with python-dotenv. This is a dependency that loads environment variables from a .env file. This means that Django will look inside a .env file in your project’s root directory to determine which settings configuration it will use.

      Go to your project’s root directory:

      Install python-dotenv:

      • pip install python-dotenv

      Now you need to configure Django to use dotenv. You’ll edit two files to do this: manage.py, for development, and wsgi.py, for production.

      Let’s start by editing manage.py:

      • nano manage.py

      Add the following code:

      testsite/manage.py

      import os
      import sys
      import dotenv

      def main():
          os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'testsite.settings.development')

          if os.getenv('DJANGO_SETTINGS_MODULE'):
              os.environ['DJANGO_SETTINGS_MODULE'] = os.getenv('DJANGO_SETTINGS_MODULE')

          try:
              from django.core.management import execute_from_command_line
          except ImportError as exc:
              raise ImportError(
                  "Couldn't import Django. Are you sure it's installed and "
                  "available on your PYTHONPATH environment variable? Did you "
                  "forget to activate a virtual environment?"
              ) from exc
          execute_from_command_line(sys.argv)


      if __name__ == '__main__':
          # Load .env before dispatching the command so DJANGO_SETTINGS_MODULE
          # reflects the value set in that file
          dotenv.load_dotenv(
              os.path.join(os.path.dirname(__file__), '.env')
          )
          main()
      

      Save and close manage.py and then open wsgi.py for editing:

      • nano testsite/wsgi.py

      Add the following lines:

      testsite/testsite/wsgi.py

      
      import os
      import dotenv
      
      from django.core.wsgi import get_wsgi_application
      
      dotenv.load_dotenv(
          os.path.join(os.path.dirname(os.path.dirname(__file__)), '.env')
      )
      
      os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'testsite.settings.development')
      
      if os.getenv('DJANGO_SETTINGS_MODULE'):
          os.environ['DJANGO_SETTINGS_MODULE'] = os.getenv('DJANGO_SETTINGS_MODULE')
      
      application = get_wsgi_application()
      

      The code you’ve added to both of these files does two things. First, whenever Django runs—manage.py for running development, wsgi.py for production—you’re telling it to look for your .env file. If the file exists, you instruct Django to use the settings file that .env recommends, otherwise you use the development configuration by default.

      Save and close the file.

      Finally, let’s create a .env in your project’s root directory:

      • nano .env

      Now add in the following line to set the environment to development:

      testsite/.env

      DJANGO_SETTINGS_MODULE="testsite.settings.development"
      

      Note: Add .env to your .gitignore file so it is never included in your commits; you’ll use this file to contain data such as passwords and API keys that you do not want visible publicly. Every environment your project is running on will have its own .env with settings for that specific environment.

      It is recommended to create a .env.example to include in your project so you can easily create a new .env wherever you need one.

      So, by default Django will use testsite.settings.development, but if you change DJANGO_SETTINGS_MODULE to testsite.settings.production for example, it will start using your production configuration. Next, you’ll populate your development.py and production.py settings configurations.
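      For example, the .env on a production server might contain only the following line (a sketch; later steps in this tutorial add more variables to the file):

      testsite/.env

      DJANGO_SETTINGS_MODULE="testsite.settings.production"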

      Step 3 — Creating Development and Production Settings

      Next you’ll open your base.py and add the configuration you want to modify for each environment in the separate development.py and production.py files. The production.py will need to use your production database credentials, so ensure you have those available.

      Note: It is up to you to determine which settings you need to configure, based on environment. This tutorial will only cover a common example for production and development settings (that is, security settings and separate database configurations).

      In this tutorial we’re using the Django project from the prerequisite tutorial as the example project. We’ll move settings from base.py to development.py. Begin by opening development.py:

      • nano testsite/settings/development.py

      First, you will import from base.py; development.py inherits all of the settings defined there. Then you’ll transfer the settings you want to modify for the development environment:

      testsite/testsite/settings/development.py

      from .base import *
      
      DEBUG = True
      
      DATABASES = {
          'default': {
              'ENGINE': 'django.db.backends.sqlite3',
              'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
          }
      }
      

      In this case, the settings specific to development are: DEBUG, you need this True in development, but not in production; and DATABASES, a local database instead of a production database. You’re using an SQLite database here for development.

      Note: For security purposes, Django’s DEBUG output will never display any settings that may contain the strings: API, KEY, PASS, SECRET, SIGNATURE, or TOKEN.

      This is to ensure secrets will not be revealed, if you accidentally deploy a project to production with DEBUG still enabled.

      With that being said, never deploy a project publicly with DEBUG enabled. It will only ever put the security of your project at risk.

      Next, let’s add to production.py:

      • nano testsite/settings/production.py

      Production will be similar to development.py, but with a different database configuration and DEBUG set to False:

      testsite/testsite/settings/production.py

      from .base import *
      
      DEBUG = False
      
      ALLOWED_HOSTS = []
      
      DATABASES = {
          'default': {
              'ENGINE': os.environ.get('SQL_ENGINE', 'django.db.backends.sqlite3'),
              'NAME': os.environ.get('SQL_DATABASE', os.path.join(BASE_DIR, 'db.sqlite3')),
              'USER': os.environ.get('SQL_USER', 'user'),
              'PASSWORD': os.environ.get('SQL_PASSWORD', 'password'),
              'HOST': os.environ.get('SQL_HOST', 'localhost'),
              'PORT': os.environ.get('SQL_PORT', ''),
          }
      }
      

      For the example database configuration given, you can use dotenv to configure each of the given credentials, with defaults included. Assuming you’ve already set up a database for the production version of your project, please use your configuration instead of the example provided.

      You have now configured your project to use different settings based on DJANGO_SETTINGS_MODULE in .env. Given the example settings you’ve used, when you set your project to use production settings, DEBUG becomes False, ALLOWED_HOSTS is defined, and you start using a different database that you’ve (already) configured on your server.

      Step 4 — Working with Django’s Security Settings

      Django includes security settings ready for you to add to your project. In this step, you’ll add security settings to your project that are considered essential for any production project. These settings are intended for use when your project is available to the public. It’s not recommended to use any of these settings in your development environment; hence, in this step you’re limiting these settings to the production.py configuration.

      For the most part these settings are going to enforce the use of HTTPS for various web features, such as session cookies, CSRF cookies, upgrading HTTP to HTTPS, and so on. Therefore, if you haven’t already set up your server with a domain pointing to it, hold off on this section for now. If you need to set up your server ready for deployment, check out the Conclusion for suggested articles on this.

      First open production.py:

      • nano testsite/settings/production.py

      In your file, add the settings that work for your project, according to the explanations following the code:

      testsite/testsite/settings/production.py

      from .base import *
      
      DEBUG = False
      
      ALLOWED_HOSTS = ['your_domain', 'www.your_domain']
      
      DATABASES = {
          'default': {
              'ENGINE': os.environ.get('SQL_ENGINE', 'django.db.backends.sqlite3'),
              'NAME': os.environ.get('SQL_DATABASE', os.path.join(BASE_DIR, 'db.sqlite3')),
              'USER': os.environ.get('SQL_USER', 'user'),
              'PASSWORD': os.environ.get('SQL_PASSWORD', 'password'),
              'HOST': os.environ.get('SQL_HOST', 'localhost'),
              'PORT': os.environ.get('SQL_PORT', ''),
          }
      }
      
      SECURE_SSL_REDIRECT = True
      
      SESSION_COOKIE_SECURE = True
      
      CSRF_COOKIE_SECURE = True
      
      SECURE_BROWSER_XSS_FILTER = True
      
      • SECURE_SSL_REDIRECT redirects all HTTP requests to HTTPS (unless exempt). This means your project will always try to use an encrypted connection. You will need to have SSL configured on your server for this to work. Note that if you have Nginx or Apache configured to do this already, this setting will be redundant.
      • SESSION_COOKIE_SECURE tells the browser that cookies can only be handled over HTTPS. This means cookies your project produces for activities, such as logins, will only work over an encrypted connection.
      • CSRF_COOKIE_SECURE is the same as SESSION_COOKIE_SECURE but applies to your CSRF token. CSRF tokens protect against Cross-Site Request Forgery. Django CSRF protection does this by ensuring any forms submitted (for logins, signups, and so on) to your project were created by your project and not a third party.
      • SECURE_BROWSER_XSS_FILTER sets the X-XSS-Protection: 1; mode=block header on all responses that do not already have it. This ensures third parties cannot inject scripts into your project. For example, if a user stores a script in your database using a public field, when that script is retrieved and displayed to other users it will not run.

      If you would like to read more about the different security settings available within Django, check out their documentation.

      Warning: Django’s documentation states you shouldn’t rely completely on SECURE_BROWSER_XSS_FILTER. Never forget to validate and sanitize input.

      Additional Settings

      The following settings are for supporting HTTP Strict Transport Security (HSTS)—this means that your entire site must use SSL at all times. A sketch of these settings in code follows the list.

      • SECURE_HSTS_SECONDS is the amount of time in seconds HSTS is set for. If you set this for an hour (in seconds), every time you visit a web page on your website, it tells your browser that for the next hour HTTPS is the only way you can visit the site. If during that hour you visit an insecure part of your website, the browser will show an error and the insecure page will be inaccessible.
      • SECURE_HSTS_PRELOAD only works if SECURE_HSTS_SECONDS is set. This header instructs the browser to preload your site. This means that your website will be added to a hardcoded list, which is implemented in popular browsers, like Firefox and Chrome. This requires that your website is always encrypted. It is important to be careful with this header. If at anytime you decide to not use encryption for your project, it can take weeks to be manually removed from the HSTS Preload list.
      • SECURE_HSTS_INCLUDE_SUBDOMAINS applies the HSTS header to all subdomains. Enabling this header, means that both your_domain and unsecure.your_domain will require encryption even if unsecure.your_domain is not related to this Django project.
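      Here is a sketch of how these settings could look in production.py, using a deliberately short one-hour max-age while you confirm that HTTPS works everywhere:

      testsite/testsite/settings/production.py

      . . .
      SECURE_HSTS_SECONDS = 3600  # start small; raise only once HTTPS works everywhere
      SECURE_HSTS_INCLUDE_SUBDOMAINS = True
      SECURE_HSTS_PRELOAD = False  # switch on only when committing to long-term HTTPS
      . . .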

      Warning: Incorrectly configuring these additional settings can break your site for a significant amount of time.

      Please read the Django documentation on HSTS before implementing these settings.

      It is necessary to consider how these settings will work with your own Django project; overall the settings discussed here are a good foundation for most Django projects. Next you’ll review some further usage of python-dotenv.

      Step 5 — Using python-dotenv for Secrets

      The final part of this tutorial will help you leverage python-dotenv. This will allow you to hide certain information such as your project’s SECRET_KEY or the admin’s login URL. This is a great idea if you intend to publish your code on platforms like GitHub or GitLab since these secrets won’t be published. Instead, whenever you initially set up your project on a local environment or a server, you can create a new .env file and define those secret variables.

      You must hide your SECRET_KEY so you’ll start working on that in this section.

      Open your .env file:

      • nano .env

      And add the following line:

      testsite/.env

      DJANGO_SETTINGS_MODULE="django_hardening.settings.development"
      SECRET_KEY="your_secret_key"
      

      And in your base.py:

      • nano testsite/settings/base.py

      Let’s update the SECRET_KEY variable like so:

      testsite/testsite/settings/base.py

      . . .
      SECRET_KEY = os.getenv('SECRET_KEY')
      . . .
      

      Your project will now use the SECRET_KEY located in .env.

      Lastly, you’ll hide your admin URL by adding a long string of random characters to it. This will ensure bots can’t brute force the login fields and strangers can’t try guessing the login.

      Open .env again:

      • nano .env

      And add a SECRET_ADMIN_URL variable:

      testsite/.env

      DJANGO_SETTINGS_MODULE="django_hardening.settings.development"
      SECRET_KEY="your_secret_key"
      SECRET_ADMIN_URL="very_secret_url"
      

      Now let’s tell Django to hide your admin URL with SECRET_ADMIN_URL. Open your project’s urls.py:

      • nano testsite/urls.py

      Note: Don’t forget to replace your_secret_key and very_secret_url with your own secret strings.

      If you want to use random strings for these variables, Python provides the secrets module for generating such strings. The examples in its documentation are great starting points for creating small Python programs that generate secure random strings, as shown next.
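      For example, the following short script uses the secrets module to print a URL-safe random string that you could paste into .env:

      import secrets

      # 32 random bytes, URL-safe base64-encoded (roughly 43 characters)
      print(secrets.token_urlsafe(32))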

      Edit the admin URL like so:

      testsite/testsite/urls.py

      import os
      from django.contrib import admin
      from django.urls import path
      
      urlpatterns = [
          path(os.getenv('SECRET_ADMIN_URL') + '/admin/', admin.site.urls),
      ]
      

      You can now find the admin login page at very_secret_url/admin/ instead of just /admin/.

      Conclusion

      In this tutorial you have configured your current Django project for easy use with different environments. Your project now leverages python-dotenv for handling secrets and settings. And your production settings now have Django’s built-in security features enabled.

      If you’ve enabled all the recommended security components and re-implemented settings as directed, your project has these key features:

      • SSL/HTTPS for all communications (for example, subdomains, cookies, CSRF).
      • XSS (cross-site scripting) attacks prevention.
      • CSRF (cross-site request forgery) attacks prevention.
      • Concealed project secret key.
      • Concealed admin login URL preventing brute-force attacks.
      • Separate settings for development and production.

      If you are interested in learning more about Django, check out our tutorial series on Django development.

      Also, if you haven’t already put your project into production, here is a tutorial on How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 20.04. You can also check out our Django topic page for further tutorials.

      And, of course, please read over Django’s settings documentation, for further information.


