

      11 Secrets to Making a Successful Website

      Affiliate Disclosure: DreamHost maintains relationships with some of our recommended partners, so if you click through a link and purchase, we may receive a commission. We only recommend solutions we believe in.

      Whether you’re a writer looking to reach a wider audience, a boutique manufacturer needing to boost sales, or you’re someone who just wants to make money online — you’re going to need a website. And not just any website — a great website.

      Unfortunately, the World Wide Web has been saturated with sites for some years now.

      Standing out. Making your mark. It’s not easy.

      But here at DreamHost, we know a fair bit about websites and what makes them work — so here are our 11 secrets to making a successful website.

      1. Defined goals
      2. A good domain name
      3. Quality web hosting
      4. A clear description
      5. A top-notch CMS
      6. A great e-commerce platform
      7. Engaging web design
      8. SEO optimization
      9. High-quality content
      10. Using Google Analytics
      11. A site maintenance plan

      1. Define your goals

      Before you do anything else, you need to decide what you want to achieve from this website.

      • Is it going to be an e-commerce website that you use to sell products?
      • Are you looking to promote a service?
      • Do you want to make money via affiliate links?
      • Are you simply after a platform to share your thoughts and ideas?

      It can be hard to change the direction of an established website. Make sure you know what type of website you want to create and what you need to get out of it.

      2. Choose a good domain

      Picking a good domain name is easier said than done. It’s also seriously important since it’s tricky to change it once you’ve started establishing your site and brand. (Yes, you can migrate to a new domain, but that comes with all sorts of complications.)

      So what does a good domain look like?

      • It is your brand name or includes your brand name.
      • It’s memorable.
      • It’s easy to spell.
      • It’s short (ideally under 14 characters; the shorter, the better).
      • It’s free of numbers, hyphens, or other unusual characters.
      • It has a recognized, trustworthy extension (.com is the ideal).

      It can also be a good idea to choose an SEO-friendly domain that includes one of your most important keywords.

      3. Get secure, quality web hosting with good tech support

      It can be tempting to skimp on web hosting and choose the cheapest service you can find. Don’t do this.

      A cheap web host can cost you in other ways: excessive downtime, slow site speeds, limited or non-existent support.

      It’s not worth it.

      If you’re serious about making your website a success, invest in quality web hosting you can rely on. You won’t go wrong with DreamHost web hosting.

      4. Include a clear description of your business

      This is something a lot of companies get wrong. They know their industry and their business, inside and out. That’s great. But it often means they forget how to describe it to people who don’t.

      Ideally, you should be able to sum up what you do in a couple of sentences. This summary should be displayed prominently on your homepage. And anyone should be able to read it and understand it.

      If you have any doubts, ask someone who knows nothing about what you do what they think. Better yet, enlist the help of a professional copywriter.

      5. Use a top-notch content management system

      A content management system (CMS) is where you’ll manage your website’s pages and content. The right one can make this quick, easy, and fun. The wrong one can be the source of endless headaches and can even limit what you’re able to do.

      Good content management systems allow you to build pages and posts on a site with no prior knowledge of coding. They lower the barrier to entry and enable anyone to create their own site.

      But how do you know which is the right one for you?

      The following questions will help you decide:

      • Do I want a basic website with no frills?
      • Do I want to be able to build the website in the future to have more features?
      • What’s my budget?
      • Will I want to add the ability for a website visitor to buy products in the future?
      • Am I happy to pay ongoing costs, or do I just want to pay a one-off fee?
      • Do I need to integrate with other parts of my business (such as a lead generation tool or a payment platform)?
      • Is it SEO-friendly?
      • Will it scale with my business?
      • Can I use a website builder to make the design process easier?

      Depending on what you want to use the CMS for, you may have other questions, but these basic ones should set you off on finding the right content management system for your needs.

      6. Choose a great e-commerce platform

      If you know that you’ll want to sell products on your site, you need to decide which e-commerce platform to use. There are many to choose from, but not all of them are built to scale or fit for your purposes. As with most things in life, you get what you pay for.

      If you’re running a business website, you need to make sure that the platform you choose is reliable and sturdy. You don’t want to deal with customer complaints because you chose a platform that can’t deliver.

      Before selecting a platform, ask yourself the following:

      • Is it SEO-friendly? While there are many cheap and easy-to-set-up e-commerce platforms, not all of them are particularly SEO-friendly.
      • Is it mobile-friendly? We live in a mobile-first world, and if that platform is even a little bit clunky, you’re going to be losing out on revenue.
      • Is it a trusted and secure platform? One of the most important considerations for customers is that their details will be safe when purchasing. Your platform needs to be fully secure, and it needs to communicate that to potential customers.
      • Will it scale? We all have high hopes for our businesses, and while not all succeed, a fair few do. When choosing an e-commerce platform, ensure that it will scale with your online business.
      • How do the systems work? One of the critical areas to investigate is how well the platform deals with product and order management. You need it to be swift so that you aren’t wasting time on the back end, and you can get on with delivering the best service to your customers.


      7. Create a beautiful, engaging, accessible website design

      When you imagine a design that matches usability, one company usually springs to mind: Apple. They have managed to combine both of these into a wildly successful business.

      Users appreciate good design, and when it’s combined with solid usability, you have a winner. A site that people want to revisit. A site that people want to buy from.

      Google has always said that you need to create websites with the end-user in mind — and it’s more true today than it’s ever been.

      Here are a few tips for creating sites that ooze design quality.

      • Know your target audience and design accordingly — what features would they want, and how design-savvy are they?
      • Don’t skimp on cost. With design, you get what you pay for. Don’t try to cut corners. Use a professional, experienced designer (like the team of pros at DreamHost).
      • Look at the competition. Find some sites in your niche that perform well and study them. Google will rank sites based on niches, and design is crucial. If your competitors’ websites are winning with simplistic colors and designs, take notice. Then make yours better.

      8. Optimize for search engines

      One of the simplest, fastest ways to help make your site successful is to optimize it for search engines. While search engines might be smart — and every day they get better at understanding the meaning and context of web pages, their content, and users’ intent — they still need us to help them along.

      In most cases, the first step to optimizing for search engines is keyword research. This will help you identify the sort of keywords you should be targeting through optimization.

      There are plenty of popular keyword research tools to choose from.

      If you want to take a more sophisticated approach to keyword research, try Semrush. It’s a great tool for advanced digital marketers, and it’s accessible to people with less experience too. We love it so much, we’ve set up a free 14-day PRO trial for our readers!


      Semrush’s keyword research section works similarly to most other keyword research tools; however, in addition to keyword suggestions, search volume, and difficulty scores, you also get global volume data, keyword variations, and questions linked to your starting keyword. You also get insights into the current state of the SERPs.

      How to Choose Which Keywords to Target

      The right keywords to target can generally be determined by three things:

      1. Search volume
      2. Difficulty/competitiveness
      3. Relevance

      Ideally, you want to target keywords that lots of people are searching for (how many searches you can realistically expect this to be will depend on your industry), that have low competition (which increases the odds that you’ll be able to rank), and that, of course, are relevant to your site!

      How to Optimize for Target Keywords

      The main places for including your target keywords are:

      The title tag

      This forms part of the snippet of information that appears in the search results — it’s usually the clickable headline of your listing.
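      As an illustration (a hypothetical example, not any real site’s markup), a keyword-bearing title tag sits in the page’s head like so:

```html
<head>
  <!-- Keep it under roughly 60 characters and lead with the main keyword -->
  <title>Handmade Leather Bags | Example Co.</title>
</head>
```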

      <H> tags

      H tags are header tags. You might know them as H1s, H2s, H3s, and so on. They are used to help organize the information on a page, particularly in terms of hierarchy.

      While search engines use all the text on a page for ranking, H tags have extra weight behind them — particularly the H1 tag. Include keywords in them where you can (but never, ever be spammy about it!).
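      For example, a hypothetical page targeting the keyword “homemade pasta” might organize its headers hierarchically like this:

```html
<h1>How to Make Homemade Pasta</h1>
  <h2>Choosing the Right Flour</h2>
  <h2>Mixing and Kneading the Dough</h2>
    <h3>Kneading by Hand</h3>
    <h3>Using a Stand Mixer</h3>
```

      One H1 per page describing the overall topic, with H2s and H3s breaking it into subtopics, keeps both readers and search engines oriented.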

      On-page content

      Search engines use all of a page’s content when determining its subject matter and what it should rank for. It goes without saying that keywords should be included here. Just be tactical about how you do it.

      Use words and phrases naturally. Use permutations where possible. Consider entities. And most importantly of all, write for users, not search engines.

      If you need more help with optimizing your website, consider adding the SEO Toolkit to your hosting plan for $4.99/month. It’ll help you improve your search engine rankings and drive more customers to your site with its suite of DIY tools, helpful analytics, and a step-by-step SEO plan.

      9. Create high-quality content

      It’s hardly a secret that websites with high-quality content have a better chance of performing well than those with poor-quality content.

      Great content should be informative, well written, and easy to understand. It should be formatted in a way that guides the user through the copy.

      But “create high-quality content” sounds somewhat subjective, doesn’t it?

      It’s not quite as subjective as you might think. Here are some ways you can ensure you’re writing high-quality content for your niche.

      • Invest in good writers — as with many points in this guide, cutting corners won’t help your website succeed in the long run. Our SEO marketing service can help.
      • Have experts write your content — Google has been working towards making sure only the best and most accurate content reaches the top of the search results. Middle-of-the-road content isn’t going to cut it for much longer.
      • Conduct deep research — you need to find out what your customers want, not what you think they want. Many websites miss this point entirely. If you don’t satisfy your users’ needs, you can’t really call your content high-quality.

      It’s not just about the words — you need to make your content sing. Make sure it appeals to different users. If it’s right for your target audience, then introduce videos, images, infographics, and charts.

      10. Track your progress with analytics

      It’s tricky, if not impossible, to know whether your site is a success if you’re not tracking your progress. While many tools allow you to track your website and even spy on the status of others, there is arguably no better website tracking tool than Google’s own Analytics.

      To get started with Google Analytics, you will need to:

      1. Create a Google Analytics account.
      2. Add your website as a property within that account.
      3. Install the tracking tag Google gives you on every page of your site.

      And that’s pretty much it. You can create filtered views of the data to help you home in on specific data elements, but the above is all you need to do for Google to start gathering extremely detailed data that will enable you to assess the performance of your site and adapt your strategy accordingly.

      11. Set up a site maintenance plan

      What do you do once your site’s up and running? Should you sit back, relax, and let the visits/leads/money roll in?


      Depending on your goals, you may be able to slow down. But you can’t just forget about your website. Things will go wrong.

      Instead, implement a maintenance plan, like what we offer as part of our DreamCare service. Your maintenance plan should include a list of periodic must-dos and when you will do them. The most important will likely be:

      • Running security scans
      • Backing up your site’s data
      • Checking Google Search Console (formerly Webmaster Tools), primarily for any glaring errors that have gone unnoticed

      Another thing we’d advise is to run Hotjar or another tool that monitors user behavior. While you can use its findings to gain a deep understanding of your website’s user experience (UX), you can also use it periodically to pinpoint specific issues or points of contention.


      Your New Website Is Waiting

      As you can see, creating a successful website isn’t quick, and it isn’t particularly easy — but knowing the secrets to a successful website will help.

      Get started with these key takeaways.

      1. Define your goals. Decide exactly what you want your website to achieve.
      2. Choose a good domain name that’s relevant, memorable, short, and has a trustworthy extension.
      3. Invest in quality web hosting that’s secure, with great tech support.
      4. Describe your business clearly on your homepage, and anywhere else it’s relevant.
      5. Use a quality content management system: one that’s robust and easy to use.
      6. Choose a good e-commerce platform that can grow with your business.
      7. Create a beautiful website design that’s one step above your competitors.
      8. Optimize for search engines. Ensure they understand what your site’s about and the terms it should rank for.
      9. Create high-quality content; substandard content doesn’t rank.
      10. Track your progress, starting with Google Analytics.
      11. Create a site maintenance plan, including backing up data and checking Search Console.

      Ready to get started with your website? If you’re starting from scratch, we can help with our Pro Services. Our expert team can design, build, manage and market your website — everything you need to launch yourself or your brand online. Learn more about what DreamHost can do for you here.


      Secrets to Building and Scaling SRE Teams


      About the Talk

      Tammy Bryant, Principal SRE at Gremlin, shares how she’s built tech solutions in emerging ecosystems. From setting yourself up for success as you scale to efficiently handling millions of global users, Bryant shares her best advice for onboarding all customers, no matter what stage your business is in.

      What You’ll Learn

      • How to set yourself up for success from the moment you onboard your first customer
      • Enabling your teams to build scalable and standardized solutions
      • Three secrets to efficiently scaling your infrastructure


      About the Presenter

      Tammy Bryant (Butow) is a Principal SRE at Gremlin. Tammy loves building scalable solutions and has helped efficiently scale companies from seed round to post-IPO. Previously, Tammy was the SRE Manager for Databases and Block Storage at Dropbox where she led her teams to effectively scale from 400 million to over 500 million users in 1 year (with a small team of 5 engineers!). Tammy’s passion is working with small wise teams to scale effectively.


      Use HashiCorp Vault to Manage Secrets

      Updated and contributed by Linode

      HashiCorp Vault is a secrets management tool that helps to provide secure, automated access to sensitive data. Vault meets these use cases by coupling authentication methods (such as application tokens) to secret engines (such as simple key/value pairs) using policies to control how access is granted. In this guide, you will install, configure, and access Vault in an example deployment to illustrate Vault’s features and API.

      This guide will use the latest version of Vault, which is 1.1.0 at the time of this writing.

      Why Use Vault?

      A service such as Vault requires operational effort to run securely and effectively. Given the added complexity of using Vault as part of an application, in what way does it add value?

      Consider a simple application that must use an API token or other secret value. How should this sensitive credential be given to the application at runtime?

      • Committing the secret alongside the rest of the application code in a version control system such as git is a poor security practice for a number of reasons, including that the sensitive value is recorded in plaintext and not protected in any way.
      • Recording a secret in a file that is passed to an application requires that the file be securely populated in the first place and strictly access-controlled.
      • Static credentials are challenging to rotate or restrict access to if an application is compromised.

      Vault solves these and other problems in a number of ways, including:

      • Services and applications that run without operator interaction can authenticate to Vault using values that can be rotated, revoked, and permission-controlled.
      • Some secrets engines can generate temporary, dynamically-generated secrets to ensure that credentials expire after a period of time.
      • Policies for users and machine accounts can be strictly controlled for specific types of access to particular paths.


      Before continuing, you should familiarize yourself with important Vault terms and concepts that will be used later in this guide.

      • A token is the underlying mechanism that underpins access to Vault resources. Whether a user authenticates to Vault using a GitHub token or an application-driven service authenticates using an AppRole RoleID and SecretID, all forms of authentication are eventually normalized to a token. Tokens are typically short-lived (that is, they expire after a period of time, or time-to-live, ttl) and have one or more policies attached to them.
      • A Vault policy dictates certain actions that may be performed upon a Vault path. Capabilities such as the ability to read a secret, write secrets, and delete them are all examples of actions that are defined in a policy for a particular path.
      • A path in Vault is similar in form to a Unix filesystem path (like /etc) or a URL (such as /blog/title). Users and machine accounts interact with Vault over particular paths in order to retrieve secrets, change settings, or otherwise interact with a running Vault service. All Vault access is performed over a REST interface, so these paths eventually take the form of an HTTP URL. While some paths interact with the Vault service itself to manage resources such as policies or settings, many paths serve as an endpoint to either authenticate to Vault or interact with a secret engine.
      • A secret engine is a backend used in Vault to provide secrets to Vault users. The simplest example of a secret engine is the key/value backend, which simply returns plain text values that may be stored at particular paths (these secrets remain encrypted on the backend). Other examples of secret backends include the PKI backend, which can generate and manage TLS certificates, and the TOTP backend, which can generate temporary one-time passwords for web sites that require multi-factor authentication (including the Linode Manager).


      This guide will set up Vault in a simple, local filesystem-only configuration. The steps listed here apply equally to any Linux distribution.

      These installation steps will:

      • Procure a TLS certificate to ensure that all communications between Vault and clients are encrypted.
      • Configure Vault for local filesystem storage.
      • Install the vault binary and set up the operating system to operate Vault as a service.


      The configuration outlined in this guide is suitable for small deployments. In situations that call for highly-available or fault-tolerant services, consider running more than one Vault instance with a highly-available storage backend such as Consul.

      Before you Begin

      1. Familiarize yourself with Linode’s Getting Started guide and complete the steps for deploying and setting up a Linode running a recent Linux distribution (such as Ubuntu 18.04 or CentOS 7), including setting the hostname and timezone.


        Setting the full hostname correctly in /etc/hosts is important in this guide in order to terminate TLS on Vault correctly. Your Linode’s fully qualified domain name and short hostname should be present in the /etc/hosts file before continuing.

      2. This guide uses sudo wherever possible. Complete the sections of our Securing Your Server guide to create a standard user account, harden SSH access, and remove unnecessary network services.

      3. Follow our UFW Guide in order to install and configure a firewall on your Ubuntu or Debian-based system, or our FirewallD Guide for rpm or CentOS-based systems. Consider reviewing Vault’s Production Hardening recommendations if this will be used in a production environment.


        When configuring a firewall, keep in mind that Vault listens on port 8200 by default and Let’s Encrypt utilizes ports 80 (HTTP) and 443 (HTTPS).

      4. Ensure your system is up to date. On Debian-based systems, use:

        sudo apt update && sudo apt upgrade

        While on rpm-based systems, such as CentOS, use:

        sudo yum update

      Acquire a TLS Certificate

      1. Follow the steps in our Secure HTTP Traffic with Certbot guide to acquire a TLS certificate.

      2. Add a system group in order to grant limited read access to the TLS files created by Certbot.

        sudo groupadd tls
      3. Change the group ownership of certificate files in the Let’s Encrypt directory to tls.

        sudo chgrp -R tls /etc/letsencrypt/{archive,live}
      4. Grant members of the tls group read access to the necessary directories and files.

        sudo chmod g+rx /etc/letsencrypt/{archive,live}
        sudo find /etc/letsencrypt/archive -name 'privkey*' -exec chmod g+r {} ';'

      Download Vault files

      1. Download the release binary for Vault.



        If you receive an error that indicates wget is missing from your system, install the wget package and try again.

      2. Download the checksum file, which will verify that the zip file is not corrupt.

      3. Download the checksum signature file, which verifies that the checksum file has not been tampered with.
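      The download commands themselves were stripped from this copy of the guide. Assuming HashiCorp’s standard release-archive layout (the URLs below are illustrative; confirm them on releases.hashicorp.com), the three files for Vault 1.1.0 on 64-bit Linux can be fetched with:

```shell
# Assumed release-archive layout for Vault 1.1.0 on 64-bit Linux;
# adjust the version and platform suffix for your system.
VAULT_VERSION="1.1.0"
BASE_URL="https://releases.hashicorp.com/vault/${VAULT_VERSION}"

wget "${BASE_URL}/vault_${VAULT_VERSION}_linux_amd64.zip"    # release binary (zip)
wget "${BASE_URL}/vault_${VAULT_VERSION}_SHA256SUMS"         # checksum file
wget "${BASE_URL}/vault_${VAULT_VERSION}_SHA256SUMS.sig"     # checksum signature
```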


      Verify the Downloads

      1. Import the HashiCorp Security GPG key (listed on the HashiCorp Security page under Secure Communications):

        gpg --recv-keys 51852D87348FFC4C

        The output should show that the key was imported:

        gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
        gpg: key 51852D87348FFC4C: public key "HashiCorp Security " imported
        gpg: no ultimately trusted keys found
        gpg: Total number processed: 1
        gpg:               imported: 1


        If an error occurs with the error message keyserver receive failed: Syntax error in URI, simply try running the gpg command again.


        If you receive errors that indicate the dirmngr software is missing or inaccessible, install dirmngr using your package manager and run the GPG command again.

      2. Verify the checksum file’s GPG signature:

        gpg --verify vault*.sig vault*SHA256SUMS

        The output should contain the Good signature from "HashiCorp Security <>" confirmation message:

        gpg: Signature made Mon 18 Mar 2019 01:44:51 PM MDT
        gpg:                using RSA key 91A6E7F85D05C65630BEF18951852D87348FFC4C
        gpg: Good signature from "HashiCorp Security <>" [unknown]
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.
        Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE  F189 5185 2D87 348F FC4C
      3. Verify that the fingerprint output matches the fingerprint listed in the Secure Communications section of the HashiCorp Security page.

      4. Verify the .zip archive’s checksum:

        sha256sum -c vault*SHA256SUMS 2>&1 | grep OK

        The output should show the file’s name as given in the vault*SHA256SUMS file:


      Install the Vault Executable

      1. Extract the Vault executable to the local directory.

        unzip vault_*


        If you receive an error that indicates unzip is missing from your system, install the unzip package and try again.

      2. Move the vault executable into a system-wide location.

        sudo mv vault /usr/local/bin
      3. Reset the ownership and permissions on the executable.

        sudo chown root:root /usr/local/bin/vault
        sudo chmod 755 /usr/local/bin/vault
      4. Set executable capabilities on the vault binary. This will grant Vault privileges to lock memory, which is a best practice for running Vault securely (see the Vault documentation for additional information).

        sudo setcap cap_ipc_lock=+ep /usr/local/bin/vault
      5. Verify that vault is now available in the local shell.

        vault --version

        The output of this command should return the following.

        Vault v1.1.0 ('36aa8c8dd1936e10ebd7a4c1d412ae0e6f7900bd')

      System Vault Configuration

      1. Create a system user that vault will run as when the service is started.

        sudo useradd --system -d /etc/vault.d -s /bin/nologin vault
      2. Add the vault user to the previously created tls group, which will grant the user the ability to read Let’s Encrypt certificates.

        sudo gpasswd -a vault tls
      3. Create the data directory and configuration directory for vault with limited permissions.

        sudo install -o vault -g vault -m 750 -d /var/lib/vault
        sudo install -o vault -g vault -m 750 -d /etc/vault.d
      4. Create a systemd service file that will control how to run vault persistently as a system daemon.

        Description="a tool for managing secrets"
        CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
        ExecStart=/usr/local/bin/vault server -config=/etc/vault.d/vault.hcl
        ExecReload=/bin/kill --signal HUP $MAINPID

        These systemd service options define a number of important settings to ensure that Vault runs securely and reliably. Review the Vault documentation for a complete explanation of what these options achieve.
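        The four directives above are only an excerpt. A minimal but complete unit file — a sketch, assuming it is saved as /etc/systemd/system/vault.service; the hardening directives shown are common recommendations rather than requirements — might look like:

```ini
[Unit]
Description="a tool for managing secrets"
Requires=network-online.target
After=network-online.target

[Service]
User=vault
Group=vault
# Restrict Vault's privileges to syslog and memory-locking only.
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
AmbientCapabilities=CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/vault server -config=/etc/vault.d/vault.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillSignal=SIGINT
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

        After creating or editing the file, run sudo systemctl daemon-reload so that systemd picks up the changes.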


      Configure Vault

      1. Create a configuration file for Vault at /etc/vault.d/vault.hcl with the following contents, replacing example.com with the domain used in your Let’s Encrypt certificates.

        listener "tcp" {
          address       = "0.0.0.0:8200"
          tls_cert_file = "/etc/letsencrypt/live/example.com/fullchain.pem"
          tls_key_file  = "/etc/letsencrypt/live/example.com/privkey.pem"
        }

        storage "file" {
          path = "/var/lib/vault"
        }
        This configuration will use the Let’s Encrypt certificates created in the previous steps to terminate TLS for the Vault service. This ensures that secrets will never be transmitted in plaintext. The actual storage for Vault will be on the local filesystem at /var/lib/vault.

      Run The Vault Service

      1. Vault is now ready to run. Start the service using systemctl.

        sudo systemctl start vault
      2. If desired, enable the service as well so that Vault starts at system boot time.

        sudo systemctl enable vault
      3. Confirm that Vault is operational by using the vault executable to check for the service’s status. Set the VAULT_ADDR environment variable to https://example.com:8200, replacing example.com with your own domain:

        export VAULT_ADDR=https://example.com:8200
      4. vault commands should now be sent to your local Vault instance. To confirm this, run the vault status command:

        vault status

        The command should return output similar to the following:

        Key                Value
        ---                -----
        Seal Type          shamir
        Initialized        false
        Sealed             true
        Total Shares       0
        Threshold          0
        Unseal Progress    0/0
        Unseal Nonce       n/a
        Version            n/a
        HA Enabled         false

      The remainder of this tutorial assumes that the environment variable VAULT_ADDR is set to this value to ensure that requests are sent to the correct Vault host.

      Initializing Vault

      At this stage, Vault is installed and running, but not yet initialized. The following steps will initialize the Vault backend, which sets unseal keys and returns the initial root token. Initialization occurs only one time for a Vault deployment.

      There are two configurable options to choose when performing the initialization step. The first value is the number of key shares, which controls the total number of unseal keys that Vault will generate. The second value is the key threshold, which controls how many of these unseal key shares are required before Vault will successfully unseal itself. Unsealing is required whenever Vault is restarted or otherwise brought online after being in a previously offline state.

      To illustrate this concept, consider a secure server in a data center. Because the Vault database is only decrypted in-memory, stealing or bringing the server offline for any reason will leave the only copy of Vault’s database on the filesystem in encrypted form, or “sealed”.

      When starting the server again, a key share of 3 and key threshold of 2 means that 3 keys exist, but at least 2 must be provided at startup for Vault to derive its decryption key and load its database into memory for access once again.

      The key share count ensures that multiple keys can exist at different locations for a degree of fault tolerance and backup purposes. The key threshold count ensures that compromising one unseal key alone is not sufficient to decrypt Vault data.

      1. Choose a value for the number of key shares and key threshold. Your situation may vary, but as an example, consider a team of three people in charge of operating Vault. A key share of 3 ensures that each member holds one unseal key. A key threshold of 2 means that no single operator can lose their key and compromise the system or steal the Vault database without coordinating with another operator.

      2. Using these chosen values, execute the initialization command. Be prepared to save the output that is returned from the following command, as it is only viewable once.

        vault operator init -key-shares=3 -key-threshold=2

        This command will return output similar to the following:

        Unseal Key 1: BaR6GUWRY8hIeNyuzAn7FTa82DiIldgvEZhOKhVsl0X5
        Unseal Key 2: jzh7lji1NX9TsNVGycUudSIy/X4lczJgsCpRfm3m8Q03
        Unseal Key 3: JfdH8LqEyc4B+xLMBX6/LT9o8G/6isC2ZFfz+iNMIW/0
        Initial Root Token: s.YijNa8lqSDeho1tJBtY02983
        Vault initialized with 3 key shares and a key threshold of 2. Please securely
        distribute the key shares printed above. When the Vault is re-sealed,
        restarted, or stopped, you must supply at least 2 of these keys to unseal it
        before it can start servicing requests.
        Vault does not store the generated master key. Without at least 2 keys to
        reconstruct the master key, Vault will remain permanently sealed!
        It is possible to generate new unseal keys, provided you have a quorum of
        existing unseal keys shares. See "vault operator rekey" for more information.
      3. In a production scenario, these unseal keys should be stored in separate locations. For example, store one in a password manager such as LastPass, encrypt another with gpg, and store a third offline on a USB key. Doing so ensures that compromising one storage location is not sufficient to recover the number of unseal keys required to decrypt the Vault database.

      4. The Initial Root Token is equivalent to the “root” or superuser account for the Vault API. Record and protect this token in a similar fashion. Like the root account on a Unix system, this token should be used to create less-privileged accounts to use for day-to-day interactions with Vault and the root token should be used infrequently due to its widespread privileges.

      Unseal Vault

      After initialization, Vault will be sealed. The following unseal steps must be performed any time the vault service is brought down and then brought up again, such as when performing systemctl restart vault or restarting the host machine.

      1. With VAULT_ADDR set appropriately, execute the unseal command.

        vault operator unseal

        A prompt will appear:

        Unseal Key (will be hidden):
      2. Paste or enter one unseal key and press Enter. The command will finish with output similar to the following:

        Unseal Key (will be hidden):
        Key                Value
        ---                -----
        Seal Type          shamir
        Initialized        true
        Sealed             true
        Total Shares       3
        Threshold          2
        Unseal Progress    1/2
        Unseal Nonce       0124ce2a-6229-fac1-0e3f-da3e97e00583
        Version            1.1.0
        HA Enabled         false

        Notice that the output indicates that one of the two required unseal keys has been provided.

      3. Perform the unseal command again.

        vault operator unseal
      4. Enter a different unseal key when the prompt appears.

        Unseal Key (will be hidden):
      5. The resulting output should indicate that Vault is now unsealed (notice the Sealed false line).

        Unseal Key (will be hidden):
        Key             Value
        ---             -----
        Seal Type       shamir
        Initialized     true
        Sealed          false
        Total Shares    3
        Threshold       2
        Version         1.1.0
        Cluster Name    vault-cluster-a397153e
        Cluster ID      a065557e-3ee8-9d26-4d90-b90c8d69fa5d
        HA Enabled      false

      Vault is now operational.

      Using Vault

      Token Authentication

      When interacting with Vault over its REST API, Vault identifies and authenticates most requests by the presence of a token. While the initial root token can be used for now, the Policies section of this guide explains how to provision additional tokens.

      1. Set the VAULT_TOKEN environment variable to the value of the previously-obtained root token. This token is the authentication mechanism that the vault command will rely on for future interaction with Vault. The actual root token will be different in your environment.

        export VAULT_TOKEN=s.YijNa8lqSDeho1tJBtY02983
      2. Use the token lookup subcommand to confirm that the token is valid and has the expected permissions.

        vault token lookup
      3. The output of this command should include the following:

        policies            [root]
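      Under the hood, the vault CLI sends this token in the X-Vault-Token HTTP header of each REST request. The following Python sketch performs the same lookup against Vault’s lookup-self endpoint; the injectable urlopen parameter is purely an assumption of this example (so the flow can be exercised without a live server), not part of any Vault library:

```python
import json
from urllib import request

def token_lookup_self(vault_addr, token, urlopen=request.urlopen):
    """Look up the calling token via GET /v1/auth/token/lookup-self.

    Vault authenticates the request from the X-Vault-Token header.
    `urlopen` defaults to a real HTTP call but can be swapped out
    for testing without a running Vault server.
    """
    req = request.Request(vault_addr + "/v1/auth/token/lookup-self",
                          headers={"X-Vault-Token": token})
    with urlopen(req) as resp:
        return json.load(resp)["data"]
```

      Against a live server, calling this with your Vault address and the root token should return data whose policies list contains root, matching the CLI output above.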

      The KV Secret Backend

      Vault backends are the core mechanism Vault uses to permit users to read and write secret values. The simplest backend to illustrate this functionality is the KV backend. This backend lets clients write key/value pairs (such as mysecret=apikey) that can be read later.

      1. Enable the secret backend by using the enable Vault subcommand.

        vault secrets enable -version=2 kv
      2. Write an example value to the KV backend using the kv put Vault subcommand.

        vault kv put kv/myservice api_token=secretvalue

        This command should return output similar to the following:

        Key              Value
        ---              -----
        created_time     2019-03-31T04:35:38.631167678Z
        deletion_time    n/a
        destroyed        false
        version          1
      3. Read this value from the kv/myservice path.

        vault kv get kv/myservice

        This command should return output similar to the following:

        ====== Metadata ======
        Key              Value
        ---              -----
        created_time     2019-03-31T04:35:38.631167678Z
        deletion_time    n/a
        destroyed        false
        version          1
        ====== Data ======
        Key          Value
        ---          -----
        api_token    secretvalue
      4. Many utilities and scripts are better suited to processing JSON output. Use the -format=json flag to perform the read once more, with the results returned in JSON form.

        vault kv get -format=json kv/myservice
          "request_id": "2734ea81-6f39-c017-4c73-2719b2018b65",
          "lease_id": "",
          "lease_duration": 0,
          "renewable": false,
          "data": {
            "data": {
              "api_token": "secretvalue"
            "metadata": {
              "created_time": "2019-03-31T04:35:38.631167678Z",
              "deletion_time": "",
              "destroyed": false,
              "version": 1
          "warnings": null

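      Scripts can then consume the secret directly from this JSON. As a small Python sketch, here is how a document shaped like the output above (abbreviated in this example) can be parsed to extract the stored value:

```python
import json

# Abbreviated sample shaped like `vault kv get -format=json kv/myservice`
sample = """
{
  "data": {
    "data": { "api_token": "secretvalue" },
    "metadata": { "version": 1 }
  }
}
"""

doc = json.loads(sample)
# KV v2 nests the stored key/value pairs one level down, under data.data.
print(doc["data"]["data"]["api_token"])  # prints: secretvalue
```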

      Policies

      Up until this point, we have performed API calls to Vault with the root token. Production best practices dictate that this token should rarely be used and that most operations should be performed with lesser-privileged tokens associated with controlled policies.

      Policies are defined by specifying a particular path and the set of capabilities a user is permitted to perform on that path. In our previous commands, the path has been kv/myservice, so we can create a policy that allows only reading this secret and no other operations, such as listing other secrets. When no policy applies to a particular path, Vault denies operations by default.

      In the case of the KV backend, Vault distinguishes between operations on the stored data, which are the actual secret values, and on metadata, which includes information such as version history. In this example, we will create a policy that controls access to the key/value data alone.
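      Note that the CLI hides this distinction: vault kv get kv/myservice actually reads the API path kv/data/myservice, while version history lives under kv/metadata/myservice. The hypothetical helper below (not part of any Vault tooling) simply spells out that mapping:

```python
def kv2_api_path(mount, name, section="data"):
    """Map a KV v2 secret name to the API path that policies must reference.

    KV version 2 exposes secret values under <mount>/data/<name> and
    version metadata under <mount>/metadata/<name>; the `vault kv`
    subcommands perform this translation automatically.
    """
    return f"{mount}/{section}/{name}"

print(kv2_api_path("kv", "myservice"))               # kv/data/myservice
print(kv2_api_path("kv", "myservice", "metadata"))   # kv/metadata/myservice
```

      This is why the policy in the next step names kv/data/myservice even though the CLI commands use kv/myservice.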

      1. Create the following Vault policy file, saving it as policy.hcl.

        path "kv/data/myservice" {
          capabilities = ["read"]
        }

        This simple policy permits any token associated with it to read the secret stored at the KV secret backend path kv/myservice.

      2. Load this policy into Vault using the policy write subcommand. The following command names the aforementioned policy read-myservice.

        vault policy write read-myservice policy.hcl
      3. To illustrate the use of this policy, create a new token with this new policy associated with it.

        vault token create -policy=read-myservice

        This command should return output similar to the following.

        Key                  Value
        ---                  -----
        token                s.YdpJWRRaEIgdOW4y72sSVygy
        token_accessor       07akQfzg0TDjj3YoZSGMPkHA
        token_duration       768h
        token_renewable      true
        token_policies       ["default" "read-myservice"]
        identity_policies    []
        policies             ["default" "read-myservice"]
      4. Open another terminal window or tab and log in to the same host that Vault is running on. Set VAULT_ADDR to ensure that new vault commands point at the local instance of Vault, replacing the value with your domain.

        export VAULT_ADDR=
      5. Set the VAULT_TOKEN environment variable to the new token just created by the token create command. Remember that your actual token will be different than the one in this example.

        export VAULT_TOKEN=s.YdpJWRRaEIgdOW4y72sSVygy
      6. Now attempt to read our secret in Vault at the kv/myservice path.

        vault kv get kv/myservice

        Vault should return the key/value data.

        ====== Metadata ======
        Key              Value
        ---              -----
        created_time     2019-03-31T04:35:38.631167678Z
        deletion_time    n/a
        destroyed        false
        version          1
        ====== Data ======
        Key          Value
        ---          -----
        api_token    secretvalue
      7. To illustrate forbidden operations, attempt to list all secrets in the KV backend.

        vault kv list kv/

        Vault should deny this request.

        Error listing kv/metadata: Error making API request.
        URL: GET
        Code: 403. Errors:
        * 1 error occurred:
                * permission denied
      8. In contrast, attempt to perform the same operation in the previous terminal window that has been configured with the root token.

        vault kv list kv/

        The root token should have sufficient rights to return a list of all secret keys under the kv/ path.

      Authentication Methods

      In practice, when services that require secret values are deployed, a token should not be distributed as part of the deployment or configuration management. Rather, services should authenticate themselves to Vault in order to acquire a token that has a limited lifetime. This ensures that credentials eventually expire and cannot be reused if they are ever leaked or disclosed.

      Vault supports many types of authentication methods. For example, the Kubernetes authentication method can retrieve a token for individual pods. As a simple illustrative example, the following steps will demonstrate how to use the AppRole method.

      The AppRole authentication method works by requiring that clients provide two pieces of information: the AppRole RoleID and SecretID. The recommended approach to using this method is to store these two pieces of information in separate locations, as one alone is not sufficient to authenticate against Vault, but together, they permit a client to retrieve a valid Vault token. For example, in a production service, a RoleID might be present in a service’s configuration file, while the SecretID could be provided as an environment variable.
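      The login exchange itself is a single API call: the client POSTs its RoleID and SecretID to the auth/approle/login endpoint and reads a client token from the response. A minimal Python sketch of that exchange follows; the injectable urlopen parameter is an assumption of this example (so the flow can be exercised without a live server), not part of any Vault library:

```python
import json
from urllib import request

def approle_login(vault_addr, role_id, secret_id, urlopen=request.urlopen):
    """Exchange a RoleID/SecretID pair for a client token via
    POST /v1/auth/approle/login.

    `urlopen` defaults to a real HTTP call but can be swapped out
    for testing without a running Vault server.
    """
    payload = json.dumps({"role_id": role_id, "secret_id": secret_id}).encode()
    req = request.Request(vault_addr + "/v1/auth/approle/login",
                          data=payload,
                          headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["auth"]["client_token"]
```

      The returned client token is then used exactly like any other Vault token, via the VAULT_TOKEN environment variable or the X-Vault-Token header.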

      1. Enable the AppRole authentication method using the auth subcommand. Remember to perform these steps in the terminal window with the root token stored in the VAULT_TOKEN environment variable, otherwise Vault commands will fail.

        vault auth enable approle
      2. Create a named role. This defines a role that can be used to “log in” to Vault and retrieve a token with a policy associated with it. The following command creates a named role called my-application, which issues tokens valid for 10 minutes with the read-myservice policy associated with them.

        vault write auth/approle/role/my-application 
      3. Retrieve the RoleID of the named role, which uniquely identifies the AppRole. Note this value for later use.

        vault read auth/approle/role/my-application/role-id
        Key        Value
        ---        -----
        role_id    147cd412-d1c2-4d2c-c57e-d660da0b1fa8

        In this example case, RoleID is 147cd412-d1c2-4d2c-c57e-d660da0b1fa8. Note that your value will be different.

      4. Finally, read the secret-id of the named role, and save this value for later use as well.

        vault write -f auth/approle/role/my-application/secret-id
        Key                   Value
        ---                   -----
        secret_id             2225c0c3-9b9f-9a9c-a0a5-10bf06df7b25
        secret_id_accessor    30cbef6a-8834-94fe-6cf3-cf2e4598dd6a

        In this example output, the SecretID is 2225c0c3-9b9f-9a9c-a0a5-10bf06df7b25.

      5. Use these values to generate a limited-use token by performing a write operation against the AppRole API. Replace the RoleID and SecretID values here with your own.

        vault write auth/approle/login 

        The resulting output should include a new token, which in this example case is s.3uu4vwFO8D1mG5S76IG04mck.

        Key                     Value
        ---                     -----
        token                   s.3uu4vwFO8D1mG5S76IG04mck
        token_accessor          fi3aW4W9kZNB3FAC20HRXeoT
        token_duration          10m
        token_renewable         true
        token_policies          ["default" "read-myservice"]
        identity_policies       []
        policies                ["default" "read-myservice"]
        token_meta_role_name    my-application
      6. Open one more terminal tab or window and log in to your remote host running Vault.

      7. Once again, set the VAULT_ADDR environment variable to the correct value to communicate with your local Vault instance.

        export VAULT_ADDR=
      8. Set the VAULT_TOKEN environment variable to this newly created token. From the previous example output, this would be the following (note that your token will be different).

        export VAULT_TOKEN=s.3uu4vwFO8D1mG5S76IG04mck
      9. Read the KV path that this token should be able to access.

        vault kv get kv/myservice

        The example secret should be readable and accessible.

      10. If you read this value using this Vault token after more than 10 minutes have elapsed, the token will have expired and any read operations using the token should be denied. Performing another vault write auth/approle/login operation (detailed in step 5) can generate new tokens to use.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.

      This guide is published under a CC BY-ND 4.0 license.
