

      Building Go Applications for Different Operating Systems and Architectures

      In software development, it is important to consider the operating system and underlying processor architecture that you would like to compile your binary for. Since it is often slow or impossible to run a binary on a different OS/architecture platform, it is a common practice to build your final binary for many different platforms to maximize your program’s audience. However, this can be difficult when the platform you are using for development is different from the platform you want to deploy your program to. In the past, for example, developing a program on Windows and deploying it to a Linux or a macOS machine would involve setting up build machines for each of the environments you wanted binaries for. You’d also need to keep your tooling in sync, in addition to other considerations that would add cost and make collaborative testing and distribution more difficult.

      Go solves this problem by building support for multiple platforms directly into the go build tool, as well as the rest of the Go toolchain. By using environment variables and build tags, you can control which OS and architecture your final binary is built for, in addition to putting together a workflow that can quickly toggle the inclusion of platform-dependent code without changing your codebase.

      In this tutorial, you will put together a sample application that joins strings together into a filepath, create and selectively include platform-dependent snippets, and build binaries for multiple operating systems and system architectures on your own system, showing you how to use this powerful capability of the Go programming language.


      To follow the example in this article, you will need a working installation of Go on your system.

      Possible Platforms for GOOS and GOARCH

      Before showing how to control the build process to build binaries for different platforms, let’s first inspect what kinds of platforms Go is capable of building for, and how Go references these platforms using the environment variables GOOS and GOARCH.

      The Go tooling has a command that can print a list of the possible platforms that Go can build on. This list can change with each new Go release, so the combinations discussed here might not be the same on another version of Go. At the time of writing this tutorial, the current Go release is 1.13.

      To find this list of possible platforms, run the following:

      • go tool dist list

      You will receive an output similar to the following:


      aix/ppc64 freebsd/amd64 linux/mipsle openbsd/386 android/386 freebsd/arm linux/ppc64 openbsd/amd64 android/amd64 illumos/amd64 linux/ppc64le openbsd/arm android/arm js/wasm linux/s390x openbsd/arm64 android/arm64 linux/386 nacl/386 plan9/386 darwin/386 linux/amd64 nacl/amd64p32 plan9/amd64 darwin/amd64 linux/arm nacl/arm plan9/arm darwin/arm linux/arm64 netbsd/386 solaris/amd64 darwin/arm64 linux/mips netbsd/amd64 windows/386 dragonfly/amd64 linux/mips64 netbsd/arm windows/amd64 freebsd/386 linux/mips64le netbsd/arm64 windows/arm

      This output is a set of key-value pairs separated by a /. The first part of the combination, before the /, is the operating system. In Go, these operating systems are possible values for the environment variable GOOS, pronounced “goose”, which stands for Go Operating System. The second part, after the /, is the architecture. As before, these are all possible values for an environment variable: GOARCH. This is pronounced “gore-ch”, and stands for Go Architecture.

      Let’s break down one of these combinations to understand what it means and how it works, using linux/386 as an example. The key-value pair starts with the GOOS, which in this example would be linux, referring to the Linux OS. The GOARCH here would be 386, which stands for the Intel 80386 microprocessor.

      There are many platforms available with the go build command, but a majority of the time you’ll end up using linux, windows, or darwin as a value for GOOS. These cover the big three OS platforms: Linux, Windows, and macOS, which is based on the Darwin operating system and is thus called darwin. However, Go can also cover less mainstream platforms like nacl, which represents Google’s Native Client.

      When you run a command like go build, Go uses the current platform’s GOOS and GOARCH to determine how to build the binary. To find out what combination your platform is, you can use the go env command and pass GOOS and GOARCH as arguments:

      • go env GOOS GOARCH

      In testing this example, we ran this command on macOS on a machine with an AMD64 architecture, so we received the following output:


      darwin amd64

      Here the output of the command tells us that our system has GOOS=darwin and GOARCH=amd64.

      You now know what the GOOS and GOARCH are in Go, as well as their possible values. Next, you will put together a program to use as an example of how to use these environment variables and build tags to build binaries for other platforms.

      Write a Platform-Dependent Program with filepath.Join()

      Before you start building binaries for other platforms, let’s build an example program. A good sample for this purpose is the Join function in the path/filepath package in the Go standard library. This function takes a number of strings and returns one string that is joined together with the correct filepath separator.

      This is a good example program because the operation of the program depends on which OS it is running on. On Windows, the path separator is a backslash, \, while Unix-based systems use a forward slash, /.

      Let’s start with building an application that uses filepath.Join(), and later, you’ll write your own implementation of the Join() function that customizes the code to the platform-specific binaries.

      First, create a folder in your src directory with the name of your app:

      • mkdir app

      Move into that directory:

      • cd app

      Next, create a new file in your text editor of choice named main.go. For this tutorial, we will use Nano:

      • nano main.go

      Once the file is open, add the following code:


      package main

      import (
        "fmt"
        "path/filepath"
      )

      func main() {
        s := filepath.Join("a", "b", "c")
        fmt.Println(s)
      }

      The main() function in this file uses filepath.Join() to concatenate three strings together with the correct, platform-dependent path separator.

      Save and exit the file, then run the program:

      • go run main.go

      When running this program, you will receive different output depending on which platform you are using. On Windows, you will see the strings separated by \:


      a\b\c
      On Unix systems like macOS and Linux, you will receive the following:


      a/b/c
      This shows that, because of the different filesystem protocols used on these operating systems, the program will have to build different code for the different platforms. But since it already uses a different file separator depending on the OS, we know that filepath.Join() already accounts for the difference in platform. This is because the Go toolchain automatically detects your machine’s GOOS and GOARCH and uses this information to select the code snippet with the right build tags and file separator.

      Let’s consider where the filepath.Join() function gets its separator from. Run the following command to inspect the relevant snippet from Go’s standard library:

      • less /usr/local/go/src/os/path_unix.go

      This will display the contents of path_unix.go. Look for the following part of the file:


      . . .
      // +build aix darwin dragonfly freebsd js,wasm linux nacl netbsd openbsd solaris

      package os

      const (
        PathSeparator     = '/' // OS-specific path separator
        PathListSeparator = ':' // OS-specific path list separator
      )
      . . .

      This section defines the PathSeparator for all of the varieties of Unix-like systems that Go supports. Notice the build tags at the top, each of which is a GOOS value that Go considers Unix-like. When the GOOS matches one of these terms, your program will yield the Unix-style filepath separator.

      Press q to return to the command line.

      Next, open the file that defines the behavior of filepath.Join() when used on Windows:

      • less /usr/local/go/src/os/path_windows.go

      You will see the following:


      . . .
      package os

      const (
              PathSeparator     = '\\' // OS-specific path separator
              PathListSeparator = ';'  // OS-specific path list separator
      )
      . . .

      Although the value of PathSeparator is '\\' here, the code will render the single backslash (\) needed for Windows filepaths, since the first backslash is only needed as an escape character.

      Notice that, unlike the Unix file, there are no build tags at the top. This is because GOOS and GOARCH can also be passed to go build by adding an underscore (_) and the environment variable value as a suffix to the filename, something we will go into more in the section Using GOOS and GOARCH File Name Suffixes. Here, the _windows part of path_windows.go makes the file act as if it had the build tag // +build windows at the top of the file. Because of this, when your program is run on Windows, it will use the constants of PathSeparator and PathListSeparator from the path_windows.go code snippet.

      To return to the command line, quit less by pressing q.

      In this step, you built a program that showed how Go converts the GOOS and GOARCH automatically into build tags. With this in mind, you can now update your program and write your own implementation of filepath.Join(), using build tags to manually set the correct PathSeparator for Windows and Unix platforms.

      Implementing a Platform-Specific Function

      Now that you know how Go’s standard library implements platform-specific code, you can use build tags to do this in your own app program. To do this, you will write your own implementation of filepath.Join().

      Open up your main.go file:

      • nano main.go

      Replace the contents of main.go with the following, using your own function called Join():


      package main

      import (
        "fmt"
        "strings"
      )

      func Join(parts ...string) string {
        return strings.Join(parts, PathSeparator)
      }

      func main() {
        s := Join("a", "b", "c")
        fmt.Println(s)
      }

      The Join function takes a number of parts and joins them together using the strings.Join() method from the strings package to concatenate the parts together using the PathSeparator.

      You haven’t defined the PathSeparator yet, so do that now in another file. Save and quit main.go, open your favorite editor, and create a new file named path.go:

      • nano path.go

      Define the PathSeparator and set it equal to the Unix filepath separator, /:


      package main

      const PathSeparator = "/"

      Compile and run the application:

      • go build
      • ./app

      You’ll receive the following output:


      a/b/c
      This runs successfully to get a Unix-style filepath. But this isn’t yet what we want: the output is always a/b/c, regardless of what platform it runs on. To add in the functionality to create Windows-style filepaths, you will need to add a Windows version of the PathSeparator and tell the go build command which version to use. In the next section, you will use build tags to accomplish this.

      To account for Windows platforms, you will now create an alternate file to path.go and use build tags to make sure the code snippets only run when GOOS and GOARCH are the appropriate platform.

      But first, add a build tag to path.go to tell it to build for everything except for Windows. Open up the file:

      • nano path.go

      Add the following build tag to the file:


      // +build !windows

      package main

      const PathSeparator = "/"

      Go build tags allow for inverting, meaning that you can instruct Go to build this file for any platform except for Windows. To invert a build tag, place a ! before the tag.
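      Build tags can also be combined. As a sketch (this fragment is illustrative and not part of the tutorial’s program), space-separated terms are ORed while comma-separated terms are ANDed:

```go
// +build linux,amd64 darwin

// This constraint keeps the file in the build only when
// (GOOS=linux AND GOARCH=amd64) OR GOOS=darwin. Note the mandatory
// blank line between the constraint comment and the package clause.
package main
```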

      Save and exit the file.

      Now, if you were to run this program on Windows, you would get the following error:


      ./main.go:9:29: undefined: PathSeparator

      In this case, Go would not be able to include path.go to define the variable PathSeparator.

      Now that you have ensured that path.go will not run when GOOS is Windows, add a new file, windows.go:

      • nano windows.go

      In windows.go, define the Windows PathSeparator, as well as a build tag to let the go build command know it is the Windows implementation:


      // +build windows

      package main

      const PathSeparator = "\\"

      Save the file and exit from the text editor. The application can now compile one way for Windows and another for all other platforms.

      While the binaries will now build correctly for their platforms, there are further changes you must make in order to compile for a platform that you do not have access to. To do this, you will alter your local GOOS and GOARCH environment variables in the next step.

      Using Your Local GOOS and GOARCH Environment Variables

      Earlier, you ran the go env GOOS GOARCH command to find out what OS and architecture you were working on. When you ran the go env command, it looked for the two environment variables GOOS and GOARCH; if found, their values would be used, but if not found, then Go would set them with the information for the current platform. This means that you can change GOOS or GOARCH so that they do not default to your local OS and architecture.

      The go build command behaves in a similar manner to the go env command. You can set either the GOOS or GOARCH environment variables to build for a different platform using go build.

      If you are not using a Windows system, build a Windows binary of app by setting the GOOS environment variable to windows when running the go build command:

      • GOOS=windows go build

      Now list the files in your current directory:

      • ls

      The output of listing the directory shows there is now an app.exe Windows executable in the project directory:


      app app.exe main.go path.go windows.go

      Using the file command, you can get more information about this file, confirming its build:

      • file app.exe

      You will receive:


      app.exe: PE32+ executable (console) x86-64 (stripped to external PDB), for MS Windows

      You can also set one or both environment variables at build time. Run the following:

      • GOOS=linux GOARCH=ppc64 go build

      Your app executable will now be replaced by a file for a different architecture. Run the file command on this binary:

      • file app

      You will receive output like the following:

      app: ELF 64-bit MSB executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), statically linked, not stripped

      By setting your local GOOS and GOARCH environment variables, you can now build binaries for any of Go’s compatible platforms without a complicated configuration or setup. Next, you will use filename conventions to keep your files neatly organized and build for specific platforms automatically without build tags.
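      Because GOOS and GOARCH are ordinary environment variables, a release script can loop over a list of target platforms. A minimal shell sketch (it assumes Go is installed, and the app-<os>-<arch> output names are an invented convention, not part of this tutorial):

```shell
#!/bin/sh
# Build the app for several target platforms in one pass.
for platform in windows/amd64 linux/arm64 darwin/amd64; do
    os="${platform%/*}"    # text before the slash becomes GOOS
    arch="${platform#*/}"  # text after the slash becomes GOARCH
    GOOS="$os" GOARCH="$arch" go build -o "app-$os-$arch" .
done
```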

      Using GOOS and GOARCH Filename Suffixes

      As you saw earlier, the Go standard library makes heavy use of build tags to simplify code by separating out different platform implementations into different files. When you opened the os/path_unix.go file, there was a build tag that listed all of the possible combinations that are considered Unix-like platforms. The os/path_windows.go file, however, contained no build tags, because the suffix on the filename sufficed to tell Go which platform the file was meant for.

      Let’s look at the syntax of this feature. When naming a .go file, you can add GOOS and GOARCH as suffixes to the file’s name in that order, separating the values by underscores (_). If you had a Go file named filename.go, you could specify the OS and architecture by changing the filename to filename_GOOS_GOARCH.go. For example, if you wished to compile it for Windows with 64-bit ARM architecture, you would make the name of the file filename_windows_arm64.go. This naming convention helps keep code neatly organized.

      Update your program to use the filename suffixes instead of build tags. First, rename the path.go and windows.go file to use the convention used in the os package:

      • mv path.go path_unix.go
      • mv windows.go path_windows.go

      With the two filenames changed, you can remove the build tag you added to path_windows.go:

      • nano path_windows.go

      Remove // +build windows so that your file looks like this:


      package main

      const PathSeparator = "\\"

      Save and exit from the file.

      Because unix is not a valid GOOS, the _unix.go suffix has no meaning to the Go compiler. It does, however, convey the intended purpose of the file. Like the os/path_unix.go file, your path_unix.go file still needs to use build tags, so keep that file unchanged.

      By using filename conventions, you removed unneeded build tags from your source code and made the filesystem cleaner and clearer.


      The ability to generate binaries for multiple platforms that require no dependencies is a powerful feature of the Go toolchain. In this tutorial, you used this capability by adding build tags and filename suffixes to mark certain code snippets to only compile for certain architectures. You created your own platform-dependent program, then manipulated the GOOS and GOARCH environment variables to generate binaries for platforms beyond your current platform. This is a valuable skill, because it is a common practice to have a continuous integration process that automatically runs through these environment variables to build binaries for all platforms.

      For further study on go build, check out our Customizing Go Binaries with Build Tags tutorial. If you’d like to learn more about the Go programming language in general, check out the entire How To Code in Go series.


      Building Your Own Business Website? Don’t Make These 10 Mistakes

      It can be daunting to get a business website up and running.

      Let’s be real here: if you weren’t a little bit jittery about it, we’d be worried. Not because you can’t do this. You totally can. It’s easy to build a great-looking business website if you use the right tools — and you don’t even have to know how to code!

      No, it’s daunting because your website matters so much to the health of your business. It’ll help you generate leads, drive conversions, and build your brand. But like a first date, there are a lot of ways to screw this thing up.

      “So, you’re paying, right?”

      “I’m a huge Nickelback fan.”

      “Do you mind if my mom joins us?”

      Luckily, avoiding “website don’ts” is much easier than finding love in a hopeless place. In this post, I’ll outline the 10 biggest mistakes you could make when setting up a website for your small business. Avoid these pitfalls and you’ll be on your way to turning visitors into devoted customers. Ah, l’amour.

      1. Failing To Make A Responsive Website

      This is the ultimate beginner’s mistake. So what is a responsive website anyway?

      Simply put, it’s a website that responds to its environment to give the user the best possible viewing experience. In other words, if a user comes searching for your website on a mobile phone, then the site’s layout will display in a different, more accessible way than if they were visiting the site on a desktop.

      We’ve gone in-depth on why mobile-friendly website design matters here on the blog before. But here are the simple facts: 61 percent of users who have trouble accessing a mobile site are unlikely to return. Of those, 40 percent will seek out a competitor’s site instead. And if you don’t create a mobile-friendly website, Google’s going to ding you too.

      The takeaway?

      When choosing a website builder or platform to create your website, make sure you pick one that offers responsive designs. You don’t want to mess around with a stagnant design that will drive away mobile visitors.

      2. Not Customizing Your Theme

      One of the best things about using a content management system is the free themes available at your fingertips. In fact, as soon as you settle on your web hosting company and purchase a domain, you can select the perfect theme to match your brand in mere minutes.

      However, it’s important to remember whatever platform you use, you’re going to have to customize it to match your brand’s style. Otherwise, you’ll be left with a website that looks exactly like thousands of other business sites on the web — a big mistake.

      Happily, with Remixer, our in-house website builder, it’s easy to personalize your site. You can upload and insert your own images (or use our royalty-free gallery, your call), flesh out your unique content, and place menu items where you need them to build your dream website.

      3. Using Jargon

      We get it. You have been working in your field for years and years, and you’re literally a master of your industry. You know what “IPC,” “VC Money,” and “apportunity” stand for, but I’ve got news for you — your website visitors don’t.

      If a visitor lands on your website and the copywriting is full of technical jargon they can’t understand, they’re not going to stick around to parse through your metaphors.

      Remember: the average human has a shorter attention span than a goldfish. That’s a piddly eight seconds. This means when customers find your site, they need to encounter copy that is straightforward and encourages them to take action fast — whether that’s watching a video, entering your sign-up flow, or subscribing to an email newsletter.

      If you need a good example, Dropbox Business slays when it comes to website design and simple copywriting. Let’s take a look at their homepage.

      dropbox business home page

      What is Dropbox Business doing right?

      • The headline is straightforward with no jargon.
      • The subheading tells you what they do in one easy-to-follow sentence. In fact, it’s immediately clear what the company offers.
      • The call-to-action is easy to see (and click)!

      When approaching copywriting and design, be like Dropbox.

      4. Not Thinking About Readability

      Not only does your copywriting need to be sweet and simple, but the design also has to be easy on the eyes.

      And I don’t just mean nice to look at; it also has to be easy to read.

      When you use a website builder, you have free rein to customize your website as you wish, but this doesn’t mean you should part with best practices. To make sure your users don’t get turned off by your design, stick to these rules:

      • Keep Your Font Sizes Consistent — Larger font sizes are a good way to say, “This is important, so pay attention.” Smaller font sizes should be used for more in-depth information. When building your website, don’t go hog wild and use a bunch of different sizes. Stick to three or four sizes.
      • Consider Your Fonts — Papyrus may look cute on your kid’s 5th birthday party invite, but it doesn’t look great on your website. Luckily, most website builders’ themes will only use fonts that designers have already vetted for readability and looks. One important tip: Sans-serif fonts — the ones without the extra little flourishes — are generally easier to read on the web.
      • Choose Contrasting Colors — When selecting a color palette for your website, make sure the background images don’t drown out your font. Readability has to be the first priority. If you’re design challenged (no shame in admitting that, by the way), Remixer comes with preset color mixes so you don’t have to worry about the subtle differences between Seafoam and Aqua.

      freshbooks cloud accounting home page

      So who is doing readability right? FreshBooks is nailing it.

      • The copy is free of jargon, simple, and straight to the point.
      • Even though their content is more robust than the Dropbox example above, it’s still easy to understand.
      • The colors work nicely with each other, and none of the images detract from the text.
      • The most important messages are in larger font while the supplemental information is in a smaller font.

      Overall, the readability of this website is on the money — which is good because, well, their business is all about the dollars.

      5. Falling For Search Engine Optimization Myths

      Every new business owner hopes to create a website that will sit on the top of the search results on Google, Bing, Yahoo, and every other search engine. And they hope to rank for more than just one keyword.

      However, the truth of the matter is that a good SEO strategy takes time, smarts, and money. Plus, it’s impossible to successfully optimize your homepage for hundreds of keywords. That’s just not how the internet works, and if you try to cut corners, Google knows where you live.

      Seriously, it knows.

      A better strategy is to think about the top keyword for your website and optimize your content to rank for that keyword. Here are a few suggestions:

      • Write Long-Form Content — Once upon a time, stuffing your content with your top keyword would help you rank in the search results. Gone are those days, and just like on that first date we talked about earlier, you’ll actually be penalized for trying too hard. These days, it’s better to simply write your content for the user. Be as comprehensive and helpful as possible and Google will reward you.
      • Structure Your Content with Heading Tags — Heading tags — the top-down <h1> to <h6>s — are often seen as a “meh, not that important” sort of thing, but they really do matter. Headings give structure to your pages, making it easier for both readers and Google bots to consume your content. To get the most SEO bang for your buck with headings, follow this guide from Yoast.
      • Add a Call-to-Action — Your homepage should have a clear call-to-action (CTA). Not only will it help direct your readers to do the thing you want them to do — buy your product, sign up for your service, or subscribe to your newsletter — but it will help Google focus on what is important to you.

      The Moz blog is a solid example of on-point optimization. Here’s what they’re doing right:

      • Clear, strong heading tags in every post.
      • Structured content that is easy to follow, read, and scan.
      • The posts aren’t laden with annoying keywords. Instead, the copy supports the H1 tag and is helpful to readers.

      6. Going Pop-Up Crazy

      Here’s how I like to think about pop-ups. When someone puts a sign in front of your face, it’s difficult not to pay attention to it. But when someone puts a whole bunch of signs in your face, it’s impossible to pay attention to any of them.

      Helpful pop-ups that serve your readers are a great way to build your business. For example, you can include ONE pop-up asking someone to do ONE of the following: join your mailing list, share a post, follow you on social media, or sign up for an upcoming event.

      But the second you start throwing pop-ups on your website to join your mailing list and share a post and follow you on social media and sign up for your webinar, and . . . you are not serving your visitors — or your business.

      When it comes to pop-ups, be wise. Determine what the most pressing action you want your users to take is and then build a pop-up around that action. Leave the rest out. Simple as that.

      example of pop-up 'super early bird 65% off'

      Digital Marketer, one of the marketing world’s top thought leaders, serves as a great example of using pop-ups wisely.

      • Digital Marketer is an online publication with thousands of daily followers. They use this pop-up to let subscribers know about an upcoming event.
      • Once a subscriber either enters their information or opts out, the pop-up disappears.
      • The pop-up isn’t asking for multiple actions from the subscriber.

      Feel free to use a pop-up on your website. Just don’t go crazy or your website visitors will feel like they’ve shown up at a protest with mixed messages.


      7. Slow Server Times

      Did you know customers will only wait 4 seconds for a site to load before clicking out of the website, according to a study by Akamai Technologies? That means if you want to keep your customers interested, you need to make sure your site loads whip-fast.

      The good news is when you build your site with Remixer, you are working with a product that is configured to make load times faster. Remixer’s static pages load whip-fast compared to dynamic ones.

      8. Poor Navigation

      The internet yields nearly 7 billion global searches a day, and websites with intuitive navigation are rewarded with more visitors (and visitors who stick around for longer). If you can’t help your users get what they want immediately, chances are they will move on to a competitor’s site.

      Even if you’re not a professional, there are a few simple things you can do to make sure your design is intuitive for visitors:

      • Use a Theme — The easiest way to create a winning website is to use a website builder. With Remixer, the important structural elements you’ll need for a basic website are incorporated into each of our expert-built themes. That means, all you have to do is choose a design that works with your brand, add your content, and boom, you’ve got a well-designed website — no coding required.
      • Stick to the Standard — Humans are creatures of habit. And most of us are trained to expect vertical navigation on the left side of the page and horizontal navigation across the top of the page. To avoid confusion, keep your navigation standard.
      • Don’t Overwhelm Users — You may be tempted to include several links in your navigation bar. But remember: less is more. Stick to the basics — About, Products, Services, Contact, etc. — in your navigation menu.

      You know what’s coming next, don’t you? A good example! 4 Rivers Smokehouse has a really sleek design.

      • The navigation bar is up top, simple, and easy to read.
      • You know exactly how to take action as soon as you view the home page. “Show me the menu!”
      • The design is simple — and makes you want to dive into a plate of slow-roasted brisket.

      9. Outdated Information and/or Design

      I know we just talked about brisket, but building a website is not like making slow-cooked pork. You can’t set it and forget it! Your website requires regular updates and maintenance for a variety of reasons.

      • Updated Information Helps Customers — If you let your website information get outdated, it will be difficult for customers to find you, order from you, and remain a loyal customer. Don’t leave them hanging!
      • It Keeps Google Happy — Google ranks websites based on a huge algorithm. One major driver of rankings: how fresh and robust is your site’s content? This means you need to frequently add new content to your site (blog posts, anyone?) and routinely spruce up your older pages and posts.
      • Updated Design Keeps Your Brand Relevant — The tech world is constantly innovating, and you need to stay in the game when it comes to design trends and best practices. For example, here’s how Google and Facebook, two of the world’s most popular websites, looked when they first launched. Imagine how successful they would have been if they never updated their look and feel. Yeah, it’s not a pretty picture.
      Google in 1996
      Facebook in 2004

      As you continue to build (and grow!) your business, make sure your website keeps up.

      10. Don’t Go It Alone

      Building a website from scratch is a lofty goal, but unless you’re really looking forward to investing in the process, it can be a big drain on your resources. And remember, your time counts as a resource when you’re bootstrapping a small business. If you need a responsive, professional-looking website — and you need it fast — Remixer is the tool for you.

      Need a Beautiful Website?

      Design it yourself with Remixer, our easy-to-use website builder. No coding required.

      You can start with a free responsive theme that’s been put together by our web experts to help you sidestep all the mistakes we’ve outlined above. Our themes are designed to load quickly, look great, and help you easily plug in SEO-friendly content.

      All you have to do is import your content, customize your theme, and then hit ‘publish.’ And if you get stuck somewhere along the way, the DreamHost team is just a chat away. Today is the day to start building your own Remixer site for free.


      Webinar Series: Building Blocks for Doing CI/CD with Kubernetes

      Webinar Series

      This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a Cloud Native approach to building, testing, and deploying applications, covering release management, Cloud Native tools, Service Meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.

      This tutorial includes the concepts and commands from the first session of the series, Building Blocks for Doing CI/CD with Kubernetes.


      If you are getting started with containers, you will likely want to know how to automate building, testing, and deployment. By taking a Cloud Native approach to these processes, you can leverage the right infrastructure APIs to package and deploy applications in an automated way.

      Two building blocks for doing automation include container images and container orchestrators. Over the last year or so, Kubernetes has become the default choice for container orchestration. In this first article of the CI/CD with Kubernetes series, you will:

      • Build container images with Docker, Buildah, and Kaniko.
      • Set up a Kubernetes cluster with Terraform, and create Deployments and Services.
      • Extend the functionality of a Kubernetes cluster with Custom Resources.

      By the end of this tutorial, you will have container images built with Docker, Buildah, and Kaniko, and a Kubernetes cluster with Deployments, Services, and Custom Resources.

      Future articles in the series will cover related topics: package management for Kubernetes, CI/CD tools like Jenkins X and Spinnaker, Service Meshes, and GitOps.


      Step 1 — Building Container Images with Docker and Buildah

      A container image is a self-contained entity with its own application code, runtime, and dependencies that you can use to create and run containers. You can use different tools to create container images, and in this step you will build containers with two of them: Docker and Buildah.

      Building Container Images with Dockerfiles

      Docker builds your container images automatically by reading instructions from a Dockerfile, a text file that includes the commands required to assemble a container image. Using the docker image build command, you can create an automated build that will execute the command-line instructions provided in the Dockerfile. When building the image, you will also pass the build context with the Dockerfile, which contains the set of files required to create an environment and run an application in the container image.

      Typically, you will create a project folder for your Dockerfile and build context. Create a folder called demo to begin:

      Next, create a Dockerfile inside the demo folder:

      Add the following content to the file:


      FROM ubuntu:16.04
      RUN apt-get update \
          && apt-get install -y nginx \
          && apt-get clean \
          && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
          && echo "daemon off;" >> /etc/nginx/nginx.conf
      EXPOSE 80
      CMD ["nginx"]

      This Dockerfile consists of a set of instructions that will build an image to run Nginx. During the build process ubuntu:16.04 will function as the base image, and the nginx package will be installed. Using the CMD instruction, you've also configured nginx to be the default command when the container starts.
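      If you prefer to script these setup steps, the project folder and Dockerfile above can also be created non-interactively. This is just an equivalent sketch of the same two steps:

      ```shell
      # Create the project folder and write the Dockerfile shown above in one step.
      mkdir -p demo
      cat > demo/Dockerfile <<'EOF'
      FROM ubuntu:16.04
      RUN apt-get update \
          && apt-get install -y nginx \
          && apt-get clean \
          && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
          && echo "daemon off;" >> /etc/nginx/nginx.conf
      EXPOSE 80
      CMD ["nginx"]
      EOF
      ```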

      Next, you'll build the container image with the docker image build command, using the current directory (.) as the build context. Passing the -t option to this command names the image nkhare/nginx:latest:

      • sudo docker image build -t nkhare/nginx:latest .

      You will see the following output:


      Sending build context to Docker daemon 49.25MB
      Step 1/5 : FROM ubuntu:16.04
       ---> 7aa3602ab41e
      Step 2/5 : MAINTAINER
       ---> Using cache
       ---> 552b90c2ff8d
      Step 3/5 : RUN apt-get update && apt-get install -y nginx && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && echo "daemon off;" >> /etc/nginx/nginx.conf
       ---> Using cache
       ---> 6bea966278d8
      Step 4/5 : EXPOSE 80
       ---> Using cache
       ---> 8f1c4281309e
      Step 5/5 : CMD ["nginx"]
       ---> Using cache
       ---> f545da818f47
      Successfully built f545da818f47
      Successfully tagged nginx:latest

      Your image is now built. You can list your Docker images using the following command:


      REPOSITORY     TAG      IMAGE ID       CREATED         SIZE
      nkhare/nginx   latest   4073540cbcec   3 seconds ago   171MB
      ubuntu         16.04    7aa3602ab41e   11 days ago

      You can now use the nkhare/nginx:latest image to create containers.

      Building Container Images with Project Atomic-Buildah

      Buildah is a CLI tool, developed by Project Atomic, for quickly building Open Container Initiative (OCI)-compliant images. OCI provides specifications for container runtimes and images in an effort to standardize industry best practices.

      Buildah can create an image either from a working container or from a Dockerfile. It can build images completely in user space without the Docker daemon, and can perform image operations like build, list, push, and tag. In this step, you'll compile Buildah from source and then use it to create a container image.

      To install Buildah you will need the required dependencies, including tools that will enable you to manage packages and package security, among other things. Run the following commands to install these packages:

      • cd
      • sudo apt-get install software-properties-common
      • sudo add-apt-repository ppa:alexlarsson/flatpak
      • sudo add-apt-repository ppa:gophers/archive
      • sudo apt-add-repository ppa:projectatomic/ppa
      • sudo apt-get update
      • sudo apt-get install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man

      Because you will compile the buildah source code to create its package, you'll also need to install Go:

      • sudo apt-get update
      • sudo curl -O
      • sudo tar -xvf go1.8.linux-amd64.tar.gz
      • sudo mv go /usr/local
      • echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
      • source ~/.profile
      • go version

      You will see the following output, indicating a successful installation:


      go version go1.8 linux/amd64

      You can now get the buildah source code to create its package, along with the runc binary. runc is the implementation of the OCI container runtime, which you will use to run your Buildah containers.

      Run the following commands to install runc and buildah:

      • mkdir ~/buildah
      • cd ~/buildah
      • export GOPATH=`pwd`
      • git clone ./src/
      • cd ./src/
      • make runc all TAGS="apparmor seccomp"
      • sudo cp ~/buildah/src/ /usr/bin/.
      • sudo apt install buildah

      Next, create the /etc/containers/registries.conf file to configure your container registries:

      • sudo nano /etc/containers/registries.conf

      Add the following content to the file to specify your registries:


      # This is a system-wide configuration file used to
      # keep track of registries for various container backends.
      # It adheres to TOML format and does not support recursive
      # lists of registries.
      # The default location for this configuration file is /etc/containers/registries.conf.
      # The only valid categories are: 'registries.search', 'registries.insecure',
      # and 'registries.block'.
      [registries.search]
      registries = ['', '', '', '', '']
      # If you need to access insecure registries, add the registry's fully-qualified name.
      # An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
      [registries.insecure]
      registries = []
      # If you need to block pull access from a registry, uncomment the section below
      # and add the registries fully-qualified name.
      # Docker only
      [registries.block]
      registries = []

      The registries.conf configuration file specifies which registries should be consulted when completing image names that do not include a registry or domain portion.

      Now run the following command to build an image, using the repository as the build context. This repository also contains the relevant Dockerfile:

      • sudo buildah build-using-dockerfile -t rsvpapp:buildah

      This command creates an image named rsvpapp:buildah from the Dockerfile available in the repository.

      To list the images, use the following command:

      You will see the following output:


      IMAGE ID       IMAGE NAME                  CREATED AT            SIZE
      b0c552b8cf64                               Sep 30, 2016 04:39    95.3 MB
      22121fd251df   localhost/rsvpapp:buildah   Sep 11, 2018 14:34    114 MB

      One of these images is localhost/rsvpapp:buildah, which you just created. The other is the base image from the Dockerfile.

      Once you have built the image, you can push it to Docker Hub. This will allow you to store it for future use. You will first need to login to your Docker Hub account from the command line:

      • docker login -u your-dockerhub-username -p your-dockerhub-password

      Once the login is successful, you will get a file, ~/.docker/config.json, that will contain your Docker Hub credentials. You can then use that file with buildah to push images to Docker Hub.
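      As a point of reference, the config.json file stores registry credentials keyed by server address. Its exact layout varies by Docker version, but a minimal sketch looks roughly like this (the auth value is a base64 encoding of username:password; the value shown here is a placeholder, not real output):

      ```json
      {
        "auths": {
          "https://index.docker.io/v1/": {
            "auth": "base64-encoded-username:password"
          }
        }
      }
      ```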

      For example, if you wanted to push the image you just created, you could run the following command, citing the authfile and the image to push:

      • sudo buildah push --authfile ~/.docker/config.json rsvpapp:buildah docker://your-dockerhub-username/rsvpapp:buildah

      You can also push the resulting image to the local Docker daemon using the following command:

      • sudo buildah push rsvpapp:buildah docker-daemon:rsvpapp:buildah

      Finally, take a look at the Docker images you have created:


      REPOSITORY     TAG       IMAGE ID       CREATED          SIZE
      rsvpapp        buildah   22121fd251df   4 minutes ago    108MB
      nkhare/nginx   latest    01f0982d91b8   17 minutes ago   172MB
      ubuntu         16.04     b9e15a5d1e1a   5 days ago       115MB

      As expected, you should now see a new image, rsvpapp:buildah, that has been exported using buildah.

      You now have experience building container images with two different tools, Docker and Buildah. Let's move on to discussing how to set up a cluster of containers with Kubernetes.

      Step 2 — Setting Up a Kubernetes Cluster on DigitalOcean using kubeadm and Terraform

      There are different ways to set up Kubernetes on DigitalOcean. To learn more about how to set up Kubernetes with kubeadm, for example, you can look at How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04.

      Since this tutorial series discusses taking a Cloud Native approach to application development, we'll apply this methodology when setting up our cluster. Specifically, we will automate our cluster creation using kubeadm and Terraform, a tool that simplifies creating and changing infrastructure.

      Using your personal access token, you will connect to DigitalOcean with Terraform to provision 3 servers. You will run the kubeadm commands inside of these VMs to create a 3-node Kubernetes cluster containing one master node and two workers.
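      As a rough sketch of what the repository's Terraform configuration does, a DigitalOcean Droplet resource in Terraform 0.11 syntax looks like the following. The resource name, region, and size here are illustrative assumptions, not the actual script:

      ```hcl
      # Hypothetical sketch: three Droplets for a one-master, two-worker cluster.
      provider "digitalocean" {
        token = "${var.do_token}"
      }

      resource "digitalocean_droplet" "k8s_node" {
        count    = 3
        image    = "ubuntu-18-04-x64"
        name     = "k8s-node-${count.index}"
        region   = "blr1"
        size     = "s-2vcpu-4gb"
        ssh_keys = ["${var.fingerprint}"]
      }
      ```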

      On your Ubuntu server, create a pair of SSH keys, which will allow password-less logins to your VMs:

      You will see the following output:


      Generating public/private rsa key pair.
      Enter file in which to save the key (~/.ssh/id_rsa):

      Press ENTER to save the key pair in the ~/.ssh directory in your home directory, or enter another destination.

      Next, you will see the following prompt:


      Enter passphrase (empty for no passphrase):

      In this case, press ENTER without a password to enable password-less logins to your nodes.
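      If you are scripting this step, the same key pair can be generated non-interactively. This sketch writes to a demo path (/tmp/demo_key) rather than ~/.ssh/id_rsa so it does not clobber an existing key:

      ```shell
      # Generate an RSA key pair with an empty passphrase, non-interactively.
      rm -f /tmp/demo_key /tmp/demo_key.pub
      ssh-keygen -t rsa -b 2048 -N "" -f /tmp/demo_key
      ```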

      You will see a confirmation that your key pair has been created:


      Your identification has been saved in ~/.ssh/id_rsa.
      Your public key has been saved in ~/.ssh/
      The key fingerprint is:
      SHA256:lCVaexVBIwHo++NlIxccMW5b6QAJa+ZEr9ogAElUFyY root@3b9a273f18b5
      The key's randomart image is:
      +---[RSA 2048]----+
      |++.E ++o=o*o*o   |
      |o  +..=.B = o    |
      |. .* = * o       |
      |   . =.o + *     |
      |  . . o.S + .    |
      |     . +.  .     |
      |    . ... =      |
      |      o= .       |
      |      ...        |
      +----[SHA256]-----+

      Get your public key by running the following command, which will display it in your terminal:

      Add this key to your DigitalOcean account by following these directions.

      Next, install Terraform:

      • sudo apt-get update
      • sudo apt-get install unzip
      • wget
      • unzip
      • sudo mv terraform /usr/bin/.
      • terraform version

      You will see output confirming your Terraform installation:


      Terraform v0.11.7

      Next, run the following commands to install kubectl, a CLI tool that will communicate with your Kubernetes cluster, and to create a ~/.kube directory in your user's home directory:

      • sudo apt-get install apt-transport-https
      • curl -s | sudo apt-key add -
      • sudo touch /etc/apt/sources.list.d/kubernetes.list
      • echo "deb kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      • sudo apt-get update
      • sudo apt-get install kubectl
      • mkdir -p ~/.kube

      Creating the ~/.kube directory will enable you to copy the configuration file to this location. You’ll do that once you run the Kubernetes setup script later in this section. By default, the kubectl CLI looks for the configuration file in the ~/.kube directory to access the cluster.

      Next, clone the sample project repository for this tutorial, which contains the Terraform scripts for setting up the infrastructure:

      • git clone

      Go to the Terraform script directory:

      • cd k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

      Get a fingerprint of your SSH public key:

      • ssh-keygen -E md5 -lf ~/.ssh/ | awk '{print $2}'

      You will see output like the following, with the highlighted portion representing your key:



      Keep in mind that your key will differ from what's shown here.
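      The awk '{print $2}' step simply isolates the second whitespace-separated field, which is the fingerprint. You can see the effect on a sample line of ssh-keygen -l output (the hostname here is illustrative):

      ```shell
      # Print only the fingerprint field from a sample ssh-keygen -l output line.
      echo "2048 dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e user@host (RSA)" | awk '{print $2}'
      # prints: dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e
      ```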

      Save the fingerprint to an environmental variable so Terraform can use it:

      • export FINGERPRINT=dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

      Next, export your DO personal access token:

      • export TOKEN=your-do-access-token

      Now take a look at the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ project directory:

      Output files

      This folder contains the necessary scripts and configuration files for deploying your Kubernetes cluster with Terraform.

      Execute the script to trigger the Kubernetes cluster setup:

      When the script execution is complete, kubectl will be configured to use the Kubernetes cluster you've created.

      List the cluster nodes using kubectl get nodes:


      NAME                STATUS   ROLES    AGE   VERSION
      k8s-master-node     Ready    master   2m    v1.10.0
      k8s-worker-node-1   Ready    <none>   1m    v1.10.0
      k8s-worker-node-2   Ready    <none>   57s   v1.10.0

      You now have one master and two worker nodes in the Ready state.

      With a Kubernetes cluster set up, you can now explore another option for building container images: Kaniko from Google.

      Step 3 — Building Container Images with Kaniko

      Earlier in this tutorial, you built container images with Dockerfiles and Buildah. But what if you could build container images directly on Kubernetes? There are ways to run the docker image build command inside of Kubernetes, but this isn't native Kubernetes tooling. You would have to depend on the Docker daemon to build images, and it would need to run on one of the Pods in the cluster.

      A tool called Kaniko allows you to build container images with a Dockerfile on an existing Kubernetes cluster. In this step, you will build a container image with a Dockerfile using Kaniko. You will then push this image to Docker Hub.

      In order to push your image to Docker Hub, you will need to pass your Docker Hub credentials to Kaniko. In the previous step, you logged into Docker Hub and created a ~/.docker/config.json file with your login credentials. Let's use this configuration file to create a Kubernetes ConfigMap object to store the credentials inside the Kubernetes cluster. The ConfigMap object is used to store configuration parameters, decoupling them from your application.
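      For reference, the object that a kubectl create configmap --from-file command of this kind produces looks roughly like this sketch; the config.json value shown is a placeholder standing in for your real credentials file:

      ```yaml
      # Sketch of the resulting ConfigMap object; the data value is illustrative.
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: docker-config
      data:
        config.json: |
          {"auths": {"https://index.docker.io/v1/": {"auth": "base64-encoded-credentials"}}}
      ```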

      To create a ConfigMap called docker-config using the ~/.docker/config.json file, run the following command:

      • sudo kubectl create configmap docker-config --from-file=$HOME/.docker/config.json

      Next, you can create a Pod definition file called pod-kaniko.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory (though it can go anywhere).

      First, make sure that you are in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

      • cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

      Create the pod-kaniko.yml file:

      Add the following content to the file to specify what will happen when you deploy your Pod. Be sure to replace your-dockerhub-username in the Pod's args field with your own Docker Hub username:


      apiVersion: v1
      kind: Pod
      metadata:
        name: kaniko
      spec:
        containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args: ["--dockerfile=./Dockerfile",
                 "--context=/tmp/rsvpapp/",
                 "--destination=docker.io/your-dockerhub-username/rsvpapp:kaniko",
                 "--force" ]
          volumeMounts:
          - name: docker-config
            mountPath: /root/.docker/
          - name: demo
            mountPath: /tmp/rsvpapp
        initContainers:
        - image: python
          name: demo
          command: ["/bin/sh"]
          args: ["-c", "git clone /tmp/rsvpapp"]
          volumeMounts:
          - name: demo
            mountPath: /tmp/rsvpapp
        restartPolicy: Never
        volumes:
        - name: docker-config
          configMap:
            name: docker-config
        - name: demo
          emptyDir: {}

      This configuration file describes what will happen when your Pod is deployed. First, the Init container will clone the Git repository with the Dockerfile into a shared volume called demo. Init containers run before application containers and can be used to run utilities or other tasks that are not desirable to run from your application containers. Your application container, kaniko, will then build the image using the Dockerfile and push the resulting image to Docker Hub, using the credentials you passed to the ConfigMap volume docker-config.

      To deploy the kaniko pod, run the following command:

      • kubectl apply -f pod-kaniko.yml

      You will see the following confirmation:


      pod/kaniko created

      Get the list of pods:

      You will see the following list:


      NAME     READY   STATUS     RESTARTS   AGE
      kaniko   0/1     Init:0/1   0          47s

      Wait a few seconds, and then run kubectl get pods again for a status update:

      You will see the following:


      NAME     READY   STATUS    RESTARTS   AGE
      kaniko   1/1     Running   0          1m

      Finally, run kubectl get pods once more for a final status update:


      NAME     READY   STATUS      RESTARTS   AGE
      kaniko   0/1     Completed   0          2m

      This sequence of output tells you that the Init container ran, cloning the GitHub repository inside of the demo volume. After that, the Kaniko build process ran and eventually finished.

      Check the logs of the pod:

      You will see the following output:


      time="2018-08-02T05:01:24Z" level=info msg="appending to multi args"
      time="2018-08-02T05:01:24Z" level=info msg="Downloading base image nkhare/python:alpine"
      . . .
      time="2018-08-02T05:01:46Z" level=info msg="Taking snapshot of full filesystem..."
      time="2018-08-02T05:01:48Z" level=info msg="cmd: CMD"
      time="2018-08-02T05:01:48Z" level=info msg="Replacing CMD in config with [/bin/sh -c python]"
      time="2018-08-02T05:01:48Z" level=info msg="Taking snapshot of full filesystem..."
      time="2018-08-02T05:01:49Z" level=info msg="No files were changed, appending empty layer to config."
      2018/08/02 05:01:51 mounted blob: sha256:bc4d09b6c77b25d6d3891095ef3b0f87fbe90621bff2a333f9b7f242299e0cfd
      2018/08/02 05:01:51 mounted blob: sha256:809f49334738c14d17682456fd3629207124c4fad3c28f04618cc154d22e845b
      2018/08/02 05:01:51 mounted blob: sha256:c0cb142e43453ebb1f82b905aa472e6e66017efd43872135bc5372e4fac04031
      2018/08/02 05:01:51 mounted blob: sha256:606abda6711f8f4b91bbb139f8f0da67866c33378a6dcac958b2ddc54f0befd2
      2018/08/02 05:01:52 pushed blob sha256:16d1686835faa5f81d67c0e87eb76eab316e1e9cd85167b292b9fa9434ad56bf
      2018/08/02 05:01:53 pushed blob sha256:358d117a9400cee075514a286575d7d6ed86d118621e8b446cbb39cc5a07303b
      2018/08/02 05:01:55 pushed blob sha256:5d171e492a9b691a49820bebfc25b29e53f5972ff7f14637975de9b385145e04
      2018/08/02 05:01:56 digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988 size: 1243

      From the logs, you can see that the kaniko container built the image from the Dockerfile and pushed it to your Docker Hub account.

      You can now pull the Docker image. Be sure again to replace your-dockerhub-username with your Docker Hub username:

      • docker pull your-dockerhub-username/rsvpapp:kaniko

      You will see a confirmation of the pull:


      kaniko: Pulling from your-dockerhub-username/rsvpapp
      c0cb142e4345: Pull complete
      bc4d09b6c77b: Pull complete
      606abda6711f: Pull complete
      809f49334738: Pull complete
      358d117a9400: Pull complete
      5d171e492a9b: Pull complete
      Digest: sha256:831b214cdb7f8231e55afbba40914402b6c915ef4a0a2b6cbfe9efb223522988
      Status: Downloaded newer image for your-dockerhub-username/rsvpapp:kaniko

      You have now successfully built a Kubernetes cluster and created new images from within the cluster. Let's move on to discussing Deployments and Services.

      Step 4 — Creating Kubernetes Deployments and Services

      Kubernetes Deployments allow you to run your applications. Deployments specify the desired state for your Pods, ensuring consistency across your rollouts. In this step, you will create a file called deployment.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory to define an Nginx Deployment.

      First, open the file:

      Add the following configuration to the file to define your Nginx Deployment:


      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
        labels:
          app: nginx
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80

      This file defines a Deployment named nginx-deployment that creates three pods, each running an nginx container on port 80.

      To deploy the Deployment, run the following command:

      • kubectl apply -f deployment.yml

      You will see a confirmation that the Deployment was created:


      deployment.apps/nginx-deployment created

      List your Deployments:


      NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   3         3         3            3           29s

      You can see that the nginx-deployment Deployment has been created and that the desired and current Pod counts are the same: 3.

      To list the Pods that the Deployment created, run the following command:


      NAME                                READY   STATUS      RESTARTS   AGE
      kaniko                              0/1     Completed   0          9m
      nginx-deployment-75675f5897-nhwsp   1/1     Running     0          1m
      nginx-deployment-75675f5897-pxpl9   1/1     Running     0          1m
      nginx-deployment-75675f5897-xvf4f   1/1     Running     0          1m

      You can see from this output that the desired number of Pods are running.

      To expose an application deployment internally and externally, you will need to create a Kubernetes object called a Service. Each Service specifies a ServiceType, which defines how the service is exposed. In this example, we will use a NodePort ServiceType, which exposes the Service on a static port on each node.

      To do this, create a file, service.yml, in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

      Add the following content to define your Service:


      kind: Service
      apiVersion: v1
      metadata:
        name: nginx-service
      spec:
        selector:
          app: nginx
        type: NodePort
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30111

      These settings define the Service, nginx-service, and specify that it will target port 80 on your Pod. nodePort defines the port where the application will accept external traffic.

      To deploy the Service run the following command:

      • kubectl apply -f service.yml

      You will see a confirmation:


      service/nginx-service created

      List the Services:

      You will see the following list:


      NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
      kubernetes      ClusterIP                <none>        443/TCP        5h
      nginx-service   NodePort                 <none>        80:30111/TCP   7s

      Your Service, nginx-service, is exposed on port 30111 and you can now access it on any of the node’s public IPs. For example, navigating to http://node_1_ip:30111 or http://node_2_ip:30111 should take you to Nginx's standard welcome page.

      Once you have tested the Deployment, you can clean up both the Deployment and Service:

      • kubectl delete deployment nginx-deployment
      • kubectl delete service nginx-service

      These commands will delete the Deployment and Service you have created.

      Now that you have worked with Deployments and Services, let's move on to creating Custom Resources.

      Step 5 — Creating Custom Resources in Kubernetes

      Kubernetes offers a limited but production-ready set of features out of the box. You can extend its offerings, however, using its Custom Resources feature. In Kubernetes, a resource is an endpoint in the Kubernetes API that stores a collection of API objects. A Pod resource contains a collection of Pod objects, for instance. With Custom Resources, you can add custom offerings for networking, storage, and more. These additions can be created or removed at any point.

      In addition to creating custom objects, you can also employ sub-controllers of the Kubernetes Controller component in the control plane to make sure that the current state of your objects is equal to the desired state. The Kubernetes Controller has sub-controllers for specified objects. For example, ReplicaSet is a sub-controller that makes sure the desired Pod count remains consistent. When you combine a Custom Resource with a Controller, you get a true declarative API that allows you to specify the desired state of your resources.

      In this step, you will create a Custom Resource and related objects.

      To create a Custom Resource, first make a file called crd.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

      Add the following Custom Resource Definition (CRD):


      kind: CustomResourceDefinition
      spec:
        version: v1
        scope: Namespaced
        names:
          plural: webinars
          singular: webinar
          kind: Webinar
          shortNames:
          - wb

      To deploy the CRD defined in crd.yml, run the following command:

      • kubectl create -f crd.yml

      You will see a confirmation that the resource has been created:

      Output created

      The crd.yml file has created a new RESTful resource path: /apis/*/webinars. You can now refer to your objects using webinars, webinar, Webinar, and wb, as you listed them in the names section of the CustomResourceDefinition. You can check the RESTful resource with the following command:

      • kubectl proxy & curl

      Note: If you followed the initial server setup guide in the prerequisites, then you will need to allow traffic to port 8001 in order for this test to work. Enable traffic to this port with the following command:

      You will see the following output:


      HTTP/1.1 200 OK
      Content-Length: 238
      Content-Type: application/json
      Date: Fri, 03 Aug 2018 06:10:12 GMT

      {
        "apiVersion": "v1",
        "kind": "APIGroup",
        "name": "",
        "preferredVersion": {
          "groupVersion": "",
          "version": "v1"
        },
        "serverAddressByClientCIDRs": null,
        "versions": [
          {
            "groupVersion": "",
            "version": "v1"
          }
        ]
      }

      Next, create an object that uses the new Custom Resource by opening a file called webinar.yml:

      Add the following content to create the object:


      apiVersion: ""
      kind: Webinar
      metadata:
        name: webinar1
      spec:
        name: webinar
        image: nginx

      Run the following command to push these changes to the cluster:

      • kubectl apply -f webinar.yml

      You will see the following output:

      Output created

      You can now manage your webinar objects using kubectl. For example:


      NAME       CREATED AT
      webinar1   21s

      You now have an object called webinar1. If there had been a Controller, it would have intercepted the object creation and performed any defined operations.

      Deleting a Custom Resource Definition

      To delete all of the objects for your Custom Resource, use the following command:

      • kubectl delete webinar --all

      You will see:

      Output "webinar1" deleted

      Remove the Custom Resource Definition itself:

      • kubectl delete crd

      You will see a confirmation that it has been deleted:

      Output "" deleted

      After deletion you will not have access to the API endpoint that you tested earlier with the curl command.

      This sequence is an introduction to how you can extend Kubernetes functionalities without modifying your Kubernetes code.

      Step 6 — Deleting the Kubernetes Cluster

      To destroy the Kubernetes cluster itself, you can use the script from the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform folder. Make sure that you are in this directory:

      • cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform

      Run the script:

      By running this script, you'll allow Terraform to communicate with the DigitalOcean API and delete the servers in your cluster.

      Conclusion

      In this tutorial, you used different tools to create container images. With these images, you can create containers in any environment. You also set up a Kubernetes cluster using Terraform, and created Deployment and Service objects to deploy and expose your application. Additionally, you extended Kubernetes' functionality by defining a Custom Resource.

      You now have a solid foundation to build a CI/CD environment on Kubernetes, which we'll explore in future articles.
