
      Bare Metal Cloud: Key Advantages and Critical Use Cases to Gain a Competitive Edge


      Cloud environments today are part of the IT infrastructure of most enterprises due to the benefits they provide, including flexibility, scalability, ease of use, and pay-as-you-go consumption and billing.

      But not all cloud infrastructure is the same.

      In this multicloud world, finding the right fit between a workload and a cloud provider becomes a new challenge. Application components, such as web-based content serving platforms, real-time analytics engines, machine learning clusters and Real-Time Bidding (RTB) engines integrating dozens of partners, all require different features and may call for different providers. Enterprises are looking at application components and IT initiatives on a project-by-project basis, seeking the right provider for each use case. Easy cloud-to-cloud interconnectivity allows scalable applications to be distributed over infrastructure from multiple providers.

      Bare Metal cloud is a deployment model that provides unique and valuable advantages, especially compared to the popular virtualized/VM cloud models that are common with hyperscale providers. Let’s explore the benefits of the bare metal cloud model and highlight some use cases where it offers a distinctive edge.

      Advantages of the Bare Metal Cloud Model

      Both bare metal cloud and the VM-based hyperscale cloud model provide flexibility and scalability. They both allow for DevOps-driven provisioning and the infrastructure-as-code approach. They both help with demand-based capacity management and a pay-as-you-go budget allocation.

      But bare metal cloud has unique advantages:

      Customizability
      Whether you need NVMe storage for high IOPS, a specific GPU model, or a unique RAM-to-CPU ratio or RAID level, bare metal is highly customizable. Your physical server can be built to the unique specifications required by your application.

      Dedicated Resources
      Bare Metal cloud enables high-performance computing, as no virtualization is used and there is no hypervisor overhead. All the compute cycles and resources are dedicated to the application.

      Tuned for Performance
      Bare metal hardware can be tuned for performance and features, be it disabling hyperthreading in the CPU or changing BIOS and IPMI configurations. In the 2018 report, Price-Performance Analysis: Bare Metal vs. Cloud Hosting, INAP Bare Metal was tested against IBM and Amazon AWS cloud offerings. In Hadoop cluster performance testing, INAP’s cluster completed the workload 6% faster than IBM Cloud’s bare metal cluster, 6% faster than AWS’s EC2 offering, and 3% faster than AWS’s EMR offering.

      Additional Security on Dedicated Machine Instances
      With a bare metal server, security measures, like full end-to-end encryption or Intel’s Trusted Execution and Open Attestation, can be easily integrated.

      Full Hardware Control
      Bare metal servers allow full control of the hardware environment. This is especially important when integrating SAN storage, specific firewalls and other unique appliances required by your applications.

      Cost Predictability
      Bare metal server instances are generally bundled with bandwidth. This eliminates the need to worry about bandwidth cost overages, which tend to cause significant variations in cloud consumption costs and are a major concern for many organizations. For example, the Price-Performance Analysis report concluded that INAP’s bare metal machine configuration was 32 percent less expensive than the same configuration running on IBM Cloud.
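      To illustrate the budgeting difference, here is a minimal sketch comparing a metered, per-TB egress bill against a flat rate with bundled bandwidth. All prices and traffic figures are illustrative assumptions, not any provider's actual rates.

```python
# Hypothetical cost sketch: flat bare-metal bundle vs. metered egress billing.
# All numbers below are illustrative assumptions, not real provider pricing.

def metered_monthly_cost(base_instance, egress_tb, price_per_tb):
    """VM-style bill: instance cost plus per-TB outbound bandwidth."""
    return base_instance + egress_tb * price_per_tb

def bundled_monthly_cost(flat_rate, egress_tb, included_tb, overage_per_tb=0.0):
    """Bare-metal-style bill: flat rate with bandwidth bundled in."""
    overage = max(0.0, egress_tb - included_tb) * overage_per_tb
    return flat_rate + overage

# A traffic spike doubles egress from 10 TB to 20 TB in one month.
quiet = metered_monthly_cost(500.0, 10, 90.0)   # 500 + 900  = 1400.0
spike = metered_monthly_cost(500.0, 20, 90.0)   # 500 + 1800 = 2300.0
flat_quiet = bundled_monthly_cost(1200.0, 10, 50)   # flat 1200.0
flat_spike = bundled_monthly_cost(1200.0, 20, 50)   # still flat 1200.0

print(quiet, spike, flat_quiet, flat_spike)
```

      Under these assumed rates, the metered bill swings with traffic while the bundled bill stays flat, which is the predictability argument in a nutshell.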

      Efficient Compute Resources
      Bare metal cloud offers more cost-effective compute resources when compared to the VM-based model for similar compute capacity in terms of cores, memory and storage.

      Bare Metal Cloud Workload Application Use Cases

      Given these benefits, a bare metal cloud provides a competitive advantage for many applications. Feedback from customers indicates it is critical for some use cases. Here is a long—but not exhaustive—list of use cases:

      • High-performance computing, where any overhead should be avoided, and hardware components are selected and tuned for maximum performance: e.g., computing clusters for silicon chip design.
      • AdTech and Fintech applications, especially where Real-Time Bidding (RTB) is involved and speedy access to user profiles and assets data is required.
      • Real-time analytics/recommendation engine clusters where specific hardware and storage is needed to support the real-time nature of the workloads.
      • Gaming applications where performance is needed either for raw compute or 3-D rendering. Hardware is commonly tuned for such applications.
      • Workloads where database access time is essential. In such cases, special hardware components are used, or high performance NVMe-based SAN arrays are integrated.
      • Security-oriented applications that leverage unique Intel/AMD CPU features: end-to-end encryption including memory, trust execution environments, etc.
      • Applications with high outbound bandwidth usage, especially collaboration applications based on real-time communications and WebRTC platforms.
      • Cases where a dedicated compute environment is needed either by policy, due to business requirements or for compliance.
      • Most applications where compute resource usage is steady and continuous, the application is not dependent on PaaS services, the hardware footprint size is considerable, and cost is a limiting concern.

      Is Bare Metal Your Best Fit?

      Bare Metal cloud provides many benefits when compared to virtualization-based cloud offerings.

      Bare metal allows for high-performance computing with highly customizable hardware resources that can be tuned for maximum performance. It offers a dedicated compute environment with more control over resources and more security in a cost-effective way.

      Bare metal cloud can be an attractive solution to consider for your next workload or application and it is a choice validated and proven by some of the largest enterprises with mission-critical applications.


      Layachi Khodja






      Add CAA Records in the Linode Cloud Manager


      Updated by Linode

      Written by Linode

      Certification Authority Authorization (CAA) is a type of DNS record that allows the owner of a domain to specify which certificate authority (or authorities) are allowed to issue SSL/TLS certificates for their domain(s). This quick answer shows you how to set up CAA records on your Linode.

      Add a Single CAA Record

      1. Log in to the Linode Cloud Manager

      2. Select the Domains link in the sidebar.

      3. Select the domain you want to add the record to, or add a domain if you don’t already have one listed.

      4. Under the CAA Record section, select Add a CAA record. A form with the following fields will appear:

        Name: The subdomain you want the CAA record to cover. To apply it to your entire website (for example: example.com), leave this field blank. To limit the record’s application to a subdomain on your site, (for example: subdomain.example.com), enter the subdomain’s name into the form field (for example: subdomain).

        Tag:

        • issue – Authorize the certificate authority entered in the Value field further below to issue TLS certificates for your site.

        • issuewild – Same as issue, but applies only to requests for wildcard certificates (for example: *.example.com).

        • iodef – URL where your CA can report security policy violations to you concerning certificate issue requests.

        Value: If the issue or issuewild tag was selected above, then the Value field takes the domain of your certificate issuer (for example: letsencrypt.org). If the iodef tag was selected, the Value field takes a contact or submission URL (http or mailto).

        TTL (Time to Live): Time in seconds that your new CAA record will be cached by Linode’s name servers before being refreshed. The Default selection’s TTL is 300 seconds, which is fine for most cases. You can use the dig command to view the remaining time your DNS records will be cached until refreshed. Replace example.com with your site’s domain or subdomain in the command below:

        root@debian:~# dig +nocmd +noall +answer example.com
        example.com.     167 IN  A   203.0.113.1
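        The fields in each answer line are the record name, the remaining TTL, the class, the record type, and the value. As a small illustration (not part of the Linode guide itself), the remaining TTL can be pulled out of such a line with a few lines of Python:

```python
# Extract the remaining TTL from a dig-style answer line, e.g.:
#   "example.com.     167 IN  A   203.0.113.1"
# On a caching resolver this number counts down toward 0, after which
# the record is fetched again from the authoritative name servers.

def remaining_ttl(answer_line):
    # Fields: name, TTL, class, record type, value
    name, ttl, rr_class, rr_type, value = answer_line.split()
    return int(ttl)

print(remaining_ttl("example.com.     167 IN  A   203.0.113.1"))  # 167
```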
        
      5. Click the Save button when finished. The CAA record should be fully propagated within the TTL duration.

      Add Multiple CAA Records

      Multiple CAA records must be added individually. If your site example.com was issued a TLS certificate by Let’s Encrypt, but your subdomain store.example.com uses a Symantec certificate, you would need two different CAA records. A reporting URL for the iodef tag would also need its own record. Those three would look something like this:

      Multiple CAA records
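      In standard zone-file notation, the three records described above might look something like this (the CA domains and contact address are examples; substitute your own values):

```
example.com.        300 IN CAA 0 issue     "letsencrypt.org"
store.example.com.  300 IN CAA 0 issuewild "symantec.com"
example.com.        300 IN CAA 0 iodef     "mailto:security@example.com"
```

      The 0 before each tag is the CAA flags field; the Linode Cloud Manager sets it for you when you fill out the form.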


      This guide is published under a CC BY-ND 4.0 license.




      How To Set Up the code-server Cloud IDE Platform on DigitalOcean Kubernetes


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      With developer tools moving to the cloud, creation and adoption of cloud IDE (Integrated Development Environment) platforms is growing. Cloud IDEs allow for real-time collaboration between developer teams to work in a unified development environment that minimizes incompatibilities and enhances productivity. Accessible through web browsers, cloud IDEs are available from every type of modern device. Another advantage of a cloud IDE is the possibility to leverage the power of a cluster, which can greatly exceed the processing power of a single development computer.

      code-server is Microsoft Visual Studio Code running on a remote server and accessible directly from your browser. Visual Studio Code is a modern code editor with integrated Git support, a code debugger, smart autocompletion, and customizable and extensible features. This means that you can use various devices, running different operating systems, and always have a consistent development environment on hand.

      In this tutorial, you will set up the code-server cloud IDE platform on your DigitalOcean Kubernetes cluster and expose it at your domain, secured with Let’s Encrypt certificates. In the end, you’ll have Microsoft Visual Studio Code running on your Kubernetes cluster, available via HTTPS and protected by a password.

      Prerequisites

      • A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster. To create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete Steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

      • The Nginx Ingress Controller and Cert-Manager installed on your cluster using Helm in order to expose code-server using Ingress Resources. To do this, follow How to Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm.

      • A fully registered domain name to host code-server, pointed at the Load Balancer used by the Nginx Ingress. This tutorial will use code-server.your_domain throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice. This domain name must differ from the one used in the How To Set Up an Nginx Ingress on DigitalOcean Kubernetes prerequisite tutorial.

      Step 1 — Installing and Exposing code-server

      In this section, you’ll install code-server to your DigitalOcean Kubernetes cluster and expose it at your domain, using the Nginx Ingress controller. You will also set up a password for admittance.

      You’ll store the deployment configuration on your local machine, in a file named code-server.yaml. Create the file and open it in your text editor, then add the following lines:

      code-server.yaml

      apiVersion: v1
      kind: Namespace
      metadata:
        name: code-server
      ---
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: code-server
        namespace: code-server
        annotations:
          kubernetes.io/ingress.class: nginx
      spec:
        rules:
        - host: code-server.your_domain
          http:
            paths:
            - backend:
                serviceName: code-server
                servicePort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
       name: code-server
       namespace: code-server
      spec:
       ports:
       - port: 80
         targetPort: 8443
       selector:
         app: code-server
      ---
      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        labels:
          app: code-server
        name: code-server
        namespace: code-server
      spec:
        selector:
          matchLabels:
            app: code-server
        replicas: 1
        template:
          metadata:
            labels:
              app: code-server
          spec:
            containers:
            - image: codercom/code-server
              imagePullPolicy: Always
              name: code-server
              args: ["--allow-http"]
              ports:
              - containerPort: 8443
              env:
              - name: PASSWORD
                value: "your_password"
      

      This configuration defines a Namespace, an Ingress, a Service, and a Deployment. The Namespace is called code-server and separates the code-server installation from the rest of your cluster. The Deployment consists of one replica of the codercom/code-server Docker image, and an environment variable named PASSWORD that specifies the password for access.

      The code-server Service internally exposes the pod (created as a part of the Deployment) at port 80. The Ingress defined in the file specifies that the Ingress Controller is nginx, and that the code-server.your_domain domain will be served from the Service.

      Remember to replace your_password with your desired password, and code-server.your_domain with your desired domain, pointed to the Load Balancer of the Nginx Ingress Controller.

      Then, create the configuration in Kubernetes by running the following command:

      • kubectl create -f code-server.yaml

      You'll see the following output:

      Output

      namespace/code-server created
      ingress.extensions/code-server created
      service/code-server created
      deployment.extensions/code-server created

      You can watch the code-server pod become available by running:

      • kubectl get pods -w -n code-server

      The output will look like:

      Output

      NAME                          READY   STATUS              RESTARTS   AGE
      code-server-f85d9bfc9-j7hq6   0/1     ContainerCreating   0          1m

      As soon as the status becomes Running, code-server has finished installing to your cluster.

      Navigate to your domain in your browser. You'll see the login prompt for code-server.

      code-server login prompt

      Enter the password you set in code-server.yaml and press Enter IDE. You'll enter code-server and immediately see its editor GUI.

      code-server GUI

      You've installed code-server to your Kubernetes cluster and made it available at your domain. You have also verified that it requires you to log in with a password. Now, you'll move on to secure it with free Let's Encrypt certificates using Cert-Manager.

      Step 2 — Securing the code-server Deployment

      In this section, you will secure your code-server installation by applying Let's Encrypt certificates to your Ingress, which Cert-Manager will automatically create. After completing this step, your code-server installation will be accessible via HTTPS.

      Open code-server.yaml for editing in your text editor. Add the cert-manager annotation and the tls section shown in the following configuration, making sure to replace the example domain with your own:

      code-server.yaml

      apiVersion: v1
      kind: Namespace
      metadata:
        name: code-server
      ---
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: code-server
        namespace: code-server
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - code-server.your_domain
          secretName: codeserver-prod
        rules:
        - host: code-server.your_domain
          http:
            paths:
            - backend:
                serviceName: code-server
                servicePort: 80
      ...
      

      First, you specify that the cluster-issuer that this Ingress will use to provision certificates will be letsencrypt-prod, created as a part of the prerequisites. Then, you specify the domains that will be secured under the tls section, as well as your name for the Secret holding them.

      Apply the changes to your Kubernetes cluster by running the following command:

      • kubectl apply -f code-server.yaml

      You'll need to wait a few minutes for Let's Encrypt to provision your certificate. In the meantime, you can track its progress by looking at the output of the following command:

      • kubectl describe certificate codeserver-prod -n code-server

      When it finishes, the end of the output will look similar to this:

      Output

      Events:
        Type    Reason              Age    From          Message
        ----    ------              ----   ----          -------
        Normal  Generated           2m49s  cert-manager  Generated new private key
        Normal  GenerateSelfSigned  2m49s  cert-manager  Generated temporary self signed certificate
        Normal  OrderCreated        2m49s  cert-manager  Created Order resource "codeserver-prod-4279678953"
        Normal  OrderComplete       2m14s  cert-manager  Order "codeserver-prod-4279678953" completed successfully
        Normal  CertIssued          2m14s  cert-manager  Certificate issued successfully

      You can now refresh your domain in your browser. You'll see the padlock to the left of the address bar in your browser signifying that the connection is secure.

      In this step, you have configured the Ingress to secure your code-server deployment. Now, you can review the code-server user interface.

      Step 3 — Exploring the code-server Interface

      In this section, you'll explore some of the features of the code-server interface. Since code-server is Visual Studio Code running in the cloud, it has the same interface as the standalone desktop edition.

      On the left-hand side of the IDE, there is a vertical row of six buttons opening the most commonly used features in a side panel known as the Activity Bar.

      code-server GUI - Sidepanel

      This bar is customizable, so you can rearrange the views or remove them from the bar entirely. By default, the first view opens the Explorer panel, which provides tree-like navigation of the project's structure. You can manage your folders and files here: creating, deleting, moving, and renaming them as necessary. The next view provides access to search and replace functionality.

      Following this, in the default order, is a view of source control systems, like Git. Visual Studio Code also supports other source control providers, and you can find further instructions for source control workflows in the official Visual Studio Code documentation.

      Git dropdown menu with version control actions

      The debugger option on the Activity Bar provides all the common actions for debugging in the panel. Visual Studio Code comes with built-in support for the Node.js runtime debugger and for any language that transpiles to JavaScript. For other languages, you can install extensions that provide the required debugger. You can save debugging configurations in the launch.json file.

      Debugger View with launch.json open

      The final view in the Activity Bar provides a menu to access available extensions on the Marketplace.

      code-server GUI - Tabs

      The central part of the GUI is the editor, which is organized into tabs for your code editing. You can change the editing view to a grid layout or open files side by side.

      Editor Grid View

      After creating a new file through the File menu, an empty file will open in a new tab, and once saved, the file's name will be viewable in the Explorer side panel. To create a folder, right-click the Explorer sidebar and select New Folder. You can expand a folder by clicking its name, and you can move files and folders by dragging and dropping them to a new location in the hierarchy.

      code-server GUI - New Folder

      You can gain access to a terminal by pressing CTRL+SHIFT+`, or by clicking Terminal in the upper menu and selecting New Terminal. The terminal will open in a lower panel, and its working directory will be set to the project's workspace, which contains the files and folders shown in the Explorer side panel.

      You've explored a high-level overview of the code-server interface and reviewed some of the most commonly used features.

      Conclusion

      You now have code-server, a versatile cloud IDE, installed on your DigitalOcean Kubernetes cluster. You can work on your source code and documents with it individually or collaborate with your team. Running a cloud IDE on your cluster also lets you tap the cluster's processing power for testing, downloading, and other compute-intensive tasks. For further information, see the Visual Studio Code documentation on additional features and detailed instructions on other components of code-server.


