
      How to Install Node.js and Create a Local Development Environment on macOS


      Introduction

      Node.js is an open source JavaScript runtime environment for easily building server-side applications. It’s also the runtime that powers many client-side development tools for modern JavaScript frameworks.

      In this tutorial, you’ll set up a Node.js programming environment on your local macOS machine using Homebrew, and you’ll test your environment out by writing a simple Node.js program.

      Prerequisites

      You will need a macOS computer running High Sierra or higher with administrative access and an internet connection.

      Step 1 — Using the macOS Terminal

      You’ll use the command line to install Node.js and run various commands related to developing Node.js applications. The command line is a non-graphical way to interact with your computer. Instead of clicking buttons with your mouse, you’ll type commands as text and receive text-based feedback. The command line, also known as a shell, lets you automate many tasks you do on your computer daily, and is an essential tool for software developers.

      To access the command line interface, you’ll use the Terminal application provided by macOS. Like any other application, you can find it by going into Finder, navigating to the Applications folder, and then into the Utilities folder. From here, double-click the Terminal application to open it up. Alternatively, you can use Spotlight by holding down the COMMAND key and pressing SPACE to find Terminal by typing it out in the box that appears.

      macOS Terminal

      If you’d like to get comfortable using the command line, take a look at An Introduction to the Linux Terminal. The command line interface on macOS is very similar, and the concepts in that tutorial are directly applicable.

      Now that you have the Terminal running, let’s install some prerequisites we’ll need for Node.js.

      Step 2 — Installing Xcode's Command Line Tools

      Xcode is an integrated development environment (IDE) comprising software development tools for macOS. You won't need Xcode to write Node.js programs, but Node.js and some of its components rely on Xcode's Command Line Tools package.

      Execute this command in the Terminal to download and install these components:
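
      • xcode-select --install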

      You'll be prompted to start the installation, and then prompted again to accept a software license. Then the tools will download and install automatically.

      We're now ready to install the package manager Homebrew, which will let us install the latest version of Node.js.

      Step 3 — Installing and Setting Up Homebrew

      While the command line interface on macOS has a lot of the functionality you'd find in Linux and other Unix systems, it does not ship with a good package manager. A package manager is a collection of software tools that work to automate software installations, configurations, and upgrades. They keep the software they install in a central location and can maintain all software packages on the system in formats that are commonly used. Homebrew is a free and open-source software package managing system that simplifies the installation of software on macOS. We'll use Homebrew to install the most recent version of Node.js.

      To install Homebrew, type this command into your Terminal window:

      • /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

      The command uses curl to download the Homebrew installation script from Homebrew's Git repository on GitHub.

      Let’s walk through the flags that are associated with the curl command:

      • The -f or --fail flag tells curl to fail silently on server errors instead of outputting an HTML error document.
      • The -s or --silent flag mutes curl so that it does not show the progress meter, and combined with the -S or --show-error flag it will ensure that curl shows an error message if it fails.
      • The -L or --location flag will tell curl to handle redirects. If the server reports that the requested page has moved to a different location, it'll automatically execute the request again using the new location.

      Once curl downloads the script, it's then executed by the Ruby interpreter that ships with macOS, starting the Homebrew installation process.

      The installation script will explain what it will do and will prompt you to confirm that you want to do it. This lets you know exactly what Homebrew is going to do to your system before you let it proceed. It also ensures you have the prerequisites in place before it continues.

      You'll be prompted to enter your password during the process. However, when you type your password, your keystrokes will not display in the Terminal window. This is a security measure and is something you'll see often when prompted for passwords on the command line. Even though you don't see them, your keystrokes are being recorded by the system, so press the RETURN key once you’ve entered your password.

      Press the letter y for “yes” whenever you are prompted to confirm the installation.

      Now let's verify that Homebrew is set up correctly. Execute this command:
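
      • brew doctor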

      If no updates are required at this time, you'll see this in your Terminal:

      Output

      Your system is ready to brew.

      Otherwise, you may get a warning to run another command such as brew update to ensure that your installation of Homebrew is up to date.

      Now that Homebrew is installed, you can install Node.js.

      Step 4 — Installing Node.js

      With Homebrew installed, you can install a wide range of software and developer tools. We'll use it to install Node.js and its dependencies.

      You can use Homebrew to search for everything you can install with the brew search command, but to provide us with a shorter list, let’s instead search for packages related to Node.js:
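
      • brew search nodejs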

      You'll see a list of packages you can install, like this:

      Output

      ==> Formulae
      node.js                    nodejs

      Both of these packages install Node.js on your system. They both exist just in case you can't remember if you need to use nodejs or node.js.

      Execute this command to install the nodejs package:
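
      • brew install nodejs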

      You'll see output similar to the following in your Terminal. Homebrew will install many dependencies, but will eventually download and install Node.js itself:

      Output

      ==> Installing dependencies for node: icu4c
      ==> Installing node dependency: icu4c
      ==> Installing node
      ==> Downloading https://homebrew.bintray.com/bottles/node-11.0.0.sierra.bottle.tar.gz
      ######################################################################## 100.0%
      ==> Pouring node-11.0.0.sierra.bottle.tar.gz
      ...
      ==> Summary
      🍺  /usr/local/Cellar/node/11.0.0: 3,936 files, 50.1MB

      In addition to Node.js itself, Homebrew installs a few related tools, including npm, which makes it easy to install and update Node.js libraries and packages you might use in your own projects.

      To check the version of Node.js that you installed, type
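
      • node -v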

      This will output the version of Node.js currently installed, which by default is the most recent stable version available.

      Output

      v11.0.0

      Check the version of npm with
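
      • npm -v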

      You'll see the version displayed:

      Output

      6.4.1

      You'll use npm to install additional components, libraries, and frameworks.

      To update your version of Node.js, you can first update Homebrew to get the latest list of packages, and then upgrade Node.js itself:

      • brew update
      • brew upgrade nodejs

      Now that Node.js is installed, let's write a program to ensure everything works.

      Step 5 — Creating a Simple Program

      Let's create a simple "Hello, World" program. This will make sure that our environment is working and gets you comfortable creating and running a Node.js program.

      To do this, create a new file called hello.js using nano:
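
      • nano hello.js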

      Type the following code into the file:

      hello.js

      let message = "Hello, World!";
      console.log(message);
      

      Exit the editor by pressing CTRL+X. When prompted to save the file, press y, then press RETURN to confirm the file name. You'll be returned to your prompt.

      Now run the program with the following command:
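
      • node hello.js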

      The program executes and displays its output to the screen:

      Output

      Hello, World!

      This simple program proves that you have a working development environment. You can use this environment to continue exploring Node.js and build larger, more interesting projects.

      Conclusion

      You've successfully installed Node.js and npm, and tested your setup by creating and running a simple program. You can now use this environment to develop client-side or server-side apps.




      The Importance of Data Backups

      Backups are important, even at the filesystem level!

      I’ve been a Linux user for around 13 years now and am amazed at how far the overall experience has come. Thirteen years ago you were usually running Slackware 3, Red Hat 5.x, or Mandrake. Being 14, I was one of the “newbies” stuck on Mandrake because my 56k modem was what is known as a softmodem – a modem that lacks quite a bit of hardware and relies on your computer’s resources to actually function. Back then, making these work in Linux was a complete nightmare, and Mandrake was the only distribution that supported softmodems out of the box.

      Back in those days you didn’t have the package management tools you have today, be it yum, aptitude, portage, or any of the other package management utilities. You had rpmfind.net to find your RPMs, praying you’d found the right ones for your specific operating system while playing the dependency-tracking game. Slackware was strictly source installs, and the truly Linux-proficient would pride themselves on how small a Slackware install footprint they could get while still having a running desktop.

      Growing frustrated at not understanding the build process and at being constantly referred to as a newbie who uses “N00bdrake”, I forced myself into the depths of Linux, and after a year or so had a working Slackware box with XFree86 running Enlightenment, with sound and support for my modem. I learned an extensive amount about how Linux works: compiling your own kernel, searching mailing lists to find patches for bugs, applying patches to software, and walking through your hardware to build proper .conf files so daemons would function correctly. It seems that this kind of knowledge is being lost among Linux users these days, as they are no longer forced to drop down to the lowest level of Linux to make their systems function.

      A good example of this is a hard drive with an ext3 filesystem I dealt with recently that was showing no data on it. If you used the command df, which shows partition disk usage, the data was shown as taking up space, but you couldn’t see it. A lot of people conferred and figured that the data was lost for good, while I sat there saying NOPE, waiting for someone to give the correct answer. Unable to get one, I explained that this happened because a special block known as the superblock had become corrupted and the journal on the filesystem had lost all of its information. Issuing a fsck would not fix the issue; the filesystem would routinely check out as “ok” because fsck was using the bad primary superblock.

      There are actually multiple superblocks on ext2, ext3, and ext4 partitions. These backups exist specifically to correct such an issue should your main superblock become corrupt. Having toasted Linux countless times playing around with things such as software RAID, and having hard-locked machines with poorly configured kernels, I have probably spent more time than I should have reading about how the ext filesystem works. You might have overlooked it when creating a filesystem in Linux, but you will see output similar to this when creating an ext filesystem:

      Superblock backups stored on blocks:

      32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

      These are very important blocks, integral to maintaining the data on your filesystem. Lost these numbers? You can still recover them:

      1. First you need to know what block size you used on your filesystem. The default is 1k, so unless you passed a specific block-size option to mkfs.ext3, your block size is 1024.
      2. Now issue the command “mke2fs -n -b block-size /dev/sdc1”. This assumes that sdc1 is the corrupt partition showing no data. Because of the -n flag, the command will not actually create a new filesystem; it will just print those precious backup superblock locations.
      3. Now take any of those superblocks and make sure that your partition is unmounted. Issue “fsck -fy -c -b 163840 /dev/sdc1” to hopefully fix your partition (the full sequence is sketched just below). Once it completes, mount the drive and more than likely all your data will be in the folder lost+found. It may have lost the original folder names, but at least your data is there, and with a little bit of digging you can figure out which folder is which.
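
      Putting those steps together, the whole recovery looks roughly like this. It is only a sketch: it assumes a 1k block size, that /dev/sdc1 is the damaged partition, and that /mnt is a free mount point, so substitute your own device and one of the superblock numbers the -n run prints.

      umount /dev/sdc1                    # make sure the partition is not mounted while you repair it
      mke2fs -n -b 1024 /dev/sdc1         # -n only lists the backup superblocks; it does not write anything
      fsck -fy -c -b 163840 /dev/sdc1     # repair the filesystem using one of the backup superblocks
      mount /dev/sdc1 /mnt                # recovered files usually end up under /mnt/lost+found
      ls /mnt/lost+found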

      Now take a breather, relax, and be happy that your data is not completely gone. I suggest in the future pulling up a source-based distribution like Slackware and trying to set up an entire system without using any package management. See how it goes; prepare to read a lot of documentation, but in the end you will be thankful, as you will learn more about Linux this way than by any other method.

      The Value of Virtualization

      You’re probably familiar with many arguments for virtualizing your systems. Virtualization can make your systems more secure by reducing the number of applications and users on a single machine. It can make it easier to scale, utilizes your resources more efficiently, reduces costs, is faster to set up, and shields you from hardware failures, providing you with better uptime. VMs have another advantage over conventional servers, though, one that is less commonly listed but still pretty important: they’re automatically instrumented.

      Let me explain what I mean. Let’s say you’re having some problems running your website on a traditional server. Your traffic has gone up, and now you are having outages during peak times. You speak to your tech support staff, and the admins agree that there’s a problem — but they’re not exactly sure what the cause is.

      Usually at this point, the admins will start ‘keeping an eye’ on the system in question. This often means staying logged in and running top or vmstat. If the problem recurs, hopefully the admins will catch it, and the output from those tools will give them hints as to what went wrong. If the admins are not around when the problem happens, though, they might not get the data they need, and the process will have to start all over again.

      Another solution is to start monitoring the server using a monitoring program like Cacti or Ganglia. This is a little more reliable than manual monitoring because the software won’t get bored or distracted and miss the fault event. But monitoring software has its own problems. It is often a hassle to set up. It requires punching holes in your firewall, making your server less secure. It takes up resources on an already precarious machine, possibly making downtime more likely. And if the problem affects the network, the remote monitoring machine might not be able to communicate with the troubled server to get any useful data at the exact time when the data is needed.

      This is where virtualization comes to the rescue. The hypervisor — the software which makes virtualization possible — already has a lot of statistics about the virtual machine. Our Cascade cloud platform automatically gathers such statistics for every VM, storing the data in approximately five-minute increments in our own internal logging database. The data gathering happens in the context of the node, not the VM, meaning that the VM will not see a performance impact from the monitoring. Also, the fact that every VM is already monitored means that if a fault occurs, you won’t have to wait for a second fault to figure out what went wrong. The data to analyze the original fault might already be there.

      Let me give you a concrete example. Just today, we had a problem with a customer’s VM; his PHP site went offline and the VM required a reboot to bring the site back online. The admins had a hunch that the VM was overloaded and couldn’t handle the traffic, but they didn’t know what resource was running low.

      Here are some graphs generated by our internal system, Manage, which allowed our admins to get to the bottom of the problem. First, let’s start with a graph of network bandwidth for the VM:

      network

      This graph illustrates the problem precisely. Around 8:50 PM, the VM stopped serving requests, or the amount of data served dropped precipitously. When admins logged in, they saw this in the kernel logs:

      Oct 31 20:58:03 vm1 kernel: INFO: task php:41632 blocked for more than 120 seconds.

      But why did this happen? Maybe the CPU usage for the VM was too high? Well, we can answer this question using our CPU graphs, which display CPU usage data for both the VM as a whole on its node and for each virtual CPU inside the VM:

      cpu

       

      Sure enough, the VM is pretty busy. It is making good use of all of its virtual CPUs, and its overall load on its node is often over 100%. However, the VM has 4 VCPUs, which means that if CPU were the limiting resource, the load could be as high as 400%. It looks like each VCPU is only about 25% utilized. Also, the fault occurred at 8:50 PM, and we don’t see a CPU spike around that time. In fact, CPU usage for some of the virtual CPUs appears to drop around 8:50 — VCPU 0, at least, had nothing much to do during the outage.

      So what could be the problem? For an answer, let’s turn to yet another batch of data we can get from the hypervisor: disk statistics.

      disk

       

      The VM is not a particularly major user of disk IO — mostly steady writes consistent with saving log activity, with a few read spikes which might indicate someone searching through the file system or perhaps a scheduled backup. But here’s something interesting: right around the time the VM experienced its failure, swap file usage skyrocketed. Now we know exactly why the VM failed: it ran out of memory, and swap was too slow to fulfill the heavy traffic requests the VM demanded.

      Getting data like this on a regular, traditional server would have required complex monitoring software, a steady stream of network traffic, a whole other monitoring server, and skilled labor to set the whole thing up. On a Cascade VM, you get this kind of data for free, automatically. You won’t see these exact graphs in LEAP, our customer portal, as these are generated for our internal interfaces only. But the data behind these graphs is also available to LEAP, which will generate much prettier, more usable visualizations that let you easily drill down and explore what’s happening with your virtual machine.

      The conclusion to this story is that we increased the amount of memory available to the VM from 4 GB to 8 GB. This required just a quick reboot of the VM, with none of the downtime or stress of pulling a physical server out of the rack and opening it up. It solved the customer’s problems, with better performance and no further outages. So here is yet another way virtualization with Cascade and LEAP makes your life easier.