

      Reduce Your AWS Public Cloud Spend with DIY Strategies and Managed Services

      “My AWS bill last month was the price of a car.” A CIO of a Fortune 500 company in the Bay Area said this to me about five years ago. I was new to California and it seemed like everyone was driving a BMW or Mercedes, but the bill could have been equivalent to the cost of a Ferrari or Maserati for all I knew. Regardless, I concluded that the bill was high. Since then, I have been on a mission to research and identify how to help customers optimize their public cloud costs.

      Shifting some of your workloads to public cloud platforms such as AWS or Microsoft Azure can seem like common-sense economics, as public cloud empowers your organization to scale resources as needed. In theory, you pay for what you use, thus saving money. Right?

      Not necessarily. Unless you are vigilant and diligent, costs can quickly go awry through overprovisioning, forgetting to turn off unwanted resources, not picking the right combination of instances, racking up data egress fees, and so on. But public cloud cost optimization does not have to be esoteric or a long, arduous road. Let’s demystify it and explore ways to potentially reduce your cloud spend.

      AWS Goes Awry
      This comic got a few laughs and comments when I posted it on LinkedIn. But in all seriousness, this is a real problem, which is why I’m delving into cost optimization and the gotchas to watch out for.

      Optimizing Your AWS Data Transfer Costs

      For some organizations, a large percentage of cloud spend can be attributed to network traffic/data transfer costs. It is prudent to be cognizant of the data transfer costs within availability zones (AZs), within regions, between regions, and between AWS and the internet. Pricing may vary considerably depending on your design or implementation choices.

      Common Misconceptions and Things to Look Out For

      • Cross-AZ traffic is not free: Utilizing multiple AZs for high availability (HA) is a good idea; however, cross-AZ traffic costs add up. If feasible, optimize your traffic to stay within the same AZ as much as possible.
        • EC2 traffic between AZs is priced comparably to traffic between regions. For example, deploying a cluster across AZs is beneficial for HA, but can hurt on network costs.
      • Free services? Free is good: Several AWS services offer the hidden value of free cross-AZ data transfer. Services such as EFS, RDS and MSK are examples of this.
      • Utilizing public IPs when not required: If you use an elastic IP or public IP address of an EC2 instance, you will incur network costs, even if it is accessed locally within the AZ.
      • Managed NAT Gateway: Managed NAT Gateways are used to let traffic egress from private subnets, at a cost of 4.5 cents per GB as a data processing fee layered on top of data transfer pricing. Above a certain traffic volume, consider running your own NAT instances to optimize your cloud spend.
      • A figure in the original post provides an overview of these data transfer costs (image not reproduced here).
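      To get a sense of the magnitude, here is a back-of-envelope sketch of the monthly bill for 10 TB of traffic. The rates are illustrative assumptions based on the figures above, not a current price list; always check AWS pricing for your region:

```shell
# Rough monthly transfer cost for 10 TB of traffic.
# Assumed illustrative rates (verify against current AWS pricing):
#   $0.01/GB in each direction for cross-AZ traffic ($0.02/GB total)
#   $0.045/GB NAT Gateway data processing fee
GB=10240   # 10 TB expressed in GB
awk -v gb="$GB" 'BEGIN {
  printf "Cross-AZ transfer:      $%.2f\n", gb * 0.02
  printf "NAT Gateway processing: $%.2f\n", gb * 0.045
}'
```

      At this volume the NAT Gateway processing fee alone exceeds the cross-AZ charge, which is why the suggestion above to reconsider managed NAT at scale matters.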

      Other Cloud Cost Optimization Suggestions by AWS Category

      • Elastic Compute Cloud (EC2)
        • Purchase savings plans for baseline capacity
        • Verify that the instance type still reflects the current workload
        • Verify that the maximum I/O performance of the instance matches the EBS volumes
        • Use Spot Instances for stateless and non-production workloads
        • Make use of AMD- or Arm-based instances
        • Switch to Amazon Linux or another open-source operating system
      • Virtual Private Cloud (VPC)
        • Create VPC endpoints for S3 and DynamoDB
        • Check costs for NAT gateways and change architecture if necessary
        • Check costs for traffic between AZs and reduce traffic when possible
        • Try to avoid VPC endpoints for other services
      • Simple Storage Service (S3)
        • Delete unnecessary objects and buckets
        • Consider using S3 Intelligent Tiering
        • Configure lifecycle policies to define a retention period for objects
        • Use Glacier Deep Archive for long-term data archiving
      • Elastic Block Storage (EBS)
        • Delete snapshots created to back up data that is no longer needed
        • Check whether your backup solution deletes old snapshots
        • Delete snapshots belonging to unused AMIs
        • Search for unused volumes and delete them
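      As a concrete sketch of the S3 lifecycle suggestion above, a lifecycle configuration along the following lines would transition objects to Glacier Deep Archive and later expire them. The prefix and day counts are illustrative assumptions, not recommendations:

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

      A rule like this removes the need to remember to clean up old objects by hand, which is where much S3 waste comes from.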

      Alternatives to DIY Public Cloud Cost Optimization

      As I’ve shown, there are more than a few ways to optimize public cloud cost on your own. And if you were to look for more information on the topic, Googling “Optimizing AWS costs” will fetch more than 50 million results, and Googling “optimizing MS Azure costs” will get you more than 58 million results. My eyes are still bleeding from sifting through just a few of them.

      Do you really have time to examine 100 million articles? Doing it yourself (DIY) can have advantages if you have the time or expertise on staff. If not, there are alternatives to explore.

      Third-Party Optimization Services

      Several companies offer services designed to help you gain insights into expenses or lower your AWS bill, such as Cloudability, CloudHealth Technologies and ParkMyCloud. Some of these charge a percentage of your bill, which may be expensive.

      Managed Cloud Service Providers

      You can also opt for a trusted managed public cloud provider that staffs certified AWS and Microsoft Azure engineers who know the ins and outs of cost optimization for these platforms.

      Advantages of partnering with a Managed Cloud service provider:

      • Detect/investigate accidental spend or cost anomalies
      • Proactively design/build scalable, secure, resilient and cost-effective architecture
      • Reduce existing cloud spend
      • Report on Cloud spend and ROI
      • Segment Cloud costs by teams, product or category

      INAP’s experts are ready to assist you. With INAP Managed AWS, certified engineers and architects help you secure, maintain and optimize public cloud environments so your team can devote its efforts to the applications hosted there. We also offer services for Managed Azure to help you make the most of your public cloud resources.

      Explore INAP Managed Services.


      Ahmed Ragab



      Optimizing Your Public Cloud with Managed Services

      Public cloud providers, like AWS and Azure, build and offer many services to help developers, IT shops, and businesses large and small quickly build and deploy their applications on the public cloud. Public clouds offer speed and agility and a myriad of services to launch your applications, but it can become all too easy to have projects run away from you if your use of public cloud lacks focus. This is where optimizing public cloud with managed services can be a good option.

      Let’s take a closer look at the positives and negatives of public cloud, and some of the managed services that can take it to the next level.

      The Benefits and Drawbacks of Public Cloud

      Public clouds have taken the infrastructure world by storm and have certainly disrupted the IT industry in a very positive way. I must admit, from a technical (read: geek) standpoint, it is extremely attractive to be able to write and deploy my application on the same day. As my good friend James Desk puts it, “Throw another dime in the jukebox and code like hell.”

      In my opinion, the key to utilizing a public cloud to yield a well-supported, highly available application is to specifically build your application on top of the developed services the public cloud provider has available in their catalog. However, at the time of this writing, AWS is providing 140+ different services to help with deployment of your application on their cloud. This is where the problems start to creep in.

      Learning how to use all the services and being able to quickly understand which services are right for the application is impractical. My friends in the business will learn and use about five, 10, maybe 20 services which are immediately needed for their application. Once their applications are up and running, that’s where the learning stops until the next project comes up.

      Because public clouds like AWS have made it so quick and easy to get started, they have also made it quick and easy to have a project run away from you with some costly consequences. I have been guilty of spinning up development/test environments or adding temporary resources to my production workload and then forgetting to turn things off, resulting in some unexpected, costly waste. Lessons learned.

      Getting help with management from experts that live and breathe this nebulous mist day in and day out can prevent wasted time and cost.

      Managed Services to Optimize Public Cloud

      INAP’s Public Cloud Managed Services immediately make sense in this case. Our public cloud management team is made up of certified AWS and Azure engineers. These talented people make it their mission to know and understand how to leverage all public cloud services to make sure that our customers are utilizing all the needed resources without waste. They do this by staying on top of all newly released services, cloud certifications and industry best practices to ensure that our clients get best-in-class service, support and advice.

      Here are some of the more popular services we offer for AWS and Azure:

      • Deployment Services, which includes a full-service onboarding team with dedicated project manager and implementation engineer
      • Configuration Services
      • 24/7/365 Issue Mitigation
      • Escalation Support
      • Monitoring and Alerting
      • Consolidated Billing
      • Compliance and Security Services
      • Operating System Support
      • Account Review: Performance and Cost Optimization
      • Solution Architecture
      • Migration Services
      • DBA Services

      Interested in exploring public cloud optimization with INAP? Chat now to learn more.

      Explore INAP Managed AWS.


      Rob Lerner



      How To Install and Use Radamsa to Fuzz Test Programs and Network Services on Ubuntu 18.04

      The author selected the Electronic Frontier Foundation Inc to receive a donation as part of the Write for DOnations program.


      Security threats are continually becoming more sophisticated, so developers and systems administrators need to take a proactive approach in defending and testing the security of their applications.

      A common method for testing the security of client applications or network services is fuzzing, which involves repeatedly sending invalid or malformed data to the application and analyzing its response. This is useful to help test how resilient and robust the application is to unexpected input, which may include corrupted data or actual attacks.

      Radamsa is an open-source fuzzing tool that can generate test cases based on user-specified input data. Radamsa is fully scriptable, and so far has been successful in finding vulnerabilities in real-world applications, such as Gzip.

      In this tutorial, you will install and use Radamsa to fuzz test command-line and network-based applications using your own test cases.

      Warning: Radamsa is a penetration testing tool which may allow you to identify vulnerabilities or weaknesses in certain systems or applications. You must not use vulnerabilities found with Radamsa for any form of reckless behavior, harm, or malicious exploitation. Vulnerabilities should be ethically reported to the maintainer of the affected application, and not disclosed publicly without explicit permission.


      Prerequisites

      Before you begin this guide you’ll need the following:

      • One Ubuntu 18.04 server set up by following the Initial Server Setup with Ubuntu 18.04, including a sudo non-root user and enabled firewall to block non-essential ports.
      • A command-line or network-based application that you wish to test, for example Gzip, Tcpdump, Bind, Apache, jq, or any other application of your choice. As an example for the purposes of this tutorial, we’ll use jq.

      Warning: Radamsa can cause applications or systems to run unstably or crash, so only run Radamsa in an environment where you are prepared for this, such as a dedicated server. Please also ensure that you have explicit written permission from the owner of a system before conducting fuzz testing against it.

      Once you have these ready, log in to your server as your non-root user to begin.

      Step 1 — Installing Radamsa

      Firstly, you will download and compile Radamsa in order to begin using it on your system. The Radamsa source code is available in the official repository on GitLab.

      Begin by updating the local package index to reflect any new upstream changes:

      • sudo apt update

      Then, install the gcc, git, make, and wget packages needed to compile the source code into an executable binary:

      • sudo apt install gcc git make wget

      After confirming the installation, apt will download and install the specified packages and all of their required dependencies.

      Next, you’ll download a copy of the source code for Radamsa by cloning it from the repository hosted on GitLab:

      • git clone

      This will create a directory called radamsa, containing the source code for the application. Move into the directory to begin compiling the code:

      • cd radamsa

      Next, you can start the compilation process using make:

      • make

      Finally, you can install the compiled Radamsa binary to your $PATH:

      • sudo make install

      Once this is complete, you can check the installed version to make sure that everything is working:

      • radamsa --version

      Your output will look similar to the following:


      Radamsa 0.6

      If you see a radamsa: command not found error, double-check that all required dependencies were installed and that there were no errors during compilation.

      Now that you’ve installed Radamsa, you can begin to generate some sample test cases to understand how Radamsa works and what it can be used for.

      Step 2 — Generating Fuzzing Test Cases

      Now that Radamsa has been installed, you can use it to generate some fuzzing test cases.

      A test case is a piece of data that will be used as input to the program that you are testing. For example, if you are fuzz testing an archiving program such as Gzip, a test case may be a file archive that you are attempting to decompress.

      Note: Radamsa will manipulate input data in a wide variety of unexpected ways, including extreme repetition, bit flips, control character injection, and so on. This may cause your terminal session to break or become unstable, so be aware of this before proceeding.

      Firstly, pass a simple piece of text to Radamsa to see what happens:

      • echo "Hello, world!" | radamsa

      This will manipulate (or fuzz) the inputted data and output a test case, for example:


      Hello,, world!

      In this case, Radamsa added an extra comma between Hello and world. This may not seem like a significant change, but in some applications this may cause the data to be interpreted incorrectly.

      Let’s try again by running the same command. You’ll see different output:


      Hello, '''''''wor'd!

      This time, multiple single quotes (') were inserted into the string, including one that overwrote the l in world. This particular test case is more likely to result in problems for an application, as single/double quotes are often used to separate different pieces of data in a list.

      Let’s try one more time:


      Hello, $+$PATHu0000`xcalc`world!

      In this case, Radamsa inserted a shell injection string, which will be useful to test for command injection vulnerabilities in the application that you are testing.
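      To see why such a string is useful, consider an application that naively interpolates input into a shell command. In this sketch a harmless echo stands in for the xcalc payload:

```shell
# Unsafe pattern: input is interpolated into a command string and eval'd,
# so the backtick command substitution inside the payload executes.
payload='Hello, `echo INJECTED`world!'
eval "echo $payload"   # prints: Hello, INJECTEDworld!
```

      A program that passes fuzzed input through a shell like this would run the attacker's command instead of treating it as data, which is exactly the class of bug this test case is designed to surface.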

      You’ve used Radamsa to fuzz an input string and produce a series of test cases. Next, you will use Radamsa to fuzz a command-line application.

      Step 3 — Fuzzing a Command-line Application

      In this step, you’ll use Radamsa to fuzz a command-line application and report on any crashes that occur.

      The exact technique for fuzzing each program varies massively, and different methods will be most effective for different programs. However, in this tutorial we will use the example of jq, which is a command-line program for processing JSON data.

      You may use any other similar program as long as it follows the general principle of taking some form of structured or unstructured data, doing something with it, and then outputting a result. For instance, this example would also work with Gzip, Grep, bc, tr, and so on.

      If you don’t already have jq installed, you can install it using apt:

      • sudo apt install jq

      jq will now be installed.

      To begin fuzzing, create a sample JSON file that you’ll use as the input to Radamsa:

      • nano test.json

      Then, add the following sample JSON data to the file:


      {
        "test": "test",
        "array": [
          "item1: foo",
          "item2: bar"
        ]
      }

      You can parse this file using jq if you wish to check that the JSON syntax is valid:

      • jq . test.json

      If the JSON is valid, jq will output the file. Otherwise, it will display an error, which you can use to correct the syntax where required.

      Next, fuzz the test JSON file using Radamsa and then pass it to jq. This will cause jq to read the fuzzed/manipulated test case, rather than the original valid JSON data:

      • radamsa test.json | jq .

      If Radamsa fuzzes the JSON data in a way that it is still syntactically valid, jq will output the data, but with whatever changes Radamsa made to it.

      Alternatively, if Radamsa causes the JSON data to become invalid, jq will display a relevant error. For example:


      parse error: Expected separator between values at line 5, column 16

      The alternate outcome would be that jq is unable to correctly handle the fuzzed data, causing it to crash or misbehave. This is what you’re really looking for with fuzzing, as this may be indicative of a security vulnerability such as a buffer overflow or command injection.

      In order to more efficiently test for vulnerabilities like this, a Bash script can be used to automate the fuzzing process, including generating test cases, passing them to the target program and capturing any relevant output.

      Create a new file to store the fuzzing script in:

      The exact script content will vary depending on the type of program that you’re fuzzing and the input data, but in the case of jq and other similar programs, the following script suffices.

      Copy the script into your file:

      #!/bin/bash
      while true; do
        radamsa test.json > input.txt
        jq . input.txt > /dev/null 2>&1
        if [ $? -gt 127 ]; then
          cp input.txt crash-`date +%s.%N`.txt
          echo "Crash found!"
        fi
      done
      This script contains a while loop that repeats its contents indefinitely. Each time the script loops, Radamsa will generate a test case based on test.json and save it to input.txt.

      The input.txt test case will then be run through jq, with all standard and error output redirected to /dev/null to avoid filling up the terminal screen.

      Finally, the exit value of jq is checked. An exit value greater than 127 indicates a fatal termination (a crash), in which case the input data is saved for later review in a file named crash- followed by the current date in Unix seconds and nanoseconds.
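      The 127 threshold works because POSIX shells report a process terminated by a signal as 128 plus the signal number. A minimal demonstration:

```shell
# A process killed by a signal exits with status 128 + signal number.
# SIGSEGV is signal 11, so a segfaulting child reports 128 + 11 = 139,
# which clears the > 127 threshold the script uses to detect crashes.
sh -c 'kill -11 $$' || status=$?   # kill -11 sends SIGSEGV to the child shell
echo "exit code: $status"          # prints: exit code: 139
```

      Ordinary errors (such as jq's parse errors) exit with small codes like 1 or 2, so only genuine signal-terminated runs trip the crash check.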

      Mark the script as executable and set it running in order to begin automatically fuzz testing jq:

      • chmod +x
      • ./

      You can issue CTRL+C at any time to terminate the script. You can then check whether any crashes have been found by using ls to display a directory listing containing any crash files that have been created.

      You may wish to improve your JSON input data since using a more complex input file is likely to improve the quality of your fuzzing results. Avoid using a large file or one that contains a lot of repeated data—an ideal input file is one that is small in size, yet still contains as many ‘complex’ elements as possible. For example, a good input file will contain samples of data stored in all formats, including strings, integers, booleans, lists, and objects, as well as nested data where possible.
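      For example, a small input file that still exercises several data types might look like the following sketch (all values are arbitrary):

```json
{
  "string": "hello",
  "integer": 42,
  "float": -3.14,
  "boolean": true,
  "nothing": null,
  "list": [1, "two", false],
  "nested": { "inner": { "deep": [ { "key": "value" } ] } }
}
```

      Each distinct element type gives Radamsa another structure to mutate, so a file like this tends to produce a wider variety of test cases than one made of repeated strings.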

      You’ve used Radamsa to fuzz a command-line application. Next, you’ll use Radamsa to fuzz requests to network services.

      Step 4 — Fuzzing Requests to Network Services

      Radamsa can also be used to fuzz network services, either acting as a network client or server. In this step, you’ll use Radamsa to fuzz a network service, with Radamsa acting as the client.

      The purpose of fuzzing network services is to test how resilient a particular network service is to clients sending it malformed and/or malicious data. Many network services such as web servers or DNS servers are usually exposed to the internet, meaning that they are a common target for attackers. A network service that is not sufficiently resistant to receiving malformed data may crash, or even worse fail in an open state, allowing attackers to read sensitive data such as encryption keys or user data.

      The specific technique for fuzzing network services varies enormously depending on the network service in question, however in this example we will use Radamsa to fuzz a basic web server serving static HTML content.

      Firstly, you need to set up the web server to use for testing. You can do this using the built-in development server that comes with the php-cli package. You’ll also need curl in order to test your web server.

      If you don’t have php-cli and/or curl installed, you can install them using apt:

      • sudo apt install php-cli curl

      Next, create a directory to store your web server files in and move into it:

      • mkdir ~/www
      • cd ~/www

      Then, create an HTML file containing some sample text:

      • nano index.html

      Add the following to the file:


      <h1>Hello, world!</h1>

      You can now run your PHP web server. You’ll need to be able to view the web server log while still using another terminal session, so open another terminal session and SSH to your server for this:

      • cd ~/www
      • php -S localhost:8080

      This will output something similar to the following:


      PHP 7.2.24-0ubuntu0.18.04.1 Development Server started at Wed Jan 1 16:06:41 2020
      Listening on http://localhost:8080
      Document root is /home/user/www
      Press Ctrl-C to quit.

      You can now switch back to your original terminal session and test that the web server is working using curl:

      • curl localhost:8080

      This will output the sample index.html file that you created earlier:


      <h1>Hello, world!</h1>

      Your web server only needs to be accessible locally, so you should not open any ports on your firewall for it.

      Now that you’ve set up your test web server, you can begin to fuzz test it using Radamsa.

      First, you’ll need to create a sample HTTP request to use as the input data for Radamsa. Create a new file to store this in:

      • nano http-request.txt

      Then, copy the following sample HTTP request into the file:


      GET / HTTP/1.1
      Host: localhost:8080
      User-Agent: test
      Accept: */*

      Next, you can use Radamsa to submit this HTTP request to your local web server. In order to do this, you’ll need to use Radamsa as a TCP client, which can be done by specifying an IP address and port to connect to:

      • radamsa -o 127.0.0.1:8080 -n 100 http-request.txt

      Note: Be aware that using Radamsa as a TCP client will potentially cause malformed/malicious data to be transmitted over the network. This may break things, so be very careful to only access networks that you are authorized to test, or preferably, stick to using the localhost (127.0.0.1) address.

      Finally, if you view the outputted logs for your local web server, you’ll see that it has received the requests, but most likely not processed them as they were invalid/malformed.

      The outputted logs will be visible in your second terminal window:


      [Wed Jan 1 16:26:49 2020] Invalid request (Unexpected EOF)
      [Wed Jan 1 16:28:04 2020] Invalid request (Malformed HTTP request)
      [Wed Jan 1 16:28:05 2020] Invalid request (Malformed HTTP request)
      [Wed Jan 1 16:28:07 2020] Invalid request (Unexpected EOF)
      [Wed Jan 1 16:28:08 2020] Invalid request (Malformed HTTP request)

      For optimal results and to ensure that crashes are recorded, you may wish to write an automation script similar to the one used in Step 3. You should also consider using a more complex input file, which may contain additions such as extra HTTP headers.
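      For instance, the request file could be extended with a few additional headers; the values here are illustrative:

```
GET / HTTP/1.1
Host: localhost:8080
User-Agent: test
Accept: */*
Accept-Language: en-US
Accept-Encoding: gzip, deflate
Cookie: session=abcdef123456
Connection: close
```

      Each extra header gives Radamsa another field to mutate, exercising more of the server's request-parsing code than the minimal four-line request does.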

      You’ve fuzzed a network service using Radamsa acting as a TCP client. Next, you will fuzz a network client with Radamsa acting as a server.

      Step 5 — Fuzzing Network Client Applications

      In this step, you will use Radamsa to fuzz test a network client application. This is achieved by intercepting responses from a network service and fuzzing them before they are received by the client.

      The purpose of this kind of fuzzing is to test how resilient network client applications are to receiving malformed or malicious data from network services. For example, testing a web browser (client) receiving malformed HTML from a web server (network service), or testing a DNS client receiving malformed DNS responses from a DNS server.

      As was the case with fuzzing command-line applications or network services, the exact technique for fuzzing each network client application varies considerably, however in this example you will use whois, which is a simple TCP-based send/receive application.

      The whois application is used to make requests to WHOIS servers and receive WHOIS records as responses. WHOIS operates over TCP port 43 in clear text, making it a good candidate for network-based fuzz testing.

      If you don’t already have whois available, you can install it using apt:

      • sudo apt install whois

      First, you’ll need to acquire a sample whois response to use as your input data. You can do this by making a whois request and saving the output to a file. You can use any domain you wish here as you’re testing the whois program locally using sample data:

      • whois example.com > whois.txt

      Next, you’ll need to set up Radamsa as a server that serves fuzzed versions of this whois response. You’ll need to be able to continue using your terminal once Radamsa is running in server mode, so it is recommended to open another terminal session and SSH connection to your server for this:

      • radamsa -o :4343 whois.txt -n inf

      Radamsa will now be running in TCP server mode, and will serve a fuzzed version of whois.txt each time a connection is made to the server, no matter what request data is received.

      You can now proceed to testing the whois client application. You’ll need to make a normal whois request for any domain of your choice (it doesn’t have to be the same one that the sample data is for), but with whois pointed to your local Radamsa server:

      • whois -h localhost -p 4343 example.com

      The response will be your sample data, but fuzzed by Radamsa. You can continue to make requests to the local server as long as Radamsa is running, and it will serve a different fuzzed response each time.

      As with fuzzing network services, to improve the efficiency of this network client fuzz testing and ensure that any crashes are captured, you may wish to write an automation script similar to the one used in Step 3.

      In this final step, you used Radamsa to conduct fuzz testing of a network client application.


      Conclusion

      In this article you set up Radamsa and used it to fuzz a command-line application, a network service, and a network client. You now have the foundational knowledge required to fuzz test your own applications, hopefully with the result of improving their robustness and resistance to attack.

      If you wish to explore Radamsa further, you may wish to review the Radamsa README file, as it contains further technical information and examples of how the tool can be used.

      You may also wish to check out some other fuzzing tools, such as American Fuzzy Lop (AFL), an advanced fuzzing tool designed for testing binary applications at extremely high speed and accuracy.
