
      Tips for Contributing to Open Source With GitHub


      How to Join

      This Tech Talk is free and open to everyone. Register below to get a link to join the live event.

Format: Presentation and Q&A
Date: October 13, 2020, 12:00–1:00 p.m. ET
RSVP: Register using the link below

      If you can’t join us live, the video recording will be published here as soon as it’s available.

      About the Talk

This talk is a rundown of some of GitHub Developer Advocate Brian Douglas’ favorite tips for making your life easier as an open source project maintainer or future contributor. After watching, you’ll be able to automate common tasks so that you can focus on the parts of any project you want to prioritize.

      What You’ll Learn

      • Best things to look for when contributing to open source projects.
      • Ways to make your project approachable for contributors.
      • How to automate your contributor experience.

      This Talk is Designed For

      • Prospective open source contributors
      • Current open source maintainers

      Prerequisites

      About the Presenter

      Brian Douglas is a Developer Advocate at GitHub where he works on increasing the use of GitHub’s platform-specific features through technical content. (Ask him about GitHub Actions!) Brian has a passion for open source and loves mentoring new contributors.

      To join the live Tech Talk, register here.




      6 Tips for Managing Cloud Security in the Modern Threat Landscape


      In a world where advanced cyberattacks are increasing in frequency and causing progressively higher costs for affected organizations, security is of the utmost importance no matter what infrastructure strategy your organization chooses. Despite longstanding myths, cloud environments are not inherently less secure than on-premise. With so many people migrating workloads to the cloud, however, it’s important to be aware of the threat landscape.

      Ten million cybersecurity attacks are reported to the Pentagon every day. In 2018, the number of records stolen or leaked from public cloud storage due to poor configuration totaled 70 million. And it’s estimated that the global cost of cybercrime by the end of 2019 will total $2 trillion.

      In response to the new cybersecurity reality, it is estimated that the annual spending on cloud security tools by 2023 will total $12.6 billion.

Below, we’ll cover six ways to secure your cloud. This list is by no means exhaustive, but it will give you an idea of the security considerations you should keep in mind.

      Mitigating Cybersecurity Threats with Cloud Security Systems and Tools

1 & 2. Intrusion Detection and Intrusion Prevention Systems

Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are important tools for ensuring your cloud environment is secure. These systems actively monitor the cloud network and systems for malicious activity and rule violations. Detected activity or violations may be reported directly to your administration team or collected and sent via a secure channel to an information management solution.

An IDS monitors all user and device activity in your cloud environment against a database of known threats, immediately spotting attacks such as SQL injection techniques, known malware worms with defined signatures and invalid security certificates.

IPS devices work at different layers and are often features of next-generation firewalls. These solutions are known for real-time deep packet inspection that flags potentially threatening behavior. Some of these alerts will turn out to be false alarms, but even those are valuable for learning what is and is not a threat in your cloud environment.
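To make the signature-matching idea behind an IDS concrete, here is a minimal Python sketch, assuming a hypothetical signature database of two regular-expression patterns. Real systems rely on far larger, professionally maintained signature feeds and inspect traffic at the network layer rather than individual strings.

```python
import re

# Hypothetical signature database: each entry pairs a threat name with a
# regular expression matching a known-bad pattern in request payloads.
SIGNATURES = {
    "sql_injection": re.compile(r"(?i)\bunion\b.+\bselect\b|'\s*or\s*'1'\s*=\s*'1"),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def inspect_payload(payload: str) -> list[str]:
    """Return the names of any known signatures found in a payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

if __name__ == "__main__":
    suspicious = "id=1' OR '1'='1"
    hits = inspect_payload(suspicious)
    if hits:
        # A real IDS would forward this event to administrators or a SIEM.
        print(f"ALERT: payload matched signatures {hits}")
```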

      3. Isolating Your Cloud Environment for Various Users

As you consider migrating to the cloud, understand how your provider will isolate your environment. In a multi-tenant cloud, where many organizations share the same technology resources (i.e., multi-tenant storage), your environment should be segmented using VLANs and firewalls configured for least access.

Any-any rules are the curse of all networks and the first thing to look for when investigating firewall rules. Much like leaving your front door wide open all day and night, this kind of rule is an open policy allowing traffic from any source to any destination over any port. A good rule of thumb is to block all ports and networks and then work up from there, testing each application and environment thoroughly. This may seem time-consuming, but working through a checklist of ports and connection scenarios at setup is more efficient than scrambling to open ports and allow networks ad hoc later.
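As an illustration of auditing for any-any rules, the following Python sketch scans a hypothetical exported rule set and flags fully open entries. The rule format and field names are assumptions for the example, not any particular vendor’s schema; in practice the rules would come from your provider’s API or a configuration export.

```python
# Hypothetical firewall rule set exported from a cloud environment.
RULES = [
    {"name": "web-in",   "source": "0.0.0.0/0", "destination": "10.0.1.0/24", "port": "443"},
    {"name": "temp-all", "source": "any",       "destination": "any",         "port": "any"},
]

def is_any_any(rule: dict) -> bool:
    """A rule is 'any-any' if source, destination and port are all unrestricted."""
    open_values = {"any", "0.0.0.0/0", "*"}
    return all(str(rule[field]).lower() in open_values
               for field in ("source", "destination", "port"))

for rule in RULES:
    if is_any_any(rule):
        print(f"WARNING: rule '{rule['name']}' allows any source to any destination on any port")
```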

It’s also important to remember that while the provider owns the security of the cloud, customers own the security of their environments in the cloud. Assess tools and partners that allow you to take better control. For instance, powerful tools such as VMware’s NSX support unified security policies and provide one place to manage firewall rules, with built-in automation capabilities.

      4. User Entity Behavior Analytics

Modern threat analysis employs User and Entity Behavior Analytics (UEBA), which is invaluable in mitigating compromises of your cloud software. Using machine learning models, UEBA analyzes data from reports and logs, different types of threat data and more to discern whether certain activities constitute a cyberattack.

UEBA detects anomalies in the behavior patterns of your organization’s members, consultants and vendors. For example, the user account of a manager in the finance department would be flagged if it is downloading files from different parts of the world at different times of the day, or editing files from multiple time zones at the same time. In some instances this might be legitimate behavior for the user, but the IT director should still perform due diligence when the UEBA system sends out this type of alert. A quick call to confirm the behavior can prevent data loss, or the loss of millions of dollars in revenue, if the cloud environment has indeed been compromised.
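The geographic example above can be reduced to a simple “impossible travel” check, sketched below in Python. This is a deliberate simplification: production UEBA tools build machine-learned baselines per user rather than applying a single distance rule, and the login events shown are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LoginEvent:
    user: str
    time: datetime
    lat: float
    lon: float

def distance_km(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle distance between two login locations (haversine formula)."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

def impossible_travel(prev: LoginEvent, cur: LoginEvent, max_speed_kmh: float = 900) -> bool:
    """Flag a pair of logins the user could not plausibly have made in person."""
    hours = (cur.time - prev.time).total_seconds() / 3600
    return hours > 0 and distance_km(prev, cur) / hours > max_speed_kmh

# Example: a finance manager "logs in" from New York, then from Singapore an hour later.
a = LoginEvent("finance-mgr", datetime(2019, 6, 1, 9, 0), 40.71, -74.00)
b = LoginEvent("finance-mgr", datetime(2019, 6, 1, 10, 0), 1.35, 103.82)
if impossible_travel(a, b):
    print("ALERT: impossible travel detected for finance-mgr; verify with the user")
```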

      5. Role-Based Access Control

All access should be given with caution and on an as-needed basis. Role-based access control (RBAC) lets employees access only the information they need to do their jobs, restricting network access accordingly. RBAC tools allow you to designate what role a user plays—administrator, specialist, accountant, etc.—and add them to various groups. Permissions change depending on user role and group membership. This is particularly useful for DevOps organizations, where certain developers may need more access than others, or access to specific cloud environments but not others.
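Here is a minimal sketch of the RBAC idea, assuming hypothetical role and user tables: permissions attach to roles, users receive roles, and every access check goes through that mapping rather than through per-user grants.

```python
# Hypothetical role definitions: each role maps to the permissions it grants.
ROLE_PERMISSIONS = {
    "administrator": {"read", "write", "deploy", "manage_users"},
    "developer":     {"read", "write"},
    "accountant":    {"read"},
}

# Users are assigned roles, never individual permissions.
USER_ROLES = {
    "alice": {"administrator"},
    "bob":   {"developer"},
    "carol": {"accountant"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("bob", "deploy"))    # False: developers cannot deploy
print(is_allowed("alice", "deploy"))  # True: administrators can
```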

When shifting to RBAC, document the changes and the specific user roles so they can be put into a written policy. As you define the user roles, have conversations with employees to understand what they do. And be sure to communicate why implementing RBAC is good for the company: it helps you secure your company’s data and applications by managing access not only for employees, but for third-party vendors as well.

6. Assess Third-Party Risks

As you transition to a cloud environment, vendor access should also be considered. Each vendor should have unique access rights and access control lists (ACLs) in place that are native to the environments they connect from. Always remember that third-party risk equates to enterprise risk. Infamous data breach incidents (remember Target in late 2013?) resulting from hackers infiltrating an enterprise via a third-party vendor should be warning enough to call into question how much you know about your vendors and the security controls they have in place. Third-party risk management is considered a top priority for cybersecurity programs at many enterprises. Customers will not view your vendor as a company separate from your own if something goes sideways and the information goes public. Protect your company’s reputation by protecting it from third-party risks.
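To illustrate the idea of vendor-specific ACLs, the small Python sketch below admits a vendor connection only if it originates from that vendor’s registered network and targets a system in the vendor’s scope. The vendor names, networks and systems are invented for the example; real enforcement would live in your firewall or identity layer, not application code.

```python
from ipaddress import ip_address, ip_network

# Hypothetical per-vendor ACLs: each vendor may only connect from its own
# registered networks and only to the systems it actually services.
VENDOR_ACLS = {
    "hvac-vendor":    {"networks": ["203.0.113.0/24"],  "systems": {"building-mgmt"}},
    "billing-vendor": {"networks": ["198.51.100.0/24"], "systems": {"invoicing"}},
}

def vendor_access_allowed(vendor: str, source_ip: str, system: str) -> bool:
    """Deny by default; allow only registered source networks and in-scope systems."""
    acl = VENDOR_ACLS.get(vendor)
    if acl is None:
        return False
    in_network = any(ip_address(source_ip) in ip_network(net) for net in acl["networks"])
    return in_network and system in acl["systems"]

print(vendor_access_allowed("hvac-vendor", "203.0.113.7", "building-mgmt"))  # True
print(vendor_access_allowed("hvac-vendor", "203.0.113.7", "payment-db"))     # False
```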

      Parting Thoughts

The tools above are just a few of the resources for ensuring your cloud environment is secure, whether in a multi-tenant or private cloud. As you consider the options for your cloud implementation, working with a trusted partner is a great way to meet the unique needs of your specific environment.

      Explore INAP Managed Security.


      Allan Williamson
      • Technical Account Manager






      3 Tips for Making Sure Your Brand’s Website Is Ready on Super Bowl Sunday


      Editor’s note: This article was originally published Jan. 31, 2020 on Adweek.com.

      There’s a hidden cost to even the most successful, buzz-generating Super Bowl ads: All that hard-earned (and expensively acquired) attention can easily bring a website to a standstill or break it altogether.

      We’re 12-plus years into the “second screen” era, and websites and applications during the Big Game are still frequently overwhelmed by the influx of visitors eagerly answering calls to action. Last year it was the CBS service streaming the game itself that failed to stay fully operational. In 2017, it was a lumber company taking a polarizing post-election stand. Advertisers in 2016’s game collectively witnessed website load time increases of 38%, with one retail tech company’s page crawling at 10-plus seconds.

      We pay a lot of attention to the ballooning per-second costs of these prized spots. We need to make more of a fuss around the opportunity costs of sites that buckle under the pressure of their brand’s own success.

Note that a mere tenth-of-a-second slowdown on a website can take a heavy toll on conversion rates. Any length of time beyond that will send viewers back to their Twitter feeds. For first-time advertisers, this is an audience they may never see again. For big consumer brands, the expected hype can pivot quickly to reputation damage control. For ecommerce brands, downtime is a disaster that could mean millions in lost revenue.

This year’s game may or may not yield another showcase example, but there’s a lesson here for marketers at brands of all sizes: align your planned or unplanned viral triumphs with a tech infrastructure capable of rising to the occasion.

To do so, let’s briefly address two reasons why gaffes like this happen. The first, lightly technical explanation is that crashes and overloads occur when the number of requests and connections made by visitors outweighs the resources allocated to the website’s servers. The second, much less technical explanation is that executives didn’t sit down with their IT team early enough (or at all) to prevent explanation one.

      So, in that spirit of friendly interdepartmental alignment, here are a few pointers:

      Focus on the Site’s Purpose

      What’s the ideal user experience for those fleeting moments you hold a visitor’s attention? Answering this simple question will help your IT partners think holistically, identify potential bottlenecks in the system and allocate the right amount of resources to your web infrastructure.

For instance, if you’re driving viewers to a video, your outbound bandwidth will need to pack a punch. If you’re an ecommerce site processing a high volume of transactions concurrently, you’ll need plenty of compute power and memory to handle dynamic requests. Image-heavy web assets may need compression tools. In any case, your IT team will need to be ready with scalable contingencies. It’s why we see more enterprises adopting sophisticated multi-cloud and networking strategies that ensure key assets remain online through the peaks and valleys.
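As a rough illustration of that sizing exercise, the back-of-envelope Python calculation below estimates outbound bandwidth for a video-driven campaign. The viewer count, bitrate and headroom figures are assumptions to be replaced with your own forecast, not benchmarks.

```python
# Back-of-envelope sizing for a video call to action; all inputs are assumptions.
concurrent_viewers = 50_000   # peak simultaneous visitors watching the video
bitrate_mbps = 5              # per-stream bitrate for 1080p video
headroom = 1.5                # 50% cushion for spikes above the forecast

required_gbps = concurrent_viewers * bitrate_mbps * headroom / 1000
print(f"Plan for roughly {required_gbps:,.0f} Gbps of outbound capacity")
# -> Plan for roughly 375 Gbps of outbound capacity
```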

      Have the Cybersecurity Talk

      Mass publicity could very well make your website a target for bad actors. It’s the simple reality in an economy that’s increasingly digital. Ensure information security experts probe your site for vulnerabilities prior to major campaigns. Similarly, ask your IT team if your network can fend off denial of service attacks in which malicious actors send a deluge of fake traffic to your servers for the sole purpose of taking you offline. While these attacks are increasingly powerful and prevalent, gains in automation and machine learning mean they can be mitigated with the right tools.
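Serious volumetric attacks are absorbed upstream by your provider, CDN or scrubbing layer, but a simple per-client rate limit at the application edge illustrates the principle. The Python sketch below uses a sliding window with made-up thresholds; it is an illustration of the idea, not a substitute for dedicated DDoS mitigation.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 10.0   # length of the sliding window
MAX_REQUESTS = 100      # requests allowed per client within one window
_history = defaultdict(deque)

def allow_request(client_ip: str, now: float) -> bool:
    """Return False once a client exceeds its request budget for the window.

    `now` is a monotonic timestamp in seconds (e.g. time.monotonic()).
    """
    window = _history[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()        # drop requests that fell out of the window
    if len(window) >= MAX_REQUESTS:
        return False            # over budget: reject, delay or challenge
    window.append(now)
    return True

# Simulated burst from one address: the 101st request inside the window is refused.
print(all(allow_request("198.51.100.7", now=0.0) for _ in range(100)))  # True
print(allow_request("198.51.100.7", now=0.1))                           # False
```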

      Don’t Forget the Dress Rehearsal

      If you’re planning a major campaign or your business is prone to seasonal traffic spikes, request that your tech partners run load tests. You’ll see firsthand what happens to your site performance when, for instance, your social team’s meme game finally strikes gold.
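A dedicated load-testing service will give far more realistic results, but even a crude concurrency test conveys the idea. The Python sketch below, using only the standard library, hits a placeholder URL with simulated visitors and reports 95th-percentile latency; the URL, user count and request count are placeholders to adjust for your own dress rehearsal, run against a staging copy of the site rather than production.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://www.example.com/"   # placeholder: point this at a staging copy of your site
CONCURRENT_USERS = 50
REQUESTS_PER_USER = 20

def one_user(_: int) -> list[float]:
    """Simulate a single visitor issuing sequential page loads, recording latency."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(URL, timeout=10) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_timings = [t for user in pool.map(one_user, range(CONCURRENT_USERS)) for t in user]
    all_timings.sort()
    p95 = all_timings[int(len(all_timings) * 0.95) - 1]
    print(f"{len(all_timings)} requests, p95 latency {p95:.2f}s")
```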

Ultimately, website performance should be a 24/7 consideration. Ask your IT team about the monitoring tools they have in place and, more importantly, the processes and people at the ready to take any necessary action.

      Here’s hoping Sunday’s advertisers don’t squander their 15 minutes of fame with a 15-second page load. But if history does repeat itself, use it as fuel to ensure it doesn’t happen to you.

      Jennifer Curry
      • SVP, Global Cloud Services


Jennifer Curry is SVP, Global Cloud Services. She is an operational and technology leader with over 17 years of experience in the IT industry, including seven years in the hosting/cloud market.


