      Modern Game Server Infrastructure in the Cloud


      How to Join

      This Tech Talk is free and open to everyone. Register for your preferred time to get a link to join the live event.

Time zones and dates:

• Americas & EMEA: Thursday, August 6, 2020, 1:00 p.m.–2:00 p.m. ET
• Asia Pacific: Friday, August 7, 2020, 12:00 p.m.–1:00 p.m. IST

      If you can’t join us live, the video recording will be published here as soon as it’s available.

      About the Talk

Building large-scale infrastructure for a multiplayer game is no easy feat. Game servers are stateful applications with long-lived connections to their clients, the opposite of what modern, highly scalable server applications tend to be, so most of the tools and techniques used for deploying and maintaining such services in the cloud don't apply.

Diego Rocha, Software Engineering Manager at PlayKids, an educational games platform, will discuss how a small team there leveraged DigitalOcean, Kubernetes, and Agones to build infrastructure that reliably serves millions of players. The solution enables multi-data center deployments and game server updates without disrupting game sessions, all at low cost and with almost no maintenance.

      What You’ll Learn

• How to deploy and operate large-scale game servers in the cloud to boost development productivity, reduce maintenance, and improve your game's quality and resilience.
      • How to decrease the high cost of network-intensive multiplayer games through infrastructure optimization using DigitalOcean.

      This Talk is Designed For

      • Multiplayer game developers on small teams
      • Backend developers interested in scalability
      • Anyone who wants to learn how to host applications that are stateful, network-intensive, and/or have sticky connections.

      Prerequisites

      • Basic understanding of the value and difficulties of deploying large-scale game servers.
      • Moderate familiarity with cloud technologies.

      About the Presenters

Diego Rocha leads a team of backend engineers at PlayKids. Although he considers himself a generalist, he has been building critical large-scale distributed systems for more than 7 years. As a computer scientist, he thrives on applying theory and research to build solutions that are both elegant and efficient.

Fabian Barajas joined DigitalOcean in 2015 as a Customer Success Engineer and became a Solutions Engineer in early 2017. He is an LPIC-1 and SUSE Certified Linux Administrator, and holds a number of other certifications, including CompTIA Linux+ and A+.

      Diego and Fabian will be answering questions live during both sessions.

      To join the live Tech Talk, register here for the session of your choice.




      6 Tips for Managing Cloud Security in the Modern Threat Landscape


In a world where advanced cyberattacks are increasing in frequency and causing progressively higher costs for affected organizations, security is of the utmost importance no matter what infrastructure strategy your organization chooses. Despite longstanding myths, cloud environments are not inherently less secure than on-premises ones. With so many people migrating workloads to the cloud, however, it's important to be aware of the threat landscape.

      Ten million cybersecurity attacks are reported to the Pentagon every day. In 2018, the number of records stolen or leaked from public cloud storage due to poor configuration totaled 70 million. And it’s estimated that the global cost of cybercrime by the end of 2019 will total $2 trillion.

      In response to the new cybersecurity reality, it is estimated that the annual spending on cloud security tools by 2023 will total $12.6 billion.

Below, we'll cover six ways to secure your cloud. This list is by no means exhaustive, but it will give you an idea of the security factors you should take into account.

      Mitigating Cybersecurity Threats with Cloud Security Systems and Tools

1 & 2. Intrusion Detection and Intrusion Prevention Systems

Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are two important tools for ensuring your cloud environment is secure. These systems actively monitor the cloud network and systems for malicious activity and rule violations. Detected events may be reported directly to your administration team or collected and sent via a secure channel to a security information and event management solution.

An IDS draws on a database of known threats to monitor all user and device activity in your cloud environment, immediately spotting threats such as SQL injection attempts, known malware worms with defined signatures, and invalid security certificates.

IPS devices work at different layers and are often features of next-generation firewalls. These solutions are known for real-time deep packet inspection that alerts on potentially threatening behavior. Some of these alerts may turn out to be false alarms, but they are still valuable for learning what is and is not a threat to your cloud environment.
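To make the signature-matching idea concrete, here is a minimal sketch in Python. The signature patterns, event structure, and inspect function are invented for illustration; real IDS/IPS products such as Snort or Suricata use far richer rule languages and inspect live traffic rather than strings.

```python
import re
from dataclasses import dataclass

# Hypothetical signature database; real rule sets are far larger and richer.
SIGNATURES = {
    "sql_injection":  re.compile(r"('|\")\s*or\s+1\s*=\s*1", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

@dataclass
class Event:
    source_ip: str
    payload: str

def inspect(event: Event) -> list:
    """Return the names of any known-threat signatures the event's payload matches."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(event.payload)]

# Example: a request that looks like a SQL injection attempt gets flagged.
alerts = inspect(Event("203.0.113.7", "GET /login?user=admin' OR 1=1 --"))
if alerts:
    print(f"ALERT from 203.0.113.7: {alerts}")  # forward to the admin team or a SIEM
```

In practice the alert would be routed to your administration team or SIEM over a secure channel rather than printed.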

      3. Isolating Your Cloud Environment for Various Users

As you consider migrating to the cloud, understand how your provider will isolate your environment. In a multi-tenant cloud, with many organizations sharing the same technology resources (i.e., multi-tenant storage), your environment should be segmented using VLANs and firewalls configured for least access.

Any-any rules are the curse of all networks and are the first thing to look for when investigating firewall rules. Much like leaving your front door wide open all day and night, this kind of rule is an open policy that allows traffic from any source to any destination over any port. A good rule of thumb is to block all ports and networks and then work up from there, testing each application and environment thoroughly. This may seem time consuming, but working through a checklist of ports and connection scenarios at setup is more efficient than doing the work of opening ports and allowing networks later.
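As a rough illustration of the block-everything-first approach, here is a hypothetical default-deny rule set sketched in Python. The networks and ports are made up; in practice the same policy would live in your firewall's or cloud provider's own rule syntax.

```python
# Hypothetical default-deny policy: nothing passes unless explicitly listed.
ALLOW_RULES = [
    # (source network prefix, destination port), opened one at a time after testing
    ("10.0.1.", 443),   # web tier -> HTTPS
    ("10.0.2.", 5432),  # app tier -> PostgreSQL
]

def is_allowed(source_ip: str, port: int) -> bool:
    """Default deny: traffic passes only if it matches an explicit allow rule."""
    return any(source_ip.startswith(prefix) and port == allowed_port
               for prefix, allowed_port in ALLOW_RULES)

print(is_allowed("10.0.1.25", 443))  # True  - explicitly allowed and tested
print(is_allowed("0.0.0.0", 3389))   # False - no any-any rule to fall back on
```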

It's also important to remember that while the provider owns the security of the cloud, customers own the security of their environments in the cloud. Assess tools and partners that allow you to take better control. For instance, powerful tools such as VMware's NSX support unified security policies and, with their automation capabilities, provide one place to manage firewall rules.

      4. User Entity Behavior Analytics

Modern threat analysis employs User and Entity Behavior Analytics (UEBA), which is invaluable to your organization in mitigating compromises of your cloud software. Using machine learning models, UEBA analyzes data from reports and logs, different types of threat data, and more to discern whether certain activities indicate a cyberattack.

UEBA detects anomalies in the behavior patterns of your organization's members, consultants, and vendors. For example, the account of a manager in the finance department would be flagged if it is downloading files from different parts of the world at different times of the day, or editing files from multiple time zones at the same time. In some instances, this might be legitimate behavior for that user, but the IT director should still exercise due diligence when UEBA raises this type of alert. A quick call to confirm the behavior can prevent data loss, or the loss of millions of dollars in revenue, if the cloud environment has indeed been compromised.
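The traveling-manager example can be sketched in a few lines: flag consecutive actions from cities that are too far apart to reach in the elapsed time. This is a simplified, hypothetical illustration of a single UEBA signal; production systems combine many such signals with machine-learned baselines per user.

```python
from datetime import datetime, timedelta

# Hypothetical activity log for one finance-department account.
events = [
    {"time": datetime(2019, 11, 4, 9, 15), "city": "Chicago", "action": "download"},
    {"time": datetime(2019, 11, 4, 9, 40), "city": "Kyiv",    "action": "edit"},
]

MIN_TRAVEL_TIME = timedelta(hours=8)  # assumed minimum time to plausibly change regions

def impossible_travel(log):
    """Flag consecutive actions from different cities that occur too close together."""
    alerts = []
    for prev, cur in zip(log, log[1:]):
        if prev["city"] != cur["city"] and cur["time"] - prev["time"] < MIN_TRAVEL_TIME:
            alerts.append(f"{prev['city']} -> {cur['city']} in {cur['time'] - prev['time']}")
    return alerts

print(impossible_travel(events))  # a human should confirm before concluding compromise
```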

      5. Role-Based Access Control

All access should be granted with caution and on an as-needed basis. Role-based access control (RBAC) lets employees access only the information they need to do their jobs, restricting network access accordingly. RBAC tools allow you to designate the role a user plays (administrator, specialist, accountant, etc.) and add them to various groups. Permissions then change depending on user role and group membership. This is particularly useful for DevOps organizations where certain developers may need more access than others, or access to specific cloud environments but not others.
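A minimal sketch of how such a check might look, with invented users, roles, and permissions: each user is assigned roles, each role maps to the smallest set of permissions the job requires, and every request is checked against that mapping. Real deployments would pull these assignments from an identity and access management system rather than hard-coding them.

```python
# Hypothetical role-to-permission mapping.
ROLE_PERMISSIONS = {
    "administrator": {"read", "write", "deploy", "manage_users"},
    "developer":     {"read", "write", "deploy"},
    "accountant":    {"read"},
}

USER_ROLES = {
    "alice": {"administrator"},
    "bob":   {"developer"},
    "carol": {"accountant"},
}

def can(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the requested permission."""
    return any(permission in ROLE_PERMISSIONS[role] for role in USER_ROLES.get(user, set()))

print(can("bob", "deploy"))    # True  - developers may deploy
print(can("carol", "deploy"))  # False - accountants get read-only access
```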

When shifting to RBAC, document the changes and the specific user roles so they can be put into a written policy. As you define the user roles, have conversations with employees to understand what they do, and be sure to communicate why implementing RBAC is good for the company. It helps you secure your company's data and applications by managing access not only for employees but for third-party vendors as well.

6. Assess Third-Party Risks

As you transition to a cloud environment, vendor access should also be considered. Each vendor should have unique access rights and access control lists (ACLs) in place, scoped to the environments they connect from. Always remember that third-party risk equates to enterprise risk. Infamous data breach incidents (remember Target in late 2013?) resulting from hackers infiltrating an enterprise via a third-party vendor should be warning enough to call into question how much you know about your vendors and the security controls they have in place. Third-party risk management is considered a top priority for cybersecurity programs at a number of enterprises. Customers will not view your vendor as a company separate from your own if something goes sideways and the information goes public. Protect your company's reputation by protecting it from third-party risks.
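One way to picture per-vendor access control is a small, hypothetical ACL table keyed by vendor, scoped to the network each vendor connects from and the systems it genuinely needs. The vendors, networks, and systems below are invented for illustration.

```python
# Hypothetical per-vendor ACLs: each entry is scoped to a source network and a system list.
VENDOR_ACLS = {
    "hvac-contractor":   {"source_net": "198.51.100.", "systems": {"building-management"}},
    "payment-processor": {"source_net": "192.0.2.",    "systems": {"payments-api"}},
}

def vendor_allowed(vendor: str, source_ip: str, system: str) -> bool:
    """A vendor may reach a system only from its registered network and only if listed."""
    acl = VENDOR_ACLS.get(vendor)
    return bool(acl) and source_ip.startswith(acl["source_net"]) and system in acl["systems"]

# The facilities contractor cannot touch payment systems, even from its own network.
print(vendor_allowed("hvac-contractor", "198.51.100.12", "payments-api"))  # False
```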

      Parting Thoughts

The tools above are just a few of the resources for keeping your cloud environment secure, whether in a multi-tenant or private cloud. As you consider the options for your cloud implementation, working with a trusted partner is a great way to meet the unique needs of your specific environment.

      Explore INAP Managed Security.


      Allan Williamson
      • Technical Account Manager






      Announcing INAP Interchange: Agility and Flexibility for Modern IT


      As multicloud and hybrid IT strategies become standard for the most agile, innovative enterprises, tech leaders must choose solutions that balance the needs of the business today with the agility and flexibility that the rapidly shifting tech landscape demands.

That's why committing to a data center or cloud solution for multiple years is a common source of consternation for infrastructure and operations (I&O) leaders. If business needs change, decision-makers rightfully fear being locked into a solution or vendor, which may require them to sacrifice agility or devote precious resources to costly, unplanned strategic pivots.

      With INAP Interchange, there’s no need to fear.

Our new program provides flexibility after you deploy your initial solution: exchange infrastructure environments a year (or later) into your contract. That way, you can focus on current-state IT needs, knowing you can adapt to future-state realities.

      What is INAP Interchange?

      Interchange is a spend portability program available to new colocation or cloud customers. It gives you the option to switch infrastructure solutions—dollar for dollar—part-way through your contract. This will help you avoid environment lock-in and achieve current IT infrastructure goals while providing the flexibility to adapt for whatever comes next.

      What solutions are eligible?

      INAP Colocation, Bare Metal and Private Cloud solutions are eligible for the Interchange program. Chat with us to learn more about these services, and how spend portability can benefit your infrastructure solution.

      INAP Interchange: Use Cases

      A variety of business cases exist for Interchange:

      • Transition or phase from one of INAP’s 53 colocation facilities to an INAP Private Cloud or Bare Metal environment
      • Switch part or all of your current spend to new geographic regions or data centers
      • Re-deploy applications to a new hosted environment

      Let’s walk through a few example scenarios.

No. 1: On-Prem to Colo to Cloud

To boost connectivity and performance, an IT infrastructure team seeks to migrate their company's on-premises data center infrastructure to a Tier 3 colocation facility; however, pressure from senior management to explore cloud-first strategies within the next five years complicates their plans.

      Solution

      With INAP Interchange, the team migrates to a new INAP high-density colocation solution knowing that, one year later, they can choose to transition part of their spend to a custom-built INAP Private Cloud should the need arise.

No. 2: Geographic Flexibility

      One year after deploying INAP Bare Metal servers on the U.S. West Coast, a growing SaaS customer sees an increased demand for service in the EMEA region.

      Solution

      To get closer to their growing user base, the customer shifts part of their deployment to a custom-engineered solution in INAP’s North London cloud location.

      This use case also applies to colocation customers seeking to move data centers.

      No. 3: Application Best Fit

      An INAP Bare Metal customer needs to decommission legacy workloads and deploy newly architected applications that are better suited to a virtualized environment. 

      Solution

      Rather than spin up new on-demand services with a hyperscale public cloud provider, the customer uses INAP Interchange to migrate to an INAP Dedicated Private Cloud, which provides a scalable, single-tenant environment tailored to the resource needs of the new applications.

      To learn more, fill out the form and download the INAP Interchange FAQs.

      Dan Beers
      • VP, Colo BU Operations Services




