
Overview

In the continuously evolving digital landscape, the concept of integration between on-premises environments and cloud platforms has become crucial for innovation-oriented businesses.

The main challenges to address, such as dynamic scalability of infrastructure, data exposure, development, and access to increasingly advanced services, have made it necessary to seek solutions that connect on-premises environments and cloud platforms as simply and directly as possible.

One of the key aspects driving this need is operational flexibility. By leveraging cloud platforms, a company can optimize its resources, dynamically increasing processing capacity based on needs, reducing and monitoring infrastructure costs. Adopting a hybrid strategy enables companies to harness the positive aspects of both worlds, maintaining control over sensitive data and critical workloads while fully utilizing cloud functionalities.

The need for integration between on-premises environments and cloud platforms is also guided by the growing number of services and capabilities offered by major cloud providers. Connecting on-premises environments to cloud platforms like AWS, Azure, or Google Cloud allows linking data to an increasing number of managed services, spanning areas such as data transformation, advanced analytics, and artificial intelligence. This synergy between environments facilitates and accelerates innovation, reduces development times, and allows developers to prioritize value creation, largely delegating the need to manage the underlying infrastructure.

Challenges

When it becomes necessary to perform integration between on-premises environments and cloud platforms, it is always good to consider the following aspects:

Data protection plays a fundamental role, both during the transition to the cloud and in the “at-rest” phase. Planning and implementing the best security policies are necessary during cloud service access. Another crucial aspect involves managing identities and access to ensure compliance with regulations and company policies in the on-premises environment. This includes correctly setting up user federation. In this way, unified monitoring and reporting mechanisms can be established to ensure visibility and traceability of activities across the entire hybrid ecosystem.

Latency and performance can become a critical point in the integration between on-premises environments and cloud platforms. The transfer of data between these two infrastructures may lead to unwanted delays, which must be taken into account, especially in contexts with strict latency requirements (financial transaction management, real-time analytics, …). It is, therefore, necessary to optimize traffic between networks, possibly managing dynamic load balancing across multiple endpoints to mitigate potential issues of reachability and latency. The adoption of technologies such as caching and geographical distribution of resources can contribute to enhancing the end-user experience.

The integration between on-premises environments and cloud platforms can lead to an increase in the overall complexity of the architecture. Managing a hybrid ecosystem requires specific skills and tools: designing a scalable, resilient, and highly reliable architecture is crucial but can be challenging. In some cases, it is necessary to adapt to the resources already present locally, balancing the optimal use of the cloud consistently with what is already available. In these scenarios, the adoption of automation and orchestration tools is often recommended.

Solutions

In our Data Strategy proposal, the adoption of cloud technologies has increasingly taken on a predominant role in the solutions provided to clients. As explained in detail in the Use Case “Migrate to Cloud,” this proposal is structured into four distinct phases:

  • Assessment 
  • Foundation 
  • Mobilization
  • Execution

It is during the Foundation phase that we explore and implement the best networking strategies for the specific use cases of our clients. In particular, what is called the Landing Zone is implemented on the chosen cloud provider.

The Landing Zone serves as the central hub for cloud management and allows the definition of various aspects within it:

[Figure: Landing Zone steps in the Migrate to Cloud process]

Each of the steps highlighted in the image is implemented by leveraging the principles of Infrastructure as Code (IaC), generating scripts in which various infrastructure components and their configurations are explicitly defined in a declarative manner.

Once the IaC tool is chosen (for example, Terraform, especially in multi-cloud contexts), the created scripts enable precise management of all infrastructure components, including management services in the Network domain.
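As an illustration of this declarative style, the sketch below shows a minimal Terraform definition of a VPC. It assumes the AWS provider; the region, names, tags, and CIDR range are placeholders, not prescriptions.

```hcl
# Provider configuration (region is a placeholder).
provider "aws" {
  region = "eu-west-1"
}

# A VPC declared explicitly: its address space and DNS settings
# live in version-controlled code rather than in console clicks.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name        = "landing-zone-vpc"
    Environment = "foundation"
  }
}
```

Because the file is declarative, `terraform plan` shows exactly which infrastructure components would be created or changed before anything is applied.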

VPC Organization

Virtual Private Clouds (VPCs) are one of the key components within a cloud infrastructure. They allow isolating resources within a virtualized environment, ensuring greater security and flexibility. Properly designing and organizing one or more VPCs is crucial to maximize their benefits.

When designing a VPC, it is essential to plan the allocation of IP address spaces correctly using CIDR blocks (Classless Inter-Domain Routing). This involves dividing IP addresses into blocks that reflect scalability needs and the logical subdivision of resources.

A primary feature of VPCs is the ability to create one or more subnets within them, specifying the CIDR block within that defined for the VPC itself. This allows for an additional hierarchical level in the organization of resources and the application of security policies and general networking management.
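This hierarchical CIDR subdivision can be expressed directly in Terraform with the built-in `cidrsubnet()` function, as in the hedged sketch below (the `aws_vpc.main` reference and availability zones are illustrative).

```hcl
# Two private subnets carved out of the VPC's /16 block.
# cidrsubnet() adds 8 bits to the prefix, yielding /24 blocks:
# index 0 -> 10.0.0.0/24, index 1 -> 10.0.1.0/24.
resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone = element(["eu-west-1a", "eu-west-1b"], count.index)

  tags = {
    Name = "private-${count.index}"
  }
}
```

Deriving subnet ranges from the VPC's own CIDR block, instead of hard-coding them, keeps the address plan consistent if the parent block ever changes.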

Single VPC

The simplest topology to use is the single VPC, where all created cloud resources reside within it. This scenario is suitable for feasibility studies on cloud provider services, development environments, or small applications with minimal security requirements.

Even in the case of a feasibility study, it is recommended to have resources in at least two VPCs, testing configurations necessary to integrate and enable communication between the two VPCs. This way, all potential issues that could arise in production environments can be anticipated early on.

Multiple VPCs

This topology involves creating multiple separate VPCs within the same cloud account. Each VPC functions as an isolated entity, with the possibility to configure connections between them. This topology is ideal for separating production and development environments or hosting applications with different security requirements.

In the case of multiple VPCs, leveraging services provided by various cloud providers allows for different architectures. For example, central VPCs, known as “Hubs,” can act as network managers for other satellite VPCs, called “Spokes.” Alternatively, there can be Transit VPCs to which other VPCs can connect to communicate with each other.
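On AWS, for example, the hub role can be played by a Transit Gateway. The following is a sketch under that assumption; the spoke VPC and subnet references are illustrative names.

```hcl
# Central hub: a Transit Gateway aggregating connectivity.
resource "aws_ec2_transit_gateway" "hub" {
  description = "central routing hub"
}

# Each spoke VPC attaches to the hub through one of its subnets;
# adding a new spoke is just one more attachment.
resource "aws_ec2_transit_gateway_vpc_attachment" "spoke_a" {
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = aws_vpc.spoke_a.id
  subnet_ids         = [aws_subnet.spoke_a.id]
}

resource "aws_ec2_transit_gateway_vpc_attachment" "spoke_b" {
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = aws_vpc.spoke_b.id
  subnet_ids         = [aws_subnet.spoke_b.id]
}
```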

The following paragraph will delve into the concepts of Point-to-Point and Hub & Spoke in more detail, including on-premise network management, defining the hybrid cloud management model.

Hybrid Cloud Management

In the subsequent phase, connections are established with infrastructures external to the Cloud, such as a different cloud provider or on-premises infrastructure. The two main existing models of hybrid connections are Point-to-Point and Hub & Spoke. Each model has its unique characteristics.

The point-to-point model is based on direct and dedicated connections, where there are no intermediaries in the link between the on-premises environment and a specific cloud platform. This approach generally provides reduced latency, enabling fast and reliable data transfer. Moreover, it is straightforward and immediate to implement, reducing the number of architectural components and, consequently, costs.

Direct connections between the on-premise environment and the cloud provider are established using VPNs or various Direct Connect services. Connections between the networks of the same cloud provider utilize the peering functionality.

The evident drawback of the point-to-point model is its limited scalability and flexibility. As the number of networks to be interconnected increases, the number of connections grows quadratically: fully meshing n networks requires up to n(n-1)/2 direct links. In these contexts, transitive routing is often absent: if network A is connected to network B, and network B is connected to network C, A and C cannot communicate with each other; an additional direct connection between A and C is necessary.

Another negative aspect of this solution is reduced manageability: without a central point for route management and network/subnet design, it is also more prone to problems of overlapping IP addresses.

The hub & spoke model is based on a centralized and distributed approach. The “hub” component acts as a central point for aggregation and management of connections, while the “spoke” components represent various environments, whether they are on-premises or in the cloud.

In this scenario, network traffic flows through the hub, allowing more efficient and scalable management of connections and simplifying the management of security and overall architecture maintainability.

One of the main advantages of the hub & spoke model is its scalability. Adding new environments, both on-premises and in the cloud, only requires configuring a connection in the central hub. This approach significantly simplifies management and maintenance, making the hub & spoke model particularly suitable for rapidly growing companies or those with a diverse presence across multiple cloud platforms.

The hub & spoke model may introduce slight additional latency compared to point-to-point connections, and it requires an additional architectural component, the central hub, along with associated costs.

Network Security

At this point, the necessary VPCs have been defined, as well as the connection methods with the on-premise environment. Therefore, all additional aspects related to the security topics described in more detail below need to be determined.

In general, managing these aspects with an Infrastructure as Code (IaC) tool makes it possible to reuse standardized configurations across multiple VPCs. These configurations are defined by the team responsible for network security.

Security Groups act as a virtual firewall, regulating inbound and outbound traffic to resources within the VPC. Each Security Group contains a list of security rules that determine which types of connections are allowed or denied. These rules can be configured based on specific protocols, ports, and IP address ranges. Such rules ensure highly granular access control, enforcing that only authorized communications can reach resources within the VPC.

Security Groups are a fundamental tool to ensure the security of resources within a VPC, helping prevent unauthorized access and protecting the cloud environment from external attacks.
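Such a standardized Security Group could be sketched in Terraform as follows (the VPC reference, name, and CIDR ranges are illustrative assumptions):

```hcl
# A Security Group allowing only HTTPS in from the VPC's own range.
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  # Inbound: HTTPS from inside the VPC only.
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  # Outbound: all traffic. Security Groups are stateful, so return
  # traffic for the ingress rule above is allowed automatically.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```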

Network Access Control Lists (NACLs), like Security Groups, consist of a set of security rules determining which types of traffic are allowed or denied. Unlike Security Groups, however, NACLs (on AWS, for instance) are applied at the subnet level within a VPC, so the same rules apply to every resource inside that subnet, whereas Security Groups are attached to individual resources (e.g., VM instances). Another differentiating aspect is that NACLs are stateless: every packet crossing the boundary of the subnet is evaluated against the rules, including return traffic.
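The statelessness is visible in configuration: an explicit egress rule for the ephemeral return ports must accompany the ingress rule. A hedged Terraform sketch, with illustrative VPC/subnet references:

```hcl
# A subnet-level NACL. Because NACLs are stateless, return traffic
# on ephemeral ports (1024-65535) must be allowed explicitly.
resource "aws_network_acl" "private" {
  vpc_id     = aws_vpc.main.id
  subnet_ids = [aws_subnet.private.id]

  # Allow inbound HTTPS from the VPC range.
  ingress {
    rule_no    = 100
    protocol   = "tcp"
    action     = "allow"
    cidr_block = "10.0.0.0/16"
    from_port  = 443
    to_port    = 443
  }

  # Allow the corresponding responses back out.
  egress {
    rule_no    = 100
    protocol   = "tcp"
    action     = "allow"
    cidr_block = "10.0.0.0/16"
    from_port  = 1024
    to_port    = 65535
  }
}
```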

Each VPC comes with at least one default route table; it is then possible, and advisable, to create additional ones to tailor traffic behavior to specific needs within the VPC itself.

Route Tables contain a set of rules indicating where to send packets based on their destination address. From a security perspective, they can be configured to enforce specific communications through security devices or services, such as firewalls or monitoring systems. Additionally, they allow for an additional level of segmentation within Subnets, isolating various logically connected groups of resources. This provides enhanced protection by restricting access only to resources that need to communicate with each other.
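A dedicated route table for a private subnet might look like the sketch below, where the NAT Gateway and subnet references are assumed names:

```hcl
# Route table sending Internet-bound traffic through a NAT Gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }
}

# Associate the table with the private subnet it governs.
resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}
```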

The Internet Gateway is an access point that enables resources within the VPC to communicate directly with the Internet. It serves as a bridge between the resources in the VPC and the external world. Typically, the Internet Gateway is configured using Route Tables to ensure that only authorized traffic has access to the Internet. Additional levels of control can be defined using Security Groups and Network Access Control Lists (NACLs).

On the other hand, the NAT Gateway (Network Address Translation) allows resources within the VPC to access external resources, including the Internet, without revealing their private IP addresses. This provides an additional layer of security as resources remain concealed behind the IP address of the NAT Gateway.
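The two gateways could be declared together as in this sketch (public subnet and VPC references are illustrative):

```hcl
# Internet Gateway: the VPC's bridge to and from the Internet.
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id
}

# Elastic IP that the NAT Gateway presents to the outside world.
resource "aws_eip" "nat" {
  domain = "vpc"
}

# NAT Gateway in a public subnet: private resources reach external
# services without exposing their own private addresses.
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}
```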

Network Enabling

Once the previous points have been established and the main aspects of the networks have been designed and configured, the final step is to enable the connections, leveraging one or more of the various possibilities offered by cloud providers.

The first option to bridge on-premises and cloud environments is through the use of a Virtual Private Network (VPN). VPNs are tools that enable secure and private connections to public networks, safeguarding communication between remote sites and cloud resources. There are various types of VPNs, each with specific features that adapt to the connectivity and security needs of organizations.

An example is provided by site-to-site VPNs, widely used to connect on-premises networks to virtual networks within major cloud providers. This configuration leverages the creation of a “secure tunnel” through the internet, allowing the transfer of data and network traffic between the two environments. This type of VPN is ideal for connecting corporate data centers and cloud resources since the cloud network is seen as an extension of the corporate network. The main challenge with this solution is that performance can be influenced by latency and congestion in the public internet.
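On AWS, such a site-to-site VPN can be declared as in the sketch below. The router IP (a documentation address), ASN, and on-premises CIDR are placeholder assumptions.

```hcl
# On-premises side: the customer gateway, i.e. the public IP of
# the local VPN router (203.0.113.10 is a documentation address).
resource "aws_customer_gateway" "onprem" {
  bgp_asn    = 65000
  ip_address = "203.0.113.10"
  type       = "ipsec.1"
}

# Cloud side: a virtual private gateway attached to the VPC.
resource "aws_vpn_gateway" "vgw" {
  vpc_id = aws_vpc.main.id
}

# The IPsec tunnel itself, using static routing for simplicity.
resource "aws_vpn_connection" "site_to_site" {
  customer_gateway_id = aws_customer_gateway.onprem.id
  vpn_gateway_id      = aws_vpn_gateway.vgw.id
  type                = "ipsec.1"
  static_routes_only  = true
}

# Static route back to the on-premises network range.
resource "aws_vpn_connection_route" "onprem_net" {
  vpn_connection_id      = aws_vpn_connection.site_to_site.id
  destination_cidr_block = "192.168.0.0/16"
}
```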

To overcome the latency and security limitations of VPNs, it is possible to use private, dedicated connections between on-premises infrastructure and cloud resources. Each cloud provider offers its own direct connection service, which can be either physical or virtual: Direct Connect (AWS), ExpressRoute (Azure), Dedicated Interconnect (GCP). The advantage of this solution lies in eliminating dependence on the public internet, ensuring a reliable, low-latency connection with increased security. However, these solutions require dedicated circuits between the data center and the cloud provider, which entails costs for the physical links, or the use of services from commercial partners that provide endpoints geographically closer to your local network.

Peering between two networks is used to enable direct communication between separate networks. These networks can belong to different organizations or may be separate internal networks within the same organization. When establishing peering between two networks, it’s crucial to ensure that the IP addresses of the two networks are distinct and non-overlapping. This is essential for ensuring proper traffic routing. In addition to on-premises/cloud connections, network peering is often utilized to connect local or cloud networks to services managed by external providers (e.g., SaaS applications).
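On AWS, peering two VPCs in the same account and region could be sketched as follows (VPC and route-table references are illustrative; note the requirement for non-overlapping CIDR blocks):

```hcl
# Peering between two VPCs with non-overlapping CIDR blocks,
# here assumed to be in the same account and region.
resource "aws_vpc_peering_connection" "prod_to_dev" {
  vpc_id      = aws_vpc.prod.id
  peer_vpc_id = aws_vpc.dev.id
  auto_accept = true
}

# Peering alone moves no traffic: a route must be added on each
# side (the symmetric dev -> prod route is omitted here).
resource "aws_route" "to_dev" {
  route_table_id            = aws_route_table.prod.id
  destination_cidr_block    = aws_vpc.dev.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.prod_to_dev.id
}
```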

Benefits

Access to Advanced Services
The leading cloud providers offer a wide range of advanced services. By connecting an on-premises environment to the cloud, companies can link their local services and applications to these new features without having to develop or manage them internally.
Disaster Recovery & Business Continuity
By using cloud resources for backup and disaster recovery, companies can enhance their ability to quickly restore data and applications in case of disruptions, minimizing downtime and ensuring service continuity.
Global Distribution
Cloud services offer the possibility to deploy resources in various geographical regions. This allows quick access for users worldwide, enhancing performance and the end-user experience.
