Platform Engineering
[Image: Quantyca Cloud Data Platform]


In today's continuously evolving digital landscape, integration between on-premises environments and cloud platforms has become crucial for innovation-oriented companies.

The primary challenges to address, such as dynamic infrastructure scalability, data exposure, development, and access to increasingly advanced services, have made it necessary to seek solutions that connect on-premises environments and cloud platforms as simply and directly as possible.

One of the key factors driving this need is operational flexibility. By leveraging cloud platforms, a company can optimize its resources, dynamically increasing processing capacity based on needs, reducing and closely monitoring infrastructure costs. Adopting a hybrid strategy allows businesses to harness the advantages of both worlds, maintaining control over sensitive data and critical workloads while fully utilizing cloud capabilities.

The need for integration between on-premises environments and cloud platforms is equally driven by the growing number of services and expertise offered by leading cloud providers. By connecting on-premises environments to cloud platforms such as AWS, Azure, or Google Cloud, it is possible to link your data to an ever-expanding range of managed services that may include areas like data transformation, advanced analytics, and artificial intelligence. This synergy between environments streamlines and accelerates innovation, reduces development times, and allows developers to prioritize value creation, largely delegating the management of underlying infrastructure.


When it becomes necessary to perform integration between on-premises environments and cloud platforms, it is always good to consider the following aspects:

Data protection plays a fundamental role, both during data transit to the cloud and in the “at-rest” phase. When accessing cloud services, it becomes necessary to plan and implement the best security policies. Another crucial aspect is identity and access management to ensure that what has already been defined in compliance with regulations and corporate policies in the on-premises environment is also properly configured in the cloud, including setting up user federation. This enables the establishment of unified monitoring and reporting mechanisms to ensure visibility and traceability of activities across the entire hybrid ecosystem.

Latency and performance can become a critical point in the integration between on-premises and cloud platforms. Data transfer between the two infrastructures can introduce unwanted delays, something that must be taken into account, especially in contexts with strict latency requirements (financial transaction management, real-time analytics, etc.). It is therefore necessary to optimize traffic between networks, possibly managing dynamic load balancing across multiple endpoints to mitigate potential reachability and latency issues. The adoption of technologies such as caching and geographic resource distribution can further improve the end-user experience.

The integration between on-premises environments and cloud platforms can lead to increased complexity in the overall architecture. Managing a hybrid ecosystem requires specific skills and tools: designing a scalable, resilient, and highly reliable architecture is essential but can be challenging. In some cases, it’s necessary to adapt to the existing resources locally, optimizing the use of the cloud in a way that aligns with what is already available. In these scenarios, the adoption of automation and orchestration tools is often recommended.


In our Data Strategy proposal, the adoption of cloud technologies has increasingly taken on a predominant role in the solutions provided to clients. As detailed in the “Migrate To Cloud” use case, this proposal is structured into four distinct phases:

  • Assessment
  • Foundation
  • Mobilization
  • Execution

It is during the Foundation phase that we explore and implement the best networking strategies for our clients’ specific use cases. In particular, what is called a Landing Zone is implemented on the chosen cloud provider.

The Landing Zone serves as a central hub for cloud management and allows the definition of various aspects within it. Each of the steps highlighted in the image is implemented using the principles of Infrastructure as Code (IaC), generating scripts that declaratively specify the various infrastructure components and their configurations.

Once the IaC tool is chosen (e.g., Terraform, especially in multi-cloud contexts), the created scripts enable precise management of all infrastructure components, including network management services.

VPC Organization

Virtual Private Clouds (VPCs) are one of the key components within a cloud infrastructure. They enable resource isolation in a virtualized environment, ensuring greater security and flexibility. Properly designing and organizing one or more VPCs is crucial to maximize their benefits.

When designing a VPC, it’s essential to plan the allocation of IP address spaces correctly using CIDR (Classless Inter-Domain Routing) blocks. This involves subdividing IP addresses into blocks that reflect scalability needs and logical resource segmentation.

One of the primary features of VPCs is the ability to create one or more subnets within them, specifying the CIDR block within the one defined for the VPC itself. This allows for an additional hierarchical level in resource organization and the application of security policies and general network management.
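As a minimal sketch of this kind of address planning, Python's standard `ipaddress` module can carve a VPC's CIDR block into subnets and verify the hierarchy. The ranges below are purely illustrative; a real address plan depends on your own network design.

```python
import ipaddress

# Illustrative VPC CIDR block; real values depend on your address plan.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets and take the first few for different tiers.
subnets = list(vpc.subnets(new_prefix=24))
public_subnet, private_subnet, data_subnet = subnets[:3]

print(public_subnet)   # 10.0.0.0/24
print(private_subnet)  # 10.0.1.0/24

# Every subnet must fall inside the VPC's CIDR block.
assert all(s.subnet_of(vpc) for s in subnets)
```

Planning subnets this way, before any resource is created, makes it easy to leave room for future growth and to keep logically related resources in clearly delimited ranges.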


Single VPC

The simplest topology that can be used is the single VPC, where all cloud resources created reside within it. This scenario is suitable for feasibility studies of cloud provider services, development environments, or small applications with minimal security requirements.

Even in the case of feasibility studies, it is advisable to have resources in at least two VPCs to test the necessary configurations for integrating and enabling communication between the two VPCs. This allows anticipating potential issues early on that could arise in production environments.

Multiple VPCs

This topology involves creating multiple separate VPCs within the same cloud account. Each VPC acts as an isolated entity with the ability to configure connections between them. This topology is ideal for segregating production and development environments or hosting applications with varying security requirements.

In the case of multiple VPCs, leveraging services provided by different cloud providers allows for various architectures. For example, central VPCs, referred to as “Hubs,” can serve as network managers for other satellite VPCs, known as “Spokes.” Alternatively, Transit VPCs can be used to connect other VPCs for inter-communication.

In the following paragraphs, the concepts of Point-to-Point and Hub & Spoke will be elaborated upon, including the management of on-premises networks, thus defining the hybrid cloud management model.

In the next phase, we create connections with infrastructures outside the cloud, such as a different cloud provider or an on-premises environment.

The two primary models of hybrid connections are Point-to-Point and Hub & Spoke. Both models have their specific characteristics.

The point-to-point model is based on direct and dedicated connections, with no intermediaries in the link between the on-premises environment and a specific cloud platform. This approach generally provides low latency, enabling fast and reliable data transfer. It is also straightforward and immediate to implement, reducing the number of architectural components and thus costs.

Direct connections between the on-premises environment and the cloud provider are established using VPNs or various Direct Connect services, while connections between the cloud provider's networks themselves use peering functionality.

However, the evident drawback of the point-to-point model lies in its limited scalability and flexibility. As the number of networks to interconnect increases, the number of connections grows quadratically: a full mesh of n networks requires n(n-1)/2 links. In these contexts, transitive routing is often absent, meaning that if network A is connected to network B, and network B is connected to network C, A and C cannot communicate through B; an additional direct connection between A and C is required.
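The scalability gap between the two topologies is simple arithmetic, sketched below: a full mesh needs one link per pair of networks, while a hub-based design needs only one link per network.

```python
# Number of direct links needed to interconnect n networks:
# full mesh (point-to-point) versus a central hub.

def full_mesh_links(n: int) -> int:
    # One dedicated link for every pair of networks.
    return n * (n - 1) // 2

def hub_and_spoke_links(n: int) -> int:
    # One link from each spoke network to the hub.
    return n

for n in (3, 5, 10):
    print(n, full_mesh_links(n), hub_and_spoke_links(n))
# With 10 networks, a full mesh already needs 45 links; a hub needs only 10.
```

This is why the point-to-point model stays attractive only while the number of interconnected environments remains small.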

Another negative aspect of this solution is the reduced manageability of networks. Without a central point for route management and network and subnet construction, it’s easier to encounter issues related to IP address overlapping.

The hub and spoke model is based on a centralized approach. The “hub” component serves as the central point for aggregation and management of connections, while the “spoke” components represent the various environments, whether they are on-premises or in the cloud.

In this scenario, network traffic flows through the hub, enabling more efficient and scalable connection management and simplifying overall architecture security and maintainability.

One of the key advantages of the hub and spoke model is its extensibility. Adding new environments, whether on-premises or in the cloud, only requires configuring a connection in the central hub. This approach significantly simplifies management and maintenance, making the hub and spoke model particularly suitable for rapidly growing businesses or those with a diverse presence across multiple cloud platforms.

The hub and spoke model may introduce a slight additional latency compared to a point-to-point connection, and it requires an additional architectural component, the central hub, incurring associated costs.

Network Security

At this point, the necessary VPCs have been defined, as well as the methods of connecting to the on-premises environment. All additional aspects related to security, described in more detail below, therefore need to be determined.

In general, by managing these aspects with an Infrastructure as Code (IaC) tool, it is possible to reuse pre-defined standard configurations within multiple VPCs, as defined by the networking security team.

Ingress and Egress Security Groups function as a virtual firewall, regulating inbound and outbound traffic to resources within the VPC. Each Security Group contains a list of security rules that determine which types of connections are allowed or denied and can be configured based on specific protocols, ports, and IP address ranges. These rules ensure fine-grained access control, enforcing that only authorized communications can reach resources within the VPC.

Security Groups are, therefore, a fundamental tool for ensuring the security of resources within a VPC, helping prevent unauthorized access and protecting the cloud environment from external attacks.
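The evaluation logic behind such rules can be sketched in a few lines: a connection is allowed only if at least one rule matches its protocol, port, and source range (default deny). The rules below are hypothetical examples, not a real provider API.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    protocol: str   # e.g. "tcp"
    port_from: int
    port_to: int
    cidr: str       # allowed source range

# Hypothetical ingress rules: SSH only from a private range, HTTPS from anywhere.
INGRESS = [
    Rule("tcp", 22, 22, "10.0.0.0/8"),
    Rule("tcp", 443, 443, "0.0.0.0/0"),
]

def is_allowed(protocol: str, port: int, source_ip: str, rules=INGRESS) -> bool:
    """Allow the connection only if at least one rule matches (default deny)."""
    src = ipaddress.ip_address(source_ip)
    return any(
        r.protocol == protocol
        and r.port_from <= port <= r.port_to
        and src in ipaddress.ip_network(r.cidr)
        for r in rules
    )

print(is_allowed("tcp", 22, "10.1.2.3"))     # True: SSH from the private range
print(is_allowed("tcp", 22, "203.0.113.9"))  # False: SSH from the internet
```

The actual enforcement happens in the cloud provider's network layer, but the allow-list, default-deny semantics are the same.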

Network Access Control Lists (NACLs), like Security Groups, consist of a set of security rules that determine which types of traffic are allowed and denied. However, unlike Security Groups, in AWS for example, NACLs are applied at the subnet level within a VPC. This means that the same rules apply to all resources within a specific subnet, whereas Security Groups are applied to the individual resources (e.g., VM instances) associated with them. Another distinguishing aspect is that NACLs are stateless: they evaluate every packet crossing the subnet boundary, so return traffic must be explicitly allowed, while Security Groups are stateful and automatically allow responses to permitted connections.

Each VPC is created with at least one default route table, and it is then possible, and advisable, to create additional ones to tailor traffic behavior to specific needs within the VPC itself.

Route Tables contain a set of rules that indicate where to send packets based on their destination address. From a security perspective, they can be configured to route certain communications through security devices or services, such as firewalls or monitoring systems. Additionally, they allow for an additional level of segmentation within subnets, isolating various logically connected resource groups. This enhances protection by restricting access to only the resources that need to communicate with each other.
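The core mechanism of a route table, longest-prefix matching, can be sketched in a few lines. The routes and target names below are hypothetical placeholders for illustration only.

```python
import ipaddress

# Hypothetical route table: the most specific (longest) matching prefix wins.
ROUTES = {
    "10.0.0.0/16": "local",          # traffic inside the VPC
    "192.168.0.0/16": "vgw-onprem",  # on-premises ranges via a VPN gateway
    "0.0.0.0/0": "igw-internet",     # everything else via the Internet Gateway
}

def next_hop(destination_ip: str) -> str:
    dest = ipaddress.ip_address(destination_ip)
    best = None
    for cidr, target in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)  # keep the most specific matching route
    return best[1]

print(next_hop("10.0.4.7"))      # local
print(next_hop("192.168.1.10"))  # vgw-onprem
print(next_hop("8.8.8.8"))       # igw-internet
```

Routing security-sensitive traffic through an inspection appliance is then just a matter of pointing the relevant prefix at that appliance instead of the direct target.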

The Internet Gateway is an access point that enables resources within the VPC to communicate directly with the internet. It acts as a bridge between resources within the VPC and the outside world. Typically, the Internet Gateway is configured using Route Tables to ensure that only authorized traffic has access to the internet. Additional layers of control can be defined using Security Groups and NACLs.

On the other hand, the NAT Gateway (Network Address Translation) allows resources within the VPC to access external resources, including the internet, without revealing their private IP addresses. This provides an additional layer of security because resources remain hidden behind the NAT Gateway’s IP address.

Network Enabling 

Once the previous points have been established and the fundamental aspects of the networks have been designed and configured, the final step involves enabling the connections, utilizing one or more of the various options provided by cloud providers.

The first option for bridging on-premises and cloud environments is to use a Virtual Private Network (VPN). VPNs are tools that enable secure and private connections to public networks, protecting communication between remote sites and cloud resources. There are various types of VPNs, each with specific features that cater to the connectivity and security needs of organizations.

An example is provided by site-to-site VPNs, widely used to connect on-premises networks to virtual networks within major cloud providers. This configuration leverages the creation of a “secure tunnel” through the internet, allowing data transfer and network traffic between the two environments. This type of VPN is ideal for connecting corporate data centers to cloud resources, as the cloud network is seen as an extension of the corporate network. The main challenge of this solution is that performance can be affected by latency and congestion in the public internet.

To overcome the latency and security limitations of VPNs, it is possible to utilize private and dedicated connections between on-premises infrastructure and cloud resources. Each cloud provider offers its own direct connection service, which can be both physical and virtual: Direct Connect (AWS), ExpressRoute (Azure), Dedicated Interconnect (GCP). The advantage of this solution lies in eliminating the dependency on the public internet, ensuring a reliable, low-latency, and more secure connection. However, these solutions require the implementation of dedicated circuits between data centers and the cloud provider. Consequently, there may be a need to address the costs associated with the necessary physical connections or the use of commercial partners’ services that provide endpoints geographically closer to your local network.

Peering between two networks is used to allow direct communication between two separate networks. These networks can belong to different organizations or can be separate internal networks within the same organization. When establishing peering between two networks, it is important to ensure that the IP addresses of the two networks are distinct and non-overlapping. This is crucial to ensure proper routing of traffic. In addition to on-premises/cloud connections, network peering is often used to connect local networks or cloud to services provided by external vendors (e.g., SaaS applications).
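The non-overlap requirement can be checked up front with the standard `ipaddress` module, as in this small sketch:

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """Peering requires distinct, non-overlapping address spaces."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True: disjoint ranges
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False: the /24 sits inside the /16
```

Running this kind of check across all planned VPC and on-premises ranges, ideally as part of the IaC pipeline, catches overlapping allocations before any peering is provisioned.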


Access to Advanced Services
Leading cloud providers offer a wide range of advanced services. By connecting an on-premises environment to the cloud, companies can link their local services and applications to these new features without having to develop or manage them internally.
Disaster Recovery and Business Continuity
By using cloud resources for backup and disaster recovery, companies can enhance their ability to quickly restore data and applications in the event of disruptions, minimizing downtime and ensuring service continuity.
Global Distribution
Cloud services offer the ability to deploy resources in various geographic regions. This allows for rapid access for users worldwide, improving performance and the end-user experience.
