Multi-Cloud Strategy: Transitioning to a Cloud Environment (Part 3 of 3)

By Vishal Deshpande

Companies and government agencies are increasingly benefiting from cloud services. The proliferation of cloud offerings gives them options to choose the right service for each business workload. However, by opting for this multi-cloud approach, organizations face risks to their governance structure, data architecture, and security controls. To address these challenges, organizations need to develop and implement a multi-cloud strategy. When implemented correctly, such a strategy lets organizations maintain their cloud instances within a single security architecture, securing the movement of data across applications and ultimately reducing cybersecurity risk. Today, in the final installment of FI Consulting’s series on multi-cloud strategy, we explore transitioning to cloud environments and how the process benefits from centralized cloud and data governance controls established through the Cloud Center of Excellence (CoE).

What to consider when transitioning to a multi-cloud environment

1. Infrastructure Resource Provisioning

The infrastructure layer transitions from running dedicated servers at limited scale to a dynamic environment where organizations can easily adjust to increased demand by spinning up thousands of servers and scaling them down when not in use. As architectures and services become more distributed, the sheer volume of compute nodes increases significantly. Cloud resource provisioning and architectures need to strike a balance between cost and performance, making good use of auto-scaling as demand requires. And with multiple cloud platforms in use, it becomes critical that cloud architectures are designed consistently across platforms and adhere to the cloud and data governance and security control structures established at the organizational level.
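
As a concrete illustration of balancing cost and performance, the sketch below attaches a target-tracking scaling policy to a hypothetical EC2 Auto Scaling group using boto3, so capacity follows CPU demand rather than being sized for peak load; the group name and target value are assumptions, and each cloud platform offers an equivalent construct.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Attach a target-tracking policy to a hypothetical Auto Scaling group so
# capacity tracks average CPU utilization instead of a fixed server count.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",          # hypothetical group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                      # assumed cost/performance target
    },
)
```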

IT teams should start by implementing reproducible infrastructure-as-code practices, and then layer compliance and governance workflows on top to ensure appropriate controls. Automation using tools such as Terraform, Pulumi, Chef, Puppet, and Ansible ensures consistent setups for infrastructure, application, data, and security architectures across all SDLC environments and all cloud platforms.

Reproducible infrastructure as code

Infrastructure provisioning should deliver reproducible infrastructure as code, giving DevOps teams a way to plan and provision resources inside CI/CD workflows using familiar tools throughout. Platforms such as Terraform and Pulumi, to name a few, support this. Treating infrastructure as code also brings consistency to the infrastructure setup across multiple cloud platforms, because infrastructure can be developed as reusable, modular code artifacts that are combined into an overarching architecture template for project infrastructure provisioning. Applying code version-control policies lets the infrastructure team build the project infrastructure incrementally and adopt configuration management policies at the infrastructure level for better management of the project architecture. It also allows the architecture to scale and change as required in an evolutionary manner, bringing much-needed flexibility and agility to the underlying project architecture.
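
A minimal sketch of a reusable infrastructure-as-code artifact, assuming Pulumi with the classic pulumi_aws provider and hypothetical bucket and team names: a small factory function enforces the same tagging convention everywhere it is reused, and Terraform modules serve the same purpose in HCL.

```python
import pulumi
import pulumi_aws as aws

def standard_bucket(name: str, environment: str) -> aws.s3.Bucket:
    """Reusable module: every bucket created this way carries the
    organization-mandated tags, regardless of project or environment."""
    return aws.s3.Bucket(
        f"{name}-{environment}",
        tags={
            "environment": environment,
            "owner": "data-platform",   # hypothetical owning team
            "managed-by": "pulumi",
        },
    )

# Compose the reusable artifact into a project-level template.
raw = standard_bucket("raw-data", "dev")
curated = standard_bucket("curated-data", "dev")

pulumi.export("raw_bucket", raw.id)
pulumi.export("curated_bucket", curated.id)
```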

Compliance and management

There is a need to enforce policies on the type of infrastructure created, how it is used, and which teams get to use it. The cloud CoE can create these policies, which the infrastructure provisioning team then applies consistently across all cloud platforms and resources, automating enforcement where possible through infrastructure-as-code modules.
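
One way to automate such policies is a policy-as-code check that runs in the same CI/CD workflow as the infrastructure code. The sketch below assumes Pulumi's CrossGuard policy SDK (pulumi_policy) and a hypothetical tagging rule; Sentinel for Terraform or Open Policy Agent can fill the same role.

```python
from pulumi_policy import EnforcementLevel, PolicyPack, ResourceValidationPolicy

REQUIRED_TAGS = ("owner", "environment")  # hypothetical organization-wide rule

def check_required_tags(args, report_violation):
    # Apply the rule to S3 buckets; a real policy pack would cover more types.
    if args.resource_type == "aws:s3/bucket:Bucket":
        tags = args.props.get("tags") or {}
        for tag in REQUIRED_TAGS:
            if tag not in tags:
                report_violation(f"Bucket is missing required tag '{tag}'.")

PolicyPack(
    name="org-baseline",
    enforcement_level=EnforcementLevel.MANDATORY,
    policies=[
        ResourceValidationPolicy(
            name="required-tags",
            description="All buckets must carry the organization's required tags.",
            validate=check_required_tags,
        ),
    ],
)
```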

2. Security controls

The security layer transitions from a fundamentally “high-trust” world enforced by a strong perimeter and firewall to a “zero-trust” environment with no clear or static perimeter. As a result, the foundational assumptions for security shift from IP-based to identity-based access to resources. The cloud CoE needs to establish security controls and protocols based on industry-standard security frameworks such as the NIST 800 series and FedRAMP, extended with specific controls required by the organization. Data security and access control policies, data protection using encryption at rest and in transit, and infrastructure security using appropriate virtual network configurations with public and private subnets need to be established as base controls across all cloud platforms. Along with these, guidelines need to be established for the following areas.

Secrets management

Secrets management is the central storage, access control, and distribution of dynamic secrets, using either cloud-native key management services such as AWS Key Management Service (KMS) or Azure Key Vault, or a cloud-agnostic platform such as HashiCorp Vault. Instead of depending on static IP addresses, it is crucial to integrate with identity-based access systems such as AWS IAM and Azure AD to authenticate to and access services and resources.
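
A minimal sketch of reading a dynamic secret from a central store, assuming HashiCorp Vault's Python client (hvac) and a hypothetical KV path; the cloud-native key and secret services expose equivalent SDK calls.

```python
import hvac

# The address and token would normally come from the environment or an
# identity-based auth method (AppRole, OIDC), never hard-coded values.
client = hvac.Client(url="https://vault.example.internal:8200")
client.token = "s.placeholder-token"  # placeholder for illustration only

# Read a secret from the KV v2 engine at a hypothetical path.
response = client.secrets.kv.v2.read_secret_version(path="myapp/database")
db_password = response["data"]["data"]["password"]
```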

Encryption as a service

Additionally, enterprises need to encrypt application data at rest and in transit. This requires encryption as a service: a consistent API for key management and cryptography that lets application developers perform a single service integration to protect data across multiple environments.
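
A minimal sketch of such a service, assuming boto3 and a hypothetical AWS KMS key alias: the two helpers give applications one consistent encrypt/decrypt API, and the same wrapper could delegate to Azure Key Vault or Vault's transit engine instead.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")
KEY_ID = "alias/app-data-key"  # hypothetical key alias managed by the CoE

def encrypt(plaintext: bytes) -> bytes:
    """Encrypt small payloads or data keys with the managed KMS key.
    (KMS direct encryption is limited to ~4 KB; larger payloads would use
    envelope encryption with a generated data key.)"""
    return kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext)["CiphertextBlob"]

def decrypt(ciphertext: bytes) -> bytes:
    """Decrypt data previously encrypted with the same key."""
    return kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

token = encrypt(b"account-number-1234")
print(decrypt(token))
```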

Identity and Access Management [Identity as a Service (IDaaS)]

Centralizing identity management to control access to cloud platforms and resources is critical to a multi-cloud strategy. While each cloud platform provides a native identity management solution, it is important to centralize this with a cloud-agnostic identity and access management platform whose services, such as single sign-on and federation, can be applied consistently across multiple cloud platforms.
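
As a sketch of identity-based access rather than static credentials, the snippet below exchanges the caller's identity for short-lived, role-scoped AWS credentials via STS; the role ARN is a hypothetical example, and a centralized IDaaS platform would typically federate users into such roles on each cloud.

```python
import boto3

sts = boto3.client("sts")

# Exchange the caller's identity for short-lived credentials scoped to a role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/analytics-readonly",  # hypothetical
    RoleSessionName="report-job",
    DurationSeconds=900,
)
creds = resp["Credentials"]

# Use the temporary credentials instead of long-lived access keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```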

3. Networking considerations

The networking layer transitions from being heavily dependent on the physical location and IP address of services and applications to using a dynamic registry of services for discovery, segmentation, and composition. Appropriate firewall configurations need to be established, along with secure channels between on-premises resources and the various cloud platforms in a WAN setup, to ensure data security as data moves between platforms.

Networking services should provide a service registry and service discovery capabilities. The registry can be queried programmatically to enable service discovery or to drive network automation of API gateways, load balancers, firewalls, and other critical middleware components. As the number of services grows, organizations can look to replace the service registry with a service mesh. The two main goals of a service mesh are to gain insight into previously invisible service communication layers and to gain full control of all microservice communication logic, such as dynamic service discovery, load balancing, timeouts, fallbacks, retries, circuit breaking, distributed tracing, and security policy enforcement between services.
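
A minimal sketch of programmatic service discovery against a registry, assuming a local Consul agent and a hypothetical "orders" service: the health API returns only passing instances, and the resulting addresses can feed load balancer or API gateway automation.

```python
import requests

CONSUL = "http://localhost:8500"  # assumed local Consul agent address

# Query the registry for healthy instances of a hypothetical "orders" service.
resp = requests.get(
    f"{CONSUL}/v1/health/service/orders",
    params={"passing": "true"},
)
resp.raise_for_status()

# Each entry contains the node and service records; fall back to the node
# address when the service does not register its own.
endpoints = [
    (entry["Service"]["Address"] or entry["Node"]["Address"],
     entry["Service"]["Port"])
    for entry in resp.json()
]
print(endpoints)
```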

4. Application design and architectures

The runtime layer shifts from deploying artifacts to a static application server to deploying applications with a scheduler atop a pool of infrastructure provisioned on demand. In addition, new applications have become collections of services that are dynamically provisioned and packaged in multiple ways, from virtual machines to containers in a service-oriented architecture. The use of multiple clouds affects the following aspects of application design, development, and deployment:

Multi-Cloud Application Delivery

As new applications become increasingly distributed, legacy applications also need to be managed with more agility. A flexible orchestrator is required to deploy and manage legacy as well as modern service-oriented applications for all types of workloads: from long-running services, to short-lived batch jobs, to system agents. Containerizing services and applications eases their deployment across multiple environments on various platforms, including on-premises infrastructure.

Mixed Workload Orchestration

Many new workloads are developed with container-based packaging, with the intent to deploy to Kubernetes or other container management platforms. There is also a growing trend toward serverless computing, as all major cloud platforms provide the ability to deploy services as functions.
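
A minimal sketch of a function-as-a-service workload, using the AWS Lambda handler convention; Azure Functions and Google Cloud Functions use analogous entry points, which is one reason to keep serverless services thin and portable.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style entry point: receives an event payload and
    returns an HTTP-shaped response for an API gateway integration."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```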

Multi-Datacenter Workload Orchestration

As teams roll out global applications across multiple data centers or across cloud boundaries, it becomes imperative to provide orchestration and scheduling for these applications, supported by the infrastructure, security, and networking resources and policies needed to ensure the applications are deployed successfully.

5. Data architectures

Multi-cloud architectures offer potential benefits for databases and data-centric solutions, but they involve greater complexity, cost, and effort than single-cloud architectures. Proper data architectures need to be created to ensure the integrity of data across applications deployed on multiple cloud platforms. Organizations benefit from creating a central data warehouse and developing data-sharing structures that disseminate data to the appropriate cloud platforms for application and business service needs. However, this will not fit every need for data access and data controls. As a rule of thumb, data should be located on the cloud platform that minimizes the movement of large volumes of data across cloud boundaries. For example, if you are running your data analytics and machine learning workloads on AWS, then keeping the data in AWS resources such as S3, Redshift, DynamoDB, or Aurora would be ideal, as it makes the data easily accessible to those workloads. For applications running on a different cloud platform, private endpoints to these resources can be created and integrated into the application for direct access to the data on AWS. A safer approach is to create a service layer that allows external applications to interact with and access the data through REST API endpoints for these data services. A service layer allows greater flexibility in implementing access control and security policies for applications running across multiple cloud environments.
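
A minimal sketch of such a service layer, assuming Flask, boto3, and a hypothetical curated S3 bucket: applications on other cloud platforms call the REST endpoint instead of reaching into the storage layer directly, so access control and auditing stay in one place.

```python
import boto3
from flask import Flask, abort

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "analytics-curated"  # hypothetical curated data bucket

@app.route("/datasets/<key>")
def get_dataset(key):
    """Serve curated objects through a controlled API instead of exposing
    the storage layer directly to applications on other clouds."""
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=f"published/{key}.json")
    except s3.exceptions.NoSuchKey:
        abort(404)
    return app.response_class(obj["Body"].read(), mimetype="application/json")

if __name__ == "__main__":
    app.run(port=8080)
```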

6. Security architectures

Security in a multi-cloud setup needs to be carefully designed to support infrastructure security across cloud environments as well as data security at rest and in transit as data moves across different cloud platforms. Each cloud platform provides the technology stack to create a secure private cloud and to configure secure communication channels for moving data across cloud boundaries. However, these need to be configured carefully, with strict access control policies based on the principle of least privilege. The cloud CoE can define organization-wide security controls and policies and oversee their implementation across all the cloud platforms the organization uses. Additional security controls through proper RBAC policies need to be designed and defined for Platform-as-a-Service and Software-as-a-Service applications such as Office 365, SharePoint, Salesforce, and other services used by the organization.

Another aspect of security to consider when designing cloud architectures is securing the various SDLC environments, such as Dev, Test, Stage, and Production. The Dev and Test environments need to be separate from Stage and Production and can have different security controls that facilitate faster development cycles. Data security controls for these lower environments must prevent production data from ending up in them; sample and mocked data should be created for development and test, with real data restricted to the higher environments under a stricter RBAC policy. All cloud platforms provide native constructs to create these environments. For example, one way to set up private and secure environments on AWS is to use AWS Control Tower and AWS Organizations to mirror the organization's structure on AWS. This enables organizations to create department-specific private AWS environments with fine-grained security and governance controls in addition to the broad organization-wide controls.
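
As a small illustration of the environment-separation idea, the snippet below uses boto3 to create a department-level organizational unit under the organization's root; in practice AWS Control Tower would layer guardrails on top, and the OU name is a hypothetical example.

```python
import boto3

org = boto3.client("organizations")

# Find the organization's root and create a department-scoped OU under it.
root_id = org.list_roots()["Roots"][0]["Id"]
ou = org.create_organizational_unit(
    ParentId=root_id,
    Name="DataScience-Dev",  # hypothetical department/environment OU
)
print(ou["OrganizationalUnit"]["Id"])
```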

Conclusion

When considering a multi-cloud strategy, organizations should investigate and design for data security, availability, data-sharing controls, data governance controls, and network security. These should be part of the basic design for data and application needs in a multi-cloud setup and should be established as an overarching organizational control structure. Establishing a Center of Excellence for cloud architecture, design, and governance is an important first step toward implementing the strategy.

If you are interested in learning more about how FI Consulting can support your organization in developing a successful multi-cloud strategy, please email contact@ficonsulting.com or call us at 571.255.6900.