Navigating the Cloud-Native Journey: An Enterprise Perspective


Introduction: The Cloud-Native Imperative

The IT landscape is undergoing a fundamental transformation. Organizations that operated with stable, predictable on-premise infrastructure for decades now compete with cloud-native companies that deploy changes multiple times daily, scale globally within minutes, and innovate at unprecedented velocity. This shift isn't just technological—it represents a complete reimagining of how enterprises should think about infrastructure, application architecture, and operations.

For enterprise architects and infrastructure managers, cloud-native adoption represents both tremendous opportunity and significant challenge. The opportunity lies in the tangible benefits: 61% of organizations adopting cloud-native technologies report process optimization, 57% see increased deployment agility, and 55% achieve faster product launches. Yet the challenge is equally substantial. Nearly half of organizations struggle to understand application dependencies during migration, underestimate project complexity, and encounter budget overruns when cloud initiatives don't match initial expectations.

The path from traditional on-premise architecture to cloud-native requires more than new tools—it demands rethinking fundamental assumptions about how applications are built, deployed, scaled, and operated. This journey, when done correctly, delivers competitive advantage and operational excellence. When done poorly, it consumes resources without delivering proportional value.

The Traditional On-Premise Model: Understanding What You're Leaving Behind

Before exploring cloud-native architectures, understanding traditional on-premise infrastructure is essential. For decades, enterprise applications ran on dedicated hardware within corporate data centers. This architecture offered some compelling advantages: complete control over infrastructure, no dependency on external providers, and full visibility into application performance. For stable, predictable workloads, this model worked reasonably well.

However, fundamental constraints limited traditional on-premise infrastructure:

Capital Expenditure Burden: Building data centers required massive upfront investment in hardware, networking equipment, security systems, and physical facilities. These capital expenses consumed significant resources that organizations could have deployed toward product development or customer-facing innovation. Capacity planning required purchasing infrastructure 12-24 months before demand materialized—leading to either over-provisioning that wasted resources or under-provisioning that constrained growth.

Inflexibility and Long Lead Times: Provisioning new servers took weeks. Expanding capacity involved purchasing hardware, coordinating with data center teams, performing installations, and integrating infrastructure into existing systems. This inflexibility meant that responding to unexpected demand spikes or market opportunities took months rather than minutes.

Operational Complexity: On-premise infrastructure required dedicated teams managing hardware maintenance, networking, storage systems, security patches, and capacity monitoring. These operational needs consumed resources that could have focused on application development or business value creation. A single security vulnerability could require coordinating patches across potentially hundreds of servers, a patch-coordination nightmare.

Scaling Challenges: Traditional applications scaled vertically—by adding processors, memory, and storage to existing machines. This approach had hard limits; even the most powerful hardware eventually hit capacity ceilings. Horizontal scaling—distributing applications across multiple machines—required extensive application redesign to handle distributed communication, state management, and consistency challenges.

Limited Innovation: New technologies took years to reach enterprise infrastructure. The pace of change in enterprise IT lagged consumer technology by 5-10 years, meaning organizations couldn't leverage cutting-edge innovations for competitive advantage.

Cloud-native architectures address these constraints fundamentally: they replace capital expenditure with operational expense, enable on-demand scaling, abstract infrastructure complexity, and accelerate technology adoption.

Cloud-Native Fundamentals: A New Paradigm

Cloud-native applications are specifically designed to take advantage of cloud computing's fundamental characteristics: on-demand resource provisioning, distributed infrastructure, and elastic scaling. Rather than adapting applications built for static on-premise servers, cloud-native means building applications assuming dynamic, distributed, elastic infrastructure.

Several core characteristics define cloud-native architectures:

Containerization: Applications run in containers—lightweight, self-contained execution environments that bundle application code, dependencies, and configuration. Containers are significantly more efficient than virtual machines; they share the host operating system kernel rather than running complete guest operating systems. This efficiency enables packaging applications more densely—running dozens or hundreds of containers where only a few virtual machines would fit.

Microservices Architecture: Rather than building monolithic applications where all functionality runs in a single process, cloud-native applications decompose into microservices—small, independently deployable services focused on specific business capabilities. Each microservice runs in its own container, communicates with other services through APIs, and scales independently based on demand. This architecture enables teams to develop and deploy services independently, dramatically improving development velocity.

Infrastructure Abstraction: Cloud-native applications don't care about underlying infrastructure. They don't know whether they're running on AWS, Azure, Google Cloud, or on-premise Kubernetes clusters. Applications declare what resources they need (CPU, memory, storage) and let the platform figure out how to provision them.

DevOps and Automation: Cloud-native requires DevOps practices where development and operations teams collaborate throughout the application lifecycle. Continuous integration/continuous deployment (CI/CD) pipelines automate testing and deployment, enabling frequent releases. Infrastructure-as-Code treats infrastructure definition as code, enabling version control and consistent reproducibility.

Resilience by Design: Cloud-native applications expect infrastructure failures. Rather than trying to prevent failures through redundant hardware, cloud-native applications handle failures gracefully. Services implement circuit breakers, retry logic, and graceful degradation. Platforms provide automatic recovery—if a container crashes, the orchestration platform automatically restarts it.
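One of these resilience techniques, retry with exponential backoff, can be sketched in a few lines of Python. The function and thresholds below are illustrative, not taken from any particular framework:

```python
import time

def retry_with_backoff(operation, max_attempts=3, base_delay=0.1):
    """Call `operation`, retrying failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # all attempts exhausted; surface the failure
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# An illustrative flaky dependency that fails twice, then recovers.
calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky))  # "ok" on the third attempt
```

Production services typically combine this with a circuit breaker so that a dependency that keeps failing is skipped entirely rather than retried forever.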

Docker: The Container Revolution

Docker revolutionized application packaging by making containerization practical and accessible. A Docker container is a standardized, immutable package containing application code, runtime, dependencies, and configuration. The same container runs identically on a developer's laptop, in testing environments, in staging, and in production—eliminating the "it works on my machine" problem that plagued software development for decades.

How Docker Transforms Application Delivery

Traditional application deployment involved:

  1. Developers writing code targeting a specific runtime environment
  2. Operations teams installing that runtime and dependencies on production servers
  3. Complex compatibility matrices tracking which versions worked together
  4. Hours or days of troubleshooting when production environments drifted from development

Docker eliminates this complexity. A Dockerfile specifies the complete environment:

# Small base image with Node 18
FROM node:18-alpine
WORKDIR /usr/src/app
# Copy manifests first so the npm install layer is cached
COPY package*.json ./
RUN npm install
# Then copy the application source
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]

This Dockerfile creates an immutable container image that runs identically everywhere. Developers can test the exact production environment locally. Operations teams know exactly what they're deploying. Configuration drift—where production systems gradually diverge from documented specifications—becomes nearly impossible.

Docker's Enterprise Benefits

Consistency: The same container image running in development, testing, staging, and production eliminates entire categories of deployment problems. The friction between development and operations shrinks dramatically.

Density: Containers are lightweight, enabling organizations to run far more applications per physical server than with virtual machines. This improved resource utilization directly reduces infrastructure costs.

Rapid Provisioning: Creating new container instances takes seconds. Responding to traffic spikes by spinning up additional containers requires minutes rather than the weeks needed to provision physical servers.

Portability: Docker containers run anywhere Docker runs—reducing vendor lock-in and enabling hybrid deployment strategies.

Kubernetes: Enterprise Container Orchestration

Docker solves the problem of packaging applications into containers. But managing hundreds or thousands of containers across a distributed infrastructure introduces new challenges: which server should each container run on? How do you handle container failures? How do you distribute load across multiple container instances? How do you update containers without service disruption?

Kubernetes (often abbreviated k8s) addresses these orchestration challenges. Originally developed at Google and now maintained by the Cloud Native Computing Foundation, Kubernetes has become the de facto standard for container orchestration in enterprises. By 2024, 93% of organizations were using, piloting, or evaluating Kubernetes—demonstrating its dominance.

Kubernetes Core Concepts

Pods: The smallest deployable unit in Kubernetes is a Pod—one or more containers that must run together. Most Pods contain a single container, but occasionally multiple tightly-coupled containers run in the same Pod, sharing network namespace and storage.

Services: Pods are ephemeral—they can be created and destroyed dynamically based on load. Services provide stable endpoints for accessing Pods, managing load balancing and service discovery automatically. When you request a service, Kubernetes routes traffic to any of the currently running Pods providing that service.

Deployments: Deployments describe desired application state and Kubernetes ensures actual state matches. You specify "I want 5 replicas of my service running" and Kubernetes maintains that configuration, automatically restarting failed Pods, scaling up during traffic spikes, and scaling down during low-demand periods.
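The declare-and-converge model can be illustrated with a toy reconciliation step in Python. This is a drastic simplification of what Kubernetes controllers actually do, with invented data structures:

```python
def reconcile(desired_replicas, running_pods):
    """Return the actions needed to converge actual state to desired state.

    A toy version of a controller's reconciliation step: compare the
    declared replica count with what is actually running, then start
    or stop pods to close the gap.
    """
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return [("start", i) for i in range(diff)]
    if diff < 0:
        return [("stop", pod) for pod in running_pods[:-diff]]
    return []  # actual state already matches desired state

# A Deployment declares 5 replicas, but only 3 pods survived a node failure:
print(reconcile(5, ["pod-a", "pod-b", "pod-c"]))  # two "start" actions
```

Kubernetes runs this kind of loop continuously, so the system self-corrects whenever actual state drifts from declared state.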

Persistent Storage: Kubernetes provides abstractions for managing storage, enabling applications to persist data even when Pods are destroyed and recreated.

ConfigMaps and Secrets: Applications need configuration and sensitive data like database credentials. ConfigMaps provide non-sensitive configuration; Secrets handle sensitive information securely.

Kubernetes Operations at Scale

Automatic Scaling: Kubernetes monitors CPU usage and other metrics, automatically scaling up when demand increases and scaling down to conserve resources when demand decreases. This elasticity means organizations pay for infrastructure only when they need it, dramatically reducing costs compared to static capacity provisioning.
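The scaling rule documented for Kubernetes' Horizontal Pod Autoscaler is desired = ceil(current × currentMetric / targetMetric). A minimal Python sketch, omitting the real HPA's tolerance bands and stabilization windows:

```python
import math

def desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct):
    """Core scaling rule of the Kubernetes Horizontal Pod Autoscaler:
    desired = ceil(current * currentMetric / targetMetric).
    The real HPA adds tolerance bands and stabilization windows."""
    return math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)

# 4 replicas averaging 90% CPU against a 60% target -> scale up to 6.
print(desired_replicas(4, 90, 60))  # 6
# Demand drops to 20% average CPU -> scale down to 2.
print(desired_replicas(6, 20, 60))  # 2
```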

Self-Healing: When a container crashes, Kubernetes automatically restarts it. If a node (physical or virtual machine) fails, Kubernetes reschedules Pods to healthy nodes. Applications remain available even when underlying infrastructure fails, provided sufficient capacity exists.

Rolling Updates: Updating applications in Kubernetes doesn't require downtime. Kubernetes gradually replaces old Pods with new ones, ensuring service continues throughout the deployment. If issues emerge, you can immediately roll back to the previous version.

Service Mesh Integration: Service meshes like Istio add sophisticated networking capabilities—traffic management, security policies, observability, and failure injection for testing resilience.

Real-World Enterprise Impact

Organizations deploying applications on Kubernetes report significant operational improvements. A financial services company migrated from monolithic on-premise infrastructure to cloud-native Kubernetes deployments. The results: 50% reduction in deployment time, 40% improvement in system reliability, and 30% cost reduction through improved resource utilization. The organization could deploy code changes multiple times daily instead of monthly releases, enabling rapid response to market conditions and customer feedback.

The Migration Journey: From On-Premise to Cloud-Native

Migrating from on-premise infrastructure to cloud-native represents a substantial organizational transformation. Success requires a systematic approach that addresses technical, operational, and cultural dimensions.

Assessment and Planning Phase

Successful migrations begin with rigorous assessment. Organizations must understand current applications, dependencies, and constraints before determining what should move to the cloud and how.

Application Inventory: Create a comprehensive inventory of all applications, including:

  • Architecture and dependencies
  • Data requirements and storage patterns
  • Scalability characteristics
  • Security and compliance requirements
  • Integration points with other systems

Readiness Assessment: Evaluate organizational readiness across multiple dimensions:

  • Technical skills in cloud-native technologies
  • DevOps maturity and practices
  • Organizational structure and team composition
  • Security and compliance capabilities
  • Network and infrastructure conditions

Cost Modeling: A common pitfall is underestimating cloud migration costs. Comprehensive cost modeling should include:

  • Data transfer costs (moving terabytes of data to cloud can be expensive)
  • Licensing costs for cloud services
  • Operational costs including monitoring, logging, and security services
  • Training and skill development costs
  • Timeline and personnel costs for migration projects

Organizations that model costs carefully typically allocate 15-25% contingency for unforeseen expenses. Those ignoring this guidance commonly experience 30-50% budget overruns.
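Applying the contingency guidance above is simple arithmetic; a quick sketch with illustrative figures:

```python
def migration_budget(base_estimate, contingency=0.25):
    """Add a contingency buffer to a base migration estimate.
    The 25% default sits at the top of the 15-25% range noted above;
    all figures here are illustrative."""
    return base_estimate * (1 + contingency)

# A $2.0M base estimate with 25% contingency budgets $2.5M.
print(migration_budget(2_000_000))  # 2500000.0
```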

Migration Strategy Selection

Different applications require different migration approaches:

Lift and Shift (Rehost): Take an application exactly as it is and move it to cloud infrastructure. Minimal changes, fastest implementation, but doesn't capture cloud-native benefits. Suitable for stable applications with no immediate need for cloud-native features.

Replatform (Migrate and Optimize): Make minor changes during migration to improve cloud compatibility and performance. Balances speed with capturing some cloud benefits. Suitable for applications where minor improvements yield significant benefits.

Refactor and Re-architect: Decompose monolithic applications into microservices, containerize services, and deploy on Kubernetes. Maximum effort but captures full cloud-native benefits. Suitable for applications requiring frequent updates and scalability.

Repurchase (Replace): Migrate to SaaS alternatives rather than maintaining and migrating legacy applications. Often optimal for generic business functions (HR, Finance, CRM) where SaaS solutions offer superior capabilities.

Organizations typically use a portfolio approach, applying different strategies to different applications based on complexity, criticality, and cloud benefit potential.

The Strangler Pattern: Incremental Migration

One powerful migration pattern is the Strangler Fig Pattern, named for the strangler fig, a vine that gradually envelops and replaces its host tree. Rather than attempting a big-bang migration, you incrementally migrate one business capability at a time.

The pattern works as follows: place an API Gateway or reverse proxy in front of the monolithic application. Initially, the gateway routes all requests to the existing application. As you build cloud-native microservices, migrate one business capability at a time, routing requests for that capability to the new microservice while other requests continue reaching the monolith. Over time, the monolith shrinks as capabilities migrate to cloud-native services.
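The routing logic at the heart of the pattern can be sketched as a toy prefix router in Python. The route table and service names are invented for illustration; a real deployment would configure an API gateway or reverse proxy instead:

```python
# Capabilities already extracted from the monolith (illustrative names).
MIGRATED_PREFIXES = {
    "/orders": "http://orders-service",
    "/billing": "http://billing-service",
}
MONOLITH = "http://legacy-monolith"

def route(path):
    """Send migrated capabilities to microservices and everything else
    to the monolith, which shrinks as prefixes are added above."""
    for prefix, backend in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return backend
    return MONOLITH

print(route("/orders/42"))    # http://orders-service
print(route("/inventory/7"))  # http://legacy-monolith
```

Each completed migration is just one more entry in the route table, which is why the pattern can be paused or resumed at any point.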

This approach provides several advantages: reduced risk through incremental migration, continuous value delivery as services migrate, minimal disruption to ongoing operations, and the ability to pause migration if business priorities change.

Data Migration Challenges

Data migration is often the most complex aspect of cloud adoption. Large enterprises may have petabytes of data, and simply uploading it to the cloud can cost hundreds of thousands of dollars and take months. Strategies include:

AWS DataSync and Similar Services: These accelerate data transfer, often providing 50-100x faster transfer than standard internet connections.

Staged Migration: Migrate data in phases, prioritizing critical business processes. Lower-priority data can follow in subsequent phases.

Data Transformation: Use the migration as an opportunity to clean data, eliminate redundancy, and restructure for cloud-native systems.

Hybrid Approach: Keep some data on-premise if migration costs exceed benefits, connecting cloud applications to on-premise data through secure network links.

Containerization: Practical Implementation

Containerizing applications involves several steps:

Identify Application Components: Decompose the application into logical services aligned with business capabilities. Services should be independently deployable, scalable, and testable.

Create Container Images: Write Dockerfiles defining each service's container image. Images should be minimal—including only necessary dependencies to reduce image size and improve deployment speed.

Define Service Communication: Establish clear API contracts between services. Services communicate through REST APIs, message queues, or gRPC, not through shared databases or internal function calls.

Implement Configuration Management: Applications should retrieve configuration from environment variables or ConfigMaps rather than hardcoding settings. This enables deploying the same image across development, testing, and production with different configurations.
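A minimal sketch of this pattern in Python; the variable names (DB_HOST, DB_PORT, LOG_LEVEL) are illustrative conventions, not a standard:

```python
import os

def load_config(env=os.environ):
    """Read configuration from the environment, with explicit defaults.
    The same container image runs in every environment; only the
    injected variables differ."""
    return {
        "db_host": env.get("DB_HOST", "localhost"),
        "db_port": int(env.get("DB_PORT", "5432")),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# In production, Kubernetes would inject these from a ConfigMap:
prod_env = {"DB_HOST": "db.prod.internal", "DB_PORT": "6432"}
print(load_config(prod_env))
```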

Add Health Checks: Services should expose health check endpoints that container orchestrators use to determine whether services are healthy. If health checks fail, orchestrators can automatically restart or reschedule services.
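A health-check handler can be as simple as a function mapping dependency status to an HTTP status code. The response shape below is a common convention, not a Kubernetes requirement:

```python
import json

def health_check(db_connected, queue_connected):
    """Return (HTTP status, body) for a /healthz-style endpoint.
    Orchestrators treat any non-200 status as unhealthy and can
    restart or reschedule the container. Names are illustrative."""
    healthy = db_connected and queue_connected
    body = json.dumps({"db": db_connected, "queue": queue_connected})
    return (200 if healthy else 503), body

print(health_check(True, True))   # (200, '{"db": true, "queue": true}')
print(health_check(True, False))  # 503 -> orchestrator restarts the pod
```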

Implement Logging and Monitoring: Applications should log to standard output, not files. Kubernetes captures this output and makes it accessible through unified logging platforms. Add instrumentation for key business and technical metrics.

Best Practices for Enterprise Cloud-Native Adoption

Based on experiences from thousands of enterprise migrations, several best practices consistently correlate with success:

1. Executive Alignment and Business Focus

Cloud migration must serve business objectives, not be purely technical. Executive leadership must understand and support migration goals. The most successful migrations connect cloud-native adoption to business strategies—faster time-to-market, improved customer experience, cost reduction, or competitive advantage.

2. Organizational Structure and Team Composition

Cloud-native requires different team structures than traditional IT. DevOps teams need developers, operations engineers, and architects working together. Platform engineering teams build and maintain Kubernetes clusters and shared infrastructure. Application teams focus on business logic rather than infrastructure management.

A common mistake is creating a separate cloud team distinct from existing IT. This creates turf wars and prevents knowledge transfer. Instead, integrate cloud experts with existing teams, gradually building cloud-native capabilities across the organization.

3. Security by Design

Cloud-native architectures introduce new security challenges. Security shouldn't be added after applications are deployed; it should be embedded throughout development and operations. Implement:

  • Zero-trust architecture where all communication is authenticated and authorized
  • Secrets management for sensitive data
  • Container image scanning for known vulnerabilities
  • Network policies controlling service-to-service communication
  • Compliance automation using policy-as-code

4. Observability and Monitoring

Traditional monitoring focuses on infrastructure metrics (CPU, memory, disk). Cloud-native requires deeper observability—understanding application behavior across distributed services. Implement:

  • Distributed tracing to follow requests across microservices
  • Metrics collection from applications and infrastructure
  • Centralized logging aggregating logs from all containers
  • Alert policies based on business SLOs (Service Level Objectives) rather than arbitrary thresholds

5. Gradual Capability Building

Organizations shouldn't expect to master cloud-native overnight. Adopt capabilities progressively:

  • Month 1-3: Container basics, Docker, and simple deployments
  • Month 4-6: Kubernetes fundamentals and orchestration
  • Month 7-12: Advanced patterns, service meshes, and automation
  • Year 2+: Advanced security, multi-cluster management, and cost optimization

This phased approach allows teams to build expertise systematically rather than becoming overwhelmed by complexity.

Common Cloud Migration Challenges and Solutions

Challenge 1: Cost Overruns

Traditional applications often have characteristics that inflate cloud costs when naively migrated. A database server sized for peak load but averaging 10% utilization costs substantially more when lifted onto an equivalent always-on cloud instance, because you pay around the clock for capacity that sits idle.

Solution: Right-size resources during migration. Implement auto-scaling policies. Use reserved instances for predictable baseline load and spot instances for variable load.

Challenge 2: Skills Gaps

Cloud-native requires skills in containerization, Kubernetes, DevOps, and distributed systems. Most enterprises lack these skills internally.

Solution: Invest in training early. Hire cloud-native experts to establish patterns and best practices. Partner with consulting organizations for knowledge transfer and accelerated capability building.

Challenge 3: Dependency Complexity

Modern applications have complex dependencies on shared services, databases, and other systems. Understanding and managing these dependencies during migration is challenging.

Solution: Use application discovery tools to automatically map dependencies. Start with applications having few dependencies. Establish clear service contracts and API definitions.

Challenge 4: Compliance and Security

Regulated industries face challenges with data residency, compliance certifications, and security requirements.

Solution: Establish cloud security policies implementing compliance requirements. Use AWS/Azure/GCP compliance certifications and controls. Implement policy-as-code automating compliance validation.

The Business Case for Cloud-Native

Despite challenges, cloud-native adoption delivers compelling business benefits:

Agility and Speed: Organizations deploying cloud-native report 3-5x faster feature deployment. This speed translates to competitive advantage in markets where rapid innovation matters.

Cost Efficiency: While migration costs are substantial, long-term infrastructure costs typically decrease 30-40% through improved resource utilization and operational automation.

Resilience: Cloud-native architectures designed for failure actually have higher availability than traditional architectures. Automatic recovery from component failures means applications continue operating even when parts fail.

Scalability: Cloud-native applications can grow from thousands to millions of users without architectural changes. Scaling happens automatically based on demand rather than requiring planned capacity upgrades.

Innovation: Cloud platforms provide AI/ML capabilities, data analytics, and other advanced services that would be prohibitively expensive for enterprises to build independently.

Conclusion: The Cloud-Native Future

The transition from on-premise infrastructure to cloud-native architectures represents one of the most significant IT transformations of recent decades. It's not simply about moving workloads to different infrastructure; it's about rethinking how applications are architected, deployed, and operated.

For enterprise architects and infrastructure managers, the journey can feel daunting. Legacy applications with complex dependencies, organizations lacking cloud-native expertise, and uncertain ROI create legitimate concerns. Yet organizations that navigate this journey successfully gain competitive advantages that compound over time.

The path forward requires a systematic approach: understand the current state, define clear business objectives, develop realistic cost models, build organizational capability incrementally, and implement containerization and Kubernetes strategically. Mistakes happen, but they're manageable if migration is approached methodically rather than chaotically.

The cloud-native era is not coming—it's already here. Organizations that master this transition will lead their industries. Those that hesitate risk being left behind by competitors moving with greater speed and agility. The time to begin the cloud-native journey is now.
