**My AWS Cloud Migration Project 🚀☁️ Simple & Secure Hybrid Design!**

Ever wondered how to move a company from its own computers to the cloud safely and smoothly? 🤔 I'm sharing the plan I made for moving a dating app ("Lovely") to AWS, connecting it with their existing setup! It was my final project for the AWS Cloud Architect course at the School of Hi-Tech and Cyber Security, Bar-Ilan University. Here's a peek at the main ideas:

✅ **Easy & Secure Logins:** Made it simple for users to log in safely using their existing work accounts (Azure AD) with extra security checks (MFA). Set up separate AWS areas for different teams like R&D, IT, and DevOps.

✅ **Watching the Money:** Kept track of spending with automatic alerts (AWS Budgets & CloudWatch) to avoid surprises. Managed all billing from one central spot (AWS Organizations & Control Tower).

✅ **Connecting Old & New:** Safely linked the company's offices to AWS using a secure connection (Site-to-Site VPN). Made sure some computers could reach the internet without being directly exposed (NAT gateways).

✅ **Keeping the App Running Smoothly:** Moved their WordPress website to flexible AWS computers (EC2), databases (RDS), and storage (EFS). Ensured the site stays up even if parts fail (Multi-AZ, Auto Scaling, ALB) and kept user data safe (HTTPS, KMS).

✅ **Smart & Safe Storage:** Used AWS S3 like digital filing cabinets, giving each team their own secure folder. Protected all files with secret codes (KMS) and set rules to save money and make backup copies elsewhere automatically.

✅ **Top-Notch Security:** Limited access to only approved locations (IP restrictions), used unique keys for computers (EC2 Key Pairs), and stored passwords securely (Secrets Manager). Ensured all data was scrambled (encrypted) when stored or sent.

✅ **Automation Power:** Created little helpers (Lambda & EventBridge) to automatically turn off unused computers, saving money. Kept a close eye on everything with monitoring tools (CloudWatch).
✅ **Ready for Anything:** Prepared a backup website in a different location just in case (Disaster Recovery). Automatically copied important data to another region (S3 Replication) for extra safety.

**Tools / Tech Used** 💻🛠️
☁️ AWS: EC2, RDS, EFS, S3, KMS, IAM, Organizations, Control Tower, Budgets, CloudWatch, Lambda, EventBridge, VPC, VPN, NAT Gateway, ALB, Route 53, Secrets Manager
🔑 Identity: Azure AD, SAML, MFA
🔒 Security: Fortinet
💻 Other: VMware, WordPress

What do you think of this setup? Let me know your thoughts in the comments! 👇 Follow me for more cloud project insights!

#AWS #CloudArchitecture #HybridCloud #SolutionArchitect #CloudSecurity #CloudMigration #DevOps #CyberSecurity #Project #Learning

---
Migrating Data Center Resources to AWS
Explore top LinkedIn content from expert professionals.
Summary
Migrating data center resources to AWS means moving your company’s servers, databases, and applications from physical equipment onsite to Amazon’s cloud platform, which allows for more flexibility, cost savings, and easier management. This process involves careful planning and a step-by-step approach to ensure your business stays secure and operational during and after the move.
- Map your needs: Start with a clear assessment of your current setup, business goals, and budget to avoid unexpected challenges during migration.
- Build strong foundations: Establish secure connections, organize your cloud environment, and set up proper access controls before transferring any resources.
- Modernize thoughtfully: Choose the right migration strategies for each workload, balancing speed, costs, and complexity to fit your company’s timeline.
**Data Engineers: Serverless Delta Lake Architecture on AWS**

Imagine you have data in your company's local servers (on-premises) and want to:
1. Move this data to AWS
2. Analyze it without managing servers
3. Use an event-driven approach

Here's how TrueBlue, a company facing this challenge, solved it using AWS services:

**1. Data Migration**
- Used AWS Database Migration Service to copy data from local databases to Amazon S3
- Ensures up-to-date information for jobs, job requests, and workers
- Enables accurate job matching

**2. Event-Driven Architecture**
- Set up S3 event notifications when new data arrives
- Used Amazon SQS (Simple Queue Service) to capture these events
- Created 3 SQS queues for different update frequencies: 10-minute, 60-minute, and 3-hour updates
- AWS EventBridge rules trigger Step Functions based on these time intervals
- Step Functions orchestrate AWS Glue jobs for data processing

**3. Serverless Processing**
- Chose AWS Glue over Amazon EMR (Elastic MapReduce) for serverless data processing
- Reasons for choosing Glue: the team's expertise in serverless development, easier management and debugging, and similar results to EMR without server management
- Glue jobs transform and load data into the Delta Lake format

**4. Analytics**
- Data scientists use PySpark SQL to query the Delta Lake
- The Delta Lake has three tiers:
  1. Bronze: Raw data from source systems
  2. Silver: Cleaned and joined data from the bronze tier
  3. Gold: Prepared data for machine learning (feature store)
- Glue jobs keep the Delta Lake up-to-date with reliable upserts (updates and inserts)
- This enables data scientists to perform accurate job matches, extract datasets for analysis, and build and train machine learning models

**Benefits of this Architecture:**
1. Serverless: No need to manage infrastructure
2. Scalable: Can handle increasing data volumes
3. Cost-effective: Pay only for resources used
4. Real-time: Event-driven updates keep data fresh
5. Flexible: Supports various data processing needs

This architecture showcases how to build a modern, serverless data lake using AWS services, enabling efficient data migration, processing, and analytics without the complexity of managing servers.

#dataengineer #dataengineering #deltalake #aws
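The "reliable upserts" the post describes (Glue jobs merging fresh records into the Delta Lake) boil down to a keyed MERGE: update rows whose key exists, insert the rest. A minimal pure-Python stand-in for that operation, with the table contents and the `job_id` key invented for illustration:

```python
def upsert(target: dict, batch: list, key: str = "job_id") -> dict:
    """Merge a batch of change records into a keyed table:
    existing keys are updated, new keys are inserted (Delta-style MERGE)."""
    merged = dict(target)
    for row in batch:
        merged[row[key]] = row  # last write wins per key
    return merged

# Bronze-tier rows already landed; a new CDC batch arrives from DMS.
bronze = {"j1": {"job_id": "j1", "status": "open"}}
batch = [
    {"job_id": "j1", "status": "filled"},  # update to an existing job
    {"job_id": "j2", "status": "open"},    # brand-new job
]
silver = upsert(bronze, batch)
```

In a real Glue job this would be Delta Lake's `MERGE INTO` over Spark DataFrames; the point is only that each tier stays consistent because updates and inserts are applied atomically per key.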
-
8 years ago, as a Jr. Engineer at Amazon, I learned an expensive lesson about "Green Dashboards": just because the alarms are not ringing does not mean the house is not on fire.

We migrated from Oracle to DynamoDB. It looked perfect. Until the bill arrived.

This was early in my time at Amazon. DynamoDB had been around for years, but most core systems still ran on Oracle. Then came a company-wide migration push, internally called Rolling Stone, to move consumer workloads from Oracle to AWS services like DynamoDB and Aurora.

One of the first services I touched looked simple: a few Oracle tables, some batch writes, read-heavy APIs. We redesigned the schema for DynamoDB, ran backfills, validated row counts, and replayed test traffic. Latency, error rates, and CPU all looked green. We cut over production.

For the first few days, nothing broke. Dashboards looked great. About two weeks in, p95 and p99 latency started creeping up for a small slice of traffic. No alerts fired. Averages still looked fine.

Behind those green graphs, three things were going wrong:
– Our Oracle-style access patterns turned into inefficient DynamoDB queries.
– Traffic created hot partitions we never hit in testing.
– Short throttling spikes were hidden inside healthy-looking table averages.

Once customers started timing out, we did the only thing that worked fast: cranked up provisioned capacity. Throttling dropped. Latency went back to normal. And the DynamoDB bill quietly exploded.

By the time finance asked questions, DynamoDB was the system of record. Rolling back to Oracle would have meant data reconciliation, traffic freezes, and real downtime. So we fixed forward: redesigned keys, cleaned up access patterns, and added real p99 and cost visibility.

Since then my rule is simple: if dashboards are green but tail latency or cost is drifting, you are not healthy, you are blind. And if your rollback only works when everything is perfect, it is not a plan, it is hope.
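The failure mode above (green averages, burning tails) is easy to demonstrate: a mean can stay flat while p99 explodes. A small sketch with invented latency numbers, using a simple nearest-rank percentile:

```python
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    idx = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[idx]

# 98 fast requests and 2 throttled ones: the kind of "small slice of
# traffic" hidden inside healthy-looking table averages.
latencies = [10.0] * 98 + [900.0, 950.0]

mean = statistics.mean(latencies)  # ~28 ms: the average-based dashboard stays green
p99 = percentile(latencies, 99)    # 900 ms: the customers hitting hot partitions time out
```

This is why alarming on p99 (and on cost drift) catches what an average-based dashboard never will.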
If you want to see the broader Oracle-to-AWS migration story, AWS has a public write-up on its news blog: https://lnkd.in/gEn3iSSW
-
Still using MSSQL RDS as your primary database and paying millions? Ever heard of Babelfish?

At Medibuddy, we migrated 7 TB of data from MSSQL to Aurora PostgreSQL and saved 40% in RDS infrastructure costs. We achieved this using Babelfish and AWS DMS, which turned out to be both cost-efficient and migration-efficient. We faced multiple roadblocks along the way, and I wanted to document and share what we learned that might help you too!

First, the WHY?
1. MSSQL was hurting us with its costs.
2. We also wanted to unify our database stack and build deep expertise in one RDS system instead of maintaining distributed data and fragmented knowledge.

Strategy 1: Our initial plan. We began with a two-way AWS DMS setup from source to target and vice versa. The idea was to migrate applications incrementally from MSSQL to Postgres.

Why was this inefficient?
1. Two-way DMS had loopholes: loopback prevention wasn't working efficiently in the latest version until the AWS team created a custom patch for us.
2. This approach required multiple code changes, increasing vulnerability to bugs.
3. It was time-consuming and costly, since DMS would need to run until all applications were migrated.
4. This meant dual database costs (MSSQL + Aurora), significant developer involvement, migration testing overhead, and more.

That's when we stopped and asked: "Is there a better, more efficient way?" And that's when we found Babelfish.

What is Babelfish? Babelfish provides a compatibility layer on top of PostgreSQL so the database can understand T-SQL (MSSQL) syntax. In short: Postgres engine + MSSQL understanding.

What this means in practice: Babelfish exposes two ports:
- 1433 → MSSQL communication (TDS protocol)
- 5433 → PostgreSQL port (native Postgres)

1. No application changes required.
2. No long-running DMS replication.
3. Huge cost savings with Aurora Postgres, which is I/O-optimized.
Applications can be gradually migrated to the Postgres port (5433) if needed, or you can continue using Babelfish indefinitely. This open-source technology helped us migrate 7 TB of data in just 2 months (including testing and validation) and save 40% in RDS infrastructure costs. More on the HOW of the migration in the upcoming posts.

The team that made it possible, with a lot of learnings: Vijay Ramachandran Raghav Bhutra Ajay Bandari MediBuddy
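The two-port model above means the same Aurora cluster answers both dialects, which is what makes the gradual port-by-port migration possible. A sketch of what the two connection strings look like side by side; the hostname, database names, and credentials are placeholders, not anything from the Medibuddy setup:

```python
# Placeholder cluster endpoint; a real one comes from the RDS console.
HOST = "my-babelfish-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"

# TDS endpoint (port 1433): existing MSSQL applications connect unchanged,
# using their usual SQL Server drivers and T-SQL queries.
mssql_dsn = f"Server={HOST},1433;Database=app_db;User Id=admin;Password=***"

# Native PostgreSQL endpoint (port 5433): applications that have been
# migrated to native Postgres drivers and SQL connect here instead.
postgres_dsn = f"host={HOST} port=5433 dbname=babelfish_db user=admin"
```

The point of the design: cutover happens per application by swapping a connection string, not by rewriting queries on a deadline.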
-
I was talking with a friend about an upcoming cloud migration to Oracle, and the conversation quickly shifted from providers to something more important. The real question was not where to migrate, but how to migrate well. What I shared with her was a three-phase approach I have worked through in AWS migrations from on-prem.

1) Assess. This is where migrations succeed or fail before any workload ever moves. Cloud readiness assessments, a clear business case, and TCO modeling help align people, business, governance, platform, security, and operations. If you skip this step, everything downstream becomes reactive.

2) Mobilize. This is where foundations are built: a Cloud Center of Excellence, landing zones, connectivity, security baselines, and initial proof-of-concept applications. This phase is about learning fast and establishing patterns, not moving everything at once.

3) Migrate and Modernize. This is where the 7 Rs come into play: rehost, replatform, relocate, refactor, retain, retire, and repurchase. Most environments are a mix, and the goal is not perfection but intentional decisions based on time, cost, and complexity.

What I appreciate about these conversations is the reminder that cloud migrations are rarely about technology alone. They are about clarity, sequencing, and shared understanding across teams. If your company is planning a migration or modernization effort next year, I suggest you start with the whiteboard, not the console, to map out ideas. Here is the image from our whiteboarding session, along with another I created using Nano Banana to bring it to life.

Curious how others approach the Assess and Mobilize phases in their migrations. What lessons have you learned early that saved you later?
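The 7 Rs triage in phase 3 can be thought of as a decision table. The rules below are one illustrative weighting of time, cost, and complexity, not an official AWS matrix, and the workload attribute names are invented:

```python
def pick_r(workload: dict) -> str:
    """Very rough 7 Rs triage from a few yes/no workload attributes.
    Order matters: cheap exits (retire/retain) are checked first."""
    if workload.get("retiring_soon"):
        return "retire"          # do not migrate what you are decommissioning
    if workload.get("compliance_pins_on_prem"):
        return "retain"          # keep on-prem for now
    if workload.get("saas_alternative"):
        return "repurchase"      # replace with a SaaS product
    if workload.get("vmware_based"):
        return "relocate"        # e.g. move VMs wholesale to VMware Cloud on AWS
    if workload.get("needs_rearchitecture"):
        return "refactor"        # rewrite for cloud-native patterns
    if workload.get("minor_tweaks_ok"):
        return "replatform"      # small changes, e.g. self-managed DB -> RDS
    return "rehost"              # default: lift-and-shift
```

Real assessments weigh far more signals than this, but encoding even a crude table forces the "intentional decisions" the post argues for.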
-
Modernizing Ab Initio Workloads on AWS

Migrating enterprise-scale Ab Initio workloads to the cloud requires more than just a lift-and-shift. It's about rethinking architecture, automating processes, and ensuring compatibility with modern cloud-native patterns. Here's our proven 8-step roadmap for a seamless Ab Initio-to-AWS transformation:

1. Automated Cloud Infrastructure Setup – Provision scalable AWS environments with IaC tools for speed, consistency, and security.
2. Automated Cloud Ab Initio Product Installation – Streamline installation with automation to reduce manual setup time.
3. Transform Application & Tools for Cloud Compatibility – Refactor applications, scripts, and dependencies to work efficiently in AWS.
4. Define Migration Process – Establish a detailed, repeatable migration strategy with risk-mitigation measures.
5. Setup Containerization – Package Ab Initio components into containers for portability, scalability, and faster deployments.
6. Implement Tokenization – Enhance data security with robust tokenization for sensitive information.
7. Define Deployment Process – Implement CI/CD pipelines to automate build, test, and deployment workflows.
8. Ongoing Cloud Support – Ensure stability, cost optimization, and proactive monitoring post-migration.

With this approach, we achieve faster go-live, improved scalability, and enhanced governance, empowering organizations to get the most from their Ab Initio investments in AWS.

#CloudMigration #AWS #AbInitio #DataEngineering #ETL #Containerization #DataSecurity #CloudTransformation #Automation #DataEngineer #C2C #SeniorDataEngineer
-
Your data is locked in legacy systems, but it takes time to move the data to your enterprise data platform. What to do?

• Data Gravity: Most valuable business data is still locked in the legacy stack. Moving it wholesale is slow and brittle.
• Platform Dependency: AI/ML work requires data on the new enterprise platform to scale.
• Transformation Lag: Multimillion-dollar app migrations take quarters or years, not weeks. Meanwhile, the business wants AI insights now.

Options

1. Incremental Data Virtualization & Federated Queries
• Don't wait for a full migration. Use virtualization layers (Starburst/Trino, Dremio) or cloud vendor federated query services (BigQuery Omni, Athena Federated Query, Redshift Spectrum) to query data in place.
• This gives your data scientists a unified SQL layer today, with a performance hit that is acceptable for prototyping / model training.
• Over time, you use logs from the virtualization layer to prioritize which datasets should be physically migrated first.

2. Event-Driven Data Sync for "Hot Data"
• Set up a Change Data Capture (CDC) pipeline (Debezium, AWS DMS, Kafka Connect, Fivetran) to replicate only the delta (latest transactions, key entities) from legacy into the new platform.
• You don't need the entire warehouse migrated on day one: start with the 5–10 "hot tables" your ML use cases actually depend on.
• This keeps training / scoring data "fresh enough" without waiting weeks for batch loads.

3. Model-in-Legacy with Deployment-in-New
• Flip the problem: instead of forcing all training to happen in the new stack, train small/medium models closer to the legacy data.
• Once trained, deploy them as APIs/services on the new enterprise platform for scalability.
• This hybrid approach buys you time: quick wins on legacy data, scalable production later.

4. Surrogate / Proxy Datasets for Fast Prototyping
• If you're designing net-new AI products but the real data isn't ready yet, create proxy datasets: anonymized samples, synthetic data, or limited slices extracted via controlled ETL.
• This allows you to prove value and design workflows while the real migration catches up.

5. Parallel Tracks: Lab vs. Enterprise Build
• Split your approach into two swimlanes:
  - Lab Track: lightweight, quick-and-dirty experiments on virtualized/replicated/synthetic data.
  - Enterprise Track: heavy-lift migration + app rewrites for long-term scale.
• The Lab Track feeds lessons into the Enterprise Track (which data matters, which models deliver ROI).

The CIO Mindset Shift

The trap is waiting for the "perfect new world" before starting. In reality, you need bridges:
• Federated access → buys visibility.
• CDC pipelines → buys freshness.
• Proxy data → buys speed.
• Dual-track delivery → buys time.

This way, AI work doesn't stall for 18 months while multimillion-dollar transformations lumber forward. You show business value now and build momentum, even as the legacy elephant gets dragged into the hybrid cloud.
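Option 4's proxy datasets can start as simply as a deterministic sample with pseudonymized identifiers. A toy sketch; the field names and the hash-the-ID scheme are illustrative only, and real anonymization needs far more care than hashing one column:

```python
import hashlib
import random

def proxy_sample(rows, sample_rate=0.1, seed=42):
    """Take a reproducible sample of rows and pseudonymize the direct
    identifier, producing a prototyping dataset that never exposes
    real customer IDs. Originals are left untouched (rows are copied)."""
    rng = random.Random(seed)  # fixed seed -> same sample every extract
    sample = [dict(r) for r in rows if rng.random() < sample_rate]
    for r in sample:
        digest = hashlib.sha256(r["customer_id"].encode()).hexdigest()
        r["customer_id"] = digest[:12]  # stable pseudonym, not reversible by eye
    return sample

rows = [{"customer_id": f"cust-{i}", "spend": i * 10} for i in range(100)]
proxy = proxy_sample(rows)
```

Because the seed is fixed, the lab track gets the same slice on every extraction, which keeps prototype results comparable while the enterprise migration catches up.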
-
Influence without authority (AWS project)

I led a multi-team AWS migration without direct authority. The only way it worked: treat centralized services like they're my team, and prove it every day. How we moved the mountain:

Two-front plan: Infra and code in parallel. Centralized services owned VPC, IAM, networking, and landing zones; my core team owned app modernization (CI/CD, containers, observability).

Clear contracts, not commandments: Written interfaces: accounts, roles, SLOs, Terraform modules, rollback paths. We agreed on "what good looks like" before we pushed a change.

Service-first leadership: I protected their focus, escalated blockers for them, and gave them credit in every exec update. Compassion isn't soft; it creates velocity.

Shared scoreboard: One dashboard for cutover readiness: env parity, deployment success, error budgets, cost deltas. Wins were ours, not mine.

Rituals that build trust: 15-min daily sync, weekly demo, blameless notes. When an issue hit, we stabilized first and learned later.

Result: we cut over infra and application tiers on schedule, reduced spend, and improved reliability, without a single team shuffle.

You don't need org charts to lead big work. You need clarity, care, and shared success. Where could you turn a "support function" into a true partner this quarter?

#InfluenceWithoutAuthority #AWS #CloudMigration #PlatformEngineering #CentralizedServices #EngineeringLeadership #CrossFunctionalCollaboration #DevOps #SRE #CI_CD #ServantLeadership #OperationalExcellence #CalmStrength #ForgedToEndure
-
Have you migrated your on-premises Spring Boot microservices to AWS? Here's how I did it.

When migrating our existing on-premises microservices to AWS, I explored the various strategies AWS provides. AWS offers the 7 Rs of migration, which help determine the best approach based on factors like time constraints, application architecture (monolithic vs. microservices), and overall complexity. AWS also offers several compute options, including EC2, Lambda, and containers with ECS or EKS.

Given the tight timeline and the microservices nature of our applications, I proposed Replatforming, one of the 7 Rs, using containerization with Amazon ECS instead of EKS. I was able to tweak my existing microservice slightly and had it up and running in an hour.

Here's Why I Chose ECS:

AWS-Native Containerization Offering: I wanted to use as many AWS-native services as possible, and ECS is AWS's containerization service. ECS orchestrates microservices in a containerized environment by provisioning the desired infrastructure and ensuring that containers run reliably.

Simplicity and Speed: While EKS is a powerful option for managing Kubernetes workloads, it has a steeper learning curve and adds complexity. ECS, on the other hand, provided all the containerization capabilities we needed without the overhead, allowing us to meet our timeline.

Serverless Containers with Fargate: ECS with Fargate eliminated the need to manage servers, enabling us to focus on application logic rather than infrastructure management. Additionally, combining Fargate On-Demand with Fargate Spot allowed us to achieve cost-effectiveness while maintaining performance.

By using ECS, we achieved a smooth migration, leveraging AWS-native services to ensure better performance, scalability, and cost-efficiency.

Here are the High-Level Steps:
1. Generate a JAR file from your microservice.
2. Create a Dockerfile, build an image, and push it to ECR or DockerHub.
3. Set up a VPC and other networking components aligned with a three-tier architecture.
4. Create a target group to route traffic.
5. Set up an Application Load Balancer to forward traffic to the target group.
6. Create an IAM role with permissions to pull the image from ECR and access Secrets Manager for database secrets.
7. Create an ECS cluster.
8. Define a task definition with infrastructure specifications, container image configuration, and environment variables.
9. Create an ECS service, specifying VPC, compute options, and auto-scaling configuration.

Summary: Migrating microservices from on-premises to AWS involves various decisions based on application architecture and requirements. For us, ECS was the right fit, providing simplicity, speed, and scalability. If you've also migrated Spring Boot microservices to AWS or are planning to, I'd love to hear your experience! Feel free to ask questions or share your insights in the comments.
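The task definition in step 8 is structured JSON under the hood. A minimal sketch of what one might contain for a Spring Boot container on Fargate, built as a plain Python dict; the account ID, ARNs, names, image URI, and sizes are all placeholders, not values from the post:

```python
# Hypothetical ECS task definition for a Spring Boot service on Fargate.
task_definition = {
    "family": "springboot-service",            # placeholder family name
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",                   # Fargate requires awsvpc mode
    "cpu": "512",                              # 0.5 vCPU
    "memory": "1024",                          # 1 GiB
    # Role that lets ECS pull from ECR and read Secrets Manager (step 6):
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/springboot:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "secrets": [{                          # injected as env vars at runtime
            "name": "SPRING_DATASOURCE_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-pass",
        }],
    }],
}
```

In practice this dict is what you would hand to the ECS console, CloudFormation, or an SDK's register-task-definition call; the ALB target group from steps 4 and 5 then routes to `containerPort` 8080.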
-
On-Prem to Cloud Migration: Step-by-Step AWS Cloud Migration Process

1. Plan the Migration
- Assessment: Identify the current environment (servers, databases, dependencies, and configurations).
- Inventory: Document application components and dependencies.
- Sizing: Determine AWS resources (EC2 instance types, RDS configurations, etc.) based on current usage.
- Network Design: Plan VPC setup, subnets, security groups, and connectivity.
- Backup Plan: Create a fallback plan for any issues during migration.

2. Prepare the AWS Environment
- VPC Setup: Create a VPC with subnets across multiple Availability Zones (AZs).
- Security: Configure security groups, IAM roles, and policies.
- Database Configuration: Set up an Amazon RDS instance or EC2-based database for the migration.
- AD Server: Use AWS Managed Microsoft AD or deploy your AD on EC2.
- Application Server: Launch EC2 instances and configure the operating system and required dependencies.

3. Migrate the Database
- Backup: Create a backup of the current database.
- Export/Import: Use database migration tools (e.g., AWS DMS or native database tools) to migrate data to the AWS database.
- Replication: Set up database replication for real-time sync with the on-prem database.
- Validation: Verify data consistency and integrity post-migration.

4. Migrate the Application Server
- Packaging: Package the application (e.g., as Docker containers, AMIs, or simple binaries).
- Deployment: Deploy the application on AWS EC2 instances or use AWS Elastic Beanstalk.
- DNS Configuration: Update DNS records to point to the AWS environment.

5. Migrate Active Directory (AD)
- Replication: Create a replica of the on-prem AD in AWS using an AD Trust setup.
- DNS Sync: Sync DNS entries between on-prem and AWS environments.
- Validation: Test authentication and resource access.

6. Test and Validate
- End-to-End Testing: Validate the complete environment (application, database, and AD).
- Performance Check: Monitor performance using CloudWatch and address any issues.
- Failover Testing: Simulate failure scenarios to ensure HA/DR readiness.

7. Cutover and Go Live
- Schedule Downtime: Coordinate with stakeholders and users for a minimal-downtime window.
- Final Sync: Perform a final sync of the database and switch traffic to AWS.
- DNS Propagation: Update DNS settings to route traffic to the AWS environment (may take up to 24 hours).
- Monitoring: Continuously monitor AWS resources and performance post-migration.

8. Post-Migration Optimization
- Scaling: Implement auto-scaling policies for the application.
- Security: Regularly review and improve security configurations.
- Cost Optimization: Use AWS Cost Explorer to analyze and optimize resource usage.

Downtime Considerations
- Database Migration: Plan a maintenance window of 2–4 hours for the final database sync and cutover.
- DNS Propagation: Approximately 15 minutes to 24 hours, depending on TTL settings. Use short TTLs during migration to minimize delays.

#AWSMigration #CloudMigration #MinimalDowntime #DatabaseToAWS #ApplicationToAWS #ADToAWS
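The TTL advice in the downtime considerations above can be made concrete: resolvers cache a record for the TTL that was in force when they last fetched it, so the worst-case propagation delay after a cutover is roughly the old TTL. Lowering the TTL ahead of time, and waiting out the old TTL before flipping, shrinks the cutover window. A back-of-the-envelope helper with invented numbers:

```python
def cutover_window_minutes(old_ttl_s: int, lowered_ttl_s: int) -> dict:
    """Rough DNS cutover timing. Procedure: lower the TTL, wait at least
    the OLD TTL so every resolver has picked up the new low TTL, then flip
    the record; after the flip, worst-case staleness is the lowered TTL."""
    return {
        "wait_before_flip_min": old_ttl_s / 60,       # old TTL must expire everywhere
        "worst_case_staleness_min": lowered_ttl_s / 60,
    }

# A record cached with a 24-hour TTL, lowered to 60 seconds for the migration:
plan = cutover_window_minutes(old_ttl_s=86400, lowered_ttl_s=60)
```

With these numbers the flip must wait a full day for the old TTL to drain, but once flipped, stale answers last at most about a minute, which is what makes the "use short TTLs during migration" rule pay off.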