The Starting Point
The client was a mid-market healthcare SaaS company running a Java monolith that had been in continuous development since 2014. The numbers were daunting:
- 512,000 lines of Java (Spring Boot, Hibernate)
- 14 Hibernate entity models with deep inheritance hierarchies
- 340 REST endpoints across 28 controllers
- Zero automated tests for roughly 40% of the codebase
- HIPAA-regulated patient data with strict audit requirements
- 47 database tables with complex cross-references and soft deletes
The system worked. It was also ossified. Adding a new feature took 8--12 weeks because every change risked cascading failures through tightly coupled modules. Deployment was a twice-monthly all-hands event that regularly ran past midnight. Three senior Java developers had left in the past year, and hiring replacements for a legacy Spring Boot codebase was getting harder by the quarter.
The goal: migrate to TypeScript microservices on AWS without a single day of downtime, while maintaining HIPAA compliance throughout.
The Approach: Strangler Fig with AI-Powered Analysis
We used the strangler fig pattern --- gradually replacing monolith functionality with new services while both systems run in parallel. This isn't new. What's new is how the AI Pipeline Engine transformed the most painful parts of the process.
Phase 1: Codebase Analysis (Weeks 1--3)
Before writing any new code, we needed to understand the existing system deeply. This is typically the most tedious phase of a legacy migration --- engineers reading thousands of lines of unfamiliar code, tracing dependencies, mapping data flows.
We fed the entire codebase into the AI Pipeline Engine's analysis module. Over 72 hours, it produced:
A dependency graph mapping every class-to-class relationship, every database query to its calling code path, and every external API integration. This graph revealed that what the client thought were 6 bounded contexts were actually 11, with 3 hidden circular dependencies that would have been migration landmines.
An endpoint catalog documenting every REST endpoint with its request/response shapes, authentication requirements, and downstream database operations. The existing Swagger docs covered about 60% of the endpoints. The AI Pipeline found and documented the other 40%, including 23 undocumented internal endpoints used for batch processing.
A risk assessment ranking each module by migration complexity based on coupling density, test coverage, and data sensitivity. The AI Pipeline flagged the patient records module as highest risk (HIPAA-regulated, zero tests, 8 circular dependencies) and the notification service as lowest risk (isolated, well-tested, no regulated data).
This analysis would have taken a team of three engineers approximately six weeks to produce manually. The AI Pipeline did it in three days. The remaining time in weeks 1--3 was spent validating the AI's analysis, correcting about 12% of its dependency mappings (mostly around reflection-based dependency injection that the AI couldn't trace statically), and making strategic decisions about migration order.
Phase 2: Test Generation (Weeks 4--6)
You can't safely migrate code you can't test. The 40% of the codebase without automated tests was our biggest risk. We used the AI Pipeline Engine to generate characterization tests --- tests that document current behavior regardless of whether that behavior is correct.
The process:
- AI Pipeline analyzed each untested endpoint
- Generated HTTP-level integration tests capturing current request/response behavior
- Generated database assertion tests verifying data state changes
- Engineers reviewed each test for HIPAA compliance (no real patient data in fixtures)
- Tests were run against the live monolith to verify they passed
Results: 1,847 new integration tests generated in two weeks. 1,614 passed on first run. 233 required adjustments --- mostly around non-deterministic behavior (timestamps, random IDs) that the AI didn't account for. Final test suite covered 91% of all endpoints.
These tests became the contract for the migration. If the new TypeScript services passed the same tests, we knew the migration preserved behavior.
Phase 3: Service Extraction (Weeks 7--20)
This was the core of the migration, and where we combined AI augmentation with careful human judgment.
Migration order (determined by the Phase 1 risk assessment):
1. Notification service (low risk, isolated)
2. Billing integration (medium risk, external API heavy)
3. Reporting engine (medium risk, read-heavy)
4. User management (medium risk, auth-critical)
5. Appointment scheduling (high risk, complex business logic)
6. Patient records (highest risk, HIPAA-regulated)
For each service, the workflow was:
AI Pipeline translated the Java code to TypeScript. Not line-by-line --- that would produce unidiomatic TypeScript. The pipeline understood Spring Boot patterns and mapped them to equivalent TypeScript/Node.js patterns: Hibernate entities became Drizzle ORM schemas, Spring controllers became tRPC routers, Spring Security annotations became middleware functions.
Engineers reviewed every translation with specific attention to:
- Data validation (TypeScript's type system caught several places where the Java code relied on runtime type checks that were silently failing)
- Error handling (the Java code used checked exceptions extensively; we redesigned error flows for TypeScript)
- HIPAA audit logging (every data access needed to be logged with user context)
- Performance characteristics (some Hibernate lazy-loading patterns needed explicit optimization in Drizzle)
The characterization tests were adapted to run against the new services. When a test failed, it meant the new service behaved differently from the monolith --- which was usually a bug, but occasionally an intentional improvement.
The HIPAA Challenge
HIPAA compliance added significant complexity to every phase. Three specific challenges:
Audit trail continuity. The monolith had a custom audit logging system. The new services needed to maintain the same audit trail format and ensure zero gaps during the transition. We built a shared audit service that both the monolith and new services wrote to, ensuring continuity regardless of which system was handling a request.
Data encryption at rest. The monolith used application-level encryption for sensitive fields. The new services used AWS KMS with field-level encryption. During the transition period, both encryption schemes needed to work simultaneously. The AI Pipeline generated the encryption/decryption adapters, but a security engineer reviewed every implementation for correctness.
Access control migration. The monolith's RBAC system was deeply embedded in the Hibernate entity layer. Extracting it into a standalone authorization service was the single most complex task of the migration. The AI Pipeline generated the initial service, but two senior engineers spent three weeks refining the permission model and verifying that every access control check was preserved.
Phase 4: Cutover and Decommission (Weeks 21--24)
We used a gradual traffic shifting approach:
- Week 21: 5% of traffic to new services (monitoring for errors)
- Week 22: 25% of traffic (load testing in production)
- Week 23: 75% of traffic (final validation)
- Week 24: 100% cutover; monolith moved into read-only mode
Zero downtime throughout. The monolith ran in parallel for an additional four weeks as a fallback before decommissioning.
Results
| Metric | Before | After |
|---|---|---|
| Deployment frequency | Twice monthly | 12x per day |
| Feature delivery time | 8--12 weeks | 1--2 weeks |
| Mean time to recovery | 4 hours | 8 minutes |
| P95 API latency | 340ms | 45ms |
| Test coverage | 58% | 94% |
| On-call incidents/month | 14 | 3 |
The client estimated the migration saved them 18 months compared to a traditional approach. More importantly, their engineering team went from dreading deployments to shipping with confidence daily.
Lessons Learned
AI excels at analysis and translation, not architecture. The AI Pipeline Engine was transformative for understanding the existing codebase and generating TypeScript translations. It was not useful for deciding how to decompose the monolith. Those decisions required engineers who understood the business domain.
Characterization tests are the migration's safety net. Without the AI-generated test suite, we would have been migrating blind. The 1,847 tests caught 34 behavioral regressions during the migration that would have been production incidents.
HIPAA adds roughly 40% overhead. Budget for it. Every security-sensitive decision required human review, documentation, and sometimes legal consultation. AI accelerated the mechanical work but couldn't reduce the compliance burden.
The strangler fig pattern is non-negotiable for large migrations. Big-bang rewrites fail. The gradual approach let us course-correct continuously and maintain production stability throughout.
Thinking about modernizing a legacy system? Get a proposal from DecimalTech. We've done this before, and we'll show you exactly what your migration timeline and cost look like.