Faster, safer, smarter: modernising legacy systems with agentic AI

Legacy monolithic applications often slow delivery, increase costs and limit the benefits of modern cloud platforms. Agentic AI (a form of Artificial Intelligence focused on autonomous, goal‑driven agents) is beginning to change this. By deploying vFunction inside a private cloud environment, teams can observe real application behaviour, identify logical system domains and find opportunities to split services more safely than with manual analysis.

When vFunction’s outputs are combined with AI development assistants like Amazon Q, service decomposition can move much faster. In our latest OpenPerspectives piece, Umit Ekinoglu, Senior DevOps Consultant at Opencast, explains how this approach can reduce early modernisation work from months to weeks while maintaining security, consistency and strong design practices.


Context and challenge

Many organisations rely on monolithic applications that have evolved over many years. These systems are often business-critical but accumulate technical debt. Components become tightly connected, dependencies are unclear and documentation is limited, making even small changes slow and risky.

Moving these applications to container platforms such as Kubernetes does not automatically solve the problem. Rehosting the monolith may reduce infrastructure costs initially, but development cycles usually remain slow and operational complexity can increase. To gain the full benefits of cloud-native platforms (scalability, resilience and faster delivery), applications need to be broken into smaller services.

Traditionally this has been difficult. Teams may spend months studying codebases, mapping dependencies and debating service boundaries before meaningful work begins. The effort is difficult to scale and expensive to justify.

Approach

Agentic AI can offer a new approach. By combining runtime observation, code analysis and machine learning, modern tools can reveal an application's structure and suggest safe refactoring paths. This helps teams move faster while meeting enterprise requirements for security, governance and data control – especially important for large, complex organisations.


Recently, I worked on a medium-to-large Java monolith that was business-critical but fragile. Small changes often caused instability, leading to cautious releases and long lead times. The goal was to modernise the system for Kubernetes while keeping everything inside a private AWS (Amazon Web Services) environment. We followed four key steps:


  1. Observe application behaviour in real time

  2. Analyse code and execution data

  3. Identify boundaries aligned with business functions

  4. Gradually extract services ready for Kubernetes

Security and deployment


All analysis ran inside a customer-managed AWS VPC (Virtual Private Cloud) using private subnets with no internet access. Access was controlled through IAM (Identity and Access Management) and restricted security groups. Communication used mTLS (Mutual Transport Layer Security) and connectivity remained inside the environment through VPC endpoints.
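As an illustration, an interface VPC endpoint of the kind described can be declared in CloudFormation so that traffic to AWS services never leaves the private network. This is a sketch only: the resource names, region and referenced subnets and security group are hypothetical, not from the project.

```yaml
# Hypothetical CloudFormation fragment: an interface VPC endpoint that keeps
# traffic to the Amazon ECR API inside the private subnets.
Resources:
  EcrApiEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: com.amazonaws.eu-west-2.ecr.api   # region is illustrative
      VpcId: !Ref AnalysisVpc                        # assumed private VPC
      SubnetIds:
        - !Ref PrivateSubnetA                        # private subnet, no IGW route
      SecurityGroupIds:
        - !Ref RestrictedEndpointSg                  # locked-down security group
      PrivateDnsEnabled: true
```

With private DNS enabled, workloads in the VPC resolve the standard service hostname to the endpoint's private IP, so no internet gateway or NAT route is needed.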


From insight to implementation


Engineers instrumented the Java application so vFunction could observe runtime behaviour and analyse the codebase. The platform suggested candidate domains, highlighted dependency hotspots and proposed points where services could be extracted.


Architects reviewed these findings, confirmed or adjusted boundaries and documented the decisions.


vFunction then produced structured outputs such as refactoring tasks and service specifications. These helped developers scaffold new services aligned with platform standards.


Engineers created backlog items from these outputs and used Amazon Q in the IDE (Integrated Development Environment) to assist with tasks such as generating boilerplate code, suggesting refactoring changes and creating tests. All changes remained under human control and passed through peer review and CI/CD (continuous integration and continuous delivery) pipelines.
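The boilerplate such a workflow produces can be as simple as an interface extracted from monolith code, with a default implementation ready for testing. The sketch below is purely illustrative – the `OrderPricing` names and logic are hypothetical, not taken from the project or from vFunction's output.

```java
import java.util.List;

// Hypothetical example of extracted boilerplate: a pricing interface pulled
// out of a monolith's billing module, plus a simple default implementation.
interface OrderPricing {
    long totalPence(List<Long> linePricesPence);
}

class SimpleOrderPricing implements OrderPricing {
    @Override
    public long totalPence(List<Long> linePricesPence) {
        // Sum all line prices; amounts are held in pence to avoid
        // floating-point rounding issues.
        return linePricesPence.stream().mapToLong(Long::longValue).sum();
    }
}

public class OrderPricingDemo {
    public static void main(String[] args) {
        OrderPricing pricing = new SimpleOrderPricing();
        System.out.println(pricing.totalPence(List.of(199L, 250L))); // prints 449
    }
}
```

Scaffolding of this shape gives reviewers a small, testable unit to assess in peer review before the extracted service grows.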

Running on Kubernetes


Each extracted service followed a consistent deployment model on Amazon EKS (Elastic Kubernetes Service), including resource limits, health checks, logging, metrics, secrets management and autoscaling. This ensured every service inherited the same operational and security practices.
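A deployment template of the kind described might look like the following Kubernetes manifest. It is a minimal sketch: the service name, image path, ports and probe endpoints are assumptions, not the project's actual configuration.

```yaml
# Hypothetical baseline for an extracted service on EKS: resource requests
# and limits plus liveness/readiness probes, applied uniformly to every service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service              # illustrative service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: private-registry.example/orders-service:1.0.0  # placeholder image
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
```

Autoscaling would typically be layered on top with a HorizontalPodAutoscaler targeting the deployment, so the resource requests above double as the basis for scaling decisions.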

Outcomes

Early modernisation work accelerated significantly. Tasks that once took months, such as dependency mapping and service identification, were completed in weeks.

The organisation measured around a 75% reduction in cycle time for initial service extraction. Manual analysis decreased, allowing specialists to focus on higher-value architectural work.

As services moved onto EKS with defined resource limits and autoscaling, teams improved resource utilisation and avoided unnecessary over-provisioning, supporting both cost efficiency and sustainability goals.

Operational model

Because the process and outputs were structured, other teams could reuse the same approach. The pattern was applied to additional legacy applications with minimal rework, creating a consistent and governed path for modernisation.

AI-assisted development remained inside secure environments, artefacts stayed within the private boundary and humans retained accountability for all decisions and code changes.


Conclusion

Modernising monolithic systems no longer requires long, risky programmes. By combining agentic AI, private cloud deployment and a consistent Kubernetes platform, organisations can move faster while maintaining strong governance.

A practical starting point is to select a non-critical domain, run vFunction within an AWS VPC to identify service boundaries, and use Amazon Q to accelerate clearly defined development tasks under human review.



OpenPerspectives is our platform for Opencast people to share their thoughts and perspectives on modern digital delivery. It offers practical insight into user-centred design, engineering excellence, product leadership, data-driven decision making and building expert capabilities, grounded in real-world experience.


© Opencast 2026

Registered in England and Wales

