What we built, and why it worked
When we started on CHERRISK for UNIQA, the brief was genuinely greenfield: no legacy constraints, a team that wanted to do things properly, and a business that needed to launch a fully online insurance platform across multiple markets. We chose Scala / Play, an event-driven CQRS architecture, Kafka for event streaming, GraphQL / Sangria for the API layer, and GitLab CI/CD with Docker Swarm for deployment.
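To make the command/event split concrete, here is a minimal, purely illustrative sketch of the pattern in Scala. All names (`IssuePolicy`, `PolicyIssued`, `PolicyAggregate`) are invented for this example and simplified well beyond anything in the real codebase: a command is validated against current state and either rejected or turned into events, and state is only ever rebuilt by folding over the event log.

```scala
// Hypothetical, simplified aggregate: commands in, events out, state by replay.
sealed trait PolicyCommand
final case class IssuePolicy(policyId: String, premiumCents: Long) extends PolicyCommand
final case class CancelPolicy(policyId: String, reason: String) extends PolicyCommand

sealed trait PolicyEvent
final case class PolicyIssued(policyId: String, premiumCents: Long) extends PolicyEvent
final case class PolicyCancelled(policyId: String, reason: String) extends PolicyEvent

final case class PolicyState(issued: Boolean, cancelled: Boolean)

object PolicyAggregate {
  val empty: PolicyState = PolicyState(issued = false, cancelled = false)

  // Decide: validate a command against current state; emit events or an error.
  def decide(state: PolicyState, cmd: PolicyCommand): Either[String, List[PolicyEvent]] =
    cmd match {
      case IssuePolicy(id, premium) =>
        if (state.issued) Left("policy already issued")
        else Right(List(PolicyIssued(id, premium)))
      case CancelPolicy(id, reason) =>
        if (!state.issued) Left("cannot cancel an unissued policy")
        else if (state.cancelled) Left("policy already cancelled")
        else Right(List(PolicyCancelled(id, reason)))
    }

  // Evolve: fold one event into state. Replaying the full log rebuilds state,
  // which is what makes the audit trail and actuarial replay possible.
  def evolve(state: PolicyState, event: PolicyEvent): PolicyState = event match {
    case _: PolicyIssued    => state.copy(issued = true)
    case _: PolicyCancelled => state.copy(cancelled = true)
  }

  def replay(events: List[PolicyEvent]): PolicyState = events.foldLeft(empty)(evolve)
}
```

Because every state transition is an immutable event, the write side stays a pure function of (state, command), which is also what makes the architecture unusually testable.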
That stack held up. The platform launched, scaled, handled real production load, and supported actuarial and BI work through its Elasticsearch-backed, event-sourced data layer. The decisions were sound — and most of them we would make again today.
What has changed since then: AI in the engineering workflow
The biggest single shift between then and now is not a framework or a cloud provider — it is the arrival of AI coding assistants as a day-to-day engineering tool. Since late 2022 we have progressively integrated ChatGPT, Claude, Cursor, and related tools into every phase of delivery. The productivity delta is not marginal; on certain classes of work — boilerplate, test scaffolding, documentation, data transformation logic — it is measured in hours saved per day per developer.
On a project the scale of CHERRISK, that matters enormously.
What we would do differently today
1. Leaner initial team, faster ramp-up
A significant fraction of early-stage effort on large greenfield projects goes into setup: project scaffolding, CI/CD configuration, environment standardisation, writing the first 80% of the boilerplate service layer. With AI-assisted development, two senior engineers today can produce what previously required four or five in the same timeframe. We would start leaner, validate the core domain model faster, and scale the team once the architecture was proven in production.
2. AI-generated test coverage from day one
Test coverage on a CQRS/event-sourced system is non-trivial — writing realistic command fixtures, event snapshots, and integration scenarios is tedious work that historically got deprioritised under delivery pressure. Today we would use AI to generate the first pass of test scaffolding continuously, keeping coverage honest without slowing down feature delivery.
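As an illustration of the kind of scaffolding involved, here is a hedged sketch of a deterministic event-fixture generator plus a replay-consistency check. The model (`ClaimFiled`, `ClaimsState`) is hypothetical; the point is that this category of code is mechanical, voluminous, and exactly what an AI assistant drafts well.

```scala
// Hypothetical event and projection, simplified for illustration.
final case class ClaimFiled(claimId: String, amountCents: Long)
final case class ClaimsState(count: Int, totalCents: Long)

def evolve(state: ClaimsState, e: ClaimFiled): ClaimsState =
  ClaimsState(state.count + 1, state.totalCents + e.amountCents)

// Seeded generator: the same seed always yields the same fixture, so
// failures reproduce exactly instead of flaking.
def fixture(seed: Int, n: Int): List[ClaimFiled] = {
  val rng = new scala.util.Random(seed)
  List.tabulate(n)(i => ClaimFiled(s"claim-$i", 1000L + rng.nextInt(9000)))
}

def replay(events: List[ClaimFiled]): ClaimsState =
  events.foldLeft(ClaimsState(0, 0L))(evolve)
```

Fixtures like this feed both unit tests of the write side and integration scenarios against projections, and generating dozens of variants is precisely the tedium worth delegating.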
3. Tighter feedback loops between domain experts and code
Insurance domain rules — pricing, eligibility, product logic — translate into code slowly when all translation has to happen through developer heads. We would explore using AI to help business analysts and actuaries produce structured logic that feeds directly into the codebase, shortening the business-to-code loop.
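One plausible shape for that loop, sketched here with entirely invented names and markets: eligibility rules expressed as data an analyst could author (or export from a spreadsheet), evaluated by a small generic interpreter, so changing a rule does not mean rewriting handler code.

```scala
// Hypothetical rule table: the data an analyst maintains, not the engine.
final case class Applicant(age: Int, country: String)
final case class Rule(name: String, check: Applicant => Boolean)

val rules: List[Rule] = List(
  Rule("minimum age 18",       a => a.age >= 18),
  Rule("maximum entry age 75", a => a.age <= 75),
  Rule("supported market",     a => Set("HU", "CZ", "DE").contains(a.country))
)

// Generic interpreter: returns the names of every failed rule,
// which doubles as a customer-facing rejection explanation.
def eligibility(a: Applicant): Either[List[String], Unit] = {
  val failed = rules.filterNot(_.check(a)).map(_.name)
  if (failed.isEmpty) Right(()) else Left(failed)
}
```

The interesting part is not the interpreter but the workflow: an AI assistant can translate an analyst's prose or spreadsheet into the rule table, and a developer reviews a diff of data rather than re-deriving the logic.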
4. More aggressive use of managed services for non-differentiating infrastructure
We ran our own Kafka, managed our own Ceph object storage, operated our own Docker Swarm. In 2026, for most of those components, a managed cloud equivalent exists. We would spend less time on infrastructure operations and more on product delivery. The team's energy is most valuable applied to the domain logic, not to keeping a Kafka cluster healthy.
What we would keep exactly the same
- The event-driven CQRS architecture. For a domain that requires an audit trail, supports actuarial replay, and needs to evolve product rules without downtime, this is still the right call.
- Scala / Play for the core backend. The type system prevents entire classes of bugs. The performance is excellent. AI tools have become surprisingly good at Scala code generation, which removes one of the traditional friction points of the language.
- GitLab CI/CD with multi-environment pipelines and quality gates. Automated, auditable delivery was one of the project's biggest wins. We would not compromise on this.
- Close collaboration with business teams. Technology is not the hard part of an insurance platform. Understanding the domain well enough to model it correctly is. No amount of AI tooling replaces that.
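The type-system point above is worth one tiny, purely illustrative example (names invented): wrapping domain quantities in distinct types turns a premium-versus-claim mix-up into a compile error instead of a production incident.

```scala
// Illustrative only: distinct wrapper types for money that must not be confused.
final case class PremiumCents(value: Long) extends AnyVal
final case class ClaimCents(value: Long) extends AnyVal

def bookPremium(p: PremiumCents): Long = p.value

// bookPremium(ClaimCents(500L))  // does not compile: type mismatch
```

It costs nothing at runtime (`AnyVal` wrappers), and it is representative of the whole class of errors Scala's type system removes from review and testing entirely.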
The bottom line for similar projects today
If you are starting a similar project now — a complex, high-correctness domain requiring event-sourced data and multi-market scalability — the fundamental architecture remains valid. The difference is in the speed and team size required to execute it. A smaller, AI-augmented senior team can today deliver what previously required a larger team over a longer runway.
If that sounds relevant to your project, let's talk.