Is this an appropriate architecture, or can improvements be made?
Problem
Due to a combination of business/enterprise requirements and our architect's preferences, we have arrived at a particular architecture that seems a bit off to me. I have very limited architectural knowledge and even less cloud knowledge, so I would love a sanity check to see if there are improvements that can be made here:
Background: We are developing a replacement for an existing system, a complete rewrite from the ground up. This requires us to source data from an SAP instance through BAPI/SOAP web services, as well as use some databases of our own for data not in SAP. Currently, all of the data we will be managing exists either in local DBs on a distributed application or in a MySQL database that will need to be migrated away from. We will need to create a handful of web applications that replicate the functionality of the existing distributed app, as well as provide admin-related functionality over the data we control.
Business/Enterprise Requirements:
- Any databases we control must be implemented in MS SQL Server
- Minimize the number of databases created
- Phase 1 will have us deploy our applications to Azure, but we need the ability to bring these applications on-prem in the future
- Our Ops team wants us to dockerize everything, as they feel it will make their management of the code much simpler
- Minimize/eliminate replication of data
- The coding stack is going to be .NET Core for microservices and admin apps, and Angular 5 for the main front-end application
From these requirements our architect came up with this design:
Our front-ends will feed from a series of microservices (I use that term loosely, as they are 'Domain' level and rather large), which will be split into Read Services and Write Services in each domain. Both will be scalable and load-balanced through Kubernetes. Each will also have a read-only copy of its database attached within its container, with a single master instance of the database available for writes.
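To make the read/write split concrete, the design above amounts to routing read traffic to the replica-backed Read Service and mutating traffic to the Write Service that owns the master database. A minimal sketch of that routing decision, with hypothetical service names not taken from the original design:

```typescript
// Illustrative sketch of the per-domain read/write service split:
// a gateway routes read traffic (GET) to the scalable read service,
// and all mutating traffic to the write service, which alone can
// reach the single master database. Names are hypothetical.
type ServiceTarget = "read-service" | "write-service";

function routeByMethod(method: string): ServiceTarget {
  // Reads can be served by any read-service instance backed by a
  // read-only copy; writes must go to the write service.
  return method.toUpperCase() === "GET" ? "read-service" : "write-service";
}
```

In practice this split is usually enforced at an API gateway or ingress layer rather than in application code.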
Solution
My largest concern is around the MS SQL Server implementation. Coupling the read only instances so tightly to the services feels wrong. Is there a better way to do this?
Essentially what you have designed is a caching system - the service containers have a local copy of the data presumably so that for reads they don't have to make an extra network trip.
As you've pointed out, a more standard approach is to have a cluster of read replicas that all of the containers can read from. This allows you to scale them separately from the application servers, which is good, because they generally need different things (do you really want to allocate large amounts of RAM to every application container?). This will add network calls for database reads, but until that's proven to be an issue I wouldn't complicate the architecture to solve it.
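The replica-cluster approach described above can be sketched as a small routing layer that sends every write to the single primary and round-robins reads across a replica pool. The endpoint shape and names here are illustrative assumptions, not part of the original design; with SQL Server this would typically map to a primary plus read-only Availability Group replicas:

```typescript
// Sketch of primary/replica connection routing. Hosts and ports are
// hypothetical placeholders.
interface Endpoint {
  host: string;
  port: number;
}

class ConnectionRouter {
  private next = 0;

  constructor(
    private primary: Endpoint,
    private replicas: Endpoint[],
  ) {}

  // All writes go to the single primary instance.
  forWrite(): Endpoint {
    return this.primary;
  }

  // Reads round-robin across the replica pool, so the pool can be
  // scaled independently of the application containers.
  forRead(): Endpoint {
    if (this.replicas.length === 0) return this.primary;
    const endpoint = this.replicas[this.next % this.replicas.length];
    this.next++;
    return endpoint;
  }
}
```

The point of the separation is visible in the two methods: resizing the replica pool never touches the write path, and application containers stay stateless.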
If it does become an issue, a much more lightweight way of handling the problem is to run an actual cache locally, such as Memcached or Redis. You can tune the TTLs on individual objects as appropriate, and the cache will automatically evict rarely-requested data to keep the application server light.
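The cache-aside-with-TTL pattern suggested here might look like the following sketch. It uses a plain in-memory map as a stand-in for Redis or Memcached, and all names are illustrative; a real deployment would point `getOrLoad` at the database read it is shielding:

```typescript
// Cache-aside with per-entry TTLs: check the cache first, fall back
// to a loader (e.g. a database read), and store the result with an
// expiry. Expired entries are dropped lazily on access.
interface Entry<V> {
  value: V;
  expiresAt: number; // epoch milliseconds
}

class TtlCache<V> {
  private store = new Map<string, Entry<V>>();

  // The clock is injectable so TTL behavior can be tested.
  constructor(private now: () => number = () => Date.now()) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= this.now()) {
      this.store.delete(key); // lazy eviction of stale data
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: this.now() + ttlMs });
  }

  async getOrLoad(key: string, ttlMs: number, load: () => Promise<V>): Promise<V> {
    const cached = this.get(key);
    if (cached !== undefined) return cached;
    const value = await load();
    this.set(key, value, ttlMs);
    return value;
  }
}
```

Because each object carries its own TTL, hot data stays resident while cold data simply expires, which is the "keep the application server light" behavior described above without shipping a full database copy into every container.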
Context
StackExchange DevOps Q#4058, answer score: 2