What should the architecture of a multi-client web application be?

architecture, database-design

My question is not related to a specific technology; rather, it is about the design methodology that should be chosen. My company is about to create a web application whose users will be corporate companies. The application will store data for all of the signed-up/registered companies, including financial data. We are unsure what the architecture of the application should be, specifically from a database point of view.

I can think of two approaches:

  1. A single database for all the companies, with the same codebase. Each company is identified using a company table.

  2. A separate database for each company, with the same codebase (or maybe a different codebase for each company/client).

Both of the approaches have their pros and cons. If a single database is used, the data size will increase very fast, and performance issues can be encountered due to the load from multiple companies' users hitting the same database. The benefit of this approach is that the schema is very easy to manage. The separate databases, on the other hand, have the benefits of reduced performance overhead and privacy of company data. But the hard part is that a single change in the schema needs to be replicated across all the company databases.
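
For illustration, here is a minimal sketch of what I mean by approach 1, using SQLite; the table and column names (`company`, `invoice`) are made up for the example and are not from our actual product:

```python
import sqlite3

# Approach 1: one shared schema; every tenant-scoped row carries a company key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company (
        company_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL
    );
    CREATE TABLE invoice (
        invoice_id INTEGER PRIMARY KEY,
        company_id INTEGER NOT NULL REFERENCES company(company_id),
        amount     NUMERIC NOT NULL
    );
    -- Every tenant-scoped query filters (and should index) on company_id.
    CREATE INDEX idx_invoice_company ON invoice(company_id);
""")

def invoices_for(company_id: int):
    # The company_id filter is mandatory on every query; forgetting it
    # is the classic data-leak risk of the shared-database approach.
    return conn.execute(
        "SELECT invoice_id, amount FROM invoice WHERE company_id = ?",
        (company_id,),
    ).fetchall()
```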

These are the things we are aware of. As part of our policy, we would not sell the code to the company/client; they would only buy a license.

We are only interested in relational databases and will not be going for NoSQL. Also, the number of clients/companies will not be limited and can grow to any number.

What is the better way to design the database architecture for this scenario? I know there are thousands of applications that must have faced this question before development, but this is my first time 🙂

So I would really like your input on which approach is better.

Thank you very much!

Best Answer

Let me start with this: you are missing option 3: multiple databases, each containing the data for a number of clients. This allows scale-out (important) while avoiding possibly hundreds of thousands of databases.
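
To make option 3 concrete, here is a minimal sketch of routing a client to one of a fixed number of shard databases; the shard count and DSN naming are assumptions for illustration only:

```python
# Option 3: N databases, each holding the data of many clients.
SHARD_COUNT = 8
SHARD_DSNS = [f"postgresql://db{i}.internal/app" for i in range(SHARD_COUNT)]

def shard_for(company_id: int) -> str:
    # A stable mapping, so a company always lands on the same database.
    # A lookup table instead of a modulo hash makes later rebalancing easier.
    return SHARD_DSNS[company_id % SHARD_COUNT]
```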

But even then, your reasoning has some flaws; basically, this is a lot less of a problem than you think:

> If a single database is used, the data size will increase very fast, and performance issues can be encountered due to the load from multiple companies' users hitting the same database.

Ah, yes, and if you have a dozen databases and your server is too slow, the exact same thing happens. Also, the data size increase is identical, unless you have a lot of shared data that a single database would only need to store once.

> The benefit of this approach is that the schema is very easy to manage.

No. You are dealing with a product, so schema management should be fully automated. After all, you will likely have to manage the schemata of a dozen or two database copies ANYWAY: development, possibly a separate schema per developer, some for testing, then quality control. Unless you plan for everything to happen manually, DB changes will use change scripts and be automated.

And once that is automated, managing 1000 copies of a database is not really much more of a problem, right?
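
To make that concrete, here is a minimal sketch of such a change-script runner in Python with SQLite; the file layout (numbered scripts in a `migrations/` directory) and the `schema_version` table are assumptions, not any particular tool's convention:

```python
import sqlite3
from pathlib import Path

def migrate(db_path: str, migrations_dir: str = "migrations") -> None:
    # Apply every numbered change script (0001_xxx.sql, 0002_xxx.sql, ...)
    # that this database copy has not seen yet.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute(
        "SELECT MAX(version) FROM schema_version"
    ).fetchone()[0] or 0
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        version = int(script.stem.split("_")[0])
        if version > current:
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

# Once this exists, 20 copies or 1000 copies is just a loop (names illustrative):
for db in ["dev.db", "qa.db", "tenant_0001.db"]:
    migrate(db)
```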

> The separate databases, on the other hand, have the benefits of reduced performance overhead and privacy of company data.

No. I mean, yes, it has less performance overhead. But that is counting pennies and irrelevant - I doubt the overhead is more than a couple of percent. Privacy CAN be important; separating customers does make a lot of scenarios a lot easier. But only if you have separate web servers for every customer (or a good maintenance window) - otherwise you have to update all the databases at exactly the same moment as the code. Have fun with that.

As said, privacy can be important. Separation also allows easy import and export of data, as well as easily transporting a customer's data to a separate instance in case of a bug, so people can look at it.

Separate databases will also likely use a lot more resources on the web server and the database server, because connection management will need a lot more connections - they are not reusable across databases - unless you partially program around it.
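
A minimal sketch of that effect, with `Pool` as a stand-in for a real driver's connection pool:

```python
from functools import lru_cache

class Pool:
    # Stand-in for a real connection pool; only the sizing matters here.
    def __init__(self, dsn: str, max_size: int):
        self.dsn, self.max_size = dsn, max_size

# Shared database: one pool serves every tenant.
shared_pool = Pool("postgresql://db.internal/app", max_size=20)

# Database per tenant: connections cannot be shared, so pools pile up.
@lru_cache(maxsize=None)
def pool_for(company_id: int) -> Pool:
    return Pool(f"postgresql://db.internal/tenant_{company_id}", max_size=5)

# 1000 tenants -> up to 1000 x 5 = 5000 server-side connections,
# versus a flat 20 for the shared-database layout.
```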

> But the hard part is that a single change in the schema needs to be replicated across all the company databases.

Only if your development process is really backward - 1990 or so. Agile and DevOps already demand that this be solved, so it is a long-solved problem, and you need to solve it anyway.

Not sure how your team works, but I am in a smallish team (around 6 developers) on a project at the moment. One production database. The total DB copy count is around 20: multiple test environments, a separate database for every developer's own work, quality control, hallway testing - all have their own copy of the database. As such, schema management is fully automated. I could not care less about having 1000 other copies to run the scripts on. It is a little more scripting (and the time and parallelism requirements are an issue), but the core problem is solved anyway.

If you do not have this, I fear your product will have manually maintained databases, and the result of that will be a horrifically botched development process.