Advice on choosing a really fast DBMS

database-recommendation, nosql, performance

I am working on a project that does real-time monitoring of data being transmitted, and logs if there are any of various types of errors, etc. The CRUD operations are real-time. We initially planned on using PostgreSQL, but the problem we are facing is that PostgreSQL is not fast enough to handle real-time CRUD operations even after some tweaking; the same goes for MySQL and the other big players. SQLite performs a lot faster than them, but soon becomes nearly unusable once the database size reaches a few hundred MBs. Another constraint is that the monitoring has to be done over the network.

Is there any database that can handle such fast operations? Or should I opt for a NoSQL database?

EDIT (Regarding design):

The design is normalized as far as possible. The stored data is almost mutually independent, so there are very few joins. Also, I said "a few hundred MBs" only as a reference point; the actual databases we work with are multiple GBs in size. Lots of data is inserted every second, and retrieved as well.

As for PostgreSQL, it takes 5-7 times as long as SQLite in the tests that I ran on my data.

EDIT (Regarding speed):

I'd like to describe a worst-case scenario that can happen.

Suppose that the main application is being used on 10 instances (or PCs). They all interact with a central DBMS and insert data into it. Every single app will have many threads doing operations on the data being transmitted in real time. The app reports whether there is erroneous data in the stream. Since the data is analyzed at the packet level, lots of errors can occur in a second. Based on some very basic calculations, the worst case may require an insertion rate of ~3k rows per second per instance, with each row having some 8-10 columns. I ran such a test on my machine (4 GB RAM, quad-core), and SQLite was able to do this in ~1 sec over the network. I had tweaked PostgreSQL a little, and it did the same in ~5 sec (I'll admit I did not optimize it much, as I am not a pro in the DBMS field). But the bottleneck appears as the DB size grows in SQLite: the insertions seemingly go on almost fine, but the reads take a long time. I tested this myself with a DB size of 3 GB.

Our major concern is the insertions, which in the worst case will be ~3k per second per app instance, and in the average case ~500-1k insertions per second per instance.
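For insert rates like these, the first thing to check is whether each row gets its own transaction: per-row commits force a sync to disk every time, while batching many rows into one transaction pays that cost once. A minimal sketch with Python's built-in `sqlite3` (the table and column names here are made up for illustration, not taken from the actual schema):

```python
import sqlite3

# Hypothetical packet-error rows standing in for the real-time stream:
# (packet_id, instance, error_type, code) -- names are assumptions.
ROWS = [(i, "instance-1", "CRC_ERROR", i % 256) for i in range(30_000)]

def insert_per_row(conn):
    # One commit per row: every INSERT is its own transaction,
    # so the journal is synced 30,000 times.
    for row in ROWS:
        conn.execute("INSERT INTO errors VALUES (?, ?, ?, ?)", row)
        conn.commit()

def insert_batched(conn):
    # One transaction for the whole batch: a single sync at commit.
    # "with conn" commits on success and rolls back on error.
    with conn:
        conn.executemany("INSERT INTO errors VALUES (?, ?, ?, ?)", ROWS)

def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE errors (packet_id, instance, error_type, code)"
    )
    return conn

if __name__ == "__main__":
    conn = make_db()
    insert_batched(conn)
    count = conn.execute("SELECT COUNT(*) FROM errors").fetchone()[0]
    print(f"inserted {count} rows in one transaction")
```

On a file-backed database the gap between the two approaches is typically one or two orders of magnitude, which is in the same ballpark as the 5x difference reported above.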

Best Answer

If you can't scale a major RDBMS, then your database design (including indexing, queries, and the like) or your hardware is wrong. The choice of platform is almost irrelevant.

It is that simple, especially when you mention "a few hundred megabytes", which implies low volumes (I mean a few dozen writes per second).
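If per-commit fsync latency is what makes PostgreSQL look slow in this kind of test, there are standard settings to look at before switching platforms. A sketch of commonly tuned `postgresql.conf` values for insert-heavy workloads (not a tuned config; appropriate values depend on your version and hardware):

```
# synchronous_commit = off lets commits return before the WAL is
# flushed to disk. A crash can lose the last few hundred ms of
# commits, but it never corrupts the database -- often an acceptable
# trade-off for monitoring/logging data.
synchronous_commit = off

# Larger WAL buffers reduce contention under heavy insert load.
wal_buffers = 16MB

# Spread checkpoint I/O out so it doesn't stall inserts.
checkpoint_completion_target = 0.9
```

Combined with batched multi-row inserts (or `COPY` for bulk loads), these usually change the picture entirely for write rates in the low thousands of rows per second.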