Though this is a years-old question...
In short, you can understand ACID as a guarantee of data integrity/safety under any expected circumstances.
Just as in general programming, all the headaches come from multi-threading.
The biggest issue with NoSQL is mostly ACI; D(urability) is usually a separate issue.
If your DB is single-threaded - so only one user can access it at a time - it is natively ACI compliant. But virtually no server can afford this luxury.
If your DB needs to be multi-threaded - serving multiple users/clients simultaneously - you need ACI-compliant transactions. Otherwise you will get silent data corruption rather than simple data loss, which is a lot more horrible. This is exactly the same as in generic multi-threaded programming: if you don't have a proper mechanism such as a lock, you will get undefined data. In a DB, that mechanism is called full ACID compliance.
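To feel the analogy, here is a tiny, purely illustrative Python sketch (threads and dicts, not a database): two threads move money between two in-memory "rows", and without the lock the invariant that the total stays at 200 can break, exactly like a multi-row update without a transaction:

```python
import threading
import time

# Two in-memory "rows" with an invariant: the total should always stay 200.
accounts = {"a": 100, "b": 100}
lock = threading.Lock()

def transfer(src, dst, amount, use_lock):
    for _ in range(1000):
        if use_lock:
            with lock:
                balance = accounts[src]
                accounts[src] = balance - amount
                accounts[dst] = accounts[dst] + amount
        else:
            balance = accounts[src]            # read
            time.sleep(0)                      # yield to the other thread on purpose
            accounts[src] = balance - amount   # write based on a possibly stale read
            accounts[dst] = accounts[dst] + amount

threads = [
    threading.Thread(target=transfer, args=("a", "b", 1, False)),  # pass True to fix it
    threading.Thread(target=transfer, args=("b", "a", 1, False)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without the lock the total frequently drifts away from 200 (lost updates);
# with use_lock=True it is always exactly 200.
print(sum(accounts.values()))
```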
Many YesSQL/NoSQL databases advertise themselves as ACID-compliant, but actually very few of them really are.
No ACID compliance = you will always get undefined results in a multi-user (multi-client) environment. I can't even think of a DB that does this.
Single row/key ACID compliance = you will get a guaranteed result if you modify only a single value at a time, but an undefined result (= silent data corruption) for simultaneous multi-row/key updates. Most of the currently popular NoSQL DBs fall here, including Cassandra, MongoDB, CouchDB, … These kinds of DBs are safe only for single-row transactions, so you need to guarantee your DB logic won't touch multiple rows in a transaction.
Multi row/key ACID compliance = you will always get a guaranteed result for any operation. This is a minimal requirement for an RDBMS. In the NoSQL field, very few do this: Spanner, MarkLogic, VoltDB, FoundationDB. I am not even sure there are more. These kinds of DBs are quite fresh and new, so little is known about their abilities or limitations.
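To make the single-row vs. multi-row difference concrete, here is a minimal sketch using SQLite from Python's standard library, purely as an illustration; the table, names and amounts are all made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 100)")
conn.commit()

def transfer(amount):
    # Both rows change, or neither does. A store that is only single-row ACID
    # cannot promise this; it leaves a window where one row is updated and the
    # other is not.
    with conn:  # opens a transaction; commits on success, rolls back on any error
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = 'alice'",
                     (amount,))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = 'bob'",
                     (amount,))

transfer(30)
print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('alice', 70), ('bob', 130)] -- never a state where only one of the two rows moved
```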
Anyway, this comparison leaves out D(urability), so don't forget to check the durability attribute too. It's very hard to compare durability because the range becomes too wide. I don't know this topic well…
No durability. You will lose data at any time.
Safely stored on disk. When you get COMMIT OK, the data is guaranteed to be on disk. You lose data only if the disk breaks.
Also, there are differences even among ACID-compliant DBs.
Sometimes ACID compliant / you need configuration / no automatic something... / some components are not ACID-compliant / very fast but you need to turn something off for this... / ACID-compliant if you use a specific module... = we do not bundle data safety by default. It's an add-on, an option, or sold separately. Don't forget to download, assemble, set up and issue the proper commands. Otherwise data safety may be silently dropped. Do it yourself. Check it yourself (a small sketch follows this list). Good luck not making any mistake. Everyone on your team must be a flawless DBA to use this kind of DB safely. MySQL.
Always ACID compliant = we don't trade data safety for performance or anything else. Data safety is a forced bundle with this DB package. Most commercial RDBMSs, PostgreSQL.
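For the "check it yourself" part, here is a sketch of one way to do it on MySQL, assuming the third-party PyMySQL driver and made-up connection details; it lists which storage engine each table uses, since only a transactional engine such as InnoDB gives you full transactions:

```python
import pymysql  # assumed third-party driver; connection details below are placeholders

conn = pymysql.connect(host="localhost", user="app", password="secret", database="mydb")
try:
    with conn.cursor() as cur:
        # information_schema lists the storage engine behind every table
        cur.execute(
            "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES "
            "WHERE TABLE_SCHEMA = %s",
            ("mydb",),
        )
        for table, engine in cur.fetchall():
            note = "" if engine == "InnoDB" else "  <-- double-check: may not support transactions"
            print(f"{table}: {engine}{note}")
finally:
    conn.close()
```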
The above is about a typical DB implementation. But still, other hardware failures may corrupt the database: memory errors, data channel errors, or any other possible error. So you need extra redundancy, and a real production-quality DB must offer fault-tolerance features.
No redundancy. You lose all your data if it gets corrupted.
Backup. You make a snapshot copy and restore from it. You lose whatever was written after the last backup.
Online backup. You can take a snapshot backup while the database is running.
Asynchronous replication. Data is copied every second (or at a specified interval). If the machine goes down, this DB is guaranteed to get the data back by just rebooting. You lose data from after the last interval.
Synchronous replication. Data is copied immediately on every update, so you always have an exact copy of the original data. Use the copy if the original breaks.
So far, I have seen many DB implementations lacking many of these. And I think that if they lack proper ACID and redundancy support, users will eventually lose data.
Your list of characters that must be supported clearly indicates you need nothing more than plain ascii.
If you want to store this as text, then ascii is your most compact way. But here are a few clarifications:
VARCHAR(10) does not "need" 80 bits. It may need 80 bits, if all characters are used, under the ascii character set. If you only store 3 characters (e.g. 'abc'), then it only needs 24 bits.
utf8 does not use more space than ascii when only ASCII characters are stored. 'abc' in both the utf8 and ascii encodings is 3 bytes long. That's why it's called utf-8: it attempts to use only 8 bits per character when possible.
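A quick way to see this from a Python shell (the strings here are just examples):

```python
text = "abc"
print(len(text.encode("ascii")))   # 3 bytes in ascii
print(len(text.encode("utf-8")))   # 3 bytes in utf-8 as well -- no overhead for ASCII text
print(len("abcdefghij".encode("utf-8")))  # a full 10-character ASCII string is still 10 bytes
```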
However, in temporary tables (vanilla MySQL; solved in Percona Server; I'm not sure about MariaDB) a utf8 character will take 3 bytes no matter what; the same goes for MEMORY tables. So best use ascii if it fits your needs.
You could compress further. You can use the COMPRESS() function, for example, or encode via your own method: if you only need 64 different characters, you're really using only 6 bits per character, which means that for every 3 bytes (24 bits) you use today you could squeeze in a fourth character (using the 2 spare bits from each of the 3 bytes). So you can certainly compress by 25%, and possibly more. But this leaves you with BINARY/VARBINARY types, which are not as easy to work with: you'll always have to compress/decompress, and you will not be able to index the text (alphabetically, that is; you can certainly put indexes on the column).
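Here is a rough sketch of that 25% saving; the 64-symbol alphabet below is an assumption, not something taken from your question:

```python
# 64 symbols -> 6 bits per character -> 4 characters fit into 3 bytes.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
CODE = {ch: i for i, ch in enumerate(ALPHABET)}

def pack(text: str) -> bytes:
    bits, nbits = 0, 0
    out = bytearray()
    for ch in text:
        bits = (bits << 6) | CODE[ch]   # append this character's 6 bits
        nbits += 6
        while nbits >= 8:               # flush whole bytes as they become available
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:                           # pad the trailing partial byte with zero bits
        out.append((bits << (8 - nbits)) & 0xFF)
    return bytes(out)

print(len("abcdefgh"), "characters ->", len(pack("abcdefgh")), "bytes")  # 8 -> 6, i.e. 25% smaller
```

The packed value would go into a VARBINARY column, and you would need the matching unpack step on the way out - exactly the extra work mentioned above.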
The rest of the tools you mentioned are imho irrelevant; by the time your data reaches varnishd, your texts are uncompressed. Possibly so for PHP as well.
The issue with storing your database in memory is that if you have any sort of memory problem, or the server has to be restarted, or anything of that sort, all your memory gets flushed.
That is the reason people don't store their database in memory.
Now, there are caching tools which are in-memory and can work as a very simple database, like memcached. That may meet your needs. If you look into tmpfs and ramfs, you can create a folder that exists in memory and move your files in there normally.
So, whether you are working with MongoDB, MySQL or whatever else, you can have the data folder live in that RAM folder. This will give the database super fast reads and writes. Everything will be really fast. You will be limited to however much RAM you have, minus the size of your OS and other things running.
Also, just be careful: MongoDB likes to hold writes in memory until the disk has a chance to write them, so you may want to turn that feature off because it will be the same speed anyway.
My recommendation is to work with memcached and then mix it with a normal database that lives on disk. The same concept is used for PHP sessions on some systems.
http://mickeyben.com/2009/12/30/using-nginx-as-a-load-balancer.html
The basic way it works is: if your record is found in memcached, the database is never checked. If it is not found, then you typically do three (3) things: read the record from the database, store it in memcached, and return it. :)
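Here is a minimal sketch of that lookup flow (often called cache-aside), assuming the third-party pymemcache client and a made-up load_from_database() helper standing in for your real query:

```python
from pymemcache.client.base import Client  # assumed third-party memcached client

cache = Client(("localhost", 11211))

def load_from_database(key):
    # placeholder for your real SQL/MongoDB query against the on-disk database
    return f"value-for-{key}".encode()

def get_record(key, ttl=300):
    value = cache.get(key)
    if value is not None:              # cache hit: the database is never touched
        return value
    value = load_from_database(key)    # 1. miss: read from the database on disk
    cache.set(key, value, expire=ttl)  # 2. put it into memcached for next time
    return value                       # 3. hand it back to the caller

print(get_record("user:42"))
```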