Is there really a limit to that design, and if yes, how can it be dealt with? 1.1. If the select query on the votes table starts getting slower, what can I do to speed it up?
I don't think the number of votes is likely to be the problem. Whether it becomes one depends in part on how well you can index, how your db does caching, etc. Standard performance tuning applies, and that isn't really your design per se. I will say more below on what to consider if you run into the wall of being unable to get your design to work fast enough.
Is there a better way to design this kind of relations?
Not really.
How do I cache that data? Or is that even needed with proper indexing?
My preference in this case would be to start out without caching, and then to implement a caching layer when you need one. A caching layer might include something like memcached, or you could build one on a NoSQL solution like Mongo. At that point you can look at optimizing the areas which are the largest problems.
What kind of indexes would you recommend for the votes table? Am I correct that I need a simple double-field index (user_id, content_id)?
I know that MySQL and PostgreSQL are different enough to make cross-db advice somewhat dangerous here, but I am thinking you'd want two separate indexes: one on content_id and one on user_id. Aggregating by content_id and looking up by user_id are likely to be different queries with different join conditions, so a single composite index would only help one of them.
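As a sketch, assuming a votes table shaped like the one described in the question (table and column names are illustrative):

```sql
-- Two single-column indexes, one per access pattern.
CREATE INDEX idx_votes_content_id ON votes (content_id);
CREATE INDEX idx_votes_user_id    ON votes (user_id);

-- idx_votes_content_id serves per-content aggregates:
--   SELECT SUM(vote) FROM votes WHERE content_id = ?;
-- idx_votes_user_id serves per-user lookups:
--   SELECT content_id, vote FROM votes WHERE user_id = ?;
```

Note that if you also want to enforce "one vote per user per content piece", a unique composite index on (user_id, content_id) gives you that constraint and covers the user_id lookups at the same time, so you could replace the plain user_id index with it.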
Most of the load will go on recent content pieces. Maybe I should create something like a recent_votes table, which holds duplicate data but only for, say, the last 24 hours, and takes most of the load; if a user wants older data, they work with the much bigger and slower table holding all votes. Does that make any sense?
Keep in mind that databases frequently do a good job of caching recently accessed data. I would expect that MySQL can do this too; if it can't, go with PostgreSQL instead. Don't duplicate the data yourself inside the db.
What to do if you hit the wall will depend on your DB choice. If you are using MySQL, the traditional answer is to look at something like memcached or to build a caching layer in a NoSQL db. If you are using PostgreSQL, you get those choices plus options like Postgres-XC, which gives you the ability to do Teradata-style scale-out and clustering in OLTP environments.
You can have a table of weights -- that is, an association of a user with each answer and how much that answer is weighted for that user. Then you can give each user their own weight for each answer.
create table Weights(
UserID int not null references Users( ID ),
AnswerID int not null references Answers( ID ),
Weight int not null default 1, -- for user Ux, answer Ax contributes Ax * Weight
constraint PK_Weights primary key( UserID, AnswerID )
);
I've shown the weight as an integer but you can change that if you want fractional values (A1 * 3.5 + A2 * -2.75 + A3 * 7 + A4 * 3.1415 = SCORE ).
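Computing the per-user score from that table might look like the following sketch, assuming Answers has a numeric Value column (an illustrative name, not from the original schema):

```sql
-- Weighted score per user: sum each answer's value times that
-- user's personal weight for it.
SELECT w.UserID,
       SUM(a.Value * w.Weight) AS Score
FROM   Weights w
JOIN   Answers a ON a.ID = w.AnswerID
GROUP  BY w.UserID;
```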
Best Answer
If you want to restrict your votes so that one user can vote for another user only once, simply use the two user ids together as the primary key. As for the vote itself, just use one column and make it an enum:
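A minimal sketch of that idea, with illustrative names and MySQL's inline ENUM syntax (PostgreSQL would instead declare the enum with CREATE TYPE ... AS ENUM first):

```sql
CREATE TABLE user_votes (
    voter_id INT NOT NULL REFERENCES users (id),  -- who cast the vote
    votee_id INT NOT NULL REFERENCES users (id),  -- who received it
    vote     ENUM('up', 'down') NOT NULL,         -- the single vote column
    PRIMARY KEY (voter_id, votee_id)              -- one vote per voter/votee pair
);
```

The composite primary key is what enforces the "once per pair" rule; re-voting then becomes an UPDATE (or an upsert) rather than an INSERT.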