It sounds like what you probably want is to:
Create a role to own all the common tables and schema, or just use your own if you really will always be the only one with full control of the main tables.
Create another role you intend to give only read-only access to the shared tables and schemas. GRANT that role rights using GRANT SELECT ON ALL TABLES IN SCHEMA [x] for each shared schema. You may also want to ALTER DEFAULT PRIVILEGES to make sure this role has read rights on any new tables created in these schemas too.
Now GRANT each user membership of the read-only access role with INHERIT.
For the private schemas, create a schema named after the user's username with CREATE SCHEMA [username] AUTHORIZATION [username], or the older style where you create the schema and then ALTER SCHEMA ... OWNER TO.
See the PostgreSQL manual for the detailed syntax of all of the above commands. Start with user management, part of the broader database administration topic that includes grant management, etc. The PostgreSQL manual is detailed, comprehensive and readable: reading it is strongly recommended.
How do I avoid the sort in the EXPLAIN result? Actually, I didn't ask for any sort in the SQL statement.
You've asked for rows to be aggregated. One way to do this is to sort the data set and then scan it to collapse out duplicates. This can be faster than hash aggregation, which is the other way PostgreSQL knows how to do grouping.
So while you didn't explicitly say "sort the rows", it's still sorting them because of something you asked for.
The immediate problem is that PostgreSQL is being very conservative with how much memory it's using for sorts:
Sort Method: external merge Disk: 317656kB
and is doing a roughly 300MB on-disk sort. You can see that pretty clearly if you take a look at the plan on explain.depesz.com.
If you:
SET work_mem = '400MB';
before running the query, it should be a whole lot different.
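If you only need the larger value for this one query, you can also scope it to a single transaction instead of the whole session; SET LOCAL reverts at commit or rollback. A minimal sketch:

BEGIN;
SET LOCAL work_mem = '400MB';   -- applies only inside this transaction
-- run the aggregating query here
COMMIT;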
Unfortunately it's not as simple as changing your configuration, because PostgreSQL isn't too clever about resource management. Per the documentation on work_mem, it may use up to work_mem bytes per sort or join, per session. So if you have max_connections = 50 and you're running complicated queries, you might find yourself using many gigabytes of working memory, exceeding free memory, and hitting swap. Which you really don't want.
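For example, with max_connections = 50 and queries that each run just two sorts at work_mem = '400MB', the worst case is roughly 50 × 2 × 400MB = 40GB of working memory, on top of shared_buffers.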
It also seems to be doing a seqscan on counters, but since about 1/4 of the rows match the condition, that's probably the right thing to do - an index scan would probably be slower.
I find the default work_mem way too conservative and tend to set it to at least 100MB on any reasonably large system. I also prefer to run PostgreSQL with PgBouncer in front and a low max_connections, allowing me to throw more resources at each individual connection.
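If you do decide to raise it instance-wide rather than per session, one way (ALTER SYSTEM is available from PostgreSQL 9.4; the role name reporting_user below is just a placeholder) is:

ALTER SYSTEM SET work_mem = '100MB';
SELECT pg_reload_conf();                          -- work_mem changes don't need a restart
-- or limit the change to one role:
ALTER ROLE reporting_user SET work_mem = '100MB';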
Frankly, I'm curious to see how MS SQL Server executes this, because the numbers you report are astonishing for a query like this.
Result
I had been using CREATE UNLOGGED TABLE for a long time, as I found that in earlier versions of PostgreSQL it made a HUGE impact on the execution time of my SQL scripts. In one case, the run time dropped from 15 minutes to around 45 seconds. Every once in a great while, like once or twice a year, I would find the tables had emptied out. I would just go ahead and reload the datasets, never really thinking more about it.

But when I migrated my datasets to PostgreSQL 11.x, I found that the tables emptied out every time I shut down the laptop. Thanks to the comments above, I learned that the UNLOGGED option was indeed the culprit here. I removed that option from all the table creation commands, and the data no longer disappears during reboot.
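If you have existing unlogged tables that you'd rather convert than recreate, PostgreSQL 9.5 and later can switch them to normal, crash-safe tables in place; a minimal sketch, with mytable and new_table as placeholder names:

ALTER TABLE mytable SET LOGGED;   -- rewrites the table as a normal, WAL-logged table
-- for new tables, simply omit the UNLOGGED keyword:
CREATE TABLE new_table (id integer PRIMARY KEY, payload text);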
Added Benefit