Hello. You need to change max_locks_per_transaction in the postgresql.conf file, typically located in /var/lib/pgsql/data or wherever your PostgreSQL data directory is installed.
This was already answered here.
You will have to restart the PostgreSQL service.
The answer above also mentions that you may need to increase shared memory. Here is a link to a PostgreSQL article on that subject.
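For reference, the relevant entries in postgresql.conf might look like the fragment below. The values are purely illustrative, not recommendations; tune them for your workload:

```ini
# postgresql.conf -- illustrative values only
max_locks_per_transaction = 128   # default is 64; raise if you hit "out of shared memory"
shared_buffers = 512MB            # more lock slots may also require more shared memory
```

Both settings require a restart of the PostgreSQL service to take effect.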
What would address your question is the technique of join decomposition.
According to page 209 of the book:
You can decompose a join by running multiple single-table queries instead of a multitable join, and then performing the join in the application. For example, instead of this single query:
SELECT * FROM tag
JOIN tag_post ON tag_post.tag_id = tag.id
JOIN post ON tag_post.post_id = post.id
WHERE tag.tag = 'mysql';
You might run these queries:
SELECT * FROM tag WHERE tag = 'mysql';
SELECT * FROM tag_post WHERE tag_id=1234;
SELECT * FROM post WHERE post.id IN (123,456,567,9098,8904);
Why on earth would you do this? It looks wasteful at first glance, because you've increased the number of queries without getting anything in return. However, such restructuring can actually give significant performance advantages:
- Caching can be more efficient. Many applications cache "objects" that map directly to tables. In this example, if the object with the tag mysql is already cached, the application will skip the first query. If you find posts with an ID of 123, 567, or 9098 in the cache, you can remove them from the IN() list. The query cache might also benefit from this strategy. If only one of the tables changes frequently, decomposing a join can reduce the number of cache invalidations.
- Executing the queries individually can sometimes reduce lock contention.
- Doing joins in the application makes it easier to scale the database by placing tables on different servers.
- The queries themselves can be more efficient. In this example, using an IN() list instead of a join lets MySQL sort row IDs and retrieve rows more optimally than might be possible with a join.
- You can reduce redundant row accesses. Doing a join in the application means retrieving each row only once, whereas a join in the query is essentially a denormalization that might repeatedly access the same data. For the same reason, such restructuring might also reduce the total network traffic and memory usage.
- To some extent, you can view this technique as manually implementing a hash join instead of the nested loops algorithm MySQL uses to execute a join. A hash join might be more efficient.
As a result, doing joins in the application can be more efficient when you cache and reuse a lot of data from earlier queries, you distribute data across multiple servers, you replace joins with IN() lists, or a join refers to the same table multiple times.
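To make the decomposition concrete, here is a minimal sketch in Python, using the standard sqlite3 module as a stand-in for MySQL. The table and column names follow the queries above; the sample rows, the dict-based post cache, and the helper logic are my own assumptions, not code from the book:

```python
import sqlite3

# In-memory stand-in for the schema used in the example queries.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tag      (id INTEGER PRIMARY KEY, tag TEXT);
    CREATE TABLE tag_post (tag_id INTEGER, post_id INTEGER);
    CREATE TABLE post     (id INTEGER PRIMARY KEY, title TEXT);
    INSERT INTO tag VALUES (1234, 'mysql');
    INSERT INTO tag_post VALUES (1234, 123), (1234, 456), (1234, 567);
    INSERT INTO post VALUES (123, 'a'), (456, 'b'), (567, 'c');
""")

# Pretend post 123 is already in the application cache.
post_cache = {123: {"id": 123, "title": "a"}}

# Step 1: resolve the tag name to an id.
(tag_id,) = conn.execute("SELECT id FROM tag WHERE tag = ?", ("mysql",)).fetchone()

# Step 2: fetch the post ids for that tag.
post_ids = [pid for (pid,) in conn.execute(
    "SELECT post_id FROM tag_post WHERE tag_id = ?", (tag_id,))]

# Step 3: drop cached ids from the IN() list, then fetch only the misses.
misses = [pid for pid in post_ids if pid not in post_cache]
placeholders = ",".join("?" * len(misses))
for pid, title in conn.execute(
        f"SELECT id, title FROM post WHERE id IN ({placeholders})", misses):
    post_cache[pid] = {"id": pid, "title": title}

# The "join" happens here, in the application.
posts = [post_cache[pid] for pid in post_ids]
print([p["id"] for p in posts])  # → [123, 456, 567]
```

Note how the cache check shrinks the IN() list before the third query ever runs; with a single three-table join, that query would have to touch every post regardless.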
OBSERVATION
I like the first bullet point because InnoDB is a little heavy-handed when it cross-checks the query cache.
As for the last bulletpoint, I wrote a post back on Mar 11, 2013 (Is there an execution difference between a JOIN condition and a WHERE condition?) that describes the nested loop algorithm. After reading it, you will see how good join decomposition may be.
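As a rough illustration of that last point: a nested-loop join scans the inner table once per outer row, while a hash join builds a lookup table on the join key first and probes it. This is my own Python sketch of the two strategies, not MySQL internals:

```python
from collections import defaultdict

outer = [(1, "mysql"), (2, "postgres")]   # (tag_id, tag)
inner = [(1, 123), (1, 456), (2, 789)]    # (tag_id, post_id)

# Nested loops: O(len(outer) * len(inner)) comparisons.
nested = [(t, p) for (tid, t) in outer for (itid, p) in inner if tid == itid]

# Hash join: build a hash table on the join key, then probe once per outer row.
buckets = defaultdict(list)
for itid, p in inner:
    buckets[itid].append(p)
hashed = [(t, p) for (tid, t) in outer for p in buckets[tid]]

print(nested == hashed)  # → True: same result, far fewer comparisons at scale
```

Doing the join in the application with a dict is essentially the hash-join half of this sketch.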
As for all the other points from the book, developers really look at performance as the bottom line. Some rely on external means (outside of the application) for performance enhancements, such as using faster disks, getting more CPUs/cores, tuning the storage engine, and tuning the configuration file. Others will buckle down and write better code. Some may resort to coding all the business intelligence in stored procedures but still not apply join decomposition (see What are the arguments against or for putting application logic in the database layer? along with the other posts). It all comes down to the culture and tolerance of each developer shop.
Some may be satisfied with performance and not touch the code anymore. Others simply don't realize there are great benefits one can reap if they try join decomposition.
For those developers that are willing ...
GIVE IT A TRY !!!
Best Answer
One of them is the binary size on disk. PostgreSQL compresses on disk all TOASTable fields over 2 kB (like text). Think about a binary value: it usually takes less space on disk than the same value CAST to text anyway. In addition, you're going to have overhead merely because of the SQL, the schema, and the input format (be it CSV or whatever).
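A quick Python sketch makes the binary-vs-text point visible; struct here only approximates a fixed-width on-disk encoding, it is not PostgreSQL's actual storage format:

```python
import struct

value = 3.141592653589793

binary = struct.pack("<d", value)   # 8 bytes, like a fixed-width float8
text = str(value).encode("ascii")   # the same value rendered as text

print(len(binary), len(text))       # → 8 17
```

The gap only widens for long numeric values and timestamps, before compression even enters the picture.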
You may want to look into the -z option (or xz, if you need the best compression).