Looks like MB vs MiB, i.e. decimal SI megabytes vs binary megabytes:
regress=> SELECT round(25.17*1000*1000 / (1024*1024),1);
round
-------
24.0
(1 row)
You've done your conversion into megabytes using decimal SI units (MB), not binary megabytes (MiB). Divide by 1024 rather than by 1000, or use pg_size_pretty:
regress=> select pg_size_pretty( 3072 * 8192::bigint );
pg_size_pretty
----------------
24 MB
(1 row)
Compare:
regress=> select
pg_size_pretty( 3072 * 8192::bigint ),
(3072::real * 8192)/(1024*1024) AS binaryMiB,
(3072::real * 8192)/(1000*1000) AS decimalMB_SI;
pg_size_pretty | binarymib | decimalmb_si
----------------+-----------+--------------
24 MB | 24 | 25.165824
(1 row)
The GUC display code uses KiB / MiB / GiB too, but (IMO) mislabels them as kB / MB / GB:
regress=> SELECT 24*1024*1024;
?column?
----------
25165824
(1 row)
regress=> SET work_mem = 25165824;
SET
regress=> SHOW work_mem;
work_mem
----------
24GB
(1 row)
(It's GB rather than MB because work_mem is measured in kilobytes, not bytes, so 25165824 means 25165824 kB = 24 GiB; ignore the factor-of-1024 difference.)
Depending on your usage, you could partition the data to reduce how much has to be scanned (at 400 GB, even your indexes are large and expensive to read).
I think that's your best option, since working with 400 GB tables is heavy even with well-optimized queries; a partitioning sketch follows below.
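A minimal sketch of declarative range partitioning; the gps_points table and its columns are hypothetical:

CREATE TABLE gps_points (
    device_id bigint NOT NULL,
    recorded  timestamptz NOT NULL,
    lat       double precision,
    lon       double precision
) PARTITION BY RANGE (recorded);

-- One partition per year: queries that filter on "recorded" only touch matching partitions.
CREATE TABLE gps_points_2023 PARTITION OF gps_points
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
CREATE TABLE gps_points_2024 PARTITION OF gps_points
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');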
You could also pre-warm the cache with the index data, either by issuing some of your queries against frequently-requested GPS points or by using something like pg_prewarm.
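A minimal sketch with pg_prewarm; the index name is hypothetical:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- Load the index into shared_buffers so the first queries don't pay the disk-read cost.
SELECT pg_prewarm('gps_points_recorded_idx');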
Best Answer
If you want to know it per database:
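A minimal sketch using pg_stat_database; the cache_hit_ratio expression is one possible formulation:

SELECT datname,
       blks_read,
       blks_hit,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS cache_hit_ratio
FROM pg_stat_database
WHERE blks_read + blks_hit > 0;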
If you are interested in values per table, use pg_statio_user_tables.
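For example, something like:

SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read + idx_blks_read DESC;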
If you want statistics per query, install pg_stat_statements and use its shared_blks_hit and shared_blks_read columns, for example:
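A minimal sketch, assuming pg_stat_statements is in shared_preload_libraries and the extension has been created:

SELECT query, calls, shared_blks_hit, shared_blks_read
FROM pg_stat_statements
ORDER BY shared_blks_read DESC
LIMIT 10;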
Don't forget that since PostgreSQL uses buffered I/O, a "block read" might come from the kernel cache rather than from disk.