Yes, hacking into the catalog is bad. Reason #1 is that if you upgrade to a new version and forget to move the hack, things start breaking. Just running pg_dump and loading into the same version on another instance will also lose the hack. There's also always the chance that a new version of Postgres changes so much that your hack is no longer possible, forcing you to go back and re-engineer.
Overriding with your own function is the correct way to go.
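If the goal is to change the behavior of a built-in function, a minimal sketch of that override approach, with a dedicated schema and an illustrative function (now()) that are not taken from the original question:

CREATE SCHEMA IF NOT EXISTS overrides;

-- Example only: a now() that returns a fixed point in time, e.g. to freeze "now" in a test database
CREATE OR REPLACE FUNCTION overrides.now()
  RETURNS timestamptz
  LANGUAGE sql
  STABLE
AS $$ SELECT timestamptz '2024-01-01 00:00:00+00' $$;

-- Schemas listed before pg_catalog shadow its functions of the same name and signature.
-- (pg_catalog is only searched first implicitly when it is not named in the search_path at all.)
SET search_path = overrides, pg_catalog, public;

SELECT now();  -- now resolves to overrides.now()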
Do tables with only fixed-width values perform read queries better than those with varying widths?
Basically no. There are very minor costs when accessing columns, but you won't be able to measure any difference. In particular:
The use of varchar(255) in a table definition typically indicates a lack of understanding of the Postgres type system. The architect behind it is most probably not well-versed in Postgres - or the layout has been carried over from another RDBMS like SQL Server, where this used to matter.
- Your most expensive query, SELECT COUNT(*) FROM articles, does not consider row data at all; only the total table size matters, and only indirectly. Counting all rows is costly in Postgres due to its MVCC model. Maybe an estimate is good enough? It can be had very cheaply - see the sketch after this list.
- Fast way to discover the row count of a table
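A minimal sketch of such a cheap estimate, assuming the table is named articles as in the question and that autovacuum / ANALYZE keeps statistics reasonably current:

SELECT reltuples::bigint AS estimated_rows
FROM   pg_class
WHERE  oid = 'public.articles'::regclass;  -- planner statistics, not an exact count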
(Pretend disk space isn't an issue.)
Disk space is always an issue, even if you have plenty. The size on disk (number of data pages that have to be read / processed / written) is one of the most important factors for performance.
Where can I learn more about the internals of the Postgres DB engine?
The info page for the tag postgres has the most important links to more information, including books, the Postgres Wiki and the excellent manual. The latter is my personal favorite.
Your third query has issues
SELECT * FROM articles WHERE user_id = $1 ORDER BY published_date DESC LIMIT 1;
It runs ORDER BY published_date DESC, but published_date can be NULL (there is no NOT NULL constraint), and NULL sorts before all other values in descending order. That's a loaded foot-gun if there can be NULL values, unless you prefer NULL values over the latest actual published_date.
- Either add a NOT NULL constraint. Always do that for columns that can't be NULL.
- Or make that ORDER BY published_date DESC NULLS LAST and adapt the index accordingly; both options are sketched below:
"articles_user_id_published_date_idx" btree (user_id, published_date DESC NULLS LAST)
Details in this recent, related answer.
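A minimal sketch of both options, using the table and column names from the question (only one of the two is needed):

-- Option 1: forbid NULL, if every article is guaranteed to have a publication date
ALTER TABLE articles ALTER COLUMN published_date SET NOT NULL;

-- Option 2: keep NULL allowed, but sort it last and index to match
CREATE INDEX articles_user_id_published_date_idx
    ON articles (user_id, published_date DESC NULLS LAST);

SELECT *
FROM   articles
WHERE  user_id = $1
ORDER  BY published_date DESC NULLS LAST
LIMIT  1;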
Convert published_date to an actual date
Since published_date is always rounded, it's effectively just a date, which occupies 4 bytes instead of 8 for the timestamp. Best move it up in the table definition to come before the two timestamp columns, so you don't lose the 4 bytes to padding:
...
body | text
published_date | date -- <---- here
created_at | timestamp without time zone
updated_at | timestamp without time zone
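For an existing table, the type change itself could look like this sketch (Postgres cannot reorder existing columns, so the padding-friendly layout above only applies when the table is created or rewritten with that column order):

ALTER TABLE articles
    ALTER COLUMN published_date TYPE date
    USING published_date::date;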
Smaller on-disk storage does make a difference for performance.
More importantly, your index on (user_id, published_date) would now occupy just 32 bytes per index entry instead of 40, because 2 x 4 bytes do not incur extra padding. And that would make a noticeable difference for performance.
Aside: this index is not relevant to the demonstrated queries. Delete it unless it is used elsewhere:
"index_articles_on_published_date" btree (published_date)
Best Answer
You could use a CHECK constraint that validates the scale of the value, rather than enforcing it through the definition of the data type. Note also that the column itself needs to be redefined as just numeric, rather than numeric(19,4). With that in place, an INSERT whose value exceeds the allowed scale fails with a check-constraint violation, while a value within the allowed scale succeeds.
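A minimal sketch of the idea, using the scale() function and illustrative names (the table payments, column amount, and the constraint name are placeholders; the maximum scale of 4 matches the original numeric(19,4)):

CREATE TABLE payments (
    amount numeric NOT NULL,
    CONSTRAINT amount_scale_check CHECK (scale(amount) <= 4)
);

INSERT INTO payments (amount) VALUES (12.3456);   -- succeeds: scale is 4
INSERT INTO payments (amount) VALUES (12.34567);  -- fails the CHECK constraint: scale is 5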
Check out this fiddle to see this solution in action.