The most natural way is to restrict the range of the SQL data type with a constraint. You have two options:
- adding a CHECK constraint to the column(s) of your table(s) in question
- creating a domain for each data type you want to tailor to the limitations of your programming language
An example of the first option looks like this:
ALTER TABLE table_name
ADD CONSTRAINT constraint_name
CHECK (column_name BETWEEN -128 AND 127);
(change the table, constraint, and column names as needed, and adjust the bounds of the check; BETWEEN includes both endpoints, which matches the range of a signed 8-bit integer)
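As a self-contained sketch of this approach (the table, column, and constraint names are invented for illustration):

```sql
CREATE TABLE my_table (id NUMERIC, val NUMERIC);

ALTER TABLE my_table
ADD CONSTRAINT val_byte8_check
CHECK (val BETWEEN -128 AND 127);

-- Accepted: 42 is inside the allowed range
INSERT INTO my_table (id, val) VALUES (1, 42);

-- Rejected with a check-constraint violation: 200 is out of range
INSERT INTO my_table (id, val) VALUES (2, 200);
```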
The second option is a two-step process. First, add a new domain to your DB (in this example the domain type is named c_byte8) with the command
CREATE DOMAIN c_byte8 AS NUMERIC CONSTRAINT c_byte8_constraint
CHECK (VALUE BETWEEN -128 AND 127);
Second, create your table using the new domain, i.e. c_byte8 instead of NUMERIC, as the attribute's type. E.g.
CREATE TABLE test_table_1 (id NUMERIC, byte_value c_byte8);
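Assuming the table above, the domain then enforces the range automatically (the exact error text may vary between Postgres versions):

```sql
-- Accepted: 100 lies within the c_byte8 range
INSERT INTO test_table_1 (id, byte_value) VALUES (1, 100);

-- Rejected, e.g.:
-- ERROR:  value for domain c_byte8 violates check constraint "c_byte8_constraint"
INSERT INTO test_table_1 (id, byte_value) VALUES (2, 300);
```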
Please keep in mind that the first solution requires you to add a constraint to every attribute you have defined, or will define in the future. It also makes your DB creation script less readable, because the same constraint condition is repeated every time you use the restricted range. A second downside of this approach is that you can easily miss some columns of your schema when adding those constraints to an existing schema.
An upside of the first solution is that you can extend an existing schema without much hassle.
As for the second approach, the plus is a more readable DB creation script, and likewise more readable table definitions in a running production DB; changing an existing production DB, however, is more complicated.
There is no syntax variant that lets you update the whole row at once. However, there is a shorter form than what you have so far.
Also, you do not actually want to update all columns: the WHERE condition on id pins down at least one column (id) to remain unchanged. But that's just nitpicking.
UPDATE table_a a
SET ( c1, c2, ...)
= (b.c1, b.c2, ...)
FROM table_b b
WHERE a.id = b.id;
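With concrete column names filled in (name, status, and updated_at are placeholders for your actual columns), the short form reads:

```sql
-- Copy the listed columns from table_b into the matching rows of table_a
UPDATE table_a a
SET   (name, status, updated_at)
    = (b.name, b.status, b.updated_at)
FROM  table_b b
WHERE a.id = b.id;
```

The multi-column assignment saves repeating a separate col = b.col pair for every column.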
More details in this related answer:
Bulk update of all columns
DELETE / INSERT
Internally, due to the MVCC model of Postgres, every UPDATE effectively inserts a new row version anyway and marks the old one as obsolete. So, behind the curtains, there is not much difference between an UPDATE and a DELETE plus INSERT.
There are some details in favor of the UPDATE route:
- HOT updates: if no indexed column changes and there is enough free space on the same page, the new row version can be written without creating new index entries.
- TOAST tables: if you have large columns, the content may be stored "out-of-line" in TOAST tables, and the new row version can link to the same entry in the TOAST table as long as the toasted columns remain unchanged.
- Index maintenance may be cheaper for updates.
Otherwise, locking should be about the same. You need an exclusive lock on affected rows either way. Just make it quick.
If you are dealing with a huge number of rows and you don't need a consistent state (all rows or none), you can split the operation into multiple batches (separate transactions!). This increases the total cost, but keeps the lock time per row shorter.
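A minimal batching sketch, assuming a numeric id column; the column names and range boundaries are made up, and each batch runs in its own transaction:

```sql
-- Batch 1: its own transaction, locks only rows with id up to 100000
BEGIN;
UPDATE table_a a
SET    (c1, c2) = (b.c1, b.c2)
FROM   table_b b
WHERE  a.id = b.id
AND    a.id BETWEEN 1 AND 100000;
COMMIT;

-- Batch 2: the next id range, and so on until all rows are covered
BEGIN;
UPDATE table_a a
SET    (c1, c2) = (b.c1, b.c2)
FROM   table_b b
WHERE  a.id = b.id
AND    a.id BETWEEN 100001 AND 200000;
COMMIT;
```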
Best Answer
In psql, you'd run something like this, which will output a bunch of commands. You can spot-check them and, if you're good to go, execute them with \gexec.
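The generating query itself is not shown above; purely as an illustration, a query that emits one ALTER TABLE ... ADD CONSTRAINT command per listed column (all table and column names here are placeholders) could look like:

```sql
-- Each output row is one SQL command; inspect them, then run \gexec
SELECT format(
    'ALTER TABLE %I ADD CONSTRAINT %I CHECK (%I BETWEEN -128 AND 127);',
    table_name, column_name || '_byte_range', column_name
)
FROM (VALUES ('my_table', 'val1'),
             ('my_table', 'val2')) AS t(table_name, column_name);
```

In psql, \gexec then executes every cell of that result set as an SQL command.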