No, the interval type supports reduced precision, but none of the other date/time types do.
Postgres allows you to roll your own with create type,
but unfortunately won't allow constraints to be added to the type, which limits its usefulness in this scenario. The best I can come up with requires you to repeat the check constraints on every field where the fuzzy
type is used:
create type preciseness as enum ('day', 'month', 'year');
create type fuzzytimestamptz as (ts timestamptz, p preciseness);

create table t (
    id serial primary key,
    fuzzy fuzzytimestamptz
        check ( (fuzzy).ts is not null
                or ((fuzzy).ts is null and (fuzzy).p is not null) ),
    check ((fuzzy).ts = date_trunc('year',  (fuzzy).ts) or (fuzzy).p < 'year'),
    check ((fuzzy).ts = date_trunc('month', (fuzzy).ts) or (fuzzy).p < 'month'),
    check ((fuzzy).ts = date_trunc('day',   (fuzzy).ts) or (fuzzy).p < 'day')
);
insert into t(fuzzy) values (row(date_trunc('year', current_timestamp), 'year'));
insert into t(fuzzy) values (row(date_trunc('month', current_timestamp), 'month'));
insert into t(fuzzy) values (row(date_trunc('day', current_timestamp), 'day'));
select * from t;
id | fuzzy
----+----------------------------------
1 | ("2011-01-01 00:00:00+00",year)
2 | ("2011-09-01 00:00:00+01",month)
3 | ("2011-09-23 00:00:00+01",day)
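To see the constraints at work, a value whose timestamp isn't truncated to its claimed precision should be rejected (a sketch; the literal is made up and the exact error text depends on the generated constraint names):

```sql
-- This should fail the 'day' check: the time-of-day part is not zeroed out.
insert into t(fuzzy) values (row(timestamptz '2011-09-23 12:00:00+01', 'day'));
-- ERROR:  new row for relation "t" violates check constraint ...
```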
--edit - an example equality operator:
create function fuzzytimestamptz_equality(fuzzytimestamptz, fuzzytimestamptz)
returns boolean language plpgsql immutable as $$
begin
    return ($1.ts, $1.ts + coalesce('1 ' || $1.p, '0')::interval)
           overlaps
           ($2.ts, $2.ts + coalesce('1 ' || $2.p, '0')::interval);
end;
$$;
--
create operator = ( procedure=fuzzytimestamptz_equality,
leftarg=fuzzytimestamptz,
rightarg=fuzzytimestamptz );
sample query:
select *, fuzzy=row(statement_timestamp(), null)::fuzzytimestamptz as equals_now,
fuzzy=row(statement_timestamp()+'1 day'::interval, null)::fuzzytimestamptz as equals_tomorrow,
fuzzy=row(date_trunc('month', statement_timestamp()), 'month')::fuzzytimestamptz as equals_fuzzymonth,
fuzzy=row(date_trunc('month', statement_timestamp()+'1 month'::interval), 'month')::fuzzytimestamptz as equals_fuzzynextmonth
from t;
id | fuzzy | equals_now | equals_tomorrow | equals_fuzzymonth | equals_fuzzynextmonth
----+------------------------------------+------------+-----------------+-------------------+-----------------------
1 | ("2011-01-01 00:00:00+00",year) | t | t | t | t
2 | ("2011-09-01 00:00:00+01",month) | t | t | t | f
3 | ("2011-09-24 00:00:00+01",day) | t | f | t | f
4 | ("2011-09-24 11:45:23.810589+01",) | f | f | t | f
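In effect, each fuzzy value is treated as the half-open range [ts, ts + '1 <precision>'), and two values compare "equal" when those ranges overlap (note this equality is therefore not transitive). A direct comparison, with hypothetical literals:

```sql
-- 'month' expands to [2011-09-01, 2011-10-01) and 'day' to [2011-09-23, 2011-09-24),
-- so the day lies inside the month and the two values compare equal:
select row(timestamptz '2011-09-01 00:00:00+01', 'month')::fuzzytimestamptz
     = row(timestamptz '2011-09-23 00:00:00+01', 'day')::fuzzytimestamptz;
```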
I can't advocate strongly enough against using a single field for this.
I'm currently dealing with maintaining a very large dataset with a bigint
bitmask field and it's a bit of a performance nightmare.
If you check a single bit it's fine. If you check more than one bit performance degrades very quickly.
Due to the nature of bitmask integers, the data distribution will be very imbalanced and you'll get suboptimal plans.
Multiple bit checks result in range or index scans with a function running against every row. It's a mess.
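To make that concrete, here is the kind of predicate I mean (hypothetical table and bit positions):

```sql
-- Hypothetical bigint bitmask column "flags" on a table "events".
-- A single-bit test can sometimes be served by a filtered or computed-column index:
select * from events where flags & 1 = 1;
-- Testing two bits forces the AND to be evaluated against every candidate row;
-- no ordinary index on "flags" makes this predicate seekable:
select * from events where flags & 5 = 5;   -- bits 0 and 2 both set
```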
My workaround was simple - I made a table to store the PK for each of the conditions to be checked. This is counter-intuitive initially, but the space needed is low (you only store the PK) and lookups are lightning fast, especially if you use a UNIQUE CLUSTERED INDEX.
You can add as many conditions as you want without affecting your main table, and updates also don't affect your main table.
Indexing is simple since you just index all the lookup tables individually, and since your clustered key is the same on your main table and the lookups, all your evaluations are merge joins, which are very efficient.
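A minimal sketch of that layout (table and condition names are made up):

```sql
-- One narrow table per condition, holding only the PKs of rows that satisfy it.
create table main_data (
    id bigint not null primary key
    -- , ...payload columns...
);
create table cond_is_flagged (
    id bigint not null primary key references main_data (id)
);

-- Membership checks become joins on identically ordered keys,
-- which the planner can satisfy with a merge join:
select m.*
from main_data m
join cond_is_flagged c on c.id = m.id;
```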
Best Answer
For sure you don't want to use a varchar: you are trying to store a number, so storing it in a string makes no sense. Besides, regardless of what you are going to do with it later, varchar is the type that gives you the least flexibility: there's really no point in using it.
You want to avoid timestamp, too, as it is deprecated and in any case intended for a different use (it's just a synonym for rowversion).
Your choice is then between bigint and datetime2(5), but given your requested precision of 5 they will both use 8 bytes, so there's no clear winner here. The only edge datetime2 may have is that, being a datetime, it's better handled by the datetime functions. Bigint, on the other hand, will let you represent negative years (in case you need that).
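A quick way to see the storage tie (T-SQL sketch; the literal values and the epoch encoding for the bigint are made up):

```sql
declare @dt datetime2(5) = '2011-09-24 11:45:23.81058';
declare @n  bigint       = 1316861123810580;   -- e.g. microseconds since some chosen epoch
select datalength(@dt) as datetime2_bytes,     -- 8 at precision 5-7
       datalength(@n)  as bigint_bytes;        -- 8
```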