Postgresql – Amount of time FULL VACUUM/VACUUM

postgresql vacuum

I have a table "lng_grid_test", which, at the beginning, had around 600,000,000 rows. The table is defined as:

-- Table: public.lng_grid_test

-- DROP TABLE public.lng_grid_test;

CREATE TABLE public.lng_grid_test
(
  gid integer NOT NULL DEFAULT nextval('lng_grid_test_gid_seq'::regclass),
  geom geometry(LineString,4326),
  CONSTRAINT lng_grid_test_pkey PRIMARY KEY (gid)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.lng_grid_test
  OWNER TO postgres;

-- Index: public.lng_grid_test_idx

-- DROP INDEX public.lng_grid_test_idx;

CREATE INDEX lng_grid_test_idx
  ON public.lng_grid_test
  USING gist
  (geom);

I applied a delete request to this table, reducing the row count to around 490,000,000.
Before processing any further, I thought it would be appropriate to execute a vacuum.
So I am currently running a vacuum:

INFO:  vacuuming "public.lng_grid_test"
INFO:  "lng_grid_test": 3588177 removable, 485535819 nonremovable row versions
in 5042516 pages
DETAIL:  0 dead row versions cannot yet be removed.
CPU 106.20s/230.72u sec elapsed 402.26 sec.

Do you think I should stop the full vacuum process and just choose a plain vacuum with the ANALYZE and FREEZE options, or should I keep the vacuum running? And if you think I should keep it running, what order of magnitude of time might it take to finish?
I can't work on the table in the meantime, and I have no idea how long the vacuum will take to finish.
Thank you!

Best Answer

You should never stop a vacuum process, ever.
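While you wait, you can check how far the vacuum has gotten instead of guessing. A minimal sketch, assuming PostgreSQL 9.6 or later (the `pg_stat_progress_vacuum` view does not exist in earlier versions, and it only reports plain `VACUUM`, not `VACUUM FULL`):

```sql
-- Progress of any running (non-FULL) VACUUM: which phase it is in,
-- and how many heap blocks have been scanned vs. the total.
SELECT pid,
       relid::regclass AS table_name,
       phase,
       heap_blks_scanned,
       heap_blks_total
FROM pg_stat_progress_vacuum;
```

Dividing `heap_blks_scanned` by `heap_blks_total` gives a rough fraction of the scan phase completed, though later phases (index vacuuming, truncation) add time on top of that.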

A vacuum full will completely rewrite the table, so it should take roughly as long to run as it would to create and populate a brand-new table with all 490,000,000 remaining rows and then index it.
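For reference, the lighter alternative mentioned in the question would look like the sketch below. A plain (non-FULL) vacuum only marks dead row versions as reusable and updates planner statistics; it does not rewrite the table or return the space to the operating system, but it also does not hold an exclusive lock, so the table stays usable while it runs:

```sql
-- Plain vacuum: reclaim dead tuples for reuse and refresh statistics.
VACUUM (VERBOSE, ANALYZE) public.lng_grid_test;

-- Same, but also aggressively freeze old tuples in the one pass.
VACUUM (FREEZE, VERBOSE, ANALYZE) public.lng_grid_test;
```

The parenthesized option syntax assumes PostgreSQL 9.0 or later; on older servers the options are written without parentheses (`VACUUM FREEZE VERBOSE ANALYZE ...`).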