Postgresql – Best practices for storing multiple SQL tables per user (built from uploaded csv file?)

csv | database-design | postgresql

I have a few users who will be uploading a few types of predefined CSV files. I need to store these as SQL tables so I can run queries on the data for a particular user, but that means there will potentially be a large number of tables.

Right now I'm storing the CSVs in an S3 bucket under the key username/filename/timestamp-filename.csv

What would be the best way to transform it into SQL? I have the CSV parsing done using Node's fast-csv; it's just the schema that's baffling me.

I was thinking of table names like data-username-filename, so each user would have as many tables as there are CSV categories (about 10 tables each). Would it be best to store these in a separate database, or in the same database, differentiated by table name/prefix?

This is a backoffice app with only a few users.

Best Answer

One option is to use a different schema per user, and then name the tables within each schema using whatever convention you want.

CREATE TABLE "<user_id>"."<filename>" ...

(Note that PostgreSQL identifiers are quoted with double quotes, not single quotes, and the schema must be created first with CREATE SCHEMA.)
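As a minimal sketch of that layout, assuming a hypothetical user "alice" and a hypothetical CSV category "orders" (the column names here are placeholders, since the actual predefined CSV layouts aren't specified):

```sql
-- One schema per user keeps each user's tables isolated
-- while staying in a single database.
CREATE SCHEMA IF NOT EXISTS "alice";

-- One table per CSV category; columns mirror that category's layout.
CREATE TABLE "alice"."orders" (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    uploaded_at timestamptz NOT NULL DEFAULT now(),
    -- placeholder columns standing in for the real CSV fields:
    order_ref   text NOT NULL,
    amount      numeric(12, 2)
);

-- Queries for a given user can then target their schema directly,
-- or via search_path:
SET search_path TO "alice";
SELECT count(*) FROM orders;
```

This also makes per-user cleanup simple (`DROP SCHEMA "alice" CASCADE`) and lets you grant each back-office user access only to their own schema if needed.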