Well, since you have the CSV files, they can serve as a backup to the staging tables, so I see no benefit to keeping them in the primary database (or to maintaining the staging tables after the import is done). If those staging tables can be discarded or truncated once the import is finished, then they really should be separate and have as little impact on the primary database as possible.
I also see no value in putting the staging tables in the primary database in any case, no matter how permanent the loaded data will be. And if you need the data to be permanent, you can always reference it even if it is in a different database - in fact, that is arguably better.
In addition to the cons you've already identified: these are essentially just scratch tables, so even if you're only on a single spindle, why bother disrupting the primary database? There's no penalty in crossing database boundaries on the same instance, e.g. INSERT ... SELECT ... FROM db.dbo.table; compared to INSERT ... SELECT ... FROM dbo.table;
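To make that concrete, here is a minimal sketch of the round trip, assuming a separate StagingDB on the same instance (every database, table, column, and file name below is hypothetical):

    -- Scratch table lives in the staging database, not the primary one.
    USE StagingDB;
    GO
    CREATE TABLE dbo.CustomerStaging
    (
        CustomerID int           NOT NULL,
        Name       nvarchar(100) NOT NULL
    );
    GO
    -- Bulk load the CSV into the scratch table...
    BULK INSERT dbo.CustomerStaging
        FROM 'C:\imports\customers.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
    GO
    -- ...cross the database boundary to load the primary database,
    -- then throw the scratch data away.
    INSERT PrimaryDB.dbo.Customer (CustomerID, Name)
        SELECT CustomerID, Name
          FROM dbo.CustomerStaging;
    TRUNCATE TABLE dbo.CustomerStaging;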
Finally, when you say "autogrowth" I cringe. If you know your database is going to get bigger than it is now, why not manually expand it to the size you need? Autogrow is a performance nightmare when it happens during peak activity - and you will certainly notice it, even on a single machine and even as the only one performing any work: basically all of your data transfer will halt while the data or log file grows. This is particularly true if you've left the autogrow settings at the current, ridiculous defaults, and/or if you don't have instant file initialization enabled. Don't forget to set your log growth settings appropriately too, since log files can't take advantage of IFI.
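As a sketch, pre-sizing the files and setting sane, fixed growth increments looks something like this (the database name, logical file names, and sizes are hypothetical - pick sizes based on your actual import volume):

    -- Grow the files once, up front, instead of repeatedly mid-import.
    ALTER DATABASE StagingDB
        MODIFY FILE (NAME = StagingDB_Data, SIZE = 50GB, FILEGROWTH = 1GB);
    -- Log growths can never use instant file initialization, so size the
    -- log generously and use a fixed increment rather than a percentage.
    ALTER DATABASE StagingDB
        MODIFY FILE (NAME = StagingDB_Log, SIZE = 10GB, FILEGROWTH = 1GB);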
I have always gone with Option #2, and have done so in such a way that there was minimal impact on the system, since it was always assumed that the application would be running continuously, or down for 30 minutes at most. So I would build the new table and slowly migrate the data into it over the course of a week, using a SQL Agent job to insert rows in batches and an AFTER UPDATE, DELETE trigger to keep already-migrated rows in sync.
I have detailed this approach in an article on SQL Server Central: Restructure 100 Million Row (or more) Tables in Seconds. SRSLY!.
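A rough sketch of the moving parts (the table, column, and batch-size choices here are hypothetical; the linked article covers the real details):

    -- The restructured table (all names hypothetical).
    CREATE TABLE dbo.NewTable
    (
        ID         int         NOT NULL PRIMARY KEY,
        SomeColumn varchar(50) NOT NULL
    );
    GO
    -- Keep rows that have already been migrated in sync with the old table.
    CREATE TRIGGER dbo.trg_OldTable_SyncNewTable
    ON dbo.OldTable
    AFTER UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Re-apply updates to rows that have already been copied over.
        UPDATE n
           SET n.SomeColumn = i.SomeColumn
          FROM dbo.NewTable AS n
          INNER JOIN inserted AS i ON i.ID = n.ID;

        -- Remove copied rows that were just deleted from the old table
        -- (on UPDATE, rows appear in both "inserted" and "deleted", so skip those).
        DELETE n
          FROM dbo.NewTable AS n
          INNER JOIN deleted AS d ON d.ID = n.ID
         WHERE NOT EXISTS (SELECT 1 FROM inserted AS i WHERE i.ID = d.ID);
    END;
    GO
    -- SQL Agent job step, run on a schedule: migrate the next small batch.
    INSERT INTO dbo.NewTable (ID, SomeColumn)
    SELECT TOP (5000) o.ID, o.SomeColumn
      FROM dbo.OldTable AS o
     WHERE NOT EXISTS (SELECT 1 FROM dbo.NewTable AS n WHERE n.ID = o.ID)
     ORDER BY o.ID;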
Best Answer
I would definitely err on the side of keeping the tables in the same database. It makes it easier to maintain referential integrity (unless you hard-delete records, in which case foreign keys won't work for you anyway) and to create consistent backups (back up one database and you know it is in a consistent state, instead of taking two or more backups that may be slightly out of sync).
If your concern is spreading the data between different storage pools (for instance, keeping history and audit data on something big and cheap but potentially slow, while keeping active data on something faster but smaller and more expensive), or simply keeping different data on different spindle sets to reduce potential IO contention, you can achieve this within a single database by distributing your tables between files/filegroups instead of having one filegroup contain everything. See http://technet.microsoft.com/en-us/library/ms189126(v=sql.105).aspx (amongst other reference locations on Microsoft's sites) for more details.
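As a minimal sketch of that approach (the database, filegroup, file, and table names are all hypothetical):

    -- Add a filegroup backed by a file on the big/cheap/slow storage.
    ALTER DATABASE SalesDB ADD FILEGROUP HistoryFG;
    ALTER DATABASE SalesDB ADD FILE
    (
        NAME = SalesDB_History1,
        FILENAME = 'E:\SlowStorage\SalesDB_History1.ndf',
        SIZE = 10GB,
        FILEGROWTH = 1GB
    ) TO FILEGROUP HistoryFG;
    GO
    -- Place the history table on that filegroup; active tables stay on PRIMARY.
    CREATE TABLE dbo.OrderHistory
    (
        OrderID   int           NOT NULL PRIMARY KEY,
        OrderDate datetime2(0)  NOT NULL,
        Amount    decimal(12,2) NOT NULL
    ) ON HistoryFG;

Backups can then still be taken as a single, consistent unit, while the IO for the history table lands on the slower pool.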