There really is an abundance of documentation and tools for MS SQL Server to MySQL migrations - there should be no need to re-invent the wheel. Exporting your database tables' data to CSV just to re-import it into MySQL is a rather bad idea, since you would lose all metadata (data types, constraints, indexes, views, ...).
Look at this question over at DBA.SE for a number of references to migration tools. Also, the MySQL website publishes a white paper on MS SQL migration which has some useful information on how data types and functions can be substituted, and describes how MySQL Workbench can be used for the migration.
While DTS could be used to perform a one-time migration, it probably is not worth the effort - it was created with repeatable procedures in mind (i.e. database synchronization, repeated export tasks etc.). The time needed to create a transformation package would far exceed what a one-off migration justifies.
I wonder why any kind of DB is needed. It sounds like you'd like it if these were just files, but that the number of them and their small size are problematic.
There are two strategies that can help here. The first is to use a few levels of directories to partition the files into smaller clusters.
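As a minimal sketch of the directory-partitioning idea (the hash scheme, shard depth, and paths here are assumptions, not anything prescribed above), a file name can be hashed and the first few hex digits used as nested directory names, so no single directory accumulates millions of entries:

```python
import hashlib
from pathlib import Path

def shard_path(root: str, name: str, levels: int = 2) -> Path:
    """Map a file name to a nested directory based on its hash,
    spreading files evenly across 256**levels leaf directories."""
    digest = hashlib.md5(name.encode()).hexdigest()
    # levels=2 -> root/ab/cd/name, where ab/cd are the first hash bytes
    parts = [digest[i * 2 : i * 2 + 2] for i in range(levels)]
    return Path(root, *parts, name)

p = shard_path("/data/store", "record-000123.dat")
# p is something like /data/store/ab/cd/record-000123.dat
```

The same function is used on both write and read, so no index of file locations needs to be kept.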
Now, some file systems may have inode issues or become very wasteful of space when the files are tiny and the block size is large. Indeed, if you're on Linux it may be worth spending some time looking at the alternative file systems that are available and seeing if one meets your use case well.
A simple way to avoid the issue is, for the last layer, to store the files in a zip archive with zero compression rather than in a leaf directory. This works almost like a directory of files, but sidesteps the two issues above. It effectively gives you what you've been doing by hand, but with a more standard tool-chain.
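A small sketch of the zero-compression archive idea, using Python's standard `zipfile` module (the file names and payloads are made up for illustration):

```python
import zipfile

# Pack a leaf's files into one archive with no compression (ZIP_STORED),
# so many tiny files cost one inode and no per-file block-size waste.
with zipfile.ZipFile("leaf.zip", "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("rec-001.dat", b"payload one")
    zf.writestr("rec-002.dat", b"payload two")

# Random access to a member works much like opening a file in a directory:
with zipfile.ZipFile("leaf.zip") as zf:
    data = zf.read("rec-001.dat")  # b"payload one"
```

Because the members are stored uncompressed, reading one is essentially a seek plus a copy, and standard zip tools can still inspect or repack the archive.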
From what you've described so far, I personally wouldn't be putting this anywhere near any kind of DB, NoSQL or traditional.
If you really want a DB solution, you could look at some form of partitioning across multiple tables to avoid any file size limits, but it wouldn't be my first option!
Best Answer
This is the solution I came up with:
1. Enable `xp_cmdshell`.
2. If needed, create the output directory with `xp_cmdshell` to get the needed permissions.
3. Use BCP with `queryout`.

** your_db must be the fully qualified table name, i.e. [YourDb].[YourSchema].[YourTable]
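A minimal sketch of those steps, assuming sysadmin rights on the instance; the output path, server name, and table are placeholders, not values from the question:

```sql
-- 1. Enable xp_cmdshell (an advanced option; requires sysadmin)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- 2. If needed, create the output directory so the SQL Server
--    service account can write to it
EXEC xp_cmdshell 'mkdir C:\export';

-- 3. Export via BCP with queryout; note the fully qualified table name
EXEC xp_cmdshell 'bcp "SELECT * FROM [YourDb].[YourSchema].[YourTable]" queryout C:\export\table.csv -c -t, -S localhost -T';
```

Here `-c` exports in character format, `-t,` sets a comma field terminator, and `-T` uses a trusted (Windows) connection; adjust these to match the format MySQL's `LOAD DATA INFILE` expects on the import side.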