I wouldn't want to have 200 data flows in a single package. The time it'd take just to open and validate a package that size would make you old before your time.
EzAPI is fun but if you're new to .NET and SSIS, oh hell no, you don't want that. I think you'll spend far more time learning about the SSIS object model and possibly dealing with COM than actually getting work done.
Since I'm lazy, I'll plug Biml as a free option you didn't list. From an answer on SO: https://stackoverflow.com/questions/13809491/generating-several-similar-ssis-packages-file-data-source-to-db/13809604#13809604
- Biml is an interesting beast. Varigence will be happy to sell you a license to Mist, but it's not needed. All you would need is BIDSHelper; then browse through BimlScript and look for a recipe that approximates your needs. Once you have that, click the context-sensitive menu button in BIDSHelper and whoosh, it generates packages.
I think it might be an approach for you as well. You define Biml that describes how your packages should behave and then generate them. In the scenario you describe, where you make a change and then have to fix N packages: nope, you fix your definition of the problem and regenerate the packages.
Or, if you've gained sufficient familiarity with the framework, use something like EzAPI to go fix all the broken stuff. Heck, since you've tagged this as 2005, you could also give PacMan a try if you need to make mass modifications to existing packages.
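To make that concrete, here's a minimal sketch of the kind of Biml you'd feed to BIDSHelper: one package, one data flow, straight source to destination. The connection strings, package name, and table names are all placeholders I've made up for illustration, not anything from the question.

    <Biml xmlns="http://schemas.varigence.com/biml.xsd">
        <Connections>
            <!-- Placeholder connection strings; point these at your real source and target -->
            <OleDbConnection Name="Source" ConnectionString="Provider=SQLNCLI;Data Source=.;Initial Catalog=SourceDB;Integrated Security=SSPI;" />
            <OleDbConnection Name="Target" ConnectionString="Provider=SQLNCLI;Data Source=.;Initial Catalog=TargetDB;Integrated Security=SSPI;" />
        </Connections>
        <Packages>
            <Package Name="LoadSalesData" ConstraintMode="Linear">
                <Tasks>
                    <Dataflow Name="DFT Load Sales">
                        <Transformations>
                            <OleDbSource Name="SRC Sales" ConnectionName="Source">
                                <ExternalTableInput Table="dbo.Sales" />
                            </OleDbSource>
                            <OleDbDestination Name="DST Sales" ConnectionName="Target">
                                <ExternalTableOutput Table="dbo.Sales" />
                            </OleDbDestination>
                        </Transformations>
                    </Dataflow>
                </Tasks>
            </Package>
        </Packages>
    </Biml>

From there you either copy/paste additional Package elements or graduate to a BimlScript loop over your list of tables, and you regenerate the whole lot whenever the pattern changes.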
SSIS Design considerations
Generally speaking, I try to make my packages focus on solving a single task (load sales data). If that requires two data flows, so be it. What I hate is inheriting a package from the Import/Export Wizard with many unrelated data flows in a single package. Decompose them into something that solves a very specific problem. It makes future enhancements less risky because the surface area is reduced. An additional benefit is that I can be working on loading DimProducts while my minion deals with loading the SnowflakeFromHell package.
Then use master package(s) to orchestrate the child workflows. I know you're on 2005, but SQL Server 2012's release of SSIS is the cat's pajamas. I love the project deployment model and the tight integration it allows between packages.
TSQL vs SSIS (my story)
As for the pure TSQL approach: in a previous job, they used a 73-step Agent job to replicate all of their Informix data into SQL Server. It generally took about 9 hours but could stretch to 12 or so. After they bought a new SAN, it went down to about 7+ hours. The same logical process, rewritten in SSIS, ran consistently in under 2 hours.
Easily the biggest factor in driving down that time was the "free" parallelization we got from SSIS. The Agent job ran all of those tasks serially. The master package basically divided the tables into processing units (5 parallel sets of serialized "replicate table N" tasks), where I tried to divide the buckets into quasi-equal-sized units of work. This allowed the 60 or so lookup/reference tables to get populated quickly, and then the processing slowed down as it got into the "real" tables.
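If it helps to picture the bucketing, here's a rough sketch (my illustration for this answer, not the original job's code) that uses NTILE to deal tables into five quasi-equal buckets by row count; each bucket then becomes one serialized stream under the master package:

    SELECT
        t.name AS table_name
    ,   NTILE(5) OVER (ORDER BY SUM(p.rows) DESC) AS bucket
    FROM sys.tables AS t
        INNER JOIN sys.partitions AS p
            ON p.object_id = t.object_id
            AND p.index_id IN (0, 1)
    GROUP BY t.name;

Dealing tables out by descending size only approximates equal work, which is why the buckets end up quasi-equal rather than perfectly balanced.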
Other pluses for me with SSIS are that I get "free" configuration, logging, and access to the .NET libraries for square data I need to bash into a round hole. I think it can be easier to maintain (or pass off maintenance of) an SSIS package than a pure TSQL approach, by virtue of the graphical nature of the beast.
As always, your mileage may vary.
In my case there was a trigger trying to update one of the rows. The unfortunate thing was that a subquery in the trigger returned more than one row.
As it turned out, the trigger was only relevant to one particular supplier (an IF condition in the trigger). Luckily for me, the previous occupant of my position hadn't reviewed this trigger and it was no longer needed.
I removed the trigger. In other scenarios, however, I would have had to modify the trigger so the subquery returned only one row.
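For context, the failing pattern looked roughly like the sketch below, with the "return one row only" fix after it. All table and column names here are invented for illustration; the real trigger was specific to one supplier.

    -- Hypothetical shape of the offending trigger (names invented):
    CREATE TRIGGER trg_Orders_SyncSupplier ON dbo.Orders
    AFTER UPDATE
    AS
    BEGIN
        -- Raises error 512 the moment the subquery matches more than one supplier row
        UPDATE o
        SET    o.SupplierRef = (SELECT s.Ref
                                FROM   dbo.Suppliers AS s
                                WHERE  s.SupplierName = o.SupplierName)
        FROM   dbo.Orders AS o
               INNER JOIN inserted AS i
                   ON i.OrderID = o.OrderID;
    END
    GO

    -- The "return one row only" alternative: pin the subquery to a single,
    -- deterministic row inside the same trigger body
    UPDATE o
    SET    o.SupplierRef = (SELECT TOP (1) s.Ref
                            FROM   dbo.Suppliers AS s
                            WHERE  s.SupplierName = o.SupplierName
                            ORDER BY s.Ref)
    FROM   dbo.Orders AS o
           INNER JOIN inserted AS i
               ON i.OrderID = o.OrderID;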
Best Answer
Copying and pasting SSIS packages can lead to a few issues. It sounds like you have simple packages - a data flow with a source to destination. Generally those don't have the hang-ups I've run into, but I have seen situations where people were inconsistent in their definitions of data types: Email as varchar(40), varchar(80), varchar(120), nvarchar(256), all in the same database, just in different tables. What can happen is that you first build your package using the 80 size. When you copy/paste and fix your source table, the editor may not pick up on the data size change when you go to the 40 table because it fits within the old sizing. But I've also seen the reverse - it might not pick up the increased size. The resolution isn't horrific: just change your query to
select 1 AS x
, let it set the metadata, and then fix your query back to the proper select. As Dave suggested, this is an ideal entry-level Biml task. In fact, as long as you don't mind not having your explicit source query, my Biml replicate-o-matic is pretty much exactly what you're describing. An earlier incarnation is on SO and I'm linking to it to hedge against link rot.
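As an aside, to make the data type drift concrete, this is the sort of thing I mean (hypothetical tables, invented for illustration):

    -- The "same" column declared four different ways across tables
    CREATE TABLE dbo.Customer   (Email varchar(40));
    CREATE TABLE dbo.Prospect   (Email varchar(80));
    CREATE TABLE dbo.Vendor     (Email varchar(120));
    CREATE TABLE dbo.Subscriber (Email nvarchar(256));

A package built against the varchar(80) flavor and then pasted and repointed at the varchar(40) table can carry the stale 80-byte metadata along without complaint.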
If you like your approach, you can keep your approach. But save yourself the clicking and dragging in your destination and replace it with six clicks: three right, three left. In your Mappings tab, right-click to bring up the context-sensitive menu.
Left click "Select All Mappings"
Right click and the left click "Delete Selected Mappings"
Right click and then Left click "Map Items by Matching Names"