SQL Server – ETL: extracting from 200 tables – SSIS data flow or custom T-SQL

data-warehouse, etl, sql-server, sql-server-2005, ssis

Based on my analysis, a complete dimensional model for our data warehouse will require extraction from over 200 source tables. Some of these tables will be extracted as part of an incremental load and others via a full load.

To note, we have about 225 source databases, all with the same schema.

From what I've seen, building a simple data flow in SSIS with an OLE DB source and OLE DB destination requires the columns and data types to be determined at design time. This means that I will eventually end up with over 200 data flows just for the extraction alone.

From a maintainability perspective, this strikes me as a big problem. If I needed to make some kind of sweeping change to the extraction code, I would have to modify 200 different data flows.

As an alternative, I wrote a small script that reads the source databases, table names and columns I want to extract from a set of metadata tables. The code runs in multiple loops and uses dynamic SQL to extract from the source tables via a linked server and OPENQUERY.
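
For illustration, here is a trimmed-down sketch of that metadata-driven loop. The ETL.ExtractMetadata table and its column names are placeholders, not our real objects:

    DECLARE @LinkedServer sysname,
            @SourceDb     sysname,
            @SourceTable  sysname,
            @ColumnList   nvarchar(max),
            @TargetTable  nvarchar(256),
            @sql          nvarchar(max);

    DECLARE extract_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT LinkedServer, SourceDb, SourceTable, ColumnList, TargetTable
        FROM   ETL.ExtractMetadata
        WHERE  IsEnabled = 1;

    OPEN extract_cur;
    FETCH NEXT FROM extract_cur
          INTO @LinkedServer, @SourceDb, @SourceTable, @ColumnList, @TargetTable;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- The query passed to OPENQUERY must be a string literal, so the whole
        -- INSERT ... SELECT is assembled as dynamic SQL and executed per table.
        SET @sql = N'INSERT INTO ' + @TargetTable + N' (' + @ColumnList + N') '
                 + N'SELECT ' + @ColumnList + N' '
                 + N'FROM OPENQUERY(' + QUOTENAME(@LinkedServer) + N', '
                 + N'''SELECT ' + @ColumnList
                 + N' FROM ' + @SourceDb + N'.dbo.' + @SourceTable + N''');';

        EXEC sys.sp_executesql @sql;

        FETCH NEXT FROM extract_cur
              INTO @LinkedServer, @SourceDb, @SourceTable, @ColumnList, @TargetTable;
    END

    CLOSE extract_cur;
    DEALLOCATE extract_cur;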

Based on my tests, this is still not as fast as using an SSIS data flow with an OLE DB source and destination, so I am wondering what alternatives I have. Thoughts so far include:

  1. Using EzAPI to programmatically generate SSIS packages with
    simple data flows. The tables and columns to extract would come from
    the same metadata tables mentioned earlier.
  2. Purchasing third-party software (a dynamic data flow component).

What is the best way to approach this? When it comes to .NET programming I'm a beginner, so the time required to ramp up just with the basics is also a concern.

Best Answer

I wouldn't want to have 200 data flows in a single package. The time it'd take just to open up and validate would make you old before your time.

EzAPI is fun but if you're new to .NET and SSIS, oh hell no, you don't want that. I think you'll spend far more time learning about the SSIS object model and possibly dealing with COM than actually getting work done.

Since I'm lazy, I'll plug BIML as a free option you didn't list. From an answer on SO: https://stackoverflow.com/questions/13809491/generating-several-similar-ssis-packages-file-data-source-to-db/13809604#13809604

  • Biml is an interesting beast. Varigence will be happy to sell you a license to Mist but it's not needed. All you would need is BIDSHelper and then browse through BimlScript and look for a recipe that approximates your needs. Once you have that, click the context sensitive menu button in BIDSHelper and whoosh, it generates packages.

I think it might be an approach for you as well. You define your BIML describing how your packages should behave and then generate them. In the scenario you describe, where you make a change and would otherwise have to fix N packages: nope, you fix your definition of the problem and regenerate the packages.

Or, if you've gained sufficient familiarity with the framework, use something like EzAPI to go fix all the broken stuff. Heck, since you've tagged this as 2005, you could also give PacMan a try if you're in need of making mass modifications to existing packages.

SSIS Design considerations

Generally speaking, I try to make my packages focus on solving a single task (load sales data). If that requires 2 data flows, so be it. What I hate inheriting is a package from the Import/Export Wizard with many unrelated data flows in a single package. Decompose them into packages that each solve a very specific problem. It makes future enhancements less risky because the surface area is reduced. An additional benefit is that I can be working on loading DimProducts while my minion is dealing with the SnowflakeFromHell package.

Then use master package(s) to orchestrate the child workflows. I know you're on 2005, but SQL Server 2012's release of SSIS is the cat's pajamas. I love the project deployment model and the tight integration it allows between packages.

TSQL vs SSIS (my story)

As for the pure TSQL approach: in a previous job, they used a 73-step Agent job for replicating all of their Informix data into SQL Server. It generally took about 9 hours but could stretch to 12 or so. After they bought a new SAN, it went down to a bit over 7 hours. The same logical process, rewritten in SSIS, ran consistently in under 2 hours. Easily the biggest factor in driving down that time was the "free" parallelization we got using SSIS. The Agent job ran all of those tasks serially. The master package basically divided the tables into processing units (5 parallel sets of serialized "replicate table N" tasks), where I tried to make the buckets quasi-equal-sized units of work. This allowed the 60 or so lookup reference tables to be populated quickly, and then the processing slowed down as it got into the "real" tables.
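
To illustrate the bucketing idea (a rough sketch, not the actual logic from that job), something like this deals tables into 5 quasi-equal processing units by approximate row count, using nothing but the catalog views:

    -- Rough sketch: rank tables by approximate row count and deal them
    -- round-robin into 5 buckets, one per parallel stream in the master
    -- package. Round-robin dealing, largest first, is a crude stand-in for
    -- proper bin-packing but spreads the big tables across the streams.
    ;WITH TableSizes AS
    (
        SELECT  t.name      AS TableName,
                SUM(p.rows) AS ApproxRows
        FROM    sys.tables AS t
        JOIN    sys.partitions AS p
            ON  p.object_id = t.object_id
        WHERE   p.index_id IN (0, 1)   -- heap or clustered index rows only
        GROUP BY t.name
    )
    SELECT  TableName,
            ApproxRows,
            (ROW_NUMBER() OVER (ORDER BY ApproxRows DESC) - 1) % 5 + 1
                AS ProcessingUnit
    FROM    TableSizes
    ORDER BY ProcessingUnit, ApproxRows DESC;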

Other pluses for me in using SSIS are that I get "free" configuration, logging, and access to the .NET libraries for square data I need to bash into a round hole. I think it can be easier to maintain (or pass off maintenance of) an SSIS package than a pure TSQL approach, by virtue of the graphical nature of the beast.

As always, your mileage may vary.