One way to do this would be object types; in this case the type would be analogous to your #t1. It would need to be defined somewhere, but it would not need to be global: it could be per-schema, or even per-procedure. First, we can create a type:
SQL> create or replace type t1_type as object (x int, y int, z int)
2 /
Type created.
SQL> create or replace type t1 as table of t1_type
2 /
Type created.
Now set up some sample data:
SQL> create table xy (x int, y int)
2 /
Table created.
SQL> insert into xy values (1, 2)
2 /
1 row created.
SQL> insert into xy values (3, 4)
2 /
1 row created.
SQL> commit
2 /
Commit complete.
And create a function over this data returning our "temporary" type:
SQL> create or replace function fn_t1 return t1 as
2 v_t1 t1 := t1(); -- empty temporary table (really an array)
3 v_ix number default 0; -- array index
4 begin
5 for r in (select * from xy) loop
6 v_ix := v_ix + 1;
7 v_t1.extend;
8 v_t1(v_ix) := t1_type(r.x, r.y, (r.x + r.y));
9 end loop;
10 return v_t1;
11 end;
12 /
Function created.
And finally:
SQL> select * from the (select cast (fn_t1 as t1) from dual)
2 /
X Y Z
---------- ---------- ----------
1 2 3
3 4 7
As you can see, this is pretty clunky (and it uses collection pseudo-functions, an obscure feature at the best of times!). As I always say, porting from DB to DB is not merely about syntax and keywords in their SQL dialects; the real difficulty comes from different underlying assumptions (in the case of SQL Server, that cursors are expensive and their use is avoided or worked around at all costs).
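For what it's worth, on more recent Oracle versions the TABLE() collection expression is a somewhat less obscure way to query the function's result; a sketch, reusing the fn_t1 function defined above:

```sql
-- Assuming a reasonably recent Oracle version: the TABLE() collection
-- expression replaces the THE(...CAST...) pseudo-function syntax and
-- returns the same rows as the query above.
select * from table(fn_t1);
```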
We recently had this discussion at my workplace.
First, I want to commend you for doing the "right thing" by using HASHBYTES() over CHECKSUM() to detect changes. Microsoft specifically cautions against using CHECKSUM(@input) for this purpose, as its collision rate is very high compared to that of HASHBYTES('SHA1', @input). One advantage CHECKSUM() does have, though, is that there is no (obvious) restriction on the size of its input.
Second, if you use HASHBYTES() I recommend SHA1 as your hash algorithm. Of the available options, SHA1 has the lowest collision rate, and speed is not a concern for your use case.
Finally, to use HASHBYTES() against inputs larger than 8000 bytes you'll have to:
- Split your input into 8000-byte chunks.
- Hash each chunk.
- Combine the resulting hashes somehow and hash them to get your final output.
You can combine the chunk hashes in one of two ways:
- Convert your hash outputs into strings, concatenate them, and hash the result.
- Stick all your hash outputs into a memory table and take their aggregate checksum using CHECKSUM_AGG().

Encapsulate this work as a function that takes NVARCHAR(MAX) as its input.
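A minimal sketch of the first option (string concatenation), assuming SHA1 and a pre-2016 SQL Server (SQL Server 2016 removed the 8000-byte limit on HASHBYTES input); the function name and chunk size are illustrative, not from the original answer:

```sql
-- Sketch: hash an NVARCHAR(MAX) input in 8000-byte (4000-character)
-- chunks, concatenate the hex strings of each chunk's hash, and hash
-- the concatenation. 'dbo.HashBytesMax' is a hypothetical name.
CREATE FUNCTION dbo.HashBytesMax (@input NVARCHAR(MAX))
RETURNS VARBINARY(20)
AS
BEGIN
    DECLARE @combined NVARCHAR(MAX) = N'';
    DECLARE @pos INT = 1;
    DECLARE @len INT = DATALENGTH(@input) / 2;  -- NVARCHAR: 2 bytes per character
    WHILE @pos <= @len
    BEGIN
        -- 4000 characters = 8000 bytes, the old HASHBYTES input limit;
        -- style 2 converts the binary hash to a hex string with no 0x prefix
        SET @combined = @combined
            + CONVERT(NVARCHAR(40),
                      HASHBYTES('SHA1', SUBSTRING(@input, @pos, 4000)), 2);
        SET @pos = @pos + 4000;
    END;
    RETURN HASHBYTES('SHA1', @combined);
END;
```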
All that said, it is all-around simpler to just compare the proc definitions directly using OBJECT_DEFINITION(), as gbn suggested, or to simply push all definitions out everywhere as often as you like, as Mike suggested.
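That direct comparison can be as simple as the following sketch (the procedure name is a placeholder):

```sql
-- Run this on each server and compare the results; any difference
-- means the definitions have diverged. 'dbo.usp_Example' is a
-- hypothetical procedure name.
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.usp_Example')) AS definition;
```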
I wonder what kind of environment would significantly benefit from a process that deployed only changed procedures and used hashes to avoid copying around and comparing full definitions. You'd need to have a lot of procedures to keep in sync.
Best Answer
This could be done using the sys.objects catalog view and the OBJECT_DEFINITION() function.
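The query itself is not reproduced in this answer; a minimal sketch of what it might look like (the names 'olddb' and 'newdb' are placeholders):

```sql
-- Hypothetical reconstruction: pull each view's definition and produce
-- an UPDATED column with the database name replaced.
SELECT o.name,
       OBJECT_DEFINITION(o.object_id) AS original_definition,
       REPLACE(OBJECT_DEFINITION(o.object_id), 'olddb', 'newdb') AS UPDATED
FROM sys.objects AS o
WHERE o.type = 'V';  -- limit to views; add further filters here as needed
```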
The 'UPDATED' column from this query can be used to do a quick find-and-replace on whatever database or table name might appear in the view definition. Make sure you cover every possible form of the text you are searching for; for instance, a database called 'db' might be written as [db] or db. Once you have what you want from the query, copy and paste it into a query window and execute the whole thing. Be sure to back up the database beforehand, and carefully proofread the output text so you can verify that the replace worked correctly and did not accidentally change more than you intended.
By the way, the answer WEI_DBA gave is perfectly acceptable as well; it's just a different approach to the same thing. In his example, the GUI will aid in getting all of the object definitions into the query window, and then you would use Find/Replace (Ctrl+H) to replace the database name. To each his own in this case.
I don't know that there's an advantage to either approach, except that mine lets you limit the query to certain objects (views, in this case), with additional filters added to the WHERE clause if needed. Generate Scripts will let you filter to only views, stored procedures, and so on, and then individually select objects, but any further filtering is limited to whatever the UI can do.