Every administrator and developer has their own collection of scripts (and common patterns). From time to time they leave one job and move to another. Is there a good, easy-to-use, indexed tool for collecting commonly used SQL scripts (with a structured folder/file system as a default)?
How to manage your own scripts
best-practices scripting
Related Solutions
You cannot use conversation groups to exclude application instances, if that's what you're trying to do. If instance A needs to receive messages from the queue only for instance A, and instance B needs to receive messages only for instance B, then instance A needs to use queue A and instance B needs to use queue B. Conversation groups can be used only when instance A and instance B can both process any message, but you want to prevent them from processing correlated messages concurrently. Every time I've seen someone trying to pre-generate conversation group IDs, the idea was a bad one.
Update
The initiator cannot control the conversation group at the destination (target). Even if A and B were in the same conversation group at the initiator site, this does not mean they would be in the same group at the target site, since the conversation group is a local concept and does not travel with the message. If you want the messages sent by A and B to belong to the same conversation group on the target side, then the dialogs have to be started in the 'reversed' order: the destination begins the dialogs and places them in a single conversation group, then A and B send their messages on those dialogs. So conversations act like an 'invitation' to send messages, and the logical 'target' acts as the actual 'initiator'.
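For illustration, here is how an initiator can place two dialogs in the same local conversation group using the `RELATED_CONVERSATION_GROUP` clause of `BEGIN DIALOG CONVERSATION`. The service and contract names are made up for this sketch; note the grouping still does not propagate to the target:

```sql
-- Hypothetical service/contract names. Group two dialogs on the
-- *initiator* side; the group ID does not travel to the target.
DECLARE @h1 UNIQUEIDENTIFIER, @h2 UNIQUEIDENTIFIER;
DECLARE @group UNIQUEIDENTIFIER = NEWID();

BEGIN DIALOG CONVERSATION @h1
    FROM SERVICE [//Example/InitiatorService]
    TO SERVICE '//Example/TargetService'
    ON CONTRACT [//Example/BidContract]
    WITH RELATED_CONVERSATION_GROUP = @group, ENCRYPTION = OFF;

BEGIN DIALOG CONVERSATION @h2
    FROM SERVICE [//Example/InitiatorService]
    TO SERVICE '//Example/TargetService'
    ON CONTRACT [//Example/BidContract]
    WITH RELATED_CONVERSATION_GROUP = @group, ENCRYPTION = OFF;

-- @h1 and @h2 now share a conversation group locally, but the target
-- will place each incoming dialog in its own, new conversation group.
```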
Update #2.
This is how it would work, in theory: say you have 10,000 products. To do the 'reverse' pattern, when a user wants to participate in the application, he needs to 'join' in. So he starts a conversation with the server and sends an 'I want to join' message (let's call this 'conversation 0'). The server processes this message by starting a conversation with this new user for each product. Now the user has 10,000 conversations on which he can send bids: for product A he uses conversation 1, for product B conversation 2, and so on. When user B wants to join, he also starts a new conversation with the server and sends an 'I want to join' message. The server responds by starting 10,000 conversations with this new user, again one for each product, and it makes sure each one is in the corresponding group for that product, so the conversation for product A with user 1 is in the same group as the conversation for product A with user 2.
Now obviously anyone will figure out that this scheme is flawed: it requires number_of_products × number_of_users conversations, it makes adding and removing products a pain (all those user conversations must be maintained!), and so on. One alternative is to multiplex products per conversation. Say the server starts only 10 conversations with each user, and the user uses the conversation corresponding to the last digit of the product ID (so product 1 goes to conversation 1, but so do products 11 and 101). This is more viable: it requires far fewer conversations and needs no special conversation management when products are added or removed. You may consider it a downside that the server now locks not only all messages for product 1, but also all messages for product 11, for product 101, and so on, but consider this: it only matters if you have more processing threads on the server than the number of conversations per user. If you have 5 threads, then it doesn't matter that you locked 1, 11, and 101; there are still messages to process for the other 4 threads. Only if you had 11 threads would it matter, since the 11th thread would have nothing to process.
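The multiplexing idea can be sketched like this, assuming a hypothetical `UserConversations` bookkeeping table that stores the 10 dialog handles the server handed out to each user:

```sql
-- Hypothetical table, populated when the server started the 10 dialogs:
-- CREATE TABLE UserConversations (
--     UserId INT,
--     Bucket TINYINT,               -- 0..9, last digit of the product ID
--     Handle UNIQUEIDENTIFIER,      -- conversation_handle of that dialog
--     PRIMARY KEY (UserId, Bucket));

DECLARE @UserId INT = 1, @ProductId INT = 101;
DECLARE @Handle UNIQUEIDENTIFIER;

SELECT @Handle = Handle
FROM UserConversations
WHERE UserId = @UserId
  AND Bucket = @ProductId % 10;  -- products 1, 11, 101 all map to bucket 1

-- Message type name and body are illustrative only.
SEND ON CONVERSATION @Handle
    MESSAGE TYPE [//Example/Bid] (N'<bid product="101" amount="42" />');
```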
Now I'm not advocating to deploy exactly this, I'm just pointing out some possibilities. In most cases the CG locking would be per user, not per product, and adding this extra dimension of using CG locking to avoid concurrency problems on each product is a little unorthodox.
And don't forget that the only construct that guarantees order in SSB is a conversation. So if users have to send bids for A on conversation 1 followed by bids for B on conversation 2, then there is no guarantee that the bid for A is going to be processed before the bid for B. The only guarantee is that if user 1 sends two bids for A, they will be processed in the order sent.
Also, if two different users send bids for product A, then there is no guarantee of the order of processing of those bids. However, if the bids for product A end up in the same CG, then there is a guarantee that only one 'processor thread' will see both the bid from user 1 and the bid from user 2. But be careful, because there is no guarantee that the bids are presented in the RECEIVE result set in the order they were received. RECEIVE only guarantees that:
- all messages in the result set are from the same CG
- the messages belonging to a conversation are in order
but the order of conversations in the result set is basically random (it is driven by the conversation_handle ordering, which is a GUID).
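A typical processing loop looks like the sketch below (the queue name is a placeholder): each `WAITFOR (RECEIVE ...)` locks one conversation group and returns messages from that group only.

```sql
-- Each RECEIVE locks a single conversation group and returns
-- messages belonging only to that group.
DECLARE @msgs TABLE (
    conversation_handle UNIQUEIDENTIFIER,
    message_type_name   NVARCHAR(256),
    message_body        VARBINARY(MAX));

WAITFOR (
    RECEIVE conversation_handle, message_type_name, message_body
    FROM TargetQueue   -- placeholder queue name
    INTO @msgs
), TIMEOUT 5000;

-- Within @msgs, rows from the same conversation are in send order,
-- but rows from different conversations of the group come back in
-- conversation_handle (GUID) order, not arrival order.
```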
Before I forget: remember there is also MOVE CONVERSATION,
but relying on MOVE (rather than starting conversations directly in the correct CG) is very very very deadlock prone.
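For completeness, the statement itself is trivial; it is the locking pattern around it under concurrent RECEIVEs that makes it dangerous. Both identifiers below would come from your own bookkeeping:

```sql
DECLARE @Handle UNIQUEIDENTIFIER;   -- an existing dialog
DECLARE @GroupId UNIQUEIDENTIFIER;  -- the conversation group it should join

-- Prefer RELATED_CONVERSATION_GROUP at BEGIN DIALOG time instead;
-- moving dialogs while other sessions hold group locks via RECEIVE
-- is where the deadlocks come from.
MOVE CONVERSATION @Handle TO @GroupId;
```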
Let's run two hypothetical databases (H1 and H2) in parallel, with the same total amount of RAM for bufferpools (R).
- Let H1 have a single bufferpool of size R.
- Let H2 have two bufferpools: one of size I for index pages, the other of size D for data pages. (And D + I == R, of course.)
The question is: how do you determine the right values for I and D to make H2 perform better than H1?
My answer is that you can't, in general. The database engine for H1 has more room for optimizing its cache than H2. If there are times of day when more index pages would lead to better performance, it can discard unused data pages and keep a "mostly index pages" cache. If, at a later time, data pages become hotter, it can evict more index pages and keep a "mostly data pages" cache.
H2 can't do that. Once it has I pages' worth of index pages cached, it can't cache more even if that's what would be best right now. It's stuck with a sub-optimal use of RAM.
The only way for H2 to perform just as well as H1 is if the D/I split initially chosen is ideal and the workload is very stable. That can certainly happen, but I'd wager it is not a very common database scenario. If that's not your case, think of H1 as being the same as H2, but with dynamic partitioning of the cache between data and index pages managed directly by the thing that knows the most about how to optimize its I/O (i.e. the database engine).
This is not to say that maintaining different bufferpools is never a good idea.
One scenario for isolating specific pages in a specific cache I've encountered is having a critical "report" that needed to run fast (obviously) at all times, and happened to use a few tables that were pretty much never used elsewhere. So those pages kept getting evicted, and that "report" had runtimes that varied a lot from run to run. Moving a specific set of tables (and their indexes) to a specific pool removed much of the "non-deterministic" performance part.
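In DB2 terms, that kind of isolation might be sketched as follows; all names and sizes here are made up for illustration:

```sql
-- Dedicated pool for the report's tables, so other workloads
-- cannot evict its pages. Names and sizes are illustrative only.
CREATE BUFFERPOOL BP_REPORT IMMEDIATE SIZE 25000 PAGESIZE 4K;

CREATE TABLESPACE TS_REPORT
    PAGESIZE 4K
    BUFFERPOOL BP_REPORT;

-- Then create (or move) the handful of report-only tables and their
-- indexes in TS_REPORT so their pages live only in BP_REPORT.
```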
But that's sub-optimal for the database as a whole, more of a kludge and nearer Voodoo optimization IMO. (And that wasn't on DB2 but I believe that's irrelevant here.)
So my advice is: if you have X GB of RAM available for cache, use a single bufferpool, put everything in it, and let the database engine do its magic in there.
If you run into a corner case that would appear to benefit from cache segregation, try it out, benchmark it, think about the overhead of having to maintain the magic numbers for each cache size, and go tune the queries, schema or disk layout instead :)
Best Answer
I started to play with RedGate's tool, Script Manager. You can keep your scripts together in a single place, with a UI on top. It may be worth a try :).