You cannot use conversation groups to exclude application instances, if that's what you're trying to do. If instance A needs to receive only messages meant for instance A and instance B only messages meant for instance B, then instance A needs to use queue A and instance B needs to use queue B. Conversation groups can be used only when instance A and instance B can both process any message, but you want to prevent them from processing correlated messages concurrently. Every time I've seen someone try to pre-generate conversation group IDs, the idea was bad.
Update
The initiator cannot control the conversation group at the destination (target). Even if A and B were in the same conversation group on the initiator side, they would not necessarily be in the same group on the target side, because the conversation group is a local concept and does not travel with the message. If you want the messages sent by A and B to belong to the same conversation group on the target side, then the dialogs have to be started in the 'reversed' order: the destination begins the dialogs and places them in a single conversation group, and the senders then send their messages on those dialogs. The conversation acts like an 'invitation' to send messages, and the logical 'target' acts as the actual 'initiator'.
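As a concrete sketch of this 'reversed' pattern: the logical target begins the dialogs itself and pins them to a conversation group of its choosing via `RELATED_CONVERSATION_GROUP`. The service and contract names below are hypothetical, assumed only for illustration.

```sql
-- The logical *target* acts as the SSB initiator: it starts both dialogs
-- and places them in the same (local) conversation group.
DECLARE @h1 UNIQUEIDENTIFIER, @h2 UNIQUEIDENTIFIER;
DECLARE @group UNIQUEIDENTIFIER = NEWID();

BEGIN DIALOG CONVERSATION @h1
    FROM SERVICE [ServerService]        -- hypothetical names
    TO SERVICE 'UserAService'
    ON CONTRACT [BidContract]
    WITH RELATED_CONVERSATION_GROUP = @group, ENCRYPTION = OFF;

-- a second dialog, to another sender, placed in the SAME group:
BEGIN DIALOG CONVERSATION @h2
    FROM SERVICE [ServerService]
    TO SERVICE 'UserBService'
    ON CONTRACT [BidContract]
    WITH RELATED_CONVERSATION_GROUP = @group, ENCRYPTION = OFF;
```

Whatever A and B now send back on `@h1` and `@h2` will correlate into `@group` on this side, because the group was chosen here, locally.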
Update #2
This is how it would work, in theory: say you have 10000 products. To do the 'reverse' pattern, when a user wants to participate in the application it needs to 'join' in. So the user starts a conversation with the server and sends an 'I want to join' message (let's call this 'conversation 0'). The server processes this message by starting a conversation with this new user for each product. Now the user has 10000 conversations on which it can send bids. For product A it uses conversation 1, for product B conversation 2, and so on. When user B wants to join in, it also starts a new conversation with the server and sends an 'I want to join' message. The server responds by starting 10000 conversations with this new user, again one for each product, and it makes sure each one is in the corresponding group for the product, so the conversation for product A with user 1 is in the same group as the conversation for product A with user 2.
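The server side of that join handshake could be sketched like this. The queue, table, service, contract, and message type names are all assumptions for illustration; `ProductGroup(product_id, group_id)` is a hypothetical table mapping each product to its conversation group:

```sql
-- Sketch: server reacts to a 'JoinRequest' by opening one dialog per
-- product toward the new user, each pinned to that product's group.
DECLARE @joinHandle UNIQUEIDENTIFIER, @msgType SYSNAME;

WAITFOR (RECEIVE TOP (1)
            @joinHandle = conversation_handle,
            @msgType = message_type_name
         FROM ServerQueue), TIMEOUT 5000;

IF @msgType = N'JoinRequest'   -- hypothetical message type
BEGIN
    DECLARE @productId INT, @groupId UNIQUEIDENTIFIER, @h UNIQUEIDENTIFIER;
    DECLARE products CURSOR LOCAL FAST_FORWARD FOR
        SELECT product_id, group_id FROM ProductGroup;
    OPEN products;
    FETCH NEXT FROM products INTO @productId, @groupId;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- each product's dialog joins that product's conversation group,
        -- so bids from every user on one product correlate on the server
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE [ServerService]
            TO SERVICE 'UserService'
            ON CONTRACT [BidContract]
            WITH RELATED_CONVERSATION_GROUP = @groupId, ENCRYPTION = OFF;
        FETCH NEXT FROM products INTO @productId, @groupId;
    END
    CLOSE products;
    DEALLOCATE products;
END
```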
Now obviously anyone will figure out that this scheme is flawed: it requires number_of_products × number_of_users conversations, it makes adding and removing products a pain (you must maintain all those user conversations!), and so on. One alternative is to multiplex products per conversation. Say the server only starts 10 conversations with each user and the user uses the conversation corresponding to the last digit of the product id (so product 1 goes to conversation 1, but so do products 11 and 101). This is more viable: it requires far fewer conversations and no special conversation management when products are added or removed. You may consider it a downside that the server now locks not only all messages for product 1, but also all messages for product 11, for 101, etc. But consider this: it only matters if you have more processing threads on the server than the number of conversations per user. If you have 5 threads, then it doesn't matter that 1, 11, and 101 are locked together; there are still messages for the other 4 threads to process. Only with 11 threads would it matter, since the 11th thread would have nothing to process.
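Client-side, the multiplexing is a one-line modulo. `UserDialogs(slot, handle)` is a hypothetical table the user filled in when the server opened its 10 dialogs, and the `[Bid]` message type is assumed:

```sql
-- Sketch: pick one of the 10 server-initiated dialogs by the last
-- digit of the product id. Products 1, 11, and 101 all share slot 1.
DECLARE @productId INT = 101, @h UNIQUEIDENTIFIER;

SELECT @h = handle
FROM UserDialogs
WHERE slot = @productId % 10;

SEND ON CONVERSATION @h
    MESSAGE TYPE [Bid] (CAST(@productId AS VARBINARY(4)));
```

Adding product 10001 needs no new conversations; it simply lands in slot 1 like every other product ending in 1.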
Now I'm not advocating to deploy exactly this, I'm just pointing out some possibilities. In most cases the CG locking would be per user, not per product, and adding this extra dimension of using CG locking to avoid concurrency problems on each product is a little unorthodox.
And don't forget that the only construct that guarantees order in SSB is a conversation. So if a user sends a bid for A on conversation 1 followed by a bid for B on conversation 2, there is no guarantee that the bid for A will be processed before the bid for B. The only guarantee is that if user 1 sends two bids for A, they will be processed in the order they were sent.
Also, if two different users send bids for product A, there is no guarantee of the order in which these bids are processed. However, if the bids for product A end up in the same CG, then there is a guarantee that only one 'processor thread' will see both the bid from user 1 and the bid from user 2. But be careful: there is no guarantee that the bids appear in the RECEIVE result set in the order they were received. RECEIVE only guarantees that:
- all messages in the result set are from the same CG
- the messages belonging to a conversation are in order
but the order of conversations in the result set is essentially random (it is driven by the conversation_handle ordering, which is a GUID).
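A processing loop that respects these guarantees might look like the sketch below. The `ServerQueue` name is an assumption; `queuing_order` is the RECEIVE column that gives the per-conversation ordering:

```sql
-- Sketch: dequeue one conversation group's pending messages in a batch.
-- Every row in a single RECEIVE result set comes from the same group;
-- rows of one conversation are ordered, but conversations are not
-- ordered relative to each other.
DECLARE @msgs TABLE (
    conversation_handle UNIQUEIDENTIFIER,
    message_type_name   SYSNAME,
    message_body        VARBINARY(MAX),
    queuing_order       BIGINT);

WAITFOR (RECEIVE conversation_handle, message_type_name,
                 message_body, queuing_order
         FROM ServerQueue
         INTO @msgs), TIMEOUT 5000;

-- Process per conversation, in queuing_order; never assume anything
-- about the relative order of different conversations in the batch.
SELECT * FROM @msgs
ORDER BY conversation_handle, queuing_order;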
Before I forget: remember there is also MOVE CONVERSATION,
but relying on MOVE (rather than starting conversations directly in the correct CG) is very, very deadlock prone.
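For reference, the verb itself is a one-liner; the handle and target group here are pulled from `sys.conversation_endpoints` just to make the sketch self-contained (in real code they would come from your own bookkeeping):

```sql
-- Syntax sketch only: moving an existing dialog into another group.
-- Prefer RELATED_CONVERSATION_GROUP at BEGIN DIALOG time instead.
DECLARE @handle UNIQUEIDENTIFIER, @targetGroup UNIQUEIDENTIFIER;

SELECT TOP (1) @handle = conversation_handle,
               @targetGroup = conversation_group_id
FROM sys.conversation_endpoints;

MOVE CONVERSATION @handle TO @targetGroup;
```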
Contention will be the biggest problem. A conversation guarantees order, but for that guarantee to be given, the SEND verb must lock the conversation handle used until the end of the transaction (or for the duration of the statement if no explicit transaction is used). In effect that means only one transaction can send a bid on product A at any given time. Whether this is acceptable depends entirely on your business requirements.
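To see the contention in action: in the sketch below (handle pulled from `sys.conversation_endpoints` for self-containment, `[Bid]` message type assumed), the conversation stays locked from the SEND until the COMMIT, so a second session sending on the same conversation blocks for the whole window:

```sql
-- Sketch: the SEND takes a lock on the conversation that is held
-- until COMMIT; a concurrent transaction sending on the same dialog
-- will block until this transaction finishes.
BEGIN TRANSACTION;

DECLARE @h UNIQUEIDENTIFIER;
SELECT TOP (1) @h = conversation_handle FROM sys.conversation_endpoints;

SEND ON CONVERSATION @h MESSAGE TYPE [Bid] (0x01);
-- ...any other work done here extends the time the dialog is locked...

COMMIT;
```

This is why keeping send transactions short matters: everything between the SEND and the COMMIT serializes all other senders on that conversation.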
The terms: a conversation has two endpoints, the initiator and the target. Each conversation has a unique ID, the conversation_id (a GUID), which has the same value on both endpoints (i.e. it travels on the wire with the messages). Each endpoint has its own handle, the conversation_handle (again, a GUID), which is what you use with the SSB verbs. The handle is local only and never travels with the message. So you got it right in your update ;)
I think what you describe in Update #2 is OK, but, as always, the devil is in the details. I suggest you start experimenting with prototype code and exchanging real messages; you will get a much better feel for it once you see it working.
Best Answer
I did exactly this in a project back in 2008-2009: a web service (billing) that needed to handle 1M+ calls per day. I used a SQL Server table as a queue, and the lessons from that project I distilled into the article Using Tables as Queues. Stick to the rules I lay out there, and especially don't try to add any bells and whistles to your table; use it exclusively as a queue. Under load I found that a critical issue was batching the dequeue (dequeue 100 web service call requests in one DB operation) and handling the dispatch in the app (place the dequeued requests into an in-memory list and have the web call handlers pick up work from this list). Doing the web service call async is critical. Also, you must read about ServicePointManager.DefaultConnectionLimit.
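The batch dequeue can be sketched as below. Table and column names are assumptions for illustration; the shape follows the usual table-as-queue rules: a clustered key in dequeue order, READPAST to skip rows locked by other dequeuers, and DELETE ... OUTPUT to dequeue atomically in one statement:

```sql
-- Hypothetical queue table: clustered key defines the dequeue order.
CREATE TABLE dbo.WorkQueue (
    id      BIGINT IDENTITY PRIMARY KEY CLUSTERED,
    payload VARBINARY(MAX) NOT NULL);

-- Dequeue up to 100 requests in a single atomic operation:
WITH cte AS (
    SELECT TOP (100) id, payload
    FROM dbo.WorkQueue WITH (ROWLOCK, READPAST)
    ORDER BY id)
DELETE FROM cte
OUTPUT deleted.id, deleted.payload;
```

The app then puts the 100 OUTPUT rows into its in-memory list and lets the async call handlers drain it, rather than hitting the database once per call.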