You might be able to do the INSERT into Payments_To_Charges in one SQL statement, but I'm not sure it would be worth it. It seems like this would be easier to build, debug, and maintain in procedural code. Something like this:
CREATE TABLE Payments (PaymentId Number(10), Amount Number(6,2));
CREATE TABLE Charges (ChargeID Number(10), Amount Number(6,2));
CREATE TABLE Payments_To_Charges
(PaymentID Number(10), ChargeID Number(10), Amount Number(6,2));
INSERT INTO Payments VALUES (1,4);
INSERT INTO Payments VALUES (2,4);
INSERT INTO Charges VALUES (1,2);
INSERT INTO Charges VALUES (2,5);
INSERT INTO Charges VALUES (3,6);
INSERT INTO Charges VALUES (4,4);
INSERT INTO Charges VALUES (5,10);
Declare
  vPaymentAmount Payments.Amount%Type;
  vAppliedAmount Payments_To_Charges.Amount%Type;
Begin
  -- Apply each payment, lowest PaymentID first, to the outstanding charges.
  -- Note the ORDER BY clauses: without them the processing order of both
  -- cursors is undefined.
  For vPayment In (SELECT PaymentID, Amount FROM Payments ORDER BY PaymentID) Loop
    vPaymentAmount := vPayment.Amount;
    -- Each charge's outstanding balance: its original amount minus
    -- whatever has already been applied to it.
    For vCharge In (
      SELECT ChargeID, Amount FROM
      (
        SELECT ChargeID, Amount -
               NVL((SELECT SUM(Amount) FROM Payments_To_Charges pc
                    WHERE pc.ChargeID = c.ChargeID), 0) Amount
        FROM Charges c
      )
      WHERE Amount > 0
      ORDER BY ChargeID
    ) Loop
      vAppliedAmount := LEAST(vPaymentAmount, vCharge.Amount);
      INSERT INTO Payments_To_Charges (PaymentID, ChargeID, Amount)
      VALUES (vPayment.PaymentID, vCharge.ChargeID, vAppliedAmount);
      vPaymentAmount := vPaymentAmount - vAppliedAmount;
      Exit When vPaymentAmount = 0;
    End Loop;
  End Loop;
End;
/
SELECT * FROM Payments_To_Charges;
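For the sample data above, and assuming the cursors process rows in ascending ID order (likely for tables this small, but not guaranteed unless ORDER BY clauses are added), the final SELECT should show each payment split across charges like this:

```sql
-- Expected contents of Payments_To_Charges, assuming ID-ordered processing:
--
-- PAYMENTID  CHARGEID  AMOUNT
-- ---------  --------  ------
--         1         1       2   -- payment 1 clears charge 1 (amount 2)...
--         1         2       2   -- ...and puts its remaining 2 on charge 2
--         2         2       3   -- payment 2 finishes charge 2 (5 - 2 = 3)...
--         2         3       1   -- ...and leaves its last 1 on charge 3
```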
Update:
remaining_balance is the issue: you can't reference a column alias in the WHERE clause of the same query block. Either wrap the query in a sub-query and apply that condition at the outer level, or replace remaining_balance in the WHERE clause with the full expression (charges.amount - transactions.total_paid).
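Both fixes sketched below; the charges/transactions schema is assumed from the column names mentioned above, so adjust the join to match your actual query.

```sql
-- Option 1: repeat the expression in the WHERE clause.
SELECT charges.charge_id,
       charges.amount - transactions.total_paid AS remaining_balance
FROM   charges
JOIN  (SELECT charge_id, SUM(amount) AS total_paid
       FROM   transactions
       GROUP  BY charge_id) transactions
       ON transactions.charge_id = charges.charge_id
WHERE  charges.amount - transactions.total_paid > 0;

-- Option 2: wrap it in a sub-query; the alias is visible one level up.
SELECT *
FROM  (SELECT charges.charge_id,
              charges.amount - transactions.total_paid AS remaining_balance
       FROM   charges
       JOIN  (SELECT charge_id, SUM(amount) AS total_paid
              FROM   transactions
              GROUP  BY charge_id) transactions
              ON transactions.charge_id = charges.charge_id) t
WHERE  t.remaining_balance > 0;
```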
I haven't managed to reproduce this after running your code a few times.
I presume it must happen when a later row gets inserted onto an earlier page in the file, though.
So the order of operations is (for example)
- Rows are inserted into the heap on pages 200, 207, 223.
- The SELECT statement starts and performs an allocation ordered scan. It finds that the first page is 200 and is blocked waiting for a row lock to be released.
- Other rows are inserted by the first transaction. Some of them are allocated on a page before 200. The inserting transaction commits.
- The row lock is released and the allocation ordered scan continues. The rows earlier in the file are missed.
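A minimal two-session sketch of that interleaving, assuming a heap like the [SomeTable] queried below (the column types and INSERT shape are illustrative, not taken from the original repro):

```sql
/* Session 1: insert rows and hold the transaction open, so row locks
   are held on (say) pages 200, 207, 223. */
BEGIN TRAN;
INSERT INTO [SomeTable] (SomeData, Moment, SomeInt)
VALUES (NEWID(), SYSDATETIME(), 1);

/* Session 2: starts an allocation ordered scan of the heap and blocks
   on the row lock held on page 200. */
SELECT * FROM [SomeTable];

/* Session 1: insert more rows -- some may be allocated on a page
   *before* 200 -- then commit, releasing the locks. */
INSERT INTO [SomeTable] (SomeData, Moment, SomeInt)
VALUES (NEWID(), SYSDATETIME(), 2);
COMMIT;

/* Session 2 now resumes the scan from page 200 onwards and never
   visits the earlier pages, so those rows are missed. */
```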
The table comprised 10 pages. By default the first 8 pages will be allocated from mixed extents, and after that the table will be allocated a uniform extent. Maybe in your case space was available in the file for a free uniform extent prior to the mixed extents that were used.
You can test this theory by running the following in a different window after you have reproduced the issue, and checking whether the missing rows from the original SELECT all appear at the beginning of this resultset.
SELECT [SomeData],
Moment,
SomeInt,
file_id,
page_id,
slot_id
FROM [SomeTable]
/*Undocumented - Use at own risk*/
CROSS APPLY sys.fn_PhysLocCracker(%% physloc %%)
ORDER BY page_id, SomeInt
The operation against an indexed table will be in index key order rather than allocation order, so it will not be affected by this particular scenario.
An allocation ordered scan can be carried out against an index but it is only considered if the table is sufficiently large and the isolation level is read uncommitted or a table lock is held.
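As a rough illustration of those two preconditions (the size threshold is commonly cited as more than 64 pages, though that is an internals detail that may vary by version):

```sql
-- Two cases where an allocation ordered scan of an index becomes a
-- candidate, provided the table is large enough:
SELECT COUNT(*) FROM [SomeTable] WITH (NOLOCK);   -- read uncommitted
SELECT COUNT(*) FROM [SomeTable] WITH (TABLOCK);  -- table-level lock held
```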
Because read committed generally releases locks as soon as the data is read, it is possible for a scan against the index to read rows twice or not at all (if the index key is updated by a concurrent transaction, causing the row to move forward or back). See The Read Committed Isolation Level for more discussion of this type of issue.
By the way, I was originally envisaging for the indexed case that the index was on one of the columns that increases with insert order (any of Id, Moment, SomeInt). However, even if the clustered index is on the random SomeData column,
the issue still doesn't arise.
I tried
DBCC TRACEON(3604, 1200, -1) /*Caution. Global trace flag. Outputs lock info
on every connection*/
SELECT TOP 2 *,
%%LOCKRES%%
FROM [SomeTable] WITH(nolock)
ORDER BY [SomeData];
SELECT *,
%%LOCKRES%%
FROM [SomeTable]
ORDER BY [SomeData];
/*Turn off the trace flags with TRACEOFF. Note this doesn't check whether
they were already on before we started*/
DBCC TRACEOFF(3604, 1200, -1)
Results were as below
The second resultset includes all 1,000 rows. The locking info shows that even though the scan was blocked waiting on lock resource 24c910701749,
when that lock was released it didn't just continue the scan from that point. Instead it immediately released the lock and acquired a row lock on the new first row.
Best Answer
Definitely not.
Consider the following transaction:
No read done by the transaction controlling code: