Making use of Jeff Moden's "Tally OH!" CSV splitter, DelimitedSplit8K:
CREATE FUNCTION [dbo].[DelimitedSplit8K]
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
--WARNING!!! DO NOT USE MAX DATA-TYPES HERE! IT WILL KILL PERFORMANCE!
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 1 up to 10,000...
-- enough to cover VARCHAR(8000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER()
OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just
-- once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final
-- element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l
;
go
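A quick sanity check (my own example, not from Jeff's article) shows how the function behaves on the kind of comma-plus-space lists used below. Because it splits on the single-character delimiter only, any space after the comma stays attached to the item:

```sql
-- Hypothetical smoke test for DelimitedSplit8K.
-- The function splits on the bare ',' only, so ', '-separated lists
-- keep the leading space on every item after the first.
SELECT ItemNumber, Item
FROM dbo.DelimitedSplit8K('A1K2, A1N2, C4Y3', ',');
-- ItemNumber  Item
-- ----------  ------
-- 1           A1K2
-- 2            A1N2
-- 3            C4Y3
```

Wrap `Item` in `LTRIM()` at the call site if those leading spaces are unwanted.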
We can code the solution as a CROSS APPLY against Jeff's function followed by a PIVOT, like so:
with data as (
select Code,Location,Quantity,Store from ( values
('L698-W-EA', NULL, 2, 'A')
,('L82009-EA', 'A1K2, A1N2, C4Y3, CBP2', 2, 'A')
,('L80401-A-EA', 'A1S2, SHIP, R2F1, CBP5, BRP, BRP1-20', 17,'A')
,('CWD2132W-BOX-25PK', 'A-AISLE', 1, 'M')
,('GM22660003-EA', 'B1K2', 1, 'M')
)data(Code,Location,Quantity,Store)
)
,shredded as (
select Code,Location,Quantity,Store,t.*
from data
cross apply [dbo].[DelimitedSplit8K](data.Location,',') as t
)
select
pvt.Code,pvt.Quantity,pvt.Store
,cast(isnull(pvt.[1],' ') as varchar(8)) as Loc1
,cast(isnull(pvt.[2],' ') as varchar(8)) as Loc2
,cast(isnull(pvt.[3],' ') as varchar(8)) as Loc3
,cast(isnull(pvt.[4],' ') as varchar(8)) as Loc4
,cast(isnull(pvt.[5],' ') as varchar(8)) as Loc5
,cast(isnull(pvt.[6],' ') as varchar(8)) as Loc6
from shredded
pivot (max(Item) for ItemNumber in ([1],[2],[3],[4],[5],[6])) pvt;
go
yielding this:
Code Quantity Store Loc1 Loc2 Loc3 Loc4 Loc5 Loc6
----------------- ----------- ----- -------- -------- -------- -------- -------- --------
L698-W-EA 2 A
L82009-EA 2 A A1K2 A1N2 C4Y3 CBP2
L80401-A-EA 17 A A1S2 SHIP R2F1 CBP5 BRP BRP1-20
CWD2132W-BOX-25PK 1 M A-AISLE
GM22660003-EA 1 M B1K2
I think the GROUP BY should be applied first, and only then should you unpivot the results. Using CROSS APPLY to unpivot multiple columns like this is fine; I use the exact same technique without issues. You mention that you have more columns not shown here, but they probably carry the same value within each group, so you could include them from the beginning using MAX/MIN.
SELECT [Employee Id],
[Payroll Name],
v.EDT AS [E/D/T],
v.EDTName AS [E/D/T Name],
v.EDTAmount AS [E/D/T Amount],
v.EDTHours AS [E Hours]
FROM (
SELECT CK.EMP AS [Employee Id],
CK.PAYROLL AS [Payroll Name],
SUM(CK.REGULAR_GROSS) AS REGULAR_GROSS,
SUM(CK.REGULAR_HOURS) AS REGULAR_HOURS,
SUM(CK.Over_Time_GROSS) AS Over_Time_GROSS,
SUM(CK.Over_Time_HOURS) AS Over_Time_HOURS,
SUM(CK.Double_Time_GROSS) AS Double_Time_GROSS,
SUM(CK.Double_Time_HOURS) AS Double_Time_HOURS
FROM #CHECKS CK
GROUP BY
CK.EMP, CK.PAYROLL
) DerivedTable
CROSS APPLY (
VALUES
        ('E', 'Regular Gross',    REGULAR_GROSS,     REGULAR_HOURS),
        ('E', 'Overtime Gross',   Over_Time_GROSS,   Over_Time_HOURS),
        ('E', 'Doubletime Gross', Double_Time_GROSS, Double_Time_HOURS)
) v(EDT, EDTName, EDTAmount, EDTHours)
Is that what you expected?
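For anyone who wants to run the query above, here is a minimal, made-up #CHECKS table. The column names are taken from the query itself, but the data types and figures are my own invention, not from the original poster:

```sql
-- Hypothetical sample data so the unpivot query above is runnable.
-- Types and amounts are guesses for illustration only.
CREATE TABLE #CHECKS (
    EMP               int,
    PAYROLL           varchar(50),
    REGULAR_GROSS     decimal(10,2),
    REGULAR_HOURS     decimal(10,2),
    Over_Time_GROSS   decimal(10,2),
    Over_Time_HOURS   decimal(10,2),
    Double_Time_GROSS decimal(10,2),
    Double_Time_HOURS decimal(10,2)
);

INSERT INTO #CHECKS VALUES
    (1001, 'Weekly', 800.00, 40.0, 150.00, 5.0,  0.00, 0.0),
    (1001, 'Weekly', 800.00, 40.0,   0.00, 0.0, 60.00, 1.5);
-- Each employee/payroll pair is SUMmed first; CROSS APPLY (VALUES ...)
-- then turns the three amount/hours column pairs into three rows.
```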