This is a common problem with floating point numbers everywhere.
Floating point numbers stored in computer systems should only ever be considered approximations, because many numbers that are easy to represent in decimal have binary expansions longer than the available precision (some are in fact never-ending). See ypercube's links and https://stackoverflow.com/questions/588004/is-floating-point-math-broken, amongst many other references, for more detail.
The most common example given (as seen in that StackOverflow link) is 0.1+0.2 not resulting in exactly 0.3. You have to apply extra rounding or flexible bounds checking (in both cases reducing the effective precision) to get the behaviour you are expecting.
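A minimal Python sketch of the same effect; the underlying IEEE 754 double arithmetic is identical to what most databases' float/double types use:

```python
a = 0.1 + 0.2
print(a)            # 0.30000000000000004
print(a == 0.3)     # False

# Workaround: round to a fixed number of places before comparing,
# at the cost of some effective precision.
print(round(a, 2) == round(0.3, 2))  # True
```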
As your data has a fixed number of decimal places (or a fixed maximum) of 2, you would be much better off using the fixed-place decimal/numeric types or similar. These are actually stored and processed as scaled integers, avoiding the need for any floating point representation internally and so avoiding the approximation problem (all integers can be represented exactly in both base 10 and base 2, assuming you have enough digits/bits). For instance, 0.1 and 0.2 might be stored as 1000 and 2000, so the addition results in 3000, which when converted to a string for display becomes 0.3, not the 0.30000000000000004 you might get out of a floating point calculation. It also compares accurately to 0.3 in the same type, which would likewise be stored as 3000.
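A hedged sketch of both approaches in Python: `Decimal` behaves like a database DECIMAL/NUMERIC column (values held exactly, not as binary approximations), and the second half hand-rolls the same idea as scaled integers:

```python
from decimal import Decimal

# Exact decimal arithmetic, as a DECIMAL/NUMERIC column would do:
print(Decimal('0.1') + Decimal('0.2'))                    # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True

# The same idea with scaled integers (scale factor 10_000, as in the
# 1000/2000/3000 example above):
SCALE = 10_000
a, b = 1000, 2000      # 0.1 and 0.2 stored as integers
total = a + b          # 3000, i.e. exactly 0.3
print(total / SCALE)   # 0.3 (only converted to float for display)
```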
Is there a way to select all records containing more than two numbers after the period?
You could convert to string and count the characters after the decimal point (or comma, depending on your system locale). In T-SQL this would find values that have ended up with more than two decimal places after being stored in a binary float format and then translated back to decimal:
SELECT *
FROM myTable
WHERE CHARINDEX('.', REVERSE(CAST(floatValue AS NVARCHAR(MAX)))) > 3
(you may need to tweak function names and related syntax for Postgres).
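A rough Python equivalent of the same check, in case it helps to prototype the logic outside the database (the sample values are hypothetical; scientific-notation output is not handled in this sketch):

```python
def decimal_places(value: float) -> int:
    """Count digits after the decimal point in the value's string form."""
    text = repr(value)
    if '.' not in text:
        return 0
    return len(text.split('.', 1)[1])

rows = [19.99, 5.0, 0.1 + 0.2]   # the last value has "grown" extra digits
flagged = [v for v in rows if decimal_places(v) > 2]
print(flagged)                   # [0.30000000000000004]
```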
I agree with oNare that adding an SQL Fiddle that builds a simplified version of your data model, shows what you have tried so far, and shows the desired results would be most helpful.
That said, you might want something like the following in order to avoid double-counting the sales for any customers that were given multiple offers. I've put inline comments quoting the relevant parts of your question in this example query:
SELECT s.customerId, s.totalSellValue, o.* -- "other CRMOffer data"
FROM CRMOffer o
JOIN (
-- "I must then group by CRMOffers customerID and sum sell values, but sum only once for each SellPK"
-- Compute total sell value for each customer before joining to CRMOffer in order to avoid double-counting
-- NOTE: Grouping by Sell.customerId is the same as grouping by CRMOffers.customerId since the two must match
SELECT customerId, SUM(sellValue) AS totalSellValue
FROM Sell
WHERE customerId IS NOT NULL -- "If Sell table doesnt have a specific customerID, sell value will be null"
GROUP BY customerId
) s
ON s.customerId = o.customerId
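To see why the pre-aggregation matters, here is a hedged sketch using in-memory SQLite (table and column names taken from the query above, data invented): a naive join pairs every Sell row with every offer for the same customer, doubling the sum, while aggregating first keeps it correct.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE Sell (customerId INTEGER, sellValue NUMERIC);
    CREATE TABLE CRMOffer (offerId INTEGER, customerId INTEGER);
    INSERT INTO Sell VALUES (1, 100), (1, 50);
    INSERT INTO CRMOffer VALUES (10, 1), (11, 1);  -- two offers, one customer
""")

# Naive join: each Sell row matches each offer, so the sum doubles.
naive = con.execute("""
    SELECT SUM(s.sellValue)
    FROM CRMOffer o
    JOIN Sell s ON s.customerId = o.customerId
""").fetchone()[0]
print(naive)    # 300, not 150

# Aggregate first, then join: each offer row sees the correct total.
correct = con.execute("""
    SELECT s.totalSellValue
    FROM CRMOffer o
    JOIN (SELECT customerId, SUM(sellValue) AS totalSellValue
          FROM Sell
          WHERE customerId IS NOT NULL
          GROUP BY customerId) s
    ON s.customerId = o.customerId
""").fetchone()[0]
print(correct)  # 150
```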
Best Answer
The type DOUBLE is a floating point type represented in binary internally. Binary floating point numbers will produce "rounding errors" when converted to base 10. If you want precise numeric calculations with a decimal point, you'll have to use the NUMERIC or DECIMAL type.
To help you understand why floating point types have rounding errors when converting to base 10, please see this article:
https://stackoverflow.com/questions/2100490/floating-point-inaccuracy-examples
Also, just for your reference, the NUMERIC or DECIMAL types are going to be orders of magnitude slower at calculations compared to the DOUBLE type.
Also see Data Types in SQLite.
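As a hedged illustration using SQLite (where declared column types are affinities rather than strict types, so a REAL column gets binary floating point behaviour): summing a REAL column reproduces the rounding error, while storing scaled integers such as cents stays exact.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE t (asReal REAL, asCents INTEGER);
    INSERT INTO t VALUES (0.1, 10), (0.2, 20);
""")

real_sum, cents_sum = con.execute(
    "SELECT SUM(asReal), SUM(asCents) FROM t").fetchone()
print(real_sum)   # 0.30000000000000004 -- binary float approximation
print(cents_sum)  # 30 -- exact; format as 0.30 only at display time
```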