There is an example on this documentation page: https://learn.microsoft.com/en-us/sql/t-sql/data-types/precision-scale-and-length-transact-sql?view=sql-server-ver16&redirectedfrom=MSDN
It says that 123.45 has a precision of 5 and a scale of 2.
To me, that says that when creating a DECIMAL, the first number is the precision, and the second number is how much of that precision is dedicated to digits after the decimal point.
But my testing does not agree with that. Here are a few examples:
SELECT 124.28 / 1039.72 -- 0.119532181
SELECT CAST(124.28 AS float) / CAST(1039.72 AS float) -- 0.119532181741238
SELECT CAST(124.28 AS DECIMAL(38,30)) / CAST(1039.72 AS DECIMAL(38,30)) -- 0.119532
I would expect DECIMAL(38,30) to have at least as many digits as the float example, but it has fewer.
How do I set up a decimal to give more digits in the result?
The documentation is totally accurate; what you are asking about is decimal division and the precision and scale of the result, which has nothing to do with the definition of decimal itself.
The remarks section explains what happens when you divide two decimal numbers. From what I can tell, your operand types are so large that the result scale is being cut back to its minimum.
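Those remarks give the rules for e1 / e2 on DECIMAL(p1, s1) and DECIMAL(p2, s2): the result scale is max(6, s1 + p2 + 1) and the result precision is p1 - s1 + s2 + that scale; when the precision exceeds 38 it is capped at 38 and the scale is shrunk to protect the integral part, but never below 6 for multiplication and division. A small sketch of those rules (in Python, since it is pure arithmetic; the function name is my own, and this mimics the documented rules rather than the engine itself) shows why DECIMAL(38,30) operands end up with only six decimal places:

```python
def divide_result_type(p1, s1, p2, s2):
    """Result (precision, scale) of DECIMAL(p1,s1) / DECIMAL(p2,s2),
    following the documented T-SQL derivation rules (a sketch,
    not the actual engine)."""
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    if precision > 38:
        # Cap precision at 38 and give up scale to keep the integral
        # part, but division keeps a minimum scale of 6.
        scale = max(6, min(scale, 38 - (precision - scale)))
        precision = 38
    return precision, scale

print(divide_result_type(38, 30, 38, 30))  # (38, 6): only 6 decimal places survive
print(divide_result_type(18, 9, 18, 9))    # smaller operands leave far more scale
```

So, somewhat counterintuitively, casting to a smaller, more realistic type such as DECIMAL(18,9) leaves more room for digits after the decimal point in the quotient than DECIMAL(38,30) does.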
You can see what is happening if you put the result into a #temp table and then query the column definitions, e.g. dbfiddle.uk/_2k6wEFu
Just for fun… I recently learned that NASA calculates Pi to 15 decimals. They can locate Voyager I which is 15 billion miles away within 6 inches. Now this made me chuckle because I was guilty of thinking more is better. xkcd.com/2170
"I would expect DECIMAL(38,30) to have at least as many digits as the float example, but it has less." I wouldn't, since float is an approximate numeric type: it cannot represent 0.119532181 exactly, so it approximates it as 0.119532181741238.
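The comment's point about float being approximate can be reproduced with any IEEE-754 double (which is what SQL Server's float(53) is), for example in Python:

```python
# IEEE-754 double-precision division, same representation as T-SQL float(53).
# The binary result is only an approximation of the exact decimal quotient;
# printed at full precision it shows the familiar ~15-16 significant digits.
q = 124.28 / 1039.72
print(q)
```

The extra digits past the DECIMAL result are not "more accurate" for free; they are simply however much precision a 64-bit binary float happens to carry.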