Description
The noise parameter in the sinograms only increases the absorption, never decreases it; see the image below:

This introduces a statistical bias. I think the reason for this behaviour is that noise is added to the attenuation data rather than to the measured data:
```python
def measure(self, phantom, noise=False):
    [...]
    if noise > 0:
        newdata += newdata * noise * np.random.poisson(1)
    self.record()
    return newdata
```

The way a "normal" tomography measurement works is by dividing (dark-current normalized) camera images: the projection divided by the flat field. Both of these images have Poisson-distributed counting errors.
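To make the counting statistics concrete, here is a minimal sketch of that measurement model for a single detector pixel (the variable names and the assumed count rate are illustrative, not taken from the package):

```python
import numpy as np

rng = np.random.default_rng(0)

attenuation = 1.0                 # hypothetical true line integral
flat_counts = 1e4                 # assumed mean photon count in the flat field

# Both images are Poisson-distributed photon counts.
projection = rng.poisson(flat_counts * np.exp(-attenuation))
flat_field = rng.poisson(flat_counts)

# The measured attenuation is the negative log of their ratio; it scatters
# both above and below the true value of 1.0, unlike the current one-sided term.
measured = -np.log(projection / flat_field)
print(measured)
```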
The variance of the Poisson distribution equals its mean lambda:

Var = sigma^2 = lambda

so the relative standard deviation of a measured count is sigma / lambda = 1 / sqrt(lambda). For a noise level equivalent to one relative standard deviation (i.e. sigma / lambda = noise), this corresponds to a count rate of counts = noise^(-2).
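As a quick numerical check of that relation (a sketch, not part of the proposed change):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = 0.05                             # requested relative noise level
counts = noise ** (-2)                   # 400 counts

samples = rng.poisson(counts, size=1_000_000)
print(samples.std() / samples.mean())    # ~0.05, i.e. one relative standard deviation
```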
I would suggest applying the noise in the following manner:
```python
if noise > 0:
    counts = noise**(-2)
    newdata = -numpy.log(numpy.random.poisson(counts * numpy.exp(-newdata)) / numpy.random.poisson(counts))
```

This approach creates a nice histogram of the noise, distributed evenly around the expected value.
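Wrapped into a helper, this could look roughly as follows. The helper name is hypothetical, and sampling the flat field per pixel (rather than once per call, as in the snippet above) is my assumption:

```python
import numpy as np

def apply_counting_noise(newdata, noise):
    """Hypothetical helper sketching the proposed noise model; assumes
    `newdata` holds attenuation line integrals, ideally roughly in [0, 3]."""
    counts = noise ** (-2)
    projection = np.random.poisson(counts * np.exp(-newdata))
    # Per-pixel flat-field sample (assumption); the snippet above draws a
    # single flat-field value for the whole array.
    flat_field = np.random.poisson(counts, size=np.shape(newdata))
    return -np.log(projection / flat_field)

# Example: 5 % noise on a sinogram-sized array of line integrals.
sinogram = np.random.uniform(0.5, 2.5, size=(180, 256))
noisy = apply_counting_noise(sinogram, noise=0.05)
print(sinogram.mean(), noisy.mean())   # the means stay close, no one-sided bias
```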
However, this approach would require newdata to be scaled reasonably, i.e. ideally newdata would lie in [0, 3], and it would support using "real-world units".
It is obviously also more computationally intensive than the old approach, but considering the time required for computing all the phantom intersections, I expect this overhead to be negligible.
This is an example of what the histograms look like (500,000 random number pairs) with a transmission value of 0.25 (exp(-newdata) = 0.25):

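A sketch along these lines should reproduce such histograms (the 5 % noise level and the matplotlib plotting are assumptions for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
noise = 0.05                        # assumed noise level
counts = noise ** (-2)
transmission = 0.25                 # exp(-newdata) = 0.25
pairs = 500_000                     # number of random number pairs

projection = rng.poisson(counts * transmission, size=pairs)
flat_field = rng.poisson(counts, size=pairs)
attenuation = -np.log(projection / flat_field)

plt.hist(attenuation, bins=200)
plt.axvline(-np.log(transmission), color="red")   # expected value, -ln(0.25) ≈ 1.386
plt.xlabel("measured attenuation")
plt.ylabel("frequency")
plt.show()
```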