Sensor fusion – Correlated noise and why it’s such an issue

You may never have heard of correlated noise, but it is a huge issue when doing signal processing or even just using a couple of sensors. It is most painful when using a cheap set of Night Vision Goggles!

Cheap NVGs have only a single multiplier tube, with a set of mirrors sending that one image to both eyes. This is the worst case, because the noise is perfectly correlated across both eyes, and your brain really isn’t used to that. Any transient event, including random noise, registers equally in both! This will drive you to distraction, and perhaps get you killed.

Every speckle, stray photon, or warm pixel will appear as if it were really there, rather than as an artifact. You get a similar effect when straining to see in a pitch-dark room – ‘ghosts’ and noise once it is dark enough, because there is so much noise relative to the real signal that it shows high levels of correlation.
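To put a rough number on how bad ‘perfectly correlated’ is, here is a minimal sketch (plain NumPy, illustrative thresholds, not any particular NVG): with one shared tube a speckle that fools one eye fools both every time, whereas with two independent tubes the chance of both eyes agreeing on the same random speckle collapses to roughly the square of the single-channel rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100_000
threshold = 3.0          # "did this eye see a bright speckle?" (a 3-sigma event)

# Case A: one multiplier tube, mirrored to both eyes -> identical noise
shared = rng.normal(0.0, 1.0, n_frames)
left_a, right_a = shared, shared

# Case B: two independent tubes -> independent noise
left_b = rng.normal(0.0, 1.0, n_frames)
right_b = rng.normal(0.0, 1.0, n_frames)

def joint_false_alarm(left, right, thr):
    """Fraction of frames where BOTH channels report a speckle."""
    return np.mean((left > thr) & (right > thr))

print("single channel rate :", np.mean(left_b > threshold))                    # ~0.0013
print("correlated (mirror) :", joint_false_alarm(left_a, right_a, threshold))  # same ~0.0013
print("independent tubes   :", joint_false_alarm(left_b, right_b, threshold))  # ~0.0013 squared
```

In other words, mirroring one tube into both eyes gives you none of the noise rejection that two eyes normally provide.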

Another example of ‘noise correlation’ (though not strictly the same thing) causing amplification issues is IR camera systems and small-format cameras with a built-in flash mounted nearly on the optical axis; incidentally, this also affects active NVG if not taken into consideration. The high-powered illumination causes a huge amount of light to reflect off anything close, blinding the camera to what is behind. A tiny spider up close will block a large part of the view when lit by a flash or IR lamps, and even the fine strands of web, invisible to the naked eye, appear as giant white strings across the picture. Needless to say, these are a distraction to humans and a nightmare for a vision system.
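A rough back-of-the-envelope sketch of why a close object dominates when the light source sits next to the lens (the distances and the 1/r² vs 1/r⁴ fall-off assumptions are illustrative, not measurements from a real system):

```python
# Illumination on the subject falls off roughly as 1/r^2 from the flash.
# For sub-pixel scatterers (web strands, dust, fog droplets) the light returned
# to the sensor picks up another ~1/r^2, giving roughly 1/r^4 overall.

def relative_return(r_near_m: float, r_far_m: float, sub_pixel: bool = True) -> float:
    """How much brighter the near object appears relative to the far scene."""
    exponent = 4 if sub_pixel else 2
    return (r_far_m / r_near_m) ** exponent

spider = 0.05   # spider / web strand 5 cm from the lens
scene  = 5.0    # the scene you actually care about, 5 m away

print(f"extended surface : ~{relative_return(spider, scene, sub_pixel=False):,.0f}x brighter")
print(f"sub-pixel strand : ~{relative_return(spider, scene, sub_pixel=True):,.0f}x brighter")
```

With numbers like that, it is no surprise the spider saturates the sensor and everything behind it disappears.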

Sensor fusion can mitigate this as a design factor, but it is generally better to understand the cause. Plus, if you don’t understand it, you might run into it and not realise that you made a simple but costly mistake.

Always move illuminators as far off the optical axis as possible. Very narrow angles increase scattering back into the lens, and lead to huge issues in fog, mist and rain; simple, early design changes mitigate it.
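As a rough illustration of why the offset helps, here is a small geometry sketch (illustrative numbers only; the real glare also depends on how the droplets scatter, but moving the angle away from straight-back generally reduces the light bounced into the lens):

```python
import math

def backscatter_angle_deg(baseline_m: float, particle_dist_m: float) -> float:
    """Scattering angle at a fog droplet sitting on the camera axis.
    180 deg = light bounced straight back into the lens (worst-case glare)."""
    return 180.0 - math.degrees(math.atan2(baseline_m, particle_dist_m))

# Droplet 0.5 m in front of the lens -- close fog is what causes the haze.
for baseline in (0.0, 0.05, 0.3, 1.0):
    print(f"illuminator offset {baseline:4.2f} m -> scattering angle "
          f"{backscatter_angle_deg(baseline, 0.5):5.1f} deg")
```

An on-axis illuminator forces every nearby droplet into direct backscatter; even a modest offset pulls the geometry away from that worst case for the fog closest to the lens, which is the fog that washes out the image most.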

When using a single window for multiple sensors (for example, a mirrored splitter that gives the thermal imager and the visible camera the same view), be aware that anything seen by one sensor is seen by both. Yes, it is obvious, but a stone chip or a dead fly can blind a large part of both of your sensors at once.

Avoid sending identical images to both eyes. It drives users to distraction, and it also robs them of depth perception. If you must, at least give the user a single screen to look at, rather than two showing the same image.

Occasionally you might think that an on-axis IR illuminator is better – picking out numberplates, for example – and occasionally it might be true. However, consider the case where there is no numberplate: you simply won’t have enough illumination to be of use. Better to increase illumination off-axis and get an image that is more than a burning numberplate against a black background.

As regards sensor fusion: if there are two (or more) sensors on different axes, then anything seen by both at the same time is very likely to actually be there. You might also want to cross-check that temporally (is it still there in the next frame from each camera?), but in general it is trustworthy.
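Here is a minimal sketch of that ‘seen by both, and still there next frame’ rule, assuming both cameras already report detections in a common ground-plane coordinate frame (the names, distances and frame counts are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float        # position in a shared world/ground frame, metres
    y: float
    frame: int      # frame index (both cameras assumed frame-synchronised)

def fuse(cam_a: list[Detection], cam_b: list[Detection],
         max_dist_m: float = 0.5, min_frames: int = 2) -> list[Detection]:
    """Keep only detections seen by BOTH sensors in (roughly) the same place,
    and which persist for at least `min_frames` consecutive frames.
    Noise that is independent between the two sensors rarely survives this."""
    def close(a: Detection, b: Detection) -> bool:
        return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= max_dist_m ** 2

    # Step 1: spatial AND across sensors, per frame
    confirmed = [a for a in cam_a
                 if any(close(a, b) and a.frame == b.frame for b in cam_b)]

    # Step 2: temporal persistence -- is it still there in the next frame?
    persistent = []
    for det in confirmed:
        run = sum(1 for other in confirmed
                  if close(det, other) and 0 <= other.frame - det.frame < min_frames)
        if run >= min_frames:
            persistent.append(det)
    return persistent
```

The thresholds are a design choice: tighten them and you reject more correlated junk (chips on a shared window, shared glare) at the cost of occasionally dropping a real but marginal target.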

The obvious downside is that it increases cost. That is a trade-off for you to consider. However, my experience is that unless things are very tightly controlled, you will run into the issues outlined above. You want things to be robust, especially for machine vision: false positives are nearly as bad as missing an event, and a system that cries ‘Wolf!’ too often will be ignored or disabled.
