Q: What should I set my “gain” and “offset” to?
Before answering this, a bit of background is useful. Specifically, just what the heck do gain and offset do? Before we cover this, a brief primer on how those photons you capture become intensities you see on the screen is needed. If you wish, skip down to “OK, so what should I set my gain and offset to?” below.
How do signals off my CCD become intensity values?
When each CCD pixel is read out, there is a certain amount of voltage corresponding to how many photons were collected and converted into electrons. This is an analog signal that needs to be converted into a digital signal so that we have a number corresponding to the intensity. This conversion happens in the analog-to-digital converter (ADC). In so doing, we have a specification often seen on cameras, the overall system gain, typically specified as some number of electrons per ADU (analog-to-digital unit, aka the raw intensities you see in your image in a program like Nebulosity). A camera may have an overall system gain of something like 0.7 e-/ADU or 1.3 e-/ADU, etc. This means that it takes 0.7 or 1.3 electrons to move the raw intensity up by one unit (so a single electron corresponds to roughly 1.4 or 0.77 ADU, respectively).
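To make the conversion concrete, here's a tiny sketch (the 0.7 and 1.3 e-/ADU figures are just the hypothetical gains from the text, and the helper name is mine):

```python
def electrons_to_adu(electrons, gain_e_per_adu):
    """Convert an electron count to a raw intensity (ADU).

    gain_e_per_adu is the overall system gain in e-/ADU, so the
    ADU value is electrons divided by that gain, truncated to a
    whole number (the ADC can't output fractions).
    """
    return int(electrons / gain_e_per_adu)

# 1,000 electrons through the two example gains:
print(electrons_to_adu(1000, 0.7))  # -> 1428 ADU
print(electrons_to_adu(1000, 1.3))  # -> 769 ADU
```

Same electrons, very different raw numbers; the gain just sets the exchange rate between the two.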
There are four key limitations to keep in mind when thinking about the ADC process:
1) There are no fractional ADU outputs. So, one electron in both the systems above would probably end up recording 1 ADU. You can’t have half an ADU (and you can’t have half an electron).
2) Your ADC has a minimum value of 0 and a total number of intensity steps of 2 ^ (# bits in your ADC). For a 16-bit ADC, this is 0-65,535. For an 8-bit ADC, this is 0-255, etc.
3) Zero is evil and 65,535 is bad but not evil. When your signal hits either, you lose information. If the sky is at zero and your faint galaxy is at zero, no amount of stretching will bring it back. 0 × 1 = 0 × 100 = 0.
4) Your CCD has a limited number of electrons it can hold, called the well depth. This may be 20,000 e-, 40,000 e-, etc. Note that for all the cameras I know of that let you adjust the gain and offset (Orion Starshoot, Meade DSIs, QHY cameras, etc.), the well depth is < 65,535. This will be key for my argument below.
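The four limitations above boil down to one small model of an idealized ADC (a simplification, of course: real converters add their own noise and non-linearity):

```python
def adc(signal_adu, bits=16):
    """Idealized ADC: truncate to a whole ADU, then clip to the
    converter's range of 0 .. 2**bits - 1."""
    max_adu = 2**bits - 1
    return min(max(int(signal_adu), 0), max_adu)

print(adc(0.9))      # -> 0      (no fractional ADUs: information lost)
print(adc(-5.0))     # -> 0      (clipped at evil zero)
print(adc(70000.0))  # -> 65535  (clipped at the 16-bit ceiling)
```

Everything below in this article is really about arranging the signal so it lands comfortably inside that 0-65,535 window.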
What do gain and offset do?
With all this in your head, we can now describe what gain and offset controls on cameras do. After coming off the CCD and before hitting the actual ADC there is typically a small pre-amplifier (this may be inside the ADC chip itself). What this preamp does is allow you to boost the signal by some variable amount and to shift the signal up by some variable amount. The boosting is called gain and the shift is called offset.
So, let’s say that you have pixels that would correspond to 0.1, 0.2, 1.1, and 1.0 ADU were the ADC able to deal with fractional numbers. Now, given that it’s not, this would turn into 0, 0, 1, and 1 ADU. Two bad things have happened. First, the 0.1 and 0.2 have become the same number and the 1.1 and 1.0 have become the same number. We’ve distorted the truth and failed to accurately represent subtle changes in intensity. This failure is called quantization error. Second, the first two have become 0 and, as noted above, 0 is an evil black hole of information.
Well, what if we scaled these up by 10x before converting them into numbers (i.e., we introduce some gain)? We’d get 1, 2, 11, and 10. Hey, now we’re getting somewhere! With gain alone, we’ve actually fixed both problems. In reality, the situation is often different and the ADC’s threshold for moving from 0 to 1 might be high enough so that it takes a good number of electrons to move from 0 to 1. This is where injecting an offset (a DC voltage) into the signal comes in to make sure that all signals you could possibly have coming off the CCD turn into a number other than zero.
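Running the example above through the same kind of idealized conversion shows both failures and the fix (the 10x gain is the illustrative value from the text, and the helper name is mine):

```python
def convert(signal, gain=1.0, offset=0.0):
    """Apply preamp gain and offset, then truncate like the ADC does."""
    return int(signal * gain + offset)

raw = [0.1, 0.2, 1.1, 1.0]  # "would-be" fractional ADU values

no_gain = [convert(s) for s in raw]
with_gain = [convert(s, gain=10) for s in raw]

print(no_gain)    # -> [0, 0, 1, 1]   quantization error plus evil zeros
print(with_gain)  # -> [1, 2, 11, 10] all four values now distinct
```

With the offset parameter you could additionally shift everything up so that even the smallest possible signal lands above zero.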
Gain’s downside: Bit depth and dynamic range
From the above example, it would seem like we should all run with lots of gain. The more the better! Heck, it makes the picture brighter too! I often get questions about this with the assumption that gain is making the camera more sensitive. It’s not. Gain does not make your camera more sensitive. It boosts the noise as well as the signal and does not help the signal to noise ratio (SNR) in and of itself. Gain trades off dynamic range and quantization error.
We saw above how it reduces quantization error. By boosting the signal we can have fractional differences become whole-number differences. What’s this about dynamic range?
Let’s come up with another example. Let’s have one camera with a gain of 1. So, 1 e-/ADU. Let’s have another run at 0.5 e-/ADU. Now, let’s have a pixel with 1k e-, another with 10k e-, another at 30k e-, and another at 50k e-. In our 1 e-/ADU cam, we of course have intensities of 1000, 10000, 30000, and 50000. In our 0.5 e-/ADU cam, we have intensities of 2000, 20000, 60000, and 65535. What? Why not 100000? Well, our 16-bit camera has a fixed limit of 65535. Anything above that gets clipped off. So while the 1 e-/ADU camera can faithfully preserve this whole range, the 0.5 e-/ADU camera can’t. Its dynamic range is limited now.
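Here's that two-camera comparison in code, clipping at the 16-bit ceiling:

```python
MAX_ADU = 65_535  # 16-bit ADC ceiling

def to_adu(electrons, gain_e_per_adu):
    """Electrons -> ADU at a given system gain, clipped to 16 bits."""
    return min(int(electrons / gain_e_per_adu), MAX_ADU)

pixels = [1_000, 10_000, 30_000, 50_000]  # electron counts

print([to_adu(e, 1.0) for e in pixels])  # -> [1000, 10000, 30000, 50000]
print([to_adu(e, 0.5) for e in pixels])  # -> [2000, 20000, 60000, 65535]
```

The 0.5 e-/ADU camera wanted to say 100,000 for that last pixel but simply has no number above 65,535 to say it with.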
How do manufacturers determine gain and offset for cameras that don’t allow the user to adjust them?
Let’s pretend we’re making a real-world camera now and put in some real numbers and see how these play out. Let’s look at a Kodak KAI-2020 sensor, for example. The chip has a well depth specified at 45k e-. So, if we want to stick 45,000 intensity values into a range of 0-65,535, one easy way to do it is to set the gain at 45,000 / 65,535, or about 0.69 e-/ADU. Guess what the SBIG ST-2000 (which uses this chip) has its gain fixed at… 0.6 e-/ADU. How about the QSI 520ci? 0.8 e-/ADU. As 45k e- is a target value, with actual chips varying a bit, the two makers have chosen to set things up a bit differently to deal with this variation (SBIG’s will clip the top end off as it’s going non-linear a bit more readily), but both are in the same range and both fix the value.
Why? There’s no real point in letting users adjust this. Let’s say we let users control the gain and they set it to 5 e-/ADU. Well, with 45k e- for a maximum electron count at 5 e-/ADU, we end up with a max of 9,000 ADU and we induce strong quantization error. 10, 11, 12, 13, and 14 e- would all become the same value of 2 ADU in the image, losing the detail you so desperately want. What if the user set it the other way, to 0.1 e-/ADU? Well, you’d turn those electron counts into 100, 110, 120, 130, and 140 ADU and wonder just what’s the point of skipping 10 ADU per electron. You’d also make 6,553 e- be the effective full-well capacity of the chip. So, 6,553:1 would be the maximum dynamic range rather than 45,000:1. Oops. That nice detail in the core of the galaxy will have been blown out and saturated. You could have kept it preserved and not lost a darn thing (since each electron counts for > 1 ADU) if you’d left the gain at ~0.7 e-/ADU.
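The manufacturer's arithmetic here is simple enough to check directly (well depth and bit depth are the figures from the text; the function name is mine):

```python
def matched_gain(well_depth_e, bits=16):
    """Gain (e-/ADU) that maps a full well onto the ADC's full range."""
    return well_depth_e / (2**bits - 1)

print(round(matched_gain(45_000), 2))  # -> 0.69, near SBIG's 0.6 and QSI's 0.8

# A user-chosen gain of 5 e-/ADU wastes the range:
print(int(45_000 / 5))    # -> 9000 ADU maximum: strong quantization error
# ...while 0.1 e-/ADU caps the usable well depth:
print(int(65_535 * 0.1))  # -> 6553 e- effective full well
```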
What about offset? Well, it’s easy enough to figure out the minimum value a chip is going to produce and add enough offset in the ADC process to keep it such that this is never going to hit 0.
OK, so what should I set my gain and offset to?
The best value for your camera may not be the best value for other cameras. In particular, different makers set things up differently. For example, on a Meade DSI III that I recently tested, running the gain full-out at 100% let it just hit full well at 65,535 ADU. Running below 100%, it hit full well at 40,000, 30,000, or 10,000 ADU. There’s no point in running this camera at anything less than 100% gain. On a CCD Labs Q8-HR I have, even at gains of 0 and 1 (on its 0-63 scale), the camera would hit 65,535 on bright objects (like the ceiling above my desk). There’s no point in running this camera at gains higher than 0 or 1.
Why is there no point? The camera only holds 25k e-. If a gain of 0 or 1 gets me to 0.38 e-/ADU (so that those 25k e- become 65535), running at 0.1 e-/ADU will only serve to limit my dynamic range. Each single electron already comes out to more than 2 ADU.
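The Q8-HR numbers work out the same way (the 25k e- well depth is the figure from the text):

```python
well_depth = 25_000  # electrons the chip can hold
max_adu = 65_535     # 16-bit ADC ceiling

gain = well_depth / max_adu  # e-/ADU that fills the whole range
print(round(gain, 2))        # -> 0.38 e-/ADU
print(round(1 / gain, 1))    # -> 2.6 ADU per electron, already > 2
```

Once every electron already gets more than one ADU to itself, cranking the gain further buys you nothing and costs you dynamic range.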
So, how do I set it? (man, you ramble a lot when you get going!)
1) Take a bias frame and look for the minimum value in it. Is it at least, say, 100 and less than a thousand or a few thousand? If so, your offset is fine. If it’s too low, boost the offset. If it’s too high, drop it. Repeat until you have a bias frame whose minimum sits roughly in the 100-1000 range. Don’t worry about precision here; it won’t matter at all in the end. You now know your offset. Set it and forget it. Never change it.
2) Aim the camera at something bright or just put it on your desk with no lens or lens cap on and take a picture. Look at the max value in the image. Is it well below 65k? If so, boost the gain. Is it pinned at 65k? If so, drop the gain. Now, if you’re on a real target (daylight ones are great for this) you can look at the histogram and see the bunching up at the top end as the camera hits full well. Having that bunch-up roughly at 65,535, plus or minus a bit, is where you want to be. If you pull up just shy, you’ll get the most out of your chip, but you’ll also have non-linearity up there. You’ve got more of a chance of having odd color casts on saturated areas, for example, as a result. If you let that just clip off, you’ve lost a touch, but what you’ve lost is very non-linear data anyway (all this assumes, BTW, an ABG chip, which all of these cams in question are). Record that gain and set it and forget it. Never change it.
By doing this simple, daytime, two-step process you’ve set things up perfectly. You’ll be sure to never hit the evil of zero and you’ll be making your chip’s dynamic range fit best into the 16-bits of your ADC. Again, all the cameras in question have full-well capacities below 65,535 so you are sure to have enough ADUs to fit every electron you record into its own intensity value.
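The two-step check above boils down to two comparisons. A sketch, with the caveat that the 100-1000 window is the rough range suggested above and the 500 ADU slack on the gain test is my own illustrative tolerance, not a spec:

```python
def offset_ok(bias_min):
    """Step 1: the bias-frame minimum should sit roughly in 100-1000."""
    return 100 <= bias_min <= 1000

def gain_ok(bright_max, max_adu=65_535, slack=500):
    """Step 2: a saturated frame should peak at (or just shy of) the
    ADC ceiling -- well below it means the gain is too low."""
    return bright_max >= max_adu - slack

print(offset_ok(350), gain_ok(65_535))  # -> True True
print(offset_ok(5), gain_ok(40_000))    # -> False False
```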
The above assumes you have more ADUs available than electrons. This is true as noted for the cameras in question here but isn’t universally true. For example, if you have an 8-bit ADC, variable gain is quite important as you may want to trade off quantization error against dynamic range. You may be fine blowing out the core of a galaxy to get those faint arms and want to run at 1 or 2 e-/ADU instead of 10 or 50 or 200 e-/ADU. This happens in 12-bit DSLRs as well with their 4,096 shades of intensity, but not so much with 14-bit DSLRs and their 16,384 shades.
Please note that none of this has considered noise at all. The situation is even “worse” when we factor in the actual noise we have. If the noise in the frame is 8 ADU that means the bottom 3 bits are basically worthless. That 45,000:1 dynamic range is really 45,000:8 or 5,625:1 and you’re not even able to really pull out every electron. But, that’s a topic for another day. (Google “Shannon Information” if interested).
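The noise-limited figures work out like this (the 8 ADU of frame noise is the illustrative number from the text):

```python
import math

full_scale_adu = 45_000  # usable range from the example above
noise_adu = 8            # noise in the frame, in ADU

dyn_range = full_scale_adu / noise_adu
lost_bits = math.log2(noise_adu)

print(int(dyn_range))  # -> 5625, i.e. 5,625:1 rather than 45,000:1
print(int(lost_bits))  # -> 3 bottom bits buried in the noise
```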