Please enter your desired picture resolution in LP/mm to see the equivalent pixel counts, both the bare minimum and the counts needed for a properly sampled signal. See the additional explanations below.
[Calculator: resulting pixel counts, depending on sensor or film size — horizontal and vertical, properly sampled and undersampled — for S35 (e.g. RED Helium 8K), Full Frame (e.g. RED Monstro 8K), and IMAX 70mm film.]
LP/mm, pixel resolution and MTF Charts
MTF charts from lens manufacturers and review sites tell you how well a lens can resolve contrast differences in fine details (y-axis), plotted against either the spatial resolution itself or the distance from the image center (x-axis).
MTF charts that put the distance from the center on the x-axis usually use one or more content-relevant spatial resolutions, measured in line pairs per millimeter (LP/mm). They can't state pixel resolutions, since those numbers would depend on the size of the sensor behind the lens. To find out how many LP/mm you need to cover a specific pixel resolution, you have to convert between LP/mm and the pixel count of the camera sensor. The calculator above helps you do that.
When we talk about resolutions above the equivalent of 30 LP/mm, or 2457 line widths of resolution for a Full-Frame image, the additional pixels are needed for proper signal sampling. Contrast at such high spatial frequencies can be compromised without significant negative effects. In fact, it can even benefit the smoothness of the image if the contrast of these highest frequencies decreases, acting like the roll-off used in common quantization methods. Avoiding high contrast in the smallest pixels is one of the reasons why lens manufacturers are (co-)developing diffusion filters to be inserted into the light path, and why some cinematographers like to use vintage lenses - whether they know it or not.
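The conversion behind the calculator can be sketched in a few lines of Python. The 40.96 mm sensor width used in the example is an assumption taken from published RED Monstro 8K VV spec sheets; other sensors will differ:

```python
def lpmm_to_pixels(lpmm: float, sensor_width_mm: float,
                   samples_per_line_pair: int = 2) -> int:
    """Pixels across the sensor needed to carry `lpmm` line pairs per mm.

    One line pair is two line widths, so 2 samples per line pair is the
    bare (Nyquist) minimum; higher factors give properly sampled numbers.
    """
    return int(lpmm * sensor_width_mm * samples_per_line_pair)

# 30 LP/mm across an assumed 40.96 mm sensor width (RED Monstro 8K VV):
print(lpmm_to_pixels(30, 40.96))  # 2457 line widths, matching the text
```

The same function with a factor of 4 gives the "properly sampled" column of the calculator.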
Analog Resolution vs. Digital Resolution
Analog spatial resolution is not the same as digital spatial resolution. In the digital world, we have to quantize image information into pixels, where one pixel is the smallest addressable unit of spatial information that can deliver the full range of colors and luminance the digital visual system is capable of. That makes it a picture element, which is where the name "pixel" comes from.
Side note: When you take healthy vision with a resolution capability of 1 arc minute (20/20 vision, or Visus 1.0) as the goal for resolution, don't forget that you can't just put that number into a square pixel grid and consider it done. We don't see in grids, and objects in nature don't appear in grids; we only put pictures into digital grids because we need to address the elements of the picture. I have explained and demonstrated the details in my seminars at various events in 2019.
Proper signal sampling (spatial quantization)
In order to reproduce any given analog spatial resolution, we need a digital spatial resolution (sampling frequency) that is significantly higher; otherwise, we will face spatial quantization errors. Such errors appear when we ignore important factors like the Nyquist-Shannon sampling theorem or geometric challenges such as curves and edges running close to the horizontal or vertical axis. Audio sampling is a good illustration of how important this kind of proper sampling is: in the world of audio, we use sampling frequencies like 44.1 kHz, 48 kHz, 96 kHz or even 192 kHz for a signal that should accurately reproduce frequencies up to 20 kHz, the limit of what we can hear. No one questions the need for this kind of proper sampling (up to 48 kHz) or oversampling (up to 192 kHz) in the world of audio, so why are so many people questioning it in the world of video? Motion pictures are even more demanding, because quantization errors can appear in multiple directions: along the x-axis, the y-axis, and the diagonals.
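To make the audio analogy concrete, here is a small sketch of the frequency folding that undersampling causes. This is the standard Nyquist folding rule, not tied to any particular camera or recorder:

```python
def alias_frequency(signal_hz: float, sample_rate_hz: float) -> float:
    """Apparent frequency of a tone after sampling: anything above the
    Nyquist limit (sample_rate / 2) folds back into the reproduced band."""
    folded = signal_hz % sample_rate_hz
    return min(folded, sample_rate_hz - folded)

# A 30 kHz tone recorded at 48 kHz does not simply vanish -- it reappears
# as a spurious 18 kHz tone. Spatial aliasing (moire) does exactly the
# same thing to fine detail in an undersampled image:
print(alias_frequency(30_000, 48_000))  # 18000
```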
We also need to consider other factors, such as the Bayer pattern used in most digital cameras today, as well as chroma subsampling. To compensate for them, we need some headroom in sampling. This is one of the reasons why post-production happens mainly in 4:4:4, and professional deliveries are usually no less than 4:2:2 subsampled.
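As a quick illustration of what is at stake, the common J:a:b subsampling schemes keep the following fractions of the chroma samples. This is just the standard notation over a 4x2 pixel block, nothing camera-specific:

```python
# J:a:b notation over a 4-pixel-wide, 2-row block:
# a = chroma samples in the first row, b = chroma changes in the second row.
SCHEMES = {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:2:0": (2, 0)}

for name, (a, b) in SCHEMES.items():
    fraction = (a + b) / 8  # chroma samples relative to the 8 luma samples
    print(f"{name}: {fraction:.0%} chroma resolution")
# 4:4:4 keeps full chroma, 4:2:2 keeps half, 4:2:0 keeps a quarter.
```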
As a rule of thumb, the digital spatial sampling frequency needs to be at least four times higher than the highest analog frequency we want to reproduce without artifacts. I tend to say a factor of 8 (decent oversampling) should finally solve all spatial restrictions of digital systems. But attention: this does not mean that we necessarily need the full contrast capability of individual pixels at this finest level of quantization. It's more about reproducing the relevant spatial frequencies no matter where or in which direction they appear.
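Applied to the numbers used in this article, and assuming a generic 36 mm full-frame sensor width for simplicity, the three factors map out like this:

```python
SENSOR_WIDTH_MM = 36.0   # assumed generic full-frame width
TARGET_LPMM = 30         # the lens resolution figure used throughout

for samples_per_pair in (2, 4, 8):  # Nyquist minimum, proper, oversampled
    px = int(TARGET_LPMM * SENSOR_WIDTH_MM * samples_per_pair)
    print(f"factor {samples_per_pair}: {px} px across the sensor")
# factor 2: 2160 px, factor 4: 4320 px, factor 8: 8640 px -- the last
# landing right in 8K territory.
```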
The real value of 8K
Contrary to some negative prejudices about 8K, the real value of 8K is avoiding visible pixel structures while maintaining the relevant spatial resolution. I like to say: "Let's use 8K and make the pixels disappear!" This is true for cameras as well as displays. Even though human vision can't resolve much more than 2K-4K at normal viewing distances, we still need that resolution to be available for every position and direction of an edge on the screen. If an edge moves across the screen and lands between pixels, for example, only a higher pixel resolution or some prefiltering can prevent flickering details.
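The 1-arc-minute acuity figure from the side note above can be turned into a rough screen-side check. This uses the simplified square-grid view that the side note warns is not the whole story, and the screen width and viewing distance are made-up example values:

```python
import math

def pixels_per_degree(screen_width_m: float, horizontal_px: int,
                      distance_m: float) -> float:
    """Horizontal pixels per degree of visual angle at a given distance.
    About 60 px/degree corresponds to 1 arc minute per pixel (Visus 1.0)."""
    width_deg = 2 * math.degrees(math.atan(screen_width_m / (2 * distance_m)))
    return horizontal_px / width_deg

# Example: an 8K display (7680 px), 1.6 m wide, viewed from 2 m.
# The result is well above the ~60 px/degree acuity threshold, which is
# the headroom that keeps edges clean in any position and direction:
print(round(pixels_per_degree(1.6, 7680, 2.0)))
```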
Interestingly, if you use the calculator above, you may come to the valid conclusion that 8K is beneficial for sampling film material that was really well produced, like some movies from the late 90s and early 2000s.
Film Grain Size
One of the best analog film stocks used in many motion picture productions was Kodak Vision3. Detailed specification data is available for the resolution of this film stock, including MTF charts in the technical documents. Reading them, the contrast value drops with spatial frequency, and it does so differently for each color: small highlights tend to shift towards yellow because the blue component starts losing contrast at around 10 LP/mm, staying above a 50% ratio only until about 35 LP/mm. If you consider the green channel the most important one, the film's resolution at 50% contrast is about 50 LP/mm. Of course, we need to account for some of the practical issues of analog film, like the somewhat variable back focus, prints losing resolution with each copy, and so on. Still, I think it is fair to assume that ideal productions ended up resolving up to 30 LP/mm. Interestingly, many lens charts today cover the same two points: 10 LP/mm and 30 LP/mm. Even when you look at the specs of really high-resolution cameras and lenses, you seldom find anything above that. Having studied the relationship between MTF, lenses and sensors for a while, and having done post-production with a wide range of material, my personal opinion is that any motion picture resolution above 30 LP/mm is needed only if you aim for clean signal sampling and want to make the pixels disappear. As a side note, there are smaller grain elements on analog film as well, but due to their number and the presence of bigger elements, their contrast gets lower and lower; at 30% contrast, we may find up to 80 LP/mm of resolution.
Conclusion: In order to emulate what a motion picture production on film is capable of, your sensor needs to resolve about 30 LP/mm with proper spatial quantization, which means at least 60 LP/mm on the sensor side. Personally, I go even further and take the geometry of the rectangular pixels as well as the Bayer pattern into account. I know it sounds like a lot, but if you want to get rid of all pixel structures without losing details, we're talking about 120-240 LP/mm on the sensor side, while the lens still only needs to perform without compromise up to 30 LP/mm.
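The arithmetic behind this conclusion can be laid out explicitly. Splitting the headroom into a separate geometry factor and Bayer factor is my reading of the reasoning; the text above only gives the combined 120-240 LP/mm range:

```python
LENS_LPMM = 30        # what the lens must deliver without compromise
NYQUIST_FACTOR = 2    # proper spatial quantization -> 60 LP/mm minimum
GEOMETRY_FACTOR = 2   # headroom for edges in arbitrary positions/directions
BAYER_FACTOR = 2      # headroom for color-filter-array interpolation

minimum = LENS_LPMM * NYQUIST_FACTOR
with_geometry = minimum * GEOMETRY_FACTOR
with_bayer = with_geometry * BAYER_FACTOR
print(minimum, with_geometry, with_bayer)  # 60 120 240
```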