Gunnleik Groven has run his own tests, and also tests in collaboration with Geoff Boyle (CML), on the resolution of the Dragon compared with the Alexa. He has collected various test results in high resolution, and he also has some reflections on where high resolution counts and where it is less important. All the tests can be seen on his own blog, where you will also find plenty of fun for geeks like him.
The following is taken from his blog:
To me, an image can technically be broken down into these aspects:
- Dynamic range
- Color cleanness
- Color fidelity/Saturation
- Resolution
- Delivery format
And as a final parameter:
- Is the camera available? (!)
All these aspects of an image/camera are part of the matrix that makes us choose one over another for a particular production.
After these, (to me) the more emotional points come in:
- Ease of use on set
- If it works well for the shot I want to get
- Price of use (Total, including on-set and post)
- If I “like it”
- If it works well with “other” technical parts of the production (like sound and post)
- Workflow in post and confidence that I get the desired result
- If it “feels” good
- Confidence at customer-level
Does resolution matter?
Clipped highlights and unusable lowlights suck. And if you cannot control those, resolution probably drops far down the list.
Still, from what we have just seen, even when delivering at 1080 from oversampled source images from both cameras, the higher captured resolution gives the higher resulting resolution.
So I guess we could conclude:
- If you can control exposure, originating resolution matters quite a bit.
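For readers who want to reproduce that kind of comparison, here is a minimal sketch (my addition, not from the blog) of the oversampled 1080 delivery step itself: a higher-resolution frame is filtered down to 1920x1080. The filenames and the choice of a Lanczos filter are illustrative assumptions, not Groven's actual pipeline.

```python
# Minimal sketch of oversampled 1080 delivery using Pillow.
# Filenames are placeholders; any 16:9 high-resolution frame export will do.
from PIL import Image

def deliver_1080(source_path, out_path):
    """Downscale a higher-resolution frame to a 1920x1080 delivery frame."""
    frame = Image.open(source_path)
    delivered = frame.resize((1920, 1080), Image.LANCZOS)  # high-quality low-pass resize
    delivered.save(out_path)

# e.g. frames debayered at 6K and at 2.8K, compared side by side at 1080
deliver_1080("dragon_6k_frame.tif", "dragon_1080.tif")
deliver_1080("alexa_2880_frame.tif", "alexa_1080.tif")
```

Comparing the two 1080 results side by side is then a matter of pixel-peeping the fine detail, as in the tests above.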
Next… When does in-camera resolution not matter that much? (Soft images, after all, scale better than sharp images, as a reference to my “Frozen” experience.)
- Resolution does not matter a lot if most of your image is soft
Ouch… That sounds obvious, right?
BUT given the (past) trend of shooting S35mm at T1.3, that actually is a valid point.
When shooting at low aperture, whatever is sharp in the image will appear comparatively sharp, no matter which camera you shoot with.
And the out-of-focus part of the image will inevitably camouflage whatever lack of resolution there is.
I would to some extent argue that the “I shoot everything at T 1.3” trend is somewhat related to what the cameras are capable of capturing. Lower resolution cameras simply “look better” when a lot of the image is out of focus. (And it is a cost-effective way to compose and clean up shots…)
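To put numbers on how thin that focus plane really is, here is a small sketch using the standard thin-lens depth-of-field formulas. The 0.025 mm circle of confusion for Super 35 and the use of the T-stop directly as the f-number are my own simplifying assumptions, not figures from the tests.

```python
# Standard depth-of-field calculation: hyperfocal distance, then near/far limits.
def depth_of_field(focal_mm, f_number, focus_m, coc_mm=0.025):
    """Return (near_m, far_m, total_m) of acceptable focus for a thin lens."""
    f = focal_mm
    s = focus_m * 1000.0                       # focus distance in mm
    H = f * f / (f_number * coc_mm) + f        # hyperfocal distance in mm
    near = s * (H - f) / (H + s - 2 * f)
    far = float("inf") if s >= H else s * (H - f) / (H - s)
    return near / 1000.0, far / 1000.0, (far - near) / 1000.0

for stop in (1.3, 2.8, 5.6):
    near, far, total = depth_of_field(focal_mm=50, f_number=stop, focus_m=3.0)
    print(f"50mm at T{stop}, focused at 3m: {near:.2f} m to {far:.2f} m "
          f"(about {total * 100:.0f} cm in acceptable focus)")
```

With a 50 mm lens focused at 3 m, that works out to roughly 23 cm of acceptable focus at T1.3 versus about a metre at T5.6, which is why so much of a wide-open frame ends up hiding whatever resolution the camera lacks.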
Lack of resolution becomes distracting when a shot has a high level of detail that is important, and when you get a lot of moiré and softness as a consequence of what the camera can resolve.
Now which shots are these?
To give a general idea, that would be wide shots with tons of important info at higher T-stops.
But that sounds so unfilmic, does it not?
Not really…
Imagine
- The fighting sequence of RAN (Akira Kurosawa)
- Anything but the T 0.9 shots by Stanley Kubrick
- Whatever in “The Dark Knight”
- The Godfather
- Anything by Jacques Tati (Thanks Brice!)
- Apocalypse Now
- Star Wars IV-VI
- Kinda most of the “classics”
Low T-stops do not equal “filmic”, IMHO.
And: With higher T-stops, you need the detail.
The most frequently quoted arguments for high resolution I see are these:
- Re-framing and stabilisation (see the sketch after this list)
- VFX/Compositing
- Oversampling to lower formats (like I looked at in the previous chapters)
- Reduced noise in delivery-image
- If you need to pull print-size stills from the film-sourced material (Yup, people, including me, actually do that…)
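To put a rough number on the re-framing and stabilisation argument, here is a small back-of-the-envelope sketch (my addition); the capture and delivery widths below are illustrative round numbers, not specific camera modes.

```python
# How much punch-in a given capture width buys over the delivery width.
def reframe_headroom(capture_w, delivery_w):
    """Return (punch-in factor, % of the capture width that can be cropped away)."""
    factor = capture_w / delivery_w
    return factor, (1 - 1 / factor) * 100

for capture, delivery in [(6144, 4096), (6144, 1920), (3840, 1920), (2880, 1920)]:
    factor, margin = reframe_headroom(capture, delivery)
    print(f"{capture} capture -> {delivery} delivery: up to {factor:.2f}x punch-in, "
          f"{margin:.0f}% of the width available for reframing or stabilisation")
```

The same headroom is what stabilisation eats into: every pixel the stabiliser crops away has to come out of the margin between capture and delivery resolution.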