
What Camera Data Your VFX Vendor Actually Needs

Sourav Chatterjee
9 min read

A VFX vendor receiving a plate without camera data is in the position of a translator receiving a document without a source language. The work is technically possible — there are ways to reverse-engineer most of the missing information from the footage itself — but every reverse-engineered value is a place where errors creep in, and errors compound across the post pipeline.

This isn’t a problem that only affects student projects. Major productions deliver plates with missing or incomplete camera data more often than producers realize. Sometimes the camera report was kept but never sent to post. Sometimes it was sent but with key fields blank. Sometimes the second-unit DP shot without a camera report at all and nobody noticed until the matchmove came back wrong.

This post is about what camera data your VFX vendor actually needs, why each field matters, and what it costs (in time, money, and accuracy) when any of them is missing.

The Camera Report

The camera report is the document that travels with each take. It records the technical parameters of every shot — what camera, what lens, what settings, what conditions. In a properly run production, every take has a camera report entry, and the entries get consolidated and delivered to post alongside the plates.

A complete camera report covers, at minimum:

  • Camera body (make and model)
  • Sensor size and resolution
  • Lens (make, model, and focal length)
  • Aperture (f-stop)
  • Focus distance
  • Shutter angle or speed
  • Frame rate
  • ISO / sensitivity
  • White balance / color temperature
  • Color profile (RAW, log, Rec.709, etc.)
  • Lens height (for ground reference)
  • Tilt and pan angles at the start of the shot, if relevant
  • Filters used (NDs, polarizers, diffusion)

In practice, most camera reports cover the first half of this list reliably and are less consistent on the second half. The fields most often missing are focus distance, lens height, and filters. All three matter for matchmove and integration work.
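One way to keep a report complete is to treat each take as a structured record with required and optional fields. A minimal sketch in Python (the field names and the ALEXA Mini values are illustrative, not an industry standard):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CameraReportRow:
    # Headline fields -- usually filled in reliably
    camera_body: str
    sensor_size_mm: tuple        # (width, height) in mm
    resolution: tuple            # (width, height) in pixels
    lens: str
    focal_length_mm: float
    aperture: float              # f-stop
    shutter_angle_deg: float
    frame_rate: float
    iso: int
    white_balance_k: int
    color_profile: str
    # Fields most often left blank -- all matter for matchmove/integration
    focus_distance_m: Optional[float] = None
    lens_height_m: Optional[float] = None
    filters: Optional[str] = None

def missing_fields(row: CameraReportRow) -> list:
    """Return the names of fields that were left blank."""
    return [k for k, v in asdict(row).items() if v is None]

# Illustrative values, not a real report:
row = CameraReportRow(
    camera_body="ARRI ALEXA Mini", sensor_size_mm=(28.25, 18.17),
    resolution=(3424, 2202), lens="Zeiss CP.3 50mm",
    focal_length_mm=50.0, aperture=2.8, shutter_angle_deg=180.0,
    frame_rate=24.0, iso=800, white_balance_k=5600,
    color_profile="LogC",
)
print(missing_fields(row))  # the three fields most often left off real reports
```

A check like `missing_fields` at the end of a shoot day catches gaps while the camera team can still fill them in.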

Why Focal Length Matters

The focal length of the lens determines the field of view. Without it, the matchmove team is guessing at how wide the camera was seeing, which means guessing at how far points in the frame are from the camera in 3D space.

Focal length can be reverse-engineered from the plate by analyzing parallax — the way nearby objects move differently from far objects when the camera moves. If there’s enough parallax in the shot, the matchmove software (or artist) can solve for focal length to within a few percent. If the camera barely moves, parallax is minimal, and the focal length solve is unreliable.

When focal length is missing on a low-parallax shot, the matchmove team has to make an educated guess. Sometimes the guess is close enough. Sometimes it’s not, and the guess shows up as misregistered CG elements that don’t sit correctly in the scene. The fix is to re-track with a different focal length, which is hours of work that wouldn’t have been needed if the field had been on the camera report.

Why Sensor Size Matters

The sensor size, combined with focal length, determines the actual angle of view. Two different cameras with the same nominal focal length can have very different fields of view if their sensors are different sizes.

A “50mm” lens on a Super 35 sensor produces a different field of view than a 50mm lens on a full-frame sensor. The matchmove math has to account for this — and it can’t unless the sensor size is known.
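The relationship is simple trigonometry: horizontal angle of view is 2·atan(sensor width / (2·focal length)). A quick sketch (the sensor widths are typical published values, not tied to any specific camera body):

```python
import math

def horizontal_aov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view from focal length and sensor width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The same 50mm lens on two different sensors:
s35 = horizontal_aov_deg(50.0, 24.89)  # Super 35, ~24.9 mm wide
ff  = horizontal_aov_deg(50.0, 36.0)   # full frame, 36 mm wide
print(round(s35, 1), round(ff, 1))     # roughly 28 deg vs roughly 39.6 deg
```

A gap of over ten degrees of field of view from the same nominal focal length is exactly why the matchmove solve needs the sensor size, not just the lens marking.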

The trap is that sensor size sometimes isn’t on the camera report because it’s “obvious” — everyone on set knew which camera was being used. In post, the team often doesn’t know, especially on multi-cam shoots or when the production used different cameras for different scenes. A field that was obvious on set becomes a question mark in post.

Sensor size also matters for color pipeline work. Different sensors have different color profiles, native white balances, and ISO behavior. Knowing the camera body lets the post team apply the right input device transform (IDT) for ACES workflows, or the right LUT for Rec.709 deliverables.

Why Lens Distortion Matters

Every lens distorts. Wide lenses bow lines outward (barrel distortion); some lenses bow them inward (pincushion distortion); zoom lenses distort differently at different focal lengths; vintage lenses often have characteristic distortion patterns. None of this is a flaw. It's just how the lens images the world.

For the comp to integrate cleanly, CG elements have to match the plate’s distortion. A CG building inserted into a wide-angle plate has to bow at its edges the way the real architecture does. A CG character standing next to live-action talent has to be positioned in undistorted space and then re-distorted to match the lens.

Lens distortion is solved either by capturing distortion grids (a checkered grid photographed through the lens at the start of the shoot — explicit reference for the distortion characteristics) or by analyzing the plate itself for distortion (less reliable, but workable). Camera reports should reference whether distortion grids were shot.

Without grids and without solid plate analysis, distortion has to be estimated. Estimates are usually close but rarely exact, and the integration suffers.
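The undistort/redistort round trip typically uses a radial polynomial model (Brown–Conrady is the common one). A minimal sketch of the distort step; the coefficients here are illustrative, and production pipelines use lens-specific values solved from the grids:

```python
def distort(x: float, y: float, k1: float, k2: float = 0.0) -> tuple:
    """Apply simple radial distortion to a normalized image point (x, y),
    with (0, 0) at the optical center. Negative k1 -> barrel distortion."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel distortion (negative k1) pulls edge points toward the center;
# points at the optical center are unaffected:
edge   = distort(1.0, 0.0, k1=-0.1)
center = distort(0.0, 0.0, k1=-0.1)
print(edge, center)
```

CG is positioned against the undistorted plate, then a function like this (with the solved coefficients) is applied to the render so its edges bow the same way the photography does.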

Why Focus Distance Matters

Focus distance determines where the lens's depth of field falls. For a shot where focus is shallow (talent in focus, background out of focus), the matchmove team needs to know how shallow it is, where it sits, and how it shifts during the take.

This matters specifically for two cases:

Defocus matching for CG elements. A CG element placed in a scene needs to be defocused to match the plate’s depth-of-field. If the plate is shot at f/1.8 with focus on the foreground, the CG element in the background needs significant defocus. The exact amount depends on focus distance.
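That "exact amount" can be estimated with the thin-lens model: the blur-circle diameter on the sensor depends on focal length, f-stop, focus distance, and subject distance. A sketch with illustrative numbers; real lenses deviate from the thin-lens approximation, so this is a starting point, not a calibration:

```python
def blur_circle_mm(focal_mm: float, f_number: float,
                   focus_m: float, subject_m: float) -> float:
    """Thin-lens blur-circle diameter (mm, on the sensor) for a subject
    at subject_m when the lens is focused at focus_m."""
    f = focal_mm
    s1 = focus_m * 1000.0    # focus distance in mm
    s2 = subject_m * 1000.0  # subject distance in mm
    return (f * f / (f_number * (s1 - f))) * abs(s2 - s1) / s2

# 50mm at f/1.8, focused on talent 2 m away; CG element in background at 10 m:
wide_open = blur_circle_mm(50.0, 1.8, 2.0, 10.0)   # large blur circle
stopped   = blur_circle_mm(50.0, 5.6, 2.0, 10.0)   # much smaller
print(round(wide_open, 3), round(stopped, 3))
```

At f/1.8 the background blur circle is over half a millimeter on the sensor, a few percent of the frame width; without the recorded focus distance, that number is a guess.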

Focus pulls. A shot where focus shifts during the take — from foreground to background, or vice versa — needs the focus distance recorded across the take, not just at the start. Without this, the CG defocus can’t track the live-action defocus, and the comp reveals itself.

Focus distance is one of the least-recorded fields on most camera reports because it changes during the take. Productions that record it well usually have a focus puller’s notes that travel to post; productions that don’t have to estimate from the plate.

Why Frame Rate Matters

Frame rate is rarely missing from camera reports; it's typically a headline value alongside resolution. But the fields adjacent to it sometimes are, and they matter:

Shutter angle/speed determines motion blur. A 180° shutter at 24fps produces standard film motion blur. A narrower shutter angle (equivalently, a faster shutter speed) produces less motion blur and a harder, more "video" look. CG elements have to be rendered with motion blur matched to the plate's, which means knowing the shutter setting.
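The conversion from shutter angle to per-frame exposure time (and therefore blur) is straightforward:

```python
def exposure_time_s(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time per frame: the fraction of the frame interval
    during which the shutter is open."""
    return (shutter_angle_deg / 360.0) / fps

print(exposure_time_s(180.0, 24.0))  # ~1/48 s, the standard film look
print(exposure_time_s(90.0, 24.0))   # ~1/96 s, half the motion blur
```

Renderers typically expose this as a shutter fraction (0.5 for a 180° shutter); the point is that the plate's value has to be known, not assumed.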

Pulldown or conform settings matter when the shoot was done at a non-standard frame rate (like 23.976fps for film-style delivery, or 25fps for PAL territories). The wrong conform produces frame-by-frame timing errors that propagate through the entire delivery.
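The scale of a wrong conform is easy to quantify. Over an hour, 24fps and 23.976fps (really 24000/1001) diverge by dozens of frames; a quick sketch:

```python
# Frames produced in one hour of shooting at each rate:
film = 24.0 * 3600
ntsc = (24000 / 1001) * 3600  # 23.976... fps

drift_frames = film - ntsc
drift_seconds = drift_frames / 24.0
print(round(drift_frames, 1), round(drift_seconds, 2))
```

Roughly 86 frames, about 3.6 seconds, per hour of material. On a single shot that shows up as a sync error; across a delivery it shows up everywhere.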

Why Lens Height Matters

The height of the lens above ground is the reference for placing CG elements that need to interact with the ground plane. A CG character that has to stand on the floor needs to know where the floor is in the camera’s frame of reference — and the floor’s position depends on the lens height at the time of capture.

Without lens height, the matchmove team can solve for it from the plate (ground features, perspective lines, talent stance) but the solve is approximate. Errors of a few inches in solved lens height become visible misalignments in CG placement.

Productions that shoot tracked CG (CG elements that need to interact with real-world geometry) should record lens height per camera setup. Productions that don't can usually get by with the post team's solve.

What Happens When Camera Data Is Missing

For each missing field, the post team has options. None of them are free.

Reverse-engineering from the plate. Most fields can be partially or fully recovered by analyzing the footage. The recovery takes time (hours to days, depending on the field and shot complexity) and is rarely as accurate as the original metadata.

Educated guessing. When recovery isn’t feasible, the team makes an educated guess based on common values for the camera and lens combination. Guesses are right sometimes, wrong other times. Wrong guesses produce visible errors in the integration.

Asking production. Sometimes a quick email back to the production team can recover the missing data — the camera assistant remembers the focal length, the focus puller has notes from the day. This works when the production team is responsive and the records are accessible. Often by the time post discovers the gap, the production team has moved on to other projects.

The cumulative cost of missing camera data is real but distributed — each individual shot loses an hour or two, but across a project of dozens or hundreds of shots, the project loses days. The total adds up faster than producers expect.

What a Good Camera Report Looks Like

The cleanest camera reports use a standardized template that travels with every project. Each take has its own row, each row has the same fields, and the fields are filled in completely.

The industry-standard template references the work of Karen Goulekas and other VFX supervisors who’ve been writing about on-set data acquisition for decades. The fields don’t change much between productions; what changes is the discipline of actually filling them in.

A useful test: at the end of a shoot day, can a stranger pick up the camera reports for the day’s shots and reconstruct exactly what was shot, with what settings, in what conditions? If yes, the reports are good enough for post. If no, post is going to spend extra time figuring out what should have been written down.

How FXiation Digitals Receives Camera Data

We can work with whatever level of camera data the production sends. Complete camera reports make our matchmove and 3D CGI work fast and accurate. Incomplete reports require us to spend extra time reverse-engineering, and we’ll flag that in the bid if we can see the gaps in advance.

What we’d ask of any production: send the camera reports with the plates. If reports weren’t kept, send what does exist — the camera body and lens at minimum, focal length and aperture per shot if possible. The more we know, the less we have to guess.

If you’re planning a shoot and want to confirm your camera report template covers what post will need, send us your template. We’ll mark up any fields that aren’t there and explain what they’re used for, before the shoot. Producers tell us this is the cheapest piece of pre-production prep they’ve done, and the post savings show up across every VFX shot in the delivery.

Common Questions

Questions readers often ask after reading this post.

What camera data does a VFX vendor need from a shoot?
At minimum: camera body, sensor size and resolution, lens make/model and focal length, aperture, focus distance, shutter angle or speed, frame rate, ISO, white balance, color profile, lens height for ground reference, and filters used. Most camera reports cover the headline fields; the ones most often missing are focus distance, lens height, and filters — and all three matter for matchmove and integration.
Why does focal length matter for VFX matchmove?
Focal length determines field of view and the parallax characteristics of the plate. The matchmove team uses it to solve for camera position in 3D space. Without focal length, the team has to estimate from plate analysis — which works on shots with significant parallax but is unreliable on shots where the camera barely moves. Errors of a few millimeters in assumed focal length show up as drift in the matchmove.
What is a lens distortion grid and why does VFX need it?
A lens distortion grid is a checkered chart photographed through the lens at the focal length and focus distance being used. It captures exactly how the lens distorts the image — barrel, pincushion, vignetting, chromatic aberration. CG elements have to be rendered with matching distortion to integrate at the edges of the frame. Without grids, distortion is estimated from the plate, which is approximate.
What happens in VFX when camera data is missing?
The post team reverse-engineers what they can from the plate, makes educated guesses on what they can't, or asks production. Each path costs time and produces less accurate results. The cumulative cost of missing camera data is real but distributed — each shot loses an hour or two, but across a project of dozens or hundreds of shots, the project loses days. The total adds up faster than producers expect.
Sourav Chatterjee

Founder, FXiation Digitals

Over a decade in VFX production, leading FXiation Digitals across compositing, 3D, and visual effects for studios in 15+ countries.

Need VFX for your project?

Get a free consultation from our team.

Get a Quote