A cinematic trailer for a major game release looks like the game and feels like the game, but it almost never renders the same way the game does. The trailer runs through a linear offline pipeline that’s closer to feature-film VFX than to the real-time engine the studio’s been polishing for two years. Cinematics, key art, gameplay-flavored marketing inserts, social cuts, all of it comes out of an offline render farm.
That handoff between the engine and the linear pipeline is where game-cinematic VFX projects either match the brand or quietly break it. FXiation Digitals has worked on the linear side of that handoff across multiple game releases, and the pattern is consistent: the technical work of porting assets is straightforward; the harder problem is preserving the look.
Why the linear pipeline at all
Real-time game engines have advanced enormously. Path-traced lighting, character cinematics, in-engine cutscenes that rival pre-rendered work from a decade ago. So why does the trailer team go offline?
A few reasons that compound. The engine ships at the platform’s render resolution and frame rate, usually 4K or below at 30 or 60 frames per second. Trailer deliveries often want higher-resolution masters with options for HDR mastering, vertical and square crops for social, and key-art stills at print resolution. The engine wasn’t built for any of that.
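The crop math behind those social deliverables is simple to sketch. The following is illustrative only, assuming a 3840x2160 (UHD) master and centered crops; real dimensions come from each platform's delivery spec, and `center_crop` is a hypothetical helper, not part of any pipeline tool.

```python
def center_crop(master_w, master_h, aspect_w, aspect_h):
    """Largest centered crop of the master at the requested aspect ratio."""
    if master_w * aspect_h >= master_h * aspect_w:
        # Master is wider than the target aspect: height limits the crop.
        h = master_h
        w = h * aspect_w // aspect_h
    else:
        # Master is taller than the target aspect: width limits the crop.
        w = master_w
        h = w * aspect_h // aspect_w
    return w, h

print(center_crop(3840, 2160, 1, 1))   # square social cut: (2160, 2160)
print(center_crop(3840, 2160, 9, 16))  # vertical cut: (1215, 2160)
```

The vertical 9:16 cut keeps barely a third of the master's width, which is why trailer framing for social often gets planned at layout rather than cropped as an afterthought.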
More importantly, the engine has performance budgets. Every shot it renders has to fit inside a frame budget that lets the game ship. A linear offline render has no such constraint. A cinematic trailer can put 32 path-tracing samples per pixel into a single frame, render dozens of light bounces, sustain volumetrics that the engine can only approximate. That’s where the visible quality lift comes from.
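The budget gap can be made concrete with a settings sketch. These numbers are illustrative, not drawn from any specific engine or renderer; the point is the shape of the trade, not the exact values.

```python
# Hypothetical render settings: runtime frame budget vs. offline trailer frame.
REALTIME = {
    "samples_per_pixel": 1,    # real-time path tracing leans on heavy denoising
    "max_light_bounces": 2,
    "frame_budget_ms": 16.6,   # must hold 60 fps so the game ships
}

OFFLINE = {
    "samples_per_pixel": 32,   # the trailer example above
    "max_light_bounces": 24,
    "frame_budget_ms": None,   # no runtime constraint; minutes per frame is fine
}

def sample_cost_multiplier(runtime, offline):
    """Rough cost multiplier from per-pixel sample count alone."""
    return offline["samples_per_pixel"] / runtime["samples_per_pixel"]

print(sample_cost_multiplier(REALTIME, OFFLINE))  # 32.0
```

That multiplier ignores the extra bounces and volumetrics, which push the real gap far wider; it is a floor, not an estimate.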
Then there’s the compositing layer. Linear offline renders deliver passes (diffuse, specular, AO, depth, IDs, mattes, custom utility passes) that let compositors finish the look in ways the engine can’t. Color grade per character, push contrast on a hero element only, layer in atmospheric haze that wasn’t in the original render. That’s compositing control the engine doesn’t expose.
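A minimal sketch of what that per-element control looks like, with passes faked as NumPy arrays. The pass names and the ID-matte convention are illustrative; a real pipeline reads multi-channel EXRs through something like OpenEXR or OpenImageIO bindings rather than building arrays by hand.

```python
import numpy as np

h, w = 4, 4  # tiny frame for illustration
diffuse  = np.full((h, w, 3), 0.30)
specular = np.full((h, w, 3), 0.05)

# ID matte isolating the hero element (1.0 inside, 0.0 outside).
hero_matte = np.zeros((h, w, 1))
hero_matte[1:3, 1:3] = 1.0

# Rebuild the beauty render from its passes.
beauty = diffuse + specular

# Push contrast on the hero element only; the rest of the frame is untouched.
graded_hero = np.clip((beauty - 0.5) * 1.4 + 0.5, 0.0, 1.0)
final = beauty * (1 - hero_matte) + graded_hero * hero_matte
```

Because the grade is applied through the matte, the hero read can be adjusted shot by shot without re-rendering, which is exactly the flexibility a baked engine frame gives up.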
What actually transfers from the engine
The starting point of the linear pipeline is an asset handoff from the game team. What comes across:
Geometry. Usually higher-resolution LODs (level-of-detail variants) than those that ship in the final game. The hero models that artists already built for cinematic in-engine cutscenes typically translate cleanly. Background and crowd assets sometimes need uprezzing or rebuilding if the trailer pushes the camera close to them.
Textures. Often a higher-resolution variant set, or fully reauthored textures for hero assets. The game’s texture compression is tuned for runtime memory; the trailer’s textures live in offline render memory, where budgets are effectively unlimited.
Animation. Rigs and existing animation cycles, with the rig controls validated for film use. Game rigs are often optimized for runtime performance (fewer bones, simpler skinning) and need cinematic-grade additions (better facial controls, finer skin deformation, secondary motion on cloth and hair). Sometimes the game team’s own cinematic team has already done this work for in-engine cutscenes; sometimes the trailer vendor extends it.
Lighting reference. The engine’s lit look used as a visual target for the offline render to match or amplify. This is the most important piece and the most often underspecified. Without lighting reference, the offline render makes its own choices and the trailer drifts away from the game.
What does NOT transfer cleanly are the engine’s runtime shaders. Engine shaders are written for the engine’s lighting model, its post-process stack, its specific rendering quirks. Reauthoring them in the offline renderer’s shader system is required, and it’s where look-development time goes. A character’s skin shader might take days to match offline because the engine was doing dozens of small tricks to get the look that the offline renderer has to reproduce explicitly.
The look-lock decision
Before any shot work begins, the trailer commits to a look philosophy. Two main options:
Engine-faithful. The trailer matches the game’s look exactly. Same color, same lighting moods, same character read. Audiences get exactly what they’ll see when they play. This is the safe choice and usually right for gameplay-flavored marketing.
Cinematic-amplified. The trailer pushes beyond what the engine ships, with richer lighting, more cinematic grade, more atmospheric depth, sometimes more dramatic camera work than the engine can sustain. The trade is a wider gap between trailer and gameplay, which fans sometimes flag.
Most major releases run a mix. The hero cinematic trailer goes amplified; the gameplay reveal stays engine-faithful; the gameplay inserts in marketing material match what’s on screen during play. The decision needs to be made early because it changes everything downstream: shader complexity, lighting setup, render times, compositing approach.
Where the handoff breaks
The most common failure mode in offline trailer pipelines is character or environment drift. The trailer ships and fans say it doesn’t look like the game. The cause is almost always upstream of the offline render. Look-development happened in isolation from the game team’s reference material. The grade was approved without comparison to in-engine reference frames. The character skin shader was authored without sign-off from the game’s art director.
Fixing this is a process problem, not a technical one:
- Lock visual targets before any shot starts. Engine-rendered reference frames at the trailer’s hero moments. Color charts. Character skin reads. Environmental moods.
- Bake the references into the lighting and look-dev approval cycle. Every shader, every light rig, every grade pass evaluated against the reference, not in isolation.
- Get the game’s art director (or a designated reviewer with that authority) into the approval loop. Their eye catches drift before the wider review does.
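Parts of that reference discipline can be automated. The sketch below assumes engine reference frames and offline look-dev renders are available as matching float RGB arrays; the metric and tolerance are illustrative, and a check like this supplements the art director's eye rather than replacing it.

```python
import numpy as np

def channel_drift(reference, render):
    """Mean per-channel difference between engine reference and offline render."""
    return np.abs(reference - render).mean(axis=(0, 1))

def flag_drift(reference, render, tolerance=0.05):
    """True if any channel has drifted past tolerance and needs review."""
    return bool((channel_drift(reference, render) > tolerance).any())

# Toy frames: the offline render has pushed the red channel warmer
# than the engine reference.
ref = np.full((8, 8, 3), 0.40)
rnd = ref.copy()
rnd[..., 0] += 0.08

print(flag_drift(ref, rnd))  # True
```

Run against the locked reference frames at every look-dev milestone, a check like this turns "does it still look like the game?" from a late surprise into a per-iteration signal.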
The other common failure is animation flatness. The trailer renders a character whose face reads less expressively than it does in-engine. This usually traces back to the rig: cinematic-grade facial animation needs more controls than the game ships with, and the trailer team has to extend the rig.
How FXiation Digitals approaches the handoff
When FXiation works on cinematic VFX for a game release, we treat the asset handoff as a project phase, not a delivery. We get the game team’s art director into the approval cycle from week one. We render look-development frames against engine reference and iterate until the lit look matches. We lock the shader library before shot lighting begins so character and environment reads stay consistent across every shot in the trailer.
The result is a trailer that lines up with the game. Players watch it and recognize the title. The game team and the trailer team feel like they made the same thing. That alignment is what cinematic VFX work is actually for.
For game studios planning a trailer pipeline, the highest-leverage decision is who reviews look-dev. If it’s an internal trailer team with no reach back to the game’s art direction, expect drift. If the game’s art director or a designated authority signs off on every look-dev milestone, drift gets caught early and the final trailer matches what fans expect to see.
Common Questions