“Camera Match Move” Explained
“Camera Match Move on Green Screen” is a visual effects technique that gives the artist or director a great deal of creative latitude.
Match Moving is a cinematic technique for inserting computer graphics into live-action footage. Particular attention is given to the scale and orientation of the inserted elements relative to the photographed objects in the shot, as well as to their position in the frame and their perspective and movement in relation to the background objects in the scene. The goal is to synchronize the foreground live-action photography with the background still image or video so that the two appear to be a seamless whole.
Film, as an art form, is a collaborative effort. The cameraman’s work and that of the editor in post-production are of equal importance.
Camera Match Move is a general term used by professional videographers, VFX editors, and digital compositors to describe several ways of using camera motion information recovered from tracking markers on the green screen background.
Match moving has become increasingly popular as Green Screen visual effects have advanced. It is now a necessary tool for the serious cinematographer and for the post-production digital compositor and editor.
Placing Tracking Markers on Green Screen
The first step is to place tracking markers on the Green Screen.
Using tape in a slightly different shade of green from the chroma green background is a good idea.
Placing Tracking Markers on Green Screen – Part II
A common method is to use a simple X with green tape.
However, any marker that will still key out along with the green background will work.
In this digital era, when CG elements are prevalent in television and movies, one cannot afford to remain unaware of this technology. One must have a basic knowledge of Camera Match Move. There are many Match Moving software packages available for purchase or download.
To recap: Camera Match Move tracks the camera's movement against markers on the Green Screen so that the exact same virtual camera movement can be reproduced in a 3D animation program. When the animated background is composited with the live-action video, the two appear in perfectly matched perspective and therefore seamless.
As Match Move is primarily software-based, it has become more affordable and is now an established digital compositing technique.
Green Screen Tracking – Step I
The software must recognize the markers, lock onto them, and follow them through multiple frames (the SynthEyes software calls them “blips”). The following is a bit technical, but bear with me.
When you choose not to use tracking markers, the various software packages select “artificial” points in the image itself. Depending on the specific tracking algorithm, these points may be exceptionally bright or dark spots, corners, or sharp edges in the image.
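To make this concrete, here is a toy sketch (my own illustration, not any particular package's actual algorithm) of automatically selecting trackable points: each interior pixel is scored by how strongly it contrasts with its neighbors, and the strongest candidates are kept.

```python
# Toy feature selection: score each interior pixel by local contrast
# and keep the strongest candidates. Real trackers use far more
# sophisticated detectors (corners, scale-space extrema, etc.).

def select_features(image, count=2):
    """Return the `count` pixel coordinates with the highest local contrast."""
    h, w = len(image), len(image[0])
    scores = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Contrast: how much this pixel differs from its 4 neighbors.
            c = image[y][x]
            contrast = (abs(c - image[y - 1][x]) + abs(c - image[y + 1][x])
                        + abs(c - image[y][x - 1]) + abs(c - image[y][x + 1]))
            scores.append((contrast, (x, y)))
    scores.sort(reverse=True)
    return [pt for _, pt in scores[:count]]

# A small grayscale frame with one bright spot at (1, 1):
frame = [
    [10, 10, 10, 10],
    [10, 90, 10, 10],
    [10, 10, 10, 10],
]
print(select_features(frame, count=1))  # → [(1, 1)]
```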
Other algorithms rely on template matching. This technique uses digital image processing to find small areas of the image that correspond to a template image.
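A minimal sketch of the template matching idea, using sum of squared differences (SSD) as the similarity score; production trackers typically add normalized cross-correlation and subpixel refinement, which this illustration omits.

```python
# Template matching by sum of squared differences (SSD).
# Images are lists of rows of grayscale values.

def ssd(patch, template):
    """Sum of squared differences between two equal-size patches."""
    return sum(
        (patch[y][x] - template[y][x]) ** 2
        for y in range(len(template))
        for x in range(len(template[0]))
    )

def match_template(image, template):
    """Slide the template over the image; return (x, y) of the best match."""
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best, best_score = None, float("inf")
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            score = ssd(patch, template)
            if score < best_score:
                best_score, best = score, (x, y)
    return best

# A 5x5 frame with a bright 2x2 marker at (2, 1):
frame = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
marker = [[9, 9], [9, 9]]
print(match_template(frame, marker))  # → (2, 1)
```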
The critical thing here is that every “feature” corresponds to a specific point on the surface of an actual real object. The feature is then followed through a series of frames, producing a series of 2D coordinates that record its position in each frame; this series of coordinates is called a “track”. Once a track has been created it can be used immediately for two-dimensional motion tracking and, if desired, as input for calculating three-dimensional tracking information.
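A track is a very simple data structure. The sketch below is illustrative (the names are mine, not from any specific match-moving package): one feature's 2D position recorded frame by frame, from which 2D motion can be read off directly.

```python
# A "track": the 2D position of one feature across consecutive frames.

class Track:
    def __init__(self, feature_id):
        self.feature_id = feature_id
        self.points = {}  # frame number -> (x, y)

    def add(self, frame, x, y):
        """Record the feature's 2D position in one frame."""
        self.points[frame] = (x, y)

    def motion(self, frame_a, frame_b):
        """2D displacement of the feature between two frames."""
        (xa, ya), (xb, yb) = self.points[frame_a], self.points[frame_b]
        return (xb - xa, yb - ya)

# A marker drifting right and slightly down over three frames:
t = Track("marker_X_upper_left")
t.add(0, 100.0, 50.0)
t.add(1, 103.0, 50.5)
t.add(2, 106.0, 51.0)
print(t.motion(0, 2))  # → (6.0, 1.0)
```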
Green Screen Tracking – Step II
This step involves calibration: the position of the camera is solved by inverse-projecting the two-dimensional tracks back into three-dimensional space.
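A heavily simplified illustration of that idea (my own toy, not how real solvers work): search for the camera position whose projections best match the observed 2D points. To keep it short, I assume a pinhole camera with known focal length and a fixed orientation looking down +Z, assumptions a real match-moving solver does not get to make.

```python
# Toy calibration: find the camera position that minimizes
# reprojection error over a small grid of candidate positions.

def project(pos, f, point):
    """Pinhole projection of a 3D point for a camera at `pos`, looking down +Z."""
    x, y, z = point[0] - pos[0], point[1] - pos[1], point[2] - pos[2]
    return (f * x / z, f * y / z)

def reprojection_error(pos, f, points_3d, observed_2d):
    """Total squared distance between projected and observed 2D points."""
    err = 0.0
    for p3, p2 in zip(points_3d, observed_2d):
        u, v = project(pos, f, p3)
        err += (u - p2[0]) ** 2 + (v - p2[1]) ** 2
    return err

def solve_camera(f, points_3d, observed_2d):
    """Brute-force grid search over candidate camera positions."""
    best, best_err = None, float("inf")
    for cx in range(-5, 6):
        for cz in range(-5, 6):
            pos = (float(cx), 0.0, float(cz))
            e = reprojection_error(pos, f, points_3d, observed_2d)
            if e < best_err:
                best_err, best = e, pos
    return best

# Simulate: a camera at (2, 0, -3) with f = 50 photographs three points,
# then we recover its position from the 2D observations alone.
truth = (2.0, 0.0, -3.0)
pts = [(0.0, 0.0, 10.0), (5.0, 2.0, 8.0), (-3.0, 1.0, 12.0)]
obs = [project(truth, 50.0, p) for p in pts]
print(solve_camera(50.0, pts, obs))  # → (2.0, 0.0, -3.0)
```

Real packages solve for orientation and focal length as well, over hundreds of tracks, using numerical optimization rather than a grid search.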
The nerd in me surfaces. When a point on the surface of a 3D object is photographed, its position in the two-dimensional frame can be calculated by a projection function that depends on the camera's parameters.
We can consider a camera to be an abstraction that holds all the parameters necessary to model a camera in a real or virtual world. A camera, then, is a vector that includes as its elements the position of the camera, its orientation, focal length, and other possible parameters that define how the camera focuses light onto the “film plane”. Exactly how this vector is constructed is not important as long as there is a compatible projection function.
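Such a projection function can be sketched very compactly. In this toy version (my own, for illustration) the “camera vector” holds only a position and a focal length, with the orientation fixed looking down +Z, an assumption made purely to keep the example short.

```python
# Minimal pinhole-camera projection: map a 3D point onto the 2D film plane.

def project(camera, point):
    """Project a 3D point (x, y, z) through a camera looking down +Z."""
    cx, cy, cz = camera["position"]
    f = camera["focal_length"]
    x, y, z = point[0] - cx, point[1] - cy, point[2] - cz
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Similar triangles: film-plane coordinates scale with f / depth.
    return (f * x / z, f * y / z)

cam = {"position": (0.0, 0.0, 0.0), "focal_length": 50.0}
# A point 100 units in front of the camera, 10 units to the right:
print(project(cam, (10.0, 0.0, 100.0)))  # → (5.0, 0.0)
```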
[Figure: an illustration of feature projection. Around the rendering of a 3D structure, red dots represent points chosen by the tracking process. Cameras at frames i and j project the view onto a plane according to the parameters of the camera; in this way, features tracked in 2D correspond to real points in 3D space. Although this particular illustration is computer-generated, Match Moving is normally done on real objects.]
I trust the above was informative.