MM Mocha Connect tool

Preview

If you’ve ever wanted to transfer planar tracking or PowerMesh data from Mocha, now you can 👌
The video shows how to install and use the tool.

Lens Distortion in Identical Lenses

In this series of tests, we will answer the most popular questions about camera tracking that arise among CG artists.

The goal of experiment #3 is to determine whether we can use the same lens model with a different serial number to get a reliable lens distortion preset.
For testing, we selected nine pairs of lenses with different serial numbers:

Atlas Orion 50mm

Atlas Orion 65mm

Sigma 50mm

Ultra Prime 20mm – 3x

Ultra Prime 32mm – 3x

Arri SP 125mm

Cooke S4 18mm

Cooke S4 25mm

Cooke S7i 100mm

First, we captured distortion grids for each lens and created their profiles using 3DEqualizer.
Then we shot three additional test shots for every lens pair.
After that, we solved the shots taken with Lens A using the distortion profile from Lens B and compared the resulting solve to the one using the original distortion profile:
Arri SP 125mm
Atlas Orion 50mm
Cooke S4 18mm

As you can see, the lenses turned out to be largely interchangeable. While the distortion curves showed some minor differences, their overall behavior was remarkably consistent, and the solve errors were practically identical.

That means we can reliably use a distortion profile from any lens within the same model range and focal length.

Curious to dive deeper or run your own analysis?

All detailed data and test materials are available for download

TCourier tool

Preview

Easily transfer Camera, Points and Geo between 3DEqualizer, Nuke and Blender.

CamDB by Matchmove machine

CamDB Cover
Go to CamDB

The most accurate and detailed camera sensor database available today.

Accuracy down to the hundred-thousandth
Instant plugins for Nuke, 3DEqualizer and Blender
Powerful API for custom integration.

Flange Distance test

In this series of tests, we will answer the most popular questions about camera tracking that arise among CG artists.

The goal of experiment #2 is to determine whether we can use a different camera with the same lens to achieve an accurate distortion preset.
The key idea is that the flange distance for a given mount is the same across all cameras, so we can shoot distortion grids for a lens preset on any camera with a large enough sensor.
For testing we selected three popular cameras featuring larger sensor sizes, paired with the Supreme Prime 25mm lens using an LPL mount.
First, we positioned the RED camera precisely so the distortion grid covered the entire frame in Open Gate mode and captured the footage.
Next, we carefully measured the RED camera’s sensor position along all three axes and then placed the next camera’s sensor precisely in the same location before capturing footage in Open Gate mode.
We repeated the exact same procedure with our third camera.
Then, using our CropDistortionGrid script, we cropped the grid footage from the RED and ARRI to match the Sony’s full sensor size, allowing us to compare results directly.
RED to Sony
ARRI to Sony

As you can see, differences in Angle of View between these cameras, when sensor areas are matched, are minimal. The slight discrepancies observed are mostly due to our practical limitations in positioning sensors precisely rather than actual flange distance variations.

This shows that each mount standard maintains a consistent flange distance across all compatible cameras, even when adapters are used.

This means, to shoot a distortion grid, we can safely choose any camera with a larger sensor size than the one used for our tracked footage. After capturing, we simply crop the grid footage to match the sensor dimensions of our main camera.

You can also download the distortion grids below to check for yourself.

Shooting Modes & Lens Distortion: Does Cropping Open Gate Work?

In this series of tests, we will answer the most popular questions about camera tracking that arise among CG artists.

The goal of experiment #1 is to determine whether switching formats results in a proportional crop or whether other manipulations are applied to the captured image.
The key idea is that distortion grids should always be captured using the full available sensor (Open Gate mode). Afterward, cropping can be performed during post-processing if necessary. This eliminates the need to shoot grids separately for each mode.
For testing, we selected the most popular cameras with the largest sensor sizes and captured four sets of distortion grids in all formats. You can download the samples by clicking the button below:
Next, we took the grids captured in Open Gate mode and cropped them in Nuke using our script CropDistortionGrid.

Reminder: It is crucial to crop based on the sensor size ratio, not the pixel aspect ratio.
Most cameras have varying pixel densities per millimeter of the sensor.

Some cameras also perform downscaling – reducing the resolution of the digital image while maintaining proportions. This is done to ensure the required recording speed, for example when shooting in slow motion (high frame rate). Downscaling ≠ crop!

You need to take the original sensor size (original_sensor) and the size of the sensor you want to crop the image to (new_sensor).

The formula for calculating the crop factor based on width is:

crop_factor_width = original_sensor_width / new_sensor_width

The formula for calculating the crop factor based on height is:

crop_factor_height = original_sensor_height / new_sensor_height

Then, the new image dimensions (in pixels) after cropping are calculated as follows:

For width: new_width = original_width / crop_factor_width

For height: new_height = original_height / crop_factor_height

As a result, we obtain a cropped grid based on its original Open Gate (OG) image and the sensor size to which we are adapting the original OG grid.
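The crop calculation above can be sketched in Python. The Alexa Mini LF figures below (36.70 × 25.54 mm at 4448 × 3096 px in Open Gate, cropped to a 31.68 × 17.82 mm sensor area) are illustrative examples, not measurements from our tests:

```python
def crop_factor(original_sensor: float, new_sensor: float) -> float:
    """Crop factor along one axis: how much larger the original sensor is."""
    return original_sensor / new_sensor

def cropped_resolution(original_px: int, factor: float) -> int:
    """New pixel dimension after cropping by the given crop factor."""
    return round(original_px / factor)

# Alexa Mini LF Open Gate: 36.70 x 25.54 mm, 4448 x 3096 px,
# cropped to a 31.68 x 17.82 mm (UHD) sensor area.
fw = crop_factor(36.70, 31.68)            # width crop factor
fh = crop_factor(25.54, 17.82)            # height crop factor
new_w = cropped_resolution(4448, fw)      # cropped width in pixels
new_h = cropped_resolution(3096, fh)      # cropped height in pixels
print(new_w, new_h)
```

Note that the ratio is taken between sensor sizes in millimetres, not between pixel counts, which is exactly the point of the reminder above.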
Examples of grids captured on the Alexa Mini LF in ProRes:
Examples of grids captured on the Alexa Mini LF in RAW:
Examples of grids captured on the RED Monstro 8k:
Examples of grids captured on the Sony Burano:
Alexa Mini LF
RED Monstro 8k
Sony Burano
There was no visible difference between the cropped OG grids and those captured in the target format, which means that the cameras apply the crop without additional transformations. Therefore, we can shoot grids in Open Gate mode, then crop to the desired format and calculate the distortion afterward.

Overscan in different 3D software

Real camera lenses have distortion, but virtual cameras in 3D software don’t, which leads to renders with straight lines.
To blend CG with real footage, you need to apply lens distortion. This can create empty edges on the render.


Overscan helps solve this by rendering with extra margin, ensuring that after distortion, the frame remains complete.
In this guide, we’ll show how to apply overscan in various 3D software tools.

Overscan through Nuke camera

Download the Nuke Overscan Script

Why do you need overscan

Screenshot 1 shows a grid with lens distortion. You can see how straight lines are distorted at the edges.

Let’s assume that the CG object is at the edge of the frame (screenshot 2)

For a CG object to render correctly into the footage, you must apply the distortion that corresponds to the lens.
In this case, after distortion is applied, the CG part will be stretched and “tiling” will occur at the frame edges.

To find out which overscan is needed, you can look at the bounding box settings in Nuke.
The dotted box shows how much the image extends beyond the edges.

You can also create a reformat node with these parameters:

It is important to note that overscan is not just an increase in render resolution.
It is an increase in the field of view of the camera + an increase in the resolution of the CG renderer.
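This relationship can be sketched as a small helper (the function is our own illustration, not part of any package):

```python
def overscan(width: int, height: int, percent: float):
    """Return the new render resolution and the FoV scale factor
    for a given overscan percentage applied on each side equally."""
    scale = 1.0 + percent / 100.0
    return round(width * scale), round(height * scale), scale

# 10% overscan on a 1920x1080 render:
w, h, s = overscan(1920, 1080, 10)
print(w, h, s)
```

The same factor drives both values: the render resolution grows by it, and the camera’s field of view must grow by it as well.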

Cinema 4D

To create overscan in Cinema 4D, there is a handy XPresso script that creates a new camera with overscan values.
To do this, add Canvassizecamera to your project and select the XPresso tag.

Next, in the camera parameter, create a link to your 3D camera.
After that, specify the old resolution and the new resolution with overscan, then make the Canvassizecamera camera active.
Don’t forget to specify the new resolution in the project render settings.

As a result, the renderer will have an increased FoV.

Houdini

Mantra

To get overscan, go to the camera settings, open the View tab, and in the Resolution field specify the overscan percentage.

overscan 10%

Then specify a new value in the Screen Window Size field. In the case of the example it is 1.1

overscan 10%

When rendering, select the desired camera with the applied modifications and do not change the resolution in the render settings.

Karma

To specify overscan parameters, go to the render settings, Image Output → Aspect Ratio tab. In the Data Window NDC field specify the required parameters.
For example, an overscan of 10% will have the following parameters: -0.1, -0.1, 1.1, 1.1

Note that the first two fields must have negative values.

Houdini Docs:

The default is 0, 0, 1, 1 (no cropping). Note that you can use negative values.
For example, -0.1, -0.1, 1.1, 1.1 will give you 10% overscan on each side.
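A minimal helper that produces these four values for an arbitrary overscan percentage might look like this (the function name is our own, not part of Houdini):

```python
def data_window_ndc(percent: float):
    """Data Window NDC values (left, bottom, right, top) for a given
    overscan percentage, following the Houdini convention above:
    0, 0, 1, 1 means no cropping; negative values extend the window."""
    o = percent / 100.0
    return (-o, -o, 1.0 + o, 1.0 + o)

# 10% overscan on each side:
print(data_window_ndc(10))
```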

Blender

To apply overscan in Blender, you need to install an add-on.
The add-on has been tested in Blender 3.5.1.
Download Camera Overscan Script (Python)
Download Overscan Background Popover Script (Python)

Installation Instructions

Place both files in the following path:
Windows: C:\Users\{USER}\AppData\Roaming\Blender Foundation\Blender\3.5\scripts\addons
Linux: $HOME/.config/blender/3.5/scripts/addons
macOS: /Users/$USER/Library/Application Support/Blender/3.5/scripts/addons

Then launch Blender, open the preferences, and enable the add-ons:

After that, an additional button that opens the menu will appear in the viewport.

Overscan Percentage – specify the percentage of overscan;
Toggle Overscan – increases overscan by the specified percentage;
Restore Original Settings – returns the original parameters.

Maya

To set an overscan in Maya, regardless of the renderer, you need to go into the render settings:

Specify a new resolution with overscan. For example, if you need 10% overscan, then the new resolution for 1920×1080 will be 2112×1188.
Then, in Outliner, select the required camera and in the camera attributes, change the Camera Scale parameter by specifying the required FoV increase of the camera. For example, at 10% you should specify 1.1.

Conclusion

Now you understand the purpose of overscan and how to work with it in various 3D packages.
For convenience, you can use a ⚙️🔧 script for Nuke that automates the process of creating overscan and preparing nodes for its further use.

Guide to file naming

File naming in a project is a crucial aspect that helps maintain order and facilitates working with files at different stages of the project. File names should be standardized so that any artist in the team understands exactly what each file contains.

In this guide, we will explore the fundamental rules and recommendations for creating consistent file names, as well as discuss methods for organizing projects based on the practices of our studio.

Project organization

  1. First and foremost, create a folder named “Matchmove_machine” at the root of your drive. This will be our main working folder. It is important to place it directly in the root, not in a path like D:\Vasya\Work\matchmove or any other variations. For ease of interaction within the team and for transferring to the client, we use a standardized path for everyone.
  2. Next, when we start working on a new project, we create a folder with the project’s name. The name is taken from the work table where all the shots are listed, or from the Telegram group for that project; we use the full name as indicated.
  3. Next, within the project folder, create a folder with the name/number of the shot as listed in the table, FTP, or the topic in the Telegram group. After this, you can download the source material into this folder.
  4. While the download is in progress, create a folder named “_v001” within the project folder. This folder will be used to store all our exports from the first version of the project. The underscore is not mandatory, but it ensures that the export folder will always appear at the top of each shot.

Thus, the general path of a shot will be: Drive/Matchmove_machine/Project_Name/Shot_Number/ShotNumber_v001
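The path above can be assembled programmatically, for example (the project and shot names here are placeholders):

```python
from pathlib import Path

def shot_path(drive: str, project: str, shot: str, version: int = 1) -> Path:
    """Build the standardized shot path:
    Drive/Matchmove_machine/Project_Name/Shot_Number/ShotNumber_vNNN"""
    version_folder = f"{shot}_v{version:03d}"
    return Path(drive) / "Matchmove_machine" / project / shot / version_folder

# "Cool_Movie" and "SH010" are hypothetical names:
print(shot_path("D:/", "Cool_Movie", "SH010"))
```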

File naming

After setting up the project structure, we begin our work.

During the work process, all files are saved in the “_v001” folder. Here are the main types of files you will encounter:

Undistort files

  1. shotnumber_undistort_v001/shotnumber_undistort_v001_####.jpg — For exporting the undistort sequence, create a folder where the sequence itself is saved. Do not forget to add the frame variable after the version (in most cases, this is _####, as sequences usually start from frame 1001).
  2. shotnumber_STMap_undistort_v001.exr — Export of the undistort as an STMap.
  3. shotnumber_STMap_distort_v001.exr — Export of the distort as an STMap.
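The three naming rules above can be sketched as a small generator (the helper and the shot name "sh010" are hypothetical):

```python
def undistort_names(shot: str, version: int = 1, frame: int = 1001):
    """Generate the undistort export names per the rules above:
    a sequence frame inside its own folder, plus the two STMaps."""
    v = f"v{version:03d}"
    seq = f"{shot}_undistort_{v}"
    return (
        f"{seq}/{seq}_{frame:04d}.jpg",     # sequence frame inside its folder
        f"{shot}_STMap_undistort_{v}.exr",  # undistort STMap
        f"{shot}_STMap_distort_{v}.exr",    # distort STMap
    )

print(undistort_names("sh010"))
```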

Geometry naming

Geometry naming nuances:

  1. If, during the work process, any geometry is created and used that was not created within Nuke but loaded separately (obj or fbx) and exists in a single instance, then we use the naming: shotnumber_geo_v001.obj.
  2. If there are multiple files of such geometry, we use simple names, for example: shotnumber_floor_v001.obj, shotnumber_locators_v001.obj, shotnumber_door_v001.obj.
  3. If we have geometry provided by the client and we use it in our work, then the name remains unchanged (unless specified otherwise in the technical task).

Version numbering

Special attention should be paid to version numbering.

We never increment the version number on our own. Even if we create internal versions during our work or receive comments from the lead, the project version only changes when we receive comments from the client. Only then do we create a new version folder “_v002” within the project folder and save all the files under the second version.

There should not be a situation where the export folder with version 001 contains files with higher version numbers (shotnumber_track_v004_3_(2).3de).

To save intermediate versions, when you want to fix the result and try a different approach, you can create a “backup” folder within the version folder. This folder should be used to store various working versions. However, in such cases, it is important to be careful not to get confused, not to send this folder during the export to FTP, and not to send an incorrect version.

All of this is necessary to avoid confusion between the client’s and our versions during the work process. When we start working on a shot, the client expects the first version (v001) from us. If a situation arises where corrections or additions are needed, then the second version (v002) is expected from us, and so on. This ensures clear naming and understanding from all sides.

Uploading the result

Before uploading the result, we check how our folder with files looks. We ensure that if the working version is 001, then all files have this version. We look for typos in the names.

The export requirements are usually specified and pinned in the project’s work chat and may vary. We carefully read the chat and ensure that we have all the files and there are no extras. The export location is also specified and pinned in the project’s work chat.

Note: If a situation arises where corrections are needed and a new version is created, then we upload all files with the v002 suffix. Even if these files have not been changed. It should not be the case that, for example, the scene is the second version, but the geometry and undistort are the first. This can lead to questions about whether the correct files have been uploaded and whether the required changes have been made. Explanations like “take the scene from the third version, the geometry of the second, and the undistort from the first” look confusing to everyone.

Therefore, we eliminate these questions by uploading everything in the new version.

It is important to check yourself every time before and after uploading the result, regardless of how confident you are and how automatic the process feels. Mistakes, typos, and oversights can happen to anyone, so it is not difficult to double-check the number of uploaded and required files, the names, and the version.

The standard set of files for upload:

  • Project file 3DE4 (.3de)
  • Lens file from Nuke (.nk)
  • Scene script file in Nuke (.nk)
  • Scene export file in Alembic (.abc)
  • Scene export file in FBX (.fbx)
  • Geometry file from 3DE4 (.obj)
  • Result preview file (.mov)
  • Folder with undistort sequence in (.jpg)
  • STMap undistort file in (.exr)
  • STMap distort file in (.exr)

How to Shoot Distortion Grids

Welcome to our comprehensive step-by-step guide on how to shoot distortion grids! In this video, we will walk you through the entire process, ensuring you capture precise and accurate distortion grids for your projects.

Discover the essential techniques and tips for setting up your shots, avoiding common pitfalls, and achieving optimal results. We’ll highlight the most frequent mistakes made during the shooting process and provide valuable insights on how to correct them.

Additionally, you can find links to useful materials and resources in the description below, which will further aid you in mastering the art of shooting distortion grids.

Watch the video to enhance your skills and ensure your distortion grid captures are flawless every time!

Guide

Explore our complete Distortion Grid shooting guide with detailed instructions for different cameras and lenses, including proper file naming schemes according to industry standards. Based on experience from major VFX studios.