IPSDK 0.2
IPSDK : Image Processing Software Development Kit
Intensity based tracker 2d
PlanIndexedRegistrationMotionTransform2d = intensityBasedTracker2d (inGreySeqImg2d,inCoords2dColl)
PlanIndexedRegistrationMotionTransform2d = intensityBasedTracker2d (inGreySeqImg2d,inCoords2dColl,inOptRegMotionModel2d)
PlanIndexedRegistrationMotionTransform2d = intensityBasedTracker2d (inGreySeqImg2d,inCoords2dColl,inOptRegMotionModel2d,inOptRegistrationTracking2dGrid)
PlanIndexedRegistrationMotionTransform2d = intensityBasedTracker2d (inGreySeqImg2d,inCoords2dColl,inOptRegistrationTraining2dConfig,inOptRegistrationTracking2dGrid,inOptRegMotionModel2d,inOptNbIterByStage,outRegistrationTraining2dData)

Detailed Description

intensity based registration tracker 2d algorithm

This algorithm tracks data through a sequence of 2d images. It is a packaged version of the Training Step for intensity based registration 2d and Tracking Step for intensity based registration 2d algorithm sequence.

This algorithm is inspired by the work of F. Jurie and M. Dhome [1].

This algorithm will track the motion transform of a cloud of points InCoords2dColl through a sequence of 2d images InGreySeqImg2d. Input points are generally selected to be interest points such as corner points (see Harris corner detection 2d).

An initial training step (Training Step for intensity based registration 2d) is performed on the first image plan of the sequence. A tracking step (Tracking Step for intensity based registration 2d) is then computed sequentially on the following plans.

During the training step, we compute a collection of image grey levels at the input point coordinates $\left\{I[P_i]\right\}$, where $I[]$ stands for the image value and $P_i$ stands for the coordinates of the point with index $i$. We then apply a random perturbation to the input point coordinates (with respect to a geometric transformation) and resample the image grey levels at the new coordinates $\left\{I[T(P_i)]\right\}$, where $T()$ stands for the applied perturbation (i.e. the applied geometric transformation). This allows us to link the grey-level variations $\left\{\delta_i=I[T(P_i)]-I[P_i]\right\},i=1..n$ to the applied perturbation $T()$. We then repeat the previous step to obtain a mapping between perturbations around the initial point coordinates and grey-level variations:

\[ \left\{T^k()\right\}, k=1..p \leftrightarrow \left\{\left\{\delta_i=I[T^k(P_i)]-I[P_i]\right\},i=1..n\right\}, k=1..p \]
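The sampling loop that builds this mapping can be sketched as follows (a minimal NumPy illustration with hypothetical helper names; IPSDK performs this internally, and a real implementation would use sub-pixel interpolation rather than nearest-neighbour sampling):

```python
import numpy as np

def sample_training_deltas(image, points, perturb, n_perturbations, rng):
    """Build the perturbation <-> grey-level-variation mapping.

    image           : 2d float array I
    points          : (n, 2) array of point coordinates P_i as (x, y)
    perturb         : function drawing a random geometric transform T()
    n_perturbations : number of random perturbations p
    """
    # reference grey levels I[P_i] (nearest-neighbour sampling for brevity)
    ref = image[points[:, 1].astype(int), points[:, 0].astype(int)]
    transforms, deltas = [], []
    for _ in range(n_perturbations):
        T = perturb(rng)                   # random transform T^k()
        moved = T(points)                  # perturbed coordinates T^k(P_i)
        vals = image[moved[:, 1].astype(int), moved[:, 0].astype(int)]
        transforms.append(T)
        deltas.append(vals - ref)          # delta_i = I[T^k(P_i)] - I[P_i]
    return transforms, np.asarray(deltas)  # deltas has shape (p, n)
```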

Once these perturbations are computed, we can solve a linear system that retrieves a motion transformation from given grey-level variations. This training stage is repeated for several "scale factors" in order to handle fairly "large" perturbations while still converging to a fine estimation of the motion transformation. Please see the publication [1] for more information on this computation.
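The resolution step can be sketched as a least-squares fit (an illustrative single-scale NumPy formulation with made-up names, not the library's internals):

```python
import numpy as np

def learn_interaction_matrix(transform_params, deltas):
    """Least-squares fit of a linear map from grey-level variations
    to motion parameters.

    transform_params : (p, m) array, parameters of each perturbation T^k()
    deltas           : (p, n) array, grey-level variations for each T^k()

    Returns A of shape (m, n) such that params ~= A @ delta.
    """
    # Solve deltas @ A.T ~= transform_params in the least-squares sense
    A_T, *_ = np.linalg.lstsq(deltas, transform_params, rcond=None)
    return A_T.T

def predict_motion(A, delta):
    """Retrieve a motion estimate from an observed grey-level variation."""
    return A @ delta
```

At tracking time, a single matrix-vector product per scale then turns an observed grey-level variation into a motion update, which is what makes the tracking step fast.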

During the training stage, some border points may frequently be perturbed to locations outside the image. To avoid this, we pre-process the input cloud of points to remove points outside a given area called the "tracking grid", associated with the optional parameter InOptRegistrationTracking2dGrid. This "grid" is composed of four points (which must be ordered counterclockwise); it is perturbed during the training step and later tracked during the tracking step.
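The pre-processing step that keeps only points inside the counterclockwise-ordered grid can be sketched with a standard half-plane test (an illustrative NumPy snippet, not the library's implementation):

```python
import numpy as np

def filter_points_in_grid(points, grid):
    """Keep only points inside a counterclockwise-ordered quadrilateral.

    points : (n, 2) array of (x, y) coordinates
    grid   : (4, 2) array, the four grid corners in counterclockwise order
    """
    keep = np.ones(len(points), dtype=bool)
    for i in range(4):
        a, b = grid[i], grid[(i + 1) % 4]
        edge = b - a
        to_pt = points - a
        # z-component of the cross product; >= 0 means the point is on
        # the left of (or on) the edge, hence inside a counterclockwise quad
        keep &= edge[0] * to_pt[:, 1] - edge[1] * to_pt[:, 0] >= 0
    return points[keep]
```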

(Figure: intensityBasedTracker2d_grid.png — illustration of the tracking grid)

Training stage behavior is controlled by InOptRegistrationTraining2dConfig, which aggregates:

Results of this stage can be inspected via the output parameter OutRegistrationTraining2dData, which aggregates:

Once the training stage has completed on the first image plan, the algorithm enters a tracking stage for the remaining plans. During this stage, motion transformations are computed plan by plan and stored into the "by plan" output parameter OutPIRegistrationMotionTransform2d. Each stored motion transformation is expressed between the initial image plan and the current image plan.
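This "initial plan to current plan" convention means that consecutive plan-to-plan motions compose into the stored transforms. For a rigid 2d model, the composition can be sketched as follows (an illustrative NumPy snippet; parameter names and helpers are made up for the example):

```python
import numpy as np

def rigid_matrix(theta, tx, ty):
    """3x3 homogeneous matrix of a 2d rigid transform (rotation + translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def accumulate_motions(incremental):
    """Compose plan-to-plan motions into first-plan-to-current-plan motions.

    incremental : list of (theta, tx, ty) between consecutive plans
    Returns the list of 3x3 matrices mapping plan 0 to plan t, for t >= 1.
    """
    total = np.eye(3)
    out = []
    for theta, tx, ty in incremental:
        # left-multiply: the newest motion is applied after the accumulated one
        total = rigid_matrix(theta, tx, ty) @ total
        out.append(total.copy())
    return out
```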

The behavior of this stage can be customized using the following parameters:

Here is an example of usage of this algorithm for rigid transform computation over the tracking sequence:

(Figure: intensityBasedTracker2d.png — rigid transform tracking over the sequence)

References

[1] F. Jurie and M. Dhome, "Real time template matching", in Proc. IEEE International Conference on Computer Vision, pages 544–549, Vancouver, Canada, July 2001.

Example of Python code :

Example imports

import PyIPSDK
import PyIPSDK.IPSDKIPLFeatureDetection as fd
import PyIPSDK.IPSDKIPLRegistration as registration

Code Example

# opening of input images
inSeqImg = PyIPSDK.loadTiffImageFile(inputImgPath, PyIPSDK.eTiffDirectoryMode.eTDM_Temporal)
sizeT = inSeqImg.getSizeT()
# retrieve first sequence plan
inFirstImgPlan = PyIPSDK.extractPlan(0, 0, 0, inSeqImg)
# detection of input features points using harris corner detector
nbSamples = 400
minDist = 5
pixels2d = fd.harrisCorner2d(inFirstImgPlan, nbSamples, minDist)
# retrieve associated 2d coordinates
coords2dColl = PyIPSDK.toCoords2dColl(pixels2d)
# computation of sequence motion tracking
planIndexedRegistrationMotionTransform2d = registration.intensityBasedTracker2d(inSeqImg, coords2dColl)
# retrieve computed motion from first image plan to last one
# (note that scale equals 1 since we do not ask for scale estimation)
registrationMotionTransform2d = planIndexedRegistrationMotionTransform2d.getValue(0, 0, sizeT-1)
theta = registrationMotionTransform2d.params[PyIPSDK.Rigid2d.eTP_Theta]
tx = registrationMotionTransform2d.params[PyIPSDK.Rigid2d.eTP_Tx]
ty = registrationMotionTransform2d.params[PyIPSDK.Rigid2d.eTP_Ty]

Example of C++ code :

Example information

Header file

#include <IPSDKIPL/IPSDKIPLFeatureDetection/Processor/HarrisCorner2d/HarrisCorner2d.h>
#include <IPSDKIPL/IPSDKIPLRegistration/Processor/IntensityBasedTracker2d/IntensityBasedTracker2d.h>

Code Example

// Load the input sequence image
ImagePtr pInSeqImg = loadTiffImageFile(inImgFilePath, eTiffDirectoryMode::eTDM_Temporal);
const ipUInt64 sizeT = pInSeqImg->getSizeT();
// extraction of first image plan of sequence
ImageConstPtr pFirstImgPlan;
SubImageExtractor::extractPlan<const BaseImage>(0, 0, 0, *pInSeqImg, pFirstImgPlan);
// detection of input features points using harris corner detector
const ipUInt32 nbSamples = 400;
const ipUInt32 minDist = 5;
Pixels2dConstPtr pPixels2d = harrisCorner2d(pFirstImgPlan, nbSamples, minDist);
// retrieve associated 2d coordinates
Coords2dCollConstPtr pCoords2dColl = toCoords2dColl(*pPixels2d);
// computation of sequence motion tracking
PlanIndexedRegistrationMotionTransform2dPtr pPlanIndexedRegistrationMotionTransform2d = intensityBasedTracker2d(pInSeqImg, pCoords2dColl);
// retrieve computed motion from first image plan to last one
// (note that scale equals 1 since we do not ask for scale estimation)
const RegistrationMotionTransform2d& registrationMotionTransform2d = pPlanIndexedRegistrationMotionTransform2d->getValue(0, 0, sizeT-1);
const Real64Vector& estimParams = registrationMotionTransform2d.getLeafColl<RegistrationMotionTransform2d::Params>();
const ipReal64 theta = estimParams[Rigid2d::eTransformParams::eTP_Theta];
const ipReal64 tx = estimParams[Rigid2d::eTransformParams::eTP_Tx];
const ipReal64 ty = estimParams[Rigid2d::eTransformParams::eTP_Ty];