PlanIndexedRegistrationMotionTransform2d = | intensityBasedTracker2d (inGreySeqImg2d,inCoords2dColl) |
PlanIndexedRegistrationMotionTransform2d = | intensityBasedTracker2d (inGreySeqImg2d,inCoords2dColl,inOptRegMotionModel2d) |
PlanIndexedRegistrationMotionTransform2d = | intensityBasedTracker2d (inGreySeqImg2d,inCoords2dColl,inOptRegMotionModel2d,inOptRegistrationTracking2dGrid) |
PlanIndexedRegistrationMotionTransform2d = | intensityBasedTracker2d (inGreySeqImg2d,inCoords2dColl,inOptRegistrationTraining2dConfig,inOptRegistrationTracking2dGrid,inOptRegMotionModel2d,inOptNbIterByStage,outRegistrationTraining2dData) |
intensity based registration tracker 2d algorithm
This algorithm tracks data through a 2d image sequence. It is a packaged version of the Training Step for intensity based registration 2d and Tracking Step for intensity based registration 2d algorithm sequence.
This algorithm is inspired by the work of F. Jurie and M. Dhome [1].
This algorithm tracks the motion transform of a cloud of points InCoords2dColl through a sequence of 2d images InGreySeqImg2d. The input points are generally chosen to be interest points, such as corner points (see Harris corner detection 2d).
An initial training step (Training Step for intensity based registration 2d) is performed on the first image plan of the sequence. A tracking step (Tracking Step for intensity based registration 2d) is then computed sequentially on the following plans.
During the training step, we compute an image grey-level collection I(x_i) at the input point coordinates, where I stands for the image value and x_i stands for the coordinates of the point with index i. We then apply a random perturbation to the input point coordinates (with respect to a geometric transformation) and resample the image grey levels at the new coordinates, giving I(δμ(x_i)), where δμ stands for the applied perturbation (i.e. the applied geometric transformation). This allows us to link the variation of grey levels δI = I(δμ(x_i)) − I(x_i) with the applied perturbation δμ. We then repeat this step to obtain a mapping between perturbations around the initial point coordinates and grey-level variations: a set of pairs (δμ_k, δI_k) for k = 1..N.
Once these perturbations are computed, we can solve a linear system that retrieves a motion transformation from the variations of grey level, i.e. a matrix A such that δμ ≈ A·δI. This training stage is repeated for several "scale factors" so that the algorithm can handle fairly "large" perturbations and still converge to a fine estimation of the motion transformation. Please refer to the publication cited above for more information on these computations.
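The training stage above can be sketched in Python: apply random perturbations, record the grey-level variations, and fit a linear predictor from δI back to δμ by least squares. This is a minimal illustration under assumptions (NumPy, a synthetic image, and pure translations as the perturbation model), not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reference image and sample points (stand-ins for the first
# plan of InGreySeqImg2d and for InCoords2dColl).
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
image = np.sin(xx / 5.0) + np.cos(yy / 7.0)
points = rng.uniform(16, 48, size=(30, 2))  # (x, y) coordinates


def sample(img, pts):
    """Bilinear resampling of img at floating-point (x, y) coordinates."""
    x, y = pts[:, 0], pts[:, 1]
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])


ref = sample(image, points)  # reference grey levels I(x_i)

# Training: draw random translations delta_mu, record the grey-level
# variations delta_I, then solve delta_mu ~= A . delta_I by least squares.
n_train, scale = 400, 2.0
d_mu = rng.uniform(-scale, scale, size=(n_train, 2))   # perturbations
d_I = np.stack([sample(image, points + m) - ref for m in d_mu])
A, *_ = np.linalg.lstsq(d_I, d_mu, rcond=None)         # (n_points, 2)

# The learned predictor recovers a small unseen translation.
true_mu = np.array([0.7, -0.4])
pred_mu = (sample(image, points + true_mu) - ref) @ A
```

Training once per "scale factor" (here `scale`) yields a cascade of predictors, coarse to fine, as described above.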
During the training stage, some border points may frequently be perturbed to locations outside of the image. To avoid this, we pre-process the input cloud of points to remove the points lying outside a given area called the "tracking grid", associated with the optional parameter InOptRegistrationTracking2dGrid. This "grid" is composed of four points (which must be ordered counterclockwise); it is perturbed during the training step and later tracked during the tracking step.
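This pre-filtering can be illustrated with a simple point-in-convex-quadrilateral test. The grid corners and point list below are made up for the sketch, and the actual parameter format of InOptRegistrationTracking2dGrid may differ:

```python
# Hypothetical counterclockwise-ordered tracking grid: four (x, y) corners.
grid = [(10.0, 10.0), (50.0, 12.0), (48.0, 52.0), (12.0, 50.0)]


def inside_grid(pt, quad):
    """True if pt lies inside the counterclockwise convex quadrilateral.

    For each edge (a -> b), the cross product of (b - a) and (pt - a)
    must be non-negative when the corners are ordered counterclockwise.
    """
    x, y = pt
    for i in range(4):
        ax, ay = quad[i]
        bx, by = quad[(i + 1) % 4]
        if (bx - ax) * (y - ay) - (by - ay) * (x - ax) < 0:
            return False
    return True


# Remove input points falling outside the grid before training.
cloud = [(20.0, 20.0), (5.0, 5.0), (45.0, 45.0), (60.0, 30.0)]
kept = [p for p in cloud if inside_grid(p, grid)]
```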
The training stage behavior is controlled by InOptRegistrationTraining2dConfig, which aggregates:
The results of this stage can be inspected via the output parameter OutRegistrationTraining2dData, which aggregates:
Once the training stage has been performed on the first image plan, the algorithm enters a tracking stage for the remaining plans. During this stage, motion transformations are computed plan by plan and stored in the "by plan" output parameter OutPIRegistrationMotionTransform2d. Each stored motion transformation is expressed between the initial image plan and the current image plan.
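Because each stored transform relates the initial plan to the current plan, the incremental plan-to-plan motions compose along the sequence. A small sketch of this bookkeeping, using 3x3 homogeneous matrices and illustrative step motions (both are assumptions of the sketch, not the library's storage format):

```python
# 3x3 homogeneous matrices for 2d motion; matmul composes transforms.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]


def translation(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]


# Hypothetical plan-to-plan motions estimated by the tracking stage.
steps = [translation(1.0, 0.5), translation(0.5, -0.25), translation(2.0, 1.0)]

# Accumulate them into "initial plan -> current plan" transforms, one
# entry per plan, as OutPIRegistrationMotionTransform2d stores them.
identity = translation(0.0, 0.0)
by_plan, current = [identity], identity
for step in steps:
    current = matmul(step, current)
    by_plan.append(current)
```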
The behavior of this stage can be customized using the following parameters:
Here is an example of how to use this algorithm to compute a rigid transform over the tracking sequence:
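As a stand-in for the library example, the sketch below shows what a rigid motion model over the tracked points amounts to: recovering a rotation and translation from point correspondences via a 2d Procrustes fit. All data here is synthetic, and the actual call would go through intensityBasedTracker2d with InOptRegMotionModel2d set to a rigid model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic tracked point cloud: initial plan vs current plan, related
# by a rigid motion (rotation theta plus translation t).
theta, t = 0.1, np.array([2.0, -1.0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
p0 = rng.uniform(0, 50, size=(25, 2))    # points in the initial plan
p1 = p0 @ R.T + t                        # tracked positions, current plan

# Rigid (Procrustes) fit: the kind of constraint a rigid motion model
# imposes on the estimated transform.
c0, c1 = p0.mean(axis=0), p1.mean(axis=0)
U, _, Vt = np.linalg.svd((p0 - c0).T @ (p1 - c1))
S = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
R_est = Vt.T @ S @ U.T
t_est = c1 - c0 @ R_est.T
```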
[1] F. Jurie and M. Dhome, "Real time template matching", in Proc. IEEE International Conference on Computer Vision, pages 544–549, Vancouver, Canada, July 2001.