Video Object Tracking Using Motion Estimation

Maharana, Himanshu and Dubasi, Monika and Sahoo, Soumya Ranjan (2012) Video Object Tracking Using Motion Estimation. BTech thesis.

PDF (1808 kB)

Abstract

Real-time object tracking is a critical application. Object tracking is one of the most necessary steps in surveillance, augmented reality, smart rooms, perceptual user interfaces, object-based video compression and driver assistance. Traditional segmentation methods such as thresholding, background subtraction and background estimation give satisfactory results for detecting a single object, but they produce noise when multiple objects are present or the lighting is poor.
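As a rough illustration of the background-subtraction step, the sketch below marks a pixel as foreground when its intensity I_t differs from the background level B_t by more than the predefined threshold τ. The function name and the default threshold value are illustrative assumptions, not taken from the thesis.

    import numpy as np

    def background_subtraction(frame, background, tau=30):
        """Label pixels as moving where |I_t - B_t| exceeds the threshold tau.

        frame, background: 2-D grayscale arrays of the same shape (values 0-255).
        tau: predefined threshold (illustrative value).
        """
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        mask = (diff > tau).astype(np.uint8)  # 1 = potential moving object, 0 = background
        return mask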

Using the segmentation technique we can locate a target in the current frame: its exact location is found by minimizing a distance measure or, equivalently, maximizing a similarity coefficient. In conventional algorithms this target localization is computationally expensive. The search starts from the target's location in the previous frame, within a basin of attraction roughly the size of the target area, computing a weighted average at every iteration and comparing the similarity coefficient at each new candidate location.
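A minimal sketch of this kind of iterative, similarity-driven search is given below, assuming a grayscale frame and an intensity-histogram target model. The weighting scheme, window size and function names are illustrative choices, not the thesis's exact formulation.

    import numpy as np

    def localize_target(frame, target_hist, center, win=16, n_bins=16, max_iter=20):
        """Shift a search window toward the region most similar to the target model.

        frame: 2-D grayscale array; target_hist: normalised intensity histogram (length n_bins);
        center: (row, col) of the target in the previous frame; win: half-size of the window.
        """
        h, w = frame.shape
        for _ in range(max_iter):
            r, c = center
            r0, r1 = max(r - win, 0), min(r + win, h)
            c0, c1 = max(c - win, 0), min(c + win, w)
            patch = frame[r0:r1, c0:c1]
            bins = (patch.astype(int) * n_bins // 256).clip(0, n_bins - 1)
            weights = target_hist[bins]          # pixels that resemble the target weigh more
            if weights.sum() == 0:
                break
            rows, cols = np.mgrid[r0:r1, c0:c1]
            new_center = (int((weights * rows).sum() / weights.sum()),
                          int((weights * cols).sum() / weights.sum()))
            if new_center == center:             # weighted mean stopped moving: converged
                break
            center = new_center
        return center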

To overcome these difficulties, a new method is proposed for detecting and tracking multiple moving objects under night-time lighting conditions. The method integrates a wavelet-based contrast change detector with a locally adaptive thresholding scheme. In the initial stage, local contrast change over time is used to detect potential moving objects; motion prediction and spatial nearest-neighbour data association are then used to suppress false alarms. A change detector (CD) mechanism is implemented to detect changes in a video sequence and divide it into scenes that can be encoded independently; the CD algorithm is efficient at detecting abrupt cuts and helps split the video file into sequences. This already gives a reasonably good output with little noise, but in some cases the noise remains prominent. A correlation measure is therefore used, which quantifies the relation between two consecutive frames and selects a pair that differ enough to serve as the previous and current frames. This gives a considerably better result under poor lighting and with multiple moving objects.
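The sketch below shows one plausible reading of the correlation step: a Pearson correlation coefficient between consecutive grayscale frames, with a pair accepted as previous/current once their correlation drops below a chosen cutoff (i.e. the frames differ enough for motion detection). The cutoff value and function names are assumptions for illustration only.

    import numpy as np

    def frame_correlation(prev, curr):
        """Pearson correlation coefficient between two grayscale frames (1.0 = identical)."""
        a = prev.astype(np.float64).ravel()
        b = curr.astype(np.float64).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 1.0

    def select_frame_pair(frames, corr_thresh=0.95):
        """Return the first (previous, current) pair whose correlation falls below
        corr_thresh, i.e. frames that differ enough to be used for tracking.
        corr_thresh is an illustrative value; the thesis does not specify one.
        """
        for i in range(1, len(frames)):
            if frame_correlation(frames[i - 1], frames[i]) < corr_thresh:
                return frames[i - 1], frames[i]
        return None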

Item Type:Thesis (BTech)
Uncontrolled Keywords:List of symbols:
1. σ_w^2(t) - intra-class variance
2. σ_i^2(t) - variance of a particular class
3. w_i(t) - probability of two classes separated by a threshold
4. σ_b^2(t) - inter-class variance
5. µ_i(t) - mean of a class
6. DCT - discrete cosine transform
7. I_t - intensity of a particular pixel
8. B_t - intensity level of the background
9. τ - predefined threshold
10. I_x - derivative of the image intensity along the x axis
11. I_y - derivative of the image intensity along the y axis
12. I_z - derivative of the image intensity along the z axis
13. α - regularization constant
14. µ_T - mean intensity of the whole image
15. SAD - sum of absolute differences
16. IC_ji - contrast change image
17. L_o - object luminance
18. L_B - surrounding background luminance
19. σ_L - local standard deviation
20. µ_L - local mean intensity
21. M_CM - local contrast salience map
22. MD^([p,q]_(c_ji)) - mask for the contrast difference image between two contrast salience images
23. I^([p,q]_(c_mi)) - contrast image of the i-th image
Subjects:Engineering and Technology > Electrical Engineering > Image Processing
Engineering and Technology > Electrical Engineering > Image Segmentation
Divisions: Engineering and Technology > Department of Electrical Engineering
ID Code:3530
Deposited By:Himanshu Maharana
Deposited On:25 May 2012 15:33
Last Modified:25 May 2012 15:33
Supervisor(s):Patra, D
