Object Tracking Based On Exponential Function
Abdulla Mohamed1*, Phil F Culverhouse1, Angelo Cangelosi1, Chenguang Yang2
1Centre for Robotics & Neural Systems, Plymouth University, UK
2Zienkiewicz Centre for Computational Engineering, Swansea University, UK
*Corresponding author: Abdulla Mohamed, Centre for Robotics & Neural Systems, Plymouth University, UK. Tel: +447761624073; Email: abdulla.mohammad@plymouth.ac.uk
Received Date: 27 October, 2018; Accepted Date: 20 November, 2018; Published Date: 29 November, 2018
Citation: Mohamed A, Culverhouse PF, Cangelosi A, Yang C (2018) Object Tracking Based On Exponential Function. Int J Chromatogr Sep Tech: IJCST-119. DOI: 10.29011/2577-218X.000019
Abstract
In football broadcasting and similar applications, cameras are required to track the ball or the players at all times. This paper presents work on object tracking with a movable camera. The work is intended to be integrated into an active stereo vision platform that uses vergence vision to retrieve the 3D position of a target. This requires a smooth tracking controller that keeps the object within the field of view and avoids generating large motion blur, which degrades image quality or causes features in the image to be lost. The controller was designed around an exponential function that generates a smooth velocity trajectory, decreasing as the camera approaches the centroid of the object. The exponential function helps keep the target at the centre of the image with an accuracy of ±5 pixels.
Keywords: Control System; Exponential Function; Object Tracking; Vision
1. Introduction
Visual tracking is a classical problem in computer vision with many applications. Classical visual tracking uses a static camera, with the object tracked within the camera's field of view; this setup is common in industry, especially on conveyor belts. Many algorithms have been developed for static cameras, such as background subtraction, which assumes a static background against a changing foreground [1,2]. Background subtraction has several disadvantages, such as sensitivity to illumination and lighting changes [3] and to non-static backgrounds, for example moving trees or a slowly moving foreground; both of these issues are studied in [4]. Another approach to tracking an object is optical flow. Optical flow algorithms depend on extracting features of the target, for example with corner detection or the Scale-Invariant Feature Transform (SIFT) [5], and then tracking these features in subsequent frames [6]. Much work has been done on this approach to improve the quality and speed of tracking [7-9]. In sport, the camera tracks the players or the ball, and in this case the camera itself moves; similarly, a humanoid's head moves while tracking an object. The object tracking problem becomes more complicated when a moving camera is introduced, because it becomes a control problem: a controller must be designed to suit the camera's specification. Much work has addressed this problem with different techniques, depending on the required task. Kim and Kweon (2011) [10] implemented object tracking for multiple targets, using homography-based motion detection [11] to detect each individual target and then integrating an online boosting tracker to combine the separate targets.
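As a minimal illustration of the background-subtraction idea mentioned above, the sketch below flags pixels whose grey level differs from a static background model by more than a threshold. Frames are modelled as plain 2-D lists, and the threshold value is illustrative; a real system would operate on camera images (e.g. via OpenCV) and maintain an adaptive background model.

```python
# Minimal sketch of background subtraction by frame differencing.
# Each frame is a 2-D list of grey levels; the threshold is illustrative.

def subtract_background(background, frame, threshold=25):
    """Return a binary mask: 1 where the frame differs from the background."""
    mask = []
    for bg_row, fr_row in zip(background, frame):
        mask.append([1 if abs(b - f) > threshold else 0
                     for b, f in zip(bg_row, fr_row)])
    return mask

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 90, 10],    # a bright object appears in the middle column
              [10, 95, 10]]

print(subtract_background(background, frame))
```

This also exposes the weakness noted in the text: any illumination change shifts many pixels past the threshold and is wrongly flagged as foreground.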
In [12], a detection algorithm for tracking an object with a moving camera was studied. The algorithm is based on feature correspondences between frames; from the matched features, the motion properties are computed. Hu et al. (2015) [13] studied multiple object detection with a moving camera; the algorithm presented in their work also uses feature detection in the frames, classifying features into background and foreground, where the foreground represents the target. In [14], an algorithm for object tracking in 3D coordinates was studied. The controller was based on fuzzy logic, tracking the object in a 3D coordinate frame relative to the robot, with the focus on controlling the motors attached to the camera so as to keep the target within the field of view and at the centre of the camera. The present paper focuses on the controller that drives the camera during tracking, with the primary aim of keeping the target within the field of view; in particular, we are interested in keeping the centroid of the target aligned with the centre of the image throughout the tracking process. We address the control side of the system by integrating an exponential function with the motor controller, providing smooth object tracking and limiting the blur generated by camera motion. The paper is organized as follows: the next section introduces the methodology of the control system; section three describes the experimental setup; section four presents the results and discussion; and the conclusions and future work close the paper.
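The exponential velocity profile described above can be sketched as follows. The exact expression is not reproduced in this excerpt, so the form below is an assumption with the stated behaviour (commanded speed shrinking smoothly as the pixel error shrinks): omega(e) = omega_max * (1 - exp(-lambda * |e|)). The lambda values correspond to the range swept in Figures 7-10, and omega_max to the 60 rpm motor limit given later in the text.

```python
import math

# Hedged sketch of an exponential tracking-velocity profile:
#   omega(e) = omega_max * (1 - exp(-lam * |e|)), signed by the error.
# The functional form is an assumption; lam and omega_max follow the
# lambda sweep (0.0010-0.0020) and motor limit (60 rpm) in the text.

def tracking_velocity(error_px, lam=0.0015, omega_max=60.0):
    """Angular speed (rpm) commanded for a given pixel error."""
    speed = omega_max * (1.0 - math.exp(-lam * abs(error_px)))
    return math.copysign(speed, error_px)

for e in (2000, 500, 50, 5):
    print(e, round(tracking_velocity(e), 2))
```

The profile saturates near omega_max for large errors and tapers towards zero near the image centre, which is what allows the camera to decelerate smoothly instead of overshooting the target centroid.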
1.1 Background and Preliminaries
In this paper, object tracking was implemented on an active stereo vision platform. The platform is used to study dynamic vergence vision, which depends on tracking the object. The platform has 5 DOF: each camera pans and tilts independently, and both share the same active baseline. The platform is shown in (Figure 1).
The cameras used in the rig are colour Point Grey Flea 3 (FL3-U3-88S2C-C) cameras with 8.8 MP resolution and a frame rate of 21 FPS. The sensor is a Sony IMX121 with a resolution of 4096 x 2160, a 12-bit ADC, and a pixel size of 1.55 µm. The camera has a global reset shutter with exposure times from 0.021 ms to 1 s.
Dynamixel XL430-W250-T motors were used. These motors have a 12-bit absolute magnetic encoder (4096 counts per revolution), which gives a resolution of 0.088°. The maximum speed of the motor is 60 rpm. The motors communicate with the PC through a USB2Dynamixel dongle and are powered from a 12 V supply. The control system was designed using the Robot Operating System (ROS) [15] on a desktop PC running Ubuntu 16.04. The PC has an Intel Core i7-7700K 4.2 GHz quad-core processor with DDR4 3200 MHz RAM.
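A small sketch of the unit conversion implied by the motor specification: with 0.088° per encoder tick (360°/4096), an angular correction computed by the controller must be converted to whole position units before being sent to the motor. The helper below is illustrative and is not part of the Dynamixel SDK.

```python
# Converting an angular correction into Dynamixel position units,
# using the 0.088 deg/tick resolution stated above (360/4096).
# The helper name is illustrative, not from the Dynamixel SDK.

TICKS_PER_REV = 4096
DEG_PER_TICK = 360.0 / TICKS_PER_REV   # ~0.088 degrees

def angle_to_ticks(angle_deg):
    """Nearest whole number of encoder ticks for a given angle."""
    return round(angle_deg / DEG_PER_TICK)

print(angle_to_ticks(0.088))  # one encoder tick
print(angle_to_ticks(90.0))   # quarter turn
```

The 0.088° step also bounds the pointing accuracy of the rig: errors smaller than one tick cannot be corrected by the motor, which is consistent with the ±5 pixel accuracy reported in the abstract rather than perfect centring.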
1.2 Camera Model
The pinhole camera model is the standard model used to describe a point in space relative to the camera origin (Figure 2). Point P is a world point in front of the camera. This point coordinate [
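The pinhole projection just described maps a camera-frame point P = (X, Y, Z), Z > 0, to image coordinates u = fx·X/Z + cx and v = fy·Y/Z + cy. The intrinsics in the sketch below are illustrative values, not a calibration of the Flea 3; the principal point is simply placed at the centre of the 4096 x 2160 sensor.

```python
# Pinhole projection of a camera-frame point P = (X, Y, Z), Z > 0:
#   u = fx * X / Z + cx,   v = fy * Y / Z + cy
# Intrinsics are illustrative; cx, cy sit at the 4096 x 2160 image centre.

def project(point, fx=1000.0, fy=1000.0, cx=2048.0, cy=1080.0):
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must lie in front of the camera (Z > 0)")
    return (fx * X / Z + cx, fy * Y / Z + cy)

print(project((0.0, 0.0, 2.0)))   # a point on the optical axis maps to the image centre
print(project((0.5, 0.1, 2.0)))
```

The pixel error used by the tracking controller is then simply the difference between the projected target centroid (u, v) and the image centre (cx, cy).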
Figure 1: Active stereo vision platform with 5 DOF.
Figure 2: Single camera model.
Figure 3: Motion blur generated due to the fast camera motion.
Figure 4: The block diagram of the object tracking.
Figure 5: The experiment setup with the static target.
Figure 6: Moving object tracking experiment setup.
Figure 7: Angular velocity of the motor at lambda 0.0010.
Figure 8: Angular velocity of the motor at lambda 0.0015.
Figure 9: Angular velocity of the motor at lambda 0.0020.
Figure 10: Exponential function at different lambda values.
Figure 11: Object tracking using cantilever length 200 mm.
Figure 12: Object tracking using cantilever length 400 mm.
Figure 13: Object tracking using cantilever length 500 mm.
© by the Authors & Gavin Publishers. This is an Open Access article published under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).