In my previous blog post I wrote about how we can access individual frames from QCamera. In this blog post I will show how to use those frames to track an object of a particular color and to detect gestures from the motion of that object.
Following is a demo of my sample application running on the N9.
Tracking a colored object
I don't know the standard algorithm for detecting color in an image, but I created a simple algorithm that detects a predefined color in an image. Please note that if the image has multiple objects of the color we are tracking, it will return one rectangle that covers all the objects, not an individual rectangle for each object.
As I am not interested in the details of the captured image, only in checking whether it contains an object of the defined color, I reduce the size of the image to half, so I have to process a smaller number of pixels to detect the color.
Then, to detect the color, I convert the image captured from the camera from the RGB color space to the HSV color space, as it is quite easy to detect a particular color in the HSV color space.
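To illustrate why HSV makes this easy, here is a minimal, Qt-free sketch of an RGB to HSV conversion that mirrors the ranges QColor uses (hue 0..359, saturation and value 0..255). The struct and function names are illustrative only; the real code below simply calls QColor::toHsv().

```cpp
#include <algorithm>

// HSV triple using QColor-style ranges: hue 0..359, saturation 0..255, value 0..255.
struct Hsv { int h, s, v; };

// Convert 8-bit RGB components to HSV (illustrative, not the Qt implementation).
Hsv rgbToHsv(int r, int g, int b) {
    int mx = std::max({r, g, b});
    int mn = std::min({r, g, b});
    int delta = mx - mn;
    Hsv out{0, 0, mx};              // value is simply the largest component
    if (mx > 0)
        out.s = 255 * delta / mx;   // saturation: how far from gray
    if (delta > 0) {
        // hue: position on the color wheel, in degrees
        double h;
        if (mx == r)      h = 60.0 * (g - b) / delta;
        else if (mx == g) h = 120.0 + 60.0 * (b - r) / delta;
        else              h = 240.0 + 60.0 * (r - g) / delta;
        if (h < 0) h += 360.0;
        out.h = static_cast<int>(h);
    }
    return out;
}
```

In HSV, "redness" collapses into a single narrow hue band near 0, so a red pixel can be matched with a couple of range checks, instead of reasoning about three interdependent RGB components.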
After the image is converted to the HSV color space, I convert it to a black and white image: the black portion is the detected object and everything else is white. After getting this image, I just need to scan it to find the area of the black portion.
So now I have the coordinates of the colored object which we are tracking.
The following code implements the above logic to detect a red colored object; in the code I combined the conversion to black and white with the detection of the black portion of the image.
QRect ColorMotionDetector::detectColor( const QImage& origImage)
{
    //reduce size of image
    QImage image(origImage);
    image = image.scaled(QSize(320,240));
    emit originalImage(image);

    //rectangle of detected colored object
    int maxX = -1;
    int minX = 99999;
    int maxY = -1;
    int minY = 99999;
    int width = image.width();
    int height = image.height();
    bool detected = false;

    //black and white image
    QImage converted(image.size(), image.format());

    for( int y = 0; y < height; ++y ) {
        for( int x = 0; x < width; ++x ) {
            //convert individual pixel from RGB to HSV
            QRgb pixel = image.pixel(x,y);
            QColor color(pixel);
            color = color.toHsv();

            //default white color for other objects
            QRgb newPixel = qRgb(255, 255, 255);

            //detecting red color
            if( color.hue() >= 0 && color.hue() <= 22
                && color.saturation() >= 240 && color.saturation() <= 255
                && color.value() >= 100 && color.value() <= 255 ) {
                detected = true;
                if( x > maxX ) {
                    maxX = x;
                }
                if( x < minX ) {
                    minX = x;
                }
                if( y > maxY ) {
                    maxY = y;
                }
                if( y < minY ) {
                    minY = y;
                }
                //black color for detected object
                newPixel = qRgb(0, 0, 0);
            }
            converted.setPixel(x,y,newPixel);
        }
    }

    QRect rect;
    if( detected ) {
        rect = QRect( minX, minY, maxX - minX, maxY - minY );
        //drawing red rectangle around detected object
        QPainter painter( &converted );
        painter.setPen(QPen(Qt::red));
        painter.drawRect(rect);
        painter.end();
    }
    emit processedImage(converted);
    return rect;
}

Detecting swipe gesture
Once we detect the position of the object using the above color detection code, we can use that position to check whether the positions tracked across individual frames form some kind of gesture.
I will show how to use the captured positions to detect a horizontal swipe gesture; this can easily be extended to detect vertical or diagonal swipes as well.
I used the following logic to detect a swipe gesture:
> As the color detection code returns the position of the tracked object, we compare this new position with the old position.
> If there is progress in the motion of the object, we add the difference of the x coordinates to the total progress made. In case of no progress, we discard the whole gesture and reset the variables that keep track of the motion.
> While doing so, if we detect a certain amount of movement in a particular direction, we decide whether the gesture was a left or a right swipe using the difference in the object's position, and reset the variables.
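The accumulation logic above can be distilled into a small, Qt-free sketch. The 10-pixel per-frame threshold and the 150-pixel total threshold are the values used in the detector code below; the class and method names here are illustrative only.

```cpp
#include <cstdlib>

enum Gesture { Invalid, SwipeLeft, SwipeRight };

// Illustrative swipe accumulator: feed it one x coordinate per frame.
class SwipeTracker {
public:
    Gesture feed(int x) {
        if (!mHasLast) {                // first observation: just remember it
            mLastX = x;
            mHasLast = true;
            return Invalid;
        }
        int diff = mLastX - x;          // positive diff: x decreased between frames
        mLastX = x;
        if (std::abs(diff) <= 10) {     // no real progress: discard the gesture
            mXDist = 0;
            mHasLast = false;
            return Invalid;
        }
        mXDist += diff;                 // accumulate horizontal movement
        if (mXDist > 150)  { mXDist = 0; return SwipeRight; }
        if (mXDist < -150) { mXDist = 0; return SwipeLeft; }
        return Invalid;
    }

private:
    bool mHasLast = false;
    int mLastX = 0;
    int mXDist = 0;                     // total accumulated x movement
};
```

The sign convention (positive accumulated distance reported as a right swipe) follows the detector code below.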
The following code implements the above logic.
Gesture ColorMotionDetector::detectGesture(QRect rect)
{
    //not a valid rectangle, means no object detected
    if( !rect.isValid() ) {
        mLastRect = QRect();
        mXDist = 0;
        return Invalid;
    }

    //there is no previous coordinate, store rect
    if( !mLastRect.isValid() ) {
        mLastRect = rect;
        mXDist = 0;
        return Invalid;
    }

    Gesture gesture = Invalid;
    int x = rect.x();
    int lastX = mLastRect.x();
    int diff = lastX - x;
    mLastRect = rect;

    //check if there is a certain amount of movement
    if( qAbs( diff ) > 10 ) {
        //there is movement in x direction, add it to the total movement
        mXDist += diff;

        //x motion matches the amount required for a particular gesture,
        //check if motion was left to right or right to left
        if( mXDist > 150 ) {
            qDebug() << "Right horizontal swipe detected..." << mXDist;
            mXDist = 0;
            gesture = SwipeRight;
        } else if ( mXDist < -150 ) {
            qDebug() << "Left horizontal swipe detected..." << mXDist;
            mXDist = 0;
            gesture = SwipeLeft;
        }
    } else {
        //discard the gesture
        mXDist = 0;
        mLastRect = QRect();
    }
    return gesture;
}
Putting it all together
Now we have code that detects the colored object and code that detects the gesture. The following code shows how those functions are used together.
//detect motion from the image captured by the camera
void ColorMotionDetector::detectMotion( const QImage& image)
{
    QRect rect = detectColor( image );
    Gesture gesture = detectGesture( rect );
    if( gesture != Invalid ) {
        emit gestureDetected( gesture );
    }
}
Following is a very simple gesture handler, which just prints the detected gesture.
void MyWidget::gestureDetected( Gesture gesture)
{
    if( gesture == SwipeLeft ) {
        mSwipeLabel->setText("Left swipe");
    } else if( gesture == SwipeRight ) {
        mSwipeLabel->setText("Right swipe");
    }
}