
Saturday, June 8, 2013

Using QML Camera and passing image to C++ code

I tried to compile one of my applications with Qt5. The application was using the QML Camera element and passing the captured image to C++ code for further processing.

The following sample code works with Qt5 and Qt Multimedia 5.

Let's start with the ImageProcessor class, the C++ class that is called from QML to do the further image processing.

Following is the header file for the ImageProcessor class. It declares the processImage() slot, which can be invoked from QML code.
#ifndef IMAGEPROCESSOR_H
#define IMAGEPROCESSOR_H

#include <QObject>

class ImageProcessor : public QObject
{
    Q_OBJECT
public:
    explicit ImageProcessor(QObject *parent = 0);
 
public slots:
    void processImage( const QString& path);
};
#endif // IMAGEPROCESSOR_H
Following is the cpp file for the ImageProcessor class. The processImage() function receives a preview URL (something like image://camera/...); the URL's host part names the image provider registered by the Camera element, and the rest of the path is the image id. With those, the function retrieves the QImage from the camera's image provider. Once we have a valid image, we can process it further.
#include "imageprocessor.h"
#include <QtQml/QQmlEngine>
#include <QtQml/QQmlContext>
#include <QUrl>
#include <QtQuick/QQuickImageProvider>
#include <QDebug>

ImageProcessor::ImageProcessor(QObject *parent)
    : QObject(parent)
{}

void ImageProcessor::processImage( const QString& path)
{
    QUrl imageUrl(path);
    QQmlEngine* engine = QQmlEngine::contextForObject(this)->engine();
    QQmlImageProviderBase* imageProviderBase = engine->imageProvider(
     imageUrl.host());
    QQuickImageProvider* imageProvider = static_cast<QQuickImageProvider*>(
     imageProviderBase);
    
    QSize imageSize;
    QString imageId = imageUrl.path().remove(0,1);
    QImage image = imageProvider->requestImage(imageId, &imageSize, imageSize);
    if( !image.isNull()) {
        //process image
    }
}
Now we need to register the ImageProcessor class with QML so that we can use it from QML code. This can be done with the qmlRegisterType global function.
#include <QtGui/QGuiApplication>
#include <QQmlEngine>
#include <QQmlComponent>
#include <QtQuick/QQuickView>

#include "imageprocessor.h"

int main(int argc, char *argv[])
{
    qmlRegisterType<ImageProcessor>("ImageProcessor", 1, 0, "ImageProcessor");

    QGuiApplication app(argc, argv);

    QQuickView view;

    QObject::connect(view.engine(),SIGNAL(quit()),&app,SLOT(quit()));    
    view.setSource(QUrl::fromLocalFile("qml/main.qml"));
    view.show();

    return app.exec();
}
That's all from the C++ side; the QML code is even easier. Following is how you can use the ImageProcessor class from QML code.
import QtQuick 2.0
import QtMultimedia 5.0
import ImageProcessor 1.0

Rectangle {
    width: 360
    height: 360

    //shows live preview from camera
    VideoOutput {
        source: camera
        anchors.fill: parent
        focus : visible
    }

    //shows captured image
    Image {
        id: photoPreview
        anchors.fill: parent
        fillMode: Image.PreserveAspectFit
    }

    Camera {
        id: camera
        imageProcessing.whiteBalanceMode: CameraImageProcessing.WhiteBalanceFlash
        captureMode: Camera.CaptureStillImage

        exposure {
            exposureCompensation: -1.0
            exposureMode: Camera.ExposurePortrait
        }

        flash.mode: Camera.FlashRedEyeReduction

        imageCapture {
            onImageCaptured: {
                photoPreview.source = preview
                imageProcessor.processImage(preview);
            }
        }
    }

    MouseArea{
        anchors.fill: parent
        onClicked: {
            camera.imageCapture.capture();
        }
    }

    //image processor for further image processing
    ImageProcessor{
        id: imageProcessor
    }
}

Saturday, July 7, 2012

Crazy Chickens game with Motion detection

For some time I have been working on an update to my application Crazy Chickens for the N9.

In the new version, the game character recognizes gestures based on the user's motion and moves his bucket accordingly to catch the eggs. The game uses the phone's back camera to capture images and recognizes gestures by tracking the motion of a predefined colored object, which the user holds and moves to steer the character. Hope you will enjoy the new update.

There are also some minor updates to the UI and game logic to make the game more enjoyable.

You can download the app from the following links,
- Web browser: http://store.nokia.com/content/200523
- Nokia mobile browser: http://store.ovi.mobi/content/200523


The game needs some setup before you play. You will need to connect the phone to a TV using a TV-out cable and place the phone so that its back camera faces you. The game can recognize four colors: red, blue, yellow and green. Choose one color and hold an object of that color in your hand. To move the character and put the bucket under a hen, move the colored object in the direction you want the character to go.


Please also make sure that you are fully visible and that the colored object's movement does not go outside the camera frame. Also make sure there is enough light in the room, otherwise the game might have problems recognizing the color clearly.

If you don't have a TV-out cable, you can install a VNC server on the phone and connect to it from a PC to project the phone's screen onto the PC.

Hope you will like the update. Please let me know if you have any feedback; I will try to update the game based on it.

Monday, May 28, 2012

Tracking color in image and detecting gesture in Qt

In a previous blog post I wrote about how we can access individual frames from QCamera. In this blog post I will show how to use those frames to track a particular colored object and to detect gestures from the motion of that object.

Following is a demo of my sample application running on the N9.


Tracking colored object

I don't know the standard algorithms for detecting color in an image, but I created a simple algorithm that detects some predefined color. Please note that if the image has multiple objects of the color we are tracking, it will return a single rectangle that covers all of the objects, not an individual rectangle for each object.

As I am not interested in the details of the captured image, just in checking whether it contains an object of the defined color, I reduce the size of the image so that I have a smaller number of pixels to process.

Then, to detect the color, I convert the captured image from the RGB color space to the HSV color space, as HSV is much easier to work with when detecting color.

After the image is converted to HSV, I turn it into a black-and-white image: the black portion is the detected object and everything else is white. After getting this image I just need to scan it to find the area of the black portion.

So now I have the coordinates of the colored object we are detecting.

The following code implements the above logic to detect a red colored object; in the code I combined converting the image to black and white with detecting the black portion.

QRect ColorMotionDetector::detectColor( const QImage& origImage)
{
    //reduce size of image
    QImage image(origImage);
    image = image.scaled(QSize(320,240));

    emit originalImage(image);

    //bounding rectangle of the detected colored object
    int maxX = -1;
    int minX = 99999;
    int maxY = -1;
    int minY =  99999;

    int width = image.width();
    int height = image.height();
    bool detected = false;

    //black and white image
    QImage converted(image.size(),image.format());

    for (int y = 0; y< height; ++y ) {
        for( int x = 0; x < width; ++x ) {
            //convert individual pixel to HSV from RGB
            QRgb pixel = image.pixel(x,y);
            QColor color(pixel);
            color = color.toHsv();

            
            //default white color for non-matching pixels
            QRgb newPixel = qRgb(255, 255, 255);
            
            //detecting red color
            if( color.hue() >= 0 && color.hue() <= 22
                    && color.saturation() <= 255 && color.saturation() >= 240
                    && color.value() <= 255 && color.value() >= 100 ) {

                detected = true;

                //update all bounds independently
                //(the first detected pixel sets both minimum and maximum)
                if( x > maxX ) {
                    maxX = x;
                }
                if( x < minX )  {
                    minX = x;
                }

                if( y > maxY ) {
                    maxY = y;
                }
                if( y < minY )  {
                    minY = y;
                }
                
                //black color for detected object
                newPixel = qRgb(0, 0, 0);
            } 
            converted.setPixel(x,y,newPixel);
        }
    }

    QRect rect;
    if( detected) {
        rect = QRect(minX,minY, maxX - minX, maxY-minY );

        //drawing red rectangle around detected object
        QPainter painter( &converted );
        painter.setPen(QPen(Qt::red));
        painter.drawRect(rect);
        painter.end();
    }
    emit processedImage(converted);

    return rect;
}
Detecting swipe gesture

Once we have the position of the object from the above color detection code, we can check whether the positions tracked across individual frames form some kind of gesture.

I will show how to use the tracked positions to detect a horizontal swipe gesture; the same approach easily extends to vertical or diagonal swipes, as sketched after the gesture code below.

I used the following logic to detect a swipe gesture:

> The color detection code returns the position of the tracked object; we compare this new position with the old position.
> If the object's motion has progressed, we add the difference in the x coordinate to the total progress made so far. In case of no progress, we discard the whole gesture and reset the variables that track the motion.
> If, while doing so, we detect a certain amount of movement in a particular direction, we decide whether the gesture was a left or right swipe from the sign of the accumulated difference, and reset the variables.
The following code implements the above logic.

Gesture ColorMotionDetector::detectGesture(QRect rect) {

    //not valid rectangle, mean no object detected
    if( !rect.isValid()) {
        mLastRect = QRect();
        mXDist = 0;
        return Invalid;
    }

    //there is no previous coordinate, store rect
    if( !mLastRect.isValid() ) {
        mLastRect = rect;
        mXDist= 0;
        return Invalid;
    }

    Gesture gesture = Invalid;
    int x = rect.x();
    int lastX = mLastRect.x();
    int diff = lastX - x;

    mLastRect = rect;
    //check if there is certain amount of movement
    if( qAbs( diff ) > 10 ) {
        //there is movement in the x direction, add it to the total movement
        mXDist += diff;
       
        //total x motion has reached the amount required for a gesture,
        //check whether the motion was left-to-right or right-to-left
        if( mXDist >  150 ) {
            qDebug() << "Right horizontal swipe detected..." << mXDist;
            mXDist = 0;
            gesture = SwipeRight;
        } else if ( mXDist < -150 ) {
            qDebug() << "Left horizontal swipe detected..." << mXDist;
            mXDist = 0;
            gesture = SwipeLeft;
        }
    } else {
        //discard the gesture
        mXDist = 0;
        mLastRect = QRect();
    }
    return gesture;
}
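
As mentioned above, the same approach extends to vertical swipes by accumulating the y difference instead of the x difference. The following is a minimal sketch of my own; mYDist, SwipeUp and SwipeDown are hypothetical additions that are not part of the original class.

Gesture ColorMotionDetector::detectVerticalGesture(QRect rect) {

    //not valid rectangle, means no object detected
    if( !rect.isValid()) {
        mLastRect = QRect();
        mYDist = 0;
        return Invalid;
    }

    //there is no previous coordinate, store rect
    if( !mLastRect.isValid() ) {
        mLastRect = rect;
        mYDist = 0;
        return Invalid;
    }

    Gesture gesture = Invalid;
    //in image coordinates y grows downward, so a positive diff
    //means the object moved up
    int diff = mLastRect.y() - rect.y();
    mLastRect = rect;

    if( qAbs( diff ) > 10 ) {
        mYDist += diff;
        if( mYDist > 150 ) {
            mYDist = 0;
            gesture = SwipeUp;
        } else if( mYDist < -150 ) {
            mYDist = 0;
            gesture = SwipeDown;
        }
    } else {
        //discard the gesture
        mYDist = 0;
        mLastRect = QRect();
    }
    return gesture;
}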


Putting it all together

Now we have code that detects the colored object and code that detects the gesture. The following code shows how these functions are used together.

//detect motion in the image captured from the camera
void ColorMotionDetector::detectMotion( const QImage& image) {

    QRect rect = detectColor( image);
    Gesture gesture = detectGesture( rect );

    if( gesture != Invalid ) {
        emit gestureDetected( gesture );
    }
}

Following is a very simple gesture handler, which just displays the detected gesture.

void MyWidget::gestureDetected( Gesture gesture) {

    if( gesture ==  SwipeLeft) {
        mSwipeLabel->setText("Left swipe");
    } else if( gesture == SwipeRight) {
        mSwipeLabel->setText("Right swipe");
    }
}

Saturday, March 31, 2012

Using camera API and getting raw image frame on N9 (Meego)

For the past few weeks I have been working on my pet project. I wanted to add one feature, and it required me to use the camera and access its individual raw frames.

Accessing the camera on the Harmattan platform is supported through the QCamera API. The camera API is quite easy to use, and I checked the camera example application that works on the N900.

After going through the example application, I decided to use it in my application.

Creating a camera and capturing an image or video using QCamera is quite simple and straightforward, and I did not face any problems with it. But remember that you need to request access from the aegis framework to use the camera; you can use the following request for this purpose.
    
    <aegis>
        <request>
            <credential name="GRP::video" />
            <credential name="GRP::pulse-access" />
            <for path="absolute path to application" />
        </request>
    </aegis>
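
With that permission in place, basic still-image capture takes only a few lines. The following is a minimal sketch assuming the QtMultimediaKit API used on Harmattan; the save path is illustrative.

    QCamera* camera = new QCamera(this);
    camera->setCaptureMode(QCamera::CaptureStillImage);

    //image capture object attached to the camera
    QCameraImageCapture* imageCapture = new QCameraImageCapture(camera, this);

    camera->start();

    //later, for example from a button handler
    camera->searchAndLock();   //focus and lock exposure
    imageCapture->capture("/home/user/MyDocs/photo.jpg");
    camera->unlock();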


But when I decided to access the raw image frames from the camera, it did not prove so easy. The N900 camera example does not work on the N9 and needs some changes.

I will try to list those changes and the reasons here.

To access individual frames from the camera I decided to use the MyVideoSurface class; you can find the original source here.

But as I ran the program on the device, I noticed many camera-related errors in the console and did not capture any valid camera image.

The error looks like this:
CameraBin error: "Could not negotiate format" 
The reason for this is that QCamera on the N9 returns images in UYVY format, so we need to add support for this format (QVideoFrame::Format_UYVY) to the MyVideoSurface class. But if you just add the format without implementing code to handle it, you will face the following error:
Failed to start video surface / CameraBin error: "Internal data flow error."
The reason is that you cannot use the QVideoFrame offered in the present() method call to create a QImage directly. You need to convert the frame data from UYVY to RGB and then use the RGB data to create the QImage.

The QGraphicsVideoItem class on the N900 has a fast NEON implementation of this conversion. You can find its implementation here.
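
If you cannot use the NEON version, the same conversion can be done in plain C++. The following is a minimal sketch of my own, assuming BT.601 coefficients; the function name uyvy422_to_rgb16_line_c is hypothetical, and it is of course much slower than the NEON version. It converts one UYVY line (four bytes per two pixels: U0 Y0 V0 Y1, assuming an even width) into RGB565, matching QImage::Format_RGB16.

static inline uchar clamp8(int value)
{
    return value < 0 ? 0 : (value > 255 ? 255 : uchar(value));
}

static void uyvy422_to_rgb16_line_c(uchar *dst, const uchar *src, int width)
{
    quint16* out = reinterpret_cast<quint16*>(dst);
    for (int x = 0; x < width; x += 2) {
        const int u  = src[0] - 128;
        const int y0 = (src[1] - 16) * 298;
        const int v  = src[2] - 128;
        const int y1 = (src[3] - 16) * 298;
        src += 4;

        //the two pixels share the same u and v values
        const int yValues[2] = { y0, y1 };
        for (int i = 0; i < 2; ++i) {
            const uchar r = clamp8((yValues[i] + 409 * v + 128) >> 8);
            const uchar g = clamp8((yValues[i] - 100 * u - 208 * v + 128) >> 8);
            const uchar b = clamp8((yValues[i] + 516 * u + 128) >> 8);
            //pack into RGB565 to match QImage::Format_RGB16
            *out++ = quint16(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
        }
    }
}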

Now you should have a valid QImage that you can use for further processing. Here is my code for the MyVideoSurface class after making the above changes. This class was tested on an N950 device.

MyVideoSurface::MyVideoSurface( QObject* parent)
    : QAbstractVideoSurface(parent)
{
}

bool MyVideoSurface::start(const QVideoSurfaceFormat &format)
{
    mVideoFormat = format;
    //start only if the format is UYVY; I don't handle other formats for now
    if( format.pixelFormat() == QVideoFrame::Format_UYVY ){
        QAbstractVideoSurface::start(format);
        return true;
    } else {
        return false;
    }
}

bool MyVideoSurface::present(const QVideoFrame &frame)
{
    mFrame = frame;

    if (surfaceFormat().pixelFormat() != mFrame.pixelFormat() ||
            surfaceFormat().frameSize() != mFrame.size()) {
        qDebug() << "stop()";
        stop();
        return false;
    } else {
        //this is necessary to get valid data from frame
        mFrame.map(QAbstractVideoBuffer::ReadOnly);

#ifdef  __ARM_NEON__

        QImage lastImage( mFrame.size(), QImage::Format_RGB16);
        const uchar *src = mFrame.bits();
        uchar *dst = lastImage.bits();
        const int srcLineStep = mFrame.bytesPerLine();
        const int dstLineStep = lastImage.bytesPerLine();
        const int h = mFrame.height();
        const int w = mFrame.width();

        for (int y=0; y < h; y++) {
            //this function you can find in qgraphicsvideoitem_maemo5.cpp,
            //link is mentioned above
            uyvy422_to_rgb16_line_neon(dst, src, w);
            src += srcLineStep;
            dst += dstLineStep;
        }

        mLastFrame = QPixmap::fromImage(lastImage);
        //emit signal, other can handle it and do necessary processing
        emit frameUpdated(mLastFrame);

#endif
        mFrame.unmap();

        return true;
    }
}

QList<QVideoFrame::PixelFormat> MyVideoSurface::supportedPixelFormats(
            QAbstractVideoBuffer::HandleType handleType) const
{
    if (handleType == QAbstractVideoBuffer::NoHandle) {
        //add support for UYVY format
        return QList<QVideoFrame::PixelFormat>() <<  QVideoFrame::Format_UYVY;
    } else {
        return QList<QVideoFrame::PixelFormat>();
    }
}

And following is the MyCamera class, which uses the above MyVideoSurface class to display the individual frames. I derived this class from QDeclarativeItem so that I can use it from QML as well.

MyCamera::MyCamera( QDeclarativeItem * parent ) :
    QDeclarativeItem(parent),mCamera(0)
{
    startCapture();
    mPixmap = new QGraphicsPixmapItem(this);
}

MyCamera::~MyCamera(){
    stopCapture();
}

void MyCamera::stopCapture(){
    if( mCamera )
        mCamera->stop();
}

void MyCamera::startCapture()
{
    mCamera = new QCamera(this);
    //set Still image mode for image capture or Video for capturing video
    //mCamera->setCaptureMode(QCamera::CaptureStillImage);
    mCamera->setCaptureMode(QCamera::CaptureVideo);

    //set my surface, to get individual frame from camera
    mSurface = new MyVideoSurface();
    mCamera->setViewfinder(mSurface );

    connect(mSurface,SIGNAL(frameUpdated(QPixmap)),this,SLOT(frameUpdated(QPixmap)));

    //set up video capture setting
    QVideoEncoderSettings videoSetting;
    //videoSetting.setQuality((QtMultimediaKit::EncodingQuality)0); //low
    videoSetting.setResolution(QSize(848, 480));

    // Media recorder to capture video; use record() to capture video
    QMediaRecorder* videoRecorder = new QMediaRecorder(mCamera);
    videoRecorder->setEncodingSettings(videoRecorder->audioSettings(),videoSetting);

    //  set up image capture setting
    //QImageEncoderSettings imageSetting;
    //imageSetting.setQuality((QtMultimediaKit::EncodingQuality)0); //low
    //imageSetting.setResolution(QSize(320, 240));

    // Image capture to capture Image,use capture() to capture image
    //m_stillImageCapture = new QCameraImageCapture(mCamera,this);
    //m_stillImageCapture->setEncodingSettings(imageSetting);

    // Start camera
    if (mCamera->state() == QCamera::ActiveState) {
        mCamera->stop();
    }
    mCamera->start();
}

void MyCamera::frameUpdated(const QPixmap& pixmap) {
    mPixmap->setPixmap(pixmap);
}
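
Since MyCamera derives from QDeclarativeItem, it only needs to be registered with QML to be usable from a QML scene. The following is a minimal sketch; the module name and version are illustrative, not part of the original code.

    //in main(), before the QML scene is loaded
    qmlRegisterType<MyCamera>("MyCamera", 1, 0, "MyCamera");

After that, the element can be imported with import MyCamera 1.0 and used like any other QML item.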