Sunday, April 13, 2014

Creating Custom Swipe handler in QML

I am often asked for a sample that shows how a custom swipe handler can be created in QML. In this post I will show how this can be achieved.

Please note that this is prototype-level code and has not been tested thoroughly in real use. It also contains a lot of hard-coded values that assume a certain application size.

However, you should be able to adjust those values to your needs and try the sample with your own app.

This sample implements three QML views, and you can swipe to change from one view to the next. The code also implements a parallax effect and a view transition animation. In addition to swiping, you can change views using the keyboard Left/Right arrow keys. Following is a demo of the sample app.



So let's start with code.

The following code is from SwipeHandler.qml. It extends MouseArea and tries to detect a swipe based on changes in the mouse's x position. A swipe can be generated in two ways: by flicking the view or by dragging it.
A flick is detected when there is a large change in the mouse's x position within a short time. For a drag, the code considers it a swipe once the mouse has traveled a certain distance.

import QtQuick 2.0

MouseArea{
    id: root

    property int oldX: mouseX;
    property int swipeOffset: 100;
    property int originX:mouseX;

    property var gestureStartTime;
    property bool gestureStarted: false;

    signal swipeEnded(var diff);
    signal swipeContinues(var diff);

    anchors.fill: parent

    onReleased: {
        if( gestureStarted ) {
            //swipe canceled
            root.swipeEnded(0);
            resetGesture();
        }
        //else swipe is already ended
    }

    onPressed: {
        gestureStarted =  true;
        gestureStartTime = new Date();
    }

    onMouseXChanged: {
        if( mouseX < parent.x
        || mouseX > parent.width || gestureStarted == false )
            return;

        if( originX == 0 ) {
            originX = mouseX; oldX = mouseX;
            return;
        }

        var diff = (oldX - mouseX);
        if(handleFlick(diff)){
            return;
        }

        if( handleDrag(mouseX, diff) ){
            return;
        }

        oldX = mouseX;
        root.swipeContinues(diff);
    }

    function resetGesture() {
        originX = 0; oldX = 0;
        gestureStarted = false;
    }

    function handleDrag(xPos, xPosDiff){
        //drag: once the pointer has traveled swipeOffset pixels from the
        //press point, in either direction, treat it as a completed swipe
        if( Math.abs(originX - xPos) > swipeOffset ){
            root.swipeEnded(xPosDiff);
            resetGesture();
            return true;
        }
        return false;
    }

    function handleFlick(xPosDiff){
        var now = new Date();
        var timeDiff = now - gestureStartTime;

        //flick: high velocity, i.e. a large position change within a
        //short time window, regardless of direction
        if( timeDiff < 40 && Math.abs(xPosDiff) > 10 ){
            root.swipeEnded(xPosDiff);
            resetGesture();
            return true;
        }
        return false;
    }
}
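The detection above boils down to two threshold checks. Distilled into a plain JavaScript function (the 40 ms, 10 px, and swipeOffset values mirror the sample's hard-coded numbers and are tuning knobs, not fixed rules):

```javascript
// Classify the current gesture state.
// elapsedMs - time since the press event
// originX   - x position where the gesture started
// oldX      - x position at the previous move event
// mouseX    - current x position
// Returns "flick", "drag", or null if no swipe is detected yet.
function classifySwipe(elapsedMs, originX, oldX, mouseX, swipeOffset) {
    var stepDiff = oldX - mouseX; // change since the last move event
    // Flick: a large position change within a short time window
    if (elapsedMs < 40 && Math.abs(stepDiff) > 10) {
        return "flick";
    }
    // Drag: total travel from the press point exceeds swipeOffset
    if (Math.abs(originX - mouseX) > swipeOffset) {
        return "drag";
    }
    return null; // keep tracking
}
```

For example, a 15 px jump 20 ms after the press counts as a flick, while a slow 110 px pull past a 100 px swipeOffset counts as a drag.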
So, this was SwipeHandler, which can detect whether a swipe was generated. To demonstrate its use, I created a small view-management component that creates three views. On swipe, the view changes from one to another based on the direction of the swipe. Here is the code for the same.
import QtQuick 2.0

Rectangle {
    id: root
    width: 200
    height: 300

    property var delegate: comp;

    property var centralView;
    property var nextView;
    property var prevView;

    focus: true

    Component.onCompleted: {
        var colors = ["red","blue","green"];
        var objs = [];
        for(var i =0; i < 3; ++i){
            var obj = comp.createObject(root);
            obj.text = i+1;
            obj.color = colors[i];
            objs.push(obj);
        }

        centralView = objs[0]
        nextView = objs[1]
        prevView = objs[2]

        setViewPos();
    }

    function setViewPos(){
        centralView.animate(50,0);
        nextView.animate(50,root.width);
        prevView.animate(50,-root.width);

        centralView.z = 1;
        nextView.z = 0;
        prevView.z = 0;
    }

    Keys.onRightPressed: {
        var tempView = centralView;
        centralView = prevView;
        prevView = nextView;
        nextView = tempView;

        centralView.animate(150,0);
        nextView.animate(150,root.width);
        prevView.x = -width
    }

    Keys.onLeftPressed: {
        var tempView = centralView;
        centralView = nextView;
        nextView = prevView;
        prevView = tempView;

        centralView.animate(150,0);
        prevView.animate(150,-root.width);
        nextView.x = width
    }

    SwipeHandler{
        onSwipeEnded: {
            if(diff === 0) {
                root.setViewPos();
                return;
            }

            var tempView = centralView;
            if(diff < 0) {
                centralView = prevView;
                prevView = nextView;
                nextView = tempView;
            } else {
                centralView = nextView;
                nextView = prevView;
                prevView = tempView;
            }
            root.setViewPos();
        }

        onSwipeContinues: {
            centralView.x = centralView.x - diff;
            if(diff < 0) {
                prevView.x = prevView.x  + Math.abs(diff*1.6);
                prevView.z = 1
                centralView.z = 0;
            } else {
                nextView.x = nextView.x - Math.abs(diff*1.6) ;
                nextView.z = 1
                centralView.z = 0;
            }
        }
    }

    Component{
        id: comp
        Rectangle{
            id: rect
            property alias text: label.text

            width: parent.width; height: parent.height
            Text{
                id: label; anchors.centerIn: parent
            }

            function animate(duration, to){
                anim.to = to; anim.duration = duration
                anim.running = true
            }

            PropertyAnimation{
                id: anim; target:rect; property: "x";duration: 50
            }
        }
    }
}
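The key handlers and onSwipeEnded above all perform the same circular swap of three view references. As a sketch, the same bookkeeping in plain JavaScript (the names are illustrative):

```javascript
// Rotate the three view references after a completed swipe.
// diff < 0 means the previous view becomes central,
// otherwise the next view becomes central (mirrors onSwipeEnded).
function rotateViews(views, diff) {
    var temp = views.central;
    if (diff < 0) {
        views.central = views.prev;
        views.prev = views.next;
        views.next = temp;
    } else {
        views.central = views.next;
        views.next = views.prev;
        views.prev = temp;
    }
    return views;
}
```

Three swipes in the same direction cycle back to the starting arrangement, which is what makes the three-view carousel feel endless.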

Saturday, March 8, 2014

Handling Multi Touch in Unity 3D

While working on my Unity project, I faced a strange problem. I had a few buttons on screen and was handling touch through mouse events. While this worked fine with a mouse, when I tried the same on a phone with touch, I realized that a button could not detect a touch if I was already touching some other button. So basically, in the case of multi-touch, mouse events do not work. This caused problems when the user tapped buttons very rapidly, so to resolve the issue, we need to handle multi-touch.

Handling multi-touch is simple in Unity. You can put the following code in the Update method and you will be ready to handle multi-touch.

  
// Check if there is any touch event
if (Input.touchCount > 0)
{
    // Get the most recent touch and its location on screen
    Touch touch = Input.GetTouch(Input.touchCount - 1);
    Vector3 wp = Camera.main.ScreenToWorldPoint(touch.position);

    // Check if that location collides with our collider
    if (collider2D == Physics2D.OverlapPoint(wp))
    {
        if (touch.phase == TouchPhase.Began)
        {
            // Touch just began
        }
        else if (touch.phase == TouchPhase.Canceled
                 || touch.phase == TouchPhase.Ended)
        {
            // Touch canceled or ended
        }
    }
}
That's it. Thanks for visiting my blog.

Saturday, February 22, 2014

Creating Parallax effect in Unity 4.3 2D game

I was checking out Unity3D's 2D workflow and saw that a parallax effect for a game background could be implemented without much effort.

In this post I will show how to achieve a parallax effect for a Unity 2D game without a single line of code.

Final effect looks like shown in below video.

First, I created a sample project in 2D mode and imported Sky background in asset and created a sky game object.

This is how it looks after importing Sky background.


I wanted to show clouds moving in the sky, so I created a cloud image and imported it into Unity. Now we need to create two sprites from the cloud image and place them side by side. Once that is done, we need to animate both cloud backgrounds such that at the end of the animation, the second cloud background is in the place of the first.

To make this movement easy, we will create an empty game object and name it Clouds.


Now we will create two objects from the cloud background image and place them under the empty game object we just created.


We need to arrange them side by side, as shown below.


We can move the Clouds object (the empty game object) and both cloud objects will move together. This empty game object makes movement easier and is an easy way to group objects together.

Now we are ready to create the parallax animation. First, make sure the Animation window is visible; you can enable it as shown in the picture below.


Select the Clouds empty game object; we need to create the initial key for the animation, which is our current default position.


Once the initial animation key is created, right-click on a time on the timeline and select Add Key; this adds an animation key. Now you can move the Clouds game object to create the animation. This key is our intermediate animation key.


Finally, we need to add the final animation key: right-click on another time on the timeline, add a key, and move the Clouds game object such that the second cloud image is in the place of the first one.


We are done. Now if you press the play button, you can see the clouds moving continuously, as shown in the video.

Tuesday, February 18, 2014

Getting Raw image in ARGB format from Camera using BB10 Native API

Recently I was trying to get a raw image from the camera on BB10; I wanted to display the image in a Unity3D game and process it. I created a native plugin to pass image data from the native API to the Unity3D script.

I will post more about the native plugin creation process in a separate post; here I will describe the process of getting a camera frame in ARGB format.

Before we begin, we need to request access to the camera with the following statement in bar-descriptor.xml:
<permission>use_camera</permission>
We also need to link against the camapi library in the .pro file:
LIBS += -lcamapi
The following code initializes the camera: it opens the camera, sets the resolution required by the application, sets the format, and lastly starts the viewfinder with a callback function that will be called once a camera frame is ready.

One thing to notice: we are setting the image format to CAMERA_FRAMETYPE_RGB8888 using the camera_set_videovf_property function, so we will receive image data in ARGB format and will not need to do any conversion from the native format ourselves.

#define DELTA(x,y) ((x>y)?(x-y):(y-x))

int initCamera()
{
    //Open camera
    if (camera_open(CAMERA_UNIT_REAR, CAMERA_MODE_RW, &handle)){
        //camera_open fails...
        return -1;
    }

    //Camera_open success...
    unsigned int orientation = 0;
    camera_get_native_orientation(handle, &orientation);

    // going to run a query here to find a 480P 16:9 resolution
    unsigned int num;
    camera_get_video_vf_resolutions(handle, 0, &num, NULL);
    camera_res_t res[num];
    camera_get_video_vf_resolutions(handle, num, &num, res);
    unsigned int best = 0;
    unsigned int i;
    for (i=0; i < num; i++) {
        fprintf(stderr, "Supported resolution: %d x %d\n", res[i].width, res[i].height);
        if ((orientation % 180) == 0) {
            if ((DELTA(res[i].height, 480) <= DELTA(res[best].height, 480)) &&
            (DELTA(res[i].width, 480*16/9) <= DELTA(res[best].width, 480*16/9))) 
            {
                best = i;
            }
        } else {
            if ((DELTA(res[i].width, 480) <= DELTA(res[best].width, 480)) &&
           (DELTA(res[i].height, 480*16/9) <= DELTA(res[best].height, 480*16/9))) 
            {
                best = i;
            }
        }
    }
    fprintf(stderr, "Selecting resolution %d x %d\n", 
            res[best].width, res[best].height);

    // get camera running
    if (camera_set_videovf_property(handle,
            CAMERA_IMGPROP_CREATEWINDOW, 0,
            CAMERA_IMGPROP_FORMAT, CAMERA_FRAMETYPE_RGB8888,
            CAMERA_IMGPROP_FRAMERATE, 30.0,
            // note: using the rotation value corresponding to the native orientation
            // gives the best performance since the output from the sensor will not be
            // passed through a rotation stage on devices that require such a step.
            CAMERA_IMGPROP_ROTATION, (360-orientation) % 360,
            CAMERA_IMGPROP_WIDTH, res[best].width,
            CAMERA_IMGPROP_HEIGHT, res[best].height)) {
        return -1;
    }
    //camera_set_videovf_property done...

    if (camera_start_video_viewfinder(handle, 
        &viewfinder_callback, 
        &status_callback, NULL)) {
        return -1;
    }

    //camera_start_video_viewfinder started...
    return 0;
}
The following function is called by the viewfinder once a frame is ready. When you receive the callback with image data, you can either display it or process it further, for example by applying a filter.
static void viewfinder_callback(camera_handle_t handle,camera_buffer_t* buf,void* arg)
{
    if (buf->frametype != CAMERA_FRAMETYPE_RGB8888) {
        //viewfinder_callback: format is not CAMERA_FRAMETYPE_RGB8888
        return;
    }

    fprintf(stderr,"frame %d(%d) x %d\n",
            buf->framedesc.rgb8888.width,
            buf->framedesc.rgb8888.stride,
            buf->framedesc.rgb8888.height);

    cameraBuffer = buf;
}
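Once the buffer arrives, each pixel of an RGB8888 frame is a packed 32-bit ARGB word, and rows are stride bytes apart (the stride can be larger than width * 4). Below is a rough sketch of unpacking one pixel; the helper name is my own, not part of the Camera API, and it assumes the alpha channel sits in the most significant byte of each little-endian word (verify the exact byte order against the Camera API docs):

```c
#include <stdint.h>
#include <string.h>

/* Unpack the pixel at (x, y) from an RGB8888 frame.
 * data would be buf->framebuf and stride buf->framedesc.rgb8888.stride;
 * rows may be padded, so index rows by stride (bytes), not width * 4. */
static void get_argb_pixel(const uint8_t *data, uint32_t stride,
                           uint32_t x, uint32_t y,
                           uint8_t *a, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint32_t pixel;
    /* memcpy avoids unaligned 32-bit reads on strict platforms */
    memcpy(&pixel, data + (size_t)y * stride + (size_t)x * 4, sizeof(pixel));
    *a = (uint8_t)((pixel >> 24) & 0xFF);
    *r = (uint8_t)((pixel >> 16) & 0xFF);
    *g = (uint8_t)((pixel >> 8) & 0xFF);
    *b = (uint8_t)(pixel & 0xFF);
}
```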
The following function is called when the viewfinder wants to deliver a status or error message.
static void status_callback(camera_handle_t handle,
                camera_devstatus_t status,
                uint16_t extra,
                void* arg)
{
    fprintf(stderr, "status notification: %d, %d\n", status, extra);
}
While working on this code, I referred to this link; please check it out as well. I hope this post will be helpful.

Sunday, February 2, 2014

Handling GamePad events in BB10 Cascades App

I added GamePad support to my CrazyFlight game for BB10. You can see a demo here.



In this post I will describe how we can add GamePad support to a BB10 Cascades or Qt app.

First, we should add the use_gamepad permission to the bar-descriptor.xml file. This is not necessary to enable gamepad support, but it is necessary for App World to detect that your game supports a gamepad, which helps in the app discovery process.
 <permission>use_gamepad</permission>
We should also add the libscreen dependency to our .pro file.
LIBS += -lscreen
As far as I know, there is no Cascades API for handling gamepad events; we need to rely on the native API. I created a helper class that handles the native API callbacks and sends events to Cascades QML items.

Here is the definition from my helper class's header file (GamePadObserver.h).
#ifndef GAMEPADOBSERVER_H_
#define GAMEPADOBSERVER_H_

#include <QObject>
#include <bps/bps.h>
#include <screen/screen.h>

#ifndef MAX_CONTROLLERS
#define MAX_CONTROLLERS 2 //matches the _controllers array size below
#endif

class GamePadObserver: public QObject {
 Q_OBJECT
 Q_ENUMS(GamePadButton)

 // Structure representing a game controller.
 struct GameController {
  // Static device info.
  screen_device_t handle;
  int type;
  int analogCount;
  int buttonCount;
  char id[64];

  // Current state.
  int buttons;
  int analog0[3];
  int analog1[3];

  // Text to display to the user about this controller.
  char deviceString[256];
 };

public:
 //Enum which we will use to send signal when GamePad event is detected
 enum GamePadButton{
   A_BUTTON=0,
   B_BUTTON,
   C_BUTTON,
   X_BUTTON,
   Y_BUTTON,
   Z_BUTTON,
   MENU1_BUTTON,
   MENU2_BUTTON,
   MENU3_BUTTON,
   MENU4_BUTTON,
   L1_BUTTON,
   L2_BUTTON,
   L3_BUTTON,
   R1_BUTTON,
   R2_BUTTON,
   R3_BUTTON,
   DPAD_UP_BUTTON,
   DPAD_DOWN_BUTTON,
   DPAD_LEFT_BUTTON,
   DPAD_RIGHT_BUTTON,
   NO_BUTTON
 };

public:
 GamePadObserver(QObject* parent = 0);
 virtual ~GamePadObserver();

 //Main event loop should send event to this handler if it can not handle event by itself
 //This handler will try to handle event if its related to GamePad
 void handleScreenEvent(bps_event_t *event);

signals:
        //Signals will be emitted when Gamepad events is detected
 void buttonReleased(int button);
 void buttonPressed(int button);

private:
 //Helper methods to discover the GamePad and device connection
 void discoverControllers();
 void initController(GameController* controller, int player);
 void loadController(GameController* controller);
 void handleDeviceConnection(screen_event_t screen_event);
       
 // Methods to handle gamepad events
 void handleGamePadInput(screen_event_t screen_event);
 QString gamePadButtonAsString(GamePadButton button);

private:
 screen_context_t _screen_ctx;
 GameController _controllers[2];
 bool _conneted;
 GamePadButton _lastButton;
};
#endif /* GAMEPADOBSERVER_H_ */
Now let's see the source file.
In the constructor we create a screen context and then try to discover whether a gamepad is already connected to the device.
#include <stdio.h>
#include <errno.h>
#include <QDebug>
#include "GamePadObserver.h"

//rc is used by the SCREEN_API error-check macro below
static int rc = 0;

#define SCREEN_API(x, y) rc = x; \
    if (rc)  printf("\n%s in %s: %d, %d", y, __FUNCTION__, __LINE__, errno)

GamePadObserver::GamePadObserver( QObject* parent)
: QObject(parent),_screen_ctx(0),_conneted(false)
{
 // Create a screen context that will be used to create an EGL surface to receive libscreen events.
 SCREEN_API(screen_create_context(&_screen_ctx, SCREEN_APPLICATION_CONTEXT), "create_context");
 discoverControllers();
}

void GamePadObserver::discoverControllers()
{
    // Get an array of all available devices.
    int deviceCount = 0;
    SCREEN_API(screen_get_context_property_iv(_screen_ctx, SCREEN_PROPERTY_DEVICE_COUNT, &deviceCount), "SCREEN_PROPERTY_DEVICE_COUNT");
    screen_device_t* devices = (screen_device_t*) calloc(deviceCount, sizeof(screen_device_t));
    SCREEN_API(screen_get_context_property_pv(_screen_ctx, SCREEN_PROPERTY_DEVICES, (void**)devices), "SCREEN_PROPERTY_DEVICES");

    // Scan the list for gamepad and joystick devices.
    int controllerIndex = 0;
    for (int i = 0; i < deviceCount; i++) {
        int type;
        SCREEN_API(screen_get_device_property_iv(devices[i], SCREEN_PROPERTY_TYPE, &type), "SCREEN_PROPERTY_TYPE");

        if ( !rc && (type == SCREEN_EVENT_GAMEPAD || type == SCREEN_EVENT_JOYSTICK)) {
            // Assign this device to control Player 1 or Player 2.
            GameController* controller = &_controllers[controllerIndex];
            controller->handle = devices[i];
            loadController(controller);

            // We'll just use the first compatible devices we find.
            controllerIndex++;
            if (controllerIndex == MAX_CONTROLLERS) {
                break;
            }
        }
    }
    free(devices);
}
In the destructor we should release the screen context.
GamePadObserver::~GamePadObserver()
{
 screen_destroy_context(_screen_ctx);
}
loadController sets up our GameController structure, which we will use to store the currently pressed buttons while handling events.
void GamePadObserver::loadController(GameController* controller)
{
    // Query libscreen for information about this device.
    SCREEN_API(screen_get_device_property_iv(controller->handle, SCREEN_PROPERTY_TYPE, &controller->type), "SCREEN_PROPERTY_TYPE");
    SCREEN_API(screen_get_device_property_cv(controller->handle, SCREEN_PROPERTY_ID_STRING, sizeof(controller->id), controller->id), "SCREEN_PROPERTY_ID_STRING");
    SCREEN_API(screen_get_device_property_iv(controller->handle, SCREEN_PROPERTY_BUTTON_COUNT, &controller->buttonCount), "SCREEN_PROPERTY_BUTTON_COUNT");

    // Check for the existence of analog sticks.
    if (!screen_get_device_property_iv(controller->handle, SCREEN_PROPERTY_ANALOG0, controller->analog0)) {
     ++controller->analogCount;
    }

    if (!screen_get_device_property_iv(controller->handle, SCREEN_PROPERTY_ANALOG1, controller->analog1)) {
     ++controller->analogCount;
    }

    if (controller->type == SCREEN_EVENT_GAMEPAD) {
        sprintf( controller->deviceString, "Gamepad device ID: %s", controller->id);
        qDebug() << "Gamepad device ID" <<  controller->id;
    } else {
        sprintf( controller->deviceString, "Joystick device: %s", controller->id);
        qDebug() << "Joystick device ID" <<  controller->id;
    }
}
The handleScreenEvent function should be called from the main event loop when an event belongs to the screen domain. It checks whether the event is a gamepad device connection or a gamepad button event and handles it accordingly.
void GamePadObserver::handleScreenEvent(bps_event_t *event)
{
    int eventType;

    screen_event_t screen_event = screen_event_get_event(event);
    screen_get_event_property_iv(screen_event, SCREEN_PROPERTY_TYPE, &eventType);

    switch (eventType) {
        case SCREEN_EVENT_GAMEPAD:
        case SCREEN_EVENT_JOYSTICK:
        {
         handleGamePadInput(screen_event);
         break;
        }
        case SCREEN_EVENT_DEVICE:
        {
         // A device was attached or removed.
         handleDeviceConnection(screen_event);
         break;
        }
    }
}
handleDeviceConnection handles gamepad device connections. If it detects a new connection, it loads the new device; on disconnection, it removes the device.
void GamePadObserver::handleDeviceConnection(screen_event_t screen_event)
{
    // A device was attached or removed.
    screen_device_t device;
    int attached;
    int type;

    SCREEN_API(screen_get_event_property_pv(screen_event, SCREEN_PROPERTY_DEVICE, (void**)&device), "SCREEN_PROPERTY_DEVICE");
    SCREEN_API(screen_get_event_property_iv(screen_event, SCREEN_PROPERTY_ATTACHED, &attached), "SCREEN_PROPERTY_ATTACHED");

    if ( attached ) {
        SCREEN_API(screen_get_device_property_iv(device, SCREEN_PROPERTY_TYPE, &type), "SCREEN_PROPERTY_TYPE");
    }

    int i;
    if (attached && (type == SCREEN_EVENT_GAMEPAD || type == SCREEN_EVENT_JOYSTICK)) {
        for (i = 0; i < MAX_CONTROLLERS; ++i) {
            if (!_controllers[i].handle) {
                _controllers[i].handle = device;
                loadController(&_controllers[i]);
                break;
            }
        }
    } else {
        for (i = 0; i < MAX_CONTROLLERS; ++i) {
            if (device == _controllers[i].handle) {
                initController(&_controllers[i], i);
                break;
            }
        }
    }
}

void GamePadObserver::initController(GameController* controller, int player)
{
    // Initialize controller values.
    controller->handle = 0;
    controller->type = 0;
    controller->analogCount = 0;
    controller->buttonCount = 0;
    controller->buttons = 0;
    controller->analog0[0] = controller->analog0[1] = controller->analog0[2] = 0;
    controller->analog1[0] = controller->analog1[1] = controller->analog1[2] = 0;
    sprintf(controller->deviceString, "Player %d: No device detected.", player + 1);
}
The handleGamePadInput function handles gamepad events.
void GamePadObserver::handleGamePadInput(screen_event_t /*screen_event*/)
{
    int i;
    for (i = 0; i < MAX_CONTROLLERS; i++) {
        GameController* controller = &_controllers[i];

        if ( controller->handle ) {
            GamePadButton gamePadButton = NO_BUTTON;

            // Get the current state of a gamepad device.
            SCREEN_API(screen_get_device_property_iv(controller->handle, SCREEN_PROPERTY_BUTTONS, &controller->buttons), "SCREEN_PROPERTY_BUTTONS");

            if (controller->analogCount > 0) {
             SCREEN_API(screen_get_device_property_iv(controller->handle, SCREEN_PROPERTY_ANALOG0, controller->analog0), "SCREEN_PROPERTY_ANALOG0");
            }

            if (controller->analogCount == 2) {
             SCREEN_API(screen_get_device_property_iv(controller->handle, SCREEN_PROPERTY_ANALOG1, controller->analog1), "SCREEN_PROPERTY_ANALOG1");
            }

            //find the first pressed button in the bitmask
            for(int b = A_BUTTON; b < NO_BUTTON; ++b) {
                if( controller->buttons & (1 << b) ) {
                    gamePadButton = (GamePadButton)(b);
                    break;
                }
            }

            if( gamePadButton == NO_BUTTON ) {
             emit buttonReleased( _lastButton );
             _lastButton = NO_BUTTON;
            }
            else if( _lastButton != gamePadButton ) {
             emit buttonReleased(_lastButton);
             emit buttonPressed(gamePadButton);
             _lastButton = gamePadButton;
            }
        }
    }
}
Now we have all the implementation necessary to handle gamepad connections and button events. But we still need to pass gamepad-related events to our helper class when we receive them in the main event handler. To do that, in our main function we register a bps event filter as shown below and use GamePadObserver as described.
#include <QApplication>
#include <QAbstractEventDispatcher>
#include <QDeclarativeView>
#include <QDeclarativeContext>
#include <bps/bps.h>
#include <screen/screen.h>
#include "GamePadObserver.h"

static QAbstractEventDispatcher::EventFilter previousEventFilter = 0;
static GamePadObserver _gamePadObserver;

static bool bpsEventFilter(void *message)
{
    bps_event_t * const event = static_cast<bps_event_t *>(message);

    if( event && bps_event_get_domain(event) == screen_get_domain()) {
     _gamePadObserver.handleScreenEvent(event);
    }

    if (previousEventFilter)
        return previousEventFilter(message);
    else
        return false;
}

int main(int argc, char **argv)
{
    // Register GamePadObserver so we can use it in QML code
    qmlRegisterUncreatableType<GamePadObserver>("GamePadObserver", 1, 0, "GamePadObserver", "");

    QApplication app(argc, argv);

    previousEventFilter = QAbstractEventDispatcher::instance()->setEventFilter(bpsEventFilter);

    QScopedPointer<QDeclarativeView> view(new QDeclarativeView());
    view->setRenderHints(QPainter::Antialiasing | QPainter::SmoothPixmapTransform);
    view->setResizeMode( QDeclarativeView::SizeRootObjectToView );

    QDeclarativeContext *ctxt = view->rootContext();
    // Expose the observer to QML before loading the view so bindings resolve on load
    ctxt->setContextProperty("GamePad", &_gamePadObserver);
    view->setSource(QUrl("app/native/assets/main.qml"));
    view->showFullScreen();

    return app.exec();
}
And finally, our GamePadObserver class is ready to deliver gamepad events. To handle them in QML, you can use the GamePadObserver class as below.
        
        Connections{
            target: GamePad
            onButtonPressed: {
                                
                if(button == GamePadObserver.DPAD_UP_BUTTON
                || button == GamePadObserver.DPAD_DOWN_BUTTON ){
                    handleUpButton();
                }else if(button ==  GamePadObserver.DPAD_LEFT_BUTTON 
                || button ==  GamePadObserver.DPAD_RIGHT_BUTTON) {
                    handleLeftButton();
                }          
                else if(button == GamePadObserver.MENU1_BUTTON                 
                || button == GamePadObserver.MENU2_BUTTON
                || button == GamePadObserver.X_BUTTON
                || button == GamePadObserver.Y_BUTTON 
                || button == GamePadObserver.A_BUTTON 
                || button == GamePadObserver.B_BUTTON ) {
                    handleButtonSelection();
                } 
            }        
        }
I hope this helps you utilize gamepad events in your games.