
Irregular touch detection, when CGRect is not enough – part 1

This is the first of three posts about handling hit tests in irregularly shaped areas. We’ll provide the point to be tested with a touch and give the user some simple feedback via a label if they hit anything interesting. This first approach involves building a map of the hit areas in Photoshop, saving it as a Raw file and then using that data as a lookup (tools such as Gimp can also be used if they can save in this format).

A nice side effect of this approach is that it moves the creation of the hit areas over to the creators of the content; the hotspot maps are stored within the Photoshop PSD files. If the image or the hit areas need changing, just load in a new image and data file.

It may help to read Create Data in Photoshop first.

There are a number of techniques that can be employed to make the process more efficient; however, I’m going to keep this simple and not worry about optimisation for the moment.

To demonstrate I’ll use this 800×600 image of a Piranha taken at Newquay aquarium with my 3GS.

Fig 1. The source image - 800x600 Piranha

The current setup of one byte per pixel gives us 256 possible values for each point. For this example I only need four values: three will be hotspot areas of interest and the fourth will indicate that nothing is selected.

These are decided in advance to make things simple:

  • 1 – piranha
  • 2 – eye
  • 3 – pointy teeth
  • 255 – nothing selected

Why 255 for nothing selected? Well, the approach involves setting the red value of the fill color to the value of the ID. If we use 255 (hex FF) I can use white to fill any area that is not selectable.
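If it helps, these IDs could be pinned down as constants in code (a sketch; the names are mine, not from the sample project):

// Hotspot IDs - each value doubles as the red channel of the
// fill color used to paint that area in Photoshop
#define kHotspotPiranha  1
#define kHotspotEye      2
#define kHotspotTeeth    3
#define kHotspotNone     255   // white (FF), nothing selected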

Creating a hotspot map in Photoshop

Fig 2. Selecting a color to map to an area

For each hotspot we need to create a filled area using the ID of the hotspot as the red value. The green and blue values can be adjusted to make the fill color more distinctive; these will be discarded when we convert the Raw file later on. Here we are creating the fill color for the piranha using 1 as the red value.

Fig 3. Creating a layer for each new hotspot

To make things easier I create a separate layer for each hotspot in Photoshop and then start painting in the fish with the color I set in the step above. Use whatever tools you like to paint in the area, but don’t feather the edges; we need a solid block of color.

Turning the layer’s visibility down helps while working on the area, but remember to put it back up to 100% before you output the file or you will not get the values you expect in the lookup.

I use the lasso tool and then fill with the color; mistakes can be lassoed and deleted.

Fig 4. Painting in the Piranha

Fig 5. All the hotspots have now been added

Here are the final layers, with the addition of a white layer so any non-hotspot area will return a value of 255 (FF).

Fig 6. The output before being saved to Raw

Create the data file

Save the file as Raw output; I named the file 20110705_piranah_hotspot.raw

Open up the terminal and run the file through convert.py (included in the download); see Create Data in Photoshop if you have any issues.

>python convert.py 20110705_piranah_hotspot.raw

This will give us the data file we need to continue: _20110705_piranah_hotspot.raw

Accessing the value at a point

Now we have our data, but it’s been transmogrified into a long block of bytes. We need a way of converting an (x, y) position into a position within the data so we can return the value at that point.

For example, take an image 8 pixels wide by 5 pixels high:

8x5 grid with 3 example touch points

When saved as a Raw file it becomes a sequence of 40 bytes. In the second diagram I’ve colored the rows to make the translation clearer.

The 8x5 grid becomes a sequence of 40 Bytes (40x1)

Converting the (x, y) point of the 8 by 5 grid into an offset into the 40-byte array is simple. If we take point C we can see that there are 4 complete rows above the selected point; 4 is also our y value. Taking the row that our selected point is on, we can see that there are 3 full pixels before the selected point; this is our x value. Using these values we can determine that the offset into the data is:

Position in data = y * row width + x;

For point C: 4 * 8 + 3 = 35
For point B: 2 * 8 + 7 = 23
For point A: 0 * 8 + 1 = 1
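As a quick sanity check, here’s the formula as a tiny helper (a minimal sketch, assuming row-major data with a top-left origin; the function name is my own):

// Offset of (x, y) into the raw byte array
static inline int offsetForPoint(int x, int y, int rowWidth)
{
    // the bytes in the complete rows above the point, plus the
    // pixels before it on its own row
    return (y * rowWidth) + x;
}

// offsetForPoint(3, 4, 8) == 35   point C
// offsetForPoint(7, 2, 8) == 23   point B
// offsetForPoint(1, 0, 8) == 1    point A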

A Cocos2d example

I’m going to use cocos2d to demo this, but the method is pretty much the same whatever you’re using. Touch handling and node spaces are covered rather nicely by Bob Ueland; if there is any confusion, check out the following posts: The magic of node spaces and How to detect a touch.

This is a simple example of use, not a best-practice programming guide. I’m adding everything to the HelloWorldLayer class that is created when you use the standard cocos2d template (I’m using Xcode 4). The attached file contains a commented working example, but I’ll cover some of the interesting bits here.

First, a method to return a value when given an (x, y) coordinate, using what we learnt above.

-(int)getValueAtPoint:(CGPoint)pt
{
  // set the value to 255, the default no-hotspot-selected value
  int retValue = 255;
 
  // Check that the image lookup data is present and the point is within
  // the bounds of the image
  if(self.lookupData && CGRectContainsPoint(CGRectMake(0, 0, IMAGE_WIDTH, IMAGE_HEIGHT), pt)) 
  {
    // The raw data assumes a top-left origin, cocos2d a bottom-left
    // one, so flip the y value
    pt.y = (IMAGE_HEIGHT - pt.y) - 1;  
 
    //NSLog(@"pt: %@", NSStringFromCGPoint(pt));
    // calculate an offset, truncating the point to whole pixels first
    int offset = (int)pt.y * IMAGE_WIDTH + (int)pt.x;
 
    // get the single byte at the offset
    NSRange range = {offset, sizeof(Byte)};
    NSData *pixelValue = [self.lookupData subdataWithRange:range];  
 
    // read it back as a Byte - reading it as an int would pull in
    // three bytes beyond the one we copied
    retValue = *(const Byte *)[pixelValue bytes];
  }
  return retValue;
}

Here’s the touches-began method. For demonstration purposes I’ve just added a simple switch statement; in a real-world example the IDs of the areas, along with any actions/labels etc., would probably be fed in via XML.

-(void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // get a touch
    UITouch *touch = [touches anyObject];
 
    // get a touch in the UIKit coordinate system
    CGPoint loc=[touch locationInView:[touch view]];
 
    // convert the UIKit coordinate to an OpenGL coordinate in
    // world space (origin in lower left corner)
    loc=[[CCDirector sharedDirector] convertToGL:loc];
 
    // node space - relative to the piranha image. Without
    // this we would get the coordinate relative to the layer,
    // not the piranha image. As the piranha image is offset
    // by 100 pixels to the right and 50 pixels up, the
    // hit areas would not be correct
    loc = [piranah convertToNodeSpace:loc];
 
    //NSLog(@"touch (%g %g)",loc.x, loc.y);
 
    int value = [self getValueAtPoint:loc];
 
    // get a reference to the label
    CCLabelTTF *label = (CCLabelTTF*)[self getChildByTag:101];
 
    // update the label
    switch (value) {
        case 1:
            [label setString:@"Piranah"];
            //NSLog(@"Piranah");
            break;
        case 2:
            [label setString:@"Beady eye"];            
            //NSLog(@"Beady eye");
            break;
        case 3:
            [label setString:@"Pointy teeth"];            
            //NSLog(@"Pointy teeth");
            break;
        default:
            [label setString:@""];                        
            break;
    }
}

The interesting part of our init method is the setting up of lookupData from the Raw file, which I have made into a property; the rest of the code is boilerplate for adding the Piranha image and touch handling.

        // get the filepath to the Raw data
        NSString *filePath = [[NSBundle mainBundle] pathForResource:@"_20110705_piranah_hotspot" ofType:@"raw"];
 
        // load it as NSData
        self.lookupData = [NSData dataWithContentsOfFile:filePath];
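For completeness, the property itself might be declared along these lines (a sketch using pre-ARC retain semantics; only the name lookupData comes from the sample):

// in the HelloWorldLayer interface - holds the raw hotspot lookup bytes
@property (nonatomic, retain) NSData *lookupData;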

Summary

  1. Pick an image to map hotspots onto
  2. Paint in coloured regions for each hotspot using the Red channel value as an ID
  3. Add a White layer to cover anything that is not a hotspot
  4. Save as Raw format
  5. Run through convert.py
  6. Add your original image and the _filename.raw file output by convert.py to your project
  7. If you use the sample code, remember to set the IMAGE_WIDTH and IMAGE_HEIGHT parameters.
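For the 800×600 Piranha image used here, that would look something like this (assuming the sample defines them as plain macros):

// Dimensions of the hotspot map - these must match the Raw file exactly
#define IMAGE_WIDTH  800
#define IMAGE_HEIGHT 600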

Download the files here: [Download not found]

10 Comments

  1. Awesome post yet again, I will definitely be using this, thanks man! :]

  2. Cheers :) Just glad to see all the time I spent finding ways around the limitations of Director and Flash wasn’t wasted ;)

  3. Nice Chris,

    Thanks for sharing.

    Mike

  4. Very useful.

    Thanks for your info

  5. Thanks for these great tutorials – very helpful.

    Craig

  6. Just wondering if this could be the method used for creating hotspots in those highly detailed images used for “find-the-object” games?

  7. @Nifty Possibly. It allows pixel precision so you could make finding an object really painful ;)

  8. @chrish can you suggest how we can detect touch in irregular shapes after applying Zoom scale?

  9. @nikhil I intended to follow this post up with some optimisations, one of which would also cover image scaling, but recent work has left me with little time.

    The existing code assumes a 1:1 relationship between each pixel in the image and the number of positions in the array. When the image size changes on the screen we still have the same sized array of data so we need to adjust for this.

    e.g. original screen size 5 px wide by 4 px high, zoomed to 10 px wide by 8 px high.

    calc ratios
    5/10 = 0.5 width
    4/8 = 0.5 height

    Therefore a touch at (9,7) in our zoomed image =

    x = 9 * 0.5 = 4.5 -> floor it to 4
    y = 7 * 0.5 = 3.5 -> floor it to 3

    (4, 3)
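    In code that adjustment might look something like this (a sketch; the origWidth/zoomedWidth names are illustrative, not from the sample project):

    // map a touch on the zoomed image back into the unscaled lookup data
    float ratioX = (float)origWidth  / zoomedWidth;   // 5 / 10 = 0.5
    float ratioY = (float)origHeight / zoomedHeight;  // 4 /  8 = 0.5
    int x = (int)floorf(loc.x * ratioX);              // 9 * 0.5 = 4.5 -> 4
    int y = (int)floorf(loc.y * ratioY);              // 7 * 0.5 = 3.5 -> 3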

  10. That’s really nice code, thanks for sharing.
    I am thinking now how transformations (scale, rotation, etc.) could be implemented on this new “touch mask”.
    Anyway, I think it is the beginning of a new cocos2d extension.
