I'm restricting touchesMoved: handling to the frame of a UIImageView, so if the touch drags outside of the image, the handler no longer returns a value.
However, CGRectContainsPoint is giving strange results. When this UIImageView subclass is added to the main view, the image's frame appears to be offset relative to the main view, leading to inaccurate and undesirable results.
Code:
CGPoint touchPoint = [touch locationInView:self];
if (CGRectContainsPoint(self.frame, touchPoint))
{
//this doesn't work. inaccurate and undesirable results
}
if ((touchPoint.x >= 0) && (touchPoint.x <= self.frame.size.width) && (touchPoint.y >= 0) && (touchPoint.y <= self.frame.size.height))
{
//this works and is precise.
}
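For reference, here is a minimal sketch of the whole handler as I have it (assuming the code lives in the UIImageView subclass's touchesMoved:withEvent:). The self.bounds test here is simply the manual width/height comparison above written as a single CGRectContainsPoint call, not necessarily the final fix:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // locationInView:self gives the point in this view's own coordinate space
    CGPoint touchPoint = [touch locationInView:self];

    // self.bounds is also in the view's own coordinate space, so this is
    // the same test as the manual width/height comparison above
    if (CGRectContainsPoint(self.bounds, touchPoint))
    {
        // touch is still inside the image view
    }
}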
I would like to know why this offset is happening with CGRectContainsPoint.