
Soulstorm

macrumors 68000
Original poster
Perhaps you are getting tired of me starting new threads, but I thought I should start a new topic since this has nothing to do with memory management; it's more about performance and optimization.

I am still building Image Filterizer as an exercise. However, I am no longer using the same approach, so you may need to redownload the project.

I am using a Core Image filter called Disc Blur (CIDiscBlur). I apply the filter to the image with a value above 20, and it is SLOW. That's OK if it's a heavy filter. But when I try to scroll the view or resize the window, it is really slow too, as if the same filter were being applied over and over again. Can you tell me why this is happening?

I thought that by applying the filter to the image once and making the result my main image to be drawn, I would avoid forcing the processor to reapply the filters, saving memory and processor resources. However, I see that this isn't the case.

Here is the project. Any recommendations?
 

Attachments

  • Image Filterizer.zip
    53.1 KB · Views: 192

kainjow

Moderator emeritus
From the docs:

Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe.” A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This “lazy evaluation” method allows Core Image to operate as efficiently as possible.

So it'd probably be better to create an NSImage from the CIImage and draw that instead.
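
Something like this should do the trick (a minimal sketch; filteredImage stands for whatever CIImage your filter chain produces):

Code:
// wrap the CIImage in an NSCIImageRep and hang that
// representation on a fresh NSImage
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:filteredImage];
NSImage *nsImage = [[NSImage alloc] initWithSize:[rep size]];
[nsImage addRepresentation:rep];

One caveat: NSCIImageRep still defers rendering until draw time, so on its own this may not actually cache the filtered pixels.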
 

Soulstorm

macrumors 68000
Original poster
So it'd probably be better to create an NSImage from the CIImage and draw that instead.

Hm... I used Core Image directly because there isn't any obvious bridge between CIImage and NSImage. It seems I must find a way to create an NSImage object from a CIImage, and to go the other way around as well...
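
For the other direction, the route I'm going to try looks like this (a sketch, error handling omitted; nsImage is just an illustrative name):

Code:
// one way to get a CIImage out of an NSImage: go through a bitmap rep
NSData *tiffData = [nsImage TIFFRepresentation];
NSBitmapImageRep *bitmap = [NSBitmapImageRep imageRepWithData:tiffData];
CIImage *ciImage = [[CIImage alloc] initWithBitmapImageRep:bitmap];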
 

Soulstorm

macrumors 68000
Original poster
Unfortunately...

I used NSImage and it didn't make any difference... Am I doing something wrong? I load the file as an NSImage; then, for each filter, I use Core Image. I then convert the resulting CIImage back to an NSImage and display it in the NSImageView. However, I see no change in performance.
 

Attachments

  • Image Filterizer.zip
    55.1 KB · Views: 153

Soulstorm

macrumors 68000
Original poster
Hm... So let me get this straight.

To start, I have an NSImage. I convert it to a CIImage in order to apply some filters. Then I need to make an NSBitmapImageRep from the CIImage and add that representation to the NSImage that will be displayed in the NSImageView? And I do that using an NSGraphicsContext?
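
Something like this is what I have in mind (a rough sketch of what I mean; width and height would be the image's pixel dimensions, and filteredCIImage the filter output):

Code:
// render the CIImage into a fresh NSBitmapImageRep through an
// NSGraphicsContext (this is the VRAM -> RAM copy)
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];

[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[filteredCIImage drawAtPoint:NSZeroPoint
                    fromRect:NSMakeRect(0, 0, width, height)
                   operation:NSCompositeSourceOver
                    fraction:1.0];
[NSGraphicsContext restoreGraphicsState];

NSImage *finalImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[finalImage addRepresentation:rep];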
 

cblackburn

macrumors regular
The problem you have is that CIImages are calculated on the graphics card. To draw them in an NSImageView, you then have to copy them from VRAM into an allocated chunk of RAM, and then draw them back into VRAM to show them on screen. This is slow not only because there are multiple copy operations, but because you were never supposed to do this, so the path is not optimised: instead of doing the whole VRAM -> RAM copy first and then RAM -> VRAM, it may move one chunk at a time through each operation. Very slow indeed.

Also, there is a bug in the Core Image framework where copying an image from VRAM to RAM leaks memory, a lot of memory, equivalent to the size of the image. I wrote a program that did this with video from the iSight and it leaked about 250MB per second. Bear this in mind if you do it.

If you are only interested in showing it to the user, then keep it in VRAM and render it using an NSOpenGLView subclass (there is a good example here: http://developer.apple.com/samplecode/WhackedTV/listing9.html).
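
The heart of such a view comes out roughly like this (a sketch only, with ciContext and filteredImage as ivars; the sample shows the full setup, including the viewport and projection):

Code:
// draw a CIImage from inside an NSOpenGLView subclass,
// so the pixels never leave the graphics card
- (void)drawRect:(NSRect)rect
{
    [[self openGLContext] makeCurrentContext];

    if (ciContext == nil) {
        // build a CIContext tied to this view's OpenGL context, once
        ciContext = [[CIContext contextWithCGLContext:CGLGetCurrentContext()
                                          pixelFormat:[[self pixelFormat] CGLPixelFormatObj]
                                              options:nil] retain];
    }

    glClear(GL_COLOR_BUFFER_BIT);
    [ciContext drawImage:filteredImage
                 atPoint:CGPointZero
                fromRect:[filteredImage extent]];
    [[self openGLContext] flushBuffer];
}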

If you want to do other pixel-level alterations without using a CIFilter, then you will have to copy the data down into an NSBitmapImageRep, but beware of the bug mentioned above.

HTH

Chris
 

Soulstorm

macrumors 68000
Original poster
Got it. Thanks a lot for the information; I will put it to good use. However, I have a question.

Why would I make that move from VRAM to RAM? That copy will happen while applying the filter, but only once. After that, when resizing the window or moving the scroll view, only the NSImage is asked to redraw, and that is already in RAM. I am not calling anything that would require the graphics card to intervene.

So why does resizing take so much processor time? Is it because of the bug you mentioned, that a memory leak has been created? And does that bug still exist in Leopard?
 

WeeBull

macrumors newbie
Why would I make that move from VRAM to RAM?

OK, Core Image works by applying the filters to the image on the GPU. (By the way, you haven't mentioned what GPU you have; that will make a major difference to the speed.)

Speed is retained by keeping the information on the GPU and in its VRAM. Copying data back is slow, as Chris says, especially if the image is large (it sounds like it must be, as you're scrolling around it).

Creating an NSBitmapImageRep, or drawing into an NSImageView, will create a host copy (i.e. one on the CPU side), because these are not GPU-based class families. NSImage has (I think) been extended to contain CIImages, so creating an NSImage from a CIImage probably doesn't have a high cost in itself, but you only do it in order to do something like draw into an NSImageView, so the cost shows up somewhere in the chain of events.

By keeping the CIImage, and using an NSOpenGLView, everything stays on the GPU, so no speed cost, and no memory leak.

Two other points:

1) Large images will be slower (obvious, but bear it in mind)
2) Changing inputs requires things to be recalculated, and defeats caching.

The second one is important.

Say you've got your image going through a blur. If you call [blurFilter setValue:x forKey:@"blurRadius"] in drawRect:, then every time the image is redisplayed the filter will re-blur the whole image. If you don't, it can reuse the image from the last time round.

All you want in your drawRect: method is the draw call and no other messing with the filter chain. If you're doing video or animation that will cause slowdown, but the filter updates should still be elsewhere in your code so they're only done when necessary.

In my (limited) experience this is far bigger than host<->GPU transfers.
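
In code, the shape you want is roughly this (blurFilter, filteredImage and setBlurRadius: are just illustrative names):

Code:
// update the filter only when an input actually changes...
- (void)setBlurRadius:(float)radius
{
    [blurFilter setValue:[NSNumber numberWithFloat:radius]
                  forKey:@"inputRadius"];
    [filteredImage release];
    filteredImage = [[blurFilter valueForKey:@"outputImage"] retain];
    [self setNeedsDisplay:YES];
}

// ...so that a redisplay just draws the cached result
- (void)drawRect:(NSRect)rect
{
    [filteredImage drawInRect:[self bounds]
                     fromRect:NSRectFromCGRect([filteredImage extent])
                    operation:NSCompositeCopy
                     fraction:1.0];
}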
 

Soulstorm

macrumors 68000
Original poster
The image I test it on is only 96 KB, and my system config is in my signature.

All you want in your drawRect: method is the draw call and no other messing with the filter chain. If you're doing video or animation that will cause slowdown, but the filter updates should still be elsewhere in your code so they're only done when necessary.

I am using this drawing method. I only draw the image:

Code:
-(void)drawRect:(NSRect)rect
{
	NSLog(@"redrawing now...");
	[theImage drawInRect:[self bounds] fromRect:[self bounds] operation:NSCompositeSourceOver fraction:1.0];
	
}

Say you've got your image going through a blur. If you call [blurFilter setValue:x forKey:@"blurRadius"] in drawRect:, then every time the image is redisplayed the filter will re-blur the whole image. If you don't, it can reuse the image from the last time round.
I only apply the filter once, and I create an NSImage from that object. I then draw that image in an NSImageView. No matter how long it took to create the resulting NSImage, those calculations should not have to be done again when displaying it in the NSImageView. That's why I can't understand the resulting slowness.

By keeping the CIImage, and using an NSOpenGLView, everything stays on the GPU, so no speed cost, and no memory leak.

I haven't had the time to get involved with OpenGL in Cocoa yet. When I do, I will convert my application to use an NSOpenGLView instead of an NSImageView, to see whether it handles the allocated memory more properly.

By the way, this is a very serious bug. How come Apple has not fixed this memory leak?
 

Krevnik

macrumors 601
By the way, this is a very serious bug. How come Apple has not fixed this memory leak?

You haven't filed many bugs with Apple, have you? :)

I have filed bugs where APIs Apple exposed in 10.5 weren't properly guarded and could crash an application (a malformed search predicate hard-locked an app back in the WWDC seed)... and they are still open issues.
 

WeeBull

macrumors newbie
The image I test it on is only 96 KB, and my system config is in my signature.

I missed the config; it's exactly the same as mine. But when I was talking about image size, I meant resolution rather than file size.

I am using this drawing method. I only draw the image:

Code:
-(void)drawRect:(NSRect)rect
{
	NSLog(@"redrawing now...");
	[theImage drawInRect:[self bounds] fromRect:[self bounds] operation:NSCompositeSourceOver fraction:1.0];
	
}

OK, that looks fairly minimal. So theImage is an NSImage there, right?

The only thing I'd try is setting the operation to NSCompositeCopy, unless you are actually blending one image over another.
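
That is, something like this. One other small thing: fromRect is in the image's own coordinate space, so [self bounds] only works if the image happens to be exactly the size of the view; NSZeroRect means "use the whole image":

Code:
[theImage drawInRect:[self bounds]
            fromRect:NSZeroRect   // NSZeroRect = the whole image
           operation:NSCompositeCopy
            fraction:1.0];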

I only apply the filter once, and I create an NSImage from that object. I then draw that image in an NSImageView. No matter how long it took to create the resulting NSImage, those calculations should not have to be done again when displaying it in the NSImageView. That's why I can't understand the resulting slowness.

Agreed. If I understand you correctly, and all that's happening is that you're scrolling an NSImageView with an NSImage inside it, then Core Image isn't the problem.

Might be time to get Shark out and profile your app. It's in /Developer/Applications/Performance Tools. Start it, run a debug build of your app, hit the start button in Shark, and then make your app do its slow thing. After 30 seconds Shark will stop recording, analyze for a bit, and hopefully tell you where you're spending your time.

Shark's a really good tool, and worth learning how to use. Sometimes it's not the thing you expect that's slowing you down.

By the way, this is a very serious bug. How come Apple has not fixed this memory leak?

I personally hadn't noticed it. A lot of the time you don't need to create host copies of images, so it's no problem. It shouldn't be what's causing your slowdown anyway, since you're only doing one conversion.
 