
ChristianVirtual

I have the following code, which works well with dimensions that are multiples of 480x320. But my OpenGL scene is 600x600, and based on the screen layout I don't want to change that.

Is it possible to encode in (kind of) arbitrary dimensions, or are we bound to (480x320)×n?

Code:
   // self.assetWriter is assumed to have been created earlier with
   // -[AVAssetWriter initWithURL:fileType:error:]; only the setup is shown here
   self.videoSize = self.view.bounds.size;
   //   self.videoSize = CGSizeMake(1280 , 780);

   NSMutableDictionary *outputSettings = [[NSMutableDictionary alloc] init];
   [outputSettings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
   [outputSettings setObject:[NSNumber numberWithInt:self.videoSize.width] forKey:AVVideoWidthKey];
   [outputSettings setObject:[NSNumber numberWithInt:self.videoSize.height] forKey:AVVideoHeightKey];

   self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
   self.assetWriterInput.expectsMediaDataInRealTime = YES;

   // the CF keys need an (id) cast when used as NSDictionary keys under ARC
   NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                          [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                                                          [NSNumber numberWithInt:self.videoSize.width], (id)kCVPixelBufferWidthKey,
                                                          [NSNumber numberWithInt:self.videoSize.height], (id)kCVPixelBufferHeightKey,
                                                          nil];

   self.assetWriterPixelBuffer = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

   [self.assetWriter addInput:self.assetWriterInput];

   self.assetWriter.metadata = metadata;   // metadata is assembled elsewhere

   [self.assetWriter startWriting];

   self.startTime = [NSDate dateWithTimeIntervalSinceNow:0];
   [self.assetWriter startSessionAtSourceTime:kCMTimeZero];

   self.videoIsRecording = YES;

I write each frame with the following code:
Code:
   CVPixelBufferRef pb = NULL;

   CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, [self.assetWriterPixelBuffer pixelBufferPool], &pb);
   if ((pb == NULL) || (status != kCVReturnSuccess))
   {
      CLLog(@"error with frame %d", status);
      self.startTime = nil;   // nil, not NULL, for an NSDate property
      return;
   }
   else
   {
      CVPixelBufferLockBaseAddress(pb, 0);
      GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pb);

      // Note: this assumes CVPixelBufferGetBytesPerRow(pb) == width * 4.
      // glReadPixels writes tightly packed rows, so any row padding in the
      // pixel buffer will skew the image. (GL_BGRA reads come from the
      // EXT_read_format_bgra extension on iOS.)
      glReadPixels(0, 0, self.videoSize.width, self.videoSize.height, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferData);

      // timescale 120 gives 1/120 s resolution for presentation timestamps
      CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:_startTime], 120);

      if (![_assetWriterPixelBuffer appendPixelBuffer:pb withPresentationTime:currentTime])
      {
         CLLog(@"Problem at time: %lld", currentTime.value);
      }

      CVPixelBufferUnlockBaseAddress(pb, 0);
      CVPixelBufferRelease(pb);
   }

The conversion from RGBA to BGRA seems OK; it's really just the dimensions (I guess).
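
One plausible cause of the skew (an aside, not confirmed in the thread): Core Video often rounds each pixel-buffer row up to an alignment boundary, so CVPixelBufferGetBytesPerRow(pb) can exceed width * 4. At 600x600, 600 * 4 = 2400 bytes is not 64-byte aligned, while 480 * 4 and 1280 * 4 both are; since glReadPixels writes tightly packed rows, each row then drifts a little further. A minimal padding-tolerant sketch, reusing pb and videoSize from the code above:
Code:
   // Sketch (assumption, not the poster's code): copy row by row so any
   // Core Video row padding is respected
   size_t width  = (size_t)self.videoSize.width;
   size_t height = (size_t)self.videoSize.height;
   size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pb);   // may be > width * 4

   GLubyte *packed = (GLubyte *)malloc(width * height * 4);   // tightly packed scratch buffer
   glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_BGRA, GL_UNSIGNED_BYTE, packed);

   CVPixelBufferLockBaseAddress(pb, 0);
   GLubyte *dest = (GLubyte *)CVPixelBufferGetBaseAddress(pb);
   for (size_t row = 0; row < height; row++)
   {
      memcpy(dest + row * bytesPerRow, packed + row * width * 4, width * 4);
   }
   CVPixelBufferUnlockBaseAddress(pb, 0);
   free(packed);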

With 600x600 as it stands, I get the following:
https://www.youtube.com/watch?v=g_osJC5fTG8

If I choose something like 1280x780 instead, I get proper output, except it's not cropped the way I want:

https://www.youtube.com/watch?v=luhc0xleMZ8

Any ideas?
 
Can you elaborate on what you mean by your OpenGL scene being 600x600? Are you rendering to an offscreen buffer that has the dimensions 600x600?

I'm assuming you are trying to display a 600x600 OpenGL buffer at a different resolution. Once the contents have been rendered by OpenGL to that buffer, they cannot magically be converted to 480x320 without cropping or some other distortion.

You can do that quite easily in OpenGL by rendering to a texture and then drawing that texture onto a quad, but it appears you are using glReadPixels to create a pixel buffer and simply presenting that.

You might have an easier time changing the dimensions of the buffer into which your geometry is rendered. That is easily accomplished by resizing the buffer and adjusting the OpenGL viewport to match.
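
A minimal sketch of that resize, assuming an ES 2.0 context where you manage the renderbuffer yourself (_framebuffer and _colorRenderbuffer are hypothetical names):
Code:
   // Sketch: give the render target encoder-friendly dimensions and
   // match the viewport to them
   GLsizei w = 640, h = 640;   // whatever size you want to encode

   glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
   glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, w, h);   // OES_rgb8_rgba8

   glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
   glViewport(0, 0, w, h);   // map clip space onto the new buffer size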
 
Thanks, taegls, for your response.

What I'm trying to achieve is a screen recording of a GLKit view that I have in a popover.

[Screenshot: the protein scene rendered in the 600x600 popover]

I can zoom in, rotate, and change the rendering methods of those proteins. The dimensions are 600x600, partly dictated by the constraint that a popover should have a maximum width of 600. I used the same value for the height so it fits well in both portrait and landscape.

I render the scene into this 600x600 GLKView and write one frame into the AVAsset after it is rendered and before it is presented, inside the delegate method of the corresponding view controller.

What I would like to avoid is switching the UI to full screen or something like that. I might try to reduce the UI in case the user wants to record a video.

But I wonder if there is any useful transformation I can do between the GLKView's glReadPixels and writing into the video asset.
 
OK, I think I understand now. Here are some options; which one fits depends on how complex your scene is and how long it takes to render.

You can render the scene to an offscreen buffer at whatever resolution you want and have it linked to an OpenGL texture. You can then render that texture for the UI portion inside your GLKView by just drawing a quad with the texture.
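
A sketch of that setup under ES 2.0 (all names here are placeholders; the depth attachment is omitted, see the depth example further down):
Code:
   // Sketch: offscreen FBO with a texture color attachment; later, bind the
   // GLKView's framebuffer again and draw sceneTexture on a quad
   GLuint offscreenFBO, sceneTexture;
   GLsizei w = 600, h = 600;

   glGenTextures(1, &sceneTexture);
   glBindTexture(GL_TEXTURE_2D, sceneTexture);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

   glGenFramebuffers(1, &offscreenFBO);
   glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);
   glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTexture, 0);

   if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
      NSLog(@"offscreen FBO incomplete");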

Or, if your rendering is optimized and the scene isn't that complex, you can just render the scene twice: every time you render the scene to the UI, render it again to a separate offscreen framebuffer at whatever resolution you want.

Using an OpenGL texture you might also be able to bypass glReadPixels, which is an extremely slow, blocking call. I believe iOS has a way to tie an OpenGL texture to an IOSurface that can then be used for saving; I know the equivalent exists in Cocoa on OS X.
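
On iOS that route appears to be the CVOpenGLESTextureCache API: an IOSurface-backed CVPixelBuffer is wrapped as a GL texture, you render into it, and the same buffer can go straight to the writer. A rough sketch, assuming an EAGLContext named glContext, with error checks trimmed:
Code:
   // Sketch: render directly into an IOSurface-backed pixel buffer, so no
   // glReadPixels is needed; glContext is a hypothetical EAGLContext
   CVOpenGLESTextureCacheRef textureCache;
   CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, glContext, NULL, &textureCache);

   NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
   CVPixelBufferRef pixelBuffer;
   CVPixelBufferCreate(kCFAllocatorDefault, 600, 600, kCVPixelFormatType_32BGRA,
                       (__bridge CFDictionaryRef)attrs, &pixelBuffer);

   CVOpenGLESTextureRef texture;
   CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                pixelBuffer, NULL, GL_TEXTURE_2D,
                                                GL_RGBA, 600, 600, GL_BGRA,
                                                GL_UNSIGNED_BYTE, 0, &texture);

   // attach as the FBO color target; after rendering, hand pixelBuffer
   // to the pixel buffer adaptor as before
   glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          CVOpenGLESTextureGetTarget(texture),
                          CVOpenGLESTextureGetName(texture), 0);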

If you don't want to do any of that and just want to transform the rendered image, you can try re-rendering the 600x600 image onto a quad using an orthographic projection at whatever resolution you want. That will center the image, but you will still have quality issues (e.g., when using a resolution greater than 600x600), and you will get black bands on the sides since the image is centered.
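
For instance, a sketch of centering the 600x600 scene texture in a hypothetical 1280x720 target using GLKit's ortho helper:
Code:
   // Sketch: orthographic pass that centers the 600x600 quad in a larger
   // target; everything outside the quad remains as bands
   GLsizei targetW = 1280, targetH = 720;   // assumed example encode size
   glViewport(0, 0, targetW, targetH);
   GLKMatrix4 projection = GLKMatrix4MakeOrtho(0, targetW, 0, targetH, -1, 1);

   GLfloat x0 = (targetW - 600) / 2.0f;
   GLfloat y0 = (targetH - 600) / 2.0f;
   // upload `projection` as the MVP uniform and draw the textured quad
   // from (x0, y0) to (x0 + 600, y0 + 600)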

If you have any questions about what I mentioned let me know.
 
I might have to give the offscreen buffer another try. How can I create an offscreen buffer within the GLKit framework? When I tried on my own I had difficulty getting a 24-bit depth test working; I only got 16 bits, which is not enough for those little atoms.
With GLKit I was able to get 24 bits working (with RGBA8888 for color), which seems sufficient for the real-world data I'm trying to visualize.
 
You create an offscreen buffer the same way you normally would; just make sure to create and use it while the OpenGL context for the GLKView is active.
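
A sketch of such a buffer with a 24-bit depth attachment (GL_DEPTH_COMPONENT24_OES comes from the OES_depth24 extension, which iOS devices support; glkView is a hypothetical reference to your GLKView):
Code:
   // Sketch: offscreen FBO with a 24-bit depth renderbuffer, created while
   // the GLKView's context is current
   [EAGLContext setCurrentContext:glkView.context];

   GLuint fbo, colorRB, depthRB;
   GLsizei w = 600, h = 600;

   glGenFramebuffers(1, &fbo);
   glBindFramebuffer(GL_FRAMEBUFFER, fbo);

   glGenRenderbuffers(1, &colorRB);
   glBindRenderbuffer(GL_RENDERBUFFER, colorRB);
   glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, w, h);   // OES_rgb8_rgba8
   glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRB);

   glGenRenderbuffers(1, &depthRB);
   glBindRenderbuffer(GL_RENDERBUFFER, depthRB);
   glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24_OES, w, h);   // OES_depth24
   glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRB);

   if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
      NSLog(@"offscreen FBO incomplete");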
 
OK, it finally worked out with the offscreen buffer. I create it at a proper size for encoding while keeping the on-screen view at the size I wanted.

As a positive side effect, I can use the same logic for an external screen. On the negative side, I have to render twice; luckily the iPad mini is fast enough.

I still need to position the image better, but the main problem is solved. Thanks!

 