
Using OpenGL ES in an AIR native extension on iPad


Hi,

 

I am trying to find out whether I can render images using the iPad GPU from a native extension loaded by an AIR application on iPad. The main reason is that I need every bit of GPU power in my AIR application and I cannot use Stage3D (see http://forums.adobe.com/thread/1267084 for details).

 

The idea is to pass a bitmap from the ActionScript code to the native extension, render it with Objective-C code and raw OpenGL ES, and send it back to the AIR code.
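To be concrete about the plumbing, this is roughly the shape of the native function I have in mind (just a sketch, assuming the standard FRE C API from FlashRuntimeExtensions.h; renderWithOpenGL is a hypothetical helper standing in for the GL code shown further below):

#include "FlashRuntimeExtensions.h"

// Sketch: acquire the BitmapData passed from ActionScript, render into its
// pixel buffer, invalidate the modified region, and release it again.
FREObject renderBitmap(FREContext ctx, void* funcData,
                       uint32_t argc, FREObject argv[])
{
    FREBitmapData bmp;
    if (FREAcquireBitmapData(argv[0], &bmp) != FRE_OK)
        return NULL;

    // Hypothetical helper: render offscreen with OpenGL ES and copy the
    // result into bmp.bits32 (BitmapData pixels are 32-bit ARGB words).
    renderWithOpenGL(bmp.bits32, bmp.width, bmp.height, bmp.lineStride32);

    FREInvalidateBitmapDataRect(argv[0], 0, 0, bmp.width, bmp.height);
    FREReleaseBitmapData(argv[0]);
    return NULL;
}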

 

Is this technically possible? I am afraid that the AIR runtime uses OpenGL ES for its own needs (at least for Stage3D support), so a native extension possibly cannot share OpenGL with it.
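If sharing is possible at all, I assume the extension has to leave the runtime's GL state exactly as it found it; something along these lines (just a sketch):

// Save whatever is current before touching anything:
EAGLContext* previousContext = [EAGLContext currentContext];
GLint previousFBO = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &previousFBO);

// ... bind my own framebuffer, draw, glReadPixels ...

// ... and restore it all afterwards:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, (GLuint)previousFBO);
[EAGLContext setCurrentContext:previousContext];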

 

Nevertheless, I gave it a try. Here is some code:

 

The first strange thing is that the following initialization code:

 

myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:myContext];

 

does not make much sense to me. Instead, I can see that EAGLContext already contains an instance previously set up by someone else (maybe the AIR runtime did it), and I was able to get an image only when I did not create this context at all. So these two lines are actually commented out in my test app.
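What seems safer to me (just a sketch, not what my test app currently does) is to reuse whatever context is already current and only create one as a fallback. Note that the fixed-function calls below belong to OpenGL ES 1.1, so a self-created context would presumably need kEAGLRenderingAPIOpenGLES1 rather than ES2:

EAGLContext* current = [EAGLContext currentContext];
if (current != nil) {
    myContext = current;    // reuse the context the runtime already set up
} else {
    myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    [EAGLContext setCurrentContext:myContext];
}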

 

Here is how I initialize the framebuffer:

 

// Create an offscreen FBO with a single RGBA8 color renderbuffer:
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glGenRenderbuffersOES(1, &colorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
// GL_RGBA8_OES comes from the OES_rgb8_rgba8 extension:
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);

 

I do not need 3D, so I am not creating a depth buffer. Instead I need to render a lot of 2D polygons, and the drawing order is fine for me.
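One check that is probably worth adding here (a sketch, not in my test app) is verifying that the framebuffer is complete before drawing:

GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"Framebuffer is incomplete: 0x%04x", status);
}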

 

Then I tried the following code to render a single triangle specified in the vertexData array:

 

// Set up a simple 2D orthographic projection:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);

glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// vertexData contains 2 float coordinates followed by 4 bytes of color for
// each vertex, so the stride is 12 bytes per vertex:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 12, vertexData);

// The following two lines cause the whole screen to be filled with a random,
// semi-transparent gradient. If I comment them out, it renders the black
// triangle that I actually expect to get:
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 12, (vertexData + 8));

// Draw the triangle:
glDrawArrays(GL_TRIANGLES, 0, 3);

// Read back the final image (each RGBA row is a multiple of 4 bytes,
// so a pack alignment of 4 is safe):
glPixelStorei(GL_PACK_ALIGNMENT, 4);
NSInteger datalength = width * height * 4;
GLubyte* rawdata = (GLubyte*)malloc(datalength * sizeof(GLubyte));
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rawdata);
// At this point the rawdata array contains an image that I am able to
// convert and send back to my AS code.
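By "convert" I mean roughly the following (a sketch reusing the bmp descriptor from the FRE sketch above; it ignores premultiplied alpha for brevity):

// Hypothetical conversion: glReadPixels returns bottom-up RGBA rows, while
// BitmapData.bits32 holds ARGB words. Whether the vertical flip is needed is
// an assumption here (FREBitmapData2 exposes an isInvertedY flag for this),
// and lineStride32 is assumed to be in units of uint32_t values per row.
for (int row = 0; row < height; row++) {
    const GLubyte* src = rawdata + (height - 1 - row) * width * 4;
    uint32_t* dst = bmp.bits32 + row * bmp.lineStride32;
    for (int col = 0; col < width; col++) {
        const GLubyte* p = src + col * 4;   // p[0..3] = R, G, B, A
        dst[col] = ((uint32_t)p[3] << 24) | ((uint32_t)p[0] << 16) |
                   ((uint32_t)p[1] << 8)  |  (uint32_t)p[2];
    }
}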

 

So each time I try to specify per-vertex colors, the whole iPad screen gets filled with some random gradient. I have also tried setting a constant color with glColor, and it causes the same effect. Disabling lighting and fog did not help.
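One thing I am not sure about is the pointer arithmetic: if vertexData is declared as a GLfloat*, then (vertexData + 8) advances by 8 floats (32 bytes) rather than 8 bytes, so the color pointer would read garbage. An explicit interleaved struct would rule that out (a sketch with hypothetical names):

typedef struct {
    GLfloat x, y;          // 8 bytes of position
    GLubyte r, g, b, a;    // 4 bytes of color
} Vertex;                  // 12 bytes total, matching the stride above

Vertex triangle[3] = {
    { -0.5f, -0.5f,   0, 0, 0, 255 },
    {  0.5f, -0.5f,   0, 0, 0, 255 },
    {  0.0f,  0.5f,   0, 0, 0, 255 },
};

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &triangle[0].x);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), &triangle[0].r);
glDrawArrays(GL_TRIANGLES, 0, 3);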

 

 

So my main question is the following: is it technically possible to render an offscreen image in a native extension using OpenGL ES?

Or is the black triangle that I did manage to get rendered only by accident, and should the whole thing not work at all?

