Channel: Processing Forum

Blending very large images: performance, memory

I need to alpha blend 2700x1600 images at runtime.  Not totally surprisingly, I'm having problems keeping my framerate up.  I've tried doing the blending on an offscreen buffer, but that hasn't seemed to help much, if at all.  Are there any general strategies for working with images this large that might apply to this situation?  Something to leverage the graphics card as much as possible, perhaps?

I'm also having a bit of trouble managing memory; each of these images uncompressed is ~17MB (2700x1600x4 bytes), and there are a total of ~60 images I'll be blending (not all simultaneously!).  I have strategies in mind for the memory issue that are outside the scope of this question, but I include it here in case there is a clever way to balance memory usage between the computer's memory and the graphics card.
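Just to put concrete numbers on that estimate, here's a quick back-of-the-envelope in plain Java (the class and method names are mine, purely for illustration):

```java
public class MemoryBudget {
    // Uncompressed footprint of one RGBA image, in bytes.
    static long bytesFor(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel;
    }

    public static void main(String[] args) {
        long perImage = bytesFor(2700, 1600, 4);   // 17,280,000 bytes (~17 MB)
        long allImages = perImage * 60;            // if every image were resident at once
        System.out.println(perImage + " bytes per image");
        System.out.println(allImages / (1024 * 1024) + " MiB for all 60");
    }
}
```

So keeping all 60 decoded at once would be on the order of a gigabyte, which is why I won't be holding them all in memory simultaneously.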

Based on my digs into the source, it does seem that Processing (PGraphicsOpenGL specifically) is using JOGL internally for PApplet.image(), so that optimization at least is already happening.

Advice in raw JOGL is not as nice as Processing code, but I can probably thrash my way through it if there's a JOGL route.

Basic strategy (very simplified, but complete enough to run) as of right now is:

  PImage a, b;
  PGraphics buffer;

  void setup() {
    size(2700, 1600, OPENGL);
    a = loadImage("imageA.png");
    b = loadImage("imageB.png");
    buffer = createGraphics(width, height, OPENGL);
  }

  void draw() {
    buffer.beginDraw();
    buffer.tint(255, 200);   // a at ~78% opacity
    buffer.image(a, 0, 0);
    buffer.tint(255, 100);   // b at ~39% opacity
    buffer.image(b, 0, 0);
    buffer.endDraw();
    image(buffer, 0, 0);
  }
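For what it's worth, my understanding of what tint(255, alpha) does per pixel is a standard source-over blend on each channel, which is ~4.3 million pixels of this arithmetic per image per frame if done on the CPU. A minimal standalone Java sketch of the per-channel math (names are mine, not Processing's):

```java
public class AlphaBlend {
    // Source-over blend of one 8-bit channel: src drawn at the given alpha over dst.
    static int blendChannel(int src, int dst, int alpha) {
        return (src * alpha + dst * (255 - alpha)) / 255;
    }

    public static void main(String[] args) {
        // A white pixel (255) drawn with tint(255, 200) over black (0):
        System.out.println(blendChannel(255, 0, 200));   // 200
        // Then a mid-gray pixel (128) with tint(255, 100) over that result:
        System.out.println(blendChannel(128, 200, 100)); // 171
    }
}
```

This is exactly the kind of per-pixel work I'm hoping the graphics card is already doing for me via the OPENGL renderer.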

