Processing is tightly coupled to ARGB 8-bit textures via the `PImage` class hierarchy. When the project started, the only cross-platform drawing library available in the JDK was AWT/Java2D. Those APIs expose pixels as `int`s in ARGB order (`TYPE_INT_ARGB`/`TYPE_INT_RGB`), and `java.awt.Color` wraps 0–255 channels. That layout provides zero-copy access, blits via `MemoryImageSource`, and compatibility with the rest of AWT, which is used pervasively throughout the codebase.
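This is the zero-copy pattern in question; a minimal standalone sketch (plain AWT, independent of Processing) of how Java2D hands you the backing `int[]`:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class ArgbBacking {
  public static void main(String[] args) {
    // Java2D stores TYPE_INT_ARGB images as a plain int[], one pixel per int.
    BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB);

    // Grabbing the backing array is zero-copy: writes here are writes into
    // the image itself, with no conversion or upload step in between.
    int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
    pixels[0] = 0xFFFF0000; // opaque red, packed as 0xAARRGGBB
  }
}
```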
Every part of Processing assumes pixel data is stored as 32-bit ints with 8-bit channels. That assumption is baked into our public API (the `pixels` array) and spread throughout `PGraphics`, the image filters, the tessellators, the OpenGL and future WebGPU backends, the font/shape caches, and more. Because there is no abstraction around "pixel format" or "color storage", any attempt to add floating point textures would require touching practically every file, amounting to an effective soft fork.
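Concretely, the layout everything assumes is the `0xAARRGGBB` packing below; a small illustrative sketch (not an excerpt from the codebase):

```java
public class ArgbPacking {
  // Pack four 8-bit channels into the 0xAARRGGBB layout Processing assumes.
  static int pack(int a, int r, int g, int b) {
    return (a << 24) | (r << 16) | (g << 8) | b;
  }

  // Unpack; this >>>/& dance is repeated all over the codebase.
  static int alpha(int argb) { return argb >>> 24; }
  static int red(int argb)   { return (argb >> 16) & 0xFF; }
  static int green(int argb) { return (argb >> 8) & 0xFF; }
  static int blue(int argb)  { return argb & 0xFF; }

  public static void main(String[] args) {
    int c = pack(255, 255, 128, 0); // opaque orange -> 0xFFFF8000
    // No channel can ever hold a value outside 0..255, let alone a float.
    System.out.println(Integer.toHexString(c) + " r=" + red(c) + " g=" + green(c));
  }
}
```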
Deep coupling
Here are some more specific issues; a sketch of the kind of abstraction that's missing follows the list:
- `int[] pixels` is exposed as public state and can be read or written anywhere, from user sketches all the way down to renderer internals. Every consumer assumes 4 bytes per pixel with the same layout. Public fields are particularly problematic in Java as they can't really be overridden, just shadowed.

  ```java
  public int[] pixels;
  ```
  processing4/core/src/processing/opengl/PGraphicsOpenGL.java, lines 5379 to 5386 in 11f1a1e:

  ```java
  protected void allocatePixels() {
    updatePixelSize();
    if ((pixels == null) || (pixels.length != pixelWidth * pixelHeight)) {
      pixels = new int[pixelWidth * pixelHeight];
      pixelBuffer = PGL.allocateIntBuffer(pixels);
      loaded = false;
    }
  }
  ```
- Color math is hardcoded to obtuse bit manipulation and 0–255 ranges, e.g. `colorCalc` packs/unpacks ints, `PShape` duplicates this, and image blending/filtering does lots of `>>> 24` shifts and `&` masks.
  processing4/core/src/processing/core/PGraphics.java, lines 7631 to 7638 in 11f1a1e:

  ```java
  protected void colorCalc(int rgb) {
    if (((rgb & 0xff000000) == 0) && (rgb <= colorModeX)) {
      colorCalc((float) rgb);
    } else {
      colorCalcARGB(rgb, colorModeA);
    }
  }
  ```
  processing4/core/src/processing/core/PGraphics.java, lines 7680 to 7735 in 11f1a1e:

  ```java
  protected void colorCalc(float x, float y, float z, float a) {
    if (x > colorModeX) x = colorModeX;
    if (y > colorModeY) y = colorModeY;
    if (z > colorModeZ) z = colorModeZ;
    if (a > colorModeA) a = colorModeA;

    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (z < 0) z = 0;
    if (a < 0) a = 0;

    switch (colorMode) {
    case RGB:
      if (colorModeScale) {
        calcR = x / colorModeX;
        calcG = y / colorModeY;
        calcB = z / colorModeZ;
        calcA = a / colorModeA;
      } else {
        calcR = x; calcG = y; calcB = z; calcA = a;
      }
      break;

    case HSB:
      x /= colorModeX; // h
      y /= colorModeY; // s
      z /= colorModeZ; // b

      calcA = colorModeScale ? (a / colorModeA) : a;

      if (y == 0) {  // saturation == 0
        calcR = calcG = calcB = z;
      } else {
        float which = (x - (int) x) * 6.0f;
        float f = which - (int) which;
        float p = z * (1.0f - y);
        float q = z * (1.0f - y * f);
        float t = z * (1.0f - (y * (1.0f - f)));

        switch ((int) which) {
        case 0: calcR = z; calcG = t; calcB = p; break;
        case 1: calcR = q; calcG = z; calcB = p; break;
        case 2: calcR = p; calcG = z; calcB = t; break;
        case 3: calcR = p; calcG = q; calcB = z; break;
        case 4: calcR = t; calcG = p; calcB = z; break;
        case 5: calcR = z; calcG = p; calcB = q; break;
        }
      }
      break;
    }

    calcRi = (int) (255 * calcR); calcGi = (int) (255 * calcG);
    calcBi = (int) (255 * calcB); calcAi = (int) (255 * calcA);
    calcColor = (calcAi << 24) | (calcRi << 16) | (calcGi << 8) | calcBi;
    calcAlpha = (calcAi != 255);
  }
  ```
  https://github.com/processing/processing4/blob/main/core/src/processing/core/PShape.java#L3514

  processing4/core/src/processing/core/PImage.java, lines 2471 to 2480 in 11f1a1e:

  ```java
  private static int blend_blend(int dst, int src) {
    int a = src >>> 24;
    int s_a = a + (a >= 0x7F ? 1 : 0);
    int d_a = 0x100 - s_a;

    return min((dst >>> 24) + a, 0xFF) << 24 |
      ((dst & RB_MASK) * d_a + (src & RB_MASK) * s_a) >>> 8 & RB_MASK |
      ((dst & GN_MASK) * d_a + (src & GN_MASK) * s_a) >>> 8 & GN_MASK;
  }
  ```
- GPU texture uploads/downloads are always `int[]` → `IntBuffer` → `GL_RGBA`/`GL_UNSIGNED_BYTE`. See `Texture.set`/`setNative`, the `texSubImage2D(... PGL.RGBA, PGL.UNSIGNED_BYTE ...)` calls, the `rgbaPixels` conversions that only handle ARGB/RGB/ALPHA ints, and the `updatePixelBuffer` `IntBuffer` path.
  processing4/core/src/processing/opengl/Texture.java, lines 303 to 305 in 11f1a1e:

  ```java
  public void set(int[] pixels) {
    set(pixels, 0, 0, width, height, ARGB);
  }
  ```
  processing4/core/src/processing/opengl/Texture.java, lines 346 to 347 in 11f1a1e:

  ```java
  pgl.texSubImage2D(glTarget, 0, x, y, w, h, PGL.RGBA, PGL.UNSIGNED_BYTE, pixelBuffer);
  ```
  processing4/core/src/processing/opengl/Texture.java, lines 1000 to 1061 in 11f1a1e:

  ```java
  protected void convertToRGBA(int[] pixels, int format, int w, int h) {
    if (PGL.BIG_ENDIAN) {
      switch (format) {
      case ALPHA:
        // Converting from xxxA into RGBA. RGB is set to white
        // (0xFFFFFF, i.e.: (255, 255, 255))
        for (int i = 0; i < pixels.length; i++) {
          rgbaPixels[i] = 0xFFFFFF00 | pixels[i];
        }
        break;
      case RGB:
        // Converting xRGB into RGBA. A is set to 0xFF (255, full opacity).
        for (int i = 0; i < pixels.length; i++) {
          int pixel = pixels[i];
          rgbaPixels[i] = (pixel << 8) | 0xFF;
        }
        break;
      case ARGB:
        // Converting ARGB into RGBA. Shifting RGB to 8 bits to the left,
        // and bringing A to the first byte.
        for (int i = 0; i < pixels.length; i++) {
          int pixel = pixels[i];
          rgbaPixels[i] = (pixel << 8) | ((pixel >> 24) & 0xFF);
        }
        break;
      }
    } else {
      // LITTLE_ENDIAN
      // ARGB native, and RGBA opengl means ABGR on windows
      // for the most part just need to swap two components here
      // the sun.cpu.endian here might be "false", oddly enough..
      // (that's why just using an "else", rather than check for "little")
      switch (format) {
      case ALPHA:
        // Converting xxxA into ARGB, with RGB set to white.
        for (int i = 0; i < pixels.length; i++) {
          rgbaPixels[i] = (pixels[i] << 24) | 0x00FFFFFF;
        }
        break;
      case RGB:
        // We need to convert xRGB into ABGR,
        // so R and B must be swapped, and the x just made 0xFF.
        for (int i = 0; i < pixels.length; i++) {
          int pixel = pixels[i];
          rgbaPixels[i] = 0xFF000000 |
                          ((pixel & 0xFF) << 16) |
                          ((pixel & 0xFF0000) >> 16) |
                          (pixel & 0x0000FF00);
        }
        break;
      case ARGB:
        // We need to convert ARGB into ABGR,
        // so R and B must be swapped, A and G just brought back in.
        for (int i = 0; i < pixels.length; i++) {
          int pixel = pixels[i];
          rgbaPixels[i] = ((pixel & 0xFF) << 16) |
                          ((pixel & 0xFF0000) >> 16) |
                          (pixel & 0xFF00FF00);
        }
        break;
      }
    }
    rgbaPixUpdateCount++;
  }
  ```
  processing4/core/src/processing/opengl/Texture.java, lines 792 to 795 in 11f1a1e:

  ```java
  protected void updatePixelBuffer(int[] pixels) {
    pixelBuffer = PGL.updateIntBuffer(pixelBuffer, pixels, true);
    pixBufUpdateCount++;
  }
  ```
- Moving data between CPU pixels and GPU buffers is just "reinterpret int as native RGBA". `readPixels()` reads `RGBA`/`UNSIGNED_BYTE` and immediately calls `PGL.nativeToJavaARGB`; the reverse path (`drawPixels`) calls `PGL.javaToNativeARGB` before writing to the FBO.
  processing4/core/src/processing/opengl/PGraphicsOpenGL.java, lines 5389 to 5411 in 11f1a1e:

  ```java
  protected void readPixels() {
    updatePixelSize();
    beginPixelsOp(OP_READ);
    try {
      // The readPixelsImpl() call is inside a try/catch block because it appears
      // that (only sometimes) JOGL will run beginDraw/endDraw on the EDT
      // thread instead of the Animation thread right after a resize. Because
      // of this the width and height might have a different size than the
      // one of the pixels arrays.
      pgl.readPixelsImpl(0, 0, pixelWidth, pixelHeight, PGL.RGBA, PGL.UNSIGNED_BYTE,
                         pixelBuffer);
    } catch (IndexOutOfBoundsException e) {
      // Silently catch the exception.
    }
    endPixelsOp();
    try {
      // Idem...
      PGL.getIntArray(pixelBuffer, pixels);
      PGL.nativeToJavaARGB(pixels, pixelWidth, pixelHeight);
    } catch (ArrayIndexOutOfBoundsException e) {
      // ignored
    }
  }
  ```
  processing4/core/src/processing/opengl/PGL.java, lines 1646 to 1653 in 11f1a1e:

  ```java
  protected static int nativeToJavaARGB(int color) {
    if (BIG_ENDIAN) { // RGBA to ARGB
      return (color >>> 8) | (color << 24);
    } else { // ABGR to ARGB
      int rb = color & 0x00FF00FF;
      return (color & 0xFF00FF00) | (rb << 16) | (rb >> 16);
    }
  }
  ```
  ```java
  PGL.javaToNativeARGB(nativePixels, w, h);
  ```
- GL resources themselves are fixed to 8-bit RGBA: renderbuffers/textures are always allocated as `RGBA8`, and vertex colors/materials are uploaded as `UNSIGNED_BYTE` attributes.
  ```java
  pgl.renderbufferStorageMultisample(PGL.RENDERBUFFER, nsamples,
  ```
  processing4/core/src/processing/opengl/PGL.java, line 2854 in 11f1a1e:

  ```java
  public static int RGBA8;
  ```
  processing4/core/src/processing/opengl/PShapeOpenGL.java, lines 5517 to 5520 in 11f1a1e:

  ```java
  shader.setVertexAttribute(root.bufPolyVertex.glId, 4, PGL.FLOAT,
                            0, 4 * voffset * PGL.SIZEOF_FLOAT);
  shader.setColorAttribute(root.bufPolyColor.glId, 4, PGL.UNSIGNED_BYTE,
                           0, 4 * voffset * PGL.SIZEOF_BYTE);
  ```
- Even our new WebGPU backend just forwards the existing `fillR`/`fillG`/`fillB`/`fillA` values that originate from the same 8-bit `colorCalc` flow, so it inherits every limitation and would otherwise have to reimplement basically all of the existing API.

  ```java
  PWebGPU.backgroundColor(windowId, backgroundR, backgroundG, backgroundB, backgroundA);
  ```
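None of these call sites go through any indirection, so there is no single seam to change. Purely to illustrate the shape of what's missing, here is a hypothetical sketch of a pixel-format abstraction; every name in it is invented for this issue and nothing like it exists in the codebase today:

```java
import java.nio.ByteBuffer;

// Hypothetical: a storage descriptor that consumers would query instead of
// assuming "one int == one ARGB pixel".
enum PixelFormat {
  ARGB8888(4, false),  // the current implicit format, packed into int[]
  RGBA16F(8, true),    // half float, enough headroom for HDR/feedback work
  RGBA32F(16, true);   // full float

  final int bytesPerPixel;
  final boolean floatingPoint;

  PixelFormat(int bytesPerPixel, boolean floatingPoint) {
    this.bytesPerPixel = bytesPerPixel;
    this.floatingPoint = floatingPoint;
  }
}

// Hypothetical: renderers would derive their GL/WebGPU upload formats from
// format() rather than hardcoding RGBA/UNSIGNED_BYTE everywhere.
interface PixelBuffer {
  PixelFormat format();
  int width();
  int height();
  ByteBuffer data(); // raw bytes; layout defined by format()
}
```

The point isn't this particular design; it's that every coupling point listed above would have to be rewritten against some such seam before float textures become possible.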
Why is this an issue / What this limits
At a high level, while 8-bit textures may have been the norm in the early 2000s, the assumption has basically flipped. Modern GPUs execute most shading math in 32-bit lanes regardless of the source format, so even when sampling from an 8-bit texture the ALUs still get their parallelism over 32-bit registers. Unless you're explicitly using packed instructions (which is rare for generalist graphics work), the only real thing 8-bit buys you is memory bandwidth, which mostly only matters on low-end mobile. Everywhere else the industry is moving toward mixed precision (e.g. FP16 tensor cores for ML matmul ops) because compute cost is no longer tied to per-channel bit depth.
Specifically relevant to art, however, the lack of floating point textures is a severe limitation for the following techniques.
Installation art
Stuff like motion tracking, heatmap accumulation, or any "paint with time" piece depends on adding and accumulating tiny deltas every frame. In 8-bit those changes round away, so small movement never shows up and trails die instantly. Float buffers let artists integrate small movement over minutes or hours and only quantize when sending to the projector. Multi-projector setups have the same problem: feathering overlaps, gamma-correcting different units, and warping/projection-mapping content through multiple correction stages all require smooth ramps, and quantizing each stage to 256 buckets produces visible steps, which is especially noticeable in the dark rooms typical of these installs. Hardware sensor-based work (think depth cameras, environmental sensors, etc.) likewise relies on filtering and other post-processing before the data is usable in a final render for the projector, and every 8-bit intermediate stage throws precision away.
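To make the rounding concrete, here is a minimal standalone sketch (plain Java, not the Processing API) comparing a small per-frame delta accumulated into a float channel versus an 8-bit channel:

```java
public class AccumulationDemo {
  public static void main(String[] args) {
    // A faint contribution of 0.1% brightness per frame, e.g. a slow
    // motion-tracking trail integrated over time.
    float delta = 0.001f;

    float floatChannel = 0f; // float buffer: just keeps adding
    int byteChannel = 0;     // 8-bit buffer: must re-quantize to 0..255

    for (int frame = 0; frame < 500; frame++) {
      floatChannel += delta;
      // 0.001 * 255 = 0.255, which rounds back down to the old value
      // every single frame, so the trail never appears at all.
      byteChannel = Math.min(
          Math.round((byteChannel / 255f + delta) * 255f), 255);
    }

    System.out.println("float channel: " + floatChannel); // ~0.5 as expected
    System.out.println("8-bit channel: " + byteChannel);  // still 0
  }
}
```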
Generative art
Feedback is super common in generative art, i.e. read the last frame, do some texture processing, write it back, repeat. In 8-bit every read-modify-write cycle loses precision, so the loop collapses after a few iterations and colors wander off due to rounding, looking bad if they work at all. Float textures keep temporal buffers alive and stable, which is why applications like TouchDesigner or vvvv typically default to them. Popular algorithms like reaction-diffusion, fluid sims, and cellular automata also add/subtract tiny gradients every step; 8-bit quantization makes those patterns fall apart, while floats match the reference equations. Even simple particle systems need floats so that motion stays smooth and simulation state carries over correctly from frame to frame. Layering hundreds of low-alpha sprites in 8-bit jumps between "invisible" and "too strong" because the intermediate alpha values simply don't exist. And for people who care about color grading or LUT work, especially matching reference material like designer mockups (which matters in professional settings when working on a team), you need headroom: doing the math in 8-bit crushes blacks and creates midtone bands that will make your designers cry.
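The classic symptom is the fade-to-black trail that never reaches black. A minimal sketch (plain Java standing in for the per-pixel shader math) of a feedback loop that multiplies the previous frame by 0.95 each step:

```java
public class FeedbackDecayDemo {
  public static void main(String[] args) {
    float floatPixel = 200 / 255f; // float feedback buffer
    int bytePixel = 200;           // 8-bit feedback buffer

    for (int frame = 0; frame < 200; frame++) {
      floatPixel *= 0.95f;
      // The 8-bit path has to re-quantize after every feedback pass.
      bytePixel = Math.round(bytePixel * 0.95f);
    }

    // round(n * 0.95) == n for every n <= 10, so the 8-bit trail gets
    // stuck at a visible gray floor instead of fading out, while the
    // float buffer decays smoothly toward zero.
    System.out.println("float: " + floatPixel); // ~2.8e-5
    System.out.println("8-bit: " + bytePixel);  // stuck at 10
  }
}
```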
Modern 3D graphics techniques
In the more game-oriented world, TAA and other AA techniques assume you can accumulate subpixel jitter over many frames without the values collapsing. With 8-bit components, the first couple of blends push dark tones to zero and you're left with banding or ghost trails; modern engines accumulate in 16/32-bit floats and only clamp when tonemapping out to the display. The same is true for HDR stuff like bloom, tone mapping, or PBR shading that needs to push energy past 1.0, do post-processing, then compress the result: with 0–255 you just clip to white and the math breaks. Deferred renderers and screen-space effect passes also suffer when they encode normals, roughness, or depth in 8-bit, and SSAO/SSR immediately reveal the quantization steps. Backends like Metal/Vulkan assume you can request RGBA16F, R11G11B10F, depth-only attachments, etc., which the user may want to display or debug. There are interesting Processing-style techniques here that simply aren't possible at the moment.
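For reference, this is roughly all it takes to get a float render target in desktop OpenGL, the kind of request Processing currently has no way to express. A sketch using LWJGL bindings rather than Processing's `PGL`, purely for illustration, and assuming a current GL 3.0+ context created elsewhere (e.g. via GLFW):

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

public class FloatTarget {
  // Allocates an RGBA16F texture and attaches it to an FBO.
  static int createHdrTarget(int width, int height) {
    int tex = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, tex);
    // Internal format RGBA16F: each channel is a half float, so values
    // above 1.0 and tiny accumulation deltas both survive.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_HALF_FLOAT, (java.nio.ByteBuffer) null);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    int fbo = glGenFramebuffers();
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    return fbo;
  }
}
```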
Conclusion
Changing Processing's texture format and color math is a breaking change at basically every level of the existing codebase, but it is crucially important for modern graphics techniques.