I see your point. The problem is that the rendering of unlit/untextured, textured, and stroke geometry is handled by separate shader programs, since each involves different calculations at both the vertex and fragment stages. Another "architectural" option would have been to write a single massive shader encompassing all possible combinations, but it would have been filled with if branches, which are bad for performance. This is why I decided early on in the development of the new OpenGL renderer to handle each geometry class with a separate shader type...
However, you have the option of providing only the fragment shader for any shader type; in that case Processing will complete your shader program using the default vertex shader for that type. So, for your fog example, you would need to write three fragment shaders:

Main sketch:

```java
PShader fogColor;
PShader fogLines;
PShader fogTex;

void setup() {
  size(640, 360, P3D);
  fogColor = loadShader("fogColor.glsl");
  fogColor.set("fogNear", 0.0);
  fogColor.set("fogFar", 500.0);
  fogLines = loadShader("fogLines.glsl");
  fogLines.set("fogNear", 0.0);
  fogLines.set("fogFar", 500.0);
  fogTex = loadShader("fogTex.glsl");
  fogTex.set("fogNear", 0.0);
  fogTex.set("fogFar", 500.0);
  hint(DISABLE_DEPTH_TEST);
}

void draw() {
  shader(fogColor);
  background(0);
  noStroke();
  translate(mouseX, mouseY, -100);
  box(200);
  fill(255, 0, 0);
  box(100);
  shader(fogLines);
  stroke(255);
  strokeWeight(10);
  line(0, 0, 0, width/2, height/2, 100);
  line(0, 0, 0, -width/2, height/2, 100);
  shader(fogTex);
  text("shader", 100, 0, 100);
}
```

fogColor.glsl:

```glsl
varying vec4 vertColor;

uniform float fogNear;
uniform float fogFar;

void main() {
  gl_FragColor = vertColor;
  vec3 fogColor = vec3(1.0, 1.0, 1.0);
  float depth = gl_FragCoord.z / gl_FragCoord.w;
  float fogFactor = smoothstep(fogNear, fogFar, depth);
  gl_FragColor = mix(gl_FragColor, vec4(fogColor, gl_FragColor.w), fogFactor);
}
```
fogLines.glsl:
```glsl
varying vec4 vertColor;

uniform float fogNear;
uniform float fogFar;

void main() {
  gl_FragColor = vertColor;
  vec3 fogColor = vec3(1.0, 1.0, 1.0);
  float depth = gl_FragCoord.z / gl_FragCoord.w;
  float fogFactor = smoothstep(fogNear, fogFar, depth);
  gl_FragColor = mix(gl_FragColor, vec4(fogColor, gl_FragColor.w), fogFactor);
}
```
fogTex.glsl:

```glsl
uniform sampler2D textureSampler;

varying vec4 vertColor;
varying vec4 vertTexcoord;

uniform float fogNear;
uniform float fogFar;

void main() {
  gl_FragColor = texture2D(textureSampler, vertTexcoord.st) * vertColor;
  vec3 fogColor = vec3(1.0, 1.0, 1.0);
  float depth = gl_FragCoord.z / gl_FragCoord.w;
  float fogFactor = smoothstep(fogNear, fogFar, depth);
  gl_FragColor = mix(gl_FragColor, vec4(fogColor, gl_FragColor.w), fogFactor);
}
```
For this to work properly, though, you have to make sure that you are using the correct varying names, as defined in the default vertex shaders: in this case, vertColor and vertTexcoord. All of this will be properly documented in the Processing reference, of course, so people know exactly which naming conventions the default Processing shaders adopt.
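Since the fog blend is the same in all three fragment shaders, here is a plain-Java (non-Processing) re-implementation of the GLSL smoothstep() and mix() built-ins it relies on, just to build intuition for what the shaders compute. The class and method names here are only for this illustration:

```java
// Plain Java (not a Processing sketch) re-implementation of the GLSL
// smoothstep() and mix() functions used by the fog fragment shaders above.
public class FogMath {
  static float smoothstep(float edge0, float edge1, float x) {
    float t = Math.max(0f, Math.min(1f, (x - edge0) / (edge1 - edge0)));
    return t * t * (3f - 2f * t); // Hermite interpolation, as GLSL defines it
  }

  static float mix(float a, float b, float t) {
    return a * (1f - t) + b * t; // linear interpolation, as GLSL defines it
  }

  public static void main(String[] args) {
    // With fogNear = 0 and fogFar = 500 as in the sketch, a fragment at
    // depth 250 sits halfway into the fog range:
    float fogFactor = smoothstep(0f, 500f, 250f);
    System.out.println(fogFactor); // prints 0.5
    // A black color channel (0.0) is pulled halfway toward the white fog:
    System.out.println(mix(0f, 1f, fogFactor)); // prints 0.5
  }
}
```

So fragments closer than fogNear keep their color untouched, fragments past fogFar become pure fog color, and everything in between blends smoothly.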
Now, a couple of observations:
1. fogColor.glsl and fogLines.glsl are actually identical, and even though this seems like an advantage (only two shaders to write instead of three), it doesn't work properly in 2.0b7, because Processing tries to guess the type of a shader by inspecting the code you provide to loadShader(). Since fogLines.glsl contains nothing that makes it unambiguously clear it should be combined with the vertex stage of line rendering, Processing will build another color shader out of it, which is not what you want. This has been solved recently with some changes in the github repo: the shader type autodetection was removed and replaced by a #define-based approach that lets you explicitly set the type of your shader: #13.
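To make observation 1 concrete: under the define-based approach, fogLines.glsl would state its intended type right in the source, along these lines (the define name shown follows the PROCESSING_*_SHADER naming convention introduced by those changes; check the repo for the exact set of type defines):

```glsl
#define PROCESSING_LINE_SHADER

// ...the rest is identical to fogColor.glsl
varying vec4 vertColor;
```

With the type declared explicitly, Processing no longer has to guess, and the same fragment code can be paired with the line-rendering vertex stage.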
2. The fog effect is computed by the same snippet of code across all these shaders. I understand that it would be great to be able to specify just that snippet as some sort of "include" file that could then be referenced by shaders of any type. The first thing to point out is that GLSL doesn't natively support include files, so for this to work a substantial amount of additional logic/API would need to be incorporated into the OpenGL renderer. Something along the lines of:
```java
loadShaderFunction("fogCalc.glsl");
...
fog = loadShader("fog.glsl");
```
with fogCalc.glsl as you would expect:
```glsl
vec4 calcFog(vec4 fragCoord, vec4 fragColor, float near, float far) {
  vec3 fogColor = vec3(1.0, 1.0, 1.0);
  float depth = fragCoord.z / fragCoord.w;
  float fogFactor = smoothstep(near, far, depth);
  return mix(fragColor, vec4(fogColor, fragColor.w), fogFactor);
}
```
so that in the actual shader you could simply do:
```glsl
varying vec4 vertColor;

uniform float fogNear;
uniform float fogFar;

void main() {
  gl_FragColor = calcFog(gl_FragCoord, vertColor, fogNear, fogFar);
}
```
The renderer should be able to add the implementation of calcFog() automatically, but before the shader object is actually compiled... so things can get complicated quickly at this point. This would be a really cool piece of functionality, but it needs more thinking and probably won't make it into the 2.0 release, as the shader API is pretty much "done" at this point (with the recent changes I mentioned earlier).
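To sketch what that pre-compile step could look like, here is a minimal plain-Java illustration of splicing a helper function into a shader source as simple string manipulation. To be clear, loadShaderFunction() doesn't exist, and ShaderPreprocessor/splice() are invented names for this example, not Processing API:

```java
// Hypothetical illustration of how a renderer could splice a loaded
// helper function (e.g. calcFog) into a user shader before compiling it.
// None of these names are part of the Processing API.
public class ShaderPreprocessor {
  static String splice(String functionSource, String shaderSource) {
    // Declarations (varyings, uniforms) must stay at the top, so insert
    // the helper function just before main().
    int mainAt = shaderSource.indexOf("void main()");
    return shaderSource.substring(0, mainAt)
         + functionSource + "\n"
         + shaderSource.substring(mainAt);
  }

  public static void main(String[] args) {
    String fogCalc =
      "vec4 calcFog(vec4 fragCoord, vec4 fragColor, float near, float far) { /* ... */ }";
    String fog =
      "varying vec4 vertColor;\n" +
      "uniform float fogNear;\n" +
      "uniform float fogFar;\n" +
      "void main() { gl_FragColor = calcFog(gl_FragCoord, vertColor, fogNear, fogFar); }";
    // The source handed to the GLSL compiler would contain calcFog() ahead of main():
    System.out.println(splice(fogCalc, fog));
  }
}
```

The tricky parts in a real implementation would be things this sketch ignores: tracking which loaded functions each shader actually references, handling #version and precision directives, and reporting sensible error line numbers after splicing.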