GLSL Quick notes

 

  1. Source:

    • gl_FragCoord is provided by GLSL and automatically populated with the current fragment’s coordinates.
    • u_resolution is provided by the user, typically set in the application code.
  2. Usage:

    • gl_FragCoord is used to obtain the position of the current fragment.
    • u_resolution is used to get the dimensions of the rendering target, often for normalizing coordinates or other resolution-dependent calculations.
  3. Accessibility:

    • gl_FragCoord is always available in fragment shaders without any additional setup.
    • u_resolution is not built in: it must be declared as a uniform in the shader and set from the application (as in the sketch below).

When you divide gl_FragCoord.xy by u_resolution, you normalize the fragment coordinates to a range of [0, 1], where (0, 0) represents the bottom-left corner and (1, 1) represents the top-right corner of the screen or rendering target. This allows you to work with coordinates that are independent of the actual screen resolution.
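A minimal sketch of that normalization (assuming the application supplies the canvas size under the conventional name u_resolution):

```glsl
precision mediump float;

// u_resolution is not built in: the application sets it each frame
// or on resize (e.g. gl.uniform2f in WebGL, setUniform in p5.js).
uniform vec2 u_resolution;

void main() {
    // Normalize window coordinates to the [0, 1] range.
    vec2 st = gl_FragCoord.xy / u_resolution;

    // (0, 0) is now the bottom-left corner and (1, 1) the top-right,
    // independent of the framebuffer size.
    gl_FragColor = vec4(st.x, st.y, 0.0, 1.0);
}
```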

The value of gl_FragCoord.xy is determined by the fragment's position within the framebuffer during rendering, so it varies from fragment to fragment within a rendered frame. Each fragment corresponds to a pixel, and gl_FragCoord.xy provides the window-relative coordinates of that fragment.

How gl_FragCoord.xy Works

  • Per-Fragment Basis: The value of gl_FragCoord.xy is different for each fragment (or pixel) being processed in a single frame. It provides the exact position of the fragment within the framebuffer (see the sketch after this list).
  • Coordinates:
    • gl_FragCoord.x is the horizontal coordinate.
    • gl_FragCoord.y is the vertical coordinate.
  • Origin and Range:
    • The origin (0, 0) is at the bottom-left corner of the framebuffer.
    • Values run up to the framebuffer's width and height; by default a fragment reports its pixel center, so the first pixel is at (0.5, 0.5) and the last at (width - 0.5, height - 0.5).
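A small sketch of that per-fragment variation; the 400.0 threshold is just an arbitrary example value:

```glsl
precision mediump float;

void main() {
    // Every fragment sees its own gl_FragCoord: fragments whose
    // center lies left of x = 400 come out red, the rest blue.
    if (gl_FragCoord.x < 400.0) {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // left of the split
    } else {
        gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0); // right of the split
    }
}
```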

 

A fragment is an intermediate result produced by the rasterization stage of the graphics pipeline. It contains all the information needed to potentially contribute to a final pixel's color, depth, and other attributes.
A pixel (short for "picture element") refers to the final discrete element of an image as it appears on the screen after all processing is complete.


Uniforms Changing Between Frames: Uniforms can indeed change from one frame to another in a rendering application. This allows for dynamic updates such as moving objects, changing lighting conditions, or adjusting camera perspectives.

Uniforms Consistency Within a Frame: Within a single draw call, a uniform's value is constant across every vertex and fragment being processed. Unless the application explicitly updates it between draw calls, the same value holds for the whole frame, so all objects rendered in that frame share the same parameters (like transformation matrices or light positions).

Constants: const variables in GLSL remain unchanged throughout the entire execution of a shader program, spanning multiple frames if necessary.
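A sketch contrasting the two lifetimes (u_time is an example uniform name the application would update every frame, not a built-in):

```glsl
precision mediump float;

// Uniform: constant across all fragments of a draw call,
// but the application may change it between frames.
uniform float u_time;

// const: baked in at compile time; identical for every
// fragment in every frame the shader ever renders.
const float PI = 3.14159265;

void main() {
    // Pulse brightness over time via the per-frame uniform.
    float pulse = 0.5 + 0.5 * sin(u_time * 2.0 * PI);
    gl_FragColor = vec4(vec3(pulse), 1.0);
}
```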

 

We don't have to manually iterate over each pixel (for example with a for loop, as we might need in, say, a p5.js context) to compute each value. Instead, gl_FragCoord gives a different coordinate for each fragment (pixel) during rendering, and gl_FragColor sets the color value for each pixel.
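For instance, this sketch colors every pixel from its own coordinate with no explicit loop (again assuming a u_resolution uniform set by the application):

```glsl
precision mediump float;

uniform vec2 u_resolution; // set by the application (assumed name)

void main() {
    // No for loop over pixels: the GPU invokes main() once per
    // fragment in parallel. Each invocation reads its own
    // gl_FragCoord and writes its own gl_FragColor.
    vec2 st = gl_FragCoord.xy / u_resolution;
    gl_FragColor = vec4(st, 0.5, 1.0);
}
```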

 

  1. Vertex Shader (main() function):

    • The main() function in a vertex shader is executed once per vertex. Its primary responsibility is to process individual vertices, transforming them from object space to clip space (where they can be rasterized into fragments).
    • This function does not execute per fragment; rather, it prepares vertices for subsequent stages of the rendering pipeline.
  2. Fragment Shader (main() function):

    • The main() function in a fragment shader is executed once per fragment. Fragments are generated during the rasterization stage, where each fragment represents a pixel (or part of a pixel) on the screen.
    • Within the fragment shader, calculations such as color determination, texture mapping, lighting computations, and other effects are performed for each fragment (a minimal pair of shaders is sketched after this list).
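A minimal pair illustrating the two entry points (GLSL ES 1.00 style; a_position is an example attribute name supplied by the application, not a built-in):

```glsl
// Vertex shader: main() runs once per vertex.
attribute vec4 a_position; // per-vertex data from the application

void main() {
    // Output this vertex's clip-space position for rasterization.
    gl_Position = a_position;
}
```

```glsl
// Fragment shader: main() runs once per fragment produced by
// rasterizing the primitives assembled from those vertices.
precision mediump float;

void main() {
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); // solid orange
}
```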

 

draw() in p5.js operates at the frame level, updating the entire canvas once per frame.
main() in a GLSL fragment shader operates at the pixel (fragment) level, running once per pixel according to the shader logic.

 

The use of U and V coordinates instead of X and Y coordinates in the context of texture mapping comes from historical conventions in computer graphics. The terms U and V were likely chosen to avoid confusion with the X and Y coordinates used in 2D screen space.

When you're working in a 3D environment, you already have X, Y, and Z coordinates representing the spatial dimensions. Introducing a different set of letters (U and V) for texture coordinates helps to distinguish between the spatial coordinates of the 3D geometry and the coordinates used for mapping textures onto that geometry.

In mathematical terms, U and V are simply variables representing the axes of the texture space. It's a convention that has been widely adopted in computer graphics, and it helps maintain clarity when discussing both spatial and texture coordinates within the same context.


emmet shortcut increment <3 

https://dev.to/robole/vs-code-quickly-increment-and-decrement-numeric-values-with-keyboard-shortcuts-2nl