
Overview

Rayground is an online integrated development environment (IDE) for interactive demonstration and/or prototyping of ray tracing algorithms. Rayground is free for everyone, on any platform that has a WebGL2-compliant browser (no special plugins are required). In general, the user has the ability to create any number of new projects from scratch or copy an existing one from a variety of ray tracing projects made available from other users. Since Rayground IDE is web-based and online, users can work on it from anywhere, anytime.

The graphical user interface of Rayground is designed to have two discrete parts, the preview window and the shader editor. Visual feedback is interactively provided in the WebGL rendering context of the preview canvas, while the user performs live source code modifications.


Ray Tracing Pipeline

At the core of Rayground lies a traditional ray tracing image synthesis pipeline with several stages programmable via event-handling shaders. It was designed to help users gradually understand how a ray tracer works, without being distracted by framework implementation details or platform-specific characteristics. Since ray tracing is now tightly integrated into modern rendering APIs, such as Vulkan, DirectX and OptiX, we follow a similar programming model. Rayground's pipeline has five distinct configurable stages, namely Scene, Generate, Hit, Miss and Post Process, which are explained below with a focus on function rather than implementation. Accordingly, the shader editor consists of five tabs, one per pipeline stage. A quick API reference is also provided under the "?" tab.

Menu Tab image

Our ray tracing pipeline consists of various stages that are executed in parallel for each pixel of a Canvas and output a color. For each iteration of the pipeline, or frame, a pixel color is computed and blended with the previous value stored in an Accumulation Buffer. The blended results are displayed on a Canvas. The steps of our ray tracing pipeline are illustrated below:

Menu Tab image

First, the geometric objects of the scene are specified in the Scene description stage. These objects are used to build data structures that accelerate ray intersection queries. The remaining four stages are programmable and are used to spawn and manipulate rays and, in the end, display the results on a Canvas. Each programmable stage has a different set of inputs and expects a different set of outputs.

Rays are spawned in parallel for each pixel of the Canvas during the Generate stage using a ray generation shader. Each ray is defined by its origin position, direction and length. Additionally, it has a payload value and an output color value that is written to the pixel the ray originated from once the ray has terminated. Rays are then intersected against the scene's geometric objects, and the intersection results are passed to the Hit or Miss stage, depending on whether the ray intersected an object or not. In the Hit stage, the closest hit point on the ray is provided to a ray hit shader that performs calculations and updates the ray's output color and payload values. If nothing is hit, a ray miss shader is executed instead.

Both the Hit and Miss stages output a new ray, which is again tested against the scene geometry, and a new ray hit or miss shader is invoked based on the intersection result. This recursive procedure is repeated until a specified recursion depth is reached. Optionally, any ray can be terminated earlier, during the Hit or Miss stage, and thereby excluded from the recursive procedure. After the recursion has ended, each final ray output color is blended with the pixel color of the previous result stored in the Accumulation Buffer. Finally, the results are displayed to the Screen by a post-processing shader during the Post Process stage. We cover each stage separately in the following text.


Application Programming Interface (API)

The Rayground API is implemented on top of the WebGL2 standard, supporting shader programming in GLSL, thus providing a convenient and familiar code development interface. The user is encouraged to use built-in GLSL functions (e.g. dot, cross) and types (e.g. vec4, mat4). However, any use of the standard input and output variables of the GLSL programmable pipeline stages (e.g. gl_FragCoord), as well as samplers (e.g. sampler2D), may result in undefined behaviour and should be avoided. While certain functionality is common to all stages, there are also stage-specific input and output variables, described below in more detail.

1. Scene Declaration


This stage describes the 3D scene. Scene description is given in JSON format in the Scene editor tab. It is defined in a declarative format and therefore does not have any notion of input and output variables.

{
  "settings": {
    "depth": 3,
    .
    .
    .
  },
  "objects": [
    {  object0 properties  },
    {  object1 properties  },
    {  object2 properties  },
    {  object3 properties  },
    .
    .
    .
    {  objectN properties  }
  ]
}

The settings JSON object configures scene-wide backend options. The depth property sets the recursion level of the ray tracing procedure and is mandatory. Users should set an appropriate depth value according to their algorithm's goal. Note that any ray segment can be terminated earlier than the specified depth. Other properties include save_first_bounce, which enables access to additional information regarding the first ray bounce in the post-processing stage (see further below).
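For instance, a settings block that requests a four-level recursion and enables the first-bounce buffers might look like the sketch below (the depth value is purely illustrative):

```json
{
  "settings": {
    "depth": 4,
    "save_first_bounce": true
  }
}
```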

The objects JSON array specifies the scene's geometry. Most of the properties for each object are straightforward. The type field indicates the geometric type of the object. Several fundamental geometric types are supported out of the box for convenience.

These are:

  • quad
  • cube
  • sphere
  • triangles
The translate, rotate and scale properties are used to position and orient an object in 3D space via a TRS transformation matrix. If a model matrix is present, it is used instead. In the special case of spheres, translate is used to position the sphere in world coordinates and radius to set its size.
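For example, a sphere object could be declared as follows; all values are illustrative:

```json
{
  "type": "sphere",
  "translate": [ 0, 1, 0 ],
  "radius": 0.5,
  "material_property0": [ 0.7, 0.7, 0.7, 0 ]
}
```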

Material properties for each object are supplied via material_property entries. The current version of Rayground supports up to 8 material properties of float[4] storage type. These values can be used for various purposes: for example, to define BRDF values, to distinguish between material types, or in other inventive ways. Their values are always available to the user when a ray intersects the object.

This is an object description using all of the discussed properties:

  {
    "type": "quad",
    "model": [ 1, 0, 0, 0,
               0, 1, 0, 0,
               0, 0, 1, 0,
               10, 10, 10, 1 ],
    "translate": [ 0, 1, 1 ],
    "scale": [ 0.5, 0.5, 1 ],
    "rotate": [ 1, 0, 0, 90 ],
    "material_property0": [ 1, 0, 0, 0 ],
    "material_property1": [ 1, 0, 0, 0 ],
    "material_property2": [ 0, 1, 0, 0 ],
    "material_property3": [ 0, 0, 1, 0 ],
    "material_property4": [ 0, 1, 1, 0 ],
    "material_property5": [ 1, 0, 1, 0 ],
    "material_property6": [ 1, 1, 0, 0 ],
    "material_property7": [ 1, 1, 1, 0 ]
  }

The following is a complete example of how to set up a Cornell Box scene:

{
  "settings": {
    "depth": 10
  },
  "objects": [
    {
      "type": "quad", "translate": [ 0, 547.8, 0 ], "scale": [ 130, 105, 1 ],
      "rotate": [ 1, 0, 0, 90 ], "material_property0": [ 0.5, 0, 0 ]
    },
    {
      "type": "quad", "translate": [ 278, 274, 0 ], "scale": [ 556, 559, 1 ],
       "rotate": [ 0, 1, 0, -90 ], "material_property0": [ 0.05, 0.80, 0.05 ]
    },
    {
      "type": "quad", "translate": [ -278, 274, 0 ], "scale": [ 556, 559, 1 ],
      "rotate": [ 0, 1, 0, 90 ], "material_property0": [ 0.80, 0.05, 0.05 ]
    },
    {
      "type": "quad", "translate": [ 0, 548, 0 ], "scale": [ 556, 559, 1 ],
      "rotate": [ 1, 0, 0, 90 ], "material_property0": [ 0.80, 0.80, 0.80 ]
    },
    {
      "type": "quad", "translate": [ 0, 0, 0 ], "scale": [ 556, 559, 1 ],
      "rotate": [ 1, 0, 0, -90 ], "material_property0": [ 0.80, 0.80, 0.80 ]
    },
    {
      "type": "quad", "translate": [ 0, 274, 280 ], "scale": [ 560, 565, 1 ],
      "rotate": [ 0, 1, 0, 180 ], "material_property0": [ 0.80, 0.80, 0.80 ]
    },
    {
      "type": "cube", "translate": [ 100.0, 82.5, -85.0 ], "scale": [ 165, 165.0, 165 ],
      "rotate": [ 0, 1, 0, 17 ], "material_property0": [ 0.8, 0.8, 0.8 ]
    },
    {
      "type": "cube", "translate": [ -100.0, 165, 85.0 ], "scale": [ 165, 330.0, 165 ],
      "rotate": [ 0, 1, 0, -17 ], "material_property0": [ 0.8, 0.8, 0.8 ]
    }
  ]
}

Programmable Stages


After the Scene stage, the remaining four stages of the Rayground pipeline are fully programmable. The Generate and Post Process shaders execute the user's code in parallel for each pixel of the canvas. The programmer should use rg_Pixel to query the pixel's coordinates on the canvas for the current program invocation.

Certain functionalities and utilities are common to all stages, but each stage has a specific set of input and output variables. The following functions and constants are available to all shader stages:

Shader Stage Functions


Type Name Description
vec4 rg_Random(uint index, uint seed0, uint seed1) Obtain 4 random values from a counter-based pseudo-random sequence.
bool rg_TraceOcclusion(vec3 origin, vec3 direction, float tmax) Send a ray to determine occlusion along the specified direction.

Shader Stage Constants


Type Name Description
vec2 rg_Canvas canvas resolution in pixels.
vec2 rg_Pixel pixel coordinates relative to the lower left corner of the canvas.
float rg_Time time in seconds since simulation start. Updates in every frame.
int rg_Frame current frame counter.
ivec4 rg_Mouse .xy current mouse position if clicked, else (-1, -1), .zw previous click position. Updates on mouse press.
int rg_Depth current ray depth iteration. Starts at 0 and gets incremented after each ray intersection wave.
uvec4 rg_Seed CPU-generated seed values. Updated for each stage.
float RG_RAY_MAX_DISTANCE 1.e27
float RG_RAY_ACTIVE_FLAG 1.0
float RG_RAY_INACTIVE_FLAG 0.0
float RG_PI 3.14159265359
float RG_TWO_PI 6.28318530718
float RG_FOUR_PI 12.5663706144
float RG_INV_PI 0.31830988618
float RG_INV_TWO_PI 0.15915494309
float RG_INV_FOUR_PI 0.07957747154

rg_TraceOcclusion can be used to query occlusion information along a direction. It is available in the Generate, Hit and Miss stages. It returns true when a hit is found within the specified distance along the direction, and can be used for visibility testing (shadow rays).
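As an illustration, inside a hit shader one might cast a shadow ray toward a point light. The light position and the small normal offset below are assumptions of this sketch, not part of the API:

```glsl
/* Inside rg_hit(): test visibility toward a hypothetical point light. */
vec3 light_pos = vec3(0.0, 540.0, 0.0);   /* assumed light position */
vec3 to_light  = light_pos - rg_Hitpoint;
float dist     = length(to_light);

/* Offset the origin slightly along the normal to avoid self-intersection. */
bool shadowed = rg_TraceOcclusion(rg_Hitpoint + 1e-3 * rg_Normal,
                                  to_light / dist, dist);
```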

One of the most important building blocks of ray tracing is random number generation (RNG). There is no single way to generate random numbers, and the requirements depend on the task at hand. Generating good random sequences in parallel on the GPU is generally a complicated task. To ease development, Rayground offers a pseudo-random number generation function based on the Philox counter-based RNG [1]. Usable seeds are provided by the backend in rg_Seed. Users are free to develop and use their own random number generator.

vec4 get_uniform_random() {
  uint counter = uint(rg_Frame) * (uint(rg_Pixel.x) + uint(rg_Pixel.y) * uint(rg_Canvas.x) + 1u);
  return rg_Random(counter, rg_Seed.x, rg_Seed.y);
}

The idea is to request the random number at index counter from a PRNG sequence. Generating the index is the responsibility of the user. This kind of RNG maps very well to parallel tasks. In Rayground, one can use the backend-provided rg_Seed to initialize a random sequence and use rg_Pixel or rg_Frame to generate unique indices for each pixel.
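When several random draws per pixel are needed across bounces, one possible indexing scheme folds rg_Depth into the counter; the offset scheme below is an assumption of this sketch, not part of the API:

```glsl
/* Decorrelate random draws across bounces by including rg_Depth
   in the counter of the counter-based RNG. */
vec4 get_uniform_random_bounce() {
  uint pixel_index = uint(rg_Pixel.x) + uint(rg_Pixel.y) * uint(rg_Canvas.x) + 1u;
  uint counter = uint(rg_Frame) * pixel_index + uint(rg_Depth);
  return rg_Random(counter, rg_Seed.x, rg_Seed.y);
}
```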


Ray Generate/Hit/Miss stages interface


The following variables describe the ray passed to the next stage of the pipeline, along with the corresponding inputs from the previous stage.

Type Name Description
out vec4 rg_Accumulation output ray accumulation color values (.rgb values, .a additive blending factor).
out vec4 rg_Payload0 .. 3 output ray payload values.
out vec4 rg_RayDirection output ray direction (.xyz direction, .w max travel distance).
out vec4 rg_RayOrigin output ray origin (.xyz origin, .w RG_RAY_ACTIVE_FLAG or RG_RAY_INACTIVE_FLAG).
in vec4 rg_PrevAccumulation input ray accumulation color values.
in vec4 rg_PrevPayload0 .. 3 input ray payload values.
in vec3 rg_PrevRayDirection input ray direction.
in vec3 rg_PrevRayOrigin input ray origin.

2. Ray Generation


The Ray Generation stage is the entry point for a particular scene. The function signature for this stage is

void rg_generate()

As the name suggests, this stage is responsible for generating the primary rays and initializing simulation values. A ray is defined by an origin point and a direction. Each pixel must write to rg_RayDirection and rg_RayOrigin. Subsequent stages depend on those output values, and the user must be careful to initialize them for all pixels and all control paths; neglecting to do so can result in undefined behavior.

This is also the right place to initialize rg_Accumulation and rg_Payload0.

rg_Accumulation carries the final color of the image in the .rgb components. The .a component holds the blending factor. As many frames are simulated, the result must be accumulated with an equal weight for all samples. This is achieved by specifying a blending weight equal to 1.0 / float(rg_Frame).

If a constant update of the resulting image is desired, then .a should be set to 1.0.
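The two blending modes can be sketched as follows, using black as the sample color for brevity:

```glsl
/* Progressive refinement: every frame contributes with weight 1/N. */
rg_Accumulation = vec4(0.0, 0.0, 0.0, 1.0 / float(rg_Frame));

/* Constant update: simply overwrite the previously stored value. */
/* rg_Accumulation = vec4(0.0, 0.0, 0.0, 1.0); */
```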

rg_Payload0 is per-ray storage of size vec4 that can be used by the user for any purpose. It becomes available in later stages to help implement the algorithm's logic.

The typical structure for a ray generation program is shown below:

  void rg_generate() {
    vec2 canvas_size   = rg_Canvas;
    vec2 pixel_coords  = rg_Pixel;
    int  frame_index   = rg_Frame;
    .
    .
    .
    vec3 ray_direction = ...;
    vec3 ray_origin    = ...;
    .
    .
    .
    rg_RayDirection    = vec4(ray_direction, RG_RAY_MAX_DISTANCE);
    rg_RayOrigin       = vec4(ray_origin, RG_RAY_ACTIVE_FLAG);
    rg_Accumulation    = vec4(0.0, 0.0, 0.0, 1.0 / float(frame_index));
    rg_Payload0        = vec4(1.0);
}
  

Several Rayground built-ins are shown in the above code snippet. rg_Canvas holds the canvas's width and height. rg_RayOrigin and rg_RayDirection are output parameters of the ray generation program. When the shader finishes, whatever is stored in these variables is used for the next iteration of the ray tracer.

Notice how we make sure to mark the ray as active with RG_RAY_ACTIVE_FLAG and specify a maximum travel distance large enough to reach any actual geometry.
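As a concrete illustration, a minimal pinhole-camera generation shader could be sketched as follows. The eye position and field of view are assumptions made for the sake of the example, not part of the API:

```glsl
void rg_generate() {
  /* Assumed camera: positioned at eye, looking down -Z, 45-degree vertical FOV. */
  vec3  eye = vec3(0.0, 1.0, 5.0);
  float fov = radians(45.0);

  /* Map the pixel centre to [-1, 1] normalized device coordinates. */
  vec2  ndc      = (2.0 * (rg_Pixel + 0.5) / rg_Canvas) - 1.0;
  float aspect   = rg_Canvas.x / rg_Canvas.y;
  float tan_half = tan(0.5 * fov);

  vec3 dir = normalize(vec3(ndc.x * aspect * tan_half, ndc.y * tan_half, -1.0));

  rg_RayOrigin    = vec4(eye, RG_RAY_ACTIVE_FLAG);
  rg_RayDirection = vec4(dir, RG_RAY_MAX_DISTANCE);
  rg_Accumulation = vec4(vec3(0.0), 1.0 / float(rg_Frame));
  rg_Payload0     = vec4(1.0);
}
```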

After this stage, each ray that hits a surface triggers a Ray Hit event, and each ray that does not hit any geometry triggers a Ray Miss event.

3. Ray-Object Intersection


The Hit event is where the most interesting work takes place. The Rayground API provides a wealth of information regarding the surface intersection.

Ray Hit stage interface


Ray hit shader stage input/output variables and functions

Type Name Description
void rg_hit ( ) entry point signature.
in vec3 rg_Normal geometric normal of the intersected primitive.
in vec3 rg_Hitpoint ray hit position in world space coordinates.
in vec3 rg_BaryCoords the barycentric coordinates on the intersection with the primitive.
in vec2 rg_TexCoords the interpolated texture coordinates of the intersected primitive.
in int rg_MaterialID the primitive's material ID.
in int rg_ShapeID the primitive's shape ID.
in int rg_PrimitiveID the primitive ID.
in float rg_RayDistance the ray segment length.
in vec4 rg_MaterialProperty0 .. 7(int materialID) the material properties, as specified in the scene description for the given material ID.

rg_ShapeID directly corresponds to the order in which shapes were declared in the scene description. It can be used to detect specific objects during hit events for customized shading.

rg_PrimitiveID is a unique identifier for each primitive (triangle or sphere). No ordering is guaranteed.

rg_MaterialID uniquely identifies the material of each shape. It can safely be passed to the rg_MaterialProperty0 .. 7() functions to retrieve the corresponding properties. A typical hit shader follows the same structure as the generation shader:

  void rg_hit() {
    vec2 canvas_size   = rg_Canvas;
    vec2 pixel_coords  = rg_Pixel;
    int  frame_index   = rg_Frame;
    .
    .
    .
    vec3 ray_direction = ...;
    vec3 ray_origin    = ...;
    .
    .
    .
    rg_RayDirection    = vec4(ray_direction, RG_RAY_MAX_DISTANCE);
    rg_RayOrigin       = vec4(ray_origin, RG_RAY_ACTIVE_FLAG);
    rg_Accumulation    = vec4(0.0, 0.0, 0.0, 1.0 / float(frame_index));
    rg_Payload0        = vec4(1.0);
}
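Building on this skeleton, a hit shader for a simple diffuse (Lambertian) bounce might look like the sketch below. The albedo convention, the self-intersection offset and the inline cosine-weighted sampling are assumptions of this example, not mandated by the API:

```glsl
void rg_hit() {
  /* Assumption: material_property0.rgb holds the diffuse albedo. */
  vec3 albedo = rg_MaterialProperty0(rg_MaterialID).rgb;

  /* Per-pixel, per-bounce random numbers (see the RNG section). */
  uint pixel_index = uint(rg_Pixel.x) + uint(rg_Pixel.y) * uint(rg_Canvas.x) + 1u;
  vec4 u = rg_Random(uint(rg_Frame) * pixel_index + uint(rg_Depth),
                     rg_Seed.x, rg_Seed.y);

  /* Cosine-weighted direction in the hemisphere around the normal. */
  float phi       = RG_TWO_PI * u.x;
  float cos_theta = sqrt(u.y);
  float sin_theta = sqrt(1.0 - u.y);
  vec3 t = normalize(abs(rg_Normal.x) > 0.5 ? cross(rg_Normal, vec3(0, 1, 0))
                                            : cross(rg_Normal, vec3(1, 0, 0)));
  vec3 b = cross(rg_Normal, t);
  vec3 bounce = normalize(sin_theta * (cos(phi) * t + sin(phi) * b)
                          + cos_theta * rg_Normal);

  /* Attenuate the path throughput carried in the payload. */
  vec3 throughput = rg_PrevPayload0.rgb * albedo;

  /* Offset the new origin along the normal to avoid self-intersection. */
  rg_RayOrigin    = vec4(rg_Hitpoint + 1e-3 * rg_Normal, RG_RAY_ACTIVE_FLAG);
  rg_RayDirection = vec4(bounce, RG_RAY_MAX_DISTANCE);
  rg_Payload0     = vec4(throughput, 1.0);
  rg_Accumulation = vec4(0.0, 0.0, 0.0, 1.0 / float(rg_Frame));
}
```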
  

4. Ray-Object Miss


A ray miss event is triggered every time a ray does not intersect any object along a specific direction and within a certain travel distance.

void rg_miss()

Miss kernels offer the same capabilities as intersection kernels, but since no geometry is hit, they do not have access to any surface properties (e.g. rg_Normal, etc.).

However, a miss shader is responsible for generating or terminating ray segments using rg_RayDirection and rg_RayOrigin; a miss does not automatically terminate a ray segment. This is very useful for techniques that do not terminate a path on a miss. If termination is desired, the user needs to explicitly mark the ray as inactive using the RG_RAY_INACTIVE_FLAG constant. Any-hit queries, through rg_TraceOcclusion, are also available in the ray miss stage.

Similarly, rg_Accumulation needs to be updated accordingly; both the .rgb components and the accumulation factor in the .a component are important.

Miss shaders can be used to apply a background color or environment map.

  void rg_miss() {
  /* Properly set accumulation color and blending factor */
  rg_Accumulation = vec4(0.0, 0.0, 0.8, 1.0 / float(rg_Frame));

  /* Explicitly terminate ray */
  rg_RayOrigin = vec4(0,0,0, RG_RAY_INACTIVE_FLAG);
  rg_RayDirection = vec4(0.0);
}

5. Post Processing


The post processing stage gives global access to the accumulated image buffer from the previous steps. This allows various programmable filters to be applied to the final image.

The image is accessible through rg_AccumulatedImage, which is of type rg_Image2D. It can be passed to the rg_ImageFetch2D function to fetch the accumulated value at any pixel. The final pixel color is controlled through rg_PixelColor, which, when set, determines the color presented on screen.

Post Processing stage interface


Type Name Description
void rg_post_process ( ) entry point signature.
out vec4 rg_PixelColor final pixel color to be presented.
in rg_Image2D rg_AccumulatedImage 2D image buffer with the accumulated values (after blending operation).
in rg_Image2D rg_Payload0Image 2D image buffer with the payload0 values from the previous stage.
in rg_Image2D rg_Payload1Image 2D image buffer with the payload1 values from the previous stage.
in rg_Image2D rg_Payload2Image 2D image buffer with the payload2 values from the previous stage.
in rg_Image2D rg_Payload3Image 2D image buffer with the payload3 values from the previous stage.
in rg_Image2D rg_FirstBounceRayOriginImage 2D image buffer with the first generated ray origin. These values are provided by the backend if specified in the scene description.
in rg_Image2D rg_FirstBounceRayDirectionImage 2D image buffer with the first generated ray direction. These values are provided by the backend if specified in the scene description.
in rg_Image2D rg_FirstBounceRayDepthImage 2D image buffer with the first bounce depth (distance to the closest intersected primitive). These values are provided by the backend if specified in the scene description.
in rg_Image2D rg_FirstBounceNormalsImage 2D image buffer with the first bounce normals (the normal of the closest intersected primitive). These values are provided by the backend if specified in the scene description.
in rg_Image2Di rg_FirstBounceMaterialID 2D image buffer with the first bounce material ID (the material ID of the closest intersected primitive) stored in the R channel. These values are provided by the backend if specified in the scene description.
in rg_Image2Di rg_FirstBounceTriangleID 2D image buffer with the first bounce triangle ID (the triangle ID of the closest intersected primitive) stored in the R channel. These values are provided by the backend if specified in the scene description.
in vec4 rg_ImageFetch2D (rg_Image2D image, ivec2 coords) 2D image fetch from the specified pixel coordinates.
in ivec4 rg_ImageFetch2Di(rg_Image2Di image, ivec2 coords) 2D image fetch of integer data from the specified pixel coordinates.

In its simplest form, a post processing kernel simply forwards the accumulated image to the final image:

  void rg_post_process() {
    rg_PixelColor = rg_ImageFetch2D(rg_AccumulatedImage, ivec2(rg_Pixel));
}

In this stage, the user can also request access to a series of special 2D image buffers holding values from the first ray bounce, such as rg_FirstBounceNormalsImage and rg_FirstBounceRayDepthImage. These need to be explicitly enabled through the save_first_bounce boolean flag in the scene description. Additionally, the last value written to rg_Payload0 is accessible through the rg_Payload0Image image. These image buffers facilitate the development of algorithms that require access to additional information during post-processing, such as advanced denoising methods.
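As a sketch of how these buffers can be used, the following post-processing shader visualizes the first-bounce normals; it assumes that save_first_bounce has been enabled in the scene settings:

```glsl
void rg_post_process() {
  /* Fetch the first-bounce normal stored by the backend for this pixel. */
  vec3 n = rg_ImageFetch2D(rg_FirstBounceNormalsImage, ivec2(rg_Pixel)).xyz;

  /* Remap components from [-1, 1] to [0, 1] for display. */
  rg_PixelColor = vec4(0.5 * n + 0.5, 1.0);
}
```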


Keyboard Shortcuts

The following table gives an overview of the default Rayground keyboard shortcuts that are useful while working in the shader editor.
Keystroke Action
Ctrl-S Compile & Run source code
Ctrl-Space Upload Changes & Capture teaser image
Ctrl-D Download source code
Ctrl-L Export image (EXR format)
Ctrl-F Find
Ctrl-G Find next
Shift-Ctrl-G Find previous
Shift-Ctrl-F Replace
Shift-Ctrl-R Replace all
Alt-G Go to line
F11 Toggle full screen editing
Esc Exit full screen editing

Miscellaneous

To get more familiar with Rayground, we encourage users to explore some of the existing samples that further showcase Rayground's capabilities.

Currently, the WebGL spec exposes only a subset of the functionality that a modern GPU has to offer, which limits our available options. Stay tuned, as we look forward to improving Rayground as new solutions are implemented and additional GPU features become available.

Rayground would not have been possible without these amazing open-source technologies:

- Code Mirror by Marijn Haverbeke
- sajson by Chad Austin
- WebAssembly through emscripten
- stats.js by mrdoob
- sweetalert2

References

[1] J. K. Salmon, M. A. Moraes, R. O. Dror and D. E. Shaw, "Parallel random numbers: As easy as 1, 2, 3," SC '11: Proceedings of the 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, WA, 2011, pp. 1-12.