A Gentle Introduction to DirectX Raytracing 8



In Tutorial 7, we added random jitter to the camera position to allow antialiasing by accumulating samples temporally over multiple frames. This tutorial modifies our RayTracedGBufferPass from Tutorial 4 to use a simple thin lens camera model. Each pixel randomly selects the camera origin somewhere on the lens of this camera. Temporally accumulating these random camera origins over multiple frames allows us to model dynamic depth of field.


Our Antialiased Rendering Pipeline

If you open up Tutor08-ThinLensCamera.cpp, you will find our new pipeline combines a new ThinLensGBufferPass, the AmbientOcclusionPass from Tutorial 5, and the SimpleAccumulationPass from Tutorial 6.


// Create our rendering pipeline
RenderingPipeline *pipeline = new RenderingPipeline();
pipeline->setPass(0, ThinLensGBufferPass::create());
pipeline->setPass(1, AmbientOcclusionPass::create());
pipeline->setPass(2, SimpleAccumulationPass::create(ResourceManager::kOutputChannel));

This ThinLensGBufferPass builds on the RayTracedGBufferPass from Tutorial 4 but adds our new per-pixel random camera origin. Essentially, camera jitter from Tutorial 7 perturbs the camera ray direction; a thin lens model also perturbs the camera origin.


In order to combine antialiasing, our thin lens camera model, and to allow the user to toggle them both on and off, the ThinLensGBufferPass also reuses the random jittering code from Tutorial 7.


Setting up Our Thin Lens

Continue by looking in ThinLensGBufferPass.h. The key changes are the introduction of a number of variables related to lens parameters:


bool  mUseThinLens = false;   // Currently using thin lens?  (Or pinhole?)
float mFNumber     = 32.0f;   // The f-number of our thin lens
float mFocalLength = 1.0f;    // The distance to our focal plane
float mLensRadius;            // The camera aperture. (Computed)

mUseThinLens is a user-controllable variable in the GUI that toggles between the thin lens and pinhole camera models. mFNumber and mFocalLength are the user-controllable parameters for the thin lens. mFNumber controls the virtual f-number. mFocalLength controls the distance to our camera's focal plane, i.e., the distance from the camera at which all rays contributing to a pixel converge (so geometry at that distance appears in focus).


Since we define default values for all our thin lens parameters in the header file, there are no camera-specific additions to our ThinLensGBufferPass::initialize() method, so we move on to the changes required in ThinLensGBufferPass::execute().


void ThinLensGBufferPass::execute(RenderContext::SharedPtr pRenderContext)
// Compute lens radius based on our user-exposed controls
mLensRadius = mFocalLength / (2.0f * mFNumber);

// Specify our HLSL variables for our thin lens
auto rayGenVars = mpRays->getRayGenVars();
rayGenVars["RayGenCB"]["gLensRadius"] = mUseThinLens ? mLensRadius : 0.0f;
rayGenVars["RayGenCB"]["gFocalLen"] = mFocalLength;

// Compute our camera jitter
float xJitter = mUseJitter ? mRngDist(mRng) : 0.5f;
float yJitter = mUseJitter ? mRngDist(mRng) : 0.5f;
rayGenVars["RayGenCB"]["gPixelJitter"] = vec2(xJitter, yJitter);

The first addition computes mLensRadius based on our user-specified focal length and f-number. We then pass our thin lens parameters down to our DirectX ray generation shader, where we'll use them to determine which rays to shoot from our camera. Note: a thin lens camera model degenerates to a pinhole camera if the lens radius is set to zero, so if the user chooses a pinhole camera the logic need not change.


We also pass down a random camera jitter to antialias geometry that is in focus. This is slightly different from rasterizing in Tutorial 7, where Falcor utilities handled everything needed to jitter the camera. Here we pass the pixel jitter down to our ray generation shader, where we'll also take this jitter into account when tracing our rays.


DirectX Ray Generation for Jittered, Thin-Lens Camera Rays

The final step in this tutorial is updating our G-buffer’s ray generation to perturb our camera origin and ray directions:


void GBufferRayGen()
// Get our pixel's position on the screen
uint2 rayIdx = DispatchRaysIndex();
uint2 rayDim = DispatchRaysDimensions();

// Convert our ray index into a jittered ray direction.
float2 pixelCenter = (rayIdx + gPixelJitter) / rayDim;
float2 ndc = float2(2, -2) * pixelCenter + float2(-1, 1);
float3 rayDir = ndc.x * gCamera.cameraU +
                ndc.y * gCamera.cameraV +
                gCamera.cameraW;

To start off, it looks very similar to our previous ray traced G-buffer. However, instead of using a fixed 0.5f offset to shoot our ray through the center of each pixel, we’ll use our computed camera jitter gPixelJitter as our sub-pixel offset. This gives us antialiasing as in Tutorial 7.


// Find the focal point for this pixel.
rayDir /= length(gCamera.cameraW);
float3 focalPoint = gCamera.posW + gFocalLen * rayDir;

Next, we want to find the focal point for this pixel. All rays from this pixel pass through this focal point, and in particular so does the ray through the lens center we just computed; we find the point by stepping the focal distance along that ray. The division by length(gCamera.cameraW) ensures the focal plane is actually planar (i.e., always the same distance from the camera along the viewing vector cameraW).


// Initialize a random number generator
uint randSeed = initRand(rayIdx.x + rayIdx.y * rayDim.x, gFrameCount);

// Get point on lens (in polar coords then convert to Cartesian)
float2 rnd = float2( nextRand(randSeed) * M_2PI,
                     nextRand(randSeed) * gLensRadius );
float2 uv = float2( cos(rnd.x) * rnd.y, sin(rnd.x) * rnd.y );

Now we need to compute a random ray origin on our lens. To do this, we first initialize a random number generator and pick a random point on a canonical lens of the selected radius (in polar coordinates), then convert this to a Cartesian location on the lens.


// Use uv coordinate to compute a random origin on the camera lens
float3 randomOrig = gCamera.posW + uv.x * normalize(gCamera.cameraU) +
                                   uv.y * normalize(gCamera.cameraV);

// Initialize a random thin lens camera ray
RayDesc ray;
ray.Origin = randomOrig;
ray.Direction = normalize(focalPoint - randomOrig);
ray.TMin = 0.0f;
ray.TMax = 1e+38f;

Finally, we compute the random thin lens camera ray to use. We convert our random sample into an actual world space position on the camera lens (using the camera position and the camera's u and v vectors).


Once we have the random camera origin, we can compute our random ray direction by shooting from the random origin through this pixel’s focal point (computed earlier).


The rest of our G-buffer pass is identical to RayTracedGBufferPass from Tutorial 4.


What Does it Look Like?

That covers the important points of this tutorial. When running, you get the following result:


Hopefully, this tutorial demonstrated how to add a simple thin lens model by randomly selecting a ray origin and direction based on standard thin lens camera parameters.


When you are ready, continue on to Tutorial 9, which swaps out our simplistic ambient occlusion shading for a slightly more complex Lambertian material model using ray traced shadows.
