New rendering

image

Rendered with 12000spp – Reinhard tone mapping has been applied after rendering.
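For reference, the global form of the Reinhard operator is simply L_out = L / (1 + L) per channel. A minimal sketch of how that could look (the RGB struct and the per-channel application are assumptions for this example, not the code used for the image above):

// Global Reinhard tone mapping, applied per channel on linear HDR values.
// Gamma correction / sRGB conversion would still have to follow afterwards.
#include <cstddef>

struct RGB { float r, g, b; };

inline float reinhard( float c ) { return c / (1.0f + c); }

void TonemapReinhard( RGB *pixels, std::size_t count )
{
    for (std::size_t i = 0; i < count; ++i)
    {
        pixels[i].r = reinhard(pixels[i].r);
        pixels[i].g = reinhard(pixels[i].g);
        pixels[i].b = reinhard(pixels[i].b);
    }
}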

Posted in Uncategorized | Leave a comment

Cosine weighted hemisphere

image

The picture above shows the sampling points produced by uniform hemisphere sampling.

image

The left image shows cosine-weighted sampling of the hemisphere, the right one uniform sampling (both with 100 000 sampling points).

image

The top hemisphere shows cosine-weighted sampling – the bottom hemisphere shows uniform sampling (both with 100 000 sampling points).

If you only use 100 or 1000 sampling points it is very hard to actually see a difference between the two sampling methods.

The code for uniform sampling:

vector3f UniformRandomHemisphereDirection( normal3f n )
{
    int abortCounter = 0;
    while (true)
    {
        // Pick a random point in the cube [-0.5, 0.5]^3.
        vector3f b(randomFloat() - 0.5f, randomFloat() - 0.5f, randomFloat() - 0.5f);

        // Reject points outside the ball, otherwise the normalized directions
        // would cluster towards the cube diagonals instead of being uniform.
        float lenSq = b.x * b.x + b.y * b.y + b.z * b.z;
        if (lenSq > 1e-12f && lenSq <= 0.25f)
        {
            b.normalize();
            // Keep only directions in the hemisphere around n ('*' is the dot product).
            if (b * n > 0)
            {
                return b;
            }
        }

        abortCounter++;
        if (abortCounter > 500)
        {
            return b; // for some reason (NaNs perhaps) we didn't find a valid direction
        }
    }
}

The code for cosine-weighted sampling:

vector3f CosWeightedRandomHemisphereDirection2( normal3f n )
{
    // Two uniform random numbers in [0, 1].
    float Xi1 = (float)rand()/(float)RAND_MAX;
    float Xi2 = (float)rand()/(float)RAND_MAX;

    // Spherical coordinates for a cosine-weighted distribution:
    // the pdf over the hemisphere is cos(theta)/pi.
    float  theta = acos(sqrt(1.0-Xi1));
    float  phi = 2.0 * 3.1415926535897932384626433832795 * Xi2;

    // Direction in a local frame where the normal is the y axis.
    float xs = sinf(theta) * cosf(phi);
    float ys = cosf(theta);
    float zs = sinf(theta) * sinf(phi);

    // Build an orthonormal basis (x, y, z) around the normal. To get a vector
    // that is not parallel to n, set the smallest component of a copy to 1.
    vector3f y(n.x, n.y, n.z);
    vector3f h = y;
    if (fabs(h.x)<=fabs(h.y) && fabs(h.x)<=fabs(h.z))
        h.x= 1.0;
    else if (fabs(h.y)<=fabs(h.x) && fabs(h.y)<=fabs(h.z))
        h.y= 1.0;
    else
        h.z= 1.0;

    vector3f x = (h ^ y).normalize();   // '^' is the cross product
    vector3f z = (x ^ y).normalize();

    // Transform the local direction into world space.
    vector3f direction = xs * x + ys * y + zs * z;
    return direction.normalize();
}
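If you want to convince yourself that the two routines really behave differently, one quick check (just a sketch, assuming the vector3f/normal3f types from above and that '*' between a direction and a normal is the dot product) is to estimate the average cosine between the sampled direction and the normal: uniform sampling should give about 0.5, cosine-weighted sampling about 2/3.

#include <cstdio>

// Rough sanity check of the two samplers: estimate E[cos(theta)].
// Uniform hemisphere sampling should give ~0.5, cosine-weighted sampling ~2/3.
void CompareHemisphereSamplers( normal3f n, int count )
{
    double sumUniform = 0.0;
    double sumCosine  = 0.0;
    for (int i = 0; i < count; ++i)
    {
        sumUniform += UniformRandomHemisphereDirection(n) * n;     // dot product with the normal
        sumCosine  += CosWeightedRandomHemisphereDirection2(n) * n;
    }
    printf("uniform: %f  cosine-weighted: %f\n", sumUniform / count, sumCosine / count);
}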

The same topic is discussed here: http://www.rorydriscoll.com/2009/01/07/better-sampling/

Posted in Uncategorized | 8 Comments

A simple Reyes Implementation

Last year I started implementing a naive Reyes renderer. The source code can be found here:

http://home.in.tum.de/~amannj/BlueReyes.zip

I started to program the renderer because I wanted to do a little research project on Reyes, but after two weeks I gave it up and started a new one ;). Initially I didn’t plan to publish the source code, so don’t wonder about the formatting, code architecture and comments (main.cpp is really bad, but the rest is quite readable ;)). It was hacked together quick and dirty, but nevertheless I think it is a good starting point for getting into a Reyes implementation.

The tutorials by Alex were especially helpful:

http://www.steckles.com/reyes1.html

http://www.steckles.com/reyes2.html

I also started to read the book “Production Rendering” and tried to keep the code as close to the description in the book as possible. Because of the short time I didn’t manage to implement an advanced frame buffer or a stochastic hider.

To parse *.rib files I used the parser from Aqsis (http://www.aqsis.org/).

BTW: My supervisor once told me: “Everything about RenderMan is perfect – except Reyes” – and I think he is right 😉

Posted in Uncategorized | 2 Comments

Realistic Image Synthesis Using Path Tracing

Definition 1: The goal of realistic image synthesis is to render an image that is indistinguishable from a real photograph.

The realistic image synthesis test

To test whether we have succeeded in building a system that is capable of rendering realistic images, we take a collection of 20 real photos, taken with different commonly used cameras, and 20 computer-generated images from our rendering system showing scenes that could plausibly have been captured by a real camera. All images should be of the same format (e.g. aspect ratio 4:3, pixel resolution 2560×1920) and should show typical, representative scenes, regardless of whether they were photographed or computer generated. The latter means there should be no images in, for example, extreme close-up views or unusual lighting conditions, because it is easy to fool a test observer in unusual situations. Finally, all images are printed and the prints are shuffled.

We now show these 40 printed images to 40 test persons who should be demographically representative. Without knowing how the images were created, the test persons have to classify which of them are photographs and which are computer generated. If the number of false positives is around 50%, we have succeeded in building a realistic image synthesis system. A false positive here means an image that is a photograph but was classified as computer generated, or the other way around, an image that was classified as a photograph but is in fact computer generated.

Right now it is not important to have a fully correct test for a realistic image synthesis system, or to define exactly what, for example, “demographically representative” means, because current renderers are far away from rendering realistic scenes.

Even if we come up with a system that can render realistic scenes, we also have to take into account whether images can be created with this system within a reasonable time, because, for example, in the movie industry there are severe time constraints. For Pixar movies a frame should not take longer than 10 minutes to render, which means a full feature film can be rendered in about a year (at 24 frames per second a 90-minute film has roughly 130,000 frames, so 10 minutes per frame amounts to a few machine-years of work, which only comes down to about a year because many frames are rendered in parallel on a farm).

On the way to a simple path tracer

There are many algorithms for rendering realistic images. One of the simplest among them is path tracing. In the following I try to develop a path tracer that is capable of rendering realistic images.

The most interesting part of a path tracer is the computation of radiance. A very simple algorithm to compute the radiance along a ray, under some constraints, can be seen in the following code listing:

Spectrum radiance( ray r )
{
    if(r hits scene at x)
    {
        vector3 b = randomDirection();
        ray rn = constructRay(x, b);
        return emittedRadiance(x) + reflectance * radiance(rn);
    }
    else
    {
        return BackgroundSpectrum;
    }
}

image

The above picture shows a rendering that was done using the described radiance algorithm with 100 samples per pixel.

If you are confused by the terms spectrum or radiance, just think of them as meaning something similar to color.

So there are two questions right now: how can this be translated into working code, and how is the above algorithm related to reality?

Well, I want to answer the second question first.

If you didn’t understand this section, continue reading; if you understood it, you can leave the site – there is nothing I can tell you 😉

Physical foundations

Basics of radiometry

Lambert’s cosine law

Imagine a bundle of light rays hitting a surface. To make things easier we consider only the 2D case of this:

image

If the bundle is parallel to the normal vector of the surface, the area it hits is smaller than the area hit by a bundle that is not parallel to the normal; see the figure to understand this. From a physical standpoint this means the “light energy” that arrives through the bundle is always the same (we assume here that a light bundle of 1 m width always carries the same “light energy”, however ridiculous that is from a physical standpoint). This bundle energy has to be divided by the hit area, so we can say how much “light energy” per meter is arriving. If the area is large, there is not much “light energy” per meter; if it is small, it is exactly the other way around.

If you want to compute the hit area, you will find out that it depends on the cosine of the angle between the normal vector of the surface and the direction of the light. That is why the name of this law contains the word cosine; the idea goes back to a scientist called Lambert.
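A small sketch of that computation (Φ stands for the energy carried by the bundle and A⊥ for its cross-section; both symbols are only used here):

$$A = \frac{A_\perp}{\cos\theta}, \qquad \frac{\Phi}{A} = \frac{\Phi}{A_\perp}\,\cos\theta$$

where θ is the angle between the surface normal and the light direction: the energy arriving per unit surface area falls off with cos θ.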

So every time you see a cosine somewhere in a rendering equation, with high probability it is just Lambert’s cosine law.

Radiometric Quantities

There are some fundamental quantities in radiometry that help us to talk about realistic rendering. Notice that rendering realistic images does not necessarily mean rendering in a physically plausible way. But, for example, for refractions it makes a lot of sense to take the physical laws of reflection and refraction into consideration. The rule of thumb is rather simple: every time a physically motivated concept helps to get a more realistic image, it will be used.

The following table gives an overview of the most important radiometric quantities.

Quantity            Symbol   Unit
energy flux         Φ        W
radiant intensity   I        W/sr
irradiance          E        W/m²
radiance            L        W/(m² sr)

The rendering equation

There is a simple relation between irradiance and radiance:

$$dE(x, \omega_i) = L_i(x, \omega_i)\,\cos\theta_i\,d\omega_i \qquad\Longrightarrow\qquad E(x) = \int_\Omega L_i(x, \omega_i)\,\cos\theta_i\,d\omega_i$$

In simple words this means that the irradiance at a point is just all the collected radiance of the surrounding hemisphere of that point. The following figure illustrates this:

image

One of the differences between radiance and irradiance is that radiance is a directed quantity while irradiance has no direction. Just take a look at the units of the two quantities to figure that out.
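As a quick worked example (a hypothetical situation, not something taken from a renderer): if the same constant radiance L arrives from every direction of the hemisphere, the relation above gives

$$E = \int_\Omega L\,\cos\theta\,d\omega = \pi L,$$

so a point under a uniformly bright sky receives π times that radiance as irradiance.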

If we want to know how much radiance from a particular direction is arriving at our eye, it is quite simple: it is just L = Le + Lr. If the point we are looking at is a light source, it will perhaps emit some radiance into our direction; this is expressed through the term Le. If it is not a light source, no problem, then Le will just be zero and everything is fine. Lr is more complex to explain: Lr describes the radiance that arrives at the point we are looking at and is reflected exactly into our direction. So there are two questions:

1. How much radiance is arriving at the point we are looking at?

2. How much radiance is reflected towards our eye?

The first question can be easily answered: just look at the relation dE = L_i cos θ_i dω_i above.

To answer the second question I have to introduce the bidirectional reflectance distribution function, or BRDF for short:

$$f_r(x, \omega_i, \omega_r) = \frac{dL_r(x, \omega_r)}{dE(x, \omega_i)} = \frac{dL_r(x, \omega_r)}{L_i(x, \omega_i)\,\cos\theta_i\,d\omega_i}$$

In simple words, the BRDF describes how much “light” coming from a certain direction is reflected into another direction.

With some mathematics the equation can be written as:

$$L_r(x, \omega_r) = \int_\Omega f_r(x, \omega_i, \omega_r)\,L_i(x, \omega_i)\,\cos\theta_i\,d\omega_i$$

The above equation is called the reflection equation: for a point x, all directions of its surrounding hemisphere are “sampled” to determine the radiance that arrives at x from each direction. Then, according to the particular BRDF (f_r), only the part of each incoming radiance that is relevant for the outgoing direction ω_r is “reflected” into ω_r. In our case we are interested in how much radiance ends up travelling towards the eye. And that’s it.

L = Le + Lr can also be expressed as:

$$L(x, \omega_r) = L_e(x, \omega_r) + \int_\Omega f_r(x, \omega_i, \omega_r)\,L_i(x, \omega_i)\,\cos\theta_i\,d\omega_i$$

Path Tracing

The above equation is called the rendering equation. One could also write:

radiance = emitted_radiance + integral(BRDF*incoming_radiance * n.l)

For path tracing (single sample) it becomes:
radiance = emitted_radiance + BRDF*incoming_radiance(random_direction) * n.l / pdf

If you are using a diffuse BRDF with cosine-weighted importance sampling:
radiance = emitted_radiance + color/pi * incoming_radiance * n.l / (n.l/pi)
or, simplified:
radiance = emitted_radiance + color * incoming_radiance
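To make the connection back to the pseudocode from the beginning of the post, here is a minimal sketch of how the single-sample estimator could look for purely diffuse surfaces, reusing CosWeightedRandomHemisphereDirection2 from above. Scene, Intersection, the members hit.position, hit.normal, hit.emitted, hit.albedo and the depth cut-off are assumptions made for this sketch, not code from the renderer described here:

// Single-sample path tracing for diffuse surfaces.
// With cosine-weighted sampling, cos(theta) and the pdf = cos(theta)/pi cancel
// against the 1/pi of the diffuse BRDF, so the estimator collapses to
// radiance = emitted_radiance + albedo * incoming_radiance.
Spectrum radiance( const Scene &scene, ray r, int depth )
{
    if (depth > 10)                    // crude path termination instead of Russian roulette
        return Spectrum(0.0f);

    Intersection hit;
    if (!scene.intersect(r, hit))      // the ray leaves the scene
        return BackgroundSpectrum;

    // Sample the next direction proportional to cos(theta) around the surface normal.
    vector3f b = CosWeightedRandomHemisphereDirection2(hit.normal);
    ray rn = constructRay(hit.position, b);

    return hit.emitted + hit.albedo * radiance(scene, rn, depth + 1);
}

Compared to the listing at the top of this post, the only changes are the cosine-weighted direction and the depth cut-off, so that paths are guaranteed to terminate.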

to be continued…

Posted in Uncategorized | 1 Comment