Definition 1: The goal of realistic image synthesis is to render a realistic image that is indistinguishable from a real photograph.
The realistic image synthesis test
To test whether we succeeded in building a system that is capable of rendering realistic images, we take a collection of 20 real photos taken by different commonly used cameras and 20 computer-generated images from our rendering system, showing scenes that could plausibly have been captured by a real camera. All images should share the same format (e.g. aspect ratio 4:3, pixel resolution 2560×1920) and should show typical, representative scenes, regardless of whether they were photographed or computer generated. The latter means there should be no images in, for example, extreme close-up views or unusual lighting conditions, because it is easy to fool a test observer in unusual situations. Finally, all 40 images are printed and the prints are shuffled. We then show these 40 prints to 40 test persons who are demographically representative. Without knowing how the images were created, the test persons have to classify which of them are photographs and which are computer generated. If the rate of misclassifications is about 50%, we succeeded in building a realistic image synthesis system. A misclassification here means, for example, an image that is a photograph but was classified as computer generated, or the other way around: an image that was classified as a photograph but is in fact computer generated.
Right now it is not important to have a perfectly rigorous test for a realistic image synthesis system, or to define exactly what, for example, demographically representative means, because current renderers are still far away from rendering realistic scenes.
Even if we come up with a system that can render realistic scenes, we also have to consider whether scenes can be rendered with this system within a reasonable time, because, for example, the movie industry operates under severe time constraints. For Pixar movies a frame should not take longer than 10 minutes to render – which means a full feature film can be rendered in about a year.
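As a rough sanity check of this time budget, a short calculation shows what the per-frame limit implies. The film length and frame rate below are my own assumptions, not from the text:

```python
# Rough sanity check of the render-time budget above.
# Assumed: a 90-minute film at 24 fps, 10 minutes of render time per frame.
film_minutes = 90
fps = 24
minutes_per_frame = 10

frames = film_minutes * 60 * fps            # total number of frames
total_minutes = frames * minutes_per_frame  # sequential render time
machine_days = total_minutes / (60 * 24)

print(frames)        # -> 129600
print(machine_days)  # -> 900.0
```

Nine hundred machine-days of sequential work comes down to roughly a year once a small render farm shares the frames.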
On the way to a simple path tracer
There are many algorithms for rendering realistic images. One of the simplest among them is path tracing. In the following I will try to develop a path tracer that is capable of rendering realistic images.
The most interesting part of a path tracer is the computation of radiance. A very simple algorithm to compute the radiance along a ray, under some constraints, can be seen in the following code listing:
Spectrum radiance( ray r )
{
    if (r hits scene at x)
    {
        vector3 b = randomDirection();
        ray rn = constructRay(x, b);
        return emittedRadiance(x) + reflectance * radiance(rn);
    }
    else
    {
        return BackgroundSpectrum;
    }
}
The above picture shows a rendering that was done using the described radiance algorithm with 100 samples per pixel.
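The pseudocode can be made concrete with a minimal runnable sketch. The scene here – a single diffuse floor plane under a uniform grey sky – and all names such as BACKGROUND and REFLECTANCE are my own illustrative choices, not part of the listing above:

```python
import math
import random

BACKGROUND = 1.0   # "spectrum" reduced to a single grey value
REFLECTANCE = 0.5  # diffuse reflectance of the floor
MAX_DEPTH = 16     # recursion cutoff; the pseudocode above omits this

def random_direction_hemisphere():
    """Uniform random direction in the upper hemisphere (y >= 0)."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        n2 = d[0] ** 2 + d[1] ** 2 + d[2] ** 2
        if 1e-6 < n2 <= 1.0:
            n = math.sqrt(n2)
            return (d[0] / n, abs(d[1]) / n, d[2] / n)

def radiance(origin, direction, depth=0):
    if depth >= MAX_DEPTH:
        return 0.0
    # "r hits scene at x": intersect with the floor plane y = 0 from above.
    if direction[1] < -1e-6 and origin[1] > 0:
        t = -origin[1] / direction[1]
        x = (origin[0] + t * direction[0], 0.0, origin[2] + t * direction[2])
        b = random_direction_hemisphere()
        # the floor emits nothing, so emittedRadiance(x) is 0 here
        return 0.0 + REFLECTANCE * radiance(x, b, depth + 1)
    return BACKGROUND  # the ray escaped into the sky

# A ray pointing up misses the floor and sees the sky directly:
print(radiance((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))   # -> 1.0
# A ray pointing down bounces once off the floor into the sky:
print(radiance((0.0, 1.0, 0.0), (0.0, -1.0, 0.0)))  # -> 0.5
```

In this toy scene every bounce from the floor escapes into the sky, so the estimator happens to be deterministic; in a real scene the recursion and the random directions produce the noise that averaging many samples per pixel removes.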
If you are confused by the terms spectrum or radiance, for now just think of them as meaning something similar to color.
So right now there are two questions – how can this be translated into working code, and how is the above algorithm related to reality?
Well, I want to answer the second question first.
If you didn’t understand this section, continue reading; if you understood it, you can leave the site, because there is nothing more I can tell you.
Physical foundations
Basics of radiometry
Lambert’s cosine law
Imagine a bundle of light rays hitting a surface. To make things easier we consider only the 2D case:
If the bundle is parallel to the normal vector of the surface, the area it hits is smaller than the area hit by a bundle that is not parallel to the normal. See the figure to understand this. From a physical standpoint, the “light energy” arriving through the light bundle is always the same (we assume here that a light bundle of 1 m width always carries the same “light energy”, however crude this is from a physical standpoint). This bundle energy has to be divided by the hit area, so we can say how much “light energy” per meter is arriving. If the area is large, there is not much “light energy” per meter; if it is small, it is exactly the other way around.
If you compute the hit area, you will find out that it depends on the cosine of the angle between the normal vector of the surface and the direction of the light. That is why the name of this law contains the word cosine. The idea goes back to a scientist called Lambert.
So every time you see a cosine somewhere in a rendering equation, with high probability it is just Lambert’s cosine law.
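A quick numeric check of this, using the same toy assumption of a 1 m wide beam as above:

```python
import math

# A 2D beam of width 1 m hits a surface at angle theta to the surface normal.
# The footprint on the surface is width / cos(theta), so the "energy per meter"
# arriving on the surface falls off with cos(theta) -- Lambert's cosine law.
def energy_per_meter(theta_deg, beam_width=1.0, beam_energy=1.0):
    theta = math.radians(theta_deg)
    footprint = beam_width / math.cos(theta)  # hit area grows as the beam tilts
    return beam_energy / footprint            # equals beam_energy * cos(theta)

print(energy_per_meter(0))   # -> 1.0  (beam along the normal: smallest footprint)
print(energy_per_meter(60))  # -> ~0.5 (cos 60 deg = 0.5: twice the footprint)
```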
Radiometric Quantities
There are some fundamental quantities in radiometry that help us talk about realistic rendering. Notice that rendering realistic images does not necessarily mean rendering in a physically plausible way. But for refractions, for example, it makes much sense to take the physical laws of refraction into consideration. The rule of thumb is rather simple: every time a physically motivated concept helps to get a more realistic image, it will be used.
The following table gives an overview of the most important radiometric quantities.
Quantity            Symbol   Unit
energy flux         Φ        W
radiant intensity   I        W/sr
irradiance          E        W/m²
radiance            L        W/(m²·sr)
The rendering equation
There is a simple relation between irradiance and radiance:
E(x) = ∫_Ω L(x, ω) cos θ dω
In simple words this means that the irradiance at a point is just all the collected radiance over the surrounding hemisphere Ω of that point. The following figure illustrates this:
One of the differences between radiance and irradiance is that radiance is a directed quantity while irradiance has no direction. Just take a look at the units of the two quantities to see this.
If we want to know how much radiance from a particular direction is arriving at our eye, it is quite simple: it is just L = Le + Lr. If the point we are looking at is on a light source, it may emit some radiance into our direction; this is expressed by the term Le. If it is not a light source, no problem: Le will just be zero and everything is fine. Lr is more complex to explain. Lr describes the radiance that arrives at the point we are looking at and is reflected exactly into our direction. So there are two questions:
1. How much radiance is arriving at the point we are looking at?
2. How much of that radiance is reflected towards the eye?
The first question is easily answered by the relation between irradiance and radiance from above: incoming radiance L_i from direction ω_i contributes dE = L_i(ω_i) cos θ_i dω_i.
To answer the second question I have to introduce the Bidirectional Reflectance Distribution Function, or BRDF for short:
In simple words, the BRDF fr(x, ω_i, ω_r) describes how much “light” arriving from a certain direction ω_i is reflected into another direction ω_r; formally it is the ratio of reflected radiance to incoming irradiance, fr = dLr(ω_r) / dE(ω_i).
With some mathematics the equation can be written as:
Lr(x, ω_r) = ∫_Ω fr(x, ω_i, ω_r) L_i(x, ω_i) cos θ_i dω_i
This equation is called the reflection equation, because for a point x all directions of its surrounding hemisphere are “sampled” to determine the radiance that is arriving at x from all directions. Then, according to the particular BRDF fr, only the part of each incoming radiance that is relevant for the direction ω_r is “reflected” into ω_r. In our case we are interested in how much radiance travels into the direction of the eye. And that’s it.
L = Le + Lr can then also be expressed as:
L(x, ω_r) = Le(x, ω_r) + ∫_Ω fr(x, ω_i, ω_r) L_i(x, ω_i) cos θ_i dω_i
Path Tracing
The above equation is called the rendering equation. One could also write:
radiance = emitted_radiance + integral(BRDF*incoming_radiance * n.l)
For path tracing (single sample) it becomes:
radiance = emitted_radiance + BRDF*incoming_radiance(random_direction) * n.l / pdf
If you are using a diffuse BRDF (color/pi) with cosine-weighted importance sampling (pdf = n.l/pi):
radiance = emitted_radiance + color/pi * incoming_radiance * n.l / (n.l/pi)
or, simplified:
radiance = emitted_radiance + color * incoming_radiance
to be continued…