Light rendering on maps

Also known as “lightmaps”. I have finally come around to implementing a full-fledged lightmapper for projet S.E.R.Hu.M.
projet S.E.R.Hu.M. is my high-school (2002) attempt at copying Valve’s GoldSrc engine and making a game out of it.

I’ve never come close, but I still like to develop on its codebase, for experiments or just random progress. It is a piece of art, like a sculpture, except one that would take a lifetime to complete.
I always wanted to build this; it was one of the main things I was excited about when I first started the project: “oh yeah, when I need to make lightmaps, juicy tech in sight!”
But in 2002 I had no idea how to compute radiosity, and I thought a direct-lighting raytracer would be just enough. And it could be, as long as you manually place lights everywhere, like probes.

But now that I am an educated senior graphics programmer, I have no problem grasping some of these algorithms, notably Henrik Wann Jensen’s photon map approach with final gathering.
As you can see if you follow the link, this method dates back to 1996. Many newer, crazier methods have followed; the one I’m using is actually a later variant, but still from around the 2000s.
Today we have Metropolis light transport, augmented with low-variance estimators, embedded in stochastic path tracers; and the whole thing runs on GPUs. Pretty crazy stuff.

Today we have a myriad, to the power of ten, of crazy, impossible-to-understand graphics rendering methods:
http://cseweb.ucsd.edu/~ravir/papers/invlamb/josa.pdf
https://www.solidangle.com/research/egsr2012_volume.pdf
http://www.tnw.tudelft.nl/fileadmin/Faculteit/TNW/Over_de_faculteit/Afdelingen/Radiation_Radionuclides_Reactors/Research/Research_Groups/NERA/Publications/doc/PhD_Christoforou.pdf
https://people.csail.mit.edu/fredo/TOG/tog.pdf
Some are easier…
http://www.crytek.com/download/Light_Propagation_Volumes.pdf
https://research.nvidia.com/sites/default/files/publications/GIVoxels-pg2011-authors.pdf

And that is all very well, but I will not implement something I don’t fully understand. I have actually implemented LPVs; they can be seen in a product called LumenRT 2015.
Check them out: https://www.youtube.com/watch?v=dBxMCdujdUw

But I didn’t want to redo a technique I had already implemented, so I went for my old target: lightmaps. This way I get to implement final gathering, yay!

First I had to write a mesh parameterizer. This wasn’t very easy; it was fun, but I did a crappy job, because mesh parameterization is crazy hard. So I wrote an ad-hoc technique that works well with the blocky architectural designs we get out of Worldcraft (sorry, Hammer).
I decided to build a database of individual triangles, each bearing information about its surface area and maximum edge length. Then I grouped similar triangles into pairs, with a preference for triangles that actually share an edge in 3D. This gives me a list of quads.
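To give an idea, here is a minimal sketch of that pairing pass; the `Tri` structure, the similarity score and the greedy loop are my own reconstruction for illustration, not the actual S.E.R.Hu.M. code:

```cpp
#include <vector>
#include <utility>
#include <cmath>
#include <cstdint>

// Hypothetical triangle record: surface area, longest edge, vertex indices.
struct Tri {
    float    area;
    float    maxEdge;
    uint32_t v[3];       // indices into the level's vertex buffer
    bool     paired = false;
};

// Two triangles share an edge if they have two vertex indices in common.
static bool ShareEdge(const Tri& a, const Tri& b) {
    int common = 0;
    for (uint32_t ia : a.v)
        for (uint32_t ib : b.v)
            if (ia == ib) ++common;
    return common >= 2;
}

// Greedy pairing: for each triangle, pick the most similar unpaired triangle,
// strongly preferring one that shares an edge, so the pair forms a real quad.
static std::vector<std::pair<int,int>> PairTriangles(std::vector<Tri>& tris) {
    std::vector<std::pair<int,int>> quads;
    for (int i = 0; i < (int)tris.size(); ++i) {
        if (tris[i].paired) continue;
        int   best = -1;
        float bestScore = 1e30f;
        for (int j = i + 1; j < (int)tris.size(); ++j) {
            if (tris[j].paired) continue;
            float score = std::abs(tris[i].area    - tris[j].area)
                        + std::abs(tris[i].maxEdge - tris[j].maxEdge);
            if (ShareEdge(tris[i], tris[j])) score *= 0.1f;  // arbitrary bias towards real quads
            if (score < bestScore) { bestScore = score; best = j; }
        }
        if (best >= 0) {
            tris[i].paired = tris[best].paired = true;
            quads.emplace_back(i, best);
        }
    }
    return quads;
}
```

The 0.1 factor is just an arbitrary bias so that triangles sharing an actual edge win over merely similar ones.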

Then comes the packing. I took this idea and it worked great, with some personal pepper on top to better adapt it to my case, for the final seasoning.

Now we’re ready to render stuff. I took the approach of visiting the lumels of the lightmap, then reprojecting each lumel into 3D by interpolating the coordinates from the triangle’s vertices. From this 3D point I can finally do actual lighting; this is where Embree comes into play.
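A rough sketch of that reprojection, assuming each lumel stores which triangle it belongs to and its barycentric coordinates inside it (the names and types here are mine, for illustration):

```cpp
#include <array>

struct Vec3 { float x, y, z; };

static Vec3 operator*(const Vec3& v, float s)        { return { v.x * s,   v.y * s,   v.z * s   }; }
static Vec3 operator+(const Vec3& a, const Vec3& b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Reproject a lumel into world space: interpolate the triangle's vertex
// positions with the lumel's barycentric coordinates (u, v, 1-u-v).
static Vec3 LumelToWorld(const std::array<Vec3, 3>& tri, float u, float v) {
    const float w = 1.0f - u - v;
    return tri[0] * w + tri[1] * u + tri[2] * v;
}
```

The same interpolation applied to the vertex normals gives the shading normal used at the lumel.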

Embree is freaking awesome. It is a beautiful piece of software, made by Intel to run fast on Intel architectures. And fast it is: I managed to get 19 million intersections per second (Core i7) in my use case without working on ray packets or ray streams (and another paper here) at all.
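For reference, a minimal shadow-ray query with Embree looks roughly like this (sketched against the Embree 3 API; device, scene and geometry setup are omitted):

```cpp
#include <embree3/rtcore.h>
#include <limits>

// Returns true if the segment from 'org' along 'dir' (up to 'maxDist')
// is blocked by any geometry in the scene, i.e. the lumel is in shadow.
bool Occluded(RTCScene scene, const float org[3], const float dir[3], float maxDist) {
    RTCIntersectContext context;
    rtcInitIntersectContext(&context);

    RTCRay ray = {};
    ray.org_x = org[0]; ray.org_y = org[1]; ray.org_z = org[2];
    ray.dir_x = dir[0]; ray.dir_y = dir[1]; ray.dir_z = dir[2];
    ray.tnear = 1e-4f;          // small offset to avoid self-intersection
    ray.tfar  = maxDist;
    ray.mask  = 0xFFFFFFFF;
    ray.flags = 0;

    // rtcOccluded1 sets tfar to -inf when any hit is found.
    rtcOccluded1(scene, &context, &ray);
    return ray.tfar == -std::numeric_limits<float>::infinity();
}
```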

I quickly had direct lighting, with broken results at first: notably, all black. Then I got some black and some white, then something that looked OK mixed with weird black seams. Then I managed to get it to work completely.

[image: Capture8]

This is a view of one of the first results I had. It shows my classic warehouse scene with 3 or 4 spot lights at the ceiling.
We can still see what looks like a bug: the iron seams at the top are very bright. This is because their triangles are too stretched, so my sorting algorithm decided to ban them; I intend to treat this kind of geometry per vertex later.

You can see the difference from the flat lighting I had before; this is what you would get without lightmaps:

[image: Capture1]

Another spot-light view, from inside the tall observatory staircase:

[image: Captureb]

I am not sure the attenuation formula is right. This is not easy to get right, because of the non-physical units used, and because infinitely small lights make no sense, so how do you design a formula that makes sense? With all the formulas I used to see, the light intensity is infinite at the light position; then after 1 meter it becomes the “original artist light value”, or if you are lucky/unlucky, that value divided by pi. Why 1 meter? Because intensity = lightcolor / distance (or distance squared). You see that intensity is equal to lightcolor when distance is 1. So if your world unit is a meter, it means you attenuate from 1 meter. What if your unit is not a meter? Your attenuation varies. THAT is the pain in the butt. This smells of arbitrariness to me. One day I’ll sort this out.
Until then, I use a contraption, an empirical technique where the artist specifies at what distance in the world he/she wants the attenuation to reach 95% (so 5% of the energy remaining). In between I use a distance-squared curve, because that is the most physically correct.
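One way to realize that contraption (a sketch of my reading of it, not necessarily the exact formula the engine uses) is to keep an inverse-square curve but solve its constant so that only 5% of the intensity remains at the artist-specified distance:

```cpp
#include <cmath>

// Inverse-square falloff whose constant is solved so that only 5% of the
// light's intensity remains at 'attDist' (the distance the artist picks).
//   attenuation(d) = 1 / (1 + k*d^2), with 1 / (1 + k*attDist^2) = 0.05
//                                      =>  k = 19 / attDist^2
static float Attenuation(float distance, float attDist) {
    const float k = 19.0f / (attDist * attDist);
    return 1.0f / (1.0f + k * distance * distance);
}

// Usage: light reaching the lumel = lightColor * Attenuation(d, artistDistance).
```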

However, you find a lot of renderers that use linear attenuation, and I now know why: in the past we never used gamma-correct color encoding. We made all lighting computations in gamma space instead of linear space, which is a total mistake; it breaks everything. Of course, now that I know, I didn’t make that mistake.
I even went so far as to create a color class that can store its current working space and convert from one to the other on demand. It pops asserts in case of mixed operands during computations. Yay!
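A minimal sketch of what such a color class can look like (the exact fields and the simple 2.2 gamma approximation here are simplifications for illustration, not the engine code):

```cpp
#include <cassert>
#include <cmath>

enum class ColorSpace { Linear, Gamma };

// Color that remembers the space it currently lives in, and refuses to be
// mixed with a color living in the other space.
struct Color {
    float r, g, b;
    ColorSpace space;

    Color toLinear() const {
        if (space == ColorSpace::Linear) return *this;
        // Simple 2.2 gamma approximation (not the exact sRGB curve).
        return { std::pow(r, 2.2f), std::pow(g, 2.2f), std::pow(b, 2.2f), ColorSpace::Linear };
    }

    Color toGamma() const {
        if (space == ColorSpace::Gamma) return *this;
        return { std::pow(r, 1.f / 2.2f), std::pow(g, 1.f / 2.2f), std::pow(b, 1.f / 2.2f), ColorSpace::Gamma };
    }

    Color operator+(const Color& o) const {
        assert(space == o.space && "mixing colors from different working spaces");
        return { r + o.r, g + o.g, b + o.b, space };
    }

    Color operator*(float s) const { return { r * s, g * s, b * s, space }; }
};
```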

The final goodie is the lighting from the sky. This is much more interesting than plain stupid direct lighting. I wrote a Monte Carlo sampler that reads a cube map I prepared with CubeMapGen, which pre-bakes irradiance. However, one does not simply evaluate the ambient occlusion of a lumel; this is where the Monte Carlo sampler plays its role. It sends many random rays towards the sky and counts how many pass. Many means I can take the cube map sample almost as-is; few means we lie in the dark.
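Here is a minimal sketch of that sky visibility estimate, reusing the Occluded() helper from the Embree snippet above; the uniform hemisphere sampling and the final scaling are one plausible way to do it, not necessarily exactly what the engine does:

```cpp
#include <embree3/rtcore.h>
#include <cstdlib>

// Defined in the previous snippet: true if the ray from 'org' along 'dir' is blocked.
bool Occluded(RTCScene scene, const float org[3], const float dir[3], float maxDist);

// Fraction of 'numSamples' random hemisphere rays (around 'normal') that reach
// the sky without hitting geometry. The result scales the pre-baked irradiance
// fetched from the cube map.
static float SkyVisibility(RTCScene scene, const float pos[3], const float normal[3],
                           int numSamples, float maxDist) {
    int unoccluded = 0;
    for (int i = 0; i < numSamples; ++i) {
        // Uniform direction in the unit sphere, flipped into the upper hemisphere.
        float d[3];
        do {
            d[0] = 2.f * rand() / RAND_MAX - 1.f;
            d[1] = 2.f * rand() / RAND_MAX - 1.f;
            d[2] = 2.f * rand() / RAND_MAX - 1.f;
        } while (d[0]*d[0] + d[1]*d[1] + d[2]*d[2] > 1.f);
        float dot = d[0]*normal[0] + d[1]*normal[1] + d[2]*normal[2];
        if (dot < 0.f) { d[0] = -d[0]; d[1] = -d[1]; d[2] = -d[2]; }

        if (!Occluded(scene, pos, d, maxDist))
            ++unoccluded;
    }
    return (float)unoccluded / (float)numSamples;
}

// skyContribution = irradianceCubeMap.Sample(normal) * SkyVisibility(...)
```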

Let’s see some images

[image: Capture12]

Here we see the effect of ambient occlusion: the indoor parts don’t get light from the sky.
You can also see the noisy grain; this is due to the random sampling. I experimented with stratified sampling and got some results, but I also got banding. I am not sure which artefact I prefer!
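For the curious, stratified sampling is a small change over the purely random sampler; a sketch (again a simplification: jittered grid cells mapped to directions around +Z, to be rotated onto the lumel’s normal afterwards):

```cpp
#include <cmath>
#include <cstdlib>

// Stratified hemisphere directions around +Z: split the unit square into an
// n x n grid and jitter one sample inside each cell, then map (u, v) to a
// direction with the uniform hemisphere mapping. Less clumping than pure
// random sampling, at the risk of visible banding when n is small.
static void StratifiedHemisphere(int n, float (*dirs)[3]) {
    const float PI = 3.14159265f;
    int s = 0;
    for (int iy = 0; iy < n; ++iy)
        for (int ix = 0; ix < n; ++ix, ++s) {
            float u = (ix + (float)rand() / RAND_MAX) / n;   // jittered cell sample
            float v = (iy + (float)rand() / RAND_MAX) / n;
            float phi      = 2.f * PI * u;
            float cosTheta = v;                              // uniform over the hemisphere
            float sinTheta = std::sqrt(1.f - cosTheta * cosTheta);
            dirs[s][0] = sinTheta * std::cos(phi);
            dirs[s][1] = sinTheta * std::sin(phi);
            dirs[s][2] = cosTheta;
        }
}
```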

The same image with 4000 samples per pixel:

[image: Capture2e]

Unfortunately this level, as it is, takes about one hour to compute at this sampling quality. Not good; I need a drastic cut. My target is one minute per level.

Now let’s simply see more images, with some comments to go with them.

[image: Capture33]

This is an example of how smooth the lighting gets with 4k samples per lumel.

[image: Capture19]

This shows noise in the random sampler.

[image: Capture17]

This exhibits the seam problem everybody eventually runs into with a mesh parameterizer. Mine is particularly bad, so I get particularly bad results.

[image: Capture15]

This is 50% stratified, so we get some noise, but… not fully random.

[image: Capture13]

Here is 100% random; we can clearly see the grain on those otherwise clean walls!

[image: Capture1a]

Nice ambient occlusion effects in the test map.

More to come!
