Thursday 28 November 2013

Tech Feature: Linear-space lighting


Linear-space lighting is the second big change that has been made to the rendering pipeline for HPL3. Working in a linear lighting space is the most important thing to do if you want correct results.
It is an easy and inexpensive technique for improving the image quality. Working in linear space is not something that makes the lighting look better, it just makes it look correct.

(a)  Left image shows the scene rendered without gamma correction 
(b) Right image is rendered with gamma correction

Notice how the cloth in the image to the right looks more realistic and how much less plastic the specular reflections are.
Doing math in linear space works just as you are used to. Adding two values returns the sum of those values, and multiplying a value by a constant returns the value multiplied by that constant.

This is how you would expect it to work, so why isn't it the case by default?

Monitors

Monitors do not behave linearly when converting voltage to light. A monitor follows something closer to a power curve when converting the pixel value, and the shape of this curve is determined by the monitor's gamma exponent. The standard gamma for a monitor is 2.2, which means that a pixel with 100 percent intensity emits 100 percent light, but a pixel with 50 percent intensity only outputs about 22 percent light. To get the pixel to emit 50 percent light the intensity has to be 73 percent.
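Written out with the standard gamma of 2.2, the relationship looks like this (a quick worked example):

light_emitted = intensity ^ 2.2
0.50 ^ 2.2 ≈ 0.22   (50 percent intensity gives only about 22 percent light)
0.73 ^ 2.2 ≈ 0.50   (73 percent intensity is needed for 50 percent light)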

The goal is to get the monitor to output linearly so that 50 percent intensity equals 50 percent light emitted.

Gamma correction

Gamma correction is the process of converting one intensity to another intensity which generates the correct amount of light.
The relationship between intensity and light for a monitor can be simplified as a power function called gamma decoding.

light = intensity ^ gamma

To cancel out the effect of gamma decoding, the value has to be converted using the inverse of this function. A power function is inverted by raising to the reciprocal of the exponent, and this inverse function is called gamma encoding.

intensity = light ^ (1 / gamma)

Applying the gamma encoding to the intensity makes the pixel emit the correct amount of light.
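In shader terms the two operations are just one pow call each. A minimal sketch, with the gamma value hard coded to 2.2:

vec3 gamma_decode(vec3 encoded)
{
    // What the monitor effectively does: encoded intensity -> emitted light
    return pow(encoded, vec3(2.2));
}

vec3 gamma_encode(vec3 linear)
{
    // The inverse operation: linear light -> intensity to send to the monitor
    return pow(linear, vec3(1.0 / 2.2));
}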

Lighting

Here are two images that use simple Lambertian lighting (N * L).

(a) Lighting performed in gamma space
(b) Lighting performed in linear space
The left image has a really soft falloff which doesn't look realistic. When the angle between the normal and the light source is 60 degrees the brightness should be 50 percent. The image on the left is far too dim to match that. Applying a constant brightness to the image would make the highlights too bright and still not fix the really dark parts. The correct way to make the monitor display the image correctly is by gamma encoding the output.
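As a minimal fragment-shader sketch (the variable names are illustrative), the fix for the left image is simply to encode the Lambert term before it is written out:

float n_dot_l = max(dot(normalize(normal), normalize(light_dir)), 0.0); // 0.5 at 60 degrees
// Written out directly, the monitor darkens it: pow(0.5, 2.2) ≈ 0.22
// Encoding the value first makes the monitor emit the intended 50 percent light:
vec3 output_color = vec3(pow(n_dot_l, 1.0 / 2.2));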

 (a) Lighting and texturing in gamma space
(b) Lighting done in linear space with standard texturing
(c) The source texture

Using textures introduces the next big problem with gamma correction. In the left image the color of the texture looks correct but the lighting is too dim. The right image is corrected and the lighting looks correct, but the texture, and the whole image, is washed out and desaturated. The goal is to keep the colors from the texture and combine them with the correct-looking lighting.

Pre-encoded images

Pictures taken with a camera or paintings made in Photoshop are all stored in a gamma encoded format. Since the image is already encoded, the monitor can display it directly. The gamma decoding of the monitor cancels out the encoding of the image and linear brightness gets displayed. This saves the step of having to encode the image in real time before displaying it.
The second reason for encoding images is based on how humans perceive light. Human vision is more sensitive to differences in shaded areas than in bright areas. Applying gamma encoding expands the dark areas and compresses the highlights, which results in more bits being used for darkness than for brightness. A normal photo would require 12 bits to be saved in linear space, compared to the 8 bits used when stored in gamma space. Images are encoded with the sRGB format, which corresponds to a gamma of roughly 2.2.

Images are stored in gamma space but lighting works in linear space, so the images need to be converted to linear space when they are loaded into the shader. If they are not converted correctly there will be artifacts from mixing the two different lighting spaces. The conversion to linear space is done by applying the gamma decoding function to the texture.



(a) All calculations have been made in gamma space
(b) Correct texture and lighting, texture decoded to linear space and then all calculations are done before encoding to gamma space again

Mixing light spaces

Gamma correction is a term used to describe two different operations: gamma encoding and gamma decoding. When learning about gamma correction it can be confusing because the same word is used for both operations.
Correct results are only achieved if the texture input is decoded and the final color is encoded. If only one of the operations is used, the displayed image will look worse than if neither is.



(a) No gamma correction, the lighting looks incorrect but the texture looks correct.
(b) Gamma encoding of the output only, the lighting looks correct but the textures become washed out.
(c) Gamma decoding only, the texture is much darker and the lighting is incorrect.
(d) Gamma decoding of the texture and gamma encoding of the output, the lighting and the texture both look correct.

Implementation

Implementing gamma correction is easy. Converting an image to linear space is done by applying the gamma decoding function. The alpha channel should not be decoded, as it is already stored in linear space.

// Correct but expensive way
vec3 linear_color = pow(texture(encoded_diffuse, uv).rgb, vec3(2.2));
// Cheap approximation using a power of 2 instead
vec3 encoded_color = texture(encoded_diffuse, uv).rgb;
vec3 linear_color = encoded_color * encoded_color;

Any hardware with DirectX 10 or OpenGL 3.0 support can use the sRGB texture format. This format allows the hardware to perform the decoding automatically and return the data as linear. The automatic sRGB correction is free and gives the benefit of doing the conversion before texture filtering.
To use the sRGB format in OpenGL, just pass GL_SRGB_EXT instead of GL_RGB to glTexImage2D as the internal format.
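A minimal sketch of the upload call (width, height and pixel_data are assumed to come from the image loader):

// Tell OpenGL the data is sRGB encoded; sampling in the shader then returns linear values
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_EXT,
             width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixel_data);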

After doing all calculations and post-processing, the final color should be gamma encoded with a gamma that matches the gamma of the monitor.

vec3 encoded_output = pow(final_linear_color, vec3(1.0 / monitor_gamma));

For most monitors a gamma of 2.2 works fine. To get the best result the game should let the player select a gamma value from a calibration chart.
This value is not the same gamma value that is used to decode the textures. All textures are stored at a gamma of 2.2, but that is not true for monitors; they usually have a gamma ranging from 2.0 to 2.5.

When not to use gamma decoding

Not every type of texture is stored gamma encoded, and only the texture types that are encoded should be decoded. A rule of thumb is that if the texture represents some kind of color it is encoded, and if it represents something mathematical it is not encoded.
  • Diffuse, specular and ambient occlusion textures all represent color modulation and need to be decoded on load 
  • Normal, displacement and alpha maps do not store a color, so the data they contain is already linear (see the sketch below)
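A minimal sketch of how this looks in a shader (the texture names are illustrative):

vec3 diffuse  = pow(texture(diffuse_map, uv).rgb, vec3(2.2));   // color data: decode to linear
vec3 specular = pow(texture(specular_map, uv).rgb, vec3(2.2));  // color data: decode to linear
vec3 normal   = texture(normal_map, uv).rgb * 2.0 - 1.0;        // mathematical data: use as-is
float alpha   = texture(diffuse_map, uv).a;                     // alpha is already linear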

Summary

Working in linear space and making sure the monitor outputs light linearly is needed to get properly rendered images. It can be complicated to understand why this is needed but the fix is very simple.
  • When loading a gamma encoded image, apply gamma decoding by raising the color to the power of 2.2; this converts the image to linear space
  • After all calculations and post-processing are done (the very last step), apply gamma encoding to the color by raising it to one divided by the gamma of the monitor

If both of these steps are followed the result will look correct.
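Put together, the whole thing is only a couple of shader lines (a sketch with illustrative names; everything in between just has to be done in linear space):

// 1. Decode the gamma encoded texture to linear space
vec3 albedo = pow(texture(encoded_diffuse, uv).rgb, vec3(2.2));

// ... all lighting and post-processing in linear space ...
vec3 final_linear_color = albedo * lighting;

// 2. Encode the final color with the monitor's gamma as the very last step
vec3 encoded_output = pow(final_linear_color, vec3(1.0 / monitor_gamma));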



Friday 22 November 2013

People of Frictional: Thomas Grip

Introduction
This will be the first part in a series where we introduce all the members of Frictional Games. Apart from the obvious "getting to know the team", it will also be an insight into the daily workings of the company. What makes Frictional Games different from many other developers is that everybody works from home, we rarely meet in person and very few of us had any professional game making experience before joining the team. All communication is done over Skype (plus the rare phone call), and for the last few years the whole team has only met up once a year. When we tell this to people we usually get surprised reactions, and they have trouble understanding how it all can work. Hopefully this series can help answer that.

With that said, let's get this series started! First up, I will get myself out of the way.

Who am I?
Hi all! My name is Thomas Grip and I am one of the two founding members of Frictional Games. For the first few years at Frictional Games I used to work from my living room, on a desk placed next to the TV. (This made me an expert in shows like Top Model, Bold and Beautiful and whatever my fiancee watched while I worked during the evenings.) Eventually we moved to a bigger apartment and I got my own office. This is how my work space looks right now:


Background
I started out making games in 1997 (when I was 16) and my first game, called "Köttar Monstret" (yeah, I know...), was made on a TI-83 and became kinda popular in my class. At the time I did not have a computer, and had never really used one. I did not feel I was a very technical person and even though I had chosen to study the natural sciences, my main interest was in art and I drew and painted a lot. But when I started to program on that TI-83, which was quite clunky with only 8 or so short lines visible at once, it was like a revelation to me. I had never understood that you could do this sort of thing with a computer. I was hooked, and needed to learn more. First up, I got hold of an actual PC, this wonderful machine, and started to learn QBasic on it. With no access to the internet, my only source of information was old and worn programming books that I found at the library. I remember searching hard for some book that explained how to display graphics. When QBasic did not tell me, I learned Pascal, but there were no graphics in there, so I went on to C, but I did not find anything there either. The best I could do was to get colored symbols from the extended ASCII character set, but that was no fun, I wanted proper pictures!


When at school I mostly spent lectures drawing stuff like this.

Eventually, I stumbled upon a book, called Game Programming Explorer or something, in the back of a strange bookstore on the outskirts of my home town. It explained to me that I had to program these routines myself! So I learned all about the wonderful world of Mode 13h. Soon after I bought a proper PC (120Mhz if I recall correctly) that some shady guy had advertised in the newspaper. As we got better access to the internet at school I found a site called ProgrammersHeaven.com (it looked different back in 98) and I downloaded tons of stuff on floppy disks. My most important discoveries were Denthor's Asphyxia Tutorials and a small game called "Boboli" that came along with source code (made by this guy). These were my main inspirations for a while - until I stumbled upon Allegro. This was (and still is) a game development library with tons of useful functionality. No longer did I need to code all those low-level graphics, keyboard and sound routines myself! It was like magic to me. And what was more, around this library was a whole community of people making games. There were annual competitions, reviews and an online database with all games using the library. As far as I know, this was the first gathering similar to today's indie movement.

Exploring a dark basement in my first proper horror game, Fiend.

Using Allegro I created Project 2 and continued making another similar top-down game using rendered Half-Life models. Eventually I made Fiend, the game that set me on my course as a horror game developer. In this game I made pretty much everything myself: code, art and music. As a sidenote, it is interesting to note that I had zero expectations of making any money from this. I simply made these games because I loved making them. Even getting player feedback was a rare thing. The very idea of selling my games was preposterous. I think this was a pretty common mindset at the time, and quite different from how it is nowadays with outlets like Steam. Making your own games feels much more like a viable career option today. Back in 2000 this was not the case at all.

In 2002 I started studying at the university (bachelor of science in software engineering) and I had also started my next project: Unbirth. This time I wanted to make it in 3D and started to learn some basic modelling and texturing. However, there was a big problem with finding a 3D engine. All the good ones were commercial and expensive, and the free alternatives did not feel like viable options. I think the best one was Ogre3D, but it was lacking a lot of features back then. Luckily, I got in contact with a guy who was developing his own commercial 3D engine and I got to use it for free. I worked on the game for 2 years, but it never got completed, mainly due to various engine problems along the way. After this I swore to never use unfinished third-party software again, and to try to make as much as possible by myself. All this time was not wasted though, as I had learned tons about the structure and design of a game engine. Had I not used this engine for Unbirth, I doubt I could have created my own later on.

Jumping and shooting, while conserving energy, were the core aspects of Energetic.

During the development of Unbirth I got to know Jens, whom I would later found Frictional Games with, and as our university educations would end at the same time, we decided to make a thesis project together. This resulted in Energetic, which can be seen as the first step towards the formation of Frictional Games. It was the first project that we made from the ground up together and some of the game's engine code is still in use (the engine was actually named HPL at this point).

When university was over I did not know what to do next. I knew I wanted to make games, but I do not think I ever saw it as a proper career path and instead just thought I should do something non-game programming related. At this point Jens asked me if I wanted to do a Master's course at Gotland. The course was all done from a distance and was mainly about making a big game project. That sounded really interesting to me, so before the course even started, I began working (using Energetic's code as a base) on my own 3D engine. The idea was to make a game that continued along the same lines as Unbirth. And one thing was sure: I did not want to use a third party engine again. When the course was over, the Penumbra Tech Demo was the result. The game did not do very well at a competition we submitted it to (SGA), but I hoped it might be a way to get a foot inside some actual game company. However, a month or so after putting it up online, it exploded and got downloaded more than a million times over the course of the summer. Remember that, all start-up game devs: bad results in a competition are not the end of the world!

Before starting Penumbra: Overture, we had some plans to do a sci-fi brawler/shooter. Here is an enemy sketch I made for that game.

With this success behind us, we decided to try and start a company, and I scrapped my thoughts of joining a "proper" game developer. The technology used in the tech demo was the foundation for our first game, "Penumbra: Overture", with the team consisting of myself, Jens and another guy from the master's course, Anton. After we had worked on the game for more than half a year, Frictional Games was officially formed on January 1st, 2007.

Working from home means you sometimes need to do multiple tasks at once...


What do I do?
When Frictional Games first started I did all the C++ programming, level design, planning, about half of the map scripting (using Angel Script), most concept art and even some level modelling. As we have hired more people, the amount of stuff I have to do has (thank god!) gone down a bit, and currently I mostly do design, part of the programming and most of the planning. I also act as a sort of lead artist and decide in broad terms what direction the art should take.

The thing that I spend most of my time doing these days is design work. This includes a large variety of tasks, the most obvious of which is simply writing a design document for each level. When making the type of games that we do, a proper design for each level is crucial. We do not have any basic gameplay mechanics that you can simply add in a variety of permutations. Every activity must be designed, programmed and often have specific art assets created for it. On top of that, every single part of the game is deeply connected with the story. Actually, when we create our games we do not really separate the gameplay and story, as both stem from the same kind of interactions. The only thing that we take care of separately is the plot, which is something that is written at a fairly early stage and describes the main happenings that the player will take part in.

So when you have a game like this, you cannot just start with a basic idea and then flesh things out as you go along (as you might do in a shooter). Normally, we have our writer, an artist, a programmer and sometimes even our sound and music people doing assets for a level at the same time. All of these parts are crucial for the final experience and had we not had a written plan that everybody could use as a base, then nothing would work. However, the design document is not something set in stone. It just represents the first draft. As the map is being implemented things evolve and might change quite drastically. This means that the people who are working on the map (writer, programmer and artist) are all part-designers as well. Sometimes it is just not possible to implement something the way the design document says, sometimes details are missing and sometimes new ideas that take things in an entirely new direction pop up.


Example of the amazing ms-paint art I sometimes send as feedback to artists.

This leads to my biggest design related task: feedback. As all of the assets and implementations are constantly in flux, it is my job to make sure that they are still coherent with the overall vision of the game. This might sometimes lead to long discussions on what the intentions are, nagging about specific details or just explanations of the bigger picture. While crucial, this sort of thing is often annoying to me because it never feels like you are accomplishing anything. You basically just pester people about changing things. But it is also a great feeling, as you get more of an outside view and can see the entire project coming together, step by step.

The programming tasks I do mostly have to do with subsystems, map scripting and AI. At the start of SOMA (our current project), I did a lot of tech related programming, for instance terrain, undergrowth and scripting. But ever since we hired a dedicated tech programmer I hardly do any of that. I still try to get my hands dirty in tech when I have time for it though, and I implemented an immediate mode GUI system quite recently. But mainly I just plan out what tech related things to focus on, and help out with some of the high-level design. Since I do most of the game design work, I try to program the more design-sensitive or unpredictable parts when I am able to. I think that if you as a designer only ever supervise the construction of a game, there is a certain magic that gets lost. For certain parts of the gameplay, you cannot say how you want it to work until you see it in action. Therefore I feel it is very important that I build some of that stuff, like AI and certain visual effects, myself.

All planning is done in Google Docs. Here is how the end of last year looked. (Spoilerish stuff censored!)

Finally, I also do a lot of the planning for the project. Our approach is not to micromanage or waste time on any sort of strict development method. What we do is that every week people get something they should work on and then we have special "Show And Tell" days when the task should be done and shown to the rest of the team. How to utilize the time during the week is totally up to each and every one. Despite having this loose attitude towards planning, there is still quite a lot of work to it. Whenever some assignment slips, it often affects the schedule of several other team members and you need to move stuff around. It is also important to constantly plan far ahead, and make sure that the project is on track. It is easy to just get focused on the "here and now" and forget about the overall progress. As early as possible we make a rough plan for when the game is to be completed, and then update that with more detailed information as we go along. This can be really depressing work, as looking a year or two into the future makes it feel like the time ahead is so short, which leads you to thinking life is too short, etc, yada, yada.

There is a bunch of other small stuff that I do, like PR, interviews and booking travel. But all of that is not very interesting, and I think you should have heard enough by now to have a fairly good idea of what it is that I do all day!


Stay tuned for more! In two weeks it will be time for Jens, the other founder of Frictional, to talk about his past and what his job is all about.