Tuesday, May 7, 2013

Alpha Post-Mortem - CoSigners


Time slips through my fingers so fast that the semester ends before I even realize it. If there is one thing I wish I could get more of from this semester, it's time. However, no matter how busy I am, Co-Signers has always been my prime focus, and that's probably why I was haunted by other projects at the end of the semester. On the other hand, I did learn a lot through this craziness.

Anyway, let's cut to the chase.

What did I contribute?

Since there is a lot, I'll divide it into three parts: prototype, alpha and overall.

Prototype

·        Networking: Start interface, Player Connection Building, Chat Window & Server Register


      This was done in the first week of the project's development. I chose to finish this first because all of us were worried about whether we could pull this off, since it's a pretty ambitious project. Some even suggested that we finish the two ends first and then worry about the communication. I knew that if we did that, the networking would haunt us for the whole prototyping process. So I did my research on Unity networking, figured out how to register on a remote server and pass data between different network instances, and made a game-starting interface for networking that is still in use in the alpha build.
Even though it was just one week, the completion of the networking showed the whole team that we could do this, which cheered everyone up without me saying a word. It brought every member into "development mode", which in my opinion means far more than the task itself. I consider this one of my favorite moments of the semester.

·         Hacker: Node (& Graph) System

The first "real task" I had in this project, and the core of the entire game, since most of the other systems are built on top of it (or at least related to it).
This one was a result of teamwork. We hit a "little challenge" when building this system: since two nodes are connected on both ends, we are dealing with an undirected graph; but we do need to know whether each node is still connected to the source node when a node is released, so there is "direction" in that graph. How do you build direction into an undirected graph? Chris, Nikhil and I discussed this for a few days and came up with a solution, which is basically adding a marking mechanic on top of the graph. Although I'm the one who did all the "dirty work", I'm pretty happy with how this task was completed. We also picked up a habit of discussing tricky coding problems because of this, which benefits the team a lot.
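The marking idea can be sketched roughly like this (a minimal C++ illustration, not the actual Unity code; all names here are mine): a breadth-first pass from the source marks every captured node still reachable from it, so "direction" falls out of an undirected graph.

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Hypothetical sketch of the node graph's marking mechanic.
struct NodeGraph {
    int source;
    std::vector<std::vector<int>> adj;   // undirected adjacency lists
    std::vector<bool> connectedToSource; // the "marking" on each node

    // Re-run the marking pass, e.g. after a node is released or captured.
    void remark(const std::vector<bool>& captured) {
        connectedToSource.assign(adj.size(), false);
        if (!captured[source]) return;
        std::queue<int> pending;
        pending.push(source);
        connectedToSource[source] = true;
        while (!pending.empty()) {
            int n = pending.front(); pending.pop();
            for (int next : adj[n])
                if (captured[next] && !connectedToSource[next]) {
                    connectedToSource[next] = true;
                    pending.push(next);
                }
        }
    }
};
```

With a chain 0-1-2 and source 0, releasing node 1 leaves node 2 unmarked, which is exactly the "branch goes dark" behavior.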

·        Hacker: Node UI Logic & Timer

The Node UI was another thing I spent a lot of time on, although Chris was the one who actually made most of the UI, all those pretty little icons you can click. The actual logic that controls which icon to show in which state, which buttons to show when you click on a node, and what happens to the node when you click a button was quite painful to implement, especially since there are nine states on each node and each node can be a different type, which means different icons and buttons to show based on those states.
The timer was a little bonus task I gave myself. It's funny to see how people just got used to the look of it, considering we never seriously discussed how the timer should look. I guess that's my secret way of affecting game design.

·        Networking: Data Communication Between Two Players
Passing data from one player to another is not hard at all; there are a lot of ways to achieve that in Unity. What takes time is finding the best way to get what we want, and by best I mean using the least amount of data to synchronize the game. I believe that at some point we'll have to implement voice communication in our game, which would make network bandwidth very precious. That's why I took the time to do this, and I'm pretty happy with the result.

·        Point-men: Tracker bug & Camera bug on hacker’s side
I worked with Kiran on this because it concerns not only the point-men but also the hacker and network synchronization.
For the tracker bug, he made sure that the point-men are able to throw a tracker (& camera) bug onto a guard, and I made sure that an icon of the guard shows up on the hacker's screen.
For the camera bug, he made sure that there is a camera on the bug; I made sure that a camera icon is shown on the hacker's side when it's thrown, and that it's clickable and viewable.

·        Point-men: Door on hacker’s side

Who knew a door open/close/lock function could be so annoying to implement?
I worked with Vaibhav on this because it concerns not only the point-men but also the hacker and network synchronization. I made sure the hacker can't perform actions on an opened door, handled synchronizing door states through the network, and built the door's UI on the hacker's side. Vaibhav did the rest, including the door's animation and the actual state changes.




·        The Game: Control-Panel & Objective System
The control-panel is an easy task; it's basically just a collider in the point-men's world which turns a normal node on the hacker's side into a source node. I added the objective system myself because there was pretty much no goal for the hacker or the point-men by the time we wanted to play-test our prototype.

Alpha

·        The Game: Re-architect the Networking Interface, Node System, UI System & Timer
One of the main differences between a prototype and a real game is how organized and flexible your code is. That's why we spent about two weeks re-architecting the game. We had discussions across the whole team and between individuals, and a few major changes were made to networking, UI and guard AI. I spent most of my time refactoring the networking interface and the UI system. The communication between the two players has now been integrated into one interface class, which makes it much easier to debug and maintain. And the UI has been separated from the node system, which makes it easier to manage UI changes on nodes.

I consider this the greatest success of the alpha, because it was the result of collaboration across the whole team, and it's what brought the new teammates "into the team".

·        The Game: Jammed State for the Doors, Door Indicators & Grayed-out Buttons When Open
Apparently the door has become a main element of our game, which is sad, but necessary, since it's the most apparent way to make the two players communicate. We had a team discussion on how the door should behave in response to hacker, point-men and guard actions and came up with this design change. The actual task is simple; it only took me about two hours to finish.
                                    
·        Hacker: New Node State – "Inactive"
The hacker is known to be too hard and overwhelming, partly because of the security system, and partly because when a node gets released, the whole branch goes dark and you have to recapture all of it again. That's why we added a new "inactive" state to the node, so that the branch can be re-captured automatically when the hacker gets back the released node. I implemented this after a discussion with Nikhil, and I'm still worried about whether it makes the hacker too easy.

·        Hacker: Power-off node
I did this one with Vaibhav; I did the hacker part, Vaibhav did the point-men part.
It's a new feature that makes the hacker collaborate with the point-men, kind of like the control-panel. The difference is that the hacker can't actually capture this node unless the point-men activate the power panel.

·        Hacker: IR node functionality & IR radius UI

I did this one with Kiran and Chris. Kiran finished the system that calculates whether a guard is within the IR radius; Chris made sure that each IR node can have its own radius; and I did the node logic on the hacker's part, implemented the creation of an actual "radius" image when the player captures the node, and handled displaying the guard's icon on the hacker's map.
Personally I like this node function, but there is a concern that the IR is so overpowered that it may make the camera useless, which is something we really want to keep in the game.

·        Hacker: EMP node functionality & EMP Wave
I did this one with Vaibhav. He worked on the overall functionality and I made sure that the EMP was reflected correctly in the node system. I also handled the creation of the "EMP wave" image on the hacker map and disabled the hacker's ability to click any "EMPed" node during the EMP duration.



·        Hacker: Tutorial Cues
This is just a GUI thing that I did in 20 minutes on the hacker map for the play-testing at the open house. It didn't seem to help that much.



Overall

As you can see, I contributed a lot to this project. For the whole semester, I've been doing everything I can, whether for the team or on tasks, to move the project forward as fast as possible. It's safe to say that I know most of the code in the game, which makes me a great asset for debugging (and believe me when I say we spent a lot of time on this). Because of that, I became the "go-to guy" when someone had a problem with Unity or the code. Overall, I'm pretty happy with what I did this semester and with my role on this team.

What would I do different?

There was so much going on this semester; things got so crazy and intertwined that I came close to losing track of everything. So if there is anything I would change, the first thing would be to keep things more organized.
That being said, I'm very satisfied with everything I've done and learned this semester, so there is nothing I would change in that regard.
However, team- and game-wise, there are things that could have been done differently. At the end of the alpha, a few leftover bugs showed up during play-testing, mainly because we treated part of our implementation with a "let's do this for now and change it later, as long as it works" attitude. We kind of kept our eyes closed to that, since everybody was so busy and we didn't want to be too harsh on anybody. That's the wrong way of doing things, since we are professionals, or at least learning to be.
On the game side, since the development cycle is so short (one month for the prototype and one month for the alpha, with tons of other work during both), we hardly had any time to spare for play testing and iteration, which led us a little astray from thinking from the player's perspective. We added a few new features that we thought could solve the issue of the point-men having no influence on the hacker. But we neglected the fundamental problem that there is still not much for the point-men to do. I mean, the EMP and power-off panel are great features, but they are still things the point-men do FOR the hacker; where's the fun for the point-men? The candy is a potential fun element, but we can't know for sure, since right now it's a suicide weapon that attracts guards from half the map when you throw it. The thing is, the point-men must be able to improvise, and we have to create the "thrill" for both the point-men and the hacker. This should be the top thing on our to-do list, and I really regret that we couldn't do it earlier.

What did I learn?

Ø  Communication is the key.
We had a lot of conversations inside our team (besides stand-up); it became a habit for us to talk about the game and the programming after lunch on Tuesdays and Thursdays. This was really helpful, not only for figuring out solutions to certain problems, but also for knowing what everybody is doing and what's new in the game design.
Ø  Keep an open mind and listen to others' suggestions; they have value.
We all hate people who don't listen, and I sometimes do it myself. But hear what they have to say; there were many cases where someone's suggestion really helped me.
Ø  How to multi-task
You really have to figure that out when you have more than four projects to work on every week. Since it takes time for me to get "in the zone", I always evaluate how much time each project needs, set a separate schedule for each, and focus on only that project during its time slot. I also leave some of the schedule open in case I don't finish one of them in time, which does happen from time to time.
Ø  It’s nice to help out others.
By nice I mean helpful to myself: sometimes I get to revisit what I've learned, sometimes I get to see how others do their work, and what's more fun than solving problems?
Ø  Keep it professional
We shouldn't let our personal feelings get in the way of pushing the project forward. There are cases where I really wanted certain tasks because they were challenging and interesting, but chances are your teammates are also interested in those tasks. Sometimes it's nice to let go if you already have a handful of things to do.
Ø  Always look at the future
This is about programming: when you design a system, you should really take into consideration what could happen to that system and what functions it should support in the future. It can make life a lot easier.
Ø  You never know what problems your game has until you play-test.
Especially with networking; things can be very different between playing alone and playing over the network.

Tuesday, April 30, 2013

HELL, IT'S ABOUT TIME!

Finally, we got to build something cool with our renderer.

To me, "cool" is not about how realistic things are, but how artistic they are. That's why I built this, for your consideration:



I spent some time on the swirl effect on that giant sphere in the middle, which turned out very sweet. Like many cool effects, this one requires post processing (in this case the opaque texture). Once I had the texture and the data required for the texture coordinate look-up, the rest is all in the pixel shader. This is the core part of the effect:


float2 center = float2(0.5, 0.5);
float2 toUV = texcoord_screen - center;
float distanceFromCenter = length(toUV);
float2 normToUV = toUV / distanceFromCenter;
float angle = atan2(normToUV.y, normToUV.x);

// twist more the farther from the center, oscillating over time
angle += distanceFromCenter * distanceFromCenter * (20 * sin(g_secondsElapsed / 10));
float2 newUV;
sincos(angle, newUV.y, newUV.x);
newUV *= distanceFromCenter;
newUV += center;

float4 c1 = tex2D(g_textureSampler, newUV);
float4 c2 = tex2D(g_textureSampler, texcoord_screen);
float4 textureSample = lerp(c1, c2, 0.3); // this is for the partial transparency

As you can see, I set the center to (0.5, 0.5) (the center of the screen) instead of the center of the sphere (which could be done by passing in the sphere's position and the world_to_view and view_to_projected transform matrices). I didn't bother because the sphere is big enough that there's no point. I was planning on making that sphere a black hole to parallel the moon behind it, but it didn't look good, since the effect is too small and a black hole needs something more than just a swirl effect.
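For reference, if the sphere's position were ever passed in, the clip-space-to-UV mapping could look something like this (a hedged C++ sketch with names of my own choosing; the inputs are assumed to already be transformed by world_to_view and view_to_projected):

```cpp
#include <cassert>

// Map a clip-space position to post-processing texture coordinates:
// perspective divide, then NDC [-1,1] -> UV [0,1] with Y flipped
// (texture (0,0) is the upper-left).
struct Float2 { float u, v; };

Float2 clipToUV(float x, float y, float w) {
    float ndcX = x / w;
    float ndcY = y / w;
    return { ndcX * 0.5f + 0.5f, -ndcY * 0.5f + 0.5f };
}
```

A point at the center of the view maps to (0.5, 0.5), which is exactly the hard-coded center used above.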


As for other objects, it's nothing we haven't done before. 

  • The rook and the ground are done using an environment map. 
  • The knight has a normal map with a soft intersection effect. 
  • As for the moon that you can't see, it uses normal diffuse lighting with a normal map. 
  • The background is a simple plane with lighting disabled in the fragment shader.
  • The UI is just ... UI, plus a vignetting effect.
Controls:
WASD - to move all the objects.
UOJKIL - to move the directional light to control the shadow.

Anyway, enough talking; let's enjoy this image for another minute and call it an end.

Link to the code:






Graphics Programming 12

Finally! Shadows!

To be honest, after doing the depth technique for the soft intersection effect, this one is not that hard. All we need to do is add another pass for the shadow map. The process is almost the same as the depth pass; the only difference is that instead of using the world-to-view and view-to-projected transforms, we use world-to-light and light-to-projected transforms, since the "depth" from the light's view is how we determine which parts of an object should be lit.

Again, most of the time was spent on debugging. Thanks to PIX, I managed to locate most of the bugs pretty fast.

Here is one of the core parts of the shadowing:

float shadowModifier;
{
    // perspective divide to get NDC in the light's projected space
    float3 position_projected = i_lightProjectedPosition.xyz / i_lightProjectedPosition.w;
    // NDC [-1,1] -> shadow map texture coordinates [0,1], with Y flipped
    float2 texcoord_shadow = (position_projected.xy * float2(0.5, -0.5)) + 0.5;
    float depth_current = i_shadowDepth;
    float depth_previous = tex2D(g_shadowMapSampler, texcoord_shadow).x;
    depth_previous *= g_farClipPlaneDistanceForShadow;
    // lit when this pixel is not behind the closest depth the light saw;
    // the +1.0f bias avoids shadow acne from self-shadowing
    shadowModifier = depth_current < depth_previous + 1.0f;
}

float3 diffuseLighting = lightColor * lightIntensity * strength + g_directionalLightColor.xyz * directionalLightStrength * shadowModifier;
float4 lighting = float4(diffuseLighting, 1.0) + g_ambientLightColor;

Since I have two lights in the scene, I made shadows influence only the directional light, which makes the shadow less "black" and more real.

Here is a screenshot of the scene:


To Control the directional light:
U - back, O - forward, J - left, L - right, I - up, K - down.

WASD - controls all the objects.

The shadow map in PIX:



Link to the code:
Graphics 12

Thursday, April 11, 2013

Graphics Programming 11

Well, 11 is just a small tweak of 10.

The whole post-processing thing is just rendering the entire screen into a texture and then doing tricks on it.
The key is to use a plane and set its coordinates to match the screen coordinates. As for the UI, it's the same: you can just change its vertex positions in the shader using uniform values based on what you set in the config file.
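A minimal sketch of such a full-screen plane (illustrative C++, not my actual vertex format): its positions are already in normalized device coordinates, so the vertex shader can pass them through untransformed, and the UVs flip Y since texture (0,0) is the upper-left.

```cpp
#include <cassert>

// A full-screen quad: positions in NDC [-1,1], texture coordinates in [0,1].
struct ScreenVertex { float x, y; float u, v; };

const ScreenVertex fullscreenQuad[4] = {
    { -1.0f,  1.0f,  0.0f, 0.0f }, // upper-left
    {  1.0f,  1.0f,  1.0f, 0.0f }, // upper-right
    { -1.0f, -1.0f,  0.0f, 1.0f }, // lower-left
    {  1.0f, -1.0f,  1.0f, 1.0f }, // lower-right
};
```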

Here is the screenshot of assignment 11:


And here is the code:
Graphics11

Tuesday, April 9, 2013

Graphics Programming 10

Well, this one was supposed to be easy, but it took longer than I thought.

The main goal is to create a depth pass that stores the opaque bucket's depth information in a depth texture, which can then be used by the fragment or vertex shader of the "main pass" to achieve things you can't do with only one pass. In our case, that's soft intersection, which takes the current depth of the target object, compares it to the depth stored in the texture, and changes the alpha based on the difference.

Another thing: we calculate the depth value by taking the z value of the view position and dividing it by the distance to the far clip plane. I did this in the vertex shader and let it be interpolated, instead of doing it per pixel in the fragment shader, to save some performance. It seems to work, so I'm keeping it.
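The two pieces described above can be sketched like this (plain C++, names mine, and the fade range is an assumed tuning parameter, not a value from my actual shader):

```cpp
#include <algorithm>
#include <cassert>

// Depth written by the depth pass: view-space z normalized by the far clip plane.
float normalizedDepth(float viewZ, float farClip) {
    return viewZ / farClip;
}

// Soft intersection: fade alpha out as the pixel's depth approaches the opaque
// depth already in the texture. fadeRange controls how soft the transition is.
float softIntersectionAlpha(float pixelDepth, float sceneDepth, float fadeRange) {
    float diff = (sceneDepth - pixelDepth) / fadeRange;
    return std::min(std::max(diff, 0.0f), 1.0f); // saturate
}
```

At an intersection the two depths are equal and alpha goes to zero, so the hard seam disappears.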

The code took me 2-3 hours to write and another 3 hours to debug. It's really frustrating to find that nothing works when you first build and run your code, and even more frustrating when you spend an hour tracking down a problem only to find a typo in the freaking config file that you wrote.

But this was the first time I appreciated PIX; it did make my life much easier. Debugging in PIX is a much faster way to find out whether your fix works, and it's very satisfying to see your program move forward bit by bit as you sequentially track down all those bugs.

Anyway, it's great to see things working.

Here is the screenshot:

Here is the screenshot of the depth texture (depth is stored in the red channel):

The Actual depth buffer:
Finally, the code:
Graphics10

Friday, March 22, 2013

Graphics Programming 9


Screenshot:

Well... the sphere (actually a soccer ball on the top-right) is supposed to have a wobble effect using the render target down here. It doesn't look like it, because the algorithm for it in the shader is too crappy for now.

I'll change it when I get some time.
 
Pix:


Code:
Graphics09

Monday, March 4, 2013

Team Hack n' Hide - First Month

Producer:
AJ Dimick - Scrum Master
Zac Truscott - Lead Producer
Andrew Witts - Lead Designer
Engineer:
Vaibhav Bhalerao - Thief Engineer
Kiran Rajachandran - Thief Engineer
Chris Rawson - Hacker Engineer, Tech Artist
Nikhil Raktale - Hacker Engineer
Miao Xu (Max) - Hacker Engineer, Networking Engineer

We made it! After one month of crazy work, we finished the prototype of our thesis game Co-Signers and passed the first gate! The team really glued together, since everyone loves the idea and we have all devoted so much to this game.

Co-Signers is a two-player couch co-op game. One player plays as the hacker, who operates on a 2D screen. His main goal is to hack into the computer network to help the other player by unlocking/locking doors, hacking into cameras, etc. The other player plays as the thief, who follows the hacker's guidance, avoids guards, infiltrates the highly secured building, and retrieves the goal. We built a system that lets the two players help each other out, and the main goal is to build trust and tension between the two players.

See more details on our team blog! : Team Hackn'Hide

Now that I finally have some time to look back, it has really been a crazy ride!

We had our physical play-testing, whose goal was to make sure the core of our game - communication between the two players - works as it should and is fun enough to play. I'm the camera in the video.



We spent a lot of time brainstorming to make sure both players could have a fun experience. Here is a screenshot of the whiteboard from one of our discussions:



So, what was my role in the whole development process? Basically, I finished all the networking for the prototype and also part of the hacker system. I'll write two other posts to talk about these, since networking in Unity is really painful and the graph system we built for the hacker is really fascinating! So excuse me for not going into too much detail here.

The presentation is another thing we spent a lot of time on, since this is all that matters at the end of the day. Chris and A.J. put together a really cool trailer for our game, which truly helped us on presentation day. Here is the link:

                                     

Our producers really did a great job on presentation day, even though a minor issue happened (the second projector for our live demo somehow stopped working properly, which really freaked us out). Anyway, here is the presentation, enjoy!



To be continued...

Graphics Programming 8

Finally, normal maps! It's always nice to know how to make things look realistic without using ten thousand polys. And this is how we do it:

In addition to the diffuse map, we use a normal map, which gives you the "bump" information (similar to a height map) used to calculate lighting, so the surface actually feels as "bumpy" as it should.

Sounds easy enough; so we just store the normal information in the normal map, pass it into the shader, and that's it? Of course not.

The normals you store in the texture are fixed, meaning they face only one direction, and this can be problematic. What is that supposed to mean? Well, imagine you have a cube whose six faces all use the same texture. If you think about it, the normals stored in the texture can only fit one of those faces. How do you make them fit any face of the cube? You need some way to convert the normals in the map from "texture space" to model space.

The way we do this is with the "TBN" rotation matrix: we can get the right normals using the tangent, bitangent and normal of each vertex on the mesh, and we can get this information from the mesh in Maya. After we get the normal, the lighting part is the same as before.
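The TBN idea can be sketched like this (a minimal C++ illustration, not the actual shader; the vector names are mine): unpack the stored normal from the texture's [0,1] range back to [-1,1], then rotate it out of tangent space using the vertex's tangent, bitangent and normal as the rows of the TBN matrix.

```cpp
#include <cassert>
#include <cmath>

struct Float3 { float x, y, z; };

Float3 scale(Float3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
Float3 add(Float3 a, Float3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Convert a normal stored in a normal map from tangent space to model space.
Float3 tangentToModel(Float3 stored, Float3 T, Float3 B, Float3 N) {
    // unpack from texture range [0,1] to vector range [-1,1]
    Float3 n = { stored.x * 2 - 1, stored.y * 2 - 1, stored.z * 2 - 1 };
    // the TBN matrix's rows are the tangent-frame basis vectors
    return add(add(scale(T, n.x), scale(B, n.y)), scale(N, n.z));
}
```

A "flat" texel (0.5, 0.5, 1), i.e. straight up in tangent space, comes out as the vertex normal itself, which is the sanity check you'd expect.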

The implementation is fairly easy; nothing worth mentioning.

The screenshot:



Source:
Graphics 08


Friday, March 1, 2013

Graphics Programming 7

So this one is fairly easy; it's all about transparency effects, just a change of render states. We still need to add a few variables to the effect and material files, but it's trivial compared to 4, 5 & 6.

So, as always, about those effects:
Partially Transparent:
This makes your materials transparent based on the alpha you set. To achieve it, you have to draw those objects back to front (based on distance from the camera). The reason is that, to make a pixel "transparent", I actually need to know what color is already on the screen, then do the calculation based on the alpha that I set.
Binary Alpha:
Basically, you set a threshold that determines whether the alpha value in the texture counts as 0 or 1; in other words, completely opaque or completely transparent. This way you can "cut out" the irrelevant parts of the texture, which is pretty handy in many cases.
Additive:
Add the color info in this texture to what's already in the scene. It's kind of like binary alpha, but instead of completely overwriting the original pixel, it adds to it.
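The three modes can be modeled as plain per-channel math (the GPU applies the same equations via render states; this is an illustrative C++ sketch, with names of my own):

```cpp
#include <cassert>
#include <cmath>

// src = incoming pixel color, dst = color already on the screen (one channel).

float partialTransparent(float src, float dst, float alpha) {
    return src * alpha + dst * (1.0f - alpha); // standard alpha blend
}

float binaryAlpha(float src, float dst, float alpha, float threshold) {
    return alpha >= threshold ? src : dst;     // alpha test: keep or discard
}

float additive(float src, float dst) {
    return src + dst;                          // add on top of the scene
}
```

Only the first equation depends on what's already on screen in a way that makes draw order matter, which is why the partially transparent bucket needs the back-to-front sort below.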

The tough part of this assignment is rendering the partially transparent entities. The sorting algorithm is easy, but because of the way I did the sorting in Graphics 5 & 6 (I sorted effect->material->entity on load using a manager of sorts), I can't sort by distance from the camera per effect. I had to get the entity list again the way I did in Graphics 4 and do the sorting on that, which actually took me some time.
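The sort itself can be sketched like this (illustrative C++ with a stand-in Entity struct, not my actual renderer types): order the transparent entities by squared distance from the camera, farthest first.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Entity { float x, y, z; };

// Back-to-front sort for the partially transparent bucket.
void sortBackToFront(std::vector<Entity>& entities,
                     float camX, float camY, float camZ) {
    auto distSq = [&](const Entity& e) {
        float dx = e.x - camX, dy = e.y - camY, dz = e.z - camZ;
        return dx * dx + dy * dy + dz * dz; // squared distance avoids a sqrt per entity
    };
    std::sort(entities.begin(), entities.end(),
              [&](const Entity& a, const Entity& b) { return distSq(a) > distSq(b); });
}
```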

Some of the textures I used are from this website: textures

Screenshots from PIX:




Screenshot for the game:

Control:
To control the point light:
use "I" - up, "K" - down, "J" - left, "L" - right, "U" - backward, "O" - forward;
To control the camera: Arrow Keys
To control the box: WASD.

Link to the code:
Graphic 07

Friday, February 22, 2013

Graphics Programming 5(Second Half)&6

For assignment 5, I didn't do the sorting; instead I added an effect manager and a material manager to handle those things. I linked all the materials to the effects they care about, and did the same with the entities and materials. This adds a bit of overhead on load, but reduces the performance hit in every loop. Anyway, I thought it was the right thing to do.

Here is the screenshot of the PIX:

As for assignment 6, I changed the Maya exporter to export exactly the same file format as before, so that I can reuse the parser I already wrote. Here is the screenshot of the mesh.

File parsing in the MeshBuilder did take me some time (not the parsing part, since it's exactly the same as before); I spent a few hours trying to figure out how to debug the MeshBuilder. It turns out I was doing it the right way all along, except there was a space in the file directory I passed to the command, which the console of course didn't recognize. Once I was able to debug, it took me a few minutes to solve the issue (again, a stupid mistake).

The binary files are the easy part. In order to get all the mesh info in three reads, I created a new struct for the extra information I need (vertex count and index count), and load it something like this:


// load the binary file: first the counts, then the vertex and index data
myFile.read((char*)&m_info, sizeof(VertexFormat::info));
m_vertexData = new VertexFormat::data[m_info.vertexNumber];
m_indexData = new unsigned int[m_info.indexNumber];
myFile.read((char*)m_vertexData, sizeof(VertexFormat::data) * m_info.vertexNumber);
myFile.read((char*)m_indexData, sizeof(unsigned int) * m_info.indexNumber);

Anyway, I'm able to load any shape from Maya into my renderer with my own file format, which is kind of exciting.

Here is the screenshot of the scene:



Control:
To control the point light:
use "I" - up, "K" - down, "J" - left, "L" - right, "U" - backward, "O" - forward;
To control the camera: Arrow Keys
To control the box: WASD.

Link to the code:
Graphic 06








Friday, February 15, 2013

Graphics Programming 4&5(First Half)


So this one took a bit of time to finish. Since we are adding scene files, entity files, materials and effects, we are kind of forced to refactor the code again and write a bunch of parsers. Which is pretty good, of course, since we get the chance to make our own file formats and develop a good sense of what data we need to pass to the renderer. "Data-driven" is what this is, and it's important because it's commonly used in real applications.

On the other hand, the shader part is actually not that hard. I added an ambient light, plus attenuation and color for my point light, and also specular lighting on a few materials. For the diffuse color, I sample the texture and multiply it by the fragment color from the vertex shader.

Then I get the light direction L and the fragment's normal N, normalize L, take the dot product to get the strength of the point light, and clamp it:

strength = clamp(dot(L, N), 0 ,1);

After that, I calculate the light attenuation. First I compute the distance d between the point light and the fragment position, then get the light radius r from the scene file, then apply 1/(1 + (d/r))^2 to get the attenuation. (More information about the equation: here)
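As a sanity check, the attenuation equation in plain C++ (a sketch, not the shader code):

```cpp
#include <cassert>

// Attenuation from the post: 1 / (1 + d/r)^2, where d is the distance from
// the light to the fragment and r is the light radius from the scene file.
float attenuation(float d, float r) {
    float t = 1.0f + d / r;
    return 1.0f / (t * t);
}
```

At the light itself (d = 0) the attenuation is 1, and at one radius away (d = r) it has already fallen to a quarter.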

So the diffuse lighting = light color * light attenuation * strength.

After that comes specular lighting: first I get the reflection vector R, then the view vector V from the fragment position to the camera position. Then I take the dot product of the two, saturate(dot(V, R)), and get the specular term by raising it to an exponent. (We also multiply by strength and attenuation, because you shouldn't see any specular light on the back side of the mesh.)

So the specular lighting =  power(dotproduct, exp) * strength * attenuation.

So the final output is diffuseColor * (diffuse lighting + ambient light) + specular lighting.
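The specular and final-combine recipes above can be checked with a single-channel C++ sketch (names mine; dot(V, R) is assumed to be computed from already-normalized vectors):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Specular term: saturate(dot(V, R)) raised to an exponent, then scaled by
// the diffuse strength and attenuation so the back side stays dark.
float specular(float dotVR, float exponent, float strength, float att) {
    float s = std::min(std::max(dotVR, 0.0f), 1.0f); // saturate
    return std::pow(s, exponent) * strength * att;
}

// Final output = diffuseColor * (diffuse lighting + ambient) + specular.
float finalColor(float diffuseColor, float diffuseLighting,
                 float ambient, float spec) {
    return diffuseColor * (diffuseLighting + ambient) + spec;
}
```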

Here is the screenshot of the actual effect:



Here is the screenshot of the PIX:


Here is the source code:
Graphics04_Source

Control:
To control the point light:
use "I" - up, "K" - down, "J" - left, "L" - right, "U" - backward, "O" - forward;
To control the camera: Arrow Keys
To control the box: WASD.


Thursday, January 31, 2013

Portfolio Base


Since I don't really have a place to show all the stuff & prototypes I've done (which I obviously should have), I figured I'd just put all of them here.

Hack n' Hide:
TeamBlog:
http://hacknhide.blogspot.com/

Reveal:











Reveal_UnityPrototype

Control: 
Turn On/Off Flashlight - left click;
Melee - F;
Turn energy into battery - right click;
Move - WASD.

Goobles:









Goobles_UnityPrototype

Control:
Placing turret - 1;
Placing landmine - 2;
Placing sticky paper - 3;
Shooting - right click.

Converse:












Converse_playableLink

Control:
Click.

Crusaders:














Crusaders_XNAPrototype


Control:
Move - WASD


Monty's Quest:















Monty's Quest_MOAI_SDK_Prototype

Control:
Jump - click.

Friday, January 25, 2013

Graphics Programming 3


Since I'm on fire today, I thought I might as well finish assignment 3 :p

The concept is simple:

Texture coordinates: 
The whole point of texture coordinates is to mark coordinates on each vertex so that each vertex knows what part of the texture it will render. That way, when the coordinates get interpolated into the fragment shader, each fragment knows exactly which coordinate to use on the texture, and thus gets the color info from that point. That's how a triangle knows which part of the texture to render.

One thing to note, though: the texture coordinates of a texture are actually "flipped", meaning the upper-left is 0, 0 instead of 0, 1 (so the lower-left is 0, 1).

Point light & diffuse light:
In graphics, we assume diffuse light scatters in all directions, and the "strength" of the light is determined by the light source (in our case, a point light). 
How do we do that? 
First: get the light direction vector using the light's position minus the fragment's position. (So we know we're going to do it in the fragment shader. Well... we could actually calculate the lighting in the vertex shader and let the graphics card interpolate it, but the result would be worse than per-pixel lighting, and considering how strong graphics cards are nowadays, we can afford it.) 
Second: get the normal vector from the mesh and transform it to world space (that's how I prefer it, anyway).
Finally: take the dot product of these two vectors, which is the cosine of their angle, which is exactly the strength of the light at that point.

Anyway, it's all simple 3D math.
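The three steps above boil down to a few lines. A minimal C++ sketch (not the shader itself, just the same math with a hypothetical `Vec3` struct):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Diffuse factor for one fragment: cosine of the angle between the
// surface normal and the direction toward the light, clamped to zero
// so surfaces facing away from the light receive nothing.
float diffuse(Vec3 lightPos, Vec3 fragPos, Vec3 normal) {
    Vec3 lightDir = normalize(sub(lightPos, fragPos));
    float d = dot(lightDir, normalize(normal));
    return d > 0.0f ? d : 0.0f;
}
```

A surface facing the light head-on gets a factor of 1, a surface at 45 degrees gets about 0.707, and anything facing away gets 0.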

So, this is how it actually looks:


And the texture resource in PIX:


To control the point light:
use "I" - up, "K" - down, "J" - left, "L" - right, "U" - backward, "O" - forward;
To control the camera: Arrow Keys
To control the box: WASD.

Here is the code:
Graphics03_Source


Graphics Programming 2


This week was kinda tough. The goal of the assignment itself wasn't hard, but refactoring the code (the render function simply gets too long if we don't) actually took most of my time.

As always, I'll start with the theory for future reference:
This assignment is all about showing a 3D mesh on the screen, which is, in fact, projecting everything in 3D space onto a 2D plane. To be able to do that, you need three matrices:

ModelToWorld Matrix: Transforms the mesh from model space (centered on the origin) to world space, where the mesh actually is. To do that, you take the transform information from your code (position, rotation, even scale) and build the final matrix by multiplying the rotation matrix, the translation matrix, and so on. (In principle, you can get any transform you want by continually multiplying matrices; it's all linear algebra.)

WorldToView Matrix: Transforms everything from world space based on the camera's transform (position and rotation, to be exact), so that everything is placed relative to our point of view. To do that, you multiply the ModelToWorld matrix by the INVERSE of the camera's transform matrix, since everything moves opposite to the camera: for example, when the camera moves left, everything moves right in the viewport.

Projection Matrix: Right now what we have is a 3D scene, but what we see on the screen is 2D, so we need a matrix to project everything from 3D onto a 2D plane. A projection matrix does that, and DirectX provides functions to build one for you (D3DXMatrixPerspectiveLH() or D3DXMatrixPerspectiveFovLH()).

Now back to the code:
I actually did this assignment twice. The first time I just went along and started refactoring right at the beginning. After I finished everything, the cube just wouldn't show on the screen. I debugged and rechecked the code for 6 hours, did everything I could, and still couldn't figure out what the reason could possibly be, even now. That really pissed me off.


Here is the PIX debug screen for that one:



Everything looks perfect: the vertex buffer and index buffer did get passed into the vertex shader, the preVS and postVS are right on track, and all the matrices are loaded into the vertex shader. It's just that nothing shows up in the viewport. I even tried disabling backface culling, and it still doesn't work. My guess is that somehow the data isn't making it into the fragment shader, but I can't figure out why.

So I restarted, again from the last assignment. This time, every time I made a change to part of the code, I debugged it and made sure everything still showed properly on the screen. It worked, and it didn't take me much time since I simply reused most of the code from the first attempt.

Anyway, the box:


PreVS:


PostVS:


Input instructions:

  Move Camera: Arrow Keys;
  Move Box: WASD

Link to the code:
Graphic02_WorkingCode

Here is also the link to the first one that doesn't work if anyone wants to check it out:
Graphics02_BrokenCode

Wednesday, January 23, 2013

First two weeks - the game pitch


Beginning this semester, we are going into full production on a complete game until we graduate. Not only would the game be submitted to the IGF, it could also be the first published title we have. So it's kind of a big deal.

The first two weeks were the pitching phase. Anybody could form a team and pitch their idea if they wanted. By the end of the first week, there were more than ten pitches going around, many of them very brilliant and innovative. Choosing one of them became quite a painful task.

And the truth is, the choosing process (copied from EA) is not quite what I imagined it would be. I like the fact that everybody can choose the game they want to make, which is the right thing to do and quite reasonable, to be honest. But letting the process go on for a week, plus the rule that each team must have at least 5 people, kind of shifted everything away from its original purpose. Politics came in; people started to compromise or get influenced by others. Some chose the strong team they thought actually had a shot; some just let go and chose the people they wanted to work with instead of the game. A lot of brilliant ideas got killed because not enough people supported them. But even so, I believe most people are pretty happy about where they ended up. I guess that's good enough.

For the next cohort, my suggestion would be to let people vote for the two games they like the most right after the pitching and feedback are finished (two, in case everybody votes for what they pitched, and they would), then pick the 5 or 6 games that got the most votes and regroup after that.

Anyway, I'm very glad that I'm working on the game I like the most. The first gate is in three weeks, and we'd better start working on the prototype.