Testing NVIDIA 3D Vision (stereoscopic gaming)

I spent this weekend at my parents' house, where my brother lives. This wouldn't have anything to do with the topic of this post except that my brother is 35 and has a well-paid job, and under these circumstances he likes to spend money on all kinds of gadgets, even ones he will never use, just for the pleasure of showing off.

Some time ago he decided to purchase a nice pack from NVIDIA that bundles a monitor, a 3D card, and stereoscopic glasses. They sell it as the next experience in video games, and I wanted to know whether it is really an improvement.

First of all I had to battle the drivers: with the latest one it didn't work. The "NVIDIA Stereo Controller" driver was not found, so I had to download a previous driver and install just the USB driver component (I document this because maybe somebody will find it useful).

Once it worked I tested some demos that looked nice, but I wanted to see it in games, and I did, and I have somewhat contradictory impressions:

The technology itself is nothing new: the glasses are just LCD shutters synchronized by an IR emitter connected through USB. You know how it works: the computer shows the frame for the left eye on the screen and synchronizes the glasses to block the right eye, then switches fast enough that each eye perceives a different image.

This technique was easy with the old CRT monitors, but TFTs can't switch images that fast, so on current TFT monitors the other eye can still see the image that was rendered for the first eye.

This is the reason the pack comes with a monitor: this monitor (a Samsung) can run at 120Hz, so there is no problem driving the active glasses.

And about the glasses: I was hoping for some improvement, but no. All glasses based on LCD shutters have the same problems, and these are no exception.

First, the brightness. Each eye watches only half of the frames, and when it is not supposed to see the screen it is covered by a darkened shutter. That means the perceived brightness is half the monitor's brightness, and you feel it: if you are used to bright monitors, this is like playing with your brightness set to 50%. Annoying. Of course they could build special monitors with double the brightness, but that brings us to the next problem:

Ghosting. When working with LCD shutter glasses you have to ensure that the darkened eye can't see anything on the screen; otherwise the user sees strange objects floating around that weaken the stereoscopic effect.

So where is the great improvement here? Well, it is not a hardware improvement, it is a software one: the game doesn't need to be coded to work in 3D, because the driver can do it by itself.

That's a great step forward, and I can tell you it is not easy. The driver somehow has to understand all the steps of the rendering process, determine which parts need to be redone, readjust the camera position, and render the frame for the other eye. And it works! But not perfectly, because rendering pipelines are composed of lots of steps and a driver cannot fully understand all of them. That's why DirectX has some specific features meant to take advantage of this technology.

Games developed with 3D in mind will work perfectly; the others will probably show horrible glitches, or will have inter-ocular distances that make your brain explode. That's another big point: when working in stereo you have to set the distance between the cameras used for each eye, and the distance to the focus point. If you don't set these distances right, the sensation is annoying or you simply lose the 3D effect. For instance, I've been testing Colin McRae: DiRT 2; the game looks amazing in 3D and the feeling of speed is awesome, but you can't play it using the in-car camera, because the inter-ocular distance is too big.
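
To make the geometry concrete, here is a minimal sketch of deriving the two eye cameras from a single game camera. Everything here is an illustration: the names are my own, and this naive "toe-in" model (both eyes aimed at one focus point) is the simplest way to picture it, not what the driver actually implements.

from math import sqrt

def _normalize(v):
    length = sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def stereo_cameras(eye, target, up, separation, convergence):
    """Return (left_eye, right_eye, focus) for a mono camera at 'eye'.

    separation:  distance between the two virtual eyes (too big = headache).
    convergence: distance along the view direction where the eyes meet;
                 objects at this depth appear at screen level.
    """
    forward = _normalize([t - e for t, e in zip(target, eye)])
    right = _normalize(_cross(forward, up))
    half = separation / 2.0
    left_eye = [e - r * half for e, r in zip(eye, right)]
    right_eye = [e + r * half for e, r in zip(eye, right)]
    focus = [e + f * convergence for e, f in zip(eye, forward)]
    return left_eye, right_eye, focus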

I guess what the driver uses to determine the distance is probably something like ((far_plane - near_plane) / 2), and as I said it works, but not under all circumstances. Another game I tested was Torchlight: I couldn't find a single glitch (and the game is not meant to work with 3D Vision); the only annoying thing was the cursor, which floated over the screen instead of sitting at ground level.

So in the end, what is my impression? It is hard to say. It really looks 3D, and it really improves the gaming experience, but at the same time it feels like a natural evolution of games, and you are so used to playing in 2D that you don't really miss the depth perception when it's gone. I think this technology needs to walk hand in hand with head-tracking technology; otherwise it is just expensive eye candy.

Tweeting to say that I tweet

When you tweet "I am getting married," you automatically stop getting married, and you enter the meta-life. When you try to narrate live, the only fact that results is the act of narrating itself. Narration, like Ferlosio's happiness, can only be retrospective. All those tweets carry one single message: I am tweeting.

Although I don't agree with the context, I was struck by this take on the art of tweeting, written by Arcadi Espada.

pygame makes me waste my time

Today I had some free time, so I decided to get back to Python programming and add some features I had in mind for the pyncel app.

But then I discovered an annoying bug: if you resize the window, the mouse messages from pygame don't work properly. They stay limited to the old size, so if the new window is bigger, the moment the mouse moves outside the old bounds pygame stops sending messages, and when it does send them, the position is clamped to the old border.

I tracked the problem down to pygame and discovered that it is just a bug, nothing else to say. I guess the problem is some kind of 'if' statement inside the pygame code, because I've been working with SDL for a long time and it never gave me this problem.
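
A workaround worth trying is to recreate the display surface when the VIDEORESIZE event arrives, so SDL learns about the new size. A minimal sketch (the flags are an assumption; use whatever your app already passes to set_mode):

import pygame

pygame.init()
flags = pygame.OPENGL | pygame.DOUBLEBUF | pygame.RESIZABLE
screen = pygame.display.set_mode((800, 600), flags)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.VIDEORESIZE:
            # Recreating the surface tells SDL the new size, so mouse
            # coordinates stop being clamped to the old window bounds.
            screen = pygame.display.set_mode(event.size, flags)
    pygame.display.flip()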

The weird thing is that I've been trying to find other people commenting on this bug and there is no trace of it. For such an obvious error, I'm puzzled. Maybe it's a problem that only shows up on Windows 7.

So this is why I write it down here, with the keywords to help the search engines: pygame VIDEORESIZE mouse border problem error. I hope somebody finds this post and stops wasting time with pygame.

No improvements to my app today.

DDS in Python

Due to some work duties I've been messing around all day with DDS files and Python. DDS is a file format capable of storing compressed textures. The interesting thing is that the compression algorithms in DDS files are supported by current graphics cards, which means you don't have to decompress them before sending them to VRAM, as opposed to what you do with JPGs (load the file, decompress it, and send it to VRAM).

There are more pros, mipmaps for instance: with regular textures the driver is in charge of creating the mipmaps when uploading a texture, and that is slow as hell; indeed, most of the time spent uploading a texture goes into building the mipmaps. DDS files can store them precomputed.

And there is still another advantage: the textures stay compressed in VRAM (less memory) and can be sampled without decompressing the whole texture, which means the internal buses of the card are freer, and that translates into better performance.

OK, so what about DDS in Python? Well, the sad news is I couldn't find anybody who had made a DDS file loader; PIL doesn't support them. So maybe I will get interested in adding DDS file support to my little framework.

It is not hard, I just need to read the header, but supporting DDS fully means lots of work.
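
As an illustration of the easy part, here is a minimal sketch of parsing the DDS header (the layout is the standard DirectDraw surface header; validation is kept to a bare minimum):

import struct

def read_dds_header(path):
    """Parse the fixed 128-byte DDS prefix (4-byte magic + 124-byte header)."""
    with open(path, "rb") as f:
        data = f.read(128)
    if data[:4] != "DDS ":
        raise Exception("Not a DDS file")
    # The first seven header fields are little-endian 32-bit integers.
    size, flags, height, width, linear_size, depth, mipmaps = \
        struct.unpack_from("<7I", data, 4)
    fourcc = data[84:88]  # ddspf.dwFourCC, e.g. "DXT1" or "DXT5"
    return {
        "width": width, "height": height,
        "mipmaps": mipmaps, "fourcc": fourcc,
        "linear_size": linear_size,
        "data_offset": 128,  # compressed pixel data starts right here
    }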

But once again: too much wrapping, not enough creation. So this feature is on hold until I really need it.

Today I found this interesting work by Dasol: it is a procedural spiral generator, and he also uses some cellular automata for the background. I felt like he beat me to it somehow, because that was more or less the kind of thing I tried to achieve when coding my cellular automaton, but I didn't spend much time polishing it or giving it some kind of meaning. The automaton is cool too, and the way he renders the board (using a texture for every cell, which looks great when you zoom in) is clean.

Anyway, ideas for the pyncel app:

  • create a SceneGraph
  • refactor the canvas to make every canvas more like a SceneEntity of a SceneGraph
  • some tools to move and rotate objects
  • create a background texture loader
  • create an internet image loader

Sounds boring, but the results could be nice. So no screenshots or code for today.

Hackpact Day 10: Bug fixes and multicanvas

Today brought some minor bug fixes, and I also extended the app to support more than one canvas at the same time. The idea is to overlap them as layers, but right now I just use them to extend the canvas at the sides, which should be the same as having a bigger canvas. I want separate canvases, though, so that in the future I can have some kind of infinite canvas that creates new ones just by painting outside the current one.

But it didn't work: I could only paint on the first canvas; on the others nothing appeared. I was convinced it was a problem with the FBOs in the RenderTexture, so I kept going over all the OpenGL code without much luck. Then today I realized the problem was the brush: after painting on the first canvas, the internal variable storing the last paint time was updated, so when it had to paint on another canvas it blocked itself according to the brush's flow property.
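
A minimal sketch of that kind of fix, with hypothetical names (not my actual code): keep the flow timestamp per canvas instead of per brush, so painting on one canvas never throttles the others:

import time

class Brush(object):
    def __init__(self, flow_interval=0.05):
        self.flow_interval = flow_interval  # minimum seconds between stamps
        self.last_paint = {}                # canvas -> time of its last stamp

    def can_paint(self, canvas):
        now = time.time()
        # Throttle per canvas: a stamp on one canvas doesn't block the others.
        if now - self.last_paint.get(canvas, 0.0) >= self.flow_interval:
            self.last_paint[canvas] = now
            return True
        return False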

So now I have several canvases that I can overlap. I don't have an interface to move them around, sort them in Z, or choose the active one, and I'm lazy about it; I don't want to code GUI stuff, so I will see how to sort it out.

I also discovered an easier way to create a FileDialog, check the code:

import wx

def ChooseFileDialog(caption="Choose a file", folder="C:/", default="file.png", wildcard="*.png"):
    result = ""
    app = wx.PySimpleApp()
    # Pass the dialog settings through instead of hardcoding them.
    dlg = wx.FileDialog(None, caption, folder, default, wildcard, wx.FD_SAVE)
    if dlg.ShowModal() == wx.ID_OK:
        result = dlg.GetPath()
    dlg.Destroy()
    app.Destroy()
    return result

Better than the last one. In fact I don't think I need the Destroy line, but I'm always scared of leaving an app running in the background, because I don't have any way to check for it.

I also made the application window resizable, which fits better the kind of application I'm creating.

There is no code to upload today; I don't think it's worth releasing this version without proper controls. I'm also planning to create new brushes and textures.

[screenshot: hackpact_4canvas]

You can see four canvases arranged horizontally. I render a grid to make them easy to tell apart.

Hackpact Day 9: Text and Widgets

I think I have totally lost the path in this project. I wanted to experiment with different ideas, and instead I'm building a full application. Which is good in a way, because I'm touching every corner of Python, but I miss having more crazy ideas.

So I don't know what I will do next. I think the current version of Pyncel is powerful enough to do interesting things. By the way, the name comes from pincel (brush in Spanish) and Python.

I want to experiment with automated brushes. For that purpose I will create an XML document holding all the info: the texture filenames involved, the pressure, and other settings. And not just that; I'm planning to embed a little bit of source code in the XML, so loading a brush can add some automation.

Today I'm publishing the source code of the latest version. It is growing fast, so there are a lot of files now, but they are well organized, so it is easy to understand.

There is only one new module required: wxPython. I added it because I wanted some dialogs for loading and saving files, and I thought it would be stupid to code them myself. That doesn't mean the whole app runs on Wx now; I just create a tiny wx app when a dialog is needed and destroy it afterwards, and it works perfectly for what I need.

Here is the source code to show a FileDialog:

import wx

def ChooseFileDialog(caption="Choose a file", folder="C:/", default="file.png", wildcard="*.png"):

    class SAPPWX(wx.Frame):

        myfile = ""

        def __init__(self, parent, id, title):
            wx.Frame.__init__(self, parent, id, title)
            self.initialize()

        def initialize(self):
            # The frame exists only to host the dialog; it closes itself.
            dlg = wx.FileDialog(self, caption, folder, default, wildcard, wx.FD_SAVE)
            if dlg.ShowModal() == wx.ID_OK:
                SAPPWX.myfile = dlg.GetPath()
            self.Destroy()

    app = wx.PySimpleApp()
    frame = SAPPWX(None, -1, "")
    app.MainLoop()

    return SAPPWX.myfile

I thought it was interesting to have the freedom to create a GUI element without having to deal with all the application plumbing. Maybe there is a better way to do this, but I didn't find it. I am a bit concerned that the wx app may still be running in the background…

I also wanted to render a little HUD, but I know from previous work how hard it is to draw text in an OpenGL application, so I just used the GLUT raster text functions. They are slow and ugly, but it is just three lines of code without adding more dependencies. The only problem is that you can't change the font size, but I don't care.
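
For reference, those few lines look roughly like this (a sketch using PyOpenGL's GLUT bindings; it assumes GLUT was initialized by the windowing setup, and the font is a placeholder):

from OpenGL.GL import glRasterPos2f
from OpenGL.GLUT import glutBitmapCharacter, GLUT_BITMAP_HELVETICA_18

def draw_text(x, y, text):
    # Place the raster cursor, then emit the string one glyph at a time.
    glRasterPos2f(x, y)
    for ch in text:
        glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, ord(ch))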

So here is the source code: hackpact day 9

And some random screenshots made with the latest version:

[screenshots: nebula, nebula2]

Hackpact Day 8: Refactoring, classes and operators

I've been improving my canvas app these past days, but I didn't have time to blog about it, sorry; that's why I'm a few days behind. Mainly because most of the work is not really interesting: refactoring my old code, arranging it in a smarter way, and dealing with stupid problems.

I have improved the way the brushes behave, created new brushes, and solved some bugs.

The only interesting thing I did was to create a Vector class, the usual class for storing the coordinates of a point. I overloaded all the operators, so the class is now transparent to use: it behaves like a list, but you can multiply or divide it, operate between Vectors, etc.

That kind of task is frustrating. When you are an experienced C++ programmer and jump to a high-level language like Python, you always miss some of the low-level side of programming. For instance, in Python, if I have an instance in A and I do "B = A", then A and B share the same instance, so variables behave more like pointers.

That is a big source of bugs, because most of the time I don't realize Python doesn't copy unless you ask for it explicitly, and I end up with several variables sharing the same instance. So now I tend to solve it by letting the constructor of a class accept an instance. So I can do:

a = vec([10,10])

b = vec(a) # this is a copy

All the information you need about OOP in Python is on the internet, so that is not a big problem. But coding the Vector class was more of an exercise, because I will end up using the CG library I wrote about a few posts ago. I don't like adding dependencies, but I don't want to code all that low-level math either, especially when I hardly know how to make the functions efficient.

Here is my Vector class. It works for 2-, 3-, 4-, or N-dimensional vectors, and you can use it wherever the app expects a list and it won't crash:

from copy import copy
from math import *

class vec:
    def __init__(self, v=(0.0, 0.0)):  # a tuple default avoids sharing one mutable list across instances
        if type(v) == list:
            self.data=v
        elif type(v) == tuple:
            self.data=list(v)
        elif v.__class__.__name__== self.__class__.__name__:
            self.data = copy(v.data)
        else:
            raise Exception("Wrong parameter type:" + str( type(v)) )

    def __repr__(self):
        s = "vec("
        for a in self.data: s += "%0.3f,"%a
        return s[:-1] + ")"

    def toList(self):
        return copy(self.data)

    # overload []
    def __getitem__(self, index):
        return self.data[index]

    # overload set []
    def __setitem__(self, key, item):
        self.data[key] = item

    def __add__(self, other):
        return vec( map(lambda a,b:a+b,self,other) )

    def __sub__(self, other):
        return vec( map(lambda a,b:a-b,self,other) )

    def __mul__(self, other):
        if type(other) == int or type(other) == float:
            return vec( map(lambda a:a*other,self) )
        else:
            return vec( map(lambda a,b:a*b,self,other) )

    def __div__(self, other):
        if type(other) == int or type(other) == float:
            return vec( map(lambda a:a/float(other),self) )
        else:
            return vec( map(lambda a,b:a/float(b),self,other) )

    # return size to len()
    def __len__(self):
        return len(self.data)

    def copy(self,v):
        self.data = copy(v.data)

    def module(self):  # vector magnitude (Euclidean length)
        return sqrt(sum(map(lambda a: a*a,self.data) )  )

    def distance(self,b):
        return (b-self).module()
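
A quick usage example:

a = vec([10.0, 10.0])
b = vec([13.0, 14.0])
print a + b          # vec(23.000,24.000)
print a * 2          # vec(20.000,20.000)
print a.distance(b)  # 5.0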

No screenshots or app code today, sorry, but check the next post.

Hackpact Day 7: bytes, pixel formats, PIL and to Save

Today I wanted to add scrolling to my canvas, so I can have a canvas larger than the window.

Implementing the feature was easy, but then I realized that my "save" function was only dumping the screen, not the whole texture, and now the texture is bigger than the screen, so I needed a save method on the RT class.

It wasn't hard to code, but the problem came when I tried to save the RGB16F RT: it just didn't work. The pixels in the resulting image looked as if one byte per pixel per channel was being read instead of a float. In hindsight it was obvious: you can't pass an array of bytes to a function and expect it to guess the layout. But the documentation of PIL (the library used to handle images in Python) is crap; it doesn't explain how to specify the pixel format when a channel is 16 or 32 bits and there is more than one channel.

I searched for information all day long and found nothing. I ended up concluding that PIL can't read RGB images with more than 8 bits per channel. There is something about an "F" mode in the frombuffer function, but the codec seems to allow only single-channel images. Silly.

Then I had an idea: if I take every pixel and divide it by 256 (when a short is used), I get 8-bit precision. I don't really need to save a 16-bit image anyway, mainly because few file formats support it (and right now I'm using JPG).

So I tried exactly that: divide every pixel read from the buffer by 256 and store it using 8 bits per channel. It wasn't easy, because the image is stored using numpy, which I understand, but the numpy documentation is crap too; it doesn't tell you basic things like how to convert from one data type to another, or how to apply a function to every value of a matrix. I finally figured it out, but it didn't work: apparently some values were out of bounds.
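
In hindsight, the conversion I was after probably just needed a clamp first. A sketch, assuming the RT is read back as a float numpy array; float render targets happily store values above 1.0, which would explain the out-of-bounds values:

import numpy
import Image  # PIL

def float_rt_to_image(pixels, width, height):
    # pixels: float32 array of shape (height, width, 3) read back from the RT.
    # Clamp first: values outside [0, 1] would overflow the 8-bit range.
    clamped = numpy.clip(pixels, 0.0, 1.0)
    img8 = (clamped * 255.0).astype(numpy.uint8)
    return Image.fromstring("RGB", (width, height), img8.tostring())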

So after wasting a whole day just to save an image, I went with the simplest idea: create a temporary RGB render texture with 8-bit precision and render a quad into it using the float texture. I waste some memory and performance, but it works and it is easy.

No screenshots or resources today.

Hackpact Day 6: Application and Canvas

I'm three days behind, I know. I've been coding hard these past days but never found time to blog about it, and I also wanted a nice version without bugs before sharing it here.

So what have I been coding? Well, I pushed aside the old code about cubes and cellular automata and started something new from scratch. But first I refactored a little to create the classic class of almost every interactive application: the Application class.

This class encapsulates the ugly, boring code common to all applications: creating the window, running the main loop, reading the input, computing the elapsed time, quitting cleanly, and some minor stuff.

While refactoring I tried to use as many Python idioms as I could, not just the usual C++ style. I followed some nice tutorials that explain how to take advantage of Python's features to reduce the amount of code; this one in particular was pretty useful: Python Tips, Tricks, and Hacks.

That translates into better use of lists, function parameters, and iteration in general.

I even added some exception handling to avoid leaving the window open when the application crashes; that was annoying.

I refactored my old code to make it really simple to create an application from scratch. Here is an example:

#!/usr/local/bin/python

from OpenGL.GL import *
from OpenGL.GLU import *
from GLTools import *
from shaders import *
from Application import *

WINDOW_SIZE = [800, 600, False]  # width, height, fullscreen

class MyApp(Application):

    def init(self):
        # One-time setup: GL state and resources.
        glDisable(GL_CULL_FACE)
        self.logo_tex = Texture()
        self.logo_tex.load("data/tmt-logo.png")

    def render(self):
        glClearColor(0.0, 0.0, 0.0, 1.0)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        # Additive blending for the logo quad.
        glBlendFunc(GL_SRC_ALPHA, GL_ONE)
        glEnable(GL_BLEND)
        glColor3f(1.0, 1.0, 1.0)
        self.logo_tex.render([-1, 1], [2, -2])

    def update(self, time_in_ms):
        pass

app = MyApp()
app.createWindow("My App", WINDOW_SIZE, WINDOW_SIZE[2])
app.start()

The formatting of the snippet gets a little messed up in the blog, but you can download the file from the source code link at the end of this entry.

Then I decided to make a 2D application. I'm a little tired of 3D cubes, and I don't plan to write a mesh loader for the moment; right now I want to focus on other ideas, more oriented toward pictures and basic shapes.

I remembered an old idea about creating something similar to a canvas to draw on, Photoshop style. I like to use Photoshop for illustration, but sometimes I have ideas for brushes that can't be built from Photoshop's features (maybe the latest versions have added them).

The idea is to create a RenderTexture and use it as a canvas, then draw textured quads onto the RT when painting. They can be drawn with blending to achieve nice overlay effects, and I have complete freedom to resize, rotate, or do other tricks with the brush. Creating the canvas was easy; it was done two days ago pretty quickly, once I had the Application class handling the mouse and keyboard events.
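
The painting step itself is tiny. A minimal sketch in immediate-mode OpenGL (function and argument names are mine, not the actual app code): every mouse event just stamps one textured quad into whatever render target is currently bound:

from OpenGL.GL import *

def stamp(brush_tex_id, x, y, size, alpha):
    """Stamp one brush quad at (x, y) into the currently bound render target."""
    glEnable(GL_TEXTURE_2D)
    glBindTexture(GL_TEXTURE_2D, brush_tex_id)
    glEnable(GL_BLEND)
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
    glColor4f(1.0, 1.0, 1.0, alpha)  # alpha controls the brush opacity
    h = size / 2.0
    glBegin(GL_QUADS)
    glTexCoord2f(0, 0); glVertex2f(x - h, y - h)
    glTexCoord2f(1, 0); glVertex2f(x + h, y - h)
    glTexCoord2f(1, 1); glVertex2f(x + h, y + h)
    glTexCoord2f(0, 1); glVertex2f(x - h, y + h)
    glEnd()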

But I thought it wasn't very interesting, mostly because the tool barely had any feature that couldn't be done in Photoshop.

The next day I spent some time playing with the app and adding common features: saving the image to disk, an Undo option, different brushes, controlling the alpha and the repetition, etc.

I have some ideas for the future. For starters, I want to create special brushes that behave as if they had a life of their own (auto-brushes from now on). Then I want a big canvas, not just the one I have now: something larger composed of several RTs, with the auto-brushes wandering around, drawing strange shapes.

There are more features I would like to add, like layers 'a la Photoshop', sharing the canvas online like webcanvas does, or a small class for on-the-fly coding of the brushes.

Lots of ideas to explore in this field…

Problems found

During the refactoring I found some annoying problems related to Python and OOP, mostly due to some of its internal behaviours. For instance, all the brush instances were sharing the same list instance for their textures instead of each having its own, because I had initialized the list in the body of the class instead of in the constructor.
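
A minimal sketch of the gotcha, with hypothetical names: a list initialized in the class body is shared by every instance, while one created in __init__ belongs to each instance:

class BadBrush(object):
    textures = []                # class attribute: shared by every instance!

class GoodBrush(object):
    def __init__(self):
        self.textures = []       # instance attribute: one list per brush

a, b = BadBrush(), BadBrush()
a.textures.append("grunge.png")
print b.textures                 # ['grunge.png'] -- b sees a's texture

c, d = GoodBrush(), GoodBrush()
c.textures.append("grunge.png")
print d.textures                 # []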

Other issues were related to OpenGL. Dealing with RTs based on Frame Buffer Objects is simple in concept but tricky in practice, mostly because they can behave erratically on some systems. My friend Miguel Angel is having issues with the OpenGL code on his machine, and I'm having some on mine.

I also had a pixel-precision problem with the canvas. If the brush paints too many quads in the same region, it is easy to overdraw that zone quickly, which doesn't look nice. The solution is to draw quads with a small alpha so the color builds up slowly, but this has a problem: if the brush alpha is very small and the texture also has alpha values close to zero, then when both values are multiplied and stored in the RT there isn't enough precision, and they get clamped to the nearest representable value, creating ugly artifacts.

The solution is obvious: increase the precision of the RT. Instead of the usual 8 bits per channel, I changed the RT code to support more formats, like 16 or 32 bits. This was tricky because I don't know how they behave on different cards, and my first surprise came when I tested on my home computer: it ran at 2 frames per second, just because my GeForce 6600 doesn't like the RGB32F format. I made some fixes to use 16 bits, but I was disappointed that drawing a quad into a 32-bit texture could drop performance to 2 fps.

I had more problems, but I don't remember them now, probably because they weren't that important.

Screenshots

Here are some of my artistic results. I created some brushes in Photoshop, but I enjoy playing more with the plain ones.

Code

There are lots of keys, so here is a list:

  • 1-5 to change between brushes
  • Control Z to Undo
  • Control S to save to disk
  • Keypad / and * to control flow
  • Keypad + and - to control pressure
  • Keypad . to change between white and rainbow color
  • C to clear the buffer
  • Mouse Wheel to control the brush size
  • Shift Mouse Wheel to control brush rotation

You can download it from here: hackpact day 6

Hackpact Day 5: Conway in a cube

Today I was a little short of ideas, and having the latest Alone in the Dark game didn't help.

So I decided to give the Conway shader I coded yesterday a better look: instead of using it as a PostFX, I used it as a texture for the cube. It was easy; I only had to add texture coordinates to the cube and bind the result texture of the Conway pass when rendering the cube.

I can't say the results are very good, but for those who love cellular automata it is fun to watch.

I tried to improve the Conway shader a little but ran out of ideas. I ended up putting a different world in every color channel, so what you see is three boards at once (red, green, and blue). I seed the world from a grayscale texture, so at the start all channels are nearly identical.
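
The same idea is easy to prototype on the CPU. Here is a sketch in numpy of one Game of Life step on a (height, width, 3) board of 0/1 cells, where each color channel evolves as an independent world (the shader does the equivalent per texel):

import numpy

def life_step(world):
    """One Conway step; 'world' is a (h, w, 3) uint8 array of 0/1 cells."""
    neighbours = numpy.zeros(world.shape, dtype=numpy.int32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            # numpy.roll wraps around, so the board is toroidal,
            # matching the repeated texture edges.
            neighbours += numpy.roll(numpy.roll(world, dy, axis=0), dx, axis=1)
    born = (world == 0) & (neighbours == 3)
    survive = (world == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survive).astype(numpy.uint8)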

All the faces of the cube use the same texture, and when I coded the Conway pass I forced the texture to repeat at the edges, so it looks as if every face behaves differently, but they are all the same.

I also tried rendering the cube several times at different scales to suggest that the pixels have volume, but it didn't work. So in the end I just took advantage of the card's capabilities and used a texture big enough to hold a huge Conway world; it looks fun with so many cells in action.

Now screenshots and source code:

[screenshots: hackpact_day5_screenshot1, hackpact_day5_screenshot2]

Here you can download the source code: hackpact day 5