Build/installation notes: the libraries used are OpenGL, freeGLUT, and GLEW. Headers and libraries are under ./avsx/include/. Intermediate files are in ./avsx/Release/ and ./avsx/Debug/. Binaries are placed into ./avsx/bin/. Make sure that glut32.dll, freeglut.dll, and glew32.dll are in the same directory as the executable (they are already placed in ./avsx/bin/). Tested to work with Visual C++ 2009 and the latest nVidia GeForce drivers.

Video: (short 2MB clip, local), (longer clip, www)


Traditionally, all drawing on the videocard is done in just two or three buffers in a limited environment. Some complex graphical effects, such as environment mapping, reflections, and picture-in-picture effects, require reusing the drawn objects by making multiple passes or passing data back and forth between the CPU and the videocard. The computational cost of this is unacceptable for real-time applications.

Several proprietary solutions (like nVidia's pbuffers) provided alternative ways of reusing rendered images on the videocard as textures, but they did not gain much popularity or hardware support. Recently, the OpenGL 2 standard introduced a new mechanism for image reuse and offscreen rendering, called the framebuffer object (FBO). It unifies storage of and access to image data on the videocard, reducing the need for multiple render passes and CPU-GPU communication.
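
As a rough illustration (this is not code from the project), the sketch below shows the basic FBO idiom: create a texture, attach it as the color buffer of a framebuffer object, and render into it while the FBO is bound. It assumes the EXT framebuffer entry points exposed by GLEW.

    /* Minimal render-to-texture sketch using the EXT framebuffer object
       entry points exposed by GLEW. It creates a square texture, attaches
       it to an FBO, and leaves the FBO ready to be bound whenever drawing
       should go into the texture instead of the window. */
    #include <GL/glew.h>

    GLuint createRenderTexture(int size, GLuint* fboOut)
    {
        GLuint tex, fbo;

        /* Empty square RGBA texture that will receive the rendered image. */
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Framebuffer object with the texture as its color attachment. */
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, tex, 0);

        if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
            return 0;   /* videocard/driver does not support this setup */

        /* While the FBO is bound, all drawing lands in the texture; binding
           framebuffer 0 restores rendering to the normal window buffers. */
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        *fboOut = fbo;
        return tex;
    }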

I created a small framework that allows small visualization modules to be chained together to produce complex real-time visualizations in a single pass, right on the videocard. An FBO is used to make OpenGL draw directly into textures or pixel arrays.


I created 11 individual modules:


Any number of modules can be chained together, and a configuration can be imported from and exported to a special file format (.avsx). The file basically contains chains of ASCII numbers and supports comments, so it is human-readable and editable. Interactions between modules are stored in a directed graph. The module controller iterates through the graph, figures out the correct flow of data, assigns buffers to modules, and creates a call stack. Right now most of the manipulation is done with pixel arrays, but it would be possible to alter the program to run completely on the GPU, with modules written in a shader language.
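
The following is a hypothetical sketch of how such a controller can flatten the module graph into a call order (a plain topological sort); the structure, names, and the naive buffer assignment are illustrative and are not taken from the actual source.

    #include <vector>
    #include <queue>

    struct Module {
        std::vector<int> outputs;   // ids of modules this one feeds into
        int pendingInputs;          // incoming links not yet satisfied
        int buffer;                 // pixel array / texture assigned to the output
    };

    std::vector<int> buildCallStack(std::vector<Module>& modules)
    {
        std::vector<int> callStack;
        std::queue<int> ready;

        // Source modules (no inputs) can run right away.
        for (int i = 0; i < (int)modules.size(); ++i)
            if (modules[i].pendingInputs == 0)
                ready.push(i);

        while (!ready.empty()) {
            int id = ready.front();
            ready.pop();
            modules[id].buffer = (int)callStack.size();   // naive: one buffer per module
            callStack.push_back(id);

            // A downstream module becomes ready once all of its sources are scheduled.
            for (size_t j = 0; j < modules[id].outputs.size(); ++j)
                if (--modules[modules[id].outputs[j]].pendingInputs == 0)
                    ready.push(modules[id].outputs[j]);
        }
        return callStack;   // module ids in a valid execution order
    }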

I have created several presets to demonstrate all the different modules. To run a preset, drag it onto the executable in Explorer or run "avsx.exe preset_file_name" from the command line. These demos can be found in the ./avsx/DEMOS/ directory.


Originally, I was going to create a visual editor for the preset graph, but I ran out of time. I plan to continue working on the project for fun after this class is over; here is the concept art of what I would like the visual editor to eventually look like (see picture on the right).

The program is restricted to square textures with power-of-two dimensions due to per-pixel caching done in some modules. Performance on my desktop is decent: 30-50 frames per second on most configurations (except for the sluggish convolution filter).

I could not get the program to run in class - I had been programming only on my desktop and should have checked my laptop beforehand. The program uses some recent OpenGL features that are not supported by older videocards and drivers, and I suspect that was the problem.


Below is the definition of the .avsx file format. Everything after a semicolon is a comment (ignored by the parser).

version  total_modules  total_links			; the first line always has these 3 numbers
id_of_module  type_of_module  number_of_config_entries	; for each module, these 3 numbers are listed
datatype_of_config_entry  config_entry_value		; every module can have multiple config entries
id_of_source_module  id_of_destination_module		; every link in the graph is listed here, after
							; the modules and their configs
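
For illustration only, a tiny three-module preset in this format could look like the following (the module type and datatype codes are invented here; the real codes depend on the module implementations):

1 3 2				; version 1, 3 modules, 2 links
0 4 1				; module 0 has type 4 and 1 config entry
1 0.5				;   its single config entry: datatype 1, value 0.5
1 7 0				; module 1 has type 7 and no config entries
2 9 0				; module 2 has type 9 and no config entries
0 2				; module 0 feeds into module 2
1 2				; module 1 feeds into module 2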