Sunday, September 23, 2012

Where I've been

Yes, everyone, I still exist, and In Profundis weighs heavily on my mind.  I've not been talking about it here lately because I reckon it gets pretty old being told the same old things about its progress.  Unfortunately I have bills to pay and the Kickstarter money doesn't really last that long, and I'm loath to ask you good people for more.

But hopes are high for the near future.  The big project that's been consuming my time, the one that keeps ends meeting, is about to be publicly released, so maybe I can get back to devoting time to (the unfortunately acronymed) IP.  It also has to do with cellular automata, and I've picked up a couple of ideas relating to In Profundis while working on it.

Monday, June 11, 2012

Progress: OpenGL, pyglet, new fluid dynamics, optimization rewrite

Not a lot to show publicly.  I'm rewriting the engine to make it easier to optimize (a big part of this is switching from a Python class for cells to parallel arrays that can be turned into C arrays much more easily), and while I'm at it I'm learning pyglet and all about creating vertex lists and loading them into memory.  This, plus the fluid-sorting experiment, should provide much better performance, although it remains to be seen how well that will work.  More soon.

Monday, May 28, 2012

bubble vs decorated sort

There were some comments on the previous post about using a limited number of bubble sort runs, as opposed to a full sort of the whole list weighted by original position, which was an appealing argument in a Keep-It-Simple-Stupid kind of way.

So I'm running some performance tests comparing limited bubblesorting of lists against a decorated sort weighted by initial position.  Here is the code tested:

# This is run on the data eight times by the testing code
def bubblepass(sortlist):
    skip = False
    for a in xrange(len(sortlist)-1):
        if skip == False:
            if sortlist[a] > sortlist[a+1]:
                temp = sortlist[a]
                sortlist[a] = sortlist[a+1]
                sortlist[a+1] = temp
                skip = True
        else:
            skip = False
    return sortlist

def decsortundec(sortlist):
    templist = zip(sortlist,range(len(sortlist)))
    declist = [(item[0]+(item[1]/4),item[0]) for item in templist]
    declist.sort()
    return [item[1] for item in declist]
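
A harness roughly along these lines (a sketch, not my exact test script) is enough to reproduce the comparison:

import random
import timeit

data = [random.randint(0, 20) for _ in xrange(3200)]

def run_bubble():
    work = list(data)
    for _ in xrange(8):          # the eight passes mentioned above
        bubblepass(work)

def run_decorated():
    decsortundec(list(data))

print timeit.timeit(run_bubble, number=100)
print timeit.timeit(run_decorated, number=100)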

In performance tests on lists of 3200 random small integers, the decorated sort runs 40-60% faster than the bubblesort, despite creating three entirely new lists along the way.

decsortundec uses list comprehensions, which are a powerful, yet to me somewhat confusing, aspect of the language.  To my understanding they are a way of quickly creating a new list from a source list, and along the way you can transform each item of the original list and/or choose whether to include it.  Note that this code is written for Python 2.7; in Python 3, list comprehensions still produce lists, but zip() returns an iterator rather than a list and / becomes true division, so the weighting step would want item[1]//4 to keep the weights as integers.  (There might be a better way to do this in Python 2 using functional programming, but my brain hasn't been twisted into thinking in that particular Lispy way yet.)
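
For anyone else who finds them confusing, here are the two basic shapes in a throwaway example (nothing from the project):

values = [17, 23, 94, 81, 3, 76]

# Transform every item: add its index divided by 4 to it.
weighted = [v + (i / 4) for i, v in enumerate(values)]

# Transform and filter at the same time: keep only items over 50, doubled.
big_doubled = [v * 2 for v in values if v > 50]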

Notes:
1. bubblepass is not a pure bubble sort, because on our trip through the list we skip positions that have already been swapped from the previous position.  That doesn't matter in a true bubble sort where we're ultimately trying to sort all the items as quickly as possible, but we'd need it here since we specifically don't want particular items to travel far.
2. It is possible that PyPy will optimize the bubblesort better, maybe far better, since it uses fairly elementary Python, while the other uses list comprehensions which are mostly implemented in C by the virtual machine.
3. I'm new to writing list comprehensions (which decsortundec uses twice) so it's possible there's a more efficient way to write this.  Would anyone know of a better way to factor in an item's original index number into its sorting weight?
4.  It might be possible to use some of that list comprehension stuff on the bubble sort, but I don't see how to implement it off the top of my head.

Thursday, May 24, 2012

Weird simulation idea

I think I've had something of a breakthrough on this project.  Think about this for a bit:

Ultimately, the simulation is just a kind of inefficient sorting operation.  Fluids travel from higher places to lower places.  Other things get in the way.  If things aren't falling, then they might migrate side to side.

Sorting is a hard problem, but it's one that a lot of effort has been poured into.  Python particularly has some good sorts written in optimized C.

We already use Python's sort functions in the code, to arrange the positions of different slices of liquid in a single cell.  My idea is to extend it beyond that cell -- to sort whole columns of fluid at once.

The biggest problem with this is that sorting works too well -- heavy fluid will transmigrate from the top of a column to the bottom instantly, with no possibility of affecting anything along the way.  There needs to be some way to limit how far an individual slice of fluid can travel in its passage.  There are no sorts that I'm aware of that support this kind of travel limiting, but it might be doable using a Python trick more frequently used back in older versions, called Decorate-Sort-Undecorate.  That is, you first modify the contents to encode additional information that weights each item in the sort by some variable; in this case, its initial position in the column.  The weights would have to be chosen in such a way that the fluid weights overpower the positional weights over short distances but not long ones.  Then we sort the column, and finally remove the extra information (or in our case, overwrite it with new positional information the next time we do a sort).
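
To make that a little more concrete, a decorate-sort-undecorate pass over a single column might look something like this (the weighting and names are placeholders, not settled code):

def settle_column(column, stiffness=4):
    # column is a list of fluid densities, index 0 at the top of the column.
    # Decorate: weight each slice by its density plus a penalty that grows
    # with how far down it already sits, so heavy fluid wins out locally
    # but can't teleport the whole way down in one pass.
    decorated = [(density + (index / stiffness), density)
                 for index, density in enumerate(column)]
    # Sort: Python's optimized C sort does the actual work.
    decorated.sort()
    # Undecorate: strip the positional weight back off.
    return [density for weight, density in decorated]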

Another problem is that we would still need to make another pass through each column to handle sideways motion.

How does that sound to you folk?

Tuesday, May 15, 2012

Vector tiles

I figure, caution to the wind.  I'm going to have a go at a completely vectorized graphical look for the game.  Not only will it give In Profundis a distinctive appearance, but it also makes it easier and faster to apply dynamic graphic effects, such as making liquids look more visually appealing and producing the "outline effect" for regions outside of the player's direct view.  It would also look somewhat closer to the pitch videos I posted to YouTube a year ago.

So, my current objective is to implement this system in pyglet.  After that, it's probably time to go in and scale back the random generation a bit, which after some thought I think is a bit too random.  The game world was more interesting when it was just a big blob of random shapes than the rather too strictly laid out shafts and corridors of the current system.

Monday, May 14, 2012

Pyglet Progress

There is a real sense of having gotten over a hump right now.  I have the beginnings of a pyglet tile engine worked up.  What is more, I have gotten the engine to render a 2D, hardware-accelerated, gradient-shaded polygon to my specifications.  It feels a little like I, a piddly little Python hacker, shouldn't have access to this kind of power.

Right now I'm having to fight hard not to turn the whole game into an Asteroids-style vector art extravaganza.  I might just do that yet.  I make no secret that I'm not happy with the look of the game generally.  The water and liquids look okay, but the stone looks like I did it up in 15 minutes in MSPaint (when actually it took me a bit longer than that, and I used Paint.NET).

Just imagine it: an exploration game with neon beams shining around the screen, representing walls and liquids.  Yeah, I think that kind of thing is awesome.  The perfect arcade game, in my book, would be something with a Vectorscan monitor and a Williams sound chip.

Sunday, May 13, 2012

Ah-ha!

I've got a basic tile grid working under pyglet!  Now we are so happy, we do the dance of joy.

Pygame, although easy to use, has always been a performance bottleneck for the game because it's built on SDL and does its drawing in software.  Pyglet is a lot more closely coupled with OpenGL.

Currently, a game frame is drawn in three steps:
- Background layer/gases
- Fluids/stone/other objects
- Sprites

My idea is to separate these things out into sprite batches and vertex lists in pyglet:
- Background layer/gases: vertex list
- Fluids: vertex list
- Stone and other objects: sprite batch
- Player and other objects: sprite batch
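
Something like this is what I have in mind for the batches and vertex lists -- a minimal sketch, with placeholder sizes, colors and filenames rather than real engine data:

import pyglet
from pyglet.gl import GL_QUADS

batch = pyglet.graphics.Batch()

# One flat-colored quad as a vertex list (say, a slice of fluid).
fluid_quad = batch.add(4, GL_QUADS, None,
                       ('v2f', (0, 0, 32, 0, 32, 16, 0, 16)),
                       ('c3B', (0, 64, 255) * 4))

# Sprites can go into a batch too; drawing the batch then draws everything
# in it with far fewer OpenGL calls than blitting images one at a time.
stone_image = pyglet.image.load('stone.png')          # placeholder filename
stone = pyglet.sprite.Sprite(stone_image, x=64, y=0, batch=batch)

# In the window's draw handler: batch.draw()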

Fluids particularly have been something I've worried about; I stopped using tiles for those in Pygame some time back due to the lack of flexibility.  Fortunately the polygonal shapes I've been using in Pygame look like they map to pyglet fairly nicely.

What is more, switching to pyglet will make it easier to use the JIT compiler of Psyco's successor, PyPy, to get around the speed limitation imposed by Python's interpreted nature.

Saturday, May 12, 2012

pyglet Tile Engine Progress

This is just to let you know that I'm still working on the pyglet tile engine, here and there, and that it's making my brain hurt.

Reading up on some of the more advanced features of Numpy makes my brain hurt in a different way.  Whence comes the mindpain?  This is legal:


>>>import numpy
>>>testarray = numpy.array([17,23,94,81,3,76,12,52,80,35,102,69,52])
>>>boolarray = testarray > 51

Now, boolarray is a parallel array to testarray, containing only booleans.  Each slot in boolarray is True if the condition held for the matching slot in testarray, and False if it didn't.

What's more, we can now do this:
>>>testarray[boolarray]
array([ 94,  81,  76,  52,  80, 102,  69,  52])

That is, we can index an array with a parallel boolean array.  The result of that expression is actually a copy rather than a view (views come from ordinary slicing), but the same boolean index can be used for assignment: testarray[boolarray] = 0 changes the matching elements of the original array in place.

Some similar things can be done in stock Python, I gather, using moderate-level Pythonic mojo like list comprehensions, but doing it with Numpy, if I'm understanding this correctly, has the advantage of being optimized.  Access to Numpy arrays benefits from all the elements being the same type of data; behind the scenes, indexing into these arrays is done with C-style pointer arithmetic instead of following Python object references.  In particular, performing an operation over every element of a Numpy array is supposedly quite fast compared to the same loop in plain Python... which might provide a clue as to why I'm researching this feature.
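
To give a flavor of why, here's the sort of whole-array operation I mean (the cell codes are invented for the example):

import numpy

# Pretend cell contents: 0 = empty, 1 = water, 2 = stone (made-up codes).
cells = numpy.array([0, 1, 2, 1, 0, 1, 2, 2, 1, 0])

# One C-level pass builds the mask...
water = cells == 1

# ...and one more assigns through it, e.g. turning all water to empty at once.
cells[water] = 0

# The plain-Python equivalent walks the list element by element in the
# interpreter:
#     for i in xrange(len(cells)):
#         if cells[i] == 1:
#             cells[i] = 0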


Thursday, April 26, 2012

Pyglet and tiles

This is my second attempt to move the graphics to the hardware-acceleration-friendly pyglet module.  The thing that held me up the first time is the same thing I'm making headway on right now: writing a tile engine for it.

pyglet comes with built-in support for sprites, and in fact to get good use out of it it's important to use its sprite class instead of pushing images to the screen yourself, which is what I had been doing with Pygame.  pyglet's sprite rendering loop takes care of queueing its "official" sprites and sending them to OpenGL, and the video card, for you, as long as you use its sprite class to display images.  The word is this is worth a substantial speed boost when drawing.

But pyglet doesn't provide as simple a mechanism for drawing tiles, meaning I have to simulate them with a field of sprites.  I'm doing this with an array of sprites that more-or-less mirrors those onscreen.  As the map scrolls around and new tiles enter the screen, the sprites that scrolled off are moved to the other side of the screen and updated to reflect the new map section.

Or at least that's the plan.  The hard part is keeping the coordinate translations between the screen, the sprite frame, and the map all straight in my head, which has always been difficult for me, probably due to some dyslexia.  Still, it seems to be coming along well for now.
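
For what it's worth, the bookkeeping I keep getting tangled in boils down to two little translations, roughly like this (made-up grid sizes, not working engine code):

GRID_W, GRID_H, TILE = 32, 24, 16     # visible tiles and pixel size (made up)

def pool_slot(map_x, map_y):
    # The sprite that shows map tile (map_x, map_y) always lives in the same
    # slot of the sprite pool, so scrolling only retargets the row or column
    # of sprites that just wrapped around the edge.
    return map_x % GRID_W, map_y % GRID_H

def screen_pos(map_x, map_y, cam_x, cam_y):
    # Map coordinates (in tiles) to pixel coordinates, given the camera's
    # top-left visible tile (cam_x, cam_y).
    return (map_x - cam_x) * TILE, (map_y - cam_y) * TILE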

Tuesday, April 24, 2012

Ways to speed up yer Python code

So I've been looking into ways to get Python code to run quickly, which is a continuation of the search I did last year (yikes, it's been almost a year since this began...)

I spent some time getting Cython working, and getting the code to work with it, but I kept running into difficult-to-solve problems with Python data types in Numpy arrays. In particular, one array wouldn't compile because the compiler claimed it was of type long when I had declared it as int, or at least as far as I could tell. I eventually decided to shelve that and look in other directions.

Currently, I'm changing the Pygame bits over to pyglet in order to make use of hardware acceleration, which is iffy under Pygame. This has the additional benefit of allowing the code to work under PyPy, the successor to Psyco. A drawback, however, is that pyglet wants to run the program's main event and draw loops itself. I've gotten in some more coding experience, so I'm less standoffish about that now than I was before, but to really do things the pyglet way everything has to be a sprite or a polygon. The way we draw liquids at the moment in Pygame uses filled polygons, so it's good that there's a way to port this over. The combination of hardware acceleration and JIT compiling is a potent one, and could potentially bring better performance than even Psyco did... if only I could figure out how to install pyglet into a PyPy installation.
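
For reference, the structure pyglet expects looks roughly like this -- a bare-bones sketch, with update() standing in for the game's own logic:

import pyglet

window = pyglet.window.Window(640, 480)

def update(dt):
    pass   # advance the simulation by dt seconds

@window.event
def on_draw():
    window.clear()
    # draw batches / vertex lists / sprites here

pyglet.clock.schedule_interval(update, 1 / 60.0)   # pyglet calls update for us
pyglet.app.run()                                   # pyglet owns the main loop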

If this fails for some reason, there's still weave, a system for inlining C code in Python. My problems with C and C++ tend to concern linking; C itself I have no trouble with, but getting code more complex than a handful of simply-included source files to link together into an executable has been a woeful journey for me.
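
For the record, weave's inline form is about this terse -- a trivial example, assuming scipy.weave is installed and a C compiler is set up:

from scipy import weave

a = 21
# The C snippet sees local Python variables by name; whatever is assigned
# to return_val comes back as the Python return value.
doubled = weave.inline("return_val = a * 2;", ['a'])
print doubled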

Anyway, work continues. You can't see it on your end, but behind this blog's inscrutable periods of silence is me, wringing my hands over the problems I've been facing. I have considered opening the source, or at least posting bits of it, and asking for advice from you guys. I don't suppose there are any Python mavens reading this?

Saturday, April 14, 2012

Figured that out

The problem I mentioned in the last post?  Turns out that attributes of a cdef class that are used outside the class have to be explicitly declared public in Cython.  Well, at least I figured out what was wrong!
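
For my own future reference, the declaration ends up looking something like this (names invented for the example):

cdef class Cell:
    cdef public int kind        # "public" makes it usable from plain Python
    cdef public float amount
    cdef int _scratch           # no "public": only visible inside the .pyx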

More Cython

I've made substantial progress in getting the code to work with Cython, but I'm still working through all the little errors that crop up when I try to optimize the code for it.

I'm going to write something here about Cython, a language I am not yet completely proficient in, so please take the following with that caveat in mind:

The idea behind Cython is that it's basically a C translator for Python.  It takes source files written in Python but with the extension .pyx and, using another Python script that functions a bit like a Makefile, first converts them into C source files (with a .c extension), then compiles those into Python extension modules (.pyd on Windows) that a Python interpreter can then import.

If you just rename your Python files to .pyx and write a plain setup.py to convert them, then they should work as-is.  In my case they do, and the result is about a 10-20% speedup: not bad, but nothing to write home about.

But one thing you can do with Cython is add keywords to variables, functions and classes that impart type information to them, which supposedly helps tremendously with heavy processing-focused applications, such as cellular automata.  You can also define a class so that, instead of using a Python dictionary to hold attributes, it uses a C struct, by using the cdef keyword when defining the class.  These things are basically what I've spent the past few days of work on In Profundis doing.
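
As a taste of what that looks like (a toy .pyx module, not the real cell code):

# toy.pyx -- compiled with a setup.py like the one in the earlier post
def spread(int width, int height):
    cdef int x, y
    cdef long total = 0
    for y in range(height):
        for x in range(width):
            total += x * y
    return total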

Unfortunately, accessing a cdef class's attributes from outside the class appears to work weirdly.  Sometimes it seems to work, but sometimes it throws up a runtime error that it can't find the attribute.  For it to work consistently, apparently, I have to write accessor functions for those attributes.  And sometimes (not all the time) even those accessor functions aren't found.  (I think it might have to do with the return type of the function, which I might have to declare.  I should look into that.)

Anyway, just wanted to keep you all posted.  In Profundis work has been faster this past week than it's been for a long time.  It's having to share time with a for-pay game project I'm working on (which also has to do with cellular automata), but it's continuing fairly well.  Expect more updates soon.

Thursday, April 12, 2012

Frame rate timings

From fastest to slowest:

Psyco
Mixture of selectively-compiled 32-bit Python bytecode and Cython
32-bit Python bytecode
64-bit Python bytecode

Things left to try:
PyPy (may or may not be compatible with Pygame)
Optimized Cython

Wednesday, April 11, 2012

Installing a working Cython on Windows with MinGW: a comic tragedy

The following is the contents of a text file I have written to myself and will keep on-hand for the next time I have to install Cython:



Setting up Cython:
1. It only works with 32-bit Python for some reason.  (I discovered why but forgot.)
2. Install 32-bit Python
3. Install 32-bit MinGW
4. Install setuptools for Win32
5. Write a batch file with the necessary paths, containing something like:

*********
SET PATH=C:\MinGW\bin;C:\MinGW\MSYS\1.0\local\bin;C:\MinGW\MSYS\1.0\bin;C:\Python27_32;C:\Python27_32\Scripts
start cmd
*********

6. In C:\(pythonpath)\Lib\distutils, create distutils.cfg, containing the text:

*********
[build]
compiler = mingw32
*********

7. In cygwinccompiler.py in the same directory, remove all instances of "-mno-cygwin".  This refers to a command line switch that has been removed in more recent versions of Cygwin/MinGW.

8. Run the batch file from step 5 to set up a command prompt window with the appropriate paths, then in that window run: easy_install cython

9. Write a setup.py for the source modules to compile.  The modules should have the extension .pyx (instead of .py).  It should look something like:

**********
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("hello", ["hello.pyx"])]
# ext_modules can contain as many Extension entries as there are
#   modules to compile.  Change the module name and filename to
#   match each module to compile, of course.

# In the following, change name to something appropriate.
setup(
  name = 'Hello world app',
  cmdclass = {'build_ext': build_ext},
  ext_modules = ext_modules
)
**********

10. Run from the pathed command window:
python setup.py build_ext --inplace

The module should now be importable.  It should work, at least.

The tragic/comic part of it all is the week it took me to get all this working. I actually did it all for the 64-bit version of Cython and almost got it working, only to be stopped by (yet another) mysterious error, only this one didn't have a solution. There are still one or two steps from that attempt that I haven't hit with the 32-bit version yet -- I'll probably run into those once I actually bring Cython to In Profundis' code (currently I've only compiled Hello World with it).

Monday, April 2, 2012

Progress 4/2

Back into the swing of things....

So I've been researching NumPy, a Python module for doing fast things with arrays, to see if it might help speed things up a bit.  Looks promising, although to get the speed advantage I'll have to redesign the code somewhat to use parallel arrays instead of a single 2D array of objects.
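
Concretely, instead of one big 2D array of cell objects, the layout would be something like this (the attribute names are placeholders):

import numpy

WIDTH, HEIGHT = 256, 256        # placeholder world size

# One array per attribute instead of one Python object per cell.
kind   = numpy.zeros((HEIGHT, WIDTH), dtype=numpy.int32)
amount = numpy.zeros((HEIGHT, WIDTH), dtype=numpy.float32)

# Whole-grid operations then run in C; for example, zeroing the amount
# in every cell whose kind is 0:
amount[kind == 0] = 0.0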

I discovered that, with Psyco, the code is slightly more than twice as fast as with it off.  With it on, I tend to get framerates in the 40-50 range; off, around 22-25.  But alas, Psyco's homepage now lists the project as officially abandoned, which is kind of a blow, honestly.  Its successor PyPy isn't universally usable yet -- in particular, it's incompatible with Pygame.

Another possibly useful package is Cython, which is a more explicit method of compiling Python code, but it has the disadvantage that the best optimizations require peppering the code with variable type declarations, which are not legal Python in themselves, so it's not a drop-in system.

Sunday, March 25, 2012

Python Explorations

Most of my In Profundis work this week has been in research.  Here's what I'm looking into.

There are benefits and drawbacks to using Python as a development language.  The biggest drawbacks are performance (although that's not as bad as you'd think) and something called the GIL, or Global Interpreter Lock.

A feature of many dynamic languages (Ruby has one too), the GIL ensures that, even in multithreaded code, only one thread can execute Python code at a time.  On a single-core machine this is not so bad, since the code is stuck doing this more or less anyway, but it means we won't get the performance out of multicore machines that we might expect.  (Note: I'm not currently sure whether Pygame's rendering respects the GIL.)  This is done to simplify the behind-the-scenes details of Python.  There are exceptions (especially regarding I/O), but for the most part threading seems more like a convenience feature than something one would turn to to improve speed.

But all is not lost; there are other solutions.  One is to use the multiprocessing module, which offers a way around the GIL.  Another is to use a version of Python that solves the GIL problem; both Jython and IronPython allow this, although they don't interface with Pygame to my knowledge.  And then there's optimizing the cellular loop itself using something like Numpy and/or Cython.
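
As a minimal sketch of the multiprocessing route (not project code -- update_region here is a stand-in for the real cellular rules):

from multiprocessing import Pool

def update_region(region):
    # Stand-in for running the cellular rules over one chunk of the world.
    return [cell + 1 for cell in region]

if __name__ == '__main__':
    regions = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    pool = Pool()                                # one worker per core by default
    updated = pool.map(update_region, regions)   # each worker has its own GIL
    pool.close()
    pool.join()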

Other notes:

Discarding Psyco means some hard choices I had to go with in earlier versions, especially concerning Python version, aren't so hard anymore.  This opens things up a bit concerning other modules, although I still can't go to Python 3.X because Pygame requires 2.X.

In the last message someone expressed concern that calculating the whole world would make the game unresponsive.  This is actually not necessarily the case, as the calculation loop is written in such a way that on a given "frame" it can calculate just part of the world, remembering where it left off so it can continue later.  So, I can calculate one-fifth of the world this frame, get player input, then another fifth the next frame, and so on.  It queues cells up in a spiral pattern around the player's location, and the visible screen is the beginning of each pass over the world, so we don't even have to worry about visible calculation artifacts.  Neat, huh?  In the future it would be nice to calculate different, non-adjacent areas of the world in different threads to make use of multicore systems -- maybe the multiprocessing module can help with that.
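
Roughly the shape of that loop, with made-up names (update_cell stands in for the real per-cell rules):

def simulation_steps(cell_queue, chunk_size):
    # A generator that processes the queued cells a chunk at a time and
    # remembers where it left off between frames.
    for start in xrange(0, len(cell_queue), chunk_size):
        for cell in cell_queue[start:start + chunk_size]:
            update_cell(cell)            # hypothetical per-cell rule
        yield                            # hand control back to the main loop

# Each game frame then just calls next() on the generator:
#     stepper = simulation_steps(queued_cells, len(queued_cells) // 5)
#     ...
#     next(stepper)    # advances one-fifth of the world, then returns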

Tuesday, March 20, 2012

State of the project: 3/20/12

This is rambly and questionably edited, but I wanted to get something out there about the project and what's up with it.

There hasn't been a lot of comment here recently because I've been thinking long and hard about the design, and about what I'm happy with about the direction In Profundis is going, and what I'm not so happy about.

Because of this, a semi-radical redirection of the project is underway.  Many computers are not fast enough to run a cellular automaton the size of the game's world as an action game unless I only simulate a zone around the player, and I've been unhappy with my efforts to make a platforming engine.  As Miyamoto has been known to say, an idea is something that solves two problems at once, and I think I have the solution: to make In Profundis a quasi-real-time, turn-based game.

So how will this work?  Well, first off, character movement won't be on a pixel level.  Instead, like in roguelikes, the character will move one cell at a time.  There will probably be an animation between cells and some concessions to playability, but you won't be able to stop "between" spaces.

The game will attempt to simulate the entire field each frame.  This will greatly reduce framerates, but will mean that the engine can support one of the features I consider to be essential: continuously flowing water between points.  If I don't simulate the whole field then liquids will tend to accumulate at the edges of the execution frame, meaning flows like waterfalls and rivers will eventually cease if one end of the flow is outside the frame.  Obsessive thinking about the problem has identified no solutions that themselves won't cause bigger problems, so I've come to the conclusion that cheating this isn't the answer.

This means to a degree actually abandoning the platforming physics engine, but I'm okay with it as it has taken a disproportionately large amount of development time to implement, and it's still pretty bad.  By simplifying the physics here, I can avoid some of the annoying edge cases that result in frequent character wall embeds, or at least handle them more elegantly.

However if you sit and do nothing, I'm thinking that game time will not freeze, although it may slow down.  Not only does this still provide some time pressure, but it also makes it more obvious what direction flows are going without having to add new visual elements to identify them.

What will probably happen is I'll run the world simulation and character turns on different threads.  One problem with this, however, is that Python threading is a bit odd.  To my understanding, due to the existence of a thing called the Global Interpreter Lock, Python threads are real operating system threads, but only one of them can execute Python code at a time, even if Python is running on a multiprocessor system.  (There is a module to get around this, multiprocessing, and there are versions of Python that either remove the GIL or make it less onerous, but I'm not sure how compatible those are with Pygame.)

But one good thing about this is, since the game won't be so dependent on processor speed, we'll be able to abandon Psyco.  Honestly it has been the source of many technical problems.  It works great as a drop-in solution that just "works" to make Python faster... so long as all your modules work with it (at least one has mysteriously broken in the past when Psyco's been enabled, which took some time to discover)...  and so long as you avoid esoteric language features like generators... and don't use Python versions after 2.6.  I've at last reached the point where I'm ready to declare it's just more trouble than it's worth.  Fortunately, it's just as easy to remove as it was to add.

As an aside, I am thinking hard about the nature of fluids.  The game can currently simulate up to eight kinds of fluids with random properties, but that might be a case of too much randomness.  It might be best to stick with water, sand, and a couple of others in each world.

Monday, February 13, 2012

I exist....

I'm still banging around. Unfortunately there are other necessary things going on right now, but I'm still here. I realize that it's been slow going, and it gets embarrassing to say "it's been slow" too often. We're approaching ten months since the Kickstarter project made its goal, and I realize that's pretty sorry. There is no need to remind me of that fact.

I have gotten some inventory stuff and tool stuff put in. The big thing that's been worrying me so far is that I'm thinking random fluids, as they're currently implemented, may not be as interesting as I had projected. It's not that random properties are bad, but when you have several kinds of fluids all mixing in a big basin, I'm not sure they interact in ways that are all that clean.

Graphics are another thing that's been worrying me lately. They're okay in a makeshift kind of way, but I haven't really spent much time on making sure the stone looks nice. I'm not talking about photorealistic nice, just wasn't-knocked-up-in-thirty-minutes-in-MSPaint nice.

Well, those are the things on my mind concerning the project at the moment. More before long, I expect.