thoughts, ideas, code and other things...

Saturday, March 20, 2010

We <3 MadeToKill

2+ years and still counting . . .


Quine attempt :)

Just tried writing a quine in Python. I'm ashamed that it took me so long to find a solution, so to hide the shame I covered my quine in ASCII art -

#!/usr/bin/env python
# -*- coding: utf-8 -*-
if __name__ == '__main__':
'adppppba, pp
dp" `"pb ""
dp` `pb
pp pp pp pp pp pb,dPPYba, ,adPPYba,
pp pp pp pp pp ppP` `"pa apP_____pp
Yp, "pp,,pP pp pp pp pp pp pPP'""'"""
print open(__file__).read();"""pp pp "pb, ,aa
`"YppppY"Ypa `"YbbdP'Yp pp pp pp '"Ybbdp'

Another approach with sys.exit() -
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
if __name__=='__main__':
    print open(__file__).read()
    sys.exit()

for x in xrange(1,10):
    for y in xrange(x):
        print chr(y),

# ^^ this one is just a cover up, actual quine ends at exit

and yet another approach, muting the stdout :P -
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys

class Mute:
    def write(self,msg):
        ''' I am a quine alright '''
    def flush(self):
        ''' so am i :P '''

if __name__=='__main__':
    print open(__file__).read()
    sys.stdout, sys.stderr = Mute(),Mute()
    for x in xrange(1,10):
        for y in xrange(x):
            print chr(y),

foozooooo < iz_this_an_error

if now_this_too:
    if you_too:
        if santa:
            if you:
                if me:
                    if no_one_at_all:
' /( )\
\ \__ / /\
/- _ `-/ / \
(/\/ \ \ \
/ / | ` \
O O ) |\
`-^--``< /\
(_.) _ ) / \
`.___/` |\
`-----` \
<----. __ / __ \
<----|====O)))==) \) \
<----` `--` `.__,` \
| |\
\ /\
____( (_ / \_____\
,` ,----` | \
`--{__________) \/'
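
Reading the file back with open(__file__) is the easy way out - purists would call it cheating, since a "true" quine has to rebuild its own source with no I/O. For the record, here is a minimal self-contained one (two lines, and it runs the same under Python 2 and 3 since print(...) parses both ways):

```python
# s holds a template of the whole program; %r re-inserts s into itself
s = 's = %r\nprint(s %% s)'
print(s % s)
```

%r reproduces the string with its quotes and escapes intact, and %% collapses back to a single %, so the program prints exactly its own two lines.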


Tuesday, March 16, 2010

Fun with PyGame's camera module

OMG T2 starts tomorrow, I'm probably gonna fail in Random Processes this time.
But before I start preparing, I had to try out the tempting Camera Module of PyGame.
PyGame has pretty cool stuff under pygame.transform and pygame.mask, which makes the task of thresholding very easy.
I came up with an interactive Tux based on the "Capturing a Live Stream" code in the Camera Module Introduction.

My program tries to detect red colored objects, finds the centroid of such points and draws a ghost (find in your /usr/share/icons/oxygen/32x32/apps/) and makes Tux (/usr/share/icons/oxygen/128x128/apps/tux.png) run away from it.

Here's what the code looks like -

# -*- coding: utf-8 -*-

# An interactive Tux, interact with it using any red colored object
# based on PyGame camera module intro by Nirav Patel
# -- Abhishek Mishra (ideamonk #

import os
import pygame
import pygame.camera
from pygame.locals import *

class Capture(object):
    ''' A Capture class to get location of a desired blob '''

    def __init__(self, ccolor=(248, 111, 115), threshold=(60, 10, 10)):
        self.size = (640,480)
        # create a display surface. standard pygame stuff
        self.display = pygame.display.set_mode(self.size, 0)
        # initialize camera module
        pygame.camera.init()
        # this is the same as what we saw before
        self.clist = pygame.camera.list_cameras()
        if not self.clist:
            raise ValueError("Sorry, no cameras detected.")
        self.cam = pygame.camera.Camera(self.clist[0], self.size)
        self.cam.start()

        # create a surface to capture to. for performance purposes
        # bit depth is the same as that of the display surface.
        self.snapshot = pygame.surface.Surface(self.size, 0, self.display)
        # target color to detect -- default is red
        self.ccolor = ccolor
        # by default we give more priority to shades of red
        self.threshold = threshold

    def get_blob_location(self):
        self.snapshot = self.cam.get_image(self.snapshot)
        # threshold against the color we got before
        mask = pygame.mask.from_threshold(self.snapshot, self.ccolor, self.threshold)
        # keep only the largest blob of that color
        connected = mask.connected_component()
        # these numbers are purely experimental and specific to your room and object
        # print mask.count() # use this to estimate
        # make sure the blob is big enough that it isn't just noise
        if mask.count() > 7:
            # find the center of the blob
            return mask.centroid()
        return (None,None)

class Ghost():
    ''' Ghost class, to have a rect for collision detection '''
    def __init__(self):
        self.image = pygame.image.load (os.path.join ("./","gv.png"))
        self.rect = self.image.get_rect()

    def set_rect(self,position):
        self.rect = pygame.Rect(position[0], position[1], 32, 32)

class Tux(Capture):
    ''' Tux class extends Capture and does stuff using get_blob_location '''

    def __init__(self):
        Capture.__init__(self)
        self.location = [ x/2 for x in self.size ]
        self.set_rect (self.location)
        self.image = pygame.image.load (os.path.join ("./","tux.png"))
        self.ghost = Ghost()
        self.backbuffer = pygame.Surface (self.size)
        self.force = 5

    def set_rect(self,position):
        self.rect = pygame.Rect(position[0], position[1], 128, 128)

    def interact_tux(self):
        if (pygame.sprite.collide_rect(self,self.ghost)):
            # ghost collides with tux, run away from it
            if self.ghost.rect.left < self.location[0]+64:
                self.location[0] += self.force
            if self.ghost.rect.left > self.location[0]+64:
                self.location[0] -= self.force
            if self.ghost.rect.top < self.location[1]+64:
                self.location[1] += self.force
            if self.ghost.rect.top > self.location[1]+64:
                self.location[1] -= self.force

    def main(self):
        going = True
        old_coord = (0,0)

        while going:
            events = pygame.event.get()
            for e in events:
                if e.type == QUIT or (e.type == KEYDOWN and e.key == K_ESCAPE):
                    # close the camera safely
                    self.cam.stop()
                    going = False

            new_coord = self.get_blob_location()
            if new_coord != (None,None):
                # delta = sum( [(x-y)**2 for (x,y) in zip(new_coord,old_coord)]) # for less fuzziness
                # if delta>200:
                old_coord = new_coord

            self.ghost.set_rect (old_coord)
            self.interact_tux()
            self.set_rect (self.location)
            self.backbuffer.blit(self.snapshot, (0,0))
            self.backbuffer.blit(self.ghost.image, old_coord)
            self.backbuffer.blit(self.image, self.location)
            self.backbuffer = pygame.transform.flip(self.backbuffer,True,False)
            self.display.blit(self.backbuffer, (0,0))
            pygame.display.flip()

if __name__=='__main__':
    t = Tux() # all default params
    t.main()
And that's me doing weird things with tux :D -

A Kalman Filter would be more interesting.
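
A rough idea of what that means: the centroid from get_blob_location jitters from frame to frame, and a Kalman filter would smooth it. Below is a minimal 1-D constant-position sketch (kalman_1d is my own toy helper, and the noise parameters q, r and the sample track are made-up illustration values, not from this project):

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Smooth a noisy 1-D track. q is process noise, r is measurement noise."""
    x, p = measurements[0], 1.0     # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                   # predict: uncertainty grows by process noise
        k = p / (p + r)             # Kalman gain: how much to trust the measurement
        x = x + k * (z - x)         # update the estimate toward the measurement
        p = (1 - k) * p             # the update shrinks the uncertainty
        estimates.append(x)
    return estimates

# e.g. a blob x-coordinate hovering around 160 with camera jitter
track = [160, 158, 163, 161, 157, 162, 160, 159]
smooth = kalman_1d(track)
```

You'd run one filter per coordinate and feed it each new centroid; the same recurrence extends to 2-D with a velocity term.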


Failed attempts at tracking a colored object in OpenCV

Recently, I purchased a webcam for fun. Basically I wanted to have some augmented reality fun while sitting at home, trying to make something interactive, so that I can stand back and maybe control a car with something like a Star Wars projection torch. For now, an empty Old Spice deo can would do, it's totally red.
Time to get some tools of the trade into my bag of tricks. So this lazy pythonista looks out for something pythonic and dead easy.
A little common sense from past experience (SpaceLock) tells him that OpenCV is the way to go. Backing from Intel makes it look even shinier. But surprisingly, he finds this many options - PyOpenCV, ctypes-opencv, the swig based default bindings, and completely newly written bindings in OpenCV 2.0.
Wow, now you're in a mess where every blog, every other newbie tutorial speaks about its own python bindings for opencv. A big mess out there, with some folks on stackoverflow saying "Since the new bindings are incomplete and the old ones are painful to use" - what the hell!

Tried out the face detection code (had to be modified to work with the opencv 2.0 bindings) with haar-like features. It worked well with my face, though it couldn't detect it when I was looking down. I think one needs a bigger data file with all angles of the human face covered to make detection more accurate. So much for 1MB of haar data.

Here is better-formatted code if you have trouble indenting that one -

import sys
import cv

class FaceDetect():
    def __init__(self):
        cv.NamedWindow ("CamShiftDemo", 1)
        device = 0
        self.capture = cv.CaptureFromCAM(device)
        capture_size = (320,200)
        cv.SetCaptureProperty(self.capture, cv.CV_CAP_PROP_FRAME_WIDTH, capture_size[0])
        cv.SetCaptureProperty(self.capture, cv.CV_CAP_PROP_FRAME_HEIGHT, capture_size[1])

    def detect(self):
        cv.CvtColor(self.frame, self.grayscale, cv.CV_RGB2GRAY)

        # equalize histogram
        cv.EqualizeHist(self.grayscale, self.grayscale)

        # detect objects
        faces = cv.HaarDetectObjects(image=self.grayscale, cascade=self.cascade, storage=self.storage, scale_factor=1.2,\
                                     min_neighbors=2, flags=cv.CV_HAAR_DO_CANNY_PRUNING)

        if faces:
            #print 'face detected!'
            for i in faces:
                if i[1] > 10:
                    cv.Circle(self.frame, ((2*i[0][0]+i[0][2])/2,(2*i[0][1]+i[0][3])/2), (i[0][2]+i[0][3])/4, (128, 255, 128), 2, 8, 0)

    def run(self):
        # check if capture device is OK
        if not self.capture:
            print "Error opening capture device"
            sys.exit(1)

        self.frame = cv.QueryFrame(self.capture)
        self.image_size = cv.GetSize(self.frame)

        # create grayscale version
        self.grayscale = cv.CreateImage(self.image_size, 8, 1)

        # create storage
        self.storage = cv.CreateMemStorage(128)
        self.cascade = cv.Load('haarcascade_frontalface_default.xml')

        while 1:
            # do forever
            # capture the current frame
            self.frame = cv.QueryFrame(self.capture)
            if self.frame is None:
                break

            # mirror
            cv.Flip(self.frame, None, 1)

            # face detection
            self.detect()

            # display webcam image
            cv.ShowImage('CamShiftDemo', self.frame)
            # handle events
            k = cv.WaitKey(10)

            if k == 0x1b: # ESC
                print 'ESC pressed. Exiting ...'
                break


if __name__ == "__main__":
    print "Press ESC to exit ..."
    face_detect = FaceDetect()
    face_detect.run()

After some thinking I tried writing a colored object detector. I haven't yet gone into things like filters and thresholding techniques, which I guess are faster than my kiddish approach.
Simply put what I am doing is -
  1. Grab the image
  2. Look for points whose distance from the target color in RGB space is within a particular tolerance
  3. Count the number of these points
  4. If this count is greater than a particular density, show a circle at the mean location of these points
  5. Tune up the constants for your lighting conditions and target color.
  6. Don't crib much, as again this is a "kid's" approach to object tracking :P
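The steps above can be sketched in plain Python on a toy frame, no OpenCV needed (find_color_blob and the 3x3 "image" below are made up purely for illustration):

```python
def find_color_blob(frame, target, tolerance, density):
    """Steps 2-4: collect pixels close to target in RGB space and
    return their mean position, or None if there aren't enough."""
    hits = []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            # squared euclidean distance in RGB space
            dist2 = sum((a - b) ** 2 for a, b in zip(pixel, target))
            if dist2 < tolerance:
                hits.append((x, y))
    if len(hits) >= density:   # enough pixels -> a real object, not noise
        cx = sum(x for x, _ in hits) // len(hits)
        cy = sum(y for _, y in hits) // len(hits)
        return (cx, cy)
    return None

# toy 3x3 frame with a red-ish 2x2 patch in the top-left corner
frame = [[(250, 10, 10), (245, 5, 12), (0, 0, 0)],
         [(248, 8, 9),   (251, 12, 7), (0, 0, 0)],
         [(0, 0, 0),     (0, 0, 0),    (0, 0, 0)]]
blob = find_color_blob(frame, target=(250, 10, 10), tolerance=500, density=3)
# blob is (0, 0) - the centre of the 2x2 patch, floored by integer division
```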
Here is what the failed attempt looks like -

# -*- coding: utf-8 -*-
# A simple capture and draw code
import sys
import cv

class ColorFinder():
    ''' Finds out red objects on webcam '''

    def __init__(self, colors=[], tolerance=500, density=100, step=1, windowName='ColorFinder'):
        # -- CV settings
        self.device = 0
        self.capture_size = (320,240)
        self.windowName = windowName

        # -- Recognition settings
        # Maximum rgb space distance to consider close
        self.tolerance = tolerance
        # how many pixels indicate an object
        self.density = density
        # currently two shades of red, one bright, other dark
        self.colors = colors
        # step is opposite of accuracy, you have to tweak density and tolerance accordingly
        self.step = step

        # -- detection vars
        # mean positions
        self.mean_pos = [0,0]

    def setupCV(self):
        ''' sets up opencv to capture from webcam '''
        cv.NamedWindow (self.windowName, 1)
        self.capture = cv.CaptureFromCAM(self.device)
        cv.SetCaptureProperty(self.capture, cv.CV_CAP_PROP_FRAME_WIDTH, self.capture_size[0])
        cv.SetCaptureProperty(self.capture, cv.CV_CAP_PROP_FRAME_HEIGHT, self.capture_size[1])

        if not self.capture:
            print "Error opening capture device"
            sys.exit(1)

    def distance2(self,source, dest):
        ''' finds square euclidean distance in RGB space '''
        return sum ([ (x-y)**2 for (x,y) in zip(source[:3][::-1],dest[:3]) ])
        # ^^ we just need rgb; the frame pixels come back in BGR order, hence the reverse

    def find_by_steps(self):
        ''' finds colored object by calculating mean position of such colors '''
        mean_pos = [0,0] # reset the mean
        pix_count = 0 # to find density

        x,y = (0,0)
        for x in xrange(0, self.capture_size[0], self.step):
            for y in xrange(0, self.capture_size[1], self.step):
                source = cv.Get2D(self.frame,y,x)
                for color in self.colors:
                    if ( self.distance2(source,color) < self.tolerance ):
                        pix_count += 1
                        mean_pos[0] += x
                        mean_pos[1] += y
        #print pix_count # just use this to tweak your density
        if pix_count>self.density:
            # now we have a good bulk under detection, update mean
            self.mean_pos = [t/pix_count for t in mean_pos]

    def run(self):
        ''' runs a loop to do color detection '''
        self.frame = cv.QueryFrame(self.capture)
        while True:
            self.frame = cv.QueryFrame(self.capture)

            self.find_by_steps()
            cv.Circle(self.frame, tuple(self.mean_pos), 10, (0,255,0), 2, 8, 0)

            cv.ShowImage(self.windowName, self.frame)

            k = cv.WaitKey(10)
            if k == 1048603: # ESC
                print 'ESC pressed. Exiting ...'
                break

if __name__ == '__main__':
    cf = ColorFinder(colors=[(172, 0, 16)], density=8, tolerance=300, step=3) # find me these shades of red
    cf.setupCV()
    cf.run()

Afterthoughts -
  1. OpenCV is good, given the support and backing it is supposed to have. But it isn't straightforward enough for someone without an understanding of filters, etc. to try it out.
  2. There is webcam support in PyGame, which I'm tempted to try out. A good introduction by Nirav Patel. Besides, his blog is full of inspiration for anyone else to try things out.
  3. Documentation of the current OpenCV bindings is insufficient; besides, the presence of just one example for the new bindings and 10+ examples for the old SWIG based bindings leaves newcomers in a dilemma. So does the presence of many other bindings, tools, etc.
  4. My current approach is very slow, hence I'm leaving a 3 pixel gap; besides, this method is also very sensitive to the lighting conditions around your place, so better tweak it before you try it out.
  5. A better approach would be a divide and conquer way of finding the region (maybe an experienced person would disagree, but I'm speaking from a novice's point of view)
  6. Or even better would be filtering out only the reddish components and thresholding them to a black & white image. Afterwards, finding the white portions is not tough.
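That last point can be sketched without any library: keep only pixels where the red channel dominates, producing a black & white mask (red_mask and its threshold values are made-up illustrations, not tuned for any real camera):

```python
def red_mask(frame, min_red=150, max_other=80):
    """Threshold an RGB frame to binary: 1 where red dominates, else 0."""
    return [[1 if (r >= min_red and g <= max_other and b <= max_other) else 0
             for (r, g, b) in row]
            for row in frame]

# 2x2 toy frame: two red-ish pixels in the left column
frame = [[(200, 30, 30), (10, 10, 10)],
         [(180, 60, 20), (90, 200, 90)]]
mask = red_mask(frame)
# mask is [[1, 0], [1, 0]] - finding the white (1) portions is now trivial
```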
T2 starts on Wednesday, I better get back to my books for a while.


Thursday, March 11, 2010


From elsewhere, what I thought...
Similar mind-reducing effects of the current education system are felt here. Hopelessly cramming 6 things in parallel to maintain a cgpa, I lost interest in everything they taught (even the subjects I was looking forward to with great interest). On top of that, bombarded with subjects of no relevance at all to Computer Science, the scene becomes pathetic.

So now I've stopped giving a damn about where the fuck I stand on the stupid scale of 10; going on one's own path has better rewards (things that make you really happy from inside)... so far, it's going great on the recovery track to the endlessly energetic child that once existed within me.

Probably you too should try out the road not taken.

Essentially the current setup does keep you away from the pleasure of finding things out, the pleasure of discovery, the ability to say "We will find a way", without knowing the way yet.


Tuesday, March 09, 2010

I knew all that spec shit was bull shit

"a 'spec' is close to useless. I have _never_ seen a spec that was both big enough to be useful _and_ accurate. And I have seen _lots_ of total crap work that was based on specs. It's _the_ single worst way to write software, because it by definition means that the software was written to match theory, not reality."
-- Linus Torvalds
from Linus on specifications.