Wednesday, September 12, 2012

Presenting ... the inupypi server

After using pypiserver: http://pypi.python.org/pypi/pypiserver/ for some time to house our internal eggs and libraries, we (one of my colleagues at least, very very quietly I might add) decided to give back to the Python community by developing our own PyPI server called InuPyPi: http://pypi.python.org/pypi/inupypi/0.2.3, which is based on Flask. Some of the features of our PyPI server are:


  • A nice(r) web interface for packages.
  • Supports multiple eggbasket folders per server.
  • Automatic checking for package updates (planned)
  • One-click updates for outdated packages (planned)
  • A command-line upload tool, something like 'python setup.py upload' (planned; this one is pretty close to being done)
For those who are interested in contributing, the project lives on GitHub: https://github.com/hallaj/inupypi. We would appreciate suggestions and help with our PyPI server.
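
To give a rough feel for what a Flask-based package index boils down to, here is a minimal sketch. This is not InuPyPi's actual code; the eggbasket paths and routes are made up for illustration. The idea is simply to serve a 'simple' index page per package, plus the package files themselves, out of one or more eggbasket folders:

    import os
    from flask import Flask, abort, send_from_directory

    app = Flask(__name__)
    # Hypothetical eggbasket folders; the real server supports several per instance.
    EGGBASKETS = ['/srv/eggbaskets/internal', '/srv/eggbaskets/thirdparty']

    @app.route('/simple/<package>/')
    def package_index(package):
        # List every distribution file found for this package across all baskets.
        links = []
        for basket in EGGBASKETS:
            folder = os.path.join(basket, package)
            if os.path.isdir(folder):
                links.extend('<a href="/packages/%s/%s">%s</a>' % (package, f, f)
                             for f in sorted(os.listdir(folder)))
        if not links:
            abort(404)
        return '<html><body>%s</body></html>' % '<br/>'.join(links)

    @app.route('/packages/<package>/<filename>')
    def package_file(package, filename):
        # Serve the first matching file found in any basket.
        for basket in EGGBASKETS:
            folder = os.path.join(basket, package)
            if os.path.isfile(os.path.join(folder, filename)):
                return send_from_directory(folder, filename)
        abort(404)

    if __name__ == '__main__':
        app.run(port=8080)

With something like that running, 'pip install -i http://localhost:8080/simple/ somepackage' picks up the internal eggs.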

Tuesday, September 11, 2012

How do you test your Python apps for production?


As usual I am just thinking aloud in this post, mulling over a problem we are having. I haven't really got the answer yet; I am just sharing in the hope that some other smart soul out there might offer their 2 cents. Our apps currently have pretty good test coverage, or so I feel; I would say we cover about 95% of our application functionality. We have unit tests, functional tests and even Selenium tests (we are currently trying out splinter to replace Selenium). We run them all against our test fixtures and use Jenkins to visualize the results and notify us whenever the tests fail. While I think there is still room for improvement, on the whole I am pretty proud of our test architecture.
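
For the curious, the splinter tests look roughly like this. This is only a sketch: the URL, field names and page text are placeholders, not our actual app.

    from splinter import Browser

    def test_login_page_works():
        browser = Browser()  # defaults to the Firefox WebDriver
        try:
            browser.visit('http://testserver.local/login')  # hypothetical test host
            assert browser.is_text_present('Sign in')
            browser.fill('username', 'jenkins')
            browser.fill('password', 'not-a-real-password')
            browser.find_by_css('button[type=submit]').first.click()
            assert browser.is_text_present('Dashboard')
        finally:
            browser.quit()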

We freeze our code and only branch it once all of its tests hit blue on Jenkins. That's all fine. Hunky dory. That's when the iceberg reared its ugly head on our sailing Titanic. The problem is our code failed twice while we were trying to deploy the app (a WSGI app), and we only detected it through manual means, by eyeballing it. One failure was our SSL certs breaking because of this particular bug: http://code.google.com/p/httplib2/issues/detail?id=202#c2. It occurs in httplib2 when using wildcard certs, which we don't use in our testing. The whole WSGI app came crumbling down without any of our tests being any the wiser. I would be okay if these were obscure bugs triggered by some strange use case we had never considered, but these are big-pie-on-your-face, embarrassing show stoppers. What I want is a way to catch show-stopper bugs before deploying our code on production servers.
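
In hindsight, even a tiny post-deploy check that talks to the real HTTPS endpoint with the same httplib2 client our app uses would have caught it. Something along these lines, where the URL is a placeholder and certificate validation is simply left at httplib2's default (on, in 0.7 and later):

    import httplib2

    def check_https_endpoint(url='https://app.example.com/healthcheck'):
        http = httplib2.Http()  # leaves SSL certificate validation switched on
        try:
            response, content = http.request(url, 'GET')
        except httplib2.SSLHandshakeError as exc:
            raise SystemExit('SSL handshake against %s failed: %s' % (url, exc))
        if response.status != 200:
            raise SystemExit('%s returned HTTP %s' % (url, response.status))
        print('%s looks healthy' % url)

    if __name__ == '__main__':
        check_https_endpoint()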

What I am talking about here is almost like a full dress rehearsal before deployment. Setting up a dedicated machine for testing that mirrors the live data seems the most obvious way, but in some ways it is too much of a resource hog in my opinion. From a quick 2-minute bit of research, I gather I am not alone:

This seems like an idea too: http://xunitpatterns.com/Test%20Logic%20in%20Production.html. I think the main thing to move towards is streamlining the test machines to be as close to the live machines as possible, lessening the 'magic' we do on the test servers to make tests pass, and determining which environment variables are crucial to the application deploying well so we can replicate those in our tests.
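
One small, concrete step in that direction might be a test that pins down the settings we consider crucial and refuses to pass if the test environment does not provide them. A sketch only, with made-up setting names:

    import os
    import unittest

    # Hypothetical names for the settings that have bitten us during deploys.
    CRUCIAL_SETTINGS = ['APP_SSL_CERT', 'APP_SSL_KEY', 'APP_DB_URI', 'APP_BASE_URL']

    class EnvironmentParityTest(unittest.TestCase):
        def test_crucial_settings_are_present(self):
            missing = [name for name in CRUCIAL_SETTINGS if not os.environ.get(name)]
            self.assertEqual(missing, [], 'missing settings: %s' % ', '.join(missing))

        def test_base_url_is_https(self):
            # The wildcard-cert failure only shows up over HTTPS, so refuse to
            # run against a plain-HTTP base URL.
            self.assertTrue(os.environ.get('APP_BASE_URL', '').startswith('https://'))

    if __name__ == '__main__':
        unittest.main()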

What I don't like about this solution is that for n live sites with different environments you need n replicated live environments. Another idea I am playing around with is to create a suite of 'critical tests' that is run after an upgrade just to ensure the most critical services are running fine. Ideally these tests should be short, but they should give you a pretty good idea whether the main services will still be running when you go home after an upgrade is performed on the server. The tests should just be a subset of the main battery of tests and should complete in a fraction of the time taken to run the full suite, just like one of those old Windows installers that run a 'self diagnosis' after installation.
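
If we go down that road, the subset could simply be marked inside the existing battery. Assuming pytest (which we may or may not end up using for this), it could look like the following, with placeholder URLs and checks:

    import httplib2
    import pytest

    BASE_URL = 'https://app.example.com'  # placeholder for the freshly upgraded site

    @pytest.mark.critical
    def test_front_page_is_served_over_https():
        # The kind of check that would have caught the wildcard-cert failure.
        response, _ = httplib2.Http().request(BASE_URL + '/', 'GET')
        assert response.status == 200

    @pytest.mark.critical
    def test_login_service_is_up():
        response, _ = httplib2.Http().request(BASE_URL + '/login', 'GET')
        assert response.status == 200

The post-upgrade run then becomes 'py.test -m critical', which finishes in a fraction of the time the full battery needs.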

Monday, September 10, 2012

OpenSuSE 12.2 delivers again

Just upgraded OpenSuSE to 12.2 and I am happy to report the upgrade process was flawless as usual. Stuffed in the DVD, settled some packaging conflicts and I was good to go. Half an hour and a reboot later, 12.2 was smiling back at me. This is by no means a review, just a little tip of the hat to the OpenSuSE guys for making the upgrade process smooth.

I don't think enough is said, or enough accolades are paid, to the hardworking folks who help me retain a few more of my already receding follicles after a major upgrade. Thank you guys! Aside from having to re-install Google Chrome and my codecs (why can't there just be an option to do or enable that during the upgrade process?), nothing much of anything was interesting. Boring. Crickets. That's the way a smooth, great upgrade should be.

I really like the work put into the boot-up screen and the improvements made to zypper. Overall the box at home is chugging along at a somewhat higher speed.

Thank you OpenSuSE. Just the other day it crossed my mind that I have had more than a decade-long affair with OpenSuSE, and it just seems to get stronger from release to release.