Thursday, December 17, 2009
A guy contacted me recently regarding training. He had of course tried to look for local trainers here, but some were either too expensive or just strangely did not reply to emails. So it boiled down to me having to do it myself. My junior programmer (or should I say sysadmin, since he says he wants nothing to do with programming) essentially asked whether one can survive being a Python trainer in Malaysia. Sadly, the answer is still no. Training gigs here are few and far between. After hearing my answer, his face showed the kind of demeanor your neighbor would have if you drove up the driveway in your broken-down Toyota claiming that this is your dream car.
He must have been thinking that Java as a language is still better than Python, since he can at least see some Java jobs on JobStreet and the like. Generally, that is still the perception of Python in Malaysia. Talks do help; we just need more. At least nowadays when I talk to corporates about Python, they don't ask me why I suddenly brought wildlife into the conversation when they asked what would be the ideal programming language for their IT department to adopt for high productivity. I should hit the university level more to create more awareness. I have taught classes for corporates that were full of Python doubters, and at the end of the class all the Java people walked out at least respecting Python for what it can do, and actually contemplating using Python in their environment.
It's really gratifying to see programmers who were initially so skeptical about Python suddenly light up to the possibilities of using it for their work after being in a Python training with me. That to me is priceless, and I wish I could do more of these kinds of Python gigs.
I think that awareness is the key for us Pythonistas to be able to survive doing the thing we love so much here. If more people here knew about the existence of Python, it would really help, and then I could finally tell that doubting junior system administrator ... "Yes! There is a future in being a Python trainer!"
Thursday, December 3, 2009
Project Documentation
Currently I am in the process of passing down my project to some juniors, and I find the need to document it. Being the lazy-ass programmer that I am, I obviously started to look for something that would let me complete my documentation in as few steps and in as painless a way as possible. I found two main applications that I am using now:
1. Sphinx -> For main documenting work.
2. Happydoc -> For parsing and generating code documentation.
Both are quite easy to use and allow documentation to be generated in various formats, including LaTeX, HTML and PDF. The nice thing is that both support reStructuredText formatting, which I just absolutely adore and which is great to work with. I am especially impressed with Sphinx and the beautiful documents it generates, complete with its own search and everything. It's just what the lazy programmer's doctor ordered. I am still evaluating both of these programs and am open to other choices, but so far these fit the bill nicely.
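To give an idea of the workflow, here is a minimal sketch (module and function names are hypothetical) of the kind of reStructuredText docstrings that Sphinx's autodoc extension can pull straight into the generated docs, search and all:

# billing.py -- a hypothetical module documented for Sphinx autodoc
def transfer_funds(source, target, amount):
    """Move ``amount`` cents from ``source`` to ``target``.

    :param source: account code to debit
    :param target: account code to credit
    :param amount: value to move, in cents
    :returns: True on success
    """
    return True

With autodoc enabled in conf.py, a single '.. automodule:: billing' directive with ':members:' in an .rst file renders the whole module.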
Wednesday, November 11, 2009
django-evolution aka orm pain part 2
Finally bit the bullet and tried out a new application to help me make the changes I needed on the db, and boy am I glad I did. django-evolution is a joy to use. It's easy to install, with no strange dependencies, and it did everything out of the box as described. Initially some things did trip me up, such as the fact that django-evolution's initialization only looks at models.py and does not actually bother with what is really in the db. Say your models.py defines a field that is not actually in the db during the first 'python manage.py syncdb' that creates the django-evolution tables: it does not know that, and will just carry on unaware that there are discrepancies between models.py and the actual db.
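For the record, here is roughly what "just add the fields" looks like; a minimal sketch assuming a hypothetical Customer model in an app called crm:

# crm/evolutions/add_phone_field.py -- a hypothetical evolution
from django.db import models
from django_evolution.mutations import AddField

MUTATIONS = [
    # nullable, so existing rows stay valid after the change
    AddField('Customer', 'phone', models.CharField, max_length=20, null=True),
]

# crm/evolutions/__init__.py
SEQUENCE = ['add_phone_field']

If I remember the flags right, 'python manage.py evolve --hint' then shows the suggested change, and adding '--execute' applies it to the db.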
After I figured that part out, it was easy. Just add the fields and their attributes, re-run an update script on my data, and everything is done. Now my selects are much faster. Initially a 3k-row db select was taking 50s; now, using the caching functionality of select_related(), it comes back in a blazing 4s. Ah, here too I initially got tripped up a bit: after putting in all the relationships I noticed it still did not cache via select_related(), until I did some digging and found that columns defined with 'null=True' are not followed by a plain select_related(). Looking at the actual SQL generated by the Django ORM, it looks proper and very well done. I am still checking out the little nooks and crannies of the application, but so far I give django-evolution the thumbs up! I am actually looking at the code to see if, during the initial syncdb, django-evolution could peer into the db to check that the definitions are actually kosher before proceeding.
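The null=True gotcha in code, as a sketch with hypothetical models; a bare select_related() silently skips nullable foreign keys, while naming the field explicitly follows it:

from django.db import models

class Department(models.Model):
    name = models.CharField(max_length=100)

class Employee(models.Model):
    name = models.CharField(max_length=100)
    # nullable FK: skipped by a bare select_related()
    department = models.ForeignKey(Department, null=True)

# not cached: Employee.objects.select_related()
# cached, one JOINed query:
for e in Employee.objects.select_related('department'):
    print e.name, e.department  # no extra db hit per row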
Monday, November 9, 2009
When you play outside of django's ORM ... you get pain!
For one of my projects, I forgot to implement the relationship on one of my models. I thought I could get away with it by writing a tag that made a query to the db upon being fed a string in the report. To my horror, this caused the report to take about 50 seconds to generate for a paltry 3000-record db! Upon tailing query.log, I found that the multitude of queries to the db was causing the problem. It was basically hitting the db about 3000+ times for a single-page report. That is when I found out about the good thing that is Django's select_related(). The problem now is that I do not have that relationship and need to build it into the db. Not exactly fun with a db that is already populated with data. It led me to thinking: while Django's ORM might be great for stuff like pulling data from another table, does it also incur a performance hit every time data is queried? I have heard of Django being used for high-traffic sites, but then are these really high-traffic sites?
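For anyone tailing their own query.log, the anti-pattern and the fix look roughly like this sketch (Order and customer are hypothetical names for a model with a foreign key):

# 3000 rows, 3001 queries: each attribute access hits the db again
for order in Order.objects.all():
    print order.customer.name

# one JOINed query for the whole report
for order in Order.objects.select_related('customer'):
    print order.customer.name  # already cached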
I tried looking around, but I could not find a satisfactory answer to this: what if you wanted to model data for an existing MyISAM table which does not have the relationship? It would be nice if Django provided a mechanism to recreate the relationships between MyISAM tables, implemented at the middleware level. Say I were to go the Django way now: would I have to dump out the data, re-implement the tables in a 'Django-ic' model, then reimport my data?
Currently I am feeling that in order to play in the Django ORM's park, I am being forced to play by its rules. If I opt to write my SQL raw, then the ORM takes back its ball and refuses to let me even venture into its side of the park, and I am left to recreate a lot of the convenient functionality that the ORM layer provided. There doesn't seem to be a nice middle ground, and that just sucks.
Friday, September 18, 2009
Patching Fun - Update
Looking around for a cleaner way of patching up my code on the Windows server without installing the unixtools for Windows, I eventually arrived at the python-patch tool. This nice little (less than 600 lines) tool gives you patch-like capability, all deliciously packed in a single Python script. Thanks techtonik!
So now I can just pack up the patch.py Python script together with my patch file, and voila! The admin at the other side can just run something like:
python patch.py diff_patch
to patch up my code. With a little modification, patch.py can even generate a results.log for me to review the patch process. If anything were to go south, I can just send the reverse patch.
Now .... should I just get the script to autorun .... hmmm...
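For the curious, the "little modification" does not even have to touch patch.py; a minimal sketch (file names are just my own) that wraps the same command and captures everything into results.log:

import subprocess

# run python-patch exactly as the admin would, but keep the output
log = open('results.log', 'w')
status = subprocess.call(['python', 'patch.py', 'diff_patch'],
                         stdout=log, stderr=subprocess.STDOUT)
log.close()

print 'patch %s -- see results.log' % ('OK' if status == 0 else 'FAILED')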
Labels:
python
Thursday, September 17, 2009
Patching fun (on windows) 101
I have sent about a hundred-odd patches to a system I am working on, and in this post I gather some of the lessons I have picked up along the way. I write all of my patches in Python, and after many hair-pulling sessions over the phone with customers, I am happy to say the sacrificed hair did help me have an epiphany about the best way to help along customers who are willing to apply patches for us. You can say it's a collection of what to do and what not to do, tested in real-life project scenarios.
First off, and of ultimate importance: get into the right frame of mind before you write a patch to be sent to a customer. The frame of mind you should be in is ... remember that paying customers never ever want to help you apply patches, so if you run across one that is willing (like in my case), thank Allah, Buddha, the Universe or whatever deity you pray to for your lucky break! Your (paying) customer does not owe you anything; in fact, you are in their debt if they even entertain your patch request. Get in this frame and your chances of writing a successful patch will increase twofold.
Okay, here are some dos for writing patches. The steps below are for systems that are not accessible remotely and a bit too far away or physically demanding for you to travel to.
1. Always recreate (as much as possible) the environment in which the patch will run. This means that if your customer's site has a G drive (Windows), the machine on which you are creating your patch had better have that too. I have had a great time tracking down why a patch did not work, when it had magically died trying to open a log file on drive G on a machine that only has C and D drives. In the case of Windows, this includes using the same builds of the binary libraries your code depends on. Just recently I had a good time on the phone with a customer who claimed that some functionality was not working when we were both using the same script! He insisted that my script was not working, while I kept on suspecting that the problem lay between the computer and the chair. Finally I found the problem was due to an outdated binary build he was using. If having the same build is not possible, at least go through the changelog of whatever library you are using to be prepared for whatever problems might arise.
2. Always have backups at each of your patching steps. In fact, it's best to include a sort of un-patch patch, just in case your patch goes south. The steps to back up the important files should be done by your script and should not depend on your customer. Something like "Please backup and replace file bla-bla" is not recommended, because how do you know what your customer is backing the original file up to? Here you are, thinking they backed it up to something like .bak, when they actually backed it up to something else entirely. A lot of fun will ensue if you have to write a revert script later. Step no. 2 can be summarized as: your customer should do as little as possible in the patching process, short of running a patch script. (See the sketch after this list.)
3. Generate error and result logs for your patch process, and ask the goodwill of your customers to send those logs back to you. This will help in determining whether your patch process was successful.
4. Describe in your email what the customer should see in the event that your script runs successfully. I find the quickest and surest way here is to include a screenshot of what they should see. Typing out the results is error-prone, it's hard to describe what they should see, and do you really think your customers have that much time to read a long drawn-out email?
5. Test, test, test and test each of your steps at least 4 times before sending the patch out. No amount of testing is too much in this kind of scenario, especially if you are dealing with live data.
6. Always include a way to identify what build your customers are currently running. This could be as simple as a BUILD text file indicating the build they are currently on or, in the case of a web application, a build number nicely tucked away in the top left- or right-hand corner of the screen. Every one of your patches should update this BUILD number. This is instrumental when you are on the phone with your customer and want them to help you identify how up to date their source code is.
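Putting points 2, 3 and 6 together, the skeleton I have in mind looks something like this sketch (paths and the build scheme are just examples):

import shutil
import logging

logging.basicConfig(filename='results.log', level=logging.INFO)

FILES_TO_PATCH = ['app/views.py']  # hypothetical target files
NEW_BUILD = '0102'

try:
    for path in FILES_TO_PATCH:
        # the script does the backup, not the customer (point 2)
        shutil.copy2(path, path + '.pre_' + NEW_BUILD)
        logging.info('backed up %s', path)
        # ... apply the actual change to path here ...
    open('BUILD', 'w').write(NEW_BUILD)  # bump the build marker (point 6)
    logging.info('patched to build %s', NEW_BUILD)
except Exception:
    logging.exception('patch failed -- please send results.log back')
    raise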
Hopefully this post will help me remember the things I did right and those I painfully learned not to do again. It would be great too if it helps you avoid the potholes that have claimed the lives of a few of the hairs on my head. Happy patching!
Monday, September 7, 2009
import scripts using python
Today I finished another import script using Python. The import medium was from Excel to a MySQL db. I got my feet wet with Python doing data migrations for systems, and I have used it for my importing exploits ever since; after writing so many import scripts with Python as my main language, I could not think of using anything else. While some other people might be using Bash, some of the reasons I like Python for this task are:
1. The wealth of libraries available. I particularly like the csv module, which I use heavily in my imports; the DictReader class in the csv module also makes short work of a lot of the grind during imports. I just cannot imagine using something else. (See the sketch after this list.)
2. Dicts, lists and tuples are indispensable and great little tools for sifting through data.
3. Python's clean and structured code allows me to pass the script on to juniors, or to look back at the import script 3 months down the line and either use it at another location or improve it.
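To illustrate, a stripped-down sketch of the shape these scripts usually take for me (table, column and file names are made up; the Excel sheet is assumed to be saved out as CSV first):

import csv
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='importer',
                       passwd='secret', db='inventory')  # hypothetical credentials
cursor = conn.cursor()

# DictReader keys each row by the header line, so the code stays readable
for row in csv.DictReader(open('stock.csv', 'rb')):
    cursor.execute("INSERT INTO stock (sku, qty) VALUES (%s, %s)",
                   (row['sku'], int(row['qty'])))

conn.commit()
conn.close()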
Then the question is: why don't more people write import scripts in Python? Well, I can think of a few cons when it comes to using Python for import scripts, and these are:
1. On old-ass machines such as old-style Solaris or HP/UX boxes, Python is just not installed by default. While you can install a copy of Python on these boxes, sometimes you do not have that liberty; besides, the first rule of any importing job is not to add more applications to the system you are trying to import data into.
Hmm, that is the only valid reason I can think of why using Python would not give you an edge. I am using Python to this day to import all my data and have not faced any issues yet. How is your experience?
Labels:
python
Sunday, August 30, 2009
My Personal take on Python 3.1 on Windows
Downloaded and installed Python 3.1 on Windows to kick the tyres a bit on the new version of Python. Currently I am getting used to one of the biggest changes: print is now a function. This means that stuff like this:
print "testing"
will not work anymore. This will now give you an error :
The help function has also been made a function. This means that ....
help will not work too and will just end up ...
Now you will have to put up the parentheses like so ... help(print)
The map function will also not work like it used to. I am still trying to find out how it works.
These means too most of my code would be borked .. sigh! Stuff that I have been using for my client's code like Django and Zope will need to be updated too. This is all I have explored for now. Will write more as I encounter more.
Update: I updated the title a bit after the first comment. Tsk tsk some people can so be so uptight these days.
print "testing"
will not work anymore, and will now give you an error:
>>> print "testing"
File "", line 1
print "testing"
^
SyntaxError: invalid syntax
The help function has also been made a function. This means that calling help the old way will not work too and will just end up ...
>>> help print()
  File "<stdin>", line 1
    help print()
             ^
SyntaxError: invalid syntax
Now you will have to put in the parentheses, like so ... help(print)
The map function will also not work like it used to. I am still trying to find out how it works.
>>> def print_me(line):
... print (line)
...
>>> print_me("stuff")
stuff
>>> a = ["print","me","now"]
>>> map(print_me,a)
map object at 0x01765CD0
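From what I have dug up since: map() in 3.x returns a lazy iterator instead of a list, so you have to force it, for example with list(), to get the old behavior back:

>>> list(map(print_me, a))
print
me
now
[None, None, None]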
This means too that most of my code would be borked .. sigh! Stuff that I have been using for my clients' work, like Django and Zope, will need to be updated as well. This is all I have explored for now. Will write more as I encounter more.
Update: I updated the title a bit after the first comment. Tsk tsk, some people can be so uptight these days.
Labels:
python
Monday, July 27, 2009
Django 1.1
Django 1.1 is nearing release, and from what I can see of the new features, the two most exciting ones (to me at least) are improvements on the models end, namely "managed" models and "proxy" models. Managed models would be good for existing tables that should not be managed by Django. This is useful in some of my projects, in which I am using Django for database work but do not necessarily need Django to manage the tables. Now if only the database evolution or database versioning stuff (http://paltman.com/2008/jul/03/managing-database-changes-in-django/) could be integrated into the core; now that would be superb!
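A quick sketch of the two features as I understand them from the release notes (model and table names are hypothetical):

from django.db import models

class LegacyInvoice(models.Model):
    number = models.CharField(max_length=20)

    class Meta:
        managed = False              # syncdb will not create or drop this table
        db_table = 'legacy_invoice'  # maps onto the existing table

class InvoiceProxy(LegacyInvoice):
    class Meta:
        proxy = True  # same table, just different Python-side behavior

    def __unicode__(self):
        return u'Invoice %s' % self.number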
Talking about migration too: is this project, http://code.google.com/p/dmigrate/, still alive? It doesn't look like there is anything downloadable. The examples are half of what I need; what I further need is the ability to undo db migration changes between revisions.
Labels:
django
Saturday, June 20, 2009
ModelAdmin readonly fields
Finalizing my web application, there were a few outstanding patches left to do. One of them concerned something I always found strange, and I wonder why it was left out in the open: Django's admin interface form by default allows you to edit the primary key of the model. This led to unexpected behavior. My users would edit the primary key expecting the changes to be made on the record, which of course did not happen, as Django created a new object with the newly edited id.
In solving this bug I was hoping that Django's admin model allowed for something like 'readonly = ('id_field',)' or something to that effect. Just thinking out loud, but shouldn't this be done automatically for all primary key fields in models.py? Shouldn't the primary key be made non-editable by default unless explicitly stated in the definition? Even then, should the primary key be allowed to be edited at all? If a user were allowed to edit the primary key, would it really be editing, or 'cloning' a new object with similar data but a differing id? I haven't really done a thorough search yet, but I think by default all fields are open to editing in the current Django ModelAdmin. Of course you can hide the field away by chucking it in the exclude tuple in your model's admin declaration, but what if the user wants to see that field? I was actually quite surprised when a declaration of fields that should be readonly in my ModelAdmin did nothing. That was the first way I thought of doing it, anyway.
The best solution would be a new tuple called readonly that declares fields that are read-only in the change_form. To cut a long story short, I found someone had already created a snippet for it here. It works perfectly, and it's exactly what I was looking for. The questions that remain, however, are:
1. Why isn't this in the core?
2. Should you actually allow primary key fields to be edited ?!
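For flavour, the general idea behind such snippets is something like this minimal sketch (app, model and field names are hypothetical); note that it only marks the HTML input readonly rather than enforcing anything server-side, which is exactly why a real core feature would be nicer:

from django import forms
from django.contrib import admin
from myapp.models import Item  # hypothetical model with id_field as its primary key

class ItemAdminForm(forms.ModelForm):
    class Meta:
        model = Item

    def __init__(self, *args, **kwargs):
        super(ItemAdminForm, self).__init__(*args, **kwargs)
        # browser-level readonly on the primary key input
        self.fields['id_field'].widget.attrs['readonly'] = 'readonly'

class ItemAdmin(admin.ModelAdmin):
    form = ItemAdminForm

admin.site.register(Item, ItemAdmin)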
Labels:
django
Tuesday, June 16, 2009
Ownership and responsibility
One thing that most smaller companies, or any company for that matter, really treasure is employees who take ownership of what they do. This is because employees who take ownership will often go the extra mile for a project, because they feel that they own it. In my opinion this is hard to create in an employer-employee based company because of the inherent weakness of that structure. Employees are normally paid monthly. Whether they take on more or fewer projects, their salary will still come in, so the less they do the better off they are. Taking on responsibility for a project will only give them more headaches and more to answer for.
The boss who wants to create this kind of situation must be brave enough to take the leap into a somewhat uncharted region: profit share. You want partners? Start treating your employees like partners, and I am sure they will rise to the challenge. Trust them and allow them to make mistakes. Mistakes are the single most important asset for a small rising star. They are your mistakes, and they are there to show you the way. Learn from them so you stop repeating them. Mistakes are good. Expect to make many of them; the more you make, the higher the chances of success. You know that in the beginning your staff will screw up; accept their work anyway, making as few modifications as possible. Asking your staff to do a 10-page writeup only to use the last 2 sentences in the end is a killer for morale and will create "I-don't-give-a-shit" employees. Let them make mistakes and take a chance on them. They will appreciate it.
Follow through on your words. Expecting people to own a project without any compensation or carrot will just create a situation where your people feel exploited and your words sound hollow. Who in their right mind wants to take on extra work for nothing? Bosses or company owners who realize this early will be able to tap into the entrepreneurial side of their employees and create a situation where employees naturally take on extra responsibilities without fuss. As the owner or the boss, your main job is to step out of the way of your people as they do their work, offering your support along the way. At the end of the day, if your people feel that they have achieved something with the least possible help from you, then you have succeeded in creating people who take ownership of projects, and you in turn will have gained leverage.
Labels:
general
Thursday, June 11, 2009
Google Docs Office killer Not?!
Those who know me a bit might know that I sell Google Apps as one of my many business interests. Lately, however, I find that I cannot seriously sell Google Docs and expect customers to use it to replace Microsoft Office. Google Docs might just be good for doing simple documents and nothing more than that.
I say that for a few reasons:
1. Documents cannot exceed 50k. Now if an organization were to depend on Google Docs entirely, how is it to do any serious document work with this limitation?
2. For simple formatting, Google Docs is superb. In fact I like it so much I tried living off it exclusively for a little while, until I found that for real work, like doing quotes and invoicing, you had to know HTML to edit the templates. Now how can you expect a company's secretary or admin staff to know how to edit HTML?! Fail! Complete utter fail!
3. There is no way to password-protect a document easily. Now, I do not need comments pointing me to a webpage detailing the hows of doing this across a few pages of hacking. The point here is that it cannot be done by customers, so it's a fail.
Until these issues are solved, or at least worked around, Google shouldn't even expect to be able to dislodge Microsoft Office. In fact, looking at the technical challenges that have to be surmounted, I am beginning to think that it might not even be worth the fight.
As for now, I will continue to sell my services configuring Google Apps, but I will concentrate more on pushing the email and the calendaring.
Labels:
general
Wednesday, June 10, 2009
Django CRM?
A few of our customers have come asking for some CRM functionality from us. Quite a few, actually. These are not full-blown SugarCRM kinds of requirements; actually, they are very simple bits of functionality. Looking at these requirements, it made me think that going for something like Sugar would be like taking three steps back to gain one step forward. I would have to spend my time editing out functionality, and it would be a dumbing-down process all over again. Rather than go that route, I thought it would be nice to use Django to build this for them.
Looking around, here are some of the links I found relating Django to CRMs:
http://www.tschitschereengreen.com/blog/index.php/2007/10/15/django-based-crm/
http://code.google.com/p/django-projects-crm/
https://launchpad.net/django-cream/+download
Seems that most of them are either abandoned or not very usable. Sigh ...
That's about all I found for Django CRM. The rest of the solutions are too much of an overkill in terms of hacking time or completeness. Okay, now my favorite part ... back to the Django drawing board.
Tuesday, June 2, 2009
Django talk
Just finished my Django talk yesterday, and I must say that while a few dudes and dudettes walked out, there was generally quite good interest (better than I thought) from a few young people in the crowd. Initial talks are also underway to form a PUG, or Python User Group, locally here in Malaysia. I would really like to see this happen. The response has been quite good; in fact, I would go so far as to push to include Python in some programming curricula in our universities.
That is the only way I see Python growing and getting more acceptance. On that note, I also mentioned during my talk that for frameworks like RoR, Django, TG and their like to gain more acceptance in the industry, more articles and howtos concentrating on enterprise-level deployments of these frameworks have got to be available.
Something like: http://enterprise.djangoproject.com?
I would really like to see Django grow from its current girth to something that could rival all the J2EE stuff out there.
Nice link here to some good Django slides ... damn, could have used those for my talk :).
P/S -Thanks to all those readers who came to the talk :)
Sunday, May 17, 2009
Review of Django 1.0 Template Development

Initially, when Packt Publishing sent this ebook, Django 1.0 Template Development by Scott Newman, to review, I did what any self-respecting Django entrepreneur with 3 projects down his pants would do. I skimmed through the book to the code sections, read through the code, and read about 10 lines after that. The blink I obtained reading the book this way was not too good, and I felt the explanation in certain sections was lacking.
BAD mistake! This book SHOULD NOT, and I repeat SHOULD NOT, be read that way. After I hunkered down and read the book cover to cover, it was then that I really began to like it. So now, Django projects later, I wish I had read the book the right way the first time, because it has given me so many ideas to improve my Django code. So my advice: get this book, read it cover to cover, and then revisit, bookmark or underline the sections you will come back to later. This, I found, was the way to get the maximum benefit out of this great Django reference. I really like reading technical books that know how to put forth advanced concepts without sounding too technical. Good programmers do not automatically make good teachers, and the ones that do are a rare breed. On that note, I loved the simple way in which this book is presented. So, my kudos to Scott for making this an easily understandable read.
This is a book for Django developers who already have the basics and have been playing around with the Django examples from the website. The writing is to the point and easy to understand, and practical examples are littered around to help Django developers get their heads around what is being discussed. A perfect book to have by your side to try out code as you read. While the examples are simple, you could easily incorporate them into a real project.
My guess is that this book was not really meant for the total novice to Django. It is meant for the mid-level Django developer who wants an introduction to some deeper concepts in Django, especially those who want to find out more about Django's templates. That being said, however, some sections seem to be crying out for more explanation.
The urls.py chapter has a very useful section on splitting up the urls.py file for bigger projects. I really like this! It shows a bit of the whys and hows of proper Django deployment that is not usually covered in the normal docs and mostly has to be scraped off mailing lists. The section about views is covered briefly, but it does contain a few useful examples and explains the concepts of requests and responses well, using a few of Django's built-in functions. What would have made this part of the book exceptional is if the author had covered best practices for deploying enterprise Django applications, such as splitting views.py across apps to avoid it getting too big and unwieldy. The generic views section gives a good and solid explanation of one of Django's more powerful pieces of functionality, generic views. Middling Django developers should find this section of the book a useful reference as well as a guide for their projects.
The next chapter is all about Context objects and how to use them in Django. I like the examples here, and they work well to explain the concepts. What the author could have done to solidify them would probably be to include an example of how a normal call to an object would be done versus the method using the Context object. The examples in this section are simply superb! They assume minimal knowledge at the beginning and build up to cover some of the more complex topics in Django without ever letting the example get too heady. One thing I learnt here is the alters_data attribute, which prevents a method that modifies data from being called from a template. The book serves as an introduction to this functionality, but I would have liked a bit more explanation of how this attribute fits together with the save and delete methods. A bit more explanation of what kind of data is carried in the request object would be nice here too. Scott teaches us to write our own context processor, with a simple example that returns the user's IP address.
In my opinion, Django's tags and filters are most of the time underestimated next to some other, more powerful templating engines, for example ZPT in Zope, but understanding them will make you appreciate their power and how much you can really do. I rely on these guys most of the time in my projects, and they are great to use. Here Scott goes through most of the tags and filters that you would probably use in your project life, and more. If you ever wanted to understand the difference between tags and filters, read the book! I understand now, and he does it in a one-liner! The next chapter is equally good, focusing on loading and inheriting templates. Don't blast through this chapter thinking it's not important (I was tempted to); Scott covers one or two things you might not have known about your settings.py. I learnt a thing or two here, namely that you can even get away with distributing your template files as eggs! Template inheritance in Django is another one of my favourites, and Scott does a good job covering the basics using easy-to-understand diagrams, much like how I would explain the concept of template inheritance to a template newbie. Read it through, though, as Scott delves a bit deeper into templates with a few advanced concepts. The concept of multiple templates is covered next, and the simple examples covered here are so easy to understand I could not wait to just try them out.
In the custom filter section, Scott offers a simple example, and the nice thing here is that he slowly builds in advanced concepts without getting too Django-technical. The example is easy to follow, and beginners will easily pick up filter-writing skills. One issue I ran into when trying out the examples is that this section does not cover some of the strange error messages that template filters shoot out if you do not follow the syntax exactly. On that note, this book could do well with some examples showing the errors that might be encountered.
One of Django's "killer apps" is the admin interface. With just a one-liner you get a beautiful web interface to edit your objects, with all the forms and validation thrown in. While this is all good, the templates for the admin interface take some digging into and some time to understand. I used them for one of my projects, and let's just say one or two times I had to think hard just to get a column to align. Doing some wishful thinking, I wish I had a DeLorean so that I could take a copy of this ebook and pass it to my head-scratching self in the past. That would have really sped things up and made life easier for me, as Scott does a superb job here of explaining the intricacies of the admin templating system.
On the whole, I would gladly add this book to my collection of Django books as a good source of reference. While it does not cover all aspects of Django, the sections it does cover, it covers thoroughly, and I find it very useful in satisfying my curiosity to find out more about Django's templating system, filters and tags. The information it presents is very practical and good to apply in projects.
Tuesday, May 12, 2009
Unexpected behavior of python's ConfigParser
While working on some Python code today, I ran into some scoping issues involving ConfigParser. What made finding and fixing the issue in the code harder was an unexpected behavior (on my end at least).
The situation is like this. My script relies on a configuration file that resides in the same directory as the script; for example's sake, let's call this file example.conf. During the execution of my script, however, the working directory is no longer my script's main directory, so when it tries to find example.conf again, it fails. This, however, is the part that put me off: the error coming from ConfigParser is not really descriptive.
Take this example:
>>> from ConfigParser import SafeConfigParser
>>> config_file = 'does_not_exist_stuff.conf'
>>> config_parser = SafeConfigParser()
>>> config_parser.read(config_file)
[] <------------- shouldn't this part shoot out some error messages?
>>> config_parser.get('config1','test_config')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.6/ConfigParser.py", line 531, in get
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'config1'
The thing I am trying to say here is: when I feed SafeConfigParser a config file that does not exist, should it complain? I understand that the onus should be on my code to check whether or not the config file exists, but an error saying "No section:" does not really help in finding the problem fast. I am writing this post hoping someone will chip in and show me whether the way I am using ConfigParser is wrong; if not, I am considering writing a patch for ConfigParser.
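In the meantime, the workaround I have settled on is to check the return value of read() myself (it returns the list of files it actually managed to parse) and to resolve the path relative to the script instead of the working directory. A minimal sketch:

import os
from ConfigParser import SafeConfigParser

# resolve relative to the script, not the current working directory
config_file = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                           'example.conf')

parser = SafeConfigParser()
if not parser.read(config_file):  # empty list means nothing was read
    raise IOError("cannot read config file: %s" % config_file)

value = parser.get('config1', 'test_config')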
On the other hand, I found another Python configuration parser: http://pyfig.alecwh.com/. Going to look at this later on.
Labels:
python
Saturday, April 11, 2009
Sum for the range of a column in Excel
Working on my current project, a requirement came up which made me all the more appreciative of Django's nice array of built-in filters. I produce an Excel report for my clients, which is generated from HTML. Good and nice. Then one of them said: hey, can the Excel report have a row which outputs the sum of a whole column, say column E? Something usual and easy in Excel, but it stumped me for a while how to do it going from HTML to Excel. For the most part, it was trying to figure out how to include that SUM formula in my HTML code.
A little visit to the M$ support center got me the help I needed, and then it was back to the nice plethora of filters Django has .... Turns out that you can include a formula in a cell (or a table column in this case) by simply putting something like '=SUM(E2:E70)' just after the opening tags for your column. This part actually took the longest for me, as I did not really know how to do it correctly. My last row in this case is dynamic, because it depends on the number of rows returned by my query. So all I did next was some little magicks like this ...
'=SUM(E2:E{{ query.count|add:1 }})', which takes the count of the results and adds one to it, since the data only starts at row 2 (row 1 being the header), so the last data row is count plus one. Just something very simple. Don't really want to tax the brain during the weekend, y'know ... anyway, just a small thank-you to Django for a really great toolbox of useful built-in filters. You really made this post shorter than it ought to have been :).
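In context, the footer row of the report template ends up looking something like this sketch (the surrounding columns are hypothetical); Excel evaluates the formula like any other cell when it opens the generated HTML:

<tr>
  <td colspan="4">Total</td>
  <td>=SUM(E2:E{{ query.count|add:1 }})</td>
</tr>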
Labels:
django
Thursday, March 19, 2009
Another strange django unicode error solved!
Today, while trying to deliver my final product to one of our customers, an ugly little error shot back at me when I tried editing any of the imported records in the database. "django ascii codec can't encode ..." bla bla, it would complain, and refuse to render the said template. Tracking the problem down, I found it in the value being returned by the __str__ method of one of my models.
After changing the database encoding for my MySQL database (sucky on windoze), it still stubbornly spat the same error back at me. Finally, after searching around some more, I found the smart_str method's usage here.
Kudos! It works now. So instead of returning the normal 'str(self.name)' I do a 'smart_str(self.name)', and everything is back to its hunky-doriness!
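For anyone hitting the same wall, the whole fix amounts to something like this sketch (model and field names are made up):

from django.db import models
from django.utils.encoding import smart_str

class Customer(models.Model):
    name = models.CharField(max_length=100)

    def __str__(self):
        # smart_str encodes unicode safely instead of choking on non-ascii
        return smart_str(self.name)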
Monday, January 26, 2009
Multi row transaction capabilities
The project in which I wrote a Python script to process rows in an Excel spreadsheet, interfacing with a MySQL db and fronted on the web end by Django, is coming to a close. There is just one more requirement to go. They want rollback capabilities: the ability to completely roll back all actions for a particular Excel file if one of the rows in the file buffs up or errors out. This is so that they can correct just that one row in the Excel file, then feed the same file to my Python processing script without worrying about the sanctity of the data in the rest of the rows.
This means that I have to somehow implement multi-row rollback functionality, or an application-level transaction. Python's MySQLdb library supports rollback, but only within each transaction. I need "all or nothing" functionality. This is something new for me, so I scouted around on Google for some inspiration, and to see if someone else had done the same thing. I found this small transaction class by Harald Hoyer at his site. Planning to try out his stuff to see if it can fulfill my requirements. Instinctively, I feel the final solution I come up with will be a bit different.
The only other way I can see this happening is to maintain another database table keeping track of all the actions performed at the row level on the input Excel forms, e.g. row1: insert, row2: update. Then, if the script detects an error or exception while processing the form, it will perform the reciprocal of, or roll back, all the actions that have been performed on the data from the Excel file. So far, whatever I have is still being worked out in the noggin', coming close to being implemented.
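For what it's worth, if the tables could live on a transactional engine like InnoDB (no such luck for MyISAM), the simplest shape of the "all or nothing" idea would be to hold every row of one file inside a single db transaction; a sketch with made-up table, column and connection details:

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='importer',
                       passwd='secret', db='orders')  # hypothetical credentials
cursor = conn.cursor()

def process_file(rows):
    """Insert every row of one Excel file, or none of them."""
    try:
        for row in rows:
            cursor.execute("INSERT INTO orders (ref, qty) VALUES (%s, %s)",
                           (row['ref'], row['qty']))
        conn.commit()    # the whole file lands at once
    except Exception:
        conn.rollback()  # one bad row undoes everything
        raise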
Update:
Something I seem to have overlooked is that, since I am using Django for the web front, maybe I can also use its ORM at the backend. It all depends on how much work it would be to ORM-ize my existing code base.
Friday, January 23, 2009
Lenovo's ideapad
Today I was tasked by the better half to bring a Lenovo IdeaPad S10 to life. My date with the IdeaPad got off to a rough start when, during the initial filling in of the customary username and password and other stuff, the power died on me. From there on the puppy was not too happy with me and refused to boot up, slapping me in the face with an "Installation not complete" dialog box and proceeding to reboot every time I clicked on the 'Ok' button or tried to cancel it. Good stuff. Tried the reboot-click-on-dialog cycle about 3 or 4 times before calling yer-olde' friendly tech support dude, who began the conversation with "Don't mean to be offensive..." (I mean, what was I supposed to do?! It doesn't even come with a damned rescue disk!) Ever notice how people always tend to get nasty right after saying that...?! Yeehawe!
Okay, so after a little bit of good-vibes rescue work, generally telling him how manly he sounded on the horn, the guy tells me that there is a rescue button that can reset the baby back to its factory settings ... Ah! So that is what that little non-standard button is fer! The funny thing about this button is that it will only work if the unit is turned off! Tried it during boot up and no way, Jose! The button was, like, non-responsive, just like when you tell your gf _not_ to go MNG just this once for your date. It only worked after I finally hunkered down to read the damn manual! Now, why didn't I just save myself the aggravation and do that from the beginning? Why? I don't know, I just want to be cool like everyone else?!
At this point the lappie is working fine, installed with a printer and a hax0red version of Office 2007. Hey! Don't be barking down this pair of pants asking why not OpenOffice! When the customer is your gf and she ain't in the mood for hearing about your FOSS evangelism, you shut up and nod, man! The customer is always right, doubly so when she is your better half!
On the performance of this little guy I can only say this ... "It's slow and steady". Stuff works, but there is just that little bit of lag before starting up. Hey, give it a break, it's just that little Atom of a processor under the hood, man! I am quite happy with the performance considering that we only forked out RM1,783 with a printer thrown in! I can see the appeal this kind of lappie would have for the ever-on-the-go exec who just wants to occasionally check his/her mail or do the usual proposal on the run. For the rest of us who need that performance juice from our daily machine and do more than that: stay clear!
Well, at least it's been fun and I am still in her good books :)
Labels: general
Sunday, January 11, 2009
Gmail migration script done!
Hello there again! Nothing like a little good olde' scripting to usher in the new year. As most of you will recall, I was working on a script to migrate email from an existing IMAP server onto Google Apps. The only thing that my script needed to do was transform folders into labels. Easy peasy, I thought; surely someone else has encountered this. Boy, was I wrong. I could not for the life of me find anything of the sort out there!
Well, no matter. I proceeded to create my own script from scratch, and after about 2 hours my script could already scan my existing IMAP server, list all the folders and re-create them as labels on my gmail account! It was then that I stumbled across this piece of genius xiao duo so graciously left on the web. Their script was made to migrate emails from an existing gmail account to another gmail account. I found that the script they had complemented mine to a T! Talk about the Law of Attraction!! Great! All I had to do now was to transplant their script onto mine using my vim scalpel, and faster than you can say Fedora 10's release name I had a working super duper gmail migration script!
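The folder-to-label half of the job is surprisingly little code, since Gmail exposes labels as IMAP folders. A rough sketch with plain imaplib (host names and credentials are placeholders, and the folder-name parsing is deliberately crude):

    import imaplib

    src = imaplib.IMAP4_SSL("imap.oldserver.example.com")
    src.login("user", "password")
    dst = imaplib.IMAP4_SSL("imap.gmail.com")
    dst.login("user@yourdomain.com", "password")

    typ, listing = src.list()   # lines like '(\\HasNoChildren) "/" "Work"'
    for line in listing:
        name = line.split('"')[-2]      # crude: last quoted chunk is the name
        if name.upper() != "INBOX":
            dst.create(name)            # creating a folder on Gmail = a label

    src.logout()
    dst.logout()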
Everything works as it should. About the only drawback I noticed with my script was that, with TmNut's great connection, the connection timed out on me a few times, and after restarting, the script started scanning the folders right from the top again. Maddening! This is one of the improvements I have noted down, but for now everything works just fine. I can't attach any code here, so if for some reason you need the script, just drop me a comment or a line here.
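For the restart-from-the-top annoyance, the fix I have in mind is a simple checkpoint file: note each folder as it completes and skip it on the next run. A sketch (the file name and migrate_folder() are made up):

    done = set()
    try:
        done = set(open("migrated_folders.txt").read().splitlines())
    except IOError:
        pass                            # first run: no checkpoint file yet

    for name in folder_names:           # from the listing step above
        if name in done:
            continue                    # already migrated before the timeout
        migrate_folder(name)            # hypothetical: copies the messages over
        open("migrated_folders.txt", "a").write(name + "\n")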
Catch ya laterz.
Labels: python