Django and working with large database tables
The slides from my presentation for the Django Stockholm Meetup group. Contains a small comparison between MySQL and PostgreSQL in terms of performance when modifying the table structure.
Updated solution, please check below!
Some background: We have a model that is edited only via the Django admin. The save method of the model fires a Celery task to update several other records. The reason for using Celery here is that the number of related objects can be pretty big, and we decided it was best to spawn the update as a background job. We have unit and integration tests, the code was also manually tested, everything looked nice and we deployed it.
On the next day we found out that the code was acting weird. No errors, everything looked like it had worked, but the updated records actually contained the old data from before the change. The Celery task accepts the object ID as an argument and the object is fetched from the database before doing anything else, so the problem was not that we were passing some old state of the object. Then what was going on?
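For reference, the task side looked roughly like this (a sketch; the decorator and names are assumptions, not the exact production code):

from celery import shared_task

@shared_task
def update_related_objects(pk):
    # re-fetch the object so we work with the database state,
    # not with a serialized copy passed from the caller
    obj = MyModel.objects.get(pk=pk)
    # ... update the related records based on obj ...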
Trying to reproduce it: Things got even weirder. The issue was happening only every now and then.
Hm...? Race condition?! Let's take a look at the code:
class MyModel(models.Model):
    def save(self, **kwargs):
        # check the pk before saving: a pk means the object already exists
        is_existing_object = self.pk is not None
        super(MyModel, self).save(**kwargs)
        if is_existing_object:
            update_related_objects.delay(self.pk)
The bigger picture (this is a simplified version of what is happening; for the full one, check Django's source):
def changeform_view(...):
    with transaction.atomic():
        ...
        self.save_model(...)  # here the call to MyModel.save() happens
        self.save_related(...)
        ...
OK, so the save is actually wrapped in a transaction. This explains what is going on: before the transaction is committed, the updated changes are not visible to other connections. This way, when the Celery task is fired, we end up in a race over whether the task will start before or after the transaction is committed. If Celery manages to pick the task up before the transaction is committed, it reads the old state of the object, and there is our error.
Solution (updated): The best solution is to use transaction.on_commit. This way the call to the Celery task will be executed only after the transaction has completed successfully. Also, if you call the method outside of a transaction, the function will be executed immediately, so it will also work if you are saving the model outside the admin. The only downside is that this functionality was added to Django in version 1.9, so it wasn't an option for us. Still, special thanks to Jordan Jambazov for pointing this approach out to me, I'll definitely use it in the future.
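For Django 1.9+ the save method could look something like this (a minimal sketch, under the same assumptions as the code above):

from django.db import models, transaction

class MyModel(models.Model):
    def save(self, **kwargs):
        is_existing_object = self.pk is not None
        super(MyModel, self).save(**kwargs)
        if is_existing_object:
            # the callback fires only if the surrounding transaction commits;
            # outside a transaction it is executed immediately
            transaction.on_commit(lambda: update_related_objects.delay(self.pk))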
Unfortunately we are using Django 1.8, so we picked a quick and ugly fix: we added a 60-second countdown to the task call, giving the transaction enough time to complete. As the call to the task depends on some logic about which properties of the model instance have changed, moving it out of the save method was a problem. Another option could be to pass all the necessary data to the task itself, but we decided that it would make things too complicated.
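The workaround itself is roughly a one-line change:

# instead of update_related_objects.delay(self.pk):
update_related_objects.apply_async(args=(self.pk,), countdown=60)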
However, I am always open to other ideas, so if you have hit this issue before I would like to know how you solved it.
Preface: This is quite a standard problem for apps/websites with low traffic or those using heavy caching and hitting the database quite seldom. Most of the articles you will find on the topic will tell you one thing: change the wait_timeout setting in the database. Unfortunately, in some cases this disconnect occurs much earlier than the expected wait_timeout (which defaults to 8 hours). If you are in one of those cases, keep reading.
This issue haunted our team for weeks. When we first faced it, the project that was raising it was still in the dev phase, so it wasn't critical, but as we got closer to the release date we started to search for a solution. We read several articles and decided that pool_recycle was our solution.
Adding pool_recycle: According to SQLAlchemy's documentation, pool_recycle "causes the pool to recycle connections after the given number of seconds has passed". Nice, so if you recycle the connection at intervals smaller than the wait_timeout, the error above should not appear. Let's try it out:
import time

from sqlalchemy.engine import create_engine

url = 'mysql+pymysql://user:pass@127.0.0.1:3306/db'
engine = create_engine(url, pool_recycle=1).connect()

query = 'SELECT NOW();'

while True:
    print('Q1', engine.execute(query).fetchall())
    engine.execute('SET wait_timeout=2')
    time.sleep(3)
    print('Q2', engine.execute(query).fetchall())
So what does the code do? We create a connection to a local MySQL server and state that it should be recycled every second (the pool_recycle=1 argument). Then, inside the loop, we execute a simple query just to verify that the connection is working.
We set the wait_timeout to 2 seconds and wait for 3. At this stage the connection to the server will time out, but SQLAlchemy should recycle it, so the last query should be executed successfully and the loop should continue.
Unfortunately, the result looks like this:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError)
(2013, 'Lost connection to MySQL server during query') [SQL: 'SELECT NOW();']
Wait, what happened? Why is the connection not recycled?
Solution: Well, as with all such problems, the solution was much simpler than the time it took us to find it suggests (we fought with this for days). The only change that solved it was in how the engine is created:
engine = create_engine(url, pool_recycle=1)
# the result
Q1 [(datetime.datetime(2016, 7, 24, 20, 51, 41),)]
Q2 [(datetime.datetime(2016, 7, 24, 20, 51, 44),)]
Q1 [(datetime.datetime(2016, 7, 24, 20, 51, 44),)]
Q2 [(datetime.datetime(2016, 7, 24, 20, 51, 47),)]
Have you spotted the difference? We are not calling the connect() method any more.
Final words: To be honest, I don't know why this solved the issue. Hopefully someone more familiar with SQLAlchemy will come up with a reasonable explanation for it. The bad part is that the examples in the official docs use connect(), so it is either a bug or bad documentation. I will send this article to SQLAlchemy's Twitter account, so hopefully we will get some input from them. Till then, if any of you has an explanation of this behaviour, I'll be happy to hear it.
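One possible explanation (an assumption on my side, not verified against SQLAlchemy's internals) is that pool_recycle is enforced when a connection is checked out from the pool; by calling connect() once at startup we held a single connection checked out for the whole loop, so the pool never got a chance to recycle it. Under that assumption, checking a connection out per operation should also be safe:

import time

from sqlalchemy.engine import create_engine

url = 'mysql+pymysql://user:pass@127.0.0.1:3306/db'
engine = create_engine(url, pool_recycle=1)

while True:
    # a fresh checkout per operation lets the pool recycle expired connections
    with engine.connect() as conn:
        print(conn.execute('SELECT NOW();').fetchall())
    time.sleep(3)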
In a few words: PyCon Sweden 2015 was awesome. Honestly, this was my first Python conference ever, but I really hope it won't be the last.
Besides the awesome talks and great organisation, it was really nice to spend some time with like-minded people and talk about technology, the universe and everything else. I met some old friends and made some new ones, but let's get back to the talks. Unfortunately I was not able to see all of them, but here is a brief overview of those I saw and found really interesting:
It all started with Ian Ozsvald and his awesome talk about "Data Science Deployed" (slides / video). The most important points here were:
Then Rebecca Meritz talked about "From Explicitness to Convention: A Journey from Django to Rails" (slides / video). Although the title may sound a bit contentious, this was not the usual Django vs Rails talk. At least to me it was more of a comparison between the two frameworks, showing their differences and their weak and strong sides. Although I am a Django user, I am more and more drawn to the position that neither framework is better than the other; they are just two different approaches to building great web apps.
Flavia Missi and "Test-Driven-Development with Python and Django" (slides / video). TDD will help you have cleaner, better structured and easier to maintain code. If you are not doing it, the best moment to start is now. Although it is hard at the beginning, you will pretty soon realise how beneficial it is, especially when someone pushes a bad commit and the tests save your ass before the code goes to production.
Later Dain Nilsson talked about "U2F: Phishing-proof two-factor authentication for everyone" (video). Although I don't use two-factor authentication at the moment, I am familiar with the concept and I really like it. The U2F protocol looks like a big step towards making it more applicable across different tools and applications, and the key-holding devices are more and more accessible nowadays. Maybe it is time for me to get one )))
The second day started with Kate Heddleston, who talked about ethics in computer programming (video): about how social networks can be used as a tool for ruining people's lives, and how we as developers should take responsibility and work towards making the internet a safer place for everyone - a place where you can have your privacy and have protection if harassed. It is a big problem that won't be solved overnight, but talking about it is the first step towards solving it.
"Why Django Sucks" by Emil Stenström. Don't rush into raging cause this was one of best talks at the conference. Emil showed us the parts in Django where he sees flaws and that need improvement. The main point was the lack of common template language between Django and Javascript and what possible solutions are available or can be made.
Daniele Sluijters reminded us how easy it is to work with "Puppet and Python" (video). No more fighting with Ruby; you can easily use your favorite language to build your own tools and helpers.
Dennis Ljungmark and "Embedded Python in Practice". The last time I programmed embedded devices was 15 years ago, as part of a short course in high school. Dennis' work is much more complex than what I did then, but his talk reminded me of things that are applicable to general programming. Although on non-embedded systems we usually have much more memory and processing power available, that does not mean we should waste it. So think when you code - RAM is not endless and processors are not as fast as we often wish. Also, don't forget that exceptions will sooner or later occur, so make your code ready to handle them.
"How to Build a Python Web Application with Flask and Neo4j" (video) by Nicole White. Well I have heard about Neo4J, but I have never used it or seen it in action so this one was really exciting. Neo4J offers you a whole new perspective about building databases and relations between objects but it is much far from panacea. Actually I can see it it is more like a special tool then as a general replacement of a relation database but it still worths to be tried. Oh, and the Neo4J browser - totally awesome.
In the lightning talks Tome Cvitan talked about Faker. If you are still not familiar with it, now is a good time to try it, especially if you are writing tests.
At the end Kenneth Reitz told us about "Python for Humans" (video): about the not-so-obvious things in Python and what solutions are out there, and also about The Hitchhiker's Guide to Python - a great place for beginners and beyond. He also shared the idea of making Python easier and more welcoming by introducing better tools for higher-level operations.
Finally, I want to thank everyone - the organisers, the speakers, the audience and practically everyone who was a part of the conference. Without you it wouldn't be the same (or happen at all). Thanks, keep up the good work and hopefully we will see each other again.
P.S. Have I mentioned that the whole conference was recorded on video, so hopefully we will be able to see all the talks pretty soon? I will try to keep this post updated with links to the videos and/or slides as they become available. Of course, if you know about any published slides from the conference that are not linked here, please let me know.
The full set of videos is available at PyCon Sweden's YouTube channel.
Brief: In one of the projects I work on, we had to convert some old naive datetime objects to timezone-aware ones. Converting a naive datetime to a timezone-aware one is usually a straightforward job. In Django you even have a nice utility function for this. For example:
import datetime

import pytz
from django.utils import timezone

timezone.make_aware(datetime.datetime(2012, 3, 25, 3, 52),
                    timezone=pytz.timezone('Europe/Stockholm'))
# returns datetime.datetime(2012, 3, 25, 3, 52, tzinfo=<DstTzInfo 'Europe/Stockholm' CEST+2:00:00 DST>)
Problem: You can use this for quite a long time until one day you end up with something like this:
timezone.make_aware(datetime.datetime(2012, 3, 25, 2, 52),
                    timezone=pytz.timezone('Europe/Stockholm'))
# which leads to
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ilian/venvs/test/lib/python3.4/site-packages/django/utils/timezone.py", line 358, in make_aware
    return timezone.localize(value, is_dst=None)
  File "/home/ilian/venvs/test/lib/python3.4/site-packages/pytz/tzinfo.py", line 327, in localize
    raise NonExistentTimeError(dt)
pytz.exceptions.NonExistentTimeError: 2012-03-25 02:52:00
Or this:
timezone.make_aware(datetime.datetime(2012, 10, 28, 2, 52),
                    timezone=pytz.timezone('Europe/Stockholm'))
# throws
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ilian/venvs/test/lib/python3.4/site-packages/django/utils/timezone.py", line 358, in make_aware
    return timezone.localize(value, is_dst=None)
  File "/home/ilian/venvs/test/lib/python3.4/site-packages/pytz/tzinfo.py", line 349, in localize
    raise AmbiguousTimeError(dt)
pytz.exceptions.AmbiguousTimeError: 2012-10-28 02:52:00
Explanation: The reason for the first error is that in the real world this datetime does not exist. Due to the DST change on this date, the clock jumps from 01:59 standard time straight to 03:00 DST. Fortunately (or not) pytz is aware of the fact that this time is invalid and throws the exception above. The second exception is almost the same, but it happens when switching from summer to standard time: from 01:59 DST the clock shifts back to 01:00 standard time, so we end up with a duplicate time.
Why did this happen (in our case)? Well, we couldn't be sure exactly how this got into our legacy data, but the assumption is that at the moment the record was saved the server was in a different timezone, where this was a valid time.
Solution 1: The fix is quite simple: just add an hour if the exception occurs.
from datetime import datetime, timedelta

import pytz
from django.utils.timezone import make_aware

try:
    # date_time is a UNIX timestamp coming from our legacy data
    date = make_aware(datetime.fromtimestamp(date_time),
                      timezone=pytz.timezone('Europe/Stockholm'))
except (pytz.NonExistentTimeError, pytz.AmbiguousTimeError):
    date = make_aware(datetime.fromtimestamp(date_time) + timedelta(hours=1),
                      timezone=pytz.timezone('Europe/Stockholm'))
Solution 2: Instead of calling make_aware, call timezone.localize directly.
try:
    date = make_aware(datetime.fromtimestamp(date_time),
                      timezone=pytz.timezone('Europe/Stockholm'))
except (pytz.NonExistentTimeError, pytz.AmbiguousTimeError):
    timezone = pytz.timezone('Europe/Stockholm')
    date = timezone.localize(datetime.fromtimestamp(date_time), is_dst=False)
The second solution probably needs some explanation. First let's check what make_aware does. The code below is taken from Django's source code, as it is in version 1.7.7:
def make_aware(value, timezone):
    """
    Makes a naive datetime.datetime in a given time zone aware.
    """
    if hasattr(timezone, 'localize'):
        # This method is available for pytz time zones.
        return timezone.localize(value, is_dst=None)
    else:
        # Check that we won't overwrite the timezone of an aware datetime.
        if is_aware(value):
            raise ValueError(
                "make_aware expects a naive datetime, got %s" % value)
        # This may be wrong around DST changes!
        return value.replace(tzinfo=timezone)
To simplify it, what Django does is use the localize method of the timezone object (if it exists) to convert the datetime. When using pytz, this localize method takes two arguments: the datetime value and is_dst. The last argument accepts three possible values: None, False and True. With None, if the datetime matches the moment of a DST change, pytz does not know how to handle it and you get one of the exceptions shown above. False means that it should treat the value as standard time, and True that it should treat it as summer time.
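To illustrate, here is how is_dst resolves the ambiguous time from the example above (a quick sketch):

import datetime

import pytz

tz = pytz.timezone('Europe/Stockholm')
ambiguous = datetime.datetime(2012, 10, 28, 2, 52)

tz.localize(ambiguous, is_dst=True)   # 02:52 CEST (+02:00), i.e. summer time
tz.localize(ambiguous, is_dst=False)  # 02:52 CET (+01:00), i.e. standard time
# tz.localize(ambiguous, is_dst=None) raises AmbiguousTimeError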
Why isn't this fixed in Django? The simple answer is "because this is how it should work". For a bit longer answer, check the respective ticket.
Reminder: Do not forget that the "fix" above does not actually care whether the original datetime was during DST or not. In our case this was not critical for the app, but in some other cases it might be, so use it carefully.
Thanks: Special thanks to Joshua, who correctly pointed out in the comments that I had missed the AmbiguousTimeError in the original post, which made me look a bit more into the problem, research other solutions and update the article to its current content.
Update 2: You can also check the follow-up post Django interview questions.
For the last few weeks I have been interviewing several people for Python/Django developer positions, so I thought it might be helpful to share the questions I am asking, together with the answers. The reason is ... OK, let me tell you a story first.
I remember when one of my university professors introduced us to his professor - the one who taught him. It was a really short visit, but I still remember one of the things he said: "Ignorance is not bad, the bad thing is when you do not want to learn."
So back to the reason: if you have at least taken the care to prepare for the interview, to look up the standard questions and their answers, and to learn them, this is a good start. Answering these questions may not get you the job you are applying for, but learning them will give you some valuable knowledge about Python.
This post includes the questions that are Python specific; I'll post the Django questions separately.
# simple iteration
a = []
for x in range(10):
    a.append(x*2)
# a == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

# list comprehension
a = [x*2 for x in range(10)]

# dict comprehension
a = {x: x*2 for x in range(10)}
# a == {0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18}
# the basic way
s = 0
for x in range(10):
    s += x

# the right way
s = sum(range(10))
# the basic way
s = 1
for x in range(1, 10):
    s = s * x

# the other way
from functools import reduce  # a builtin in Python 2, but in functools in Python 3
from operator import mul

s = reduce(mul, range(1, 10))
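For completeness, the product above is just 9!, so the standard library can also give it directly:

import math

math.factorial(9)  # == 362880, same as reduce(mul, range(1, 10))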
Well, I hope this will be helpful. If you have any questions or suggestions, feel free to comment.
Update: Due to the many comments on Reddit and LinkedIn, I understood that there is some misunderstanding about the post. First, the questions published here are not the only ones I ask; the interview also includes questions related to general programming knowledge and logical thinking. Second, the questions above help me get a basic understanding of your Python knowledge, but they are not the only thing that shapes my decision. Not answering some of them does not mean that you won't get the job, but it may show me which parts we should work on.
After some delay (please excuse me for it), here are sample solutions to the problems defined in the presentation Functions in Python. The solutions are also available at GitHub.
Here is my presentation, part of the in-company Python course.
The last slide, "Problem to solve", is something like a simple homework assignment. Sample solutions will be uploaded later this week.
Preface: Nine months ago (I can't believe it was that long) I created a script called Simple Site Checker to ease the checking of sitemaps for broken links. The script code is publicly available at GitHub. Yesterday (now that I have finally found time to finish this post, it must be "a few weeks ago") I decided to run it again on this website and nothing happened - no errors, no warnings, nothing. Setting the output level to DEBUG showed the following message: "Loading sitemap ..." and exited.
Here the fault was mine: I had missed a corner case in the error catching mechanism, i.e. when the sitemap URL returns something different from "200 OK" or "500 Internal Server Error". Just a few seconds and the mistake was fixed.
Problem and solution: I ran the script again and, what a surprise, the sitemap URL was returning "403 Forbidden". At the same time the sitemap was perfectly accessible via my browser. After some thinking I remembered that some security plugins block access to the website if there is no User-Agent header supplied. The reason for this is to block access by simple scripts. In my case even an empty User-Agent did the trick to fool the plugin:
import urllib2

urllib2.urlopen(urllib2.Request(url,
                                headers={'User-Agent': USER_AGENT}))
Final words: As a result of the issue mentioned above, one bug in Simple Site Checker was found and fixed. At the same time another issue, about missing status and progress output, was raised; more details can be found at GitHub, but in a few words, an info message was added for each processed URL to indicate the progress.
If you have any ideas for improvement or anything else feel free to comment, create issues and/or fork the script.
As a follow-up to the post Automated deployment with Ubuntu, Fabric and Django, here are the slides from my presentation on the topic "Automation, Fabric and Django". Unfortunately there is no audio podcast, but if there is interest I can add some comments about each slide as part of this post.
If there is anything that needs explanation, feel free to ask.