Django and working with large database tables
The slides from my presentation for the Django Stockholm Meetup group. Contains a small comparison between MySQL and PostgreSQL in terms of performance when modifying table structure.
Updated solution, please check below!
Some background: We have a model that is edited only via the Django admin. The save method of the model fires a Celery task that updates several other records. The reason for using Celery here is that the number of related objects can be pretty big, so we decided it was best to spawn the update as a background job. We have unit and integration tests, the code was also tested manually, everything looked fine and we deployed it.
On the next day we found out that the code was acting weird. There were no errors and everything looked like it had worked, but the updated records actually contained the old data from before the change. The Celery task accepts the object ID as an argument and the object is fetched from the database before doing anything else, so the problem was not that we were passing some old state of the object. Then what was going on?
Trying to reproduce it: Things got even weirder. The issue was happening only every now and then.
Hm...? Race condition?! Let's take a look at the code:
class MyModel(models.Model):

    def save(self, **kwargs):
        # self.pk is assigned on the first save,
        # so check it before calling super()
        is_existing_object = self.pk is not None
        super(MyModel, self).save(**kwargs)
        if is_existing_object:
            update_related_objects.delay(self.pk)
The bigger picture: this is a simplified version of what is happening; for the full one check Django's source.
def changeform_view(...):
    with transaction.atomic():
        ...
        self.save_model(...)  # here happens the call to MyModel.save()
        self.save_related(...)
        ...
OK, so the save is actually wrapped in a transaction. This explains what is going on: before the transaction is committed, the updated data is not visible to other connections. So when the Celery task is fired, we end up in a race over whether the task will start before or after the transaction is committed. If Celery manages to pick up the task before the transaction is committed, it reads the old state of the object, and there is our bug.
Solution (updated): The best solution is to use transaction.on_commit. This way the call to the Celery task will be executed only after the transaction has completed successfully. Also, if you call the method outside of a transaction, the function will be executed immediately, so it also works if you are saving the model outside the admin. The only downside is that this functionality was added in Django 1.9, so it wasn't an option for us. Still, special thanks to Jordan Jambazov for pointing this approach out to me, I'll definitely use it in the future.
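Here is a minimal sketch of how this could look in our save method (assuming Django 1.9+; update_related_objects is the Celery task from the snippet above):

from django.db import models, transaction

class MyModel(models.Model):

    def save(self, **kwargs):
        is_existing_object = self.pk is not None
        super(MyModel, self).save(**kwargs)
        if is_existing_object:
            # runs after the surrounding transaction commits,
            # or immediately if no transaction is active
            transaction.on_commit(
                lambda: update_related_objects.delay(self.pk))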
Unfortunately we were using Django 1.8, so we picked a quick and ugly fix: we added a 60-second countdown to the task call, giving the transaction enough time to complete. As the call to the task depends on some logic and on which properties of the model instance have changed, moving it out of the save method was a problem. Another option would have been to pass all the necessary data to the task itself, but we decided that would make it too complicated.
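For reference, the workaround looks roughly like this; Celery's apply_async takes a countdown argument that delays the execution of the task by the given number of seconds:

# instead of update_related_objects.delay(self.pk)
update_related_objects.apply_async(args=[self.pk], countdown=60)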
However, I am always open to other ideas, so if you have hit this issue before I would like to know how you solved it.
Preface: This is quite a standard problem for apps/websites with low traffic or those that use heavy caching and hit the database quite seldom. Most of the articles you will find on the topic will tell you one thing: change the wait_timeout setting in the database. Unfortunately, in some cases this disconnect occurs much earlier than the expected wait_timeout (which defaults to 8 hours). If you are in one of those cases, keep reading.
This issue haunted our team for weeks. When we first faced it, the project raising it was still in the dev phase, so it wasn't critical, but as we got closer to the release date we started searching for a solution. We read several articles and decided that pool_recycle was our solution.
Adding pool_recycle: According to SQLAlchemy's documentation, pool_recycle "causes the pool to recycle connections after the given number of seconds has passed". Nice, so if you recycle the connection at intervals smaller than the wait_timeout, the disconnection error should not appear. Let's try it out:
import time

from sqlalchemy.engine import create_engine

url = 'mysql+pymysql://user:pass@127.0.0.1:3306/db'
# note the connect() call at the end; it will turn out to be the culprit
engine = create_engine(url, pool_recycle=1).connect()

query = 'SELECT NOW();'

while True:
    print('Q1', engine.execute(query).fetchall())
    engine.execute('SET wait_timeout=2')  # the server drops idle connections after 2s
    time.sleep(3)                         # stay idle for longer than wait_timeout
    print('Q2', engine.execute(query).fetchall())
So what does the code do? We create a connection to a local MySQL server and state that it should be recycled every second (the pool_recycle=1 argument). Then we execute a simple query just to verify that the connection is working. We set the wait_timeout to 2 seconds and wait for 3. At this point the connection to the server has timed out, but SQLAlchemy should recycle it, so the last query should be executed successfully and the loop should continue.
Unfortunately, the result looks like this:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError)
(2013, 'Lost connection to MySQL server during query') [SQL: 'SELECT NOW();']
Wait, what happened? Why is the connection not recycled?
Solution: Well, as with all such problems, the solution was much simpler than the time it took us to find it suggests (we fought with this for days). The only change that solved it was in the line that creates the engine:
engine = create_engine(url, pool_recycle=1)
# the result
Q1 [(datetime.datetime(2016, 7, 24, 20, 51, 41),)]
Q2 [(datetime.datetime(2016, 7, 24, 20, 51, 44),)]
Q1 [(datetime.datetime(2016, 7, 24, 20, 51, 44),)]
Q2 [(datetime.datetime(2016, 7, 24, 20, 51, 47),)]
Have you spotted the difference? We are not calling the connect() method any more.
Final words: To keep it honest, I don't know for sure why this solved the issue, and hopefully someone more familiar with SQLAlchemy will come up with a definitive explanation. My best guess is that pool_recycle only takes effect when a connection is checked out of the pool: by calling connect() we held a single connection checked out forever, so it could never be recycled, while executing queries on the engine itself checks a connection out (and returns it) for every statement. The bad part is that the examples in the official docs use connect(), so it is either a bug or misleading documentation. I will send this article to SQLAlchemy's Twitter account, so hopefully we will get some input from them. Till then, if any of you have an explanation of this behaviour, I'll be happy to hear it.
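If that guess is right, the safer pattern is to create the engine once and check connections out only for as long as you need them, so the pool gets a chance to run its recycle check. A minimal sketch under that assumption:

engine = create_engine(url, pool_recycle=1)

# check a connection out of the pool, use it and return it;
# the recycle check runs on every checkout
with engine.connect() as conn:
    print(conn.execute('SELECT NOW();').fetchall())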
First of all I want to thank Lifesum for having another "Innovation Week"; it is a great opportunity and I hope that more companies will start following suit. In a few words, the idea is to allow everyone in the company to freely pick a project or idea that they want to develop and work on it for one week. The benefits range from just making people happy because of the break from the routine and the opportunity to work on something a bit different, to seeing some pretty amazing prototypes that can easily be implemented in the company product.
Well, unfortunately service discovery in the real world is not that simple. In my presentation from the Stockholm Python MeetUp I talked a bit more about the complexity of service discovery, the suboptimal solutions, and Smartstack - a solution invented at Airbnb to simplify the whole process. You can see more about it in my presentation:
However, the whole idea sounded so awesome that my colleague Esma and I decided to team up and explore the opportunities that Smartstack provides for us. As a matter of fact, we decided to explore two different approaches: Smartstack and Consul. However, we had some issues with the Consul setup and found that it was not acting exactly the way we expected, so in the end we focused all our attention on Smartstack.
We created a small project consisting of a pool of ZooKeeper instances, two nodes for service A and one node for service B. We tested multiple scenarios of crashes of one or more nodes, both ZooKeeper instances and service instances, how the system operates during the crashes and how it recovers after the nodes are brought back up.
Well, I haven't conducted any interviews recently, but this one has been lying in my drafts for quite a while, so it is time to dust it off and finish it. As I said in Python Interview Questions and Answers, these are basic questions to establish the basic level of the candidates.
There are a lot of other things to ask about: internationalisation (i18n), localisation (l10n), South/migrations, etc. Take a look at the docs and you will see them explained pretty well.
In a few words, PyCon Sweden 2015 was awesome. Honestly, this was my first Python conference ever, but I really hope it won't be the last.
Beyond the awesome talks and great organisation, it was really nice to spend some time with like-minded people and talk about technology, the universe and everything else. I met some old friends and made some new ones, but let's get back to the talks. Unfortunately I was not able to see all of them, but here is a brief overview of those I saw and found really interesting:
It all started with Ian Ozsvald and his awesome talk "Data Science Deployed" (slides / video). The most important points here were:
Then Rebecca Meritz talked about "From Explicitness to Convention: A Journey from Django to Rails" (slides / video). Although the title sounds a bit provocative, this was not the usual Django vs Rails talk. At least to me it was more a comparison between the two frameworks, showing their differences and their weak and strong sides. Although I am a Django user, I am more and more drawn to the position that neither framework is better than the other; they are just two different approaches to building great web apps.
Flavia Missi and "Test-Driven Development with Python and Django" (slides / video). TDD will help you write cleaner, better structured and easier to maintain code. If you are not doing it, the best moment to start is now. While it is hard at the beginning, you will pretty soon realise how beneficial it is. Especially when someone pushes a bad commit and the tests save your ass before the code goes to production.
Later Dain Nilsson talked about "U2F: Phishing-proof two-factor authentication for everyone" (video). While I don't use two-factor authentication at the moment, I am familiar with the concept and I really like it. The U2F protocol looks like a big step towards making it applicable across different tools and applications, and the key-holding devices are more and more accessible nowadays. Maybe it is time for me to get one )))
The second day started with Kate Heddleston, who talked about ethics in computer programming (video). About how social networks can be used as a tool for ruining people's lives, and how we as developers should take responsibility and work towards making the internet a safer place for everyone. A place where you can have your privacy and get protection if harassed. It is a big problem which won't be solved overnight, but talking about it is the first step towards solving it.
"Why Django Sucks" by Emil Stenström. Don't rush into raging cause this was one of best talks at the conference. Emil showed us the parts in Django where he sees flaws and that need improvement. The main point was the lack of common template language between Django and Javascript and what possible solutions are available or can be made.
Daniele Sluijters reminded us how easy it is to work with "Puppet and Python" (video). No more fighting with Ruby; you can easily use your favorite language to build your own tools and helpers.
Dennis Ljungmark and "Embedded Python in Practice". The last time I programmed embedded devices was 15 years ago, as part of a short course in high school. Dennis' work is much more complex than what I did back then, but his talk reminded me of things that apply to general programming too. While on non-embedded systems we often have much more memory and processing power available, that does not mean we should waste it. So think when you code: RAM is not endless and processors are not as fast as we often wish. Also, don't forget that exceptions will sooner or later occur, so make your code ready to handle them.
"How to Build a Python Web Application with Flask and Neo4j" (video) by Nicole White. Well I have heard about Neo4J, but I have never used it or seen it in action so this one was really exciting. Neo4J offers you a whole new perspective about building databases and relations between objects but it is much far from panacea. Actually I can see it it is more like a special tool then as a general replacement of a relation database but it still worths to be tried. Oh, and the Neo4J browser - totally awesome.
In the lightning talks, Tome Cvitan talked about Faker. If you are still not familiar with it, now is a good time to try it. Especially if you are writing tests.
At the end, Kenneth Reitz told us about "Python for Humans" (video): about the not-so-obvious things in Python and what solutions are out there, and also about The Hitchhiker’s Guide to Python!, a great place for beginners and not only. He shared the idea of making Python easier and more welcoming by introducing better tools for higher-level operations.
Finally, I want to thank everyone: the organisers, the speakers, the audience and practically everyone who was a part of the conference. Without you it wouldn't be the same (or happen at all). Thanks, keep up the good work and hopefully we will see each other again.
P.S. Have I mentioned that the whole conference was recorded on video? So hopefully we will be able to see all the talks pretty soon. I will try to keep this post updated with links to the videos and/or slides when they become available. Of course, if you know about any published slides from the conference that are not linked here, please let me know.
The full set of videos is available on PyCon Sweden's YouTube channel.
Brief: In one of the projects I work on, we had to convert some old naive datetime objects to timezone-aware ones. Converting a naive datetime to a timezone-aware one is usually a straightforward job. In Django you even have a nice utility function for this. For example:
import datetime

import pytz
from django.utils import timezone

timezone.make_aware(datetime.datetime(2012, 3, 25, 3, 52),
                    timezone=pytz.timezone('Europe/Stockholm'))
# returns datetime.datetime(2012, 3, 25, 3, 52, tzinfo=<DstTzInfo 'Europe/Stockholm' CEST+2:00:00 DST>)
Problem: You can use this for quite a long time, until one day you end up with something like this:
timezone.make_aware(datetime.datetime(2012, 3, 25, 2, 52),
                    timezone=pytz.timezone('Europe/Stockholm'))
# which leads to
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ilian/venvs/test/lib/python3.4/site-packages/django/utils/timezone.py", line 358, in make_aware
    return timezone.localize(value, is_dst=None)
  File "/home/ilian/venvs/test/lib/python3.4/site-packages/pytz/tzinfo.py", line 327, in localize
    raise NonExistentTimeError(dt)
pytz.exceptions.NonExistentTimeError: 2012-03-25 02:52:00
Or this:
timezone.make_aware(datetime.datetime(2012, 10, 28, 2, 52),
                    timezone=pytz.timezone('Europe/Stockholm'))
# throws
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ilian/venvs/test/lib/python3.4/site-packages/django/utils/timezone.py", line 358, in make_aware
    return timezone.localize(value, is_dst=None)
  File "/home/ilian/venvs/test/lib/python3.4/site-packages/pytz/tzinfo.py", line 349, in localize
    raise AmbiguousTimeError(dt)
pytz.exceptions.AmbiguousTimeError: 2012-10-28 02:52:00
Explanation: The reason for the first error is that in the real world this datetime does not exist. Due to the DST change on that date, the clock jumps from 01:59 standard time straight to 03:00 DST. Fortunately (or not) pytz is aware of the fact that this time is invalid and throws the exception above. The second exception is almost the same, but it happens when switching from summer back to standard time: from 01:59 DST the clock shifts to 01:00 standard time, so we end up with a duplicate time.
Why did this happen (in our case)? Well, we couldn't be sure how exactly this got into our legacy data, but the assumption is that at the moment the record was saved, the server was in a different timezone where this was a valid time.
Solution 1: This fix is quite simple: just add an hour if the exception occurs.

from datetime import datetime, timedelta

import pytz
from django.utils.timezone import make_aware

try:
    date = make_aware(
        datetime.fromtimestamp(date_time),
        timezone=pytz.timezone('Europe/Stockholm')
    )
except (pytz.NonExistentTimeError, pytz.AmbiguousTimeError):
    # skip past the non-existent/ambiguous hour
    date = make_aware(
        datetime.fromtimestamp(date_time) + timedelta(hours=1),
        timezone=pytz.timezone('Europe/Stockholm')
    )
Solution 2: Instead of calling make_aware, call timezone.localize directly.

try:
    date = make_aware(
        datetime.fromtimestamp(date_time),
        timezone=pytz.timezone('Europe/Stockholm')
    )
except (pytz.NonExistentTimeError, pytz.AmbiguousTimeError):
    tz = pytz.timezone('Europe/Stockholm')
    # is_dst=False resolves the conflict in favour of standard time
    date = tz.localize(datetime.fromtimestamp(date_time), is_dst=False)
The second solution probably needs some explanation. First, let's check what make_aware does. The code below is taken from Django's source code as it is in version 1.7.7:
def make_aware(value, timezone):
    """
    Makes a naive datetime.datetime in a given time zone aware.
    """
    if hasattr(timezone, 'localize'):
        # This method is available for pytz time zones.
        return timezone.localize(value, is_dst=None)
    else:
        # Check that we won't overwrite the timezone of an aware datetime.
        if is_aware(value):
            raise ValueError(
                "make_aware expects a naive datetime, got %s" % value)
        # This may be wrong around DST changes!
        return value.replace(tzinfo=timezone)
To simplify it: what Django does is use the localize method of the timezone object (if it exists) to convert the datetime. When using pytz, this localize method takes two arguments: the datetime value and is_dst. The last argument takes three possible values: None, False and True. With None, if the datetime matches the moment of a DST change, pytz does not know how to handle it and you get one of the exceptions shown above. False means the value should be interpreted as standard time and True as summer time.
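For example, here is how the two explicit is_dst values resolve the ambiguous time from the traceback above:

import datetime

import pytz

tz = pytz.timezone('Europe/Stockholm')

# interpret the duplicated hour as summer time (CEST, UTC+2)
tz.localize(datetime.datetime(2012, 10, 28, 2, 52), is_dst=True)
# datetime.datetime(2012, 10, 28, 2, 52, tzinfo=<DstTzInfo 'Europe/Stockholm' CEST+2:00:00 DST>)

# interpret it as standard time (CET, UTC+1)
tz.localize(datetime.datetime(2012, 10, 28, 2, 52), is_dst=False)
# datetime.datetime(2012, 10, 28, 2, 52, tzinfo=<DstTzInfo 'Europe/Stockholm' CET+1:00:00 STD>)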
Why isn't this fixed in Django? The simple answer is "because this is how it should work". For a longer one, check the respective ticket.
Reminder: Do not forget that the "fix" above does not actually care whether the original datetime was during DST or not. In our case this was not critical for our app, but in some other cases it might be, so use it carefully.
Thanks: Special thanks to Joshua, who correctly pointed out in the comments that I had missed the AmbiguousTimeError in the original post, which made me look a bit more into the problem, research other solutions and update the article to its current content.
Brief: Working with intervals in Python is really easy, fast and simple. If you want to learn more, just keep reading.
Task description: Let's say the case is the following: you have multiple users, and each one of them has achieved a different number of points on your website. You want to know how many users haven't got any points, how many made between 1 and 50 points, how many between 51 and 100, etc. In addition, at 1000 the intervals start increasing by 100 instead of 50.
Preparing the intervals: Working with lists in Python is so awesome that creating the intervals is quite a simple task.
intervals = ([0] +                                # The zero interval
             [x * 50 for x in range(1, 20)] +     # The 50 intervals
             [x * 100 for x in range(10, 100)] +  # The 100 intervals
             [x * 1000 for x in range(10, 102)])  # The 1000 intervals
So after running the code above, we will have a list with the maximum number of points for each interval. Now it is time to prepare the different buckets that will store the user counts. To ease this we are going to use a defaultdict.
from collections import defaultdict

buckets = defaultdict(int)  # missing keys start with a count of 0
This way we can increase the count for each bucket without checking whether it exists. Now let's get to counting:
import bisect

for user in users:
    try:
        bucket = intervals[bisect.bisect_left(intervals, user.points)]
    except IndexError:
        # we are past the last boundary, so it goes into the last bucket
        bucket = intervals[-1]
    buckets[bucket] += 1
How it works: Well, it is quite simple. bisect.bisect_left uses a binary search to find the position where an item should be inserted to keep the list (in our case intervals) sorted. Using that position, we take the value from intervals that represents the bucket the given number of points falls into. And we are ready. The result will look like:
{
    0: 10,
    50: 5,
    100: 8,
    1100: 2
}
Final words: As you can see, when a defaultdict is used, there are no values for the empty buckets. This can be good or bad depending on the requirements for presenting the data, but it can easily be fixed by using the items from intervals as keys for the buckets.
P.S. Comments and ideas for improvement are always welcome.
Well, as some of you may have seen, this blog was on hold for quite a long time. There were multiple reasons, mainly my Ph.D. and changing my job, but it is back online.
So, what is new?
As a start, this blog is no longer running on WordPress. The reason is that I had some issues with WordPress: the blog was hacked twice due to security holes in WordPress/plugins, it was terribly slow, and the code looked like shit. Lots of inline styles and JavaScript, etc. So I made a simple Django-based blog that generates static content. Also, we have a new design and a new domain, the last one much easier to remember )))
Also, the comments are now handled by Disqus and the search functionality is provided by Google. The code of the blog needs some minor cleaning and then it will be released publicly in the next few weeks. Meanwhile, you can check my latest post, Working with intervals in Python.
P.S. I have finally finished my Ph.D., so no more university/research job and hopefully more time for blogging.
Update 2: You can also check the follow-up post Django interview questions.
For the last few weeks I have been interviewing several people for Python/Django developer positions, so I thought it might be helpful to share the questions I am asking together with the answers. The reason is... OK, let me tell you a story first.
I remember when one of my university professors introduced us to his professor, the one who taught him. It was a really short visit, but I still remember one of the things he said: "Ignorance is not bad, the bad thing is when you do not want to learn."
So back to the reason: if you have at least taken the care to prepare for the interview, to look up the standard questions and their answers and to learn them, this is a good start. Answering these questions may not get you the job you are applying for, but learning them will give you some valuable knowledge about Python.
This post includes the questions that are Python-specific; I'll post the Django questions separately.
# simple iteration
a = []
for x in range(10):
    a.append(x*2)
# a == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

# list comprehension
a = [x*2 for x in range(10)]

# dict comprehension
a = {x: x*2 for x in range(10)}
# a == {0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18}
# the basic way
s = 0
for x in range(10):
    s += x

# the right way
s = sum(range(10))
# the basic way
s = 1
for x in range(1, 10):
    s = s * x

# the other way
from functools import reduce  # built in on Python 2, must be imported on Python 3
from operator import mul

s = reduce(mul, range(1, 10))
Well, I hope this will be helpful. If you have any questions or suggestions, feel free to comment.
Update: Due to the many comments on Reddit and LinkedIn, I realised there is some misunderstanding about the post. First, the questions I have published are not the only ones I ask; the interview also includes questions related to general programming knowledge and logical thinking. Second, the questions above help me get a basic understanding of your Python knowledge, but they are not the only thing that shapes my decision. Not answering some of them does not mean that you won't get the job, but it may show me which parts we should work on.