Category Archives: python

Conditional Cache Mixin for Django DRF

For a project I’m working on, I’m adding a conditional header that bypasses the cache when an X-No-Cache header is present. In my case this allows external systems to effectively flush the cache when certain conditions are met.

I’ve modified code from Django REST Framework Extensions to allow for such behaviour. There might be a better way to do it, but at the moment the flow of the code is clear to me. It requires drf-extensions, since this is just an additional mixin that offloads the caching itself to the cache_response decorator.

from rest_framework_extensions.cache.decorators import cache_response
from rest_framework_extensions.settings import extensions_api_settings


class BaseCacheResponseMixin(object):
    object_cache_key_func = extensions_api_settings.DEFAULT_OBJECT_CACHE_KEY_FUNC
    list_cache_key_func = extensions_api_settings.DEFAULT_LIST_CACHE_KEY_FUNC


class ConditionalListCacheResponseMixin(BaseCacheResponseMixin):
    @cache_response(key_func="list_cache_key_func")
    def _cached_list(self, request, *args, **kwargs):
        return super().list(request, *args, **kwargs)

    def list(self, request, *args, **kwargs):
        if request.META.get("HTTP_X_NO_CACHE") == "1":
            return super().list(request, *args, **kwargs)
        else:
            return self._cached_list(request, *args, **kwargs)


class ConditionalRetrieveCacheResponseMixin(BaseCacheResponseMixin):
    @cache_response(key_func="object_cache_key_func")
    def _cached_retrieve(self, request, *args, **kwargs):
        return super().retrieve(request, *args, **kwargs)

    def retrieve(self, request, *args, **kwargs):
        if request.META.get("HTTP_X_NO_CACHE") == "1":
            return super().retrieve(request, *args, **kwargs)
        else:
            return self._cached_retrieve(request, *args, **kwargs)


class ConditionalCacheResponseMixin(
    ConditionalRetrieveCacheResponseMixin, ConditionalListCacheResponseMixin
):
    pass
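The header check itself is framework-agnostic. Here is a minimal, framework-free sketch of the same bypass pattern; the cache dict, call counter, and view names are made up for illustration and are not part of the mixin above:

```python
# Hypothetical stand-ins for drf-extensions' cache backend and an
# expensive list endpoint, to show the conditional-bypass flow.
_cache = {}
CALLS = {"count": 0}


def expensive_list(request):
    """The uncached handler; counts calls so we can observe cache hits."""
    CALLS["count"] += 1
    return ["item-1", "item-2"]


def cached_list(request):
    """Serve from the cache, computing the result on a miss."""
    key = request["path"]  # stand-in for a real cache key function
    if key not in _cache:
        _cache[key] = expensive_list(request)
    return _cache[key]


def list_view(request):
    # Mirrors the mixin's check of request.META["HTTP_X_NO_CACHE"]:
    # the header forces a fresh computation, skipping the cache.
    if request.get("headers", {}).get("X-No-Cache") == "1":
        return expensive_list(request)
    return cached_list(request)
```

With the real mixin, a client sends the header as `X-No-Cache: 1`, which Django exposes as `HTTP_X_NO_CACHE` in `request.META`.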

Koornk network graph with pretty pictures

Continuing my saga of visualizing the Koornk social network, I decided the obvious next step was to map out who talks to whom and how much. For this task I used the excellent Python library NetworkX, which uses pygraphviz to draw the pretty pictures in the end.

Just to explain what you’re looking at:

  • I downloaded all public conversations from Koornk and filtered them down to the ones that use @ somewhere to reference someone else
  • You need to reference or be referenced a combined 60 times to make the list (70 people out of 1606 made it)
  • Of those 70 people, if two of them talked to each other more than 40 times, they got a line between them
  • Line thickness is then calculated from how much they talked to each other
  • The circle size around each person shows their cumulative chatter towards others

Fun statistic: about 22% of all messages looked at (N=81990) contained an @ reference.
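The thresholding steps above can be sketched in plain Python before handing the result to NetworkX. A rough illustration, where the function name and the toy mention counts are mine, not the original dataset:

```python
from collections import Counter
from itertools import combinations


def build_graph(mentions, node_min=60, edge_min=40):
    """Filter mention counts into nodes and weighted edges.

    mentions maps (author, referenced) pairs to how often that pair
    occurred. The thresholds mirror the post: a person needs 60
    combined references to appear at all, and a pair needs more than
    40 exchanges to get a line between them.
    """
    totals = Counter()
    for (a, b), n in mentions.items():
        totals[a] += n  # referencing others counts...
        totals[b] += n  # ...and so does being referenced
    nodes = {p for p, n in totals.items() if n >= node_min}

    edges = {}
    for a, b in combinations(sorted(nodes), 2):
        talked = mentions.get((a, b), 0) + mentions.get((b, a), 0)
        if talked > edge_min:
            edges[(a, b)] = talked  # weight drives line thickness
    return nodes, edges
```

The resulting nodes and weighted edges map directly onto `networkx.Graph.add_edge(a, b, weight=...)` calls for drawing.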

Pretty pictures

Top down view of all the 70 people who made the cut (click for bigger version)

It turns out there’s a smaller group of very vocal people within this view, so naturally we want a zoomed-in version:

Who talks to who on Koornk and how much (click for bigger version)

Lessons learned

  • It takes about two days to properly get the hang of the NetworkX library to draw something like this. That doesn’t mean you know anything about graph theory, but at least you can start drawing pretty pictures.
  • Pictures are fun, but the next step is probably an interactive Flash diagram that lets you explore these relationships for yourself.
  • Throwing around these data structures actually takes a few seconds on a modern PC. Finally something meaningful for it to process.
  • I wonder how much work it would be to properly plot something like this for a subset of Twitter relationships if I drank from their fire-hose long enough. Maybe the Gnip guys could fill up a few terabytes of hard drives with the backlog, if they have it, and we could start crunching it. (I’m assuming there’s already a post-graduate student somewhere doing exactly this.)

Listing all keys in S3 bucket using python boto

When using the python-boto package to list keys in your S3 bucket, you might hit a limit of 1000 keys when calling bucket.get_all_keys(). To get the full list of keys, just do something along the lines of

keylist = [k for k in bucket]

since, as it turns out, a bucket is an iterator over its keys.

(Discovered through a patch by Mitchell Garnaat to the Duplicity project.)

Prevoz.org in Google Earth

I’ve been playing a bit with writing Google Earth KML (XML) files to try to make some nice visualizations. Here is a first screenshot of how users of the Slovenian carpooling site Prevoz.org are traveling around the country.

(Historical data of all car shares, only showing locations with 10 or more entries in the database.)
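Google Earth reads KML, an XML dialect, so a placemark file can be emitted with nothing but the standard library. A minimal sketch, where the function name and the example coordinates are made up and not Prevoz.org data:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"


def placemark_kml(name, lon, lat):
    """Build a one-placemark KML document as a string.

    KML expects coordinates in "lon,lat[,altitude]" order, which is
    the reverse of the usual lat/lon convention.
    """
    kml = ET.Element("kml", xmlns=KML_NS)
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")
```

Saving the returned string as a .kml file and opening it in Google Earth drops a pin at the given coordinates.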

Back at doing open source and community work

Aaahh, it’s good to be awake again. Something in the smell of summer awoke the community and open source spirit in me, which in turn got me working once again on old projects and new open source ideas. For now I have two things to announce that managed to kick me back into action: Prevoz.org RSS feeds, and a new machine added to the Morphix nightly builds.
It was of course a community effort, so greetings also go to the administrators and other hackers who helped make me active again.
It also seems there is quite some interest in a Python MySpace API, so check out the Google Code project that was set up by Laszlo: myspace-api.