Async Python Frameworks

There is a new lightweight, fast, minimalist, you-name-it framework each week within the Python community. Why? Because it’s simple! You may want to take that path too and if you do, don’t make the same mistakes as we did. Come share your own frustrations and let’s build a new one together, in 10 minutes 🙃 — How to build an async pico web framework from scratch

I gave a talk at PyConCA on that topic, feeling legitimate given that we actually did it! Here are some notes from that talk.


I’m working with the French government via State Startups. Our goal is to explore solutions for a given problem faced by citizens and/or administrations. For the last six months, we experimented with a way to locate unmanned aerial vehicles (UAVs, a.k.a. drones) in real time across the whole French territory. With a distributed team, we tried to figure out whether the law was applicable and whether a centralized database would make sense for all parties (citizens, industry, police, army and so on).

There were a bunch of unknowns, especially related to performance. When you are (in)validating a proof of concept, you might think that performance is your last concern, but we wanted to be able to try the service for real. Plus, you never know when a proof of concept finally goes into production…

We roughly estimated that we would have to handle about 50k requests per second. In Python. With a single server. Spoiler alert: for a lot of non-technical reasons, we never stress-tested the API for real, so we cannot guarantee that our framework hits that peak. Nonetheless, I think the process was interesting because it was the first time in my career that I had to put my developments under such constraints.

We knew Django and Flask, and we had already experimented with Falcon. You cannot achieve that level of performance with these frameworks given our resources. That is why we dug into the async side of Python 3. And it was fun! At first we thought that Sanic was the perfect candidate, but soon enough we bumped into issues related to testing, and the API did not click with us. So, after an intensive workweek of sleep deprivation and big waves in the ocean, we started to hack on our own framework. And it was crazy!


First of all, this is not a competition. It is challenging to compare concepts and implementations. We felt no guilt stealing clever parts, documenting what we think is better for our needs. What we made is not a framework for everybody. Micro means specific, and as such incomplete. That is why I encourage you to do the same for your own needs.

Go read the code of your current web framework: it is a gold mine of hacks to deal with inconsistent HTTP-related specifications :-). Seriously, assembling your own framework will teach you a ton of things. From request parsing to header security, from CORS-related issues to the multipart nightmare of latin-1 encoded filenames, from conflicting specifications on cookies to crazy Accept headers.

Our philosophy was to provide a decent developer experience without compromising performance. To achieve that, we knew we had to stay minimalist and reuse small, performant libraries.


We are still working on it, and for sure a benchmark is a lie. We use wrk, which is kind of ab on steroids. Anyway, I think we are missing a tool in the Python community to compare performance across pull/merge requests. Like continuous integration with metrics on introduced bottlenecks: continuous performance? If I missed anything on that topic, please drop me an email.

Edit: Ronan made me discover airspeed velocity, nice!
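To give an idea of what such “continuous performance” could look like, here is a rough sketch of a regression check that a CI job could run. Everything here is made up for illustration: the baseline format, the `requests_per_second` key and the 10% tolerance are assumptions, not an existing tool.

```python
import json


def check_regression(baseline, current_rps, threshold=0.9):
    """Return (ok, ratio): fail when the current requests/second drop
    below `threshold` times the recorded baseline.
    All conventions here (keys, threshold) are made up for the sketch."""
    baseline_rps = baseline['requests_per_second']
    ratio = current_rps / baseline_rps
    return ratio >= threshold, ratio


# In CI we would load the baseline recorded for the main branch…
baseline = json.loads('{"requests_per_second": 50000}')
ok, ratio = check_regression(baseline, 42000)
# ok is False: a 16% throughput drop is beyond the allowed 10%.
```

A real version would run wrk against the branch, parse its output and store a baseline per commit, but the comparison logic would stay this simple.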

The smallest async web response we were able to produce is the following:

import asyncio


class Protocol(asyncio.Protocol):

    def connection_made(self, transport):
        self.writer = transport

    def data_received(self, data: bytes):
        # Ignore the incoming request entirely and reply with a
        # hardcoded response (25 bytes, matching Content-Length).
        self.writer.write(b'HTTP/1.1 200 OK\r\n')
        self.writer.write(b'Content-Length: 25\r\n')
        self.writer.write(b'Content-Type: application/json\r\n')
        self.writer.write(b'\r\n')  # blank line separating headers from body
        self.writer.write(b'{"message":"Hello bench"}')

Once you have that hardcoded response, set uvloop as your event loop and create a server for it:

import uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
loop = asyncio.get_event_loop()
server = loop.create_server(Protocol, '', 8000)
loop.run_until_complete(server)
loop.run_forever()

There we go; for more details, check out the whole file in our benchmarks. But basically you have all the important parts here. That’s great, yet a bit too low-level for our taste!
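To convince yourself that those few lines really speak HTTP, a throwaway smoke test written with nothing but the standard library can fetch the hardcoded response. The protocol class is reproduced here (with the header/body separator included) so the snippet runs standalone on an ephemeral port:

```python
import asyncio
import json


class Protocol(asyncio.Protocol):
    # Same hardcoded protocol as above, reproduced so this runs standalone.
    def connection_made(self, transport):
        self.writer = transport

    def data_received(self, data: bytes):
        self.writer.write(b'HTTP/1.1 200 OK\r\n'
                          b'Content-Length: 25\r\n'
                          b'Content-Type: application/json\r\n'
                          b'\r\n'
                          b'{"message":"Hello bench"}')


async def fetch():
    loop = asyncio.get_running_loop()
    # Port 0 lets the OS pick a free ephemeral port.
    server = await loop.create_server(Protocol, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.write(b'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
    await reader.readuntil(b'\r\n\r\n')  # skip status line and headers
    body = await reader.readexactly(25)  # Content-Length announced above
    writer.close()
    server.close()
    await server.wait_closed()
    return json.loads(body)


payload = asyncio.run(fetch())
# payload == {'message': 'Hello bench'}
```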

Enter Roll, which adds routing and request/response parsing in roughly 300 lines of code. An example of the same kind of response would be:


from roll import Roll

app = Roll()


@app.route('/hello/{parameter}')
async def hello(request, response, parameter):
    response.json = {'message': f'Hello {parameter}'}

We get routing and JSON serialization under the hood, with a dynamic response based on the parameter passed in the URL. No big deal, but it’s quite handy compared to the raw version. Note that we lost about 63% of the initial number of requests per second just doing that!

Hint: it’s still better than the competition. 😇


Python 4 is already here, and it’s called Cython. — My coworker

Me: Rust is probably Python 5 (cache)! Things escalate quickly when we are trolling :p

My crazy coworker wrote a routing system using Cython to improve performance, and the jump was significant to say the least. That is part of our approach: find small existing pieces, or write our own when we think we can code a faster solution. Once you expose what you consider to be the right API, it is easy to swap a given piece for another without introducing breaking changes.
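To make the idea concrete, here is a pure-Python sketch of that kind of routing logic: URL placeholders become named regex groups at registration time, and matching returns the handler plus extracted parameters. This is illustrative only; the actual Cython implementation is different (and much faster), and the class and method names here are made up.

```python
import re


class Routes:
    """Illustrative router: {placeholder} patterns become named regex groups."""

    def __init__(self):
        self._routes = []

    def add(self, pattern, handler):
        # '/hello/{parameter}' -> '^/hello/(?P<parameter>[^/]+)$'
        regex = re.sub(r'{(\w+)}', r'(?P<\1>[^/]+)', pattern)
        self._routes.append((re.compile(f'^{regex}$'), handler))

    def match(self, path):
        for regex, handler in self._routes:
            found = regex.match(path)
            if found:
                return handler, found.groupdict()
        return None, {}


routes = Routes()
routes.add('/hello/{parameter}', 'hello-handler')
handler, params = routes.match('/hello/world')
# handler == 'hello-handler', params == {'parameter': 'world'}
```

Because the public surface is just `add` and `match`, the pure-Python version can be swapped for a compiled one without touching callers, which is exactly the point about exposing the right API.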

For instance, we are currently using the awesome httptools to parse the request and the URL, but that might change if we find a better approach. We only have these two dependencies for now, and that is by design. We are really careful and conservative about introducing new features, trying to keep aesthetics in mind. Each new addition is benchmarked and heavily discussed in terms of developer experience. This is always a tradeoff.
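To give an idea of the work a parser like httptools saves us, here is a naive, stdlib-only sketch of the happy path it handles. The real parser is an incremental C state machine coping with chunking, continuations and malformed input; this function assumes a complete, well-formed request and is not how our code actually does it.

```python
def parse_request(raw: bytes):
    """Naive happy-path HTTP/1.1 request parsing: no streaming, no edge
    cases, no validation. Purely illustrative."""
    head, _, body = raw.partition(b'\r\n\r\n')
    request_line, *header_lines = head.split(b'\r\n')
    method, path, version = request_line.split(b' ')
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(b': ')
        headers[name.lower()] = value  # header names are case-insensitive
    return method, path, headers, body


method, path, headers, body = parse_request(
    b'GET /hello/world HTTP/1.1\r\nHost: example.org\r\n\r\n'
)
# method == b'GET', path == b'/hello/world', headers == {b'host': b'example.org'}
```

Every edge case this sketch ignores is a reason to delegate to a battle-tested dependency rather than grow our own parser.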

One of our core principles is to ease pluggability via extensions and custom subclasses of core components. We are missing a way to benchmark these additions though. Maybe tools like perf will help in the future; ideally it would allow us to build a table of performance numbers given the activated extensions. Too many benchmarks are far removed from real-life request/response handling.
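One common way to implement that kind of pluggability is an event registry: extensions register coroutines on named events, and the app fires them at the right moments. The sketch below is a hypothetical illustration of the pattern, not our actual API; the class, method and event names are made up.

```python
import asyncio


class Events:
    """Hypothetical extension mechanism: listeners are coroutines
    registered per event name, fired in registration order."""

    def __init__(self):
        self._listeners = {}

    def listen(self, name):
        def wrapper(func):
            self._listeners.setdefault(name, []).append(func)
            return func
        return wrapper

    async def fire(self, name, *args):
        for func in self._listeners.get(name, ()):
            await func(*args)


app = Events()
seen = []


@app.listen('request')  # an extension hooking into request handling
async def log_request(path):
    seen.append(path)


asyncio.run(app.fire('request', '/hello'))
# seen == ['/hello']
```

The nice property is that the core stays oblivious to extensions: it only fires events, so each addition can be benchmarked by toggling its registration on and off.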

An option we have not explored yet is the use of the newly introduced annotations, as APIStar did, to only parse/evaluate the pertinent parts of the incoming request. Not sure it will work in our particular case though.
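The gist of that annotation-driven idea, as I understand it, is to introspect a handler’s signature up front and skip parsing anything the handler never asks for. A minimal sketch, with entirely made-up parameter names and conventions:

```python
import inspect


def needed_parts(handler):
    """Hypothetical: inspect a handler's annotated parameters to decide
    which parts of the request are worth parsing at all."""
    sig = inspect.signature(handler)
    return {name: param.annotation
            for name, param in sig.parameters.items()
            if param.annotation is not inspect.Parameter.empty}


async def hello(query: dict, body: bytes):
    ...


parts = needed_parts(hello)
# parts == {'query': dict, 'body': bytes}
# A handler annotated with only `query` would let us skip body parsing.
```

Whether that pays off for us is exactly the open question: the introspection happens once at startup, but the conditional parsing has to beat httptools’ raw speed on every request.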

More than code

That part is really intimate. I enjoy taking the time to carve my own tools and challenge my assumptions with experienced colleagues. I can feel the progression, both from a personal point of view and as a team. You might think that time is lost, that building your own “homemade framework” is a waste and will be painful to maintain. That is OK, and I used to think that too.

With as much hindsight as I can have on such short notice, I think building that core reusable component saved us as a team when we were targeted by politics-related issues and pressure from all sides. It was kind of a safe place where we could focus for half a day and get into that pair-flow state, knowing that we were producing an open-source common.

Takeaways

Do it! It has been a hell of a rollercoaster to code our own solution, but we are quite happy with the result, and we know we will reuse it for future and past projects, so the return on investment is not null even if we did not put the actual project into production. And by the way, the technical challenges along the way were as interesting as the end result, so no regrets whatsoever. Bonus: it built confidence in our ability to overcome problems and made us a better team.

Don’t do it!!! If you are not mature enough as a team to take the time to craft a tool that fits your needs, it is probably better to reuse existing ones. They will be more generic, and there is no problem with that: 99% of the time it is the more pertinent approach!

Have fun roll-ing out your own framework!