Load testing with locust.io

Testing code at scale is important, and there are many tools out there to do it, but recently I’ve fallen a little in love with a relatively new tool from locust.io. Locust is an easy-to-use Python load testing tool. It builds on the Python requests library and ZeroMQ to let you easily simulate millions of users hitting your website or API. In the past I’ve used Siege (which is great) or, on Google App Engine, Furious. Furious spoils me because I’m able to spin off hundreds of thousands of complex tasks asynchronously and then monitor the aggregate statistics. I wanted something off of App Engine that allowed me to do the same by composing user interactions with my APIs. For example, imagine you want to test several steps that all need to keep state between calls on the client:

  1. Send data to endpoint A
  2. Send data to endpoint B
  3. Kick off a processing job at endpoint C using results from A & B calls

Locust provides some key abstractions that let me write a test with minimal code. You can define different types of users (locusts) that all hit your API or website at once, and each user type can have different nested ‘tasks’, each with a different probability of being executed. This lets you very quickly compose different types of users who interact with your website or API in different ways. Since Locust is built on top of the fantastic Python requests library, anything you can do with requests you can load test with Locust. Locust also has a built-in web app that displays live statistics and lets you manipulate how many locusts are running through tests. Since each locust runs in a greenlet, a single testing host can spin up tens of thousands of locusts easily. If that isn’t enough, you can easily set up a distributed load test with multiple slaves reporting to a single master (enabled by ZeroMQ).
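As a rough sketch of those abstractions (the endpoints, class names, and weights here are made up for illustration), both task selection and user types can be weighted:

from locust import HttpLocust, TaskSet, task

class BrowsingTasks(TaskSet):
    # @task takes an optional weight: /popular is picked 3x as often as /rare
    @task(3)
    def popular_page(self):
        self.client.get("/popular")

    @task(1)
    def rare_page(self):
        self.client.get("/rare")

class ApiTasks(TaskSet):
    @task
    def status(self):
        self.client.get("/api/status")

class WebsiteUser(HttpLocust):
    task_set = BrowsingTasks
    weight = 3  # three website users for every API user
    min_wait = 1000
    max_wait = 5000

class ApiUser(HttpLocust):
    task_set = ApiTasks
    weight = 1
    min_wait = 1000
    max_wait = 5000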

Locust is a little weak on the analytics side, but you should be measuring everything server side already, so good metrics shouldn’t be hard to come by. Locust also makes it easy to dump out a CSV of all the requests if you fancy doing some post-processing (graph generation, analysis) on your own. Something I’d like to see is requests grouped by the actual locust user type, so you could easily identify which kinds of users are more prone to failures under high load. Live editing of the probability of specific tasks would also be nice, as would statistics broken down by task set instead of URL endpoint. Still, I’ve found the library to be very useful when combined with statsd and graphite.
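For the statsd/graphite piece, Locust exposes event hooks you can attach handlers to. Here’s a minimal sketch that forwards request timings to a statsd daemon on localhost (this assumes the statsd Python package; the metric names are my own invention, and the handler signatures match the event hooks in the version I was using):

from locust import events
import statsd  # pip install statsd

stats_client = statsd.StatsClient('localhost', 8125)

def on_request_success(request_type, name, response_time, response_length):
    # name is the request path; swap '/' for '.' so graphite is happy
    stats_client.timing('locust.%s%s' % (request_type, name.replace('/', '.')),
                        response_time)

def on_request_failure(request_type, name, response_time, exception):
    stats_client.incr('locust.failure.%s%s' % (request_type, name.replace('/', '.')))

events.request_success += on_request_success
events.request_failure += on_request_failure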

This is Locust’s simple web interface, which provides live analytics. You can also manipulate the number of active users through the web interface. All the key stats are clearly displayed.

[Screenshot: Locust’s web interface showing live request statistics]

A simple example of testing the outline above is below:

from locust import HttpLocust, TaskSet, task

class DataStepOneTasks(TaskSet):
    def on_start(self):
        # Read the payload for endpoint A once per user
        self.a_data = open('test_a', 'r').read()

    @task
    def post_a(self):
        self.client.post("/a", data={'data_id': 'data_a', 'data': self.a_data})
        # Hand control back to the parent task set
        self.interrupt()

class DataStepTwoTasks(TaskSet):
    tasks = {DataStepOneTasks: 1}

    def on_start(self):
        self.b_data = open('test_b', 'r').read()

    @task
    def post_b(self):
        self.client.post("/b", data={'data_id': 'data_b', 'data': self.b_data})
        self.interrupt()

class APITasks(TaskSet):
    tasks = {DataStepTwoTasks: 1}

    @task
    def post_c(self):
        self.client.post("/c", data={'data_b_id': 'data_b',
                                     'data_a_id': 'data_a'})

class APIUser(HttpLocust):
    task_set = APITasks
    min_wait = 5000  # wait 5-15 seconds between tasks
    max_wait = 15000

There are a couple of things I had to do to ensure this worked correctly. To guarantee that A and B both happen before C is called, I had to nest DataStepOneTasks inside DataStepTwoTasks, and that inside APITasks, so the chain is really A -> B -> C. The interrupt() call lets a nested task set ‘finish’ and hand control back to its parent so it will try something else. What I love about Locust is that in just a few lines of code I can test complicated behavior with millions of users hitting my website and server code (see the docs for how to run distributed loads).

You can also run the tests from the command line and then parse the output. This means you could add a small, quick smoke test of users interacting with your running code to your continuous build system, right after the build. That way, even if all your unit tests pass, you also verify that your code runs in its expected environment (i.e. a WSGI app behind uWSGI or gunicorn). It makes integration testing easy.
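A headless run for that kind of smoke test might look something like this (the file name and host are placeholders, and the flags are from the Locust version I was using; check locust --help for yours):

# -c: number of concurrent locusts, -r: hatch rate per second,
# -n: total requests before the run stops
locust -f api_locustfile.py --host=http://staging.example.com \
    --no-web -c 100 -r 10 -n 10000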

If I didn’t want to have other tasks at each nested level, I could easily have just written:

from locust import HttpLocust, TaskSet, task

class APITasks(TaskSet):

    def on_start(self):
        self.a_data = open('test_a', 'r').read()
        self.b_data = open('test_b', 'r').read()

    @task
    def post_c(self):
        # Post to A and B first, then use the returned ids to kick off C
        a_result = self.client.post("/a", data={'data_id': 'data_a', 'data': self.a_data})
        b_result = self.client.post("/b", data={'data_id': 'data_b', 'data': self.b_data})
        self.client.post("/c", data={'data_b_id': b_result.text,
                                     'data_a_id': a_result.text})

class APIUser(HttpLocust):
    task_set = APITasks
    min_wait = 5000
    max_wait = 15000

This is even simpler, and more realistic, since you would want to use whatever id was actually returned in order to kick off task C. All in all, I’ve found Locust to be really simple to use and quick to get running.
