A former sysadmin turned cloud engineer's blog

Author: Greg Campion

Asyncio unit testing the aioboto3 library

I’ve been working on a project for my company, https://gitlab.com/paessler-labs/prtg-pyprobe, which uses Python’s asyncio library quite heavily. It’s a great framework and so far we’ve had very few issues implementing it, but I recently wrote some new code that I wanted to mock for unit testing, and it was a bit more challenging than I thought it would be.

Here is the code that I wanted to test:

start = time.time()
async with aioboto3.resource(
    "s3", aws_access_key_id=task_data["aws_access_key"], aws_secret_access_key=task_data["aws_secret_key"]
) as s3:
    all_buckets = s3.buckets.all()
    i = 0
    async for _ in all_buckets:
        i += 1

    s3_bucket_data.add_channel(
        name="Total Buckets", mode="integer", kind="Custom", customunit="buckets", value=i
    )

    s3_bucket_data.message = f"Your AWS account has {i} buckets."

    end = (time.time() - start) * 1000
    s3_bucket_data.add_channel(name="Total Query Time", mode="float", kind="TimeResponse", value=end)

And here is what the unit test ended up looking like:

@pytest.mark.asyncio
class TestS3TotalWork:
    @asynctest.patch("aioboto3.resource")
    async def test_sensor_s3_total(self, aioboto_mock, s3_total_sensor):
        buckets = asynctest.MagicMock()
        buckets.__aiter__.return_value = ["bucket1", "bucket2", "bucket3"]

        aioboto_mock.return_value.__aenter__.return_value.buckets.all.return_value = buckets

        s3_total_queue = asyncio.Queue()

        await s3_total_sensor.work(task_data=task_data(), q=s3_total_queue)

        queue_result = await s3_total_queue.get()

        aioboto_mock.assert_called_once_with("s3", aws_access_key_id="1123124", aws_secret_access_key="jkh2089")
        assert queue_result["message"] == "Your AWS account has 3 buckets."
        assert {
            "customunit": "buckets",
            "kind": "Custom",
            "mode": "integer",
            "name": "Total Buckets",
            "value": 3,
        } in queue_result["channel"]

To be able to test this, I ended up using the asynctest library (https://pypi.org/project/asynctest/), which was really helpful for mocking async iterables and context managers.

The trick for me was figuring out what was returning what, and when, for this call:

all_buckets = s3.buckets.all()

The way that I tried to think of it is as follows:

@asynctest.patch("aioboto3.resource")
...
aioboto_mock.return_value.__aenter__.return_value.buckets.all.return_value = buckets

‘aioboto_mock’ mocks the library’s ‘resource’ attribute.

The ‘return_value’ of that call is the ‘s3’ context manager.

Then we step into the context via the ‘__aenter__’ method.

Then we need another ‘return_value’, since that is what the context manager yields: the ‘s3’ object. On that object we finally set the ‘return_value’ of the ‘buckets.all()’ method.

buckets = asynctest.MagicMock()
buckets.__aiter__.return_value = ["bucket1", "bucket2", "bucket3"]

The buckets.all() method returns an async iterable, so to patch the result of the s3.buckets.all() call we also have to make it a MagicMock whose ‘__aiter__’ return value supplies the items to iterate over.

Totally simple, right? :D Hope this helps anyone else trying to understand how to mock async functions!

Check out master for all repositories in a folder

When dealing with microservices, sometimes I’ve found myself needing to go through all the repositories for a project in a folder and checkout the most recent master so it can be deployed.

The other day I had to do this yet again, so I finally wrote a script to make it go a bit faster. If there are uncommitted changes in a repository’s working tree, it will notify you at the end so you don’t have to worry about losing changes in any repos. Below is the result:

#!/bin/bash

REPOSITORIES=${PWD}
RED='\033[0;31m'
NC='\033[0m' # No Color
IFS=$'\n'
MANUAL_UPDATE_REPOS=()

for REPO in `ls "$REPOSITORIES/"`
do
  if [ -d "$REPOSITORIES/$REPO" ]
  then
    echo "Updating $REPOSITORIES/$REPO at `date`"
    if [ -d "$REPOSITORIES/$REPO/.git" ]
    then
      cd "$REPOSITORIES/$REPO"
      repo_status=$(git status --porcelain)
      if [ -n "$repo_status" ]
      then
        echo -e "You need to stash or commit your code before this repository ${RED}$REPO${NC} can be set to master"
        MANUAL_UPDATE_REPOS+=($REPO)
      else
        echo "Fetching from remote"
        git fetch
        echo "Checking out master"
        git checkout master
        echo "Pulling"
        git pull
      fi
    else
      echo "Skipping because it doesn't look like it has a .git folder."
    fi
    echo "Done at `date`"
  fi
done
if [ ${#MANUAL_UPDATE_REPOS[@]} -gt 0 ]
then
  echo "These repos:"
  printf "${RED}%s${NC}\n" "${MANUAL_UPDATE_REPOS[@]}"
  echo "do not have clean working trees and master cannot be checked out."
fi

Default Git Commit Message

If you want a sentence or reminder to pop up in every commit subject, it’s pretty easy to do. I was typing this by hand for most of my commits, got sick of it, and found an easy way to solve the issue.

These directions will make some text pop up in the subject of every commit, in every repository, but it’s also possible to do this per repository.

First, create a file in your home directory:

touch ~/.commit-template

Then add the text you want in the subject of every commit:

[ci-skip] MYTICKET-

Once you have that, you just have to run a simple command to set this as your commit template globally:

git config --global commit.template ~/.commit-template
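To scope the template to a single repository instead, a sketch: run the same command without the --global flag from inside that repository (using the template file created above), and it will be written to the repository’s own config.

```shell
# inside the repository: writes to .git/config instead of ~/.gitconfig
git config commit.template ~/.commit-template

# verify which template is active for this repository
git config --get commit.template
```

Git reads the repository-local value in preference to the global one, so this also lets a single project override your global template.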

Monitorama!

TL;DR: This conference is really great, as it’s put together by a group of people who aren’t tied to any company or beholden to any set of ideals; they just love monitoring 🙂. There was a lot of open discourse, not only about monitoring in the technical sense but also about the human side of monitoring, which isn’t something that gets a lot of press.

The main thing I learned: We need to shift our focus about what we monitor.

Overview

The conference was in the Compagnietheater, which was a beautiful venue. There were only 400 people in attendance but, from the many discussions that I had, they were the upper echelon of the monitoring community. I met and talked with quite a few people, including some of the top people who run monitoring operations for Apple, Yahoo, the BBC and Standard Chartered Bank. Through these discussions, I learned a lot about what companies are looking for and what the future of monitoring will likely look like. These are the people investing time and money into the open source projects used by millions, like Graphite, Grafana, Prometheus and Icinga, and they are shaping the future of monitoring through those initiatives.


Well now I’ve done it….

I’ve been wanting to set up a blog / personal website to start documenting some of the things that I do at work and at home so others can benefit, and this is the start. I guess this will be something of a review, since I wanted to write about how unbelievably easy it was to set up a WordPress server / site with AWS Lightsail. I’ve registered a domain, got a mail server up and running for said domain, and set up a functioning WordPress server, all whilst watching France and, it looks like, now England progress into the finals of the World Cup. Well, hopefully I will keep this up and give something back to my tech community!

© 2021 The Campion's Blog
