Cropping tiled satellite imagery to a GeoJSON boundary (Python)

April 19, 2019


When working with satellite imagery, it invariably arrives as tilesets. This makes sense: the datasets are large, and usually you're only interested in a small part of the world at high resolution, so you fetch only that part.

In a previous blog post I showed how to create a deforestation tracker using satellite data.

This was a hack with some serious performance problems, so I'm reworking the process to produce results and visualisations faster.

For example, here are two tiles we’ve looked at previously, with the GeoJSON boundary:


Note that the area we’re interested in is quite small compared to the size of the tile images, so there’s a lot of information we can throw away. The two original PNGs are 1.2MB and 1.4MB in size:

original images

What we’re going to do is join the two tiles into one image, crop and mask it to the GeoJSON boundary, then render the single, cropped & masked image.

Setup your dev environment

Before we get cracking with Python, why am I using Python in the first place?

  1. Speed. Python has NumPy, which handles vectorised array and matrix operations (and images can be represented as arrays). I built this algorithm in Javascript first, then Python, and the speed-up was roughly 40× (see the end of this post).
  2. Machine Learning. Python is the de-facto language for machine learning and AI. If I want to integrate some deep learning into the imaging pipeline (e.g. identifying features from satellite photos), I want to build in Python-based processing from the start.


To follow along you'll need:

  • Git
  • Python 3.5.x

The code is on Github, so clone the repo, then create and activate a new Python virtual environment and install the dependencies:

$ git clone
$ python -m venv .env
$ source .env/bin/activate
$ pip install numpy Pillow

Combining the images

Using the two tiles as described above, we create a tileset with the image data (parsed by Pillow, a fork of PIL, the Python Imaging Library), and the geographic boundaries of those images:

from PIL import Image
from util import BBox, V2

tiles = [
    {
        # Tile image paths are placeholders here - see the repo for the input files
        'img':'./input/tile_1.png'),
        'bbox': BBox(V2(30, -25), V2(40, -15)),
    },
    {
        'img':'./input/tile_2.png'),
        'bbox': BBox(V2(30, -35), V2(40, -25)),
    },
]

(BBox and V2 are small utility classes I wrote for a bounding box and a 2-dimensional vector; see the repo for the code.)
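If you don't want to dig through the repo, a minimal reconstruction of those two classes, inferred purely from how they're used in the snippets below, might look like this (the real implementations may differ):

```python
from collections import namedtuple

# V2: a simple 2D vector with named x/y components
V2 = namedtuple('V2', ['x', 'y'])

class BBox:
    """An axis-aligned bounding box defined by min and max corner vectors.
    (A sketch inferred from usage in this post, not the repo's exact code.)"""

    def __init__(self, min_v, max_v):
        self.min = min_v
        self.max = max_v

    @property
    def width(self):
        return self.max.x - self.min.x

    @property
    def height(self):
        return self.max.y - self.min.y

    def __repr__(self):
        return 'min: ({0}, {1}) max: ({2}, {3})'.format(
            self.min.x, self.min.y, self.max.x, self.max.y)
```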

What we need to calculate is the full geographic bounding box of all the images in the tileset, as well as the size of the resulting image in pixels:

def calculate_composite_bbox_and_dimensions(tiles):
    """Calculate the geographic bounding box and the required
    composite image dimensions for the given tiles."""
    bbox = None
    for i, tile in enumerate(tiles):
        if i == 0:
            bbox = BBox(tile['bbox'].min, tile['bbox'].max)
        else:
            # Grow the bounding box to include this tile
            bbox = BBox(
                V2(min(bbox.min.x, tile['bbox'].min.x),
                   min(bbox.min.y, tile['bbox'].min.y)),
                V2(max(bbox.max.x, tile['bbox'].max.x),
                   max(bbox.max.y, tile['bbox'].max.y)))

    # Use the first tile's bounding box and dimensions - the assumption here is
    # that all the tiles are of the same size, but this is not checked.
    # The function should really assert this assumption.
    tile_image_width = tiles[0]['img'].size[0]
    tile_image_height = tiles[0]['img'].size[1]
    width = round(bbox.width / tiles[0]['bbox'].width) * tile_image_width
    height = round(bbox.height / tiles[0]['bbox'].height) * tile_image_height
    return (bbox, (width, height))

and executing:

# 1. Calculate the composite bounding box and dimensions:
(composite_bbox, composite_dims) = calculate_composite_bbox_and_dimensions(tiles)
print('composite image bounding box:', composite_bbox)
print('composite image dimensions:', composite_dims)


composite image bounding box: min: (30, -35) max: (40, -15)
composite image dimensions: (1120, 2240)

So we know to stitch the tiles together we need an image 1120 by 2240 pixels, and that image represents a geographic region 30 to 40 degrees longitude and -35 to -15 degrees latitude.
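As a quick check on those numbers (the 1120×1120-pixel tile size follows from the composite being one tile wide), the dimension calculation boils down to counting whole tiles in each direction:

```python
# Composite bbox: lon 30..40, lat -35..-15; each tile covers 10 x 10 degrees
composite_width_deg = 40 - 30    # 10
composite_height_deg = -15 - -35  # 20
tile_width_deg, tile_height_deg = 10, 10
tile_px = 1120  # each source tile is 1120 x 1120 pixels

tiles_across = round(composite_width_deg / tile_width_deg)   # 1
tiles_down = round(composite_height_deg / tile_height_deg)   # 2
dims = (tiles_across * tile_px, tiles_down * tile_px)
print(dims)  # (1120, 2240)
```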

We can use the PIL Image class to create a new composite image from the tiles:

def create_composite_image(tiles, bbox, dims):
    """With the given tileset, create a new image with the given
    bounding box and dimensions. Returns a PIL image."""
    (width, height) = dims

    # Default is black with 0 opacity
    composite_img ='RGBA', dims, (0, 0, 0, 0))

    for tile in tiles:
        offset_x = round(
            (tile['bbox'].min.x - bbox.min.x) / bbox.width * width)
        offset_y = round(
            -(tile['bbox'].max.y - bbox.max.y) / bbox.height * height)
        composite_img.paste(tile['img'], (offset_x, offset_y))
    return composite_img

# 2. Create a simple composite image:
composite_img = create_composite_image(tiles, composite_bbox, composite_dims)'./output/composite.png')

And the resulting image will be the two stitched together:


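As a sanity check on the paste offsets computed in create_composite_image: the northern tile should land at (0, 0) and the southern one 1120 pixels down. Plugging the two tile bounding boxes into the same arithmetic:

```python
# Composite: lon 30..40 (width 10), lat -35..-15 (height 20), 1120 x 2240 px
bbox_min_x, bbox_max_y = 30, -15
bbox_width, bbox_height = 10, 20
width, height = 1120, 2240

def offsets(tile_min_x, tile_max_y):
    # Same arithmetic as in create_composite_image above
    offset_x = round((tile_min_x - bbox_min_x) / bbox_width * width)
    offset_y = round(-(tile_max_y - bbox_max_y) / bbox_height * height)
    return (offset_x, offset_y)

print(offsets(30, -15))  # northern tile -> (0, 0)
print(offsets(30, -25))  # southern tile -> (0, 1120)
```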
Crop the composite image to the GeoJSON boundary

Now we have a single image, we can crop it to the GeoJSON boundary, which will dramatically reduce its size.

from math import floor, ceil

def geo_json_bbox(coordinates):
    """Calculate the bounding box from the GeoJSON coordinates."""
    bbox = None
    for i, coordinate in enumerate(coordinates):
        vec = V2(coordinate[0], coordinate[1])
        if i == 0:
            bbox = BBox(vec, vec)
        else:
            # Grow the bounding box to include this coordinate
            bbox = BBox(
                V2(min(bbox.min.x, vec.x), min(bbox.min.y, vec.y)),
                V2(max(bbox.max.x, vec.x), max(bbox.max.y, vec.y)))
    return bbox

def calculate_crop(image, uncropped_bbox, boundary_bbox):
    """Crop an image with a given geographic bounding box to the new bounding
    box. As we round up the pixel size of the new image, the geographic
    bounding box of the new image will potentially be slightly larger than the
    required one, so also calculate the correct bounding box for the result."""
    pixel_min_x = floor(
        (boundary_bbox.min.x - uncropped_bbox.min.x) /
        uncropped_bbox.width * image.size[0])
    pixel_min_y = floor(
        (uncropped_bbox.max.y - boundary_bbox.max.y) /
        uncropped_bbox.height * image.size[1])
    pixel_max_x = ceil(
        (boundary_bbox.max.x - uncropped_bbox.min.x) /
        uncropped_bbox.width * image.size[0])
    pixel_max_y = ceil(
        (uncropped_bbox.max.y - boundary_bbox.min.y) /
        uncropped_bbox.height * image.size[1])
    cropped_pixels = (pixel_min_x, pixel_min_y, pixel_max_x, pixel_max_y)

    cropped_bbox = BBox(
        V2(
            uncropped_bbox.min.x + pixel_min_x /
            image.size[0] * uncropped_bbox.width,
            uncropped_bbox.max.y - pixel_max_y /
            image.size[1] * uncropped_bbox.height),
        V2(
            uncropped_bbox.min.x + pixel_max_x /
            image.size[0] * uncropped_bbox.width,
            uncropped_bbox.max.y - pixel_min_y /
            image.size[1] * uncropped_bbox.height))
    return (cropped_pixels, cropped_bbox)

def crop_to_boundary(boundary_coordinates, composite_img, composite_bbox):
    """Crop an image to the given GeoJSON boundary."""
    boundary_bbox = geo_json_bbox(boundary_coordinates)
    (cropped_pixels, cropped_bbox) = calculate_crop(
        composite_img, composite_bbox, boundary_bbox)
    return (composite_img.crop(cropped_pixels), cropped_bbox)


# 3. Crop it to the GeoJSON boundary bounding box
import json

with open('./input/knp_boundary.geojson', 'r') as f:
    knp = json.load(f)
coordinates = knp['features'][0]['geometry']['coordinates'][0]
(cropped_to_boundary_image, cropped_bbox) = crop_to_boundary(
    coordinates, composite_img, composite_bbox)
print('cropped to boundary size: {0}'.format(cropped_to_boundary_image.size))
print('cropped to boundary bbox: \n\t{0}\n\t{1}'.format(
    cropped_bbox.min, cropped_bbox.max))


cropped to boundary size: (129, 360)
cropped to boundary bbox:
	(30.883928571428573, -25.535714285714285)
	(32.035714285714285, -22.32142857142857)

and an image cropped to the bounding box of the GeoJSON boundary:


Mask to the GeoJSON boundary

The final step is to mask out the pixels outside the boundary, and Pillow has some handy functions for creating a mask from a polygon.
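The trick is to draw the polygon into a single-channel ('L' mode) image with fill value 1, which gives a 0/1 array we can multiply against the pixel data. A tiny standalone example of that mask trick (an 8×8 image and a triangle, just for illustration):

```python
import numpy as np
from PIL import Image, ImageDraw

# Draw a triangle into a tiny 8x8 single-channel mask image
mask_image = Image.new('L', (8, 8), 0)
ImageDraw.Draw(mask_image).polygon([(0, 0), (7, 0), (0, 7)], outline=1, fill=1)
mask = np.array(mask_image)

print(mask.shape)                         # (8, 8)
print(int(mask[0, 0]), int(mask[7, 7]))   # 1 0: inside vs outside the triangle
```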

import numpy as np
from PIL import Image, ImageDraw

def create_boundary_pixels(coordinates, bbox, image_dims):
    """For the given GeoJSON coordinates, map to a list
    of pixel positions."""
    (image_width, image_height) = image_dims

    def coordinate_to_pixels(coordinate):
        return (
            round((coordinate[0] - bbox.min.x) / bbox.width * image_width),
            round((bbox.max.y - coordinate[1]) / bbox.height * image_height))

    pixels = map(coordinate_to_pixels, coordinates)
    return list(pixels)

def mask_pixels_outside_boundary(cropped_to_boundary_image,
                                 boundary_coordinates, boundary_bbox):
    """Mask the image to only include pixels inside the GeoJSON boundary."""
    polygon_pixels = create_boundary_pixels(
        boundary_coordinates, boundary_bbox, cropped_to_boundary_image.size)
    mask_image ='L', cropped_to_boundary_image.size, 0)
    ImageDraw.Draw(mask_image).polygon(polygon_pixels, outline=0, fill=1)
    mask_raw = np.array(mask_image)

    cropped_to_bbox_raw = np.asarray(cropped_to_boundary_image)
    result_raw = np.empty(cropped_to_bbox_raw.shape, dtype='uint8')
    # Mask all channels to reduce file size.
    # How to do RGB in one step?
    result_raw[:, :, 0] = mask_raw * cropped_to_bbox_raw[:, :, 0]
    result_raw[:, :, 1] = mask_raw * cropped_to_bbox_raw[:, :, 1]
    result_raw[:, :, 2] = mask_raw * cropped_to_bbox_raw[:, :, 2]
    result_raw[:, :, 3] = mask_raw * 255
    result_image = Image.fromarray(result_raw, 'RGBA')
    return result_image

# 4. Remove pixels outside the boundary
masked_image = mask_pixels_outside_boundary(
    cropped_to_boundary_image, coordinates, cropped_bbox)

which only keeps the pixels we’re interested in:


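Incidentally, answering the "How to do RGB in one step?" comment in the code above: give the mask a trailing axis and let NumPy broadcasting handle all three colour channels at once. A sketch with stand-in arrays:

```python
import numpy as np

# A 2x2 RGBA image and a 0/1 mask, standing in for the real data above
pixels = np.full((2, 2, 4), 200, dtype='uint8')
mask = np.array([[1, 0], [0, 1]], dtype='uint8')

result = np.empty(pixels.shape, dtype='uint8')
# mask[:, :, None] has shape (2, 2, 1) and broadcasts over the three channels
result[:, :, :3] = mask[:, :, None] * pixels[:, :, :3]
result[:, :, 3] = mask * 255

print(result[0, 0])  # [200 200 200 255] -- kept
print(result[0, 1])  # [0 0 0 0]         -- masked out
```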
What is the resulting file size?


So we’ve gone from almost 3MB to 27KB 👍🏼.

(Incidentally, if we keep the RGB data and just set the alpha to 0 for the masked pixels, the file size is 54KB.)
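That variant is a one-line change: leave the RGB channels untouched and zero only the alpha channel outside the boundary (again sketched with stand-in arrays):

```python
import numpy as np

pixels = np.full((2, 2, 4), 200, dtype='uint8')  # stand-in RGBA data
mask = np.array([[1, 0], [0, 1]], dtype='uint8')

result = np.array(pixels)      # copy; keep the RGB data everywhere
result[:, :, 3] = mask * 255   # transparent outside the boundary

print(result[0, 1])  # [200 200 200 0] -- invisible, but RGB retained
```

The retained RGB data compresses less well, which is why this version comes out roughly twice the size on disk.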


It's not really fair to compare the JS implementation to the Python one, as I've done nothing to make the JS version faster (e.g. using native Canvas operations, if that's even possible in a headless environment). Think of it as a language-agnostic improvement.

For 100 repetitions:

  • Naive JS: 128 seconds
  • Python: 3.2 seconds
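The timings above come from repeating the full pipeline 100 times; a minimal harness along these lines (the function body here is just a stand-in for the real stitch/crop/mask run) reproduces the shape of the measurement:

```python
import timeit

def process():
    # Stand-in for one full stitch/crop/mask run of the pipeline above
    return sum(i * i for i in range(10000))

elapsed = timeit.timeit(process, number=100)
print('100 repetitions: {0:.2f} seconds'.format(elapsed))
```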

Naive JS vs Python


I’m working on a web application to output NDVI statistics for any GeoJSON-delimited geographic region, which can be used to monitor deforestation and carbon capture. I’ll be using this code in the Python image processing pipeline.

All the code I've listed here is on Github; I hope you find it useful.