
Howto:Processing d-tpp using Python

From FlightGear wiki
This article is a stub. You can help the wiki by expanding it.

Screenshot showing the scraped, converted and transformed approach chart for KSFO 28R (aspect ratio doesn't matter for machine learning purposes, i.e. we can use random scaling/ratios here to come up with artificial training data).
Simplified navigation charts merged into a single texture


If processing actual PDFs to "retrieve" such navigational data procedurally is ever supposed to "fly", I think it would have to be done using OpenCV running in a background thread (actually a bunch of threads in a separate process), i.e. using machine learning - basically, feeding it a bunch of manually-annotated PDFs, segmenting each PDF into sub-areas (horizontal/vertical profile, frequencies, identifier etc.) and running neural networks.

Basically, such a thing would need to be very modular to be feasible - i.e. parallel processing of the rasterized image on the GPU, to split the chart into known components and retrieve the identifiers, frequencies, bearings etc. that way (it would require an OCR stage, too).

It is kind of an interesting problem and it would address a bunch of legal issues, too - downloading such data from the web works for a reason - but it would definitely be a rather complex piece of software, I believe, and we would want to get people involved who are familiar with machine learning and computer vision (OpenCV). It is kind of a superset of doing OCR on approach charts, i.e. not just looking for a character set, but actual document structure and "iconography" for airports, navaids, route markers and so on.


Screenshot showing scrapy scraping d-TPPs

Come up with the Python machinery to automatically download aviation charts and classify them for further processing/parsing (data extraction):

We will be downloading two different AIRAC cycles, i.e. at the time of writing 1712 & 1713:

Each directory contains a set of charts that will be post-processed by converting them to raster images.

Data sources

  • d-TPP
  • EuroControl [1]
  • VATSIM Charts [2] [3]
  • IVAO Charts [4]

Chart Classification

  • STARs - Standard Terminal Arrivals
  • IAPs - Instrument Approach Procedures
  • DPs - Departure Procedures


XML Processing
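
Since the official d-TPP distribution ships an XML metafile describing each chart, classification can start from that metadata rather than from the PDFs themselves. Below is a minimal sketch using Python's ElementTree, assuming <record> elements with <chart_code> and <pdf_name> children; the SAMPLE snippet is a made-up excerpt, and the real metafile layout may differ between cycles:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical minimal excerpt of a d-TPP metafile; the real file
# contains one <record> per chart, plus airport/city metadata.
SAMPLE = """<digital_tpp cycle="1713">
  <record>
    <chart_code>STAR</chart_code>
    <chart_name>BDEGA TWO</chart_name>
    <pdf_name>00375BDEGA.PDF</pdf_name>
  </record>
  <record>
    <chart_code>IAP</chart_code>
    <chart_name>ILS OR LOC RWY 28R</chart_name>
    <pdf_name>00375IL28R.PDF</pdf_name>
  </record>
</digital_tpp>"""

def classify_charts(xml_text):
    """Group chart PDFs by their chart_code (STAR, IAP, DP, ...)."""
    charts = defaultdict(list)
    root = ET.fromstring(xml_text)
    for record in root.iter('record'):
        code = record.findtext('chart_code')
        pdf = record.findtext('pdf_name')
        charts[code].append(pdf)
    return dict(charts)

print(classify_charts(SAMPLE))
# → {'STAR': ['00375BDEGA.PDF'], 'IAP': ['00375IL28R.PDF']}
```

The resulting mapping could then drive the per-category post-processing below.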


Note  This will download roughly 4 GB of data in ~17k files, for each AIRAC cycle!
  • this should support caching
  • and interrupting/resuming scraping
  • Alternatively, use a media pipeline [1]
import os

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.http import Request

# alternative: let scrapy's FilesPipeline handle the downloads (media pipeline)
ITEM_PIPELINES = {'scrapy.pipelines.files.FilesPipeline': 1}

def createFolder(directory):
    try:
        if not os.path.exists(directory):
            os.makedirs(directory)
    except OSError:
        print('Error: Creating directory. ' + directory)

class dTPPSpider(scrapy.Spider):
    name = 'dTPPSpider'
    # https://doc.scrapy.org/en/latest/topics/settings.html
    custom_settings = {
        'HTTPCACHE_ENABLED': True,  # cache responses so scraping can be interrupted/resumed
        'HTTPCACHE_STORAGE': 'scrapy.extensions.httpcache.FilesystemCacheStorage',
        'HTTPCACHE_POLICY': 'scrapy.extensions.httpcache.RFC2616Policy',
    }

    allowed_domains = [""]

    start_urls = [""]

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            yield Request(response.urljoin(href), callback=self.save_pdf)

    def save_pdf(self, response):
        directory = './PDF/'
        path = response.url.split('/')[-1]
        cycle = response.url.split('/')[-2]
        # TODO: split folder (AIRAC cycle)
        createFolder(directory + '/' + cycle)
        self.logger.info('Saving PDF %s (cycle:%s)', path, cycle)
        with open(directory + '/' + cycle + '/' + path, 'wb') as f:
            f.write(response.body)

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(dTPPSpider)
process.start() # the script will block here until the crawling is finished

Converting to images

Note  By default, all PDF files will be 387 x 594 pts (use pdfinfo to see for yourself)

pip3 install pdf2image [2]

from pdf2image import convert_from_path, convert_from_bytes
import tempfile

with tempfile.TemporaryDirectory() as path:
     images_from_path = convert_from_path('/folder/example.pdf', output_folder=path)
     # Do something here

Simplification / Feature Extraction

Screenshot showing simplified charts, based on creating thumbnails that are added to a new image.

We can easily simplify our charts by creating thumbnails for each chart and merging all files into a larger image/texture:
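
A minimal sketch of this thumbnail/merge step using Pillow; the 64x64 thumbnail size and 4-column grid are arbitrary choices for illustration, not from the original workflow:

```python
from PIL import Image

def merge_thumbnails(images, thumb_size=(64, 64), columns=4):
    """Paste a fixed-size thumbnail of each chart into one larger texture."""
    rows = (len(images) + columns - 1) // columns
    atlas = Image.new('RGB', (columns * thumb_size[0], rows * thumb_size[1]), 'white')
    for i, img in enumerate(images):
        thumb = img.copy()
        thumb.thumbnail(thumb_size)  # shrinks in place, preserving aspect ratio
        x = (i % columns) * thumb_size[0]
        y = (i // columns) * thumb_size[1]
        atlas.paste(thumb, (x, y))
    return atlas

# Stand-in charts; in practice these would come from pdf2image
charts = [Image.new('RGB', (387, 594), color) for color in ('red', 'green', 'blue')]
atlas = merge_thumbnails(charts)
atlas.save('charts_atlas.png')
```

The resulting atlas can then be uploaded as a single texture instead of one image per chart.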

Image Randomization

Since we only have very little data, we need to come up with artificial data for training purposes - we can do so by randomizing our existing image set to create all sorts of "charts", e.g. by transforming/re-scaling our images or changing their aspect ratio:
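
A sketch of such randomization using Pillow, assuming random re-scaling (which also changes the aspect ratio) and small random rotations are acceptable distortions; the ranges are illustrative guesses:

```python
import random
from PIL import Image

def randomize(img, count=5, seed=42):
    """Create artificially distorted copies of a chart by random
    re-scaling (changing size and aspect ratio) and slight rotation."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    variants = []
    for _ in range(count):
        w = int(img.width * rng.uniform(0.5, 1.5))
        h = int(img.height * rng.uniform(0.5, 1.5))
        variant = img.resize((w, h))  # random scale + aspect ratio
        if rng.random() < 0.5:
            variant = variant.rotate(rng.uniform(-5, 5), expand=True)
        variants.append(variant)
    return variants

# Stand-in chart; in practice this would be a rasterized d-TPP page
chart = Image.new('RGB', (387, 594), 'white')
training_set = randomize(chart)
```

Each call with a different seed yields another batch of artificial "charts" for training.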

Uploading to the GPU



We don't just need to do character recognition, but also deal with aviation-specific symbology/iconography. Once again, we can refer to PDF files for the specific symbols [3].
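
In practice, OpenCV's template matching (cv2.matchTemplate) would be the usual tool for locating known symbols in a rasterized chart. As a dependency-free illustration of the idea, here is a naive NumPy sketch that slides a toy "symbol" template over a blank "chart" and returns the best-matching offset:

```python
import numpy as np

def match_template(image, template):
    """Naive template matching: slide the template over the image and
    return the (row, col) offset with the smallest squared difference."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r+th, c:c+tw]
            score = np.sum((window - template) ** 2)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy example: a 3x3 "navaid symbol" embedded in an otherwise blank chart
chart = np.zeros((20, 20))
symbol = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
chart[5:8, 9:12] = symbol
print(match_template(chart, symbol))  # → (5, 9)
```

The nested loops make this O(image area x template area), which is exactly why real pipelines would offload the correlation to OpenCV or the GPU.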


pip install --user requests pdf2image

Python resources

See also


  • http://sergeis.com/web-scraping/downloading-files-scrapy-mediapipeline/
  • https://github.com/Belval/pdf2image
  • https://www.icao.int/safety/ais-aimsg/AISAIM%20Meeting%20MetaData/AIS-AIMSG%204/SN%208%20Att%20B.pdf