
Coordinate systems

From FlightGear wiki
Revision as of 19:20, 15 September 2013 by Philosopher (talk | contribs) (→‎Disclaimer: remove heading)


This page describes the various coordinate reference frames used in FlightGear. It is mostly targeted at shader development; for Cartesian/geodetic and similar systems, see the other pages on this wiki.

Disclaimer: I wrote these from memory, so if you notice something is wrong, please fix it! :)

Coordinates in shader programs

Vertex program

The vertex program handles every vertex of a model. It is executed once per vertex, and can read per-vertex attributes such as position, color, and normal. Its purpose is to transform vertices into screen space (i.e. to calculate their position on the display).

gl_Vertex holds the vertex position in the object's reference frame. Multiplying it by gl_ModelViewMatrix gives the vertex position relative to the camera (eye point); we will call this the eye frame below. Further multiplying by gl_ProjectionMatrix (or using the combined gl_ModelViewProjectionMatrix directly) gives a clip-space coordinate, which after the perspective divide becomes a screen-space position; we will call this the screen or view frame.
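As a minimal sketch, the two transformations above might look like this in a legacy GLSL 1.20 vertex shader (the fixed-function built-ins FlightGear's classic shaders use):

```glsl
// Minimal vertex shader sketch (legacy GLSL 1.20 built-ins)
#version 120

varying vec3 eyePos;  // vertex position in the eye frame, passed on to the fragment shader

void main()
{
    // Object frame -> eye frame
    eyePos = vec3(gl_ModelViewMatrix * gl_Vertex);

    // Object frame -> clip space in one step
    // (equivalent to gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex)
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```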

Fragment program

The fragment program handles every fragment (pixel candidate) that ends up on the screen. It cannot change the fragment's position; it can only calculate its color and transparency. Its inputs come from the vertex shader, interpolated across each triangle between the values computed at the vertices.

You can read the fragment's screen-space position from gl_FragCoord, and write a custom depth value to gl_FragDepth.
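A small sketch of reading the screen coordinate in a fragment shader; the screenSize uniform is an assumption here, not something FlightGear provides under that name:

```glsl
// Fragment shader sketch (legacy GLSL 1.20): visualize the screen position
#version 120

uniform vec2 screenSize;  // assumed uniform: viewport size in pixels

void main()
{
    // gl_FragCoord.xy is the window-space pixel position; map it to 0..1
    vec2 uv = gl_FragCoord.xy / screenSize;
    gl_FragColor = vec4(uv, 0.0, 1.0);
}
```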

Light sources

FlightGear uses gl_LightSource[0] to represent the sun.

In GLSL with the fixed-function light state (i.e. not passing the coordinates via uniforms or similar), light sources are defined in eye space. This means that to take, e.g., the dot product of the light direction and the surface normal, you must first bring the normal into eye space as well: dot(normalize(gl_NormalMatrix * gl_Normal), normalize(gl_LightSource[0].position.xyz)). For a directional light such as the sun, position.w is 0 and position.xyz is the direction toward the light.
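Put together, a simple diffuse sun-lighting vertex shader using this fixed-function state might look like the following sketch:

```glsl
// Diffuse sun lighting sketch (legacy GLSL 1.20, fixed-function light state)
#version 120

void main()
{
    // Transform the normal into eye space, where gl_LightSource[0] is defined
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);

    // For a directional light (the sun), position.xyz is the direction
    // toward the light, already in eye space
    vec3 l = normalize(gl_LightSource[0].position.xyz);

    float NdotL = max(dot(n, l), 0.0);
    gl_FrontColor = gl_LightSource[0].diffuse * gl_Color * NdotL;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```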

Coordinate reference frames


Terrain is referenced with the origin (0,0,0) at the center of the tile, lying at mean sea level. In this reference frame z points up, and x and y point in the latitude and longitude directions.

Every tile's frame is relative to that tile, so z is always up at that tile's center point.


Objects depend a bit on the modeller, but usually the origin is at the bottom center of the building. Z is up, and x and y most likely follow the latitude and longitude directions, since people usually model on top of reference images. But not always: x and y might point in any direction, depending on the modeller.


Clouds/cloudlets have their origin at the center of the cloud. gl_Color carries the center of each cloudlet relative to that origin. Z is up, and x and y are in the latitude/longitude directions.

The cloud layer used to be flat, so that z pointed the same way for all clouds, but I think this got fixed and the layers are now curved. I don't know whether z is relative to that position or not.
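Based on the description above (and only on it), positioning a cloudlet vertex might be sketched like this; treating gl_Color.xyz as the cloudlet-center offset is an assumption drawn from the text:

```glsl
// Cloudlet positioning sketch (legacy GLSL 1.20).
// Assumption from the text: gl_Color.xyz holds the cloudlet's center
// relative to the cloud's origin.
#version 120

void main()
{
    // Offset the vertex by its cloudlet's center within the cloud's frame
    vec3 cloudletCenter = gl_Color.xyz;
    vec4 pos = gl_Vertex + vec4(cloudletCenter, 0.0);

    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
```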


The skydome is always centered on the camera, so the origin is below the viewpoint, at mean sea level. Z is up, and x and y are rotated so that the x (or y...) axis points towards the sun. That way the sunset effects are easier to do.


Forests are rendered as a block of trees in one draw call. The origin is somewhere in the center of that block. Also, iirc a vertex attribute defines each tree's location on the terrain, so gl_Vertex is relative to the center of the forest, and that attribute must be added to it to find the correct position. Z is up, and x and y are probably in the latitude/longitude directions? They might also point in any direction, depending on how the block is rotated to make z point upwards.
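If the per-tree attribute works as described, the placement might be sketched as follows; the attribute name treePosition is hypothetical, not FlightGear's actual name:

```glsl
// Forest-block positioning sketch (legacy GLSL 1.20).
// Assumption: a per-vertex attribute holds each tree's offset from the
// forest block's origin; "treePosition" is a hypothetical name.
#version 120

attribute vec3 treePosition;  // assumed attribute: tree offset within the block

void main()
{
    // gl_Vertex is relative to the block's center; add the per-tree offset
    // to place this vertex on the terrain
    vec4 pos = gl_Vertex + vec4(treePosition, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
```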