AI Traffic: Difference between revisions

2,040 bytes added, 22 November 2021
→‎Ground traffic rendering: Add more info for different options to pack vertex data. Add a part about passenger and ground personnel animation options.


* Have all the different path segments stored in one buffer that can be read by the vertex shader. This buffer can be generated at runtime for each airport, maybe using Nasal scripts, and not updated. It's also possible to store the 1st buffer in TerraSync.
* Have a second buffer store all per-object data in an array. The array is indexed by the object ID. This includes things like vehicle animation state, which path segment each object is on, the starting time, and a speed scaling factor. The second buffer is updated regularly by the CPU.
* Two approaches to packing different types of objects into one or a few meshes:
** '''a).''' Every type of object is combined in one vertex buffer and rendered with the same shaders and texture atlas or array. Every vehicle is packed one after the other in the vertex buffer, so multiple types of vehicles are contained in the same mesh, and multiple instances of the same vehicle take up extra space.
*** All material parameters or uniform values that change between objects, or within objects, like specularity, are added as vertex attributes or textures. A vertex shader that can handle all object animations is used.
*** The entire set of objects takes up one scene graph node, and is rendered in one draw call. It's possible to do ground vehicles and AI aircraft as two meshes.
*** The object ID (an integer) can be added to each vertex as a vertex attribute. This is used to look up per-object info in the 2nd buffer, including the paths from the first buffer.
** '''b).''' Each different type of object (different ground vehicle) is instanced and takes up one scene graph node. This can allow different shaders and textures per object type. There is less vertex data taking up RAM and VRAM, but more scene graph nodes and draw calls.
*** The object ID (an integer) can be added as a per-instance vertex attribute.
* The vertex shader can trivially look at the object ID, then find the path segment, the starting time, speed scaling factor, and current time to position each object. If the current time is past the end of the current path, the object will stay at the end position.
* The buffers can be a uniform array (minimum of points in each path segment and more smoothing), Uniform Buffer Object (UBO - not available until Vulkan due to Macs not supporting it), a Texture Buffer Object (TBO - Mac support unknown), or a texture looked up in a vertex shader (maybe slower).
* Data format: Each path segment is x,y,z position at regular times - a time series of positions - e.g. [10m, 20m, 0m elevation] at t = 1s, [20m, 25m, 0m] at t = 2s, [30m, 30m, 0m] at t = 3s. 16-bit integers are enough, but 32-bit floats can also be used at the cost of 2x the occupancy. 16-bit implementation: if a texture is used it can be 16-bit RGB - R=x, G=y, B=z. Two consecutive 8-bit values can also be read from a texture. 16-bit integers can cover a ground traffic area of ~65km with a spacing of 1m, or a ~13km ground traffic area with an accuracy of 20cm. The accuracy of the positions doesn't matter too much - the vertex shader can look at 2-3+ path positions and create a smooth interpolation (a smooth curve joining the points). The time steps can also be large.
* If a path has a slowly moving segment, it will take up more points. If a path segment takes a short amount of time, it can have a special xyz number to indicate the path has ended - otherwise the path start/end time information can be stored as uniforms or in a UBO. Vehicles that move faster can have a higher speed scaling. There can be different path segment variations, for example if vehicles have a different acceleration pattern - e.g. speeding up and braking hard, or sharper turns, in an emergency.
* Both ground vehicles and taxiing aircraft can be done this way. It's possible to replace a conventional AI aircraft model with a fast-rendering ground traffic model once an aircraft starts taxiing.
* If models use animations, they should ideally use a vertex shader that can handle all these animations - so having multiple draw calls per model and CPU-side animation is avoided.
* It's possible to split off parts of craft into separate meshes where those parts are very dissimilar - at the cost of more scene graph nodes and draw calls. For example, a separate mesh for lights of various types - these lights will follow the same paths as the vehicles and position themselves correctly.
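A minimal sketch of the two-buffer positioning scheme above, written as testable Python. The names and data layout here are illustrative assumptions, not FlightGear code; in the real design a GLSL vertex shader would do the same arithmetic after fetching the two buffers from a UBO, TBO, or texture.

```python
from dataclasses import dataclass

# 1st buffer (assumed layout): path segments, each a time series of x,y,z
# positions sampled at a fixed interval. Generated once per airport.
paths = [
    [(10.0, 20.0, 0.0), (20.0, 25.0, 0.0), (30.0, 30.0, 0.0)],  # segment 0
]
DT = 1.0  # seconds between path samples

@dataclass
class ObjectState:
    """2nd buffer (assumed layout): one record per object, indexed by
    object ID, updated regularly by the CPU."""
    path_id: int       # which path segment the object is on
    start_time: float  # when the object entered the segment
    speed_scale: float # faster vehicles traverse the samples quicker

objects = [ObjectState(path_id=0, start_time=0.0, speed_scale=1.0)]

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def position(object_id, now):
    """What the vertex shader would do: look up the object's record,
    find how far along its path segment it is, and interpolate between
    the two nearest samples. Past the end of the path, the object stays
    at the end position."""
    obj = objects[object_id]
    pts = paths[obj.path_id]
    u = (now - obj.start_time) * obj.speed_scale / DT  # sample-space time
    if u <= 0.0:
        return pts[0]
    if u >= len(pts) - 1:
        return pts[-1]  # stay at the end position
    i = int(u)
    return lerp(pts[i], pts[i + 1], u - i)
```

For example, `position(0, 1.5)` lands halfway between the second and third samples; a smoother curve through 3+ samples could replace the linear interpolation, as the data-format bullet notes.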




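The 16-bit coverage figures in the data-format bullet come from simple arithmetic: 65,536 representable values times the grid spacing gives the covered distance per axis. A hedged sketch (the airport-origin and spacing parameters are assumptions for illustration):

```python
# Quantizing path positions to 16-bit unsigned integers, relative to a
# per-airport origin. RANGE is the number of values a uint16 can hold.
RANGE = 65536

def coverage_m(spacing_m):
    """Distance covered along one axis at a given grid spacing."""
    return RANGE * spacing_m

def encode(pos_m, origin_m, spacing_m):
    """Pack a metre position into a uint16 grid index."""
    q = round((pos_m - origin_m) / spacing_m)
    assert 0 <= q < RANGE, "position outside the quantized area"
    return q

def decode(q, origin_m, spacing_m):
    """Unpack a uint16 grid index back to metres."""
    return origin_m + q * spacing_m
```

With 1 m spacing this covers 65,536 m ≈ 65 km; with 0.2 m spacing, about 13 km - matching the figures above. The round trip loses at most half the spacing, which the shader's smoothing absorbs.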
The way animation is handled (animation meaning moving objects, and parts of moving objects) in modern 3D applications is to store the animation data in arrays/buffers on the GPU side and render multiple objects in one draw call.


From a quick Google search - a 2010 blog about whether UBOs or TBOs are more suited for different tasks:
''"Personally I use them [UBOs] for instanced rendering by storing the model-view matrix and related information of each and every instance in a common uniform buffer and use the instance id as an index to this combined data structure. This usage performs very well on my system.''


''Also uniform buffers can be used to store the matrices of bones and use them for implementing skeletal animation, however, I personally prefer using normal 2D textures for this purpose to take advantage of the free interpolation thanks to the dedicated texture fetching units but that’s another story.''
 
''[..] Personally I use texture buffers for different geometry deformation techniques, to resolve batching issues when the size limitation of uniform buffers is a blocking factor, and for some inverse kinematics effects." - Rastergrid blog, 2010 [https://www.rastergrid.com/blog/2010/01/uniform-buffers-vs-texture-buffers/][https://web.archive.org/web/20211119134448/https://www.rastergrid.com/blog/2010/01/uniform-buffers-vs-texture-buffers/]''
 


This is from a 2010 blog - this general approach has been the way to do lots of animated objects for a long time.
Even really complicated animations like moving bodies and cloth physics are handled by moving animation data into buffers. This approach would be needed if FlightGear ever did a simulation of crowds at an airport - not just for boarding one plane, which probably won't be too cripplingly slow, but for crowds boarding lots of planes and moving about large airports. For animating humans, there are standard animation formats compatible with the output of tools like Blender or MakeHuman [http://www.makehumancommunity.org/]. This approach may also be worthwhile for rendering lots of seated passengers with really simple movements - and a few walking about with a simple walk cycle. It may also be justified for large amounts of ground personnel at big airports (not just the few relevant to the user's plane).
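The texture-based skeletal animation the blog quote describes can be sketched on the CPU. Here keyframe rows stand in for texture rows and an explicit lerp stands in for the GPU's free linear filtering; the two-bone walk-cycle data is made up for illustration, and single angles stand in for full bone matrices.

```python
# "Texture" of animation data: rows = keyframes, columns = bones.
# A GPU linear-filtered fetch would blend adjacent rows for free;
# here the blend is written out explicitly.
keyframes = [
    [0.0, 10.0],    # t = 0.0 s: per-bone rotations in degrees
    [30.0, -10.0],  # t = 0.5 s
    [0.0, 10.0],    # t = 1.0 s: cycle repeats
]
CYCLE_S = 1.0       # walk-cycle length in seconds

def bone_pose(bone, t):
    """Sample the 'texture' at (bone, t), linearly filtering between
    keyframe rows and wrapping t so the cycle loops."""
    u = (t % CYCLE_S) / CYCLE_S * (len(keyframes) - 1)
    i = min(int(u), len(keyframes) - 2)
    f = u - i
    a, b = keyframes[i][bone], keyframes[i + 1][bone]
    return a + (b - a) * f
```

Every seated passenger or walker would evaluate this lookup in the vertex shader with its own time offset, so a whole crowd still renders in one draw call.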


== Warnings and Limitations ==