Difference between revisions of "Talk:Resource Tracking for FlightGear"

Revision as of 07:47, 24 August 2015


A while ago, I played with tracking the duration of Nasal callback execution, to get a list of scripts that tend to trigger the GC more frequently (i.e. higher "GC pressure"). And there, it's quite obvious that some scripts and functions have a higher probability of triggering the Nasal GC than others - including some of your LW/AW loops: some of them trigger the GC very rarely, while others trigger it regularly. It shouldn't be too hard to adapt that code and expose additional GC internals to the property tree, so that we know how much memory is consumed per memory pool, and possibly even per loaded script or sub-module.

== VRAM tracking ==

Looking at the wiki, the patch you've updated now should be fairly easy to extend for getting VRAM utilization, too - are you on NVIDIA or ATI/AMD hardware?

Here are code snippets for AMD/ATI and NVIDIA, which can probably be copied directly into the ::update() method of the LinuxMemoryInterface class.

This would give you the amount of VRAM that is available/used, which should also be useful for troubleshooting/feature-scaling purposes.

The right way to do this the "OOP way" would be to come up with a "GPUInfo" class and inherit 2-3 classes from it for ATI/AMD, NVIDIA and Intel - i.e. stubs to be filled in for each vendor:

    // this is the base class; its only method is a pure virtual method
    // which needs to be implemented by any child class.
    // a pointer of this class will be added to the LinuxMemoryInterface class
    class GPUInfo {
    public:
        virtual ~GPUInfo() {}
        virtual int getVRAMUsageInKB() = 0;
    };

    // Actually implement the GPUInfo interface for all 3 GPU vendors:

    class NVIDIA_GPU : public GPUInfo {
    public:
        virtual int getVRAMUsageInKB() {
            SG_LOG(SG_GENERAL, SG_ALERT, "NVIDIA VRAM tracking function not yet implemented!");
            return 100;
        }
    };

    class ATI_GPU : public GPUInfo {
    public:
        virtual int getVRAMUsageInKB() {
            SG_LOG(SG_GENERAL, SG_ALERT, "ATI VRAM tracking function not yet implemented!");
            return 200;
        }
    };

    class INTEL_GPU : public GPUInfo {
    public:
        virtual int getVRAMUsageInKB() {
            SG_LOG(SG_GENERAL, SG_ALERT, "Intel VRAM tracking function not yet implemented!");
            return 500;
        }
    };

(The return values 100, 200 and 500 are placeholders that would be replaced with the corresponding vendor-specific code.)

In the LinuxMemoryInterface::LinuxMemoryInterface() constructor, we would need to read the gl-vendor property (as shown in the help/about dialog) and instantiate the correct class; a GPUInfo pointer added to the class holds the _gpu object:

    class LinuxMemoryInterface {
    + public:
    + LinuxMemoryInterface() {}
    + virtual ~LinuxMemoryInterface() {}
    + typedef std::map<const char*, double> RamMap;
    + virtual void update() = 0;
    + double getTotalSize() const {return _total_size;}
    + //virtual void setTotalSize(double t) {_total_size=t;}
    + double getSwapSize() const {return _swap_size;}
    + //virtual void setSwapSize(double s) {_swap_size=s;}
    + protected:
    + RamMap _size;
    + std::string _path;
    + std::stringstream _pid;
    + GPUInfo* _gpu;
    + double _total_size;
    + double _swap_size;
    };

Next, we would need to change the constructor there to read the GL-vendor property and dynamically select the correct GPU class (ATI/AMD, NVIDIA or Intel):

    + LinuxMemoryInterface() {
    + SG_LOG(SG_GENERAL, SG_ALERT, "TODO: Check for GPU vendor here, and dynamically allocate correct class");
    + }

For that, we need to know the correct property to check. For instance, here's the help/about dialog showing the gl-vendor property:

[[File:About dialog 2.10.png]]

The file is $FG_ROOT/gui/dialogs/about.xml; the exact line (= property) is: http://sourceforge.net/p/flightgear/fgdata/ci/next/tree/gui/dialogs/about.xml#l196

So, a corresponding check would need to look case-insensitively for the "NVIDIA" token as a substring of the /sim/rendering/gl-vendor property, using the fgGetString() API: http://wiki.flightgear.org/Howto:Work_with_the_Property_Tree_API#Strings

    #include <algorithm>
    #include <string>

    LinuxMemoryInterface::LinuxMemoryInterface() : _gpu(NULL) {
        // default value, in case the property is missing
        std::string fallback = "NVIDIA";

        // get the actual GL vendor string from the property tree, using the fallback
        std::string glvendor = fgGetString("/sim/rendering/gl-vendor", fallback.c_str());

        // make it upper case for a case-insensitive comparison:
        // http://stackoverflow.com/questions/735204/convert-a-string-in-c-to-upper-case
        std::transform(glvendor.begin(), glvendor.end(), glvendor.begin(), ::toupper);

        // look for the "NVIDIA" substring: http://www.cplusplus.com/reference/string/string/find/
        if (glvendor.find("NVIDIA") != std::string::npos) {
            SG_LOG(SG_GENERAL, SG_ALERT, "Supported GPU found: NVIDIA");
            _gpu = new NVIDIA_GPU;
        }
        // else if ATI/AMD ...
        // else if INTEL ...
        else {
            SG_LOG(SG_GENERAL, SG_ALERT, "Unsupported GPU vendor: " << glvendor);
        }
    }

    // the destructor would need to be changed to delete the _gpu pointer:

    LinuxMemoryInterface::~LinuxMemoryInterface() {
        delete _gpu;
    }

Next, we need to change the update() method to also call _gpu->getVRAMUsageInKB():

    // or log the whole thing to the property tree
    if (_gpu) {
        SG_LOG(SG_GENERAL, SG_ALERT, "VRAM usage: " << _gpu->getVRAMUsageInKB());
    } else {
        SG_LOG(SG_GENERAL, SG_ALERT, "VRAM tracking unsupported for your GPU");
    }

== Nasal GC ==

Work in progress: This article or section will be worked on in the upcoming hours or days. See history for the latest developments.

== Article Notes ==

Feel free to edit any changes I make that you think are misleading from a coder standpoint -- [[User:Hamzaalloush|Hamzaalloush]] ([[User talk:Hamzaalloush|talk]]) 07:46, 24 August 2015 (EDT)