Talk:North American P-51 Mustang

Performance issues on non-high-end computers

Congratulations!
This is a really good-looking and very detailed aircraft, but it's unusable on anything short of a high-end system. It cuts FG's frame rate to 10 to 15 fps where it normally gets more than 30. Even worse, it doesn't even have a downgraded version in AI/Aircraft. If some other pilot shows up on MP with the p51d my FG suffers from the worst lag ever, making it completely unusable.

$ du -hs Aircraft/p51d/
1.1G    Aircraft/p51d/

What the frell!? FG is a simulator which renders the scene in real time, not an animated 3D movie which has 25 times as long to calculate the rendering. I am not able to comment on the FDM. Even though it might be good, any FDM goes bad at that frame rate.
Flughund (talk) 12:32, 10 August 2014 (UTC)

Calm down! Hal V. Engel made several announcements about the heaviness of this aircraft, and compared with other aircraft like the 777 the performance cost is about equal.
Compared with commercial sims like X-Plane and FSFX, the vertex counts and level of detail are in a similar range. Nevertheless, some optimization is always appreciated.
What are your computer specs?
--HHS (talk) 15:24, 10 August 2014 (UTC)
Hi Heiko, thanks for the reply.
Don't worry, I am way too lazy to get really upset; I just expressed my disappointment at not being able to fly the p51d anymore, and that it kills the MP fun when another pilot shows up with it. The specs of my system can be found here. Anything but up to date, I know, though I am certain there are many FG users with even less powerful machines.
I can see why people want eye candy, but wouldn't it be possible to preserve the old version? Btw, comparing it to other overdone models or even other sims is not something I count as a criterion. I am not a 3D modeler, so I am not really able to make suggestions on how to improve the model. But is it really necessary to model every single tiny detail, like little holes in the fuselage or the interior of the gear housing? Most of those details could be done with a good texture.
But don't mind users with lower spec systems, they'll find a way. I've looked closer at the situation and decided to use the old version of the Mustang as an AI model and switch to another cool, powerful taildragger for flying.
Flughund (talk) 16:55, 10 August 2014 (UTC)


Hello
So it looks like we have a volunteer to create an AI version of the exterior model, or am I mistaken? If users want things to get better they need to do more than complain. After all, I have put literally tens of thousands of hours of work into this aircraft. Only a few people have volunteered to help, and most of them didn't do anything at all (other than waste my time). Literally the only things in the current model that were produced by a volunteer are the rudder pedals and related linkage. New rudder pedals will be made sometime soon based on factory blueprints (the existing ones are not very accurate), and I expect this to take perhaps 10 to 12 hours. So volunteers are responsible for less than 0.1% of this aircraft.
The new exterior model only affected my frame rates by about 5% compared to the old external model, but I am running modern high-end hardware. With the eye candy near max I am still seeing frame rates above 55 FPS, and it often locks onto vsync (60 FPS) on a 2560x1600 display (i.e. a resolution that significantly increases the GPU load compared to a more normal monitor size) at high-LOD airports. I am also using the 8Kx8K textures rather than the default 2Kx2K textures, so this is a worst-case setup.
A suggestion that might help: how old is the version you are running? The last version I pushed defaults to 2Kx2K textures, which should work on most hardware. That change was pushed only a few days ago, so there is a significant possibility that you are using a version that predates it. Earlier versions have only 8Kx8K textures, and these would definitely cripple older/lower-end hardware even if, as is the case with your hardware/drivers, 8K textures are supported.
The latest version lets you change texture resolution/size, although this does require some manual file copying. This is to allow users to adjust texture size to their hardware capabilities, both high and low end. You can choose 1K, 2K, 4K or 8K textures for both the livery and the effects textures; if you have 1K, 2K, 4K and 8K directories in the Models/Livery and Models/Effects/Textures directories, you are on the new version and using the 2K textures by default. I have found that using the same resolution for both the effects textures and the livery textures seems to perform better on my hardware and look cleaner (i.e. crisper details). Even on my high-end hardware there is a noticeable increase in frame rate going from 8K to 4K textures, but not from going to lower resolutions than that. Since this is very high-end hardware, I would expect lower-end hardware to show an even bigger frame rate hit with the large textures. Lower-end/older hardware might benefit from going all the way down to 1K, but I don't have any way to test this; I did, however, make it possible for users to select these lower-resolution textures.
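The "manual file copying" step above can be sketched in a few lines. This is a hypothetical helper, not part of the aircraft package: the directory layout (1K/2K/4K/8K subdirectories under Models/Livery and Models/Effects/Textures, with the active textures living directly in the parent directory) is taken from the post, and the function name is made up.

```python
import shutil
from pathlib import Path

def select_texture_resolution(aircraft_dir, resolution="2K"):
    """Copy the chosen resolution set over the active texture files.

    Assumed (hypothetical) layout: each texture directory holds
    1K/2K/4K/8K subdirectories with pre-scaled variants, and the
    textures actually loaded by the model sit in the parent directory.
    """
    aircraft_dir = Path(aircraft_dir)
    for parent in ("Models/Livery", "Models/Effects/Textures"):
        src = aircraft_dir / parent / resolution
        dst = aircraft_dir / parent
        if not src.is_dir():
            raise FileNotFoundError(f"no {resolution} set in {dst}")
        for f in src.iterdir():
            if f.is_file():
                shutil.copy2(f, dst / f.name)  # overwrite the active copy

# Usage (path is illustrative):
# select_texture_resolution("Aircraft/p51d", "1K")
```

Using the same resolution for both directories, as suggested above, is then a single call per directory tree.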
The number of vertices in the exterior model is significantly higher than in the old model, at around 100K (the old one is about 15K), but compared to the scenery (which can have millions of vertices) this is a relatively small number. From what I have read, the 777 is slow even on very high-end hardware, whereas the frame rate hit of the P-51D on high-end hardware is relatively small compared to the old exterior model. So it is definitely "lighter" than the 777 or the Extra 500, but there is definitely older/lower-end hardware that is not up to the job of running the new P-51D. I don't know where the cutoff line is, since I don't have a bunch of old hardware lying around to test on. Also, various versions of this have been in git for some time now, and one responsibility of users is to do testing well before the release date. If this feedback had come months ago there might have been time to test some things, see where the bottleneck is on low-end hardware, and make some adjustments. At this late hour that will not be possible before the 3.2 release, which is days away. I am not saying that this cannot be made to run better on lower-end systems, only that it takes some effort and time, including effort from those with lower-end hardware to help figure out what the issues are and to test fixes. This is an ongoing issue in the FG community, i.e. users with low-end hardware complaining about performance but being unwilling to help resolve the issues. Not saying that this applies to you; perhaps you are an exception to the rule.
As to your system hardware: it is not merely "not close to high-end", although in its day it was close to high end, and I think clearly stating that it is very old and by modern standards very low end is in order. This is lower-end hardware than I had 10 years ago, so it definitely has its limitations and may not be up to handling higher-end models like the P-51D. My older hardware would only run the FG v2.4 P-51D with the eye candy turned down significantly, at around 25 FPS, when running FG 2.4 with scenery 1.0 and a 1600x1200 monitor. That is just barely usable IMO. Going from scenery 1.0 to 2.0 had a significant frame rate hit on my high-end system, and I shudder to think what it would have done to my old system.
Aircraft devs (and the scenery devs as well) are caught between a rock and a hard place on this, because one group of users is always complaining that aircraft (scenery) are not detailed/accurate enough, while those with lower-end hardware are always complaining that the high-end aircraft (scenery) don't run well on their systems. Basically, users can have either detailed, accurate models or models that run well on older/lower-end hardware; having both is very difficult at best, if not impossible. After all, none of us have magical powers and I can't just will your graphics card to have more CUDA cores. Again, there are likely some optimizations that can be done to the new P-51D to make it run on a wider range of hardware, and I suspect that your hardware is very close to the line between "can" and "can't be made to work OK".
On the other hand, eventually you and other users with low-end/old hardware will be replacing your systems, and almost anything purchased as a replacement, as long as it has at least a mid-level graphics card, should run the P-51D nicely, though perhaps not the 777 or Extra 500, and perhaps not with full eye candy. Over time fewer and fewer users will have low-end/older hardware, and this is a major consideration for aircraft devs. In fact, at some point my hardware will likely be considered old/low end; this is a cycle we all go through, and at some point all of us need to do hardware upgrades even if we don't want to.
Should I do a bunch of work to create something that works on low-end/old hardware but is not really a big improvement over the existing models and will eventually need to be redone, or should I look to the future and make something that will hold up over a longer time frame and run nicely on mid-range (i.e. $100 graphics card) to high-end current hardware? After all, I have over 600 hours into the new exterior model and it is still a work in progress (meaning it needs a lot of work to be "complete"); why would I do that much work to produce something that is not a big upgrade over the existing models? I started working on the new exterior model about 10 months ago, so I will likely have 1.5 years into this effort before I consider it "done", and I have averaged close to 20 hours per week on it. This is a huge undertaking that I don't want to repeat anytime soon. In fact, I think the current external model will hold up well for several decades and will likely outlive me.
Also, I work for a software company, and the software I work on is based on certain minimum hardware and system specs. If a user's hardware does not meet that minimum spec and they call in with an issue, they are told to upgrade their hardware and system software as the first step in the support process. That minimum spec is far newer and higher end than your hardware, so expecting to run current high-end software (which is more or less what the new P-51D is) on your hardware is not a realistic expectation, and most commercial software support people would basically LOL and then tell you to upgrade your hardware if such a request were made.
The wheel well details are only about 1% to 2% of the total vertices of the exterior model. These are all low-poly models (some of them are as few as 12 vertices). The same is true for what holes exist in the exterior: these are at most a few hundred vertices total and well under 1% of the vertices in the model. So these things have almost no impact on frame rate.
As has been pointed out on the forums many times, there is nothing stopping users with low-end hardware from using older, lighter models like the 3.0 P-51D model, which will run just fine on FG 3.2, FG 2.4 or anything in between. Just download it from the 3.0 aircraft page and install it in place of the new one; you can do this going all the way back to the 2.4 aircraft if you need to. "However wouldn't it be possible to preserve the old version": yes, it has been preserved; just get it from the right location and install it, as has been pointed out on the forum MANY MANY times. But maintaining the old version in place beside the new work is something I will not do, as it significantly increases the maintenance overhead for no reason and in the process slows down the rate at which improvements can be made.
If you are limited to running very old hardware (for whatever reason), then you should be aware that you may have to run older software suited to that hardware. That applies to many things besides FlightGear aircraft. From a flying point of view there is very little difference between the 3.0 and 3.2 versions of the P-51D, since the FDM, systems and cockpit are almost exactly the same, although all of these have had some, mostly minor, changes for 3.2; after all, the upgrade project is aimed at the exterior model. Future plans do include significant upgrades to the cockpit to make it more accurate (probably at about the same LOD, however), since the current cockpit models are more or less eyeballed from photos rather than based on factory blueprints, and are not very accurate. I would also be interested in how the 3.0 version runs on your hardware, since it appears from your comments that you have not run the older versions. I.e., is the older version really 2 to 3 times as fast on your hardware?
If you are actually interested in helping out, let me know and we will figure out how to make that possible, even if it is just to be a "low end system" tester, something that FG needs far more of in general if low-end systems are going to be supported going forward. After all, we can't fix the issues with low-end systems if the people with these systems refuse to help with diagnostics and testing.
Also, I experienced a lot of issues with the Blender to AC3D exporter, one of which has to do with parented objects not being correctly exported (i.e. the exporter moved and rotated these objects in what appeared to be random ways). As a result, the objects involved in the livery handling are currently not correctly parented. I know that this has some performance implications. I would like to figure out a good way to fix this, but so far I have not been able to locate the information needed to do it in a way that is not a huge amount of work (redoing work that I already did). This could have a significant impact on users with low-end systems, but on my system it seems to affect only how long it takes to start up the sim and how long it takes to switch liveries. There is a rumor that the solution to this issue was talked about on the dev list, but I have not been able to locate that discussion. If anyone knows how to solve this, please let me know.
--Hvengel (talk)
Hi, sorry, but you are falling into the same trap the 777 modeller did, although an order of magnitude lower. I'm not saying your work isn't great (or the 777's); I'm saying it's not OK for real-time rendering, while it would be a killer asset on a CG movie set or in a 3D-printing shop. You've so far fallen into the huge-texture trap, while they fell into the overly-heavy-vertex-counts and "over-modelled details that never end up visible" trap.
8Kx8K textures are overkill unless they are a proper texture atlas and the whole model uses just that one texture. Keep in mind that the texture will be uncompressed (since the .dds format is effectively banned), then mipmapped (a full mip chain makes it use roughly 1.33x the system/video memory of the original uncompressed texture). That is a waste of system and video RAM, not to mention the CPU time spent generating the mipmaps, which will cause huge freezes on any system when your aircraft joins an MP session.
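The memory cost is easy to work out: each mip level has a quarter of the pixels of the level above it, so the full chain sums the geometric series 1 + 1/4 + 1/16 + ... ≈ 4/3 of the base texture. A minimal sketch (RGBA, 4 bytes per pixel assumed):

```python
def mip_chain_bytes(width, height, bytes_per_pixel=4):
    """Total memory for an uncompressed texture plus its full mipmap chain."""
    total = 0
    w, h = width, height
    while True:
        total += w * h * bytes_per_pixel
        if w == 1 and h == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)  # next mip level
    return total

base = 8192 * 8192 * 4                          # one uncompressed 8K RGBA texture
full = mip_chain_bytes(8192, 8192)
print(round(base / 2**20, 1), "MiB base")       # 256.0 MiB base
print(round(full / 2**20, 1), "MiB with mips")  # 341.3 MiB with mips
```

So a single 8K RGBA livery costs about a third of a gigabyte of RAM/VRAM once mipmapped, per texture, per aircraft on MP.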
Before anyone comments: yes, the global water depth map is 16Kx8K, but that was necessary for proper detail, and it's a single texture used by every "Ocean" tile in the world. And now, with Stuart's changes to the materials handling, splitting it into 4 textures becomes possible and worth investigating.
OSG has its built-in LOD/decimation: any individual "feature" that would be rasterized to less than 8 px (the default; it can be set to an arbitrary size) will be culled away. Everybody's whining for LOD, but they forget one essential thing: LOD comes at a cost (roughly a 1.5x increase in memory usage, both system and video), so if your model is already "heavy", LOD won't help; it will actually make things worse (either that, or thrashing as you load LODs individually, if you went that route).
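The screen-space test behind that kind of small-feature culling can be sketched with basic perspective math. This is an illustration, not OSG's actual code, and the field-of-view and screen-height values below are arbitrary assumptions:

```python
import math

def projected_pixels(feature_size_m, distance_m, fov_y_deg=55.0, screen_h_px=1080):
    """Approximate on-screen height (in pixels) of a feature of the given
    world-space size at the given distance, for a perspective camera."""
    half_fov = math.radians(fov_y_deg) / 2.0
    # world-space height that fills the viewport at this distance
    visible_height_m = 2.0 * distance_m * math.tan(half_fov)
    return feature_size_m / visible_height_m * screen_h_px

def culled(feature_size_m, distance_m, threshold_px=8.0, **kw):
    """Small-feature culling: drop anything rasterized below the threshold."""
    return projected_pixels(feature_size_m, distance_m, **kw) < threshold_px

# A 5 cm hole in the fuselage viewed from 20 m away:
print(round(projected_pixels(0.05, 20.0), 2), "px")
print(culled(0.05, 20.0))  # True: below the 8 px default, so it gets culled
```

This is why tiny modelled details are mostly invisible in exterior views anyway: at typical viewing distances they project to a handful of pixels or less.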
There are a few very "cheap" and effective strategies to optimize your model. The biggest wins come from merging all "static" meshes into one, and using a single texture sheet (a so-called atlas) for the whole model (except, of course, transparent parts/textures that need to stay separate). Then clean up your textures: don't leave alpha channels in opaque textures (they cause unneeded overdraw and depth testing, not to mention weird culling/transparency bugs; I'm talking about the diffuse texture, not the additional ones assigned/used by effects). Then you can look into optimizing the vertex/tri count, starting with removing faces that will never be seen. Then you can look into detail optimization, like which features are easier and cheaper to do via a texture+normalmap versus what should be modelled. It's all smoke and mirrors, and the fewer resources your result needs the better, even if that means it's no longer 100% accurate (irrespective of which GPU it ends up on).
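One mechanical consequence of merging meshes onto a shared atlas is that each part's UV coordinates must be rescaled into that part's sub-rectangle of the atlas. A minimal sketch of the remapping; the layout and part names below are entirely hypothetical:

```python
def remap_uv(uv, region):
    """Map a per-part UV coordinate (0..1 range) into its sub-rectangle of
    a shared texture atlas. `region` is (u_offset, v_offset, width, height)
    in atlas-normalized coordinates."""
    u, v = uv
    u0, v0, w, h = region
    return (u0 + u * w, v0 + v * h)

# Hypothetical layout: the fuselage sheet occupies the left half of the
# atlas, the wing sheet the top-right quarter.
regions = {
    "fuselage": (0.0, 0.0, 0.5, 1.0),
    "wing":     (0.5, 0.5, 0.5, 0.5),
}

print(remap_uv((1.0, 1.0), regions["fuselage"]))  # (0.5, 1.0)
print(remap_uv((0.5, 0.5), regions["wing"]))      # (0.75, 0.75)
```

After remapping, all the merged geometry samples one texture, so the merged mesh can be drawn with a single material/state change instead of one per part.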
BTW, the minimal system requirement for FG is an OpenGL 2.1/GLSL 1.2 capable card, period. That's what it needs to run, until further notice (a port to OpenGL 3.1+ will have to be done sooner or later). That means a generation 8xxx nVidia card or the ATI equivalent (I'm not even going to discuss the IGP solutions, as they're not worth it). They're far from current cards, but they can and should handle it very well if things are properly optimized. (Would it interest you if I told you I can run trees and random buildings at density 2.9 on an 8800GT with minimal frame rate impact? Oh yeah, with dense 'grass' generated nearby too? This requires minimal changes to the source, but it's not really prime-time ready yet.)
Please note that above I'm talking about 'optimization', not about simplified/featureless models/textures/etc. for "low end specs". What you're doing with a "modern" GPU is simply offsetting the rather large unoptimized part with raw "power", and that's not OK any way you look at it. (And yes, this optimization will help the higher-end GPUs too, as they will be able to cram even more stuff into the scene, or do some extra interesting things without choking.) And frankly, the fact that FG can't run at a steady vsync-locked 60 fps on that system would ring lots of alarm bells if I were you.
As to the issue with parenting in Blender: you have to reset your objects' origins and make them all have a 0 offset to the parent.
1. Shift+C to reset the 3D cursor to the scene origin (0,0,0).
2. Select the parent, then Object -> Transform -> Origin to 3D Cursor.
3. With the parent still selected, Shift+G -> Children (to select all child objects).
4. Alt+P -> Clear Parent and Keep Transform (to clear the "wrong" parent relationship).
5. Keeping all the children selected, Object -> Transform -> Origin to 3D Cursor.
6. With the children still selected, shift-select the parent, then Ctrl+P -> Object (to reparent them).
Do not parent glass and/or other transparent objects to the main parent. Also, if you are using different effects on different objects, the ones with a different effect cannot be parented to the main parent (their effect would be overwritten by the parent's).
I4dnf (talk) 01:51, 11 August 2014 (UTC)
P.S. The Extra500 is a very special case, and its performance issues have nothing to do with a detailed model on old hardware (like some would like you to think), but there the nasal beast rears its ugly head and starts biting ;) (it runs just as bad with a cube replacing the 3d-model)


Hello Hal,
As a 3D modeller who likes to see accurate and detailed models, I can understand your feelings here. I can see the great deal of love and time you put into your new model, and everyone who ever saw a real (low-)flying P51 can understand it. On the other side, I can understand Flughund very well. My computer is a bit older, but still able to run the latest software and still above most stated minimum specs.
I'm still able to use an average number of aircraft thanks to a more or less decent, up-to-date, powerful GPU. But I had to make some adjustments to my fgfs settings (reduced visibility of trees, clouds and overall visibility; no random buildings and objects, AI traffic or multiplayer), and I will never be able to max out the settings in FGFS, even with the ufo. Compared with other users it seems I'm still on the bright side. But compared with a commercial competitor (X-Plane 10), overall frame rates in FGFS seem much lower, even though I use X-Plane with expensive features like great visibility, reflecting water surfaces, shadows, a lot of objects, moving vehicles on the ground, denser forests, and so on.
And I'm always surprised to see how detailed the default aircraft are in X-Plane compared with FGFS! But that is X-Plane 10, not FGFS 3.2.
I have to admit I have some problems using the latest P51 now. Not as bad as with the 777 or even the EXTRA500, but to the point that I have to switch off features.
I looked at the model and I'm sure that, though you have tried to optimize it already, there are a few more things to consider. I was surprised to see that you even modelled the vertical frames inside the fuselage.
Why? They are not visible from the outside. On your powerful machine you don't see much of an impact, but the impact is much, much bigger on lower-end hardware; you could say the difference grows the lower the hardware is. I must agree that 8Kx8K px textures are oversized. Even 4Kx4K px is too much unless you use real photographs as textures, and FGFS has big issues with (V)RAM usage: change liveries once or twice and most GPUs will run out of RAM. A high number of vertices isn't a problem by itself, that's right, but when they are all concentrated on a small spot, they are: I remember some time ago that a small scenery object dramatically decreased fps whenever I changed the view towards it. It didn't use textures with an alpha channel, so I was surprised. Looking at it in Blender, I saw a very high number of vertices on a small spot. The same applies to aircraft, even when those objects are hidden by other objects and not visible.
Whenever you decide to create a new model, you always have to consider who the target audience is and what we can expect from them. I agree it is more difficult in FGFS, since the interests are really widespread. If you want to address those with very good gamer hardware, then go on. If you want to address the majority of the users here, then you should think about how you can improve performance and make it usable for them.
But remember: no one forces them to use your model, and no one forces you to change the model in their favor.
Cheers --HHS (talk) 12:25, 12 August 2014 (UTC)


[OT: Impact of Nasal scripts]

We've had several discussions regarding extra500 performance, and as you say, some things, like the 3D model, seem to have almost negligible impact on performance, while Nasal code is certainly one of the main culprits involved; but even the way the two Canvas textures are updated is fairly expensive, so it's not just Nasal itself. Fairly efficient/fast Nasal code can be written; to see for yourself, look at any code written by TheTom, AndersG or Philosopher. As far as I know, the extra500/Avidyne Entegra R9 developers have already adopted some optimizations, and they have even looked at some NavDisplay and MapStructure techniques to make things faster.
But overall, there's quite a bit of slow code remaining, and the whole instrument could definitely be faster by adopting the corresponding frameworks and augmenting slow code with C++ extensions, which is something we've been working towards. We've also posted patches that make it possible to identify "slow" callbacks (timers/listeners) by hooking into FGNasalSys. I think it's fairly safe to say that the extra500 developers are interested in improving performance, but it's not just Nasal that's involved here; the main issue is a structural one, i.e. code organization. To see for yourself, use the "fps_debug" branch and disable all 3D models and even the IFDs. In general, it would help to add our patches to FG and allow aircraft developers to look under the hood and better understand where resources are spent, including stuff in scripting space (especially the GC, but also all callbacks). Just saying "it's due to Nasal" is as unhelpful as it gets: we've also seen the property rules subsystem consuming plenty of CPU cycles. So "bugs" can be introduced in several places, and that alone doesn't make the technology bad per se. Thus, no matter if it's the 777, the P51, the 747, the 787 or the extra500, what people really need is some way to see where all the horsepower is going, instead of uninformed musings that lack data to back up such claims.--Hooray (talk) 18:06, 11 August 2014 (UTC)
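The idea of hooking callback dispatch to find slow timers/listeners is language-agnostic and can be sketched in a few lines. This is a generic Python stand-in, not FG's FGNasalSys API; the class and callback names are made up:

```python
import time
from collections import defaultdict

class CallbackProfiler:
    """Wrap timer/listener callbacks and accumulate per-callback wall time,
    mimicking the idea of hooking the dispatch path of a scripting subsystem."""
    def __init__(self):
        self.totals = defaultdict(float)   # name -> accumulated seconds
        self.calls = defaultdict(int)      # name -> invocation count

    def wrap(self, name, fn):
        def wrapped(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                self.totals[name] += time.perf_counter() - t0
                self.calls[name] += 1
        return wrapped

    def slowest(self):
        """Callbacks sorted by total time spent, worst first."""
        return sorted(self.totals.items(), key=lambda kv: kv[1], reverse=True)

prof = CallbackProfiler()
fast = prof.wrap("fast-listener", lambda: sum(range(100)))
slow = prof.wrap("slow-timer", lambda: sum(range(200_000)))
for _ in range(10):
    fast(); slow()
print([name for name, _ in prof.slowest()])  # "slow-timer" ranks first
```

A report like this, accumulated over a whole session, is exactly the kind of data that lets an aircraft developer see which callbacks are eating the frame budget instead of guessing.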
So, was my earlier point, that the extra500 issues are not caused by the model or by 'older' hardware but by the Nasal code, not true? Never mind the fact that this isn't about the extra500 or about Nasal. And yeah, I have no clue how to test stuff, I just pull figures and results out of thin air... (hint: the cube remark should have already shown you that I've tested the extra500 in various scenarios to reach my conclusion, although the cube was a bit superfluous, as setting the draw masks already proved that the 3D model had nothing to do with the performance issues). And, as clueless and trollish as I am, I never bothered to check deeper what was going on, and I never saw that mark and reap are the two most frequent calls (followed by various naGC stuff)... I wonder which subsystem those belong to? Right, all together now, repeat after me: N A S A L. Oh, btw, you can get your Nasal golden boy into trouble just by using the scroll wheel to zoom in and out, yet ignoring the evidence you stand here declaring that it's the next best thing since sliced bread... Is the GC part of Nasal or isn't it (it is), will you run into GC issues sooner or later (you will), so keep it as "glue" between stuff, but please stop doing overly complex stuff in it, like reimplementing half of an FDM in it, and please stop claiming that it doesn't cause performance issues, as it does, and it is one of the prime reasons FG has such poor performance (along with the raw number of objects (drawables) in the scene).
I4dnf (talk) 00:37, 12 August 2014 (UTC)
Like I said, the issue is not as simple as some make it sound. There's a thing we commonly call "GC pressure", i.e. the pressure that certain code puts on the GC scheme.
Some code is much more likely to increase GC pressure than other code. I'm not sure where you read that I said you were wrong; quite the opposite, actually. I believe this is one of the few instances where we have overlapping interests.
As you may know (or not), I once documented our GC scheme (How the Nasal GC works); part of that also involved tinkering with different GC schemes and generalizing the existing scheme so that new schemes could be implemented next to the existing one. So yeah, I guess you could say that I am a little familiar with the current GC scheme, and there's really no need to resort to polemics here.
I am genuinely interested in understanding the problem, not in an aircraft-specific context but in a global one. And I am interested in providing a toolbox that aircraft developers can use to look "behind the scenes", which involves Nasal but is certainly not restricted to scripting.
As you may have also read elsewhere, we've been playing with a generational GC scheme, a suggestion made by Andy Ross himself. It would just be a modification of the existing scheme, not something totally new, but it would decrease the workload for mark/reap considerably.
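For readers who haven't seen why "mark" and "reap" dominate such profiles: a mark-and-sweep collector traces everything reachable from the roots (mark) and then walks the whole heap freeing the rest (sweep/reap), so its cost is driven by allocation churn and heap size, not by how "slow" any single line of script is. A toy sketch, not Nasal's actual collector:

```python
class Obj:
    def __init__(self):
        self.refs = []       # outgoing references to other objects
        self.marked = False

class Heap:
    """Toy mark-and-sweep collector: 'mark' scales with the live set,
    'sweep' (reap) scales with the whole heap."""
    def __init__(self):
        self.objects = []
        self.roots = []

    def alloc(self, root=False):
        o = Obj()
        self.objects.append(o)
        if root:
            self.roots.append(o)
        return o

    def collect(self):
        stack = list(self.roots)                 # mark phase
        while stack:
            o = stack.pop()
            if not o.marked:
                o.marked = True
                stack.extend(o.refs)
        survivors = [o for o in self.objects if o.marked]  # sweep/reap phase
        freed = len(self.objects) - len(survivors)
        for o in survivors:
            o.marked = False                     # reset for the next cycle
        self.objects = survivors
        return freed

heap = Heap()
root = heap.alloc(root=True)
root.refs.append(heap.alloc())  # reachable: survives every collection
for _ in range(1000):
    heap.alloc()                # garbage churn: swept on the next cycle
print(heap.collect())           # 1000 freed
print(len(heap.objects))        # 2 live objects remain
```

A generational variant would collect the freshly-allocated "young" objects frequently and the long-lived ones rarely, which is exactly why it cuts the per-cycle mark/reap workload: most of the churn above never needs a full-heap pass.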
However, like I said, the issue is not just a black/white one (hey, a real GC pun...): we've seen massive features implemented in Nasal that have almost negligible impact, and we've seen trivial code, using naive algorithms, that basically blocked the whole main loop.
So, believe it or not, I actually share some of your concerns here, but I believe in a different solution. I also find it unfortunate that some people used Nasal to do extensive systems modeling without ever benchmarking their code to find hot spots. But as we've previously said, that's primarily due to the core development bottleneck, not because people generally prefer writing Nasal over C++: C++ patches typically take many months to get discussed, reviewed and committed (some of my own merge requests have been up for months without anybody taking a look).
Nasal is very much like JavaScript here, and isn't formally reviewed by others, which is why its accessibility is so much better, i.e. the entry barrier so much lower. Things like FDMs should probably not be implemented in Nasal; we once toyed with OpenCL bindings, which would be a much better choice here, without sacrificing any flexibility, and even superior to native C++ code, because the main-loop impact could be significantly reduced.
People can misuse technologies, no matter if it's C++, GLSL, XML/property-mapped subsystems like JSBSim/YASim or scripting engines like Nasal. But even if we were to fix the GC within the next couple of weeks, there would still be plenty of algorithmically naive code in various places, not just in Nasal but also in the core C++ code.
For instance, many users have been reporting that FG/osgEarth performance is superior for them to our native scenery engine. And the "minimal startup profile" provides a fairly good starting point for gathering a list of "heavy" subsystems. Seriously, Nasal is just a symptom here, because its accessibility is so much better, so that non-coders can use it to implement features without going through the hassle of being peer-reviewed across months (if not years...). And certain problems would remain even if the GC were optimized tomorrow (which we meanwhile know perfectly well how to do). As I said previously, I once had to rip out Nasal entirely, and performance still wasn't very impressive back then. But I think providing such a startup mode (= zero Nasal) could actually be useful to help us better understand, troubleshoot and fix subsystem-specific issues; i.e. if performance without Nasal is so much better, the Nasal engine needs to be reviewed accordingly.
This is something that we've been working towards, funnily via Initializing Nasal earlier and delegating bootstrapping to scripting space, which makes it straightforward to exclude many modules on a subsystem-specific basis, i.e. analogous to "Run-Levels". And things like the Interactive Nasal REPL can easily provide 400+ hz/fps with ~10 ms once a bunch of native C++ subsystems are completely disabled, using the existing mark/sweep GC BTW.
I don't know just how familiar you are with various GC schemes, but the next challenge here is that things are not as deterministic as you'd like them to be, i.e. GC pressure can be increased by code that ran long before/after the actual GC invocation occurs. But like I said, fixing the GC isn't exactly rocket science; it also isn't exactly exciting work, and it would primarily be "visible" in terms of improved GC-induced frame rates/frame spacing. However, I feel that would just be another step in masking more pressing issues, such as getting rid of algorithmically naive code and using bindings to native C++ code, or really OpenCL, for such purposes.
But I really don't think that polemics will get us very far, or empty/unsubstantiated claims either. I am sincerely interested in helping fix such issues, and I think this is one of the very few opportunities where the two of us could team up on a subset of overlapping goals (better troubleshooting/debugging/profiling/performance). The patches I've posted on the forum make it possible to get a list of expensive callbacks for the whole session, i.e. by hooking into timer/listener callbacks. Likewise, another patch can measure GC impact; extending that to determine GC pressure per module (created/removed naRefs per namespace/submodule) would not be very complicated. ThorstenB put up patches doing this on a wider scale, and tracking active/dead naRefs per module would help aircraft developers better understand how much workload they're adding. As you undoubtedly know, dumb code can be written in all languages, including even GLSL or assembly language. A language that is "closer to the metal" (like C++) will just help mask the problem until it adds up, as can be seen in the FG main loop meanwhile, which for many people is still CPU-bound. Then again, ripping Nasal out of the main loop would be another option here, but people would need to follow certain coding patterns to make that work. --Hooray (talk) 07:41, 12 August 2014 (UTC)
Hooray,
Sometimes I think the Nasal scripts some authors write are like the way you write: much too much, and far away from what is really needed.
Why use Nasal-coded filters when we already have cheap filters? (example: alternative camera...)
I often wish Melchior Franz were back.
i4dnf, I and others know what a wonderful tool Nasal is and what can be achieved with it. But the way it is used gives me big headaches: it has the same origin as other overdone aircraft like the 777, or sceneries. People have a powerful "super" machine, and now they want it all and already max it out - forgetting that only few people have such machines, and forgetting that there are a lot more features besides their work that want to be used.
So that's the real problem behind it: devs completely forgetting their users, completely lost in a mania.

--HHS (talk) 12:57, 12 August 2014 (UTC)

I don't quite agree with you here; I tend to agree more with i4dnf in this case. Filters are obviously a straightforward example, but the main issue here is that native code isn't sufficiently exposed to Nasal yet - TorstenD was originally planning to expose the corresponding AP/PID logic to scripting space[1] - but that was in the pre-Canvas, and thus pre-Nasal/CppBind, days.
Meanwhile, it should be a fairly straightforward optimization to get rid of such scripting-space workarounds, but there's still a certain overhead involved in porting existing legacy code. FGCamera obviously has to work around existing limitations, so it cannot rely on patches or optional features. But given the quality of the code involved there, I am sure that its developer would be happy to adopt native code if/once it becomes available. I do agree that it would be good to have a Nasal reviewer, but this is a clear case where existing Nasal code should be made obsolete and re-implemented on top of existing C++ code, like i4dnf suggests, and like it is being done with Canvas - some core developers are actually working towards doing this in other areas, including the geo.nas module, which is increasingly used/important and which should really be using native C++ code instead of algorithms re-implemented in scripting space (see Plan-zakalawe). I don't think it's fair to suggest that people aren't using more efficient techniques as long as those are not made available, i.e. as long as the corresponding patches are not reviewed/committed. Scripting-space implementations of filters, PID controllers or even geodetic calculations are workarounds IMO, no matter how sophisticated they are - ThorstenR even implemented fairly advanced quadtrees in scripting space a few years ago, and they did work very well - but they obviously cannot beat the performance offered by native code. Nasal is just a tool, and people don't need to know much about coding to use it and come up with working features - unfortunately, more often than not, at the cost of performance.
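To illustrate the kind of scripting-space filter workaround being discussed, here is a minimal sketch of a one-pole low-pass filter driven by a Nasal timer. This is not code from FGCamera or any shipped aircraft; the property paths under /systems/test/ are hypothetical placeholders:

<pre>
# Sketch only: one-pole low-pass filter updated from a maketimer() loop.
# The /systems/test/* property paths are made up for illustration.
var tau = 0.5;                          # filter time constant [s]
var dt  = 0.1;                          # update interval [s]
var filter_loop = maketimer(dt, func {
    var raw = getprop("/systems/test/raw") or 0.0;
    var out = getprop("/systems/test/filtered") or 0.0;
    var k   = dt / (tau + dt);          # smoothing factor
    setprop("/systems/test/filtered", out + k * (raw - out));
});
filter_loop.start();
</pre>

The native autopilot/property-rule system implements this same kind of logic in C++, which is why moving such loops out of scripting space tends to be a cheap win once the native path is exposed.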
The main issue here is not that "devs" are forgetting about end-users and their horsepower, but rather that middleware developers have no good way to tell just how expensive certain features/coding constructs are - there are very few contributors who actually understand how to benchmark/profile a piece of Nasal code, and how to tell where/why certain things are indeed slow - ThorstenR, i4dnf, AndersG, Philosopher, TheTom and a few others come to mind as counter-examples.
But usually, people have no clue how to tell where/why a certain piece of code is slow. You need a background in computing or, absent that, at least a background in statistics/maths, and you need to run dozens of benchmarks to come up with figures for different coding constructs/combinations of features to better understand what's going on. The number of people able to interpret Nasal internals and scripting-induced slowness is very low, probably not even half a dozen. And I would hope that it is left to them to decide if/how to address certain problems - aircraft developers have demonstrated for years that they cannot be expected to properly manage scripting resources, including timers, listeners and memory.
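For what it's worth, the crudest form of such benchmarking doesn't require any patches at all: timing a snippet with systime(). A sketch, where the loop body is just a stand-in for whatever code is under test:

<pre>
# Micro-benchmark sketch using systime(); the workload is a placeholder.
var t0 = systime();
for (var i = 0; i < 100000; i += 1)
    var v = math.sin(i) * math.cos(i);  # code under test goes here
print(sprintf("elapsed: %.3f s", systime() - t0));
</pre>

Obviously this only gives wall-clock figures for an isolated snippet; telling apart GC cost, callback overhead and algorithmic cost still needs the instrumentation discussed above.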
I am not trying to put words in mfranz's mouth here, but I have zero doubt that he would agree with these sentiments based on his track record, i.e. code/contributions and his postings on the devel list.
Overall, the majority of aircraft/middleware developers seem simply ill-informed when it comes to writing fast code - for understandable reasons - and we wouldn't be in a much better situation with a different programming language, such as Lua, Perl or JavaScript. There are certain algorithmic issues involved here, in combination with technical restrictions on the FlightGear side of things. Very few people are able to write fast code, and the few that do happen to be core developers/contributors, or at least have some form of background in computing, so the language used to solve a problem usually doesn't matter at all.
People complaining about the degree of Nasal usage should keep in mind that Nasal usage is really just a symptom of a deeper issue, namely the pace/lack of core development in comparison to base package development: there's more manpower on the base package side of things, and it's the kind of manpower that is difficult to harness/educate otherwise. It is pretty safe to say that all aircraft developers would love their aircraft to perform sufficiently well - nobody wants to degrade performance - but what's happening is that core development cannot keep up with base package development. Just look at the native C++ code that is currently getting phased out (HUD, 2D panels, Map, GUI, KLN89, wxradar, agradar etc.): often it's been in the works for years, usually unfinished and unmaintained, and things like Gijs' NavDisplay are now making stuff obsolete that hasn't been touched in years, all within just a few months. And this has nothing to do with Nasal or scripting in general; it's mainly about accessibility, i.e. barriers to entry.
Previously, such features would be developed by core developers within 18-24 months and left in the source tree, where base package developers couldn't easily extend them - now such things are exposed to scripting space, and it's obvious that the manpower to continue developing them is indeed available. People can condemn technologies like Nasal or Canvas, but the only thing these technologies are proving is that we simply have more "unskilled manpower" in the base package department than "skilled manpower" in the core area. Which kinda is the whole reason for introducing top-level frameworks that serve as wrappers for native code integrated via cppbind. From then on, it's the less experienced department, i.e. base package developers, that actually needs to adopt such best practices, or their way of contributing "naive" solutions in scripting space will continue to slow down the simulator, simply because more efficient code paths are never leveraged. We've seen that in the whole AP/RM area, where aircraft developers like omega95 were really frustrated with all the progress that had been made, which meant that existing aircraft/code had to be ported due to the non-generic nature of the corresponding code - that put a lot of stress on core developers, but it really is the right thing to do, and I have no doubt that even i4dnf will agree with this sentiment. People are usually part of the problem without even realizing it, e.g. by rejecting frameworks that others provide in order to localize functionality that can eventually be wrapped/replaced via native code. This is one of the main issues preventing the extra500 from probably being 1.5-2.5x faster - and here, the main challenge is not a technical one, but simply inertia.
But like I said, I'd rather have this discussion with people who have demonstrated an understanding of the underlying issues, because we cannot possibly expect to turn aircraft developers into software engineers (or vice versa!).


Likewise, if I were to do a JSBSim FDM or a GLSL shader, I might very well only use a tiny subset of the features offered by the FDM/GPU, and my constructs would be fairly inefficient - it probably wouldn't matter to me (it would just work well enough), but once we have dozens of people contributing in a similar fashion, things (= slow code) will obviously add up and at some point cripple overall performance. So this is not a Nasal-specific issue - it comes with "de-skilling", and can even be seen outside scripting, i.e. in XML expressions, AP/PID/JSBSim systems and even GLSL shaders. And there's a ton of Nasal code written by very knowledgeable people like mfranz, but even most of that code should probably be wrapped/replaced using native code these days - simply because the use of scripting has grown so rapidly over the years (bombable, Local Weather, FGCamera, Canvas etc.) that having "slow" scripting code may add up quickly: a module written 5+ years ago (such as the geodinfo API) was never designed to handle the use-cases that people have meanwhile come up with. So this is not just about "reviewing" Nasal code, but more about identifying optimization opportunities, and about enforcing best practices by rejecting certain contributions until they're sufficiently reviewed. It is pretty safe to say that the Local Weather system probably would never have stood a chance of being committed if it had been reviewed/dissected by mfranz - still, it's become one of the most popular features added "recently". But the developer who wrote it was also learning about FlightGear scripting back then, and he even agrees that the structure would be very different if he had to write the same system today. So there are pros & cons here: a stringent code review of a new Nasal addon might very well have alienated a new contributor a few years ago - these days, even mediocre/inferior implementations may be committed and given time to improve.
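To make the geodinfo/geo.nas point concrete: even a trivial distance query like the sketch below currently runs its great-circle maths in scripting space, and would benefit from being backed by native code instead. The coordinates here are made up for illustration:

<pre>
# Sketch: distance from the aircraft to an arbitrary point via geo.nas.
# The lat/lon values are made up; geo.Coord currently does its maths in Nasal.
var ac = geo.aircraft_position();                       # current position as geo.Coord
var pt = geo.Coord.new().set_latlon(50.0333, 8.5706);   # hypothetical target point
print(sprintf("distance: %.1f m", ac.distance_to(pt)));
</pre>

Nothing about the calling code would need to change if distance_to() were delegated to native C++ - which is exactly why wrapping such modules is attractive.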
And in the case of the LW/AW system, one of the most active FGData committers has meanwhile evolved, without him ever having touched a single aircraft AFAIK. So quality standards are obviously a double-edged sword. What we're seeing is not a problem introduced by "developers" per se, but by aircraft developers becoming "programmers" in a new environment where the barrier to entry, thanks to Nasal, is very low.
If the same people had to deal with raw memory management and threading (i.e. by writing C or C++ code), we would not just see a slow-ish simulator, but one with even more segfaults than we have already... And having/supporting a more mainstream scripting language like Lua, Python or Perl/Ruby would simply magnify the problem tremendously, because the barrier to entry would be even lower than it already is, and we'd see a ton of proprietary/platform-specific extensions (libraries/DLLs) being used by a huge community of "modders", where features may only work on certain platforms/operating systems. Nasal being a "niche" language isn't exactly problematic from that standpoint, because it helps streamline contributions without us having to ship all sorts of scripting frameworks. --Hooray (talk) 22:23, 13 August 2014 (UTC)

[OT: trees and random buildings]

quote i4dnf: (Would it interest you if I told you I can run trees and random buildings at density 2.9 on an 8800GT with minimal framerate impact? Oh yeah, with dense 'grass' generated nearby too? This requires minimal changes to the source, but it's not really prime-time ready yet)
And how much is the impact on RAM? I remember that random buildings, at their beginning, were very dense and the framerate was really good - but RAM usage was horrible.
--HHS (talk) 14:30, 11 August 2014 (UTC)
With only trees and buildings (no "grass") RAM usage tops out in the 2.3-2.4 GB range (with --prop:/sim/tile-cache/enable=false, in very detailed areas of EU). But the changes I was talking about are not a 'refactoring' of the code, just enabling some options/flags on the generated geometry, so the code is the same as the current one. The old random buildings were a completely different approach.
The example was to illustrate that the underlying technology FG is using is old and can be handled by that "antiquated" hardware just fine, and the "modern" hardware only (partly) masks un(der)optimized stuff.
I4dnf (talk) 15:00, 11 August 2014 (UTC)

Download link not working

Hm... looks like the download link does not work... --Jarl Arntzen (talk) 13:59, 11 August 2014 (UTC)

@Jarl Arntzen: look into current FGData (gitorious.org)
--HHS (talk) 14:30, 11 August 2014 (UTC)
Return to "North American P-51 Mustang" page.