reinhardj wrote: . . . ALL software for critical software (criticality level A to D, also RTOS) MUST be certified to DO-178C. No Microsoft software is fulfilling this requirement. A glass cockpit has high criticality (Level A), failure condition is catastrophic.
I get this, Reinhard, and agree with the importance of such standards, although I might be more suspicious of them than some. At the end of the day, we should pay less attention to what such standards claim for themselves and more to what software designed under them demonstrates empirically, over time and when deployed in real environments. As I suspect is true for many of us, I ran many NT servers for years without a single failure, many thousands of hours each. In contrast, my G1000 has less than 500 hours TT and a mean time between reboots of around 70 minutes, and has failed outright on two occasions. I know other G1000 operators who have reported software failures, and I saw a discussion on a commercial pilots' board about the need to reboot one of the A380 systems from time to time. While my experience is an 'n' of 1, it carries some weight against the upside claims.
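For what the anecdotes are worth, here is a rough sketch of the arithmetic behind the comparison. The NT hours figure is my own assumption standing in for "many thousands of hours"; with zero observed failures a point estimate is undefined, so the usual move is a one-sided lower bound (the "rule of three"):

```python
# Naive reliability estimates from the anecdotes above (illustrative only).
# With zero observed failures, a point MTBF estimate is undefined; a common
# frequentist shortcut is the "rule of three": with 0 failures in T hours,
# MTBF > T/3 at roughly 95% confidence.

g1000_hours, g1000_failures = 500, 2
nt_hours, nt_failures = 10_000, 0   # "many thousands of hours" -- assumed figure

g1000_mtbf = g1000_hours / g1000_failures
print(f"G1000 naive MTBF estimate: {g1000_mtbf:.0f} h")

nt_mtbf_lower = nt_hours / 3        # rule-of-three lower bound
print(f"NT MTBF, ~95% lower bound: > {nt_mtbf_lower:.0f} h")
```

Even with these crude numbers, the gap between the two point estimates is more than an order of magnitude, which is the shape of my complaint.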
Separately, it seems odd to imply that Microsoft is not up to par in this regard, since documents pertaining to the standards themselves mention Microsoft tools in several places as components of the validation. The discussion excerpted below appears in the following document.
http://web1.see.asso.fr/erts2012/Site/0P2RUC89/2D-2.pdf
4. DO-178C Scope of VCC

Microsoft Research’s VCC [18] is a tool that can be used to verify that existing code conforms to requirements. The workflow starting with “code” it suggests conceptually therefore is “opposite” to VSE of the previous section 3. The largest piece of software verified by VCC has been Microsoft’s Hyper-V [17].
In sum, even at revision 'C', DO-178 is still a standard that prescribes debugging practices according to the putative level of downside risk. If you want Level A, you make a few more checks than if you want the lower levels. There is no meaningful qualitative difference between the levels, and the number of validations is infinitesimally small in relation to the actual number of unique operations the software will be called upon to execute in its intended environment. Ironically, I think it was Bill Gates who noted that MTBF claims for software running millions of lines of code, with billions or more possible permutations in execution, involve a certain amount of creative fiction. How many parallel independent systems did NASA run in the space shuttles because they determined that it was impossible to fully debug the code?
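To put a rough number on "infinitesimally small," here is a hypothetical back-of-the-envelope sketch. The test count is an illustrative assumption of mine, not anything DO-178C specifies:

```python
# Back-of-the-envelope: how much of a program's input space can testing cover?
# All figures below are illustrative assumptions, not DO-178C requirements.

# Even a single function taking two 32-bit integers has this many input pairs:
input_space = 2 ** 64

# Suppose a very aggressive certification campaign runs a billion test cases:
tests_run = 10 ** 9

fraction_covered = tests_run / input_space
print(f"Fraction of input space exercised: {fraction_covered:.3e}")
# Roughly 5e-11 -- testing samples a vanishingly small slice of behavior,
# and real avionics code has state far beyond one pair of 32-bit inputs.
```

However many extra objectives Level A adds over Level D, the validations remain a statistical sliver of the behaviors the software will actually be asked to execute.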
This may sound like I work for MS. I do not, although they were a client many years ago (so long ago that Gates was the CEO). It was then -- after getting to see how they work from the inside -- that I developed a deep respect for the company. Incidentally, one of my G1000 failures looked like a case of failing to load drivers correctly. Some of my colleagues believe that this was NT's most common cause of failure at boot time.