Defense Software for a Contested Future

At the request of the Defense Advanced Research Projects Agency (DARPA), the National Academies conducted a study to explore how to enhance the assurance and agility of large-scale, integrated software-based systems. The report recommends ways the Department of Defense can engineer and manage its software systems to reduce cyber risk and enable more rapid system evolution to meet changing mission needs. Report is here: https://lnkd.in/eDrUdrUu

There's a neat section on the use and rapid maturing of formal methods to help with software assurance. Examples given:
- CompCert: A formally verified compiler for C. An automated testing tool that found hundreds of bugs in mainstream compilers such as gcc and clang/LLVM found no bugs in CompCert's verified components after years of testing.
- seL4: A high-assurance, open-source microkernel that serves as a trustworthy foundation for security-critical systems. It was successfully used in a DARPA program to build a quadcopter drone that resisted red-team attacks.
- NATS iFACTS: A large-scale air traffic control system in the United Kingdom, comprising 250,000 lines of code, that was formally proven to be free of runtime exceptions and to meet its functional specification. It is written in SPARK, a subset of the Ada programming language designed for high-assurance systems.
- Project Everest: A collaboration that produced formally verified, high-performance implementations of components of the HTTPS ecosystem, such as the TLS protocol and cryptographic algorithms. This verified code is now widely deployed in Mozilla Firefox, the Linux kernel, and Microsoft's Hyper-V hypervisor, among others.
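To give a flavor of the contract style SPARK uses: SPARK proves pre- and postconditions statically, at analysis time, so the checks never run in the field. The Python sketch below only checks analogous contracts at runtime, purely as an illustration of what a tool like SPARK would prove; the function and constant names are hypothetical, not from the report.

```python
# Illustration only: SPARK discharges contracts like these with static proof;
# here we merely assert them at runtime to show the contract style.
# All names (saturating_add, MAX) are hypothetical.

MAX = 2**15 - 1  # a 16-bit signed ceiling, as in many legacy avionics interfaces

def saturating_add(a: int, b: int) -> int:
    """Add with saturation: never exceeds MAX, never raises."""
    # Precondition: both inputs are in range.
    assert 0 <= a <= MAX and 0 <= b <= MAX
    result = min(a + b, MAX)
    # Postconditions: result stays in range, and is exact unless saturated.
    assert 0 <= result <= MAX
    assert (a + b > MAX) or (result == a + b)
    return result

print(saturating_add(30000, 10000))  # 32767 (saturated)
print(saturating_add(100, 200))      # 300
```

The payoff of the formal version is that "free of runtime exceptions," as claimed for iFACTS, becomes a theorem about every possible input rather than a property of the inputs you happened to test.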
Test Engineering Methods for Defense Technology
Summary
Test engineering methods for defense technology are specialized approaches that help verify and validate complex systems used in military applications, ensuring their reliability, security, and performance before deployment. These methods include formal software verification, hands-on experimentation, and advanced simulation techniques to catch issues early and improve readiness.
- Embrace formal verification: Use mathematically backed tools and processes to prove that software behaves as intended and is free from critical vulnerabilities.
- Prioritize real-world experimentation: Test technology in realistic environments and collaborate with operators to uncover practical challenges and unexpected uses.
- Utilize in-the-loop simulation: Combine real hardware or software with high-fidelity models to test systems virtually, exploring edge cases and refining controls before building prototypes.
-
Air Force Research Lab Pioneers New AI Testing Framework for Military Systems

ROME, NY - In a groundbreaking development, researchers from the Air Force Research Laboratory (AFRL) and Information Systems Laboratories, Inc. (ISL) have introduced a novel framework for #testing #deeplearning (#DL) #artificialintelligence (#AI) systems used in military applications. The research, detailed in a recently approved technical report, addresses one of the most significant challenges in modern military technology: how to thoroughly test and validate AI-driven systems before deployment.

Dr. Joe Guerci of ISL, lead author of the study, along with colleagues Dr. Sandeep Gogineni and Dr. Daniel L. Stevens, developed what they call "DE-T&E" (#DigitalEngineering Testing & Evaluation). The framework builds upon decades of AFRL's experience in radar systems and recent advances in digital engineering. "Traditional testing methods simply weren't designed for the complexity of modern AI systems," explains Dr. Guerci. "Our approach combines digital twin technology with generative AI to identify potential failures before they occur in real-world operations."

The team demonstrated the framework using an advanced #radar system, showcasing how it can detect potential problems that conventional testing might miss. The work leverages ISL's RFView simulation software, which has been refined over decades of radar systems modeling. The research comes at a crucial time, following the Department of Defense's recent Instruction 5000.97, which mandates digital engineering approaches for new military programs.

"What makes this approach particularly valuable is its ability to discover 'Black Swan' events - rare but potentially catastrophic scenarios that traditional testing might miss," notes Dr. Gogineni, a Senior Member of IEEE and an expert in radar systems.

The framework's development involved collaboration between ISL's San Diego facility and AFRL's Information Directorate in Rome, NY. The research team also included Robert W. Schutz, Gavin I. McGee, Brian C. Watson, and Hoan K. Nguyen from ISL, contributing expertise in various aspects of systems engineering and AI.

This breakthrough comes as the military increasingly relies on AI-driven systems, from autonomous vehicles to advanced radar. The new testing framework provides a path forward for validating these complex systems while meeting rigorous military specifications. The research has been approved for public release by AFRL and represents a significant step forward in ensuring the reliability and safety of AI systems in military applications. As AI continues to play a larger role in defense technology, frameworks like DE-T&E will be crucial in maintaining the U.S. military's technological edge while ensuring system safety and reliability.
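The general idea behind digital-twin-driven T&E can be sketched in a few lines: sweep a simulated environment for parameter combinations where the system under test fails, surfacing rare scenarios before any live trial. The toy "twin" and detector below are illustrative stand-ins under stated assumptions, not RFView or the actual DE-T&E framework, and the radar-equation model is deliberately simplified.

```python
# Hedged sketch: random scenario search against a toy digital twin to find
# missed detections. Every model and threshold here is a made-up assumption.
import random

def twin_snr(target_rcs: float, range_km: float, clutter_db: float) -> float:
    """Toy digital twin: signal-to-noise ratio of a radar return, in dB.

    Crude radar-equation shape: SNR grows with cross-section and falls
    steeply with range; ground clutter subtracts directly.
    """
    return 10.0 * target_rcs - 40.0 * (range_km / 50.0) - clutter_db

def detector(snr_db: float) -> bool:
    """System under test: declares a detection above a fixed SNR threshold."""
    return snr_db > 3.0

random.seed(0)
failures = []
for _ in range(10_000):
    scenario = {
        "target_rcs": random.uniform(0.5, 2.0),   # relative cross-section
        "range_km": random.uniform(10.0, 100.0),
        "clutter_db": random.uniform(0.0, 15.0),  # ground-clutter penalty
    }
    # Flag a "failure": a nominally detectable close-in target that the
    # detector still misses once clutter is accounted for.
    if scenario["range_km"] < 40.0 and not detector(twin_snr(**scenario)):
        failures.append(scenario)

print(f"found {len(failures)} missed-detection scenarios out of 10,000")
```

A real framework would replace random sampling with guided search (the post mentions generative AI for exactly this), but even this crude loop shows why simulation scales to scenario counts that live testing never could.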
-
Super tactical playbook for defense tech companies who are building products (and gaining traction) through testing & experimentation events. These are gleaned from several decades of ⚓ experience from Jason Galvan, Robert Leonard, and Jason Knudson (all incredible colleagues at BMNT). Bottom line: experimentation, done well, can be an accelerant for any capability and for the business providing that capability.

Dos
- Figure out what else you want besides money. These events are a place where you can receive those non-monetary resources.
- Focus on learning how your capability performs in a realistic environment. Capabilities tend to fail due to friction (e.g., interfaces, logistics), not a lack of features.
- Pay attention to the operators. They will surprise you with how they end up using the tool.
- Have a Plan B. S#it will always happen. Be ready with a backup.
- Prepare for a second wave of experiments. There is almost always another opportunity at the same event. Get there early and stay late. You're already there!
- Attend the preparatory planning calls. This will set you apart from the folks who just show up.

Don'ts
- Don't forget to bring your technical folks. Ideally you'll be making tweaks in real time.
- Don't try to avoid failure. Operators want to see you push past the point of failure to see what your capability can really do. These events are judged differently than ones where you're talking to acquisitions folks.
- Don't be proprietary. It takes a village to deter our adversaries. You had better be prepared to work well with others.
- Don't go to an event only once. If it's worth going to, it's worth going at least two years in a row to figure out how to get the most out of it.
- Don't forget to do all the paperwork. You won't be let in without it.
- Don't delegate the handling of paperwork. Companies get rejected all the time for giving incomplete or unsatisfactory answers.

The more you know...
🇺🇸 #testing #experimentation #warfighting #strategy #defensetech #dualuse #bestpractices #lessonslearned
-
How do you validate complex systems before the first prototype exists? In-the-loop testing brings real-time realism to virtual validation. Whether using hardware or software modules, this technique allows engineers to test control systems against simulated environments that behave like the real thing, without requiring the full physical asset.

In this short conversation, Jake Conway, Head of Systems Integration at Williams Grand Prix Technologies, explains how in-the-loop methodologies improve accuracy, speed and test coverage. By inserting real components into high-fidelity models, engineers can explore edge cases, inject faults and refine control logic before committing to full-scale builds.

For industries with long lead times and a high cost of failure, such as defence or aerospace, this approach enables faster design cycles, better risk management and earlier decision confidence. Take a look to see how engineering teams are achieving more with fewer iterations and better data. #InTheLoopTesting #WilliamsGrandPrixTechnologies