automation Archives | DefenseScoop
https://defensescoop.com/tag/automation/

How the Air Force is experimenting with AI-enabled tech for battle management
https://defensescoop.com/2025/03/13/air-force-ai-shoc-nellis-capstone-toc-light/
Thu, 13 Mar 2025 19:59:10 +0000
The 805th Combat Training Squadron is testing new technologies to assess their applicability for tactical command and control.

As advancements in artificial intelligence capabilities proliferate, the Air Force is using a series of capstone events in 2025 to serve as a proving ground for how the technology can be incorporated into future battle management operations.

Led by the 805th Combat Training Squadron at Nellis Air Force Base, Nevada, the biannual capstone allows the service to test new technologies and assess their applicability for battle management and tactical command and control. After a successful iteration at the end of last year, the unit is poised to continue experimentation and rapid development of new capabilities and concepts to support the Defense Department’s Combined Joint All Domain Command and Control (CJADC2) effort throughout the next year.

Although effective execution of CJADC2 involves countless technical and bureaucratic challenges, the 805th — also known as the Shadow Operations Center – Nellis (ShOC-N) — used its most recent capstone event in December 2024 to understand how AI-enabled technologies could assist battle managers in conducting dynamic targeting.

“Modern battlefields are exceedingly complex and require the ability to distill an immense amount of information into a cohesive, actionable amount,” ShOC-N commander Lt. Col. Shawn Finney told DefenseScoop. “The emergence of artificial intelligence in warfighting applications potentially gives battle managers the ability to focus on the most salient aspects of the operational area by reducing the volume of information they must evaluate.”

At the recent Capstone 24B event, the unit experimented with advanced prototypes across three lines of effort: human-machine teaming; international partner and allied integration; and cloud-based C2 decision advantage.

The capstone simulated multiple “combat-representative” scenarios, including offensive counter-air, defensive counter-air, electronic warfare and special operations, Finney said.

Notably, officials tested artificial intelligence platforms such as the Maven Smart System and Maverick AI application. The tech allowed battle managers to conduct “tactical control, execution, and assigning” of both friendly and adversarial assets within a common operating picture, according to an Air Force news release. The AI was also able to ingest planning data to give battle managers insights into complex and evolving scenarios.

During the event, the Maven system was for the first time successfully integrated into the Air Force’s new battle management kit, known as the Tactical Operations Centers-Light (TOC-L), at a live exercise.

TOC-L is a mobile, scalable C2 kit embedded with different software and applications that creates a single air picture from hundreds of fused data feeds. The service began prototyping it in 2022 and has since delivered 16 kits to different units around the world — including to ShOC-N — so they could be used in experimentation efforts.

The program is “constructed in a way that we can quickly deliver these prototypes, get them in the hands of our operators, and inform future TOC-L requirements — and really inform, more broadly, the control and reporting center weapons system,” Lt. Col. Carl Rossini III, deputy chief of the deployable systems branch at the Air Force’s Advanced Battle Management System Division, said in an interview.

Battle management teams used the TOC-L system and AI capabilities during Capstone 24B to simulate a dynamic targeting cell, able to rapidly identify and defeat assets that weren’t planned for during operational planning. Rossini said they gleaned insights from the event that ranged from very technical procedures to broader concepts.

“One was how well that [dynamic targeting] cell could operate with some other systems we were evaluating for operational command and control, and [intelligence, surveillance and reconnaissance] for how we manage dynamic targets and authorize those targets for prosecution,” he said. “We also had good learning on the construct of that [dynamic targeting] cell in particular, like the roles on that battle management team.”

The Air Force is developing an integrated “system-of-systems” called the DAF Battle Network to support the Pentagon’s goals for CJADC2. Broadly, the concept looks to connect disparate sensors and weapons operated by both the U.S. military and foreign partners under a single network to enable rapid data transfer across all warfighting systems and domains.

Steve Ciulla, TOC-L program manager, told DefenseScoop the Air Force is investing in the AI-enabled tools featured at ShOC-N’s recent capstone as a way to accelerate decision-making.

“Those are the specific things they were looking at, in terms of testing some of those cutting-edge software capabilities and speeding up the process of identifying [and] striking targets — the dynamic targeting — and looking at how AI could help do some of those things, [and] also some human-machine learning as well,” Ciulla said.

While both Maven and Maverick AI successfully demonstrated automated capabilities during the capstones, Finney noted that the 805th will continue to experiment with them to mature the technology further.

“The human-machine team concept continues to evolve as we uncover new and better ways to unlock the potential of both the hardware and software while also understanding where software still has gaps that humans must perform,” he said.

Moving forward, the 805th plans to execute an experimentation campaign series throughout 2025 comprising four experiments — three of which will contribute to the Air Force’s Bamboo Eagle exercise and the Army’s Project Convergence — culminating in a final capstone event. Finney described the series as taking a “building block approach” in how the team uses lessons from previous events as baselines for subsequent experiments.

“This approach exposes large training audiences of warfighters to experiment results in a rapid and iterative fashion. We firmly believe in the experimentation-to-exercise process,” he said. “Through this, potentially immature capabilities can gain significant reps and sets within a single calendar year.”

As for the TOC-L team, Ciulla said they are focused on exercising the systems in the Indo-Pacific region over the next year. The goal is to conduct as much joint and international integration as they can across multiple exercises — including Project Convergence 25, Bamboo Eagle, Return of Forces to the Pacific (REFORPAC) and others.

The exercises will help inform the Air Force’s next iteration for TOC-L acquisition, expected to kick off by summer. The service intends to improve on current kits and scale the number of systems globally, Ciulla said. 

“It’s not going to just end with this phase one experimentation effort,” he added. “We’re still going to be getting this feedback loop [and] user data coming in to support our development, design for the next iteration of the system to tell us what the biggest risks are, what’s working [and] what’s not working.”

Losing $200M annually on unliquidated obligations, DIA looks to automate
https://defensescoop.com/2024/04/17/dia-losing-200m-unliquidated-obligations-looks-automate/
Wed, 17 Apr 2024 21:44:34 +0000
“I think AI can really contribute into network configuration and edge computing,” the agency’s chief financial officer said.

It’s no secret that the U.S. intelligence community sometimes lags in adopting new technologies, as its inherent need to securely operate on multiple, and sometimes foreign, networks can slow down innovation. However, the Defense Intelligence Agency is cautiously and carefully puzzling out the areas within its enterprise that are poised for automation and other artificial intelligence-enabled improvements, according to its finance leader.

“Every year, I lose about $200 million a year, out the door, because of unliquidated obligations and an inability to automatically track that and monitor if we obligate funds in a timely fashion. Our officers are really, really good at obligating funds on contracts — and not really, really good at cost recovery and filing invoices,” DIA’s Chief Financial Officer Steven Rush said Wednesday. “This is another area that’s ripe for automation.”
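The kind of automated tracking Rush describes — monitoring whether obligated funds are actually invoiced and recovered — can be sketched in a few lines. This is a minimal illustration, not DIA's system: the contract numbers, data model, and 180-day staleness threshold are all invented for the example.

```python
from datetime import date

# Hypothetical records: funds obligated on contracts vs. invoices filed.
obligations = [
    {"contract": "HHM402-23-C-0001", "obligated": 1_200_000, "invoiced": 450_000,
     "last_invoice": date(2023, 9, 30)},
    {"contract": "HHM402-24-C-0017", "obligated": 300_000, "invoiced": 300_000,
     "last_invoice": date(2024, 2, 1)},
]

def flag_unliquidated(records, as_of, stale_days=180):
    """Flag contracts with an unliquidated balance and no recent invoice."""
    flags = []
    for r in records:
        balance = r["obligated"] - r["invoiced"]
        age = (as_of - r["last_invoice"]).days
        if balance > 0 and age > stale_days:
            flags.append((r["contract"], balance, age))
    return flags

# The first contract has $750,000 unliquidated and a 200-day-old last invoice.
print(flag_unliquidated(obligations, as_of=date(2024, 4, 17)))
```

Run continuously against a live financial feed, a check like this is what turns "really, really good at obligating, not at cost recovery" into an automated alert instead of an end-of-year loss.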

At the UiPath on Tour Public Sector conference hosted by Scoop News Group, Rush spotlighted a variety of other emerging use cases and opportunities for technology-driven improvements across DIA’s processes and portfolios.

While officials are beginning to introduce some automation, cybersecurity is one realm that “is still very manual” for the agency, in the CFO’s view.

“I think AI can really contribute into network configuration and edge computing. God forbid we get into some sort of conflict or hostilities with China — edge computing is going to be critical and using AI to be able to configure networks at the edge is going to be critical, as well as monitoring the network. So it’s an area we’re heavily invested in and funded heavily in the last couple of years,” Rush said.

He confirmed it “will take about another five to seven years for modernization” of the top secret/sensitive compartmented information fabric called JWICS that DIA runs for the intelligence community. 

“But AI is a critical component in that for cybersecurity for us, for sure,” Rush said.

In defense finance, there are very rigid compliance procedures for meeting governance requirements — an area where there’s much room for AI improvements and automation at DIA.

“One of the challenges we have is, before AI is necessarily useful, we have to understand our processes. We have multiple legacy systems that don’t talk to one another. Understanding and how to communicate between those processes is critical,” Rush noted.

The agency has been on what he referred to as “an 11-year journey to understand” all of those complex processes.

“I hope in the next couple of years we’ll be to the point where AI is a critical factor in just maintaining compliance and maintaining a clean or unmodified opinion on our audit, but it’s another area ripe for investment and innovation. The challenge, though, is just understanding our legacy systems and processes, so we know how to automate smartly,” the CFO said.

When it comes to IT and other technology investments, he recognizes how the “business side” of the agency has to compete with intelligence operations.

“You can imagine it’s similar to the Navy. If you’re buying an aircraft carrier, it’s really easy to invest in. When you’re investing in back-office operations, it’s tough. We have that challenge in the financial area,” Rush said.  

Another issue that DIA is confronting when it comes to automating functions to boost staffs’ experience around the enterprise is unique to its multi-generational workforce.

“My team loves spreadsheets. They like fax machines. Believe it or not, we have a dial-up modem and floppy disk for part of our operations. One of my counterparts in the intel community actually still uses microfiche. We’re using 1960s, ‘70s, ‘80s and ‘90s technology in 2024. So, simple things like asking a data query might take my team hours and hours and hours to comb through spreadsheets, collate data, look through products and then present static PowerPoints to me,” Rush said.

“So now, we’re really in the infancy of our AI journey. We used [automation and robotics] to transition travel voucher statements from our unclassified side to the high side — that has just barely scratched the surface. I want for the day when we’re living and working in the live dashboards,” he added.

But in the same offices and on some of the same teams where officials are “actually afraid of having live dashboards where other people can see the information and analyze it and question it,” Rush noted, there are also younger coworkers who want much more access to advanced tech to do their jobs.

“Some of the younger members of our workforce we’ve had leave, because they come in and they go, ‘What do you mean I can’t use [generative AI like] ChatGPT on a top secret network? What do you mean, you don’t have these tools?’ One of our young officers from our comptroller shop luckily stayed with us. But he went to another department and he said, ‘I could write script in 20 minutes if you would give me the tools to do this on the top secret level — and I could automate 80% of my work.’ So, he left from being an accountant in our comptroller shop, he’s working another part of the budget process,” Rush said. “So that cultural challenge is real for us.”

Defense Department moving to automate polygraph processes
https://defensescoop.com/2023/05/25/defense-department-moving-to-automate-polygraph-processes/
Thu, 25 May 2023 14:37:53 +0000
The Defense Innovation Unit (DIU) has released a new solicitation for its “Polygraph+” program aimed at acquiring new capabilities to help the Pentagon with credibility assessments.

Faced with a security clearance backlog, the Pentagon is looking for technologies to automate polygraph processes and credibility assessments to make them more efficient and effective.

The Defense Innovation Unit (DIU) — which has outposts in Silicon Valley and other commercial tech hubs — has released a new solicitation for its “Polygraph+” program aimed at acquiring new capabilities to help the Pentagon detect lies and weed out untrustworthy people.

The Department of Defense “relies on credibility assessments (CAs) to vet new personnel during onboarding, evaluate existing DoD personnel for access to special or classified information, assist in determining source credibility, and interview subjects in criminal investigations. The DoD’s current CA standard requires trained evaluators to manually prepare, gather, and analyze data from polygraphs. Due to the manual efforts required under current CA protocols, the DoD sees room for improvement, optimization, and automation,” the solicitation states.

DIU is eyeing commercial solutions to optimize credibility assessments through a combination of user experience and automation upgrades, and it’s planning a multi-stage prototyping initiative with the potential for follow-on production awards.

“A minimum viable product should include a configurable system with new sensing tools, automated scoring, and usable interfaces that exceeds performance achieved by current capabilities and reduces the potential for human error and bias,” per the solicitation.

The first line of effort for the program is non-invasive physiological or behavioral sensing.

“A successful solution will measure physiological and behavioral signals with non-contact sensors that are validated against current CA measures,” the solicitation states, including signals such as respiration, heart rate, blood pressure, electrodermal activity, pupil diameter, and ocular movements.

“Remote, ‘nearable,’ and/or off-body assessments are preferred (e.g., visual perception sensing using high definition, depth, or thermal cameras; optical sensing; etc),” per the solicitation.

The technology must also be able to accommodate “next-generation signals.”

A second line of effort is for tools that automate data fusion and credibility assessment scoring.

“A successful solution will provide an automated scoring option in lieu of CA scoring performed by humans,” according to DIU.

“Scoring solutions should be capable of fusing and scoring current physiological and behavioral data signals (e.g., respiration, heart rate, electrodermal activity, ocular characteristics, etc),” the solicitation states.

Analytic tools with forward-compatibility to incorporate new sensor inputs — and extraction and classification models capable of discovering new credibility assessment measures — are also desirable, it notes.
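One common pattern for this kind of multi-signal fusion — not necessarily what DIU will select — is to normalize each channel against the subject's own baseline and combine the channels by weight. The sketch below assumes exactly that; the channel names, baseline readings, and equal weights are illustrative, not drawn from the solicitation.

```python
import statistics

def fuse_signals(baseline, sample, weights):
    """Fuse multiple physiological channels into one composite score.

    Each channel's sample reading is expressed as a z-score against the
    subject's own baseline recordings, then the channels are combined by
    weight. Channels and weights here are purely illustrative.
    """
    score = 0.0
    for channel, readings in baseline.items():
        mu = statistics.fmean(readings)
        sigma = statistics.pstdev(readings) or 1.0  # guard against zero spread
        z = (sample[channel] - mu) / sigma
        score += weights.get(channel, 0.0) * z
    return score

baseline = {
    "heart_rate": [62, 64, 63, 65],
    "electrodermal": [1.1, 1.2, 1.0, 1.1],
}
sample = {"heart_rate": 78, "electrodermal": 2.0}   # both channels elevated
weights = {"heart_rate": 0.5, "electrodermal": 0.5}

print(round(fuse_signals(baseline, sample, weights), 2))
```

A forward-compatible version of this — one of the solicitation's stated desires — would simply accept new channel keys in `baseline` and `weights` without code changes.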

A third line of effort is for intuitive tools to aid credibility evaluators’ decision-making, such as a dashboard that displays relevant sensor and scoring info.

“A successful solution will deliver CA scoring information through an interface that supports the adjudication process in real-time,” per the solicitation.

These types of tools “should remain configurable to accept new inputs, work in diverse contexts, and meet different end user preferences,” DIU notes.

In addition to those lines of effort, the Pentagon is also interested in technologies that can apply natural language processing to credibility assessments.

Natural language processing involves artificial intelligence capabilities that enable machines to understand speech or text.

“Automated, deep natural-language processing (NLP) technology may hold a solution for more efficiently processing text information and enabling understanding connections in text that might not be readily apparent to humans … Improving human language technology to incorporate these capabilities is essential for enabling automated exposure of important content to facilitate analysis,” according to the Defense Advanced Research Projects Agency, which has pursued that type of tech.

Prototype development for DIU’s Polygraph+ program is expected to take place in three phases. The first will include a “benchmark” of credibility assessment scoring and sensing technologies, such as computational testing and usability testing. The second will include “in-lab validation and iterative development cycles.” The third will include network accreditation to ensure “full functionality and deployment” on DOD systems.

Industry responses to the solicitation are due June 5.

The department will use other transaction authority and the Commercial Solutions Opening mechanism — which are intended to help DOD cut through red tape and move faster with acquisition — to issue awards for prototyping.

“Companies are advised that any Prototype Other Transaction (OT) agreement awarded in response to this solicitation may result in the direct award of a follow-on production contract or agreement without the use of further competitive procedures. Follow-on production activities may result from successful prototype completion,” per the solicitation. “The follow-on production contract or agreement will be available for use by one or more organizations within the Department of Defense. As a result, the magnitude of the follow-on production contract or agreement could be significantly larger than that of the Prototype OT agreement.”

Pentagon’s CISO warns that zero trust will ‘fail’ without automation
https://defensescoop.com/2023/05/23/pentagon-dod-zero-trust-automation-dave-mckeown/
Tue, 23 May 2023 19:16:28 +0000
DOD CISO Dave McKeown said that “there are lots of areas where automation can come into play — I think we’re going to fail if we don’t automate as we implement zero trust.”

As the Department of Defense works to implement zero-trust cybersecurity measures over the next four years, automation tools that can assist in handling large volumes of data and an increasingly complex network must be incorporated to ensure its success, the Pentagon’s chief information security officer said Tuesday.

Speaking at the UiPath Together Public Sector summit, produced by FedScoop, DOD CISO Dave McKeown said that “there are lots of areas where automation can come into play — I think we’re going to fail if we don’t automate as we implement zero trust.”

Numerous government agencies are working to deploy zero-trust architectures, and the Pentagon has set itself a deadline of fully implementing the framework by 2027. Unlike traditional cybersecurity standards that grant users and data in a network implicit trust, a zero-trust framework requires all users and data to be continuously authenticated and authorized as they move throughout the network.

In its 2022 zero-trust strategy, the Pentagon outlined seven pillars to guide the department’s efforts — one of which is “automation and orchestration,” which calls on the Pentagon to automate manual security and other processes across the enterprise.

“We have to log everything that’s going on on the network, and that becomes very voluminous. We have to then go through those logs and look for anomalous behavior,” McKeown said. “These are things that we kind of do now. We don’t do them real well, but we need to scale that up and do that very, very well.”
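At its simplest, the "log everything, then look for anomalous behavior" loop McKeown describes is rule-based filtering over event streams — real deployments layer statistical and ML models on top. The sketch below is a toy version under invented assumptions: the log schema, host allow-lists, and 07:00–19:00 policy window are all hypothetical.

```python
from datetime import datetime

# Illustrative log schema: (user, host, timestamp). A real SIEM pipeline
# would ingest far richer, structured events at much higher volume.
events = [
    ("alice", "fileserver-01", datetime(2025, 3, 10, 14, 30)),
    ("alice", "fileserver-01", datetime(2025, 3, 11, 2, 15)),   # after hours
    ("bob",   "hr-db",         datetime(2025, 3, 11, 10, 0)),
]

ALLOWED = {"alice": {"fileserver-01"}, "bob": {"hr-db"}}
WORK_HOURS = range(7, 19)  # 07:00-18:59, an arbitrary policy window

def flag_anomalies(log):
    """Flag events outside working hours or touching unauthorized hosts."""
    flagged = []
    for user, host, ts in log:
        reasons = []
        if ts.hour not in WORK_HOURS:
            reasons.append("after-hours")
        if host not in ALLOWED.get(user, set()):
            reasons.append("unauthorized-host")
        if reasons:
            flagged.append((user, host, reasons))
    return flagged

print(flag_anomalies(events))
```

The scaling problem McKeown points to is exactly that this filter must run over everything on the network, continuously — which is why he frames automation as a precondition rather than an optimization.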

McKeown noted that automation could play a crucial role in labeling large amounts of data coming in from the Pentagon’s systems, as well as data stored in its repositories.

Automated account provisioning is also being built into the identity, credential and access management (ICAM) solution being implemented across the department, he said. 

“We have 10,000 information systems, at any time we may have had to have 10,000 different accounts created. We want to be able to go into a central place, create accounts, create accounts for any one of those systems, many of those systems and have it done in a reliable fashion where it isn’t the same and all of the lockdowns or permissions are correct,” McKeown said. “Automation can play a huge role there as we move forward with that automated account provisioning.”
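The reliability property McKeown wants — every account created "the same," with the lockdowns and permissions correct — falls out naturally when provisioning is driven from a central role template rather than done per system. A minimal sketch, with invented role names, permissions, and system names:

```python
# Illustrative centralized account provisioning: one request, one role
# template, applied identically to every target system. Role and system
# names are hypothetical, not drawn from any DOD ICAM solution.
ROLE_TEMPLATES = {
    "analyst": {"read_reports", "run_queries"},
    "admin": {"read_reports", "run_queries", "manage_users"},
}

def provision(username, role, systems):
    """Create the same account, with the same permissions, on each system."""
    perms = ROLE_TEMPLATES[role]
    return {system: {"user": username, "permissions": sorted(perms)}
            for system in systems}

accounts = provision("jdoe", "analyst", ["logistics-db", "intel-portal"])
for system, acct in accounts.items():
    print(system, acct["permissions"])
```

Because every system's account is derived from the same template, consistency is guaranteed by construction — the failure mode of 10,000 hand-configured, subtly different accounts disappears.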

Access control functions will also need to largely be automated in order to leverage large amounts of data points and make decisions on whether or not an account can access which sets of data, he said.

“We want to restrict access from places in the world which are dangerous. We want to grant access when all of your tickets are right,” McKeown said. “Your computer that has been scanned shows that it is secure and we’re going to allow you and you’re going to be able to see the data that you want to look at.”

McKeown also noted that automation-powered zero trust could prevent future insider leaks of classified documents — such as those allegedly distributed online by Air National Guardsman Jack Teixeira in April.

He said the Pentagon wants to get involved with automated user activity monitoring to look for anomalous behavior, flag it and even take direct actions to stop it before excessive damage is done.

“Anytime you see anomalous behavior, like after-hours activities, people going to areas of the internet, people going to areas of the network where they’re not supposed to be — you can totally automate the reporting of that and the response to that if you wanted to,” McKeown said.

AI capabilities ‘of highest importance’ for NATO transformation efforts
https://defensescoop.com/2022/10/26/ai-capabilities-of-highest-importance-for-nato-transformation-efforts/
Wed, 26 Oct 2022 20:41:20 +0000
Modernizing weapons and achieving warfare advantage will depend on nations’ abilities to unleash AI, a NATO exec said.

To be ready for ultra-modern warfare that will likely span across all domains in the future, NATO allies must now jointly prioritize artificial intelligence capabilities, according to German Gen. Chris Badia. 

AI could enable faster decision-making and improve systems interoperability, he noted.

“This is why AI, to us, is of highest importance,” Badia, NATO’s Deputy Supreme Allied Commander Transformation, said Tuesday at the Association of Old Crows international convention in Washington. 

The alliance over the past year or so has approved a new artificial intelligence strategy and autonomy implementation plan.

AI is a topic that members are “discussing and looking into almost daily,” he noted.

Envisioned to be a force multiplier in back-office operations and on future battlefields, AI is expected to allow machines to perform operations faster and with less human input. NATO partners have been increasingly investing in the development of associated technologies for their own militaries over the last few years. But, for Badia and his team, recent moves by the West’s adversaries and the unfolding Ukraine-Russia war are further demonstrating the need for the alliance to prioritize AI.

“[NATO members] focused for too long on asymmetric threats, terrorism, and nation-building. What was China and Russia doing during this time over the past 20 years? They chose [to prioritize] technology,” he said, explicitly mentioning hypersonics, drones and other capabilities that could be used at the “tactical edge” as examples of areas where Beijing has been moving forward.

With expertise as both a fighter pilot for the German air force and NATO executive, Badia has been considering the possible lessons that Russia is learning from the war it provoked in Ukraine — to “gain a much better understanding of the capabilities needed in order to defend the alliance” in the future. 

Badia said takeaways for NATO include that conflicts in the future will be more “missile related” and in “digitized” domains. 

While traditional platforms such as tanks may always be of use, automated systems and other technological capabilities will more likely enable combat advantage in the future, in his view.

“How do we become better and succeed over our opponent?” Badia said. “This is where AI comes in — because you need to be much faster, you have to have a much better understanding, and you have to go across domains.”

The U.S. and its allies’ military funding and plans are based on long-term approaches spanning years and don’t match the speed at which technologies can mature and disrupt the status quo, he suggested.

Badia noted that the F-35 fighter jet — which is operated by many NATO nations — is sophisticated and connects to a combat cloud. But more systems across the alliance need to be interoperable.

“What are we doing about the F-16s, the 15s, the 18s, and all the other jets that are out there?” he said. “I’m more concerned about that. And this is where AI comes in — because we need to enable the systems we have in order to really enable everybody to go to multi-domain operations.”

With new duties, NGA plans to hasten automation opportunities and create open-data repository
https://defensescoop.com/2022/09/29/with-new-duties-nga-plans-to-hasten-automation-opportunities-and-create-open-data-repository/
Thu, 29 Sep 2022 20:17:48 +0000
The agency is looking for the full spectrum of technologies to achieve its requirements for Project Maven.

The National Geospatial-Intelligence Agency (NGA) is strategically expanding its government and commercial partnerships and introducing new mechanisms to spur artificial intelligence deployments in support of its components and stakeholders, its research chief said Wednesday.

These moves are necessary to fulfill new responsibilities recently delegated to NGA to scale maturing AI efforts, according to Cindy Daniell, director of its Research Directorate, who spotlighted ongoing initiatives and fresh opportunities at Capital Factory’s Fed Supernova conference in Austin, Texas. In her view, they’ll likely also prove essential to ensuring the U.S. maintains a technological advantage as technology and global competition evolve in the near term.

“The future is unknowable — but our resources, our strengths, opportunities, weaknesses, as well as the trends that lie before us, are not. From these, we can make educated guesses on how to enable the future, and where possible, accelerate it,” Daniell said. 

Speaking to an audience of government officials, technologists, investors and entrepreneurs in Texas’ capital region, she also repeatedly emphasized “our national security challenges are too difficult for the government to solve alone.”

NGA provides geospatial intelligence (GEOINT) that is fundamental to the nation’s security. And that GEOINT “tells you what is happening, where, and what might happen when,” Daniell said. The term refers to the exploitation and analysis of imagery and geospatial information to describe, assess and visually depict physical features and geographically referenced activities on the Earth.

While GEOINT capabilities have been “a critical source of information for all eternity,” she said, that has especially been the case for the modern national security apparatus. 

“When we layer this with multiple types of imagery and combine it with other intelligence sources, it literally reveals the ground truth — informing critical national security decisions surrounding the Cuban Missile Crisis, the raid on Osama bin Laden’s compound, and support for humanitarian assistance and disaster relief,” Daniell explained. “Needless to say, it has also played a crucial role in the war in Ukraine.”

Given the shifting conflict and political landscapes of this era, the NGA leader and former electrical engineer said she could “not emphasize enough” how important it is for the public, private and academic sectors to work together to drive “novel practical solutions” at this moment.

From a research perspective, Daniell said NGA capability development is informed by three lines of effort. 

The first, “validation,” covers accurate, high-resolution and continually updated representations of Earth's elements through physical or activity models, as well as positioning, navigation and timing (PNT), she noted. The second, “collection technologies,” encompasses efficient strategies and methods to deliver spatial-temporal data from an ever-increasing number of sources. The third area, analytic technologies, is all about “accurate, timely, reliable, and scalable methods for data exploitation and analysis,” which are constantly evolving, she said. 

GEOINT elements of the Pentagon’s trailblazing early AI effort — Project Maven — are presently being transitioned to NGA’s purview, and the project is now a major priority for the agency.

“Maven will enable us to integrate and institutionalize automated, geospatial AI capabilities with NGA’s strategic priorities. This will ultimately enable NGA to provide our military service customers with the critical and timely insights they require,” Daniell said. “To this end, we’re looking for the full spectrum of capabilities — everything from discrete, bespoke software products to complete end-to-end solutions.” 

The agency is developing software and requirements for future systems that are specifically designed to ensure the information is accessible, and it has legacy and new datasets that must be fused. New sensors and information sources are being built and refined “at a rate we’ve never seen before,” Daniell also said, noting that NGA needs to incorporate more automation, advanced modeling and machine learning to push the envelope. 

“We need your help — every single one of you — to accelerate this development, integration, sustainment and refresh of automation and computer vision,” she told the audience in Texas. “And we are expanding our partnerships with all sorts of innovation in the commercial space. This means we’re providing new engagement venues and mechanisms to enable government teaming with smaller non-traditional companies who are no less innovative than their larger counterparts.”

Last year, NGA launched a broad agency announcement to speed up how it buys specific priority technologies to perform its required functions. 

“We will be releasing more topics, both broad and specific, with the BAA in the future. So, stay tuned,” Daniell said.

Beyond expanding collaboration with government research labs, military branches and other partners, the agency also recently launched a new Data, Digital and Innovation (DDI) initiative to help lead the GEOINT community “through the agency’s adoption of AI,” the director noted.

To build advanced models, NGA needs accurate test data that closely represents the format and structure of operational data. 

“So over the next year, we are working to create an open-data repository with a variety of sample datasets made just for you, for use by academia and industry,” Daniell noted. Further, her team is looking at producing “a truly representative development sandbox architecture” to run, test and demonstrate models in an operationally relevant architecture.

That work will be tested and accelerated as a permanent feature at NGA’s brand new “moonshot laboratory facility” in St. Louis.

“Our goal is to enable you to quickly iterate on solutions while maintaining operational security and control,” Daniell said.

The post With new duties, NGA plans to hasten automation opportunities and create open-data repository appeared first on DefenseScoop.

]]>
Pentagon reaches important waypoint in long journey toward adopting ‘responsible AI’ https://defensescoop.com/2022/06/29/pentagon-reaches-important-waypoint-in-long-journey-toward-adopting-responsible-ai/ Wed, 29 Jun 2022 15:23:33 +0000 https://www.fedscoop.com/?p=54689 Experts weigh in on the department's new Responsible AI Strategy and Implementation Pathway.

The post Pentagon reaches important waypoint in long journey toward adopting ‘responsible AI’ appeared first on DefenseScoop.

]]>
There’s a lot to unpack in the Pentagon’s new high-level plan of action to ensure all artificial intelligence use under its purview abides by U.S. ethical standards. Experts are weighing in on what the document means for the military’s pursuit of this crucial technology.

In many ways, the Responsible AI Strategy and Implementation Pathway, released last week, marks the culmination of years of work in the Defense Department to drive the adoption of such capabilities. At the same time, it’s also an early waypoint on the department’s long and ongoing journey that sets the tone for how key defense players will help safely implement and operationalize AI, while racing against competitors with less cautious approaches.

“As a nation, we’re never going to field a system quickly at the cost of ensuring that it’s safe, that it’s secure, and that it’s effective,” DOD’s Chief for AI Assurance Jane Pinelis said at a recent Center for Strategic and International Studies event.

“Implementing these responsible AI guidelines is actually an asymmetric advantage over our adversaries, and I would argue that we don’t need to be the fastest, we [just] need to be fast enough. And we need to be better,” she added.

The term “AI” generally refers to a blossoming branch of computer science involving systems capable of performing complex tasks that typically require some human intelligence. The technology has been widely adopted in society, underpinning maps and navigation apps, facial recognition, chatbots, social media monitoring, and more. And Pentagon officials have increasingly prioritized procuring and developing AI for specific mission needs in recent years.

“Over the coming decades, AI will play a role in nearly all DOD technology, just as computers do today,” Gregory Allen, AI Governance Project director and Strategic Technologies Program senior fellow at CSIS, told FedScoop. “I think this is the right next step.”

The Pentagon’s new 47-page responsible AI implementation plan will inform its work to sort through the incredibly thorny known and unknown issues that could come with fully integrating intelligent machines into military operations. FedScoop spoke with more than a half dozen experts and current and former DOD officials to discuss the nuances within this foundational policy document and their takeaways about the road ahead.

“I’ll be interested in how this pathway plays out in practice,” Megan Lamberth, associate fellow in the Center for a New American Security’s Technology and National Security Program, noted in an interview.

“Considering this implementation pathway as a next step in the Department’s RAI process — and not the end — then I think [its] lines of effort begin to provide some specificity to the department’s AI approach,” she said. “There’s more of an understanding of which offices in the Pentagon have the most skin in the game right now.”

Principles alone are not enough

Following leadership mandates and months of consultations with leading AI professionals, the Pentagon officially issued a series of five ethical principles to govern its use of the emerging technology in early 2020. At the time, the U.S. military was the first in the world to adopt such AI ethics standards, according to Pinelis. 

A little over a year later, the department reaffirmed its commitment to them and released corresponding tenets that serve as priority areas to shape how the department approaches and frames AI. Now, each of those six tenets — governance, warfighter trust, product and acquisition, requirements validation, the responsible AI ecosystem, and workforce — has been fleshed out with detailed goals, lines of effort, responsible components and estimated timelines via the new strategy and implementation plan. 

Source: DOD’s Responsible AI Strategy and Implementation Pathway

“Principles alone are not enough when it comes to getting senior leaders, developers, field officers and other DOD staff on the same page,” Kim Crider, managing director at Deloitte, who leads the consulting firm’s AI innovation for national security team, told FedScoop. “Meaningful governance of AI must be clearly defined via tangible ethical guidance, testing standards, accountability checks, human systems integration and safety considerations.”

Crider, a retired major general who served 35 years in the military, was formerly the chief innovation and technology officer for the Space Force and the chief data officer for the Air Force. In her view, “the AI pathway released last week appears to offer robust focus and clarity on DOD’s proposed governance structure, oversight and accountability mechanisms,” and marks “a significant step toward putting responsible AI principles into practice.”

“It will be interesting to see the DOD continue to explore and execute these six tenets as new questions concerning responsible AI implementation naturally arise,” she added.

The pathway’s rollout comes on the heels of a significant bureaucratic shakeup that merged several of DOD’s technology-focused components — Advana, Office of the Chief Data Officer, Defense Digital Service, and Joint Artificial Intelligence Center (JAIC) — under the nascent Chief Digital and Artificial Intelligence Office (CDAO). 

David Spirk, the Pentagon’s former chief data officer who helped inform the CDAO’s establishment, said this pathway’s “emphasis on modest centralization of testing capability and leadership with decentralized execution” is an “indication of the maturity of thought in how the Office of the Secretary of Defense is positioning the CDAO to drive the initiatives successfully into the future when they will be even more important.”

It’s “a clear demonstration of the DOD’s intent to lead the way for anyone considering high consequence AI employment,” Spirk, who is now a special adviser for CalypsoAI, told FedScoop.

Prior to joining CSIS, Allen was Spirk’s colleague in DOD — serving as the JAIC’s director of strategy and policy — where he, too, was heavily involved in guiding the enterprise’s early pursuits with AI. Even where the new pathway’s inclusions seem modest, in his view, “they are actually quite ambitious.”

“The DOD includes millions of people performing hundreds of billions of dollars’ worth of activity,” he said. “Developing a governance structure where leadership can know and show that all AI-related activities are being performed ethically and responsibly, including in situations with life-and-death stakes, is no easy task.”

Beyond clarifying how the Pentagon’s leadership will “know and show” that their strategy is being implemented as envisioned, other experts noted how the pathway provides additional context and distinctions for programs, offices and industry partners to guide their planning in their connected paths toward robust RAI frameworks.

“Perhaps most importantly, the document provides additional structure and nomenclature that industry can utilize in collaboration activities, which will ultimately be required to achieve scale,” Booz Allen Hamilton Executive Vice President for AI Steve Escaravage, an early advocate of RAI, told FedScoop.

“I view it as industry’s responsibility to provide the department insights on the next layer of standards and practices to assist the department’s efforts,” he said. 

A journey toward ‘trust’

The Pentagon’s “desired end-state for RAI is trust,” officials wrote in the new pathway. 

Though a clear DOD-aligned definition of the term isn’t included, “trust” is mentioned dozens of times throughout the new plan. 

“In AI assurance, we try not to use the word ‘trust,’” Pinelis told FedScoop. “If you look up ‘trust,’ it has something like 300 definitions, and most of them are very qualitative — and we’re trying to get to a place that’s very objective.”

In her field, experts use the term “justified confidence,” which is considered evidence-based, more well-defined, and embraces testing and metrics to back it up. 

“But of course, in some of the like softer kind of sciences and software language, you will see ‘trust,’ and we try to reserve it either for kind of warfighter trust in their equipment, which manifests in reliance — like literally will the person use it — and that’s how I kind of measure it very tangibly. And then we also use it kind of in a societal context of like our international allies trusting that we won’t field weapons or systems that are going to cause fratricide or something along those lines,” Pinelis explained.

While complicated by the limits of language, this overarching approach is meant to help diverse Pentagon AI users have justifiable and appropriate levels of trust in all systems they lean on, which would in turn help accelerate adoption.

Source: DOD’s Responsible AI Strategy and Implementation Pathway

“AI only gets employed in production if the senior decision-makers, operators, and analysts at echelon have the confidence it works and will remain effective regardless of mission, time and place,” Spirk noted. During his time at the Pentagon, he came to recognize that “trust in AI is and will increasingly be a cultural challenge until it’s simply a norm — but that takes time.” 

Spirk and the majority of other officials who FedScoop spoke to highlighted the significance of the new responsibilities laid out in the goals and lines of effort for the second tenet in the pathway: warfighter trust. Through them, DOD commits to a robust focus on independent testing and validation — including new training for staff, real-time monitoring, harnessing commercially available technologies and more. 

“This is one of the most important steps in making sure that [the Office of the Secretary of Defense] is setting conditions to provide the decision advantage the department and its allies and partners need to outpace our competitors at the speed of the best available commercial compute, whether cloud-based or operating in a disadvantaged and/or disconnected comms environment,” Spirk said.  

Allen also noted that tasks under that second tenet “are big.” 

“One of the key challenges in accelerating the adoption of AI for DOD is that there generally aren’t mature testing procedures that allow DOD organizations to prove that new AI systems meet required standards for mission critical and safety critical functions,” he explained. By investing now in maturing the AI test and evaluation ecosystem, DOD can prevent a future process bottleneck where promising AI systems in development can’t be operationally fielded because there is not enough capacity.

“Achieving trust in AI is a continuous effort, and we see a real understanding of this approach throughout the entire plan,” Deloitte’s Crider said. She and Lamberth both commended the pathway’s push for flexibility and an iterative approach.

“I like that the department is recognizing that emerging and future applications of AI may require an updated pathway or different kinds of oversight,” Lamberth noted. 

In her view, all the categories under each line of effort “cover quite a bit of ground.”

One calls for the creation of a DOD-wide central repository of “exemplary AI use cases,” for instance. Others concentrate on procurement processes and system lifecycles, as well as what Lamberth deemed “much-needed talent” initiatives, like developing a mechanism to identify and track AI expertise and completely staffing the CDAO. 

“While all the lines of effort listed in the pathway are important to tackle, the ones that stick out to me are the ones that call for formal activation of key processes to drive change across the organization,” Crider said.  

She pointed to “the call for best practices to incorporate operator input and system feedback throughout the AI lifecycle, the focus on developing responsible AI-related acquisition resources and tools, the use of a Joint Requirements Oversight Council Memorandum (JROC-M) to drive changes in requirement-setting processes and the development of a legislative strategy to ensure appropriate engagement, messaging and advocacy to Congress,” in particular. 

“Each of these lines of effort is critical to the long-term success of the pathway because they help drive systemic change, emphasize the need for resources and reinforce the goals of the pathway at the highest levels of the organization,” she said.

A variety of assigned activities are also associated with what could soon be major military policy revamps. 

For example, the department commits to addressing AI ethics in its upcoming update of the policy on autonomy in weapons systems, DOD directive 3000.09. And it calls for the CDAO and other officials to explore whether a review procedure is needed to ensure warfare capabilities will be consistent with DOD’s ethics principles.

Spirk and other experts noted that such an assessment would be prudent for the Pentagon.

“As AI is developed and deployed across the department, it will reshape and improve our warfighting capabilities. Therefore, it is critical to consider how AI principles and governance align with overall military doctrine and operations,” Crider said.

Allen added that that line of effort demonstrates how the department recognizes that there are a lot of relevant existing DOD processes, such as weapons development legal reviews and existing safety standards, that apply to all systems — and not just AI-enabled ones.

“The DOD is still assessing whether the right approach is to consider a new, standalone review process focused on RAI or whether to update the existing processes. I’m strongly in favor of the latter approach,” he said. “DOD should build RAI into — not on top of — the existing institutions and processes that ensure lawful, safe and ethical behavior.”

Wait and see

It is undoubtedly difficult for large, complex bureaucratic organizations like the Pentagon to prioritize the implementation of tech-driving strategic plans while balancing other mission-critical work. 

Experts who spoke to FedScoop generally agreed that by identifying specific alignment tasks with existing DOD directives and frameworks from other offices, and outlining who will carry them out, the implementation pathway ensures greater integration and some accountability for everyone to execute on.

Still, some concerns remain. 

“Looking ahead, I think that many of the ambitions of the … pathway are in tension with the department’s technology infrastructure and security requirements. Creating shared repositories and workspaces requires the cloud, and it doesn’t work if data are siloed and access to open applications is restricted,” Melanie Sisson, a fellow in the Brookings Institution’s Center for Security, Strategy, and Technology, told FedScoop.

Spirk also noted that “a vulnerability in comprehensive oversight and leadership exists here, as the technical talent with domain expertise to understand how to both measure and overcome obstacles to gaps and weaknesses that will be illuminated will likely be significant.” 

To address these and many other unforeseen concerns, DOD could potentially benefit from developing a feedback mechanism and working body among the individuals and teams tasked as operational responsible AI leads, some experts recommended.

“It is important to keep the lines of communication open — both horizontally and vertically. Challenges that may come up during the implementation phase at the team or project level may be common issues across DOD,” Crider said.

The impact of the implementation plan remains to be seen. And investments in people, power and dollars will be needed to effectively guide, drive, test, apply and integrate responsible AI across the enterprise. 

But the officials FedScoop spoke to are mostly hopeful about what’s to come. 

“Looking at the lines of effort and the offices responsible for each, it is clear the department has made strides in establishing offices and processes for responsible AI development and adoption. While a lot of hard work remains, the department continues to show that it is committed to AI,” Lamberth said. 

“I’ll be interested to see how this guidance is communicated to the rest of the department,” she added. “How will it be communicated that these efforts are important across the services, and how will this pathway impact how the services develop and acquire potential AI technologies?” 

The post Pentagon reaches important waypoint in long journey toward adopting ‘responsible AI’ appeared first on DefenseScoop.

]]>
What Russia’s invasion of Ukraine is revealing about tech in modern warfare https://defensescoop.com/2022/05/19/what-russias-invasion-of-ukraine-is-revealing-about-tech-in-modern-warfare/ Thu, 19 May 2022 10:31:17 +0000 https://www.fedscoop.com/?p=52430 Experts argue that the U.S. government needs to better understand the weaknesses of its autocratic rivals — and then find ways to exploit them. 

The post What Russia’s invasion of Ukraine is revealing about tech in modern warfare appeared first on DefenseScoop.

]]>
Russia’s ongoing invasion of Ukraine is teaching national security experts new things about the current status of artificial intelligence and automation in modern warfare — and how to prepare for possible future conflicts with authoritarian regimes.

Much of the devastation so far is the result of conventional military systems. That likely won’t always be the case, former Defense Department officials and military experts warned this week.

“We haven’t seen a conflict on this scale in quite a long time, but many aspects of this conflict really highlight what has been changing in the 21st century. Unmanned systems, remotely piloted systems and autonomous systems were all the sorts of things that some have argued were not going to be a part of a high-intensity fight, they were only going to be relevant to counterinsurgency conflicts. I think that myth has been blown wide open. But what we’ve only seen is the first move, in which there will always be countermoves,” Gregory Allen, a senior fellow with the Center for Strategic and International Studies’ strategic technologies program, said Tuesday at the Nexus 2022 national security symposium.

Allen served as the director of strategy and policy review at the Pentagon’s Joint Artificial Intelligence Center (JAIC) before leaving the department earlier this year.

He recently assessed evidence alleging that Russia is using artificial intelligence-enabled autonomous weapons systems against Ukraine and, ultimately, did not find the claims to be credible. Still, many of the remotely piloted, unmanned systems operating in this conflict have been “really remarkably effective,” Allen noted.

Observations from this initial phase of the war can suggest how nations’ organizational structures and technological investments might need to adapt to ensure competitive military advantage down the line.

“What we’ve been seeing in Ukraine is munitions … where these are kamikaze drones that cost somewhere in the low tens of thousands of dollars a shot, that are annihilating million-dollar tanks at volume. There is a cost and competitiveness revolution going on in military technology, all of which is underpinned by the progress that we’ve seen in commercial digital technology — not least of which is artificial intelligence,” Allen said.

In following Russian-language media over the last few weeks, Allen observed a narrative that he said suggests that, as more electronic warfare systems and drone countermeasures are introduced in this unfolding conflict, pressure is mounting on all sides — but particularly from Russian military organizations — to deploy increasingly autonomous systems.

“I think we’ve seen, throughout history, Russia really underperforming in the early stage of just about every war, and that not necessarily being a great predictor of what the long-term outlook looks like,” Allen noted.

Margarita Konaev, a native Russian speaker and non-resident senior fellow at the Atlantic Council who studies AI-related defense applications and Russian military innovation, said right now she feels like she and other analysts have “gotten a lot of things wrong.”

“If you are looking at the performance of the Russian military right now, it is very difficult to tell that the last decade and a half has been in fact dedicated to significant reforms that focus on professionalization, on new equipment, autonomous capabilities, a lot of robotics, unmanned systems, electronic warfare, AI for command and control, information, cyber warfare. There were grand expectations, and we have not seen them. And so it’s a great point of reflection for the community that has studied Russia,” she said.

In Konaev’s view, the invasion at this point is highlighting a sharp difference between development of military technology and actual adoption. 

“What we’re seeing right now is that the technical barriers to innovation are really not the most significant barriers to the use and scaling and integration of some of these sophisticated and advanced technologies and operations. A lot of it has to do with institutional, bureaucratic, cultural, human trust issues, let alone between humans and machines,” she noted.

By all assessments, Russia has access to some of the most sophisticated electronic warfare capabilities in the world. “So the fact that the Ukrainian military is able to inflict such massive damage with quite rudimentary and relatively cheap drones is significant,” she said.

Capabilities to counter automated technologies must be considered a potential future priority in Russia’s modernization pursuits, according to Konaev.

Looking ahead, Konaev is worried about “the pendulum swinging to a point where we completely underestimate what comes next from the Russian perspective.”

Russia is a nuclear power and still has access to a significant amount of conventional fighting capabilities, she noted, adding that “the relationship between Russia and China is also going to be very interesting and complicated.”

While he was at the JAIC, Allen made a number of trips to China where he spoke with dozens of Chinese officials and experts about artificial intelligence. 

“The Chinese military is in the midst of a major AI-enabled modernization effort. They are really changing a lot of the way that they do what they do,” he said.

At one recent conference, a senior Chinese weapons executive Allen had met told a global audience that “in the future, there will be no people fighting the wars,” and that his China-based company is building autonomous systems now to prepare. 

Allen also pointed to a recent report that China’s leader Xi Jinping was shaken by what he has seen in the Ukraine-Russia conflict, where commercially-derived drones are taking out expensive military-designed hardware. 

“That report may or may not be true, but it is absolutely the case that the Chinese military has drawn a lot of lessons from what we’re seeing in Ukraine, and that’s why time matters a lot,” Allen said. “I think we should expect a lot to change in a relatively short period of time.”

August Cole, a non-resident senior fellow at the Atlantic Council and author of novels about the use of emerging military technologies, added that any future conflict with China is going to be “fundamentally decided by data.”

Liza Tobin, senior director of research and analysis for economy at the Special Competitive Studies Project — who previously served on National Security Council staff as China director — said Beijing has a comprehensive plan to “control the networks, the platforms, and importantly, the standards of this emerging digital economy.”

To China, data marks a new source of innovation and economic growth. So much so, Tobin noted, that the nation’s leaders have updated their Marxist theory to add data as a fourth factor of production. 

“For those of you who may be rusty on your Marxist theory, the original three are land, labor and capital. So, when you put them all together in creative ways, it produces economic growth. But unfortunately, the Chinese economy is slowing. The era of easy industrialization and demographic growth is over. So they can’t squeeze any more marginal productivity out of land, labor and capital. Enter data, this new fourth factor of production. And so they are betting that this is a way out of the middle income trap and that by exploiting the many benefits and opportunities of data, they can actually grow their economy in ways that we can’t,” she explained.

The post What Russia’s invasion of Ukraine is revealing about tech in modern warfare appeared first on DefenseScoop.

]]>