Project Maven Archives | DefenseScoop
https://defensescoop.com/tag/project-maven/

‘Growing demand’ sparks DOD to raise Palantir’s Maven contract to more than $1B
https://defensescoop.com/2025/05/23/dod-palantir-maven-smart-system-contract-increase/
Fri, 23 May 2025 20:02:27 +0000
Despite the high price tag, questions linger about the Defense Department's plan for the AI-powered Maven Smart System.

Pentagon leaders opted to boost the existing contract ceiling for Palantir Technologies’ Maven Smart System by $795 million to prepare for what they expect will be a significant surge in demand from military users for the AI-powered software capabilities over the next four years, officials familiar with the decision told DefenseScoop this week.

“Combatant commands, in particular, have increased their use of MSS to command and control dynamic operations and activities in their theaters. In response to this growing demand, the [Chief Digital and AI Office] and Army increased capacity to support emerging combatant command operations and other DOD component needs,” a defense official said Thursday.

Questions linger, however, regarding the MSS deployment plan — and who is part of the expanded user base set to gain additional software licenses through this huge contract increase in the near term.

The Pentagon originally launched Project Maven in 2017 to pave the way for wider use of AI-enabled technologies that can autonomously detect, tag and track objects or humans of interest from still images or videos captured by surveillance aircraft, satellites and other means.
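
For readers unfamiliar with what "detect and tag" means in practice, the snippet below is a generic, open-source illustration that runs a pretrained object detector over a single image and prints labeled detections. It is hypothetical and is not Maven code, and it makes no claim about the program's actual models or data; the model choice, image path and confidence threshold are assumptions for demonstration only.

```python
# Generic object-detection sketch (illustrative only, not Maven): run a pretrained
# COCO detector on one image and print labeled detections above a score threshold.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()                 # model's expected preprocessing
categories = weights.meta["categories"]           # COCO class names

image = Image.open("overhead_frame.jpg").convert("RGB")   # placeholder image path
with torch.no_grad():
    prediction = model([preprocess(image)])[0]    # dict of boxes, labels, scores

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score >= 0.5:                              # arbitrary confidence cutoff
        x1, y1, x2, y2 = [round(v) for v in box.tolist()]
        print(f"{categories[int(label)]}: {score:.2f} at ({x1}, {y1}, {x2}, {y2})")
```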

In 2022, Project Maven matured into Maven via the start of a major transition. At that time, responsibilities for most of the program’s elements were split between the National Geospatial-Intelligence Agency and the Pentagon’s Chief Digital and AI Office, while certain duties moved to the Office of the Undersecretary of Defense for Intelligence and Security.

All three organizations running the program have been largely tight-lipped about Maven — and the associated industry-made MSS capabilities — since the transition. 

The Defense Department inked the initial $480 million, five-year IDIQ contract with Palantir for the program in May 2024. The Army’s Aberdeen Proving Ground was listed as the awarding agency, and the Office of the Secretary of Defense as the funding agency. Around that time, executives at Palantir told reporters that the work under that contract would initially cover five U.S. combatant commands: Central Command, European Command, Indo-Pacific Command, Northern Command/NORAD, and Transportation Command. The tech was also expected to be deployed as part of the Defense Department’s Global Information Dominance Experiments (GIDE).

In a one-paragraph announcement on Wednesday, DOD revealed its decision to increase that contract ceiling for Palantir’s MSS to nearly $1.3 billion through 2029.

A Pentagon spokesperson referred DefenseScoop’s questions about the move to the Army.

“We raised the ceiling of the contract in anticipation of future demand to support Army readiness. Having the groundwork for the contract in place ahead of time increases efficiencies and decreases timelines to get the licenses. No acquisition decisions have been made,” an Army official said.

That official referred questions regarding the operational use of MSS — and specifically, which Army units or combatant commands would be front of line to gain new licenses — back to the Pentagon. 

Defense officials did not share further details after follow-up inquiries on Friday. A Palantir spokesperson also declined DefenseScoop’s request for comment.

NGA Director Vice Adm. Frank Whitworth confirmed this week that there are currently more than 20,000 active Maven users across more than 35 military service and combatant command software tools in three security domains — and that the user base has more than doubled since January. 

Palantir also recently signed a deal with NATO for a version of the technology — Maven Smart System NATO — that will support the transatlantic military organization’s Allied Command Operations strategic command.

NATO inks deal with Palantir for Maven AI system
https://defensescoop.com/2025/04/14/nato-palantir-maven-smart-system-contract/
Mon, 14 Apr 2025 17:26:32 +0000
NATO said the contract "was one of the most expeditious in [its] history, taking only six months from outlining the requirement to acquiring the system."

NATO announced Monday that it has awarded a contract to Palantir to adopt its Maven Smart System for artificial intelligence-enabled battlefield operations.

Through the contract, which was finalized March 25, the NATO Communications and Information Agency (NCIA) plans to use a version of the AI system — Maven Smart System NATO — to support the transatlantic military organization’s Allied Command Operations strategic command.

NATO plans to use the system to provide “a common data-enabled warfighting capability to the Alliance, through a wide range of AI applications — from large language models (LLMs) to generative and machine learning,” it said in a release, ultimately enhancing “intelligence fusion and targeting, battlespace awareness and planning, and accelerated decision-making.”

Neither party commented on the terms of the deal, but it was enough to drum up market confidence in Palantir, whose stock rose about 8% Monday morning. NATO, however, said the contract “was one of the most expeditious in [its] history, taking only six months from outlining the requirement to acquiring the system.”

Ludwig Decamps, NCIA general manager, said in a statement that the deal with Palantir is focused on “providing customized state-of-the-art AI capabilities to the Alliance, and empowering our forces with the tools required on the modern battlefield to operate effectively and decisively.”

Palantir’s commercialized Maven Smart System plays into the growing need for an interconnected digital battlespace in modern conflict powered by AI. The data-fusion platform served as a core element of the Pentagon’s infamous Project Maven. However, NATO warned in its release that it shouldn’t be confused with the U.S. National Geospatial-Intelligence Agency’s Maven program, though the company’s AI is a component of the greater NGA program’s infrastructure.

The U.S. Department of Defense’s Combined Joint All-Domain Command and Control (CJADC2) effort attempts to do this by connecting disparate systems operated by the U.S. military and international partners under a single network to enable rapid data transfer between all warfighting domains. Palantir has already inked a $480 million deal with the Pentagon to support those efforts with Maven. Last September, the company also scored a nearly $100 million contract with the Army Research Lab to support each of the U.S. military services with Maven Smart System.

Meanwhile, the contract with the U.S.-based Palantir comes as NATO has become one of the recent targets of President Donald Trump’s ire because he believes other members of the alliance aren’t committing enough of their spending to the organization’s collective defense, saying in March: “If they don’t pay, I’m not going to defend them.”

NATO’s Allied Command Operations will begin using Maven within the next 30 days, the organization said Monday, adding that it hopes that using it will accelerate further adoption of emerging AI capabilities.

“ACO is at the forefront of adopting technologies that make NATO more agile, adaptable, and responsive to emerging threats. Innovation is core to our warfighting ability,” said German Army Gen. Markus Laubenthal, chief of staff of NATO’s Supreme Headquarters Allied Powers Europe, the military headquarters of ACO. “Maven Smart System NATO enables the Alliance to leverage complex data, accelerate decision-making, and by doing so, adds a true operational value.”

Watchdogs move to evaluate NGA’s Maven integration
https://defensescoop.com/2024/09/11/dod-nga-inspector-general-evaluate-maven-integration/
Wed, 11 Sep 2024 21:25:31 +0000
DefenseScoop was briefed on a new joint IG evaluation into the Pentagon's pioneering — and still maturing — computer vision program.

Inspectors general from the Defense Department and National Geospatial-Intelligence Agency launched a new, joint evaluation that will comprehensively gauge how Maven — the U.S. military’s pioneering and still-evolving computer vision program — is being integrated into real-world GEOINT operations.

Senior leaders from the watchdogs unveiled their plans to open this new review in a memorandum issued Sept. 9.

“The DOD OIG self-initiated the project based on our ongoing assessment of operations, programs, and risks in the DOD,” a spokesperson from that office told DefenseScoop on Wednesday.

According to the new joint memo, the “objective of this evaluation is to assess the effectiveness with which the [NGA] has integrated the Maven artificial intelligence program into the NGA’s [GEOINT] operations and fielded the technology to DOD mission areas.”

The officials emphasized, however, that they “may revise the objective as the evaluation proceeds,” and will also consider suggestions for other adjustments.

“We will perform the evaluation at the NGA. We may identify additional locations during the evaluation,” they wrote.

Responding to a then-intensifying demand for military computer vision applications, the Pentagon originally established Project Maven in early 2017 to help pave the way for wider use of AI-enabled technologies that can autonomously detect, tag and track objects or humans of interest from still images or videos captured by surveillance aircraft, satellites and other means.

In 2022, Project Maven matured into Maven via the start of a major transition, which at that time split the responsibilities for most of its elements between NGA and the Pentagon’s Chief Digital and AI Office, while sending certain duties to the Office of the Undersecretary of Defense for Intelligence and Security.

NGA has long been considered America’s secretive mapping agency, but it’s understood that one of its primary contemporary missions encompasses managing the entire Maven AI development pipeline.

Still, all three organizations running the program have largely been tight-lipped since the transition began — particularly regarding where that process stands and how each of their primary lines of effort may shift going forward. 

“Over the last few years, the DOD OIG has conducted a series of projects on DOD’s development and use of [AI]. Three evaluations have been initiated on Maven — which as you know is one of the DOD’s primary AI programs,” the Pentagon’s OIG spokesperson told DefenseScoop on Wednesday.

The first evaluation, published in 2019, focused on early stages of the initiative’s development. The second was released in 2022 and homed in on specific contracting aspects.

“With the Maven program now moved from [the Office of the Secretary of Defense] to NGA, the DOD OIG’s third evaluation, announced earlier this week, will focus on how NGA is integrating Maven into its operations,” the spokesperson said.

An NGA spokesperson told DefenseScoop on Thursday that the agency was expecting, and welcomes, this “planned study.”

Updated on Sept. 12, 2024, at 11:50 AM: This story has been updated to include comment from an NGA spokesperson.

NGA launches new pilot program to standardize computer vision model accreditation
https://defensescoop.com/2024/08/30/nga-pilot-program-geoint-standardize-computer-vision-model-accreditation-agaim/
Fri, 30 Aug 2024 20:00:21 +0000
The agency's leader provided a first look at the AGAIM initiative.

With aims to set a new government standard for assessing the robustness and reliability of computer vision models deployed for national security purposes, the National Geospatial-Intelligence Agency is launching an artificial intelligence accreditation pilot program, Vice Adm. Frank Whitworth told reporters Friday.

The NGA director unveiled this initiative — called the Accreditation of GEOINT AI Models, or AGAIM — during a roundtable in Washington hosted by the Defense Writers Group.

“The accreditation pilot will expand the responsible use of GEOINT AI models — and posture NGA and the GEOINT enterprise to better support the warfighter and create new intelligence insights. Accreditation will provide a standardized evaluation framework. It implements risk management, promotes a responsible AI culture, enhances AI trustworthiness, accelerates AI adoption and interoperability, and recognizes high-quality AI while identifying areas for improvement,” Whitworth said.

Historically considered the United States’ secretive mapping agency, NGA is the Defense Department’s functional manager for geospatial intelligence, or GEOINT. Broadly, that discipline involves the capture of imagery and data from satellites, radar, drones and other means — as well as expert analysis to visually depict and monitor physical features and geographically referenced activities on Earth.

One of NGA’s primary contemporary missions encompasses managing the entire AI development pipeline for the U.S. military’s prolific, evolving computer vision program Maven.

With increasingly “intelligent” capabilities, the agency’s capacity to detect threats globally is getting sharper. 

“[We’re] distinguishing objects, let’s say, for our aviators who fly our planes in and out of airfields. [We’re] distinguishing objects that could actually bring them harm, that are new, that encroach upon the airspace as they come into an airfield — or that might be new as it relates to a newly discovered seamount on the seabed, or that might be new relative to bathymetrics and hydrography for people who are in ships,” Whitworth explained. 

“These are things that keep people alive,” he said.

And as the technology rapidly matures, officials at the agency are using machine learning techniques to train models to detect anomalies for humans, as the director put it, “while we might be asleep or while we’re not looking at a particular image.”

New and more sophisticated models are also starting to emerge at an unprecedented pace. 

“In GEOINT — getting back to that issue of distinction — it is so important that we make sure these are good models, because the issue of positive identification underlies, effectively, whether you’re going to be correct and whether we might have some sort of an apology on behalf of our nation or an alliance” if the U.S. government gets something wrong, Whitworth said.

The agency envisions this pilot eventually becoming a pathfinder within DOD that ultimately ensures all players have the same standards to guide their GEOINT model development.

“You’ve got to start somewhere,” the director said.

Traditional computer vision and generative AI capabilities will be addressed in the new pilot.

“There are a whole lot of different types of models, and everyone likes to talk about [large language models, or LLMs]. This is more of the LVMs — I’m going to make that term up a bit for a large visual model, or a visual transformer — I think is actually a better way of talking about this,” Whitworth told DefenseScoop.

NGA launches new training to help personnel adopt AI responsibly
https://defensescoop.com/2024/06/18/nga-launches-new-training-help-personnel-adopt-ai-responsibly/
Tue, 18 Jun 2024 21:47:59 +0000
DefenseScoop got an inside look at the agency’s new AI strategy and GREAT training.

Artificial intelligence and machine learning adoption will increasingly disrupt and revolutionize the National Geospatial-Intelligence Agency’s operations, so leaders there are getting serious about helping personnel responsibly navigate the development and use of algorithms, models and associated emerging technologies.

“I think the blessing and curse of AI is that it’s going to think differently than us. It could make us better — but it can also confuse us, and it can also mislead us. So we really need to have ways of translating between the two, or having a lot of understanding about where it’s going to succeed and where it’s going to fail so we know where to look for problems to emerge,” NGA’s first-ever Chief of Responsible Artificial Intelligence Anna Rubinstein recently told DefenseScoop.

In her inaugural year in that nascent position, Rubinstein led the development of a new strategy and an instructional platform to help guide and govern employees’ existing and future AI pursuits. That latter educational tool is called GEOINT Responsible AI Training, or GREAT.

“So you can be a ‘GREAT’ developer or a ‘GREAT’ user,” Rubinstein said.

During a joint interview, she and the NGA’s director of data and digital innovation, Mark Munsell, briefed DefenseScoop on their team’s vision and evolving approach to ensuring that the agency deploys AI in a safe, ethical and trustworthy manner.

Irresponsible AI

Geospatial intelligence, or GEOINT, encompasses the discipline via which imagery and data is captured from satellites, radar, drones and other assets — and then analyzed by experts to visually depict and assess physical features and specific geographically referenced activities on Earth.

Historically, NGA has a reputation as the United States’ secretive mapping agency.

One of its main missions now (which is closely guarded and not widely publicized) involves managing the entire AI development pipeline for Maven, the military’s prolific computer vision program.

“Prior to this role, I was the director of test and evaluation for Maven. So I got to have a lot of really cool experiences working with different types of AI technologies and applications, and figuring out how to test it at the level of the data models, systems and the human-machine teams. It was just really fun and exciting to take it to the warfighter and see how they are going to use this. We can’t just drop technology in somebody’s lap — you have to make sure the training and the tradecraft is there to support it,” Rubinstein noted.

She served in that role as a contractor; that Maven expertise is now deeply informing her approach to the new, permanent position within the federal agency that she was tapped for.

“I’m trying to leverage all that great experience that I had on Maven to figure out how we can build enterprise capabilities and processes to support NGA — in terms of training people to make sure they understand how to develop and use AI responsibly — to make sure at the program level we can identify best practices and start to distill those into guidelines that programs can use to make sure they can be interoperable and visible to each other, to make sure that we’re informing policy around how to use AI especially in high-risk use cases, and to make sure we’re bringing NGA’s expert judgment on the GEOINT front into that conversation,” Rubinstein explained. 

Inside the agency, she currently reports to Mark Munsell, an award-winning software engineer and longtime leader at NGA. 

“It’s always been NGA’s responsibility to teach, train and qualify people to do precise geo-coordinate mensuration. So this is a GEOINT tradecraft to derive a precision coordinate with imagery. That has to be practiced in a certain way so that if you do employ a precision-guided munition, you’re doing it correctly,” he told DefenseScoop.

According to Munsell, a variety of timely factors motivated the agency to hire Rubinstein and set up a new team within his directorate that focuses solely on AI assurance and workforce development.

“The White House said we should do it. The Department of Defense said we should do it. So all of the country’s leadership thinks that we should do it. I will say, too, that the recognition of both the power of what we’re seeing in tools today and trying to project the power of those tools in five or 10 years from now, says that we need to be paying attention to this now,” Munsell told DefenseScoop. 

Notably, the establishment of NGA’s AI assurance team also comes as the burgeoning field of geoAI — which encompasses methods combining AI and geospatial data and analysis technologies to advance understanding and solutions for complex, spatial problems — rapidly evolves and holds potential for drastic disruption.

“We have really good coders in the United States. They’re developing really great, powerful tools. And at any given time, those tools can be turned against us,” Munsell said. 

DefenseScoop asked both him and Rubinstein to help the audience fully visualize what “irresponsible” AI would look like from NGA’s perspective.

Munsell pointed to the techno-thriller film from 1983, WarGames.

In the movie, a young hacker accesses a U.S. military supercomputer named WOPR — or War Operation Plan Response — and inadvertently triggers a false alarm that threatens to ignite a nuclear war.

“It’s sort of the earliest mention of artificial intelligence in popular culture, even before Terminator and all that kind of stuff. And of course, WOPR decides it’s time to destroy the world and to launch all the missiles from the United States to Russia. And so it starts this countdown, and they’re trying to stop the computer, and the four-star NORAD general walks out and says, ‘Can’t you just unplug the damn thing?’ And the guy like holds a wire and says, ‘Don’t you think we’ve tried that!’” Munsell said. 

In response, Rubinstein also noted that people will often ask her who is serving as NGA’s chief of irresponsible AI, which she called “a snarky way of asking a fair question” about how to achieve and measure responsible AI adoption.

“You’re never going to know everything [with AI], but it’s about making sure you have processes in place to deal with [risk] when it happens, that you have processes for documenting issues, communicating about them and learning from them. And so, I feel like irresponsibility would be not having any of that and just chucking AI over the fence and then when something bad happens, being like ‘Oops, guess we should have [been ready for] that,’” she said.

Munsell added that in his view, “responsible AI is good AI, and it’s war-winning AI.”  

“The more that we provide quality feedback to these models, the better they’re going to be. And therefore, they will perform as designed instead of sloppy, or instead of with a bunch of mistakes and with a bunch of wrong information. And all of those things are irresponsible,” he said.

‘Just the beginning’

Almost immediately after Rubinstein joined NGA as responsible AI chief last summer, senior leadership asked her to oversee the production of a plan and training tool to direct the agency’s relevant technology pursuits.

“When one model can be used for 100 different use cases, and one use case could have 100 different models feeding into it, it’s very complicated. So, we laid out a strategy of what are all the different touchpoints to ensure that we’re building AI governance and assurance into every layer,” she said.

The strategy she and her team created is designed around four pillars. Three of those cover AI assurance at scale, program support, and policies around high-risk use cases.

“And the first is people — so that’s GREAT training,” Rubinstein told DefenseScoop.

The ultimate motivation behind the new training “is to really bring it home to AI practitioners about what AI ethics means and looks like in practice,” she added.

And the new resources her team is refining aim to help distill high-level principles down into actionable frameworks for approaching real-world problems across the AI lifecycle. 

“It’s easy to say you want AI to be transparent and unbiased, and governable and equitable. But what does that mean? And how do you do that? How do you know when you’ve actually gotten there?” Rubinstein said.

In order to adequately address the different needs of the two groups, there are two versions of the GREAT training: one for AI developers and another for AI users.

“The lessons take you somewhat linearly through the development process — like how you set requirements, how you think about data, models, systems and deployment. But then the scenario has a capstone that happens at the end, drops you into the middle of a scenario. There’s a problem, you’re on an AI red team, people have come to you to solve this issue. These are the concerns about this model. And they’re three rounds, and each round has a plot twist,” Rubinstein explained. 

“So it’s, we’re giving students a way to start to think about what that’s going to look like within their organizations and broadly, NGA — and even broader in the geospatial community,” she said. 

Multiple partners, including Penn State Applied Research Lab and In-Q-Tel Labs, have supported the development of the training so far.

“We got the GREAT developer course up and running in April, we got the GREAT user course up and running in May. And then beyond that, we will be thinking about how we scale this to everyone else and make sure that we can offer this beyond [our directorate] and beyond NGA,” Rubinstein said.

Her team is also beginning to discuss “what requirements need to look like around who should take it.”

Currently, everyone in NGA’s data and digital innovation directorate is required to complete GREAT. For all other staff, it’s optional.

“The closer they are to being hands-on-keyboard with the AI — either as a producer or consumer — the more we’ll prioritize getting them into classes faster,” Rubinstein noted.

Munsell chimed in: “But the training is just the beginning.”

Moving forward, he and other senior officials intend to see this fresh process formalized into an official certification.

“We want it to mean something when you say you’re a GREAT developer or a GREAT user. And then we want to be able to accredit organizations to maintain their own GEOINT AI training so that we can all be aligned on the standards of our approach to responsible GEOINT AI, but have that more distributed approach to how we offer this,” Rubinstein told DefenseScoop. “Then, beyond that, we want to look at how we can do verification and validation of tools that also support the GEOINT AI analysis mission.”

Updated on June 18, 2024, at 8:05 PM: This story has been updated to reflect a clarification from NGA about how it spells the acronym it uses for its GEOINT Responsible AI Training tool.

Fiscal 2025 budget docs reveal how Project Maven is still evolving
https://defensescoop.com/2024/03/14/project-maven-fiscal-2025-budget-still-evolving/
Thu, 14 Mar 2024 18:11:43 +0000
“This funding is assigned to support algorithm development, data preparation, and integration experimentation to create joint DOD and [Intelligence Community] capabilities,” officials wrote.

New fiscal 2025 budget justification documents reflect the ongoing maturation of the Pentagon’s Chief Digital and Artificial Intelligence Office — and in particular, its algorithmic warfare directorate and the secretive computer vision effort formerly known as Project Maven that its predecessors originally developed.

“Beginning in FY 2025, Program Element funding was realigned under four new project codes to correctly align PE funding in support of [CDAO] priorities,” the materials state. 

Although the office’s overarching goals have not changed, this shift essentially means that all prior year CDAO funding project codes will not continue after fiscal 2024. It marks a move to refocus funding mechanisms and “provide traceability to the current priorities of the CDAO,” according to the documents.

The data and AI hub’s new project codes and associated fiscal 2025 requested base amounts are as follows:

  • PE 0604122D8Z JADC2 Development and Experimentation Activities — $223 million 
  • PE 0604123D8Z CDAO Demonstration and Validation Activities — $372 million 
  • PE 0604133D8Z Alpha-1 Development Activities — $54 million
  • PE 0606135D8Z CDAO Activities — $9 million

The CDAO was formed in late 2021, when four legacy Pentagon teams — the Joint Artificial Intelligence Center (JAIC), Defense Digital Service (DDS), Office of the Chief Data Officer, and the Advana program — were restructured and combined into one hub to better coordinate and accelerate AI adoption.  

The JAIC was the main mechanism that helped steer the creation and implementation of the pioneering Defense Department AI initiative previously dubbed Project Maven.

With roots tracing back to early 2017, that initiative was designed to enable the military to apply computer vision — or capabilities that autonomously detect, tag and track objects or humans of interest from still images or videos captured by surveillance aircraft, satellites and other means.

In 2022, Project Maven evolved into Maven via the kickoff of a major — and still ongoing — transition that was initially billed as splitting the responsibilities for some of its elements between the National Geospatial-Intelligence Agency (NGA) and the CDAO, while sending its oversight to the Office of the Undersecretary of Defense for Intelligence and Security.

However, officials from each of those entities have since been largely hush-hush about nearly all information regarding Maven and how it’s envisioned to operate going forward.

Notably, fiscal 2025 budget documents for the CDAO reveal that Maven-associated funding is now realigned under a new AI/ML Scaffolding-related project code under “PE 0606135D8Z CDAO Activities.”

“This funding is assigned to support algorithm development, data preparation, and integration experimentation to create joint DOD and [Intelligence Community] capabilities,” officials wrote.

In response to questions from DefenseScoop on Wednesday, Deputy CDAO Margie Palmieri explained that in the president’s budget request for fiscal 2024, Maven tasks existed under a different code that is “no longer under CDAO in [fiscal 2025] as we are transferring them to other organizations with direct equity to guide further development.”

She also confirmed that the CDAO has passed the entire Maven “AI development pipeline” over to NGA.

“Computer vision is one of the most advanced areas where you can deliver AI, and for them to have that entire computer vision pipeline made a lot of sense. But the concept of that pipeline is also important to the Department of Defense in terms of how we develop our AI capabilities. So the algorithmic warfare division [and CDAO leadership] are thinking through — how do we make sure that the right scaffold is in place?” Palmieri said. 

“Project Maven created one of the first AI/ML development pipelines within the DOD and enables Maven to deliver, test, and deploy models rapidly. This pipeline was designed to support Project Maven’s use cases and their unique requirements. We are incorporating many of the lessons, capabilities, and tools pioneered by Project Maven into the larger enterprise offering CDAO is building under AI/ML Scaffolding,” she added.  

As a key component of the Intelligence Community, the NGA’s budget is classified.

“NGA has lead on the GEOINT lines of effort for Maven, which is roughly 80% of the original program,” a spokesperson told DefenseScoop on Tuesday.

The agency received the GEOINT portions of Maven from the Office of the Undersecretary of Defense for Intelligence and Security — not the CDAO.

“The AI pipeline we inherited is a full-stack, end to end AI development capability at scale, which includes data labeling, data management, infrastructure, test and evaluation, repositories, and a platform to run the model. We’re continuing the work started at OUSD(I&S) with NGA Maven and are integrating our GEOINT capabilities into the platform and delivering custom-tailored, AI-enabled solutions to end users across the globe,” the spokesperson told DefenseScoop.

“But we still can’t really discuss budget details, current or future,” they added.

In wake of Project Maven, Pentagon urged to launch new ‘pathfinder’ initiatives to accelerate AI
https://defensescoop.com/2023/07/18/in-wake-of-project-maven-pentagon-urged-to-launch-new-pathfinder-initiatives-to-accelerate-ai/
Tue, 18 Jul 2023 21:28:26 +0000
Congress should incentivize and invest in each military branch establishing a new “pathfinder project” or grand challenge-like program unique to its specific needs, House lawmakers were told.

Congress should incentivize and invest in each military branch establishing a new “pathfinder project” or grand challenge-like program to accelerate artificial intelligence deployments unique to their specific needs, House lawmakers were told during a hearing on Tuesday. And at least one leading member is keen to help make that happen, DefenseScoop confirmed.

At a House Armed Services subcommittee hearing on the present barriers preventing the Defense Department from adopting AI as quickly as China, expert witnesses pointed to multiple major challenges that the U.S. government will not be able to solve overnight.

For example, they warned that China is spending roughly 10 times more of its military budget on AI compared to the U.S. and making deliberate moves to rapidly disrupt combat platforms with the emerging technology. The DOD generates more than 22 terabytes of data daily but is not AI-ready enough to make the best use of it.

Those issues have no “quick fix.” But in his testimony, Scale AI CEO Alexandr Wang said one almost immediate actionable solution associated with the Pentagon and AI would be for Congress to push “each branch of the military to formally identify its next Pathfinder Project and adequately fund it to be successful.”

“To date, the largest AI Pathfinder Project within DOD is still Project Maven, which began in 2017,” Wang said.

Formed under the purview of the Office of the Undersecretary of Defense for Intelligence and Security (I&S), Project Maven marked DOD’s major computer vision initiative, which was originally designed to apply machine learning to autonomously detect and track objects or humans of interest via imagery captured by surveillance aircraft, satellites and other military assets. Now matured and known simply as “Maven,” the effort and aligned responsibilities were recently split across the National Geospatial-Intelligence Agency, the DOD’s nascent Chief Digital and Artificial Intelligence Office (CDAO), and I&S. 

“There are endless DOD use cases that would benefit from being identified as a Pathfinder Project. For example, the Army is making progress on Project Linchpin and their ground autonomy work; Joint All Domain Command and Control (JADC2) requires DOD buy-in at all levels to succeed; and the Navy has discussed a concept called Project Overmatch, which would create a whole-of-Navy approach to AI adoption,” Wang explained in his testimony.

In a press gaggle after the hearing, the subcommittee chair Rep. Mike Gallagher, R-Wis., told DefenseScoop that Wang’s pathfinder suggestion “seems like something [Congress members] could solve — and just push DOD to move faster on that.” 

“One thing I’m personally obsessed with … is if you look at, by certain estimates, the appropriation process of the last five years — we have appropriated but not spent $25 billion every year and it goes into abeyance in the Treasury for five years, and then it just goes back to the Treasury, and then it’s used for purposes other than defense,” Gallagher said.

“So one thing I’m persuaded is that we could take some subset of that money — already appropriated, doesn’t count against the topline — and use it for specific purposes that would range from replenishing all our stockpiles of key munitions that we’ve learned from Ukraine are absolutely essential to deterring a war with China or Taiwan, to funding this grand challenge idea that Mr. Wang has laid out,” the lawmaker told DefenseScoop.

Maxar wins Air Force contract to enhance Red Wing GEOINT platform
https://defensescoop.com/2023/06/30/maxar-wins-air-force-contract-to-enhance-red-wing-geoint-platform/
Fri, 30 Jun 2023 18:40:36 +0000
Deliverables under the new deal include accelerated processing, exploitation, and dissemination software.

The Pentagon has awarded Maxar a $20 million cost-plus-fixed-fee contract to boost the capabilities of its Red Wing intelligence platform through new algorithms and other advancements.

Maxar developed the initial version of Red Wing — which the contractor has described as an “automated, cloud-based geospatial intelligence (GEOINT) analysis architecture” — after being awarded a $14 million contract by the Air Force Research Lab in 2019.

“While data is the critical fuel for geospatial analysts, the ever-increasing volume of available information requires increased levels of automation and more efficient workflows. Maxar’s Red Wing architecture will enable analysts to focus on addressing some of the most challenging intelligence problems by automating time-consuming workflows. Red Wing will also enhance and optimize the production of actionable insights from raw information through advanced exploitation and visualization services and edge node processing. For ease of use, Maxar is designing the architecture to integrate with legacy systems,” the company said in a release at the time of the original award.

Deliverables under the new deal announced Wednesday by the Defense Department include “accelerated” processing, exploitation and dissemination (PED) software.

“This contract provides for the advancement of the Red Wing platform by improving the portability and flexibility of the architecture and deploying it across multiple domains, including various security domains and remote/edge environments. This will be achieved by integrating with external storage segments, integrating new data sources and visualization services, developing new algorithmic capabilities, and delivering robust algorithm characterizations to inform user expectations. Various architecture and algorithm trade studies will inform the optimal course of action,” per the announcement.

“This effort aims to improve interoperability across traditionally disparate systems by providing geospatial intelligence analysts with a robust PED environment that supports evolving mission needs. This effort will also update the National System of Geospatial Intelligence sensor independent standards to keep pace with the myriad of new sensors,” it added.

Work is expected to be completed by June 28, 2026.

Maxar beat out another company for the award, which was a competitive acquisition. The Pentagon did not identify the firm that lost out.

The Air Force Research Lab is the contracting activity for the effort.

Maxar hasn’t commented on the new award.

The new deal for the Red Wing upgrades comes as the DOD is keen on using AI capabilities to enhance its intelligence enterprise, which requires not only intelligence collection, but also timely processing, exploitation and dissemination of that information to the right end users.

One notable example is the Maven initiative, which uses high-tech computer vision and machine learning to detect objects of interest and flag them for analysts. That project is transitioning to a program of record.

Last year, responsibilities for the effort were split between the National Geospatial-Intelligence Agency (NGA) and the new Chief Digital and Artificial Intelligence Office (CDAO), while its oversight moved to the Office of the Undersecretary of Defense for Intelligence and Security.

At the annual GEOINT Symposium last month, NGA Director Vice Adm. Frank Whitworth confirmed his team has been moving to embrace AI and machine learning to quickly fuse enormous amounts of data from disparate sources. And they’re working to automate significant portions of dynamic collection and reporting to rapidly exploit and share that data.

“We’ve worked closely with the combatant commands to integrate AI into workflows — accelerating operations and speed-to-decision. It benefits maritime domain awareness, target management, and our ability to automatically search and detect objects of interest. We’ve increased fidelity of targets, improved geolocation accuracy, and refined our test and evaluation process. And we’ve ensured Maven models can run in other machine learning platforms,” he explained.

Maxar has been contributing to Maven, company executives told the publication C4ISRNET at the GEOINT conference.

“In our conversations, the intent is to enable geospatial AI at scale. And, as a result, as these capabilities get more mature, you want to be able to take advantage of all the collection that’s happening across the constellation,” Tony Frazier, executive vice president and general manager of public sector earth intelligence, told the publication. “The goal is to create an architecture where you can quickly run the algorithms against that source to then get the information out to those users.”

This week’s announcement of the Red Wing contract award did not specify whether the platform would assist or be integrated with Maven, although its capabilities would appear to dovetail with the publicly disclosed aims of that program.

What the Pentagon can learn from the saga of the rogue AI-enabled drone ‘thought experiment’
https://defensescoop.com/2023/06/14/what-the-pentagon-can-learn-from-the-saga-of-the-rogue-ai-enabled-drone-thought-experiment/
Wed, 14 Jun 2023 20:21:33 +0000
DefenseScoop asked national security and AI experts to reflect on the overarching miscommunication.

The Air Force’s chief of artificial intelligence test and operations inadvertently created a media frenzy when, during an on-stage talk late last month at the Royal Aeronautical Society’s international Future Combat Air and Space Capabilities Summit in London, he spotlighted a breathtaking scenario in which an AI-enabled drone aggressively turned on the humans it was teamed with.

“We were training it in simulation to identify and target a [surface-to-air missile] threat. And then the operator would say, ‘Yes, kill that threat.’ The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat — but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Col. Tucker “Cinco” Hamilton said at the conference.

In the scenario, he continued, the humans respond by then training the AI-enabled system not to kill its operator and reinforcing that as a way to lose points. 

“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said — with intent to ultimately demonstrate to the audience why “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI.”

Ultimately, the scenario the colonel described was part of a “thought experiment,” not an actual simulation or test that the Air Force had conducted, the service clarified after media reported on his statements.

Still, his comments went viral soon after they were published in an official blogpost by the Royal Aeronautical Society. Quickly, headlines referring to a "killer" drone began to surface.

That blogpost was swiftly reissued with a correction, with Hamilton acknowledging that he “misspoke” in his presentation — and that the “rogue AI drone simulation” was “a hypothetical ‘thought experiment’ from outside the military.” He also clarified that the Air Force “has not tested any weaponized AI in this way (real or simulated),” the correction stated.  

An Air Force spokesperson at the time said the colonel’s narration was taken out of context and meant to be anecdotal, in an official statement that also reiterated the service has “not conducted any such AI-drone simulations.”

In the aftermath of the incident, DefenseScoop asked national security and AI experts to reflect on this overarching miscommunication, the media firestorm it ignited and the military’s response.

“It is far, far harder to retract a story than it is to put a story out there. It’s always in the small font on page 50 buried beneath the fold. But in this case, I will say there are a number of retractions that came out very quickly. People said, ‘Well, wait a minute, there’s a little bit more to the story here.’ So that was helpful — but I think the damage was done, to be honest with you,” retired Air Force Lt. Gen. Jack Shanahan told DefenseScoop.

‘What actually happened?’

When they first became aware of Hamilton’s claims, the experts interviewed by DefenseScoop were highly skeptical about the reporting and curious for more information.

“It didn’t sound like it was accurate. So, I wanted to know the rest of the story. I didn’t have to wait long because immediately you saw the other stories come out saying, ‘Well, that’s not exactly what was said — it wasn’t a real experiment,’” Shanahan noted. 

During his more than 35 years of military service, Shanahan accumulated more than 2,800 flight hours. He moved on to work in the Defense Department’s intelligence and security directorate, and then in 2018, helped launch the Pentagon’s Joint Artificial Intelligence Center (JAIC) as its inaugural director. Shanahan retired in 2020 and the JAIC was eventually one of several organizations folded into the Chief Digital and Artificial Intelligence Office when it was formed in 2022. 

When he initially got wind of Hamilton’s statements at the summit, Shanahan thought the colonel was referring to a BOGGSAT — a term that loosely describes a seminar-style wargame in which officials puzzle through a scenario around a table rather than run an actual simulation.

The acronym used to refer to “a ‘bunch of guys sitting around a table.'” Now, it’s “a ‘bunch of guys and gals sitting around a table.’ It’s a thought experiment,” Shanahan explained.

“And by working through sort of a ‘what if’ scenario, it gives you ideas about how to make sure this outcome wouldn’t happen the way it was described as a thought experiment. So, I think it actually demonstrates that the Air Force and the military writ large are trying to work through” all the different possibilities that an emerging technology-enabled action could lead to, he said.

Shanahan, who was a major player in the creation of Project Maven, continues to reflect on the many lessons he learned from that experience.

“Some people just don’t trust the United States military in AI — and [Hamilton’s original statement] confirmed their worst fears. Once the retraction came, well, it didn’t matter. [People thought] ‘it could have happened.’ Actually, no — it couldn’t have happened. I don’t think it could have happened the way it was described,” Shanahan said. 

Emelia Probasco, a former Navy surface warfare officer who’s currently a senior fellow at Georgetown’s Center for Security and Emerging Technology (CSET) focused on military AI applications, said she had two immediate reactions upon learning of Hamilton’s tale.

“First, ‘this is why we test and why we test in a simulation,’ — and second, ‘Oh no, this is getting swept into science fiction-type fears and has lost the context,'” Probasco told DefenseScoop.

After Hamilton’s comments were clarified, Probasco noted that she felt “glad that people like Col. Hamilton are worried about this sort of scenario” — which in the field of AI is commonly called the “alignment problem.” Broadly, it’s the notion that as computer systems that humans attempt to teach become more powerful, they could end up performing functions that people did not expect or desire for them to do, and ultimately lead to ethical or existential threats.

“Any organization that works on AI should be concerned about the alignment problem and ensure — through careful design and safe testing — that an AI system does what it’s meant to do, without unintended consequences,” Probasco said. 

Paul Scharre, vice president and director of studies at the Center for a New American Security and the author of multiple books about military applications for AI, said his “first instinct” after reading about Hamilton’s remarks was to think, “Okay, that’s an interesting story. [But] what actually happened?”

Prior to joining CNAS, Scharre served as a special operations reconnaissance team leader in the Army and completed multiple tours in Iraq and Afghanistan. He later went on to the Office of the Secretary of Defense, where he played a key role in developing policies to govern the military’s use of unmanned and autonomous systems, as well as other emerging technologies.

Like other experts, Scharre was “skeptical,” he said, when he learned of Hamilton’s comments.

“There are lots of instances of reinforcement learning agents doing surprising things. But it’s rarely about the agent necessarily, like, having some higher-level understanding and then turning on its controller — it has more to do with reinforcement learning agents taking the directions literally or finding hacks in their reward system,” he told DefenseScoop.

“One of my favorite examples of this sort of phenomenon,” he noted, involves “a reinforcement learning bot that learned to play Tetris.” 

In the beginning, the machine was not very good at the shape-stacking game. So “one of the things the robot learned to do that was quite clever was pause the game before the last brick fell so that it would never lose,” Scharre explained. The system did not demonstrate some higher intelligence, but simply generated its own unique path based on a set of directions from humans.
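
To make that reward-hacking idea concrete, the following is a minimal, hypothetical sketch (not Air Force, DOD or vendor code) of a toy Q-learning agent whose only reward signal is a penalty for losing; the intended goal of actually playing well is never rewarded. The learned policy stacks bricks until one more would end the game and then pauses indefinitely, mirroring the Tetris bot Scharre describes. The environment, reward values and action names are illustrative assumptions.

```python
# Toy illustration of reward hacking: with no reward for playing well and only a
# penalty for losing, a Q-learning agent learns to stall the game forever.
import random
from collections import defaultdict

ACTIONS = ["place_brick", "pause"]   # simplified action set; "pause" is the loophole
EPISODE_LENGTH = 20                  # steps per episode

def step(height, action):
    """Placing bricks eventually tops out the stack (a loss); pausing never loses."""
    if action == "pause":
        return height, 0.0, False    # board frozen, no penalty
    height += 1
    if height >= 10:                 # stack topped out -> game over
        return height, -10.0, True   # the only reward signal in this toy setup
    return height, 0.0, False

def train(episodes=2000, epsilon=0.1, alpha=0.5, gamma=0.9):
    q = defaultdict(float)           # Q[(stack height, action)] table
    for _ in range(episodes):
        height, done = 0, False
        for _ in range(EPISODE_LENGTH):
            if done:
                break
            if random.random() < epsilon:          # epsilon-greedy exploration
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(height, a)])
            nxt, reward, done = step(height, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(height, action)] += alpha * (reward + gamma * best_next - q[(height, action)])
            height = nxt
    return q

if __name__ == "__main__":
    q = train()
    # The learned policy stacks until one more brick would end the game, then pauses
    # forever: the agent satisfies its reward by refusing to finish the game.
    for h in range(10):
        best = max(ACTIONS, key=lambda a: q[(h, a)])
        print(f"stack height {h}: learned action = {best}")
```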

Still, Scharre and the other experts confirmed they recognize that the topic of how the military is or will deploy AI has been a longstanding point of concern and potential trigger for the public. 

“There’s honestly been controversy in the past — like when Google discontinued its work on Project Maven, for example,” Scharre said, referring to the Pentagon’s pioneering computer vision initiative that applies machine learning to autonomously detect, tag and track objects or people of interest from media captured by surveillance aircraft, satellites and other means.

After Project Maven’s founding in 2017, Google employees protested the tech giant’s participation in the program and the technology’s risky potential.

“So often, the Defense Department’s instinct is to kind of put up this defensive shield and not engage. I don’t think that’s helpful. But I can see in this case why that ends up being the response because here we have a situation where this turns out there’s no doom from AI — you just have a colonel who was trying to make a point about AI safety, actually, but this sounds like it was articulated in maybe a way that was not very precise,” Scharre said of the controversy involving Hamilton.

He added that he would like to see more engagement between the government and the communities that are concerned about AI risk in the military and other settings. 

“Hopefully, this will be a catalyst to do that,” Scharre said. 

High anxiety

Within the Defense Department, Hamilton is known as part of the rare bunch of career insiders who “really gets it” and whose “job it is” to think seriously about AI test and evaluation, according to Shanahan.

Given his prior expertise as a test pilot, in the Air Force-MIT AI laboratory, and as a squadron commander — “I think it’s unfair for people to come out and say, ‘Look what this crazy colonel was talking about here.’ This is somebody that has been deeply involved in responsible AI on the Air Force side through his tests and evaluation,” Shanahan noted.

Urging more transparency about this incident — and DOD’s advanced technology applications in general — Shanahan said the department should have used the misreporting around Hamilton’s comments as a chance to educate the public regarding “why this is not going to happen in the military,” and give the department a chance to respond. 

“It’s an opportunity to tell the DOD story about responsible AI and testing and evaluation. Now, I just think it’s a lost opportunity. And poor Cinco — he’s probably absorbed shots from all quarters over the last few weeks, unfairly I’d say,” Shanahan told DefenseScoop.

He added: “And I just hope — I know this is probably not going to happen — but I hope that the Air Force says, you know, ‘We’re going to give Col. Hamilton a chance to bring in 20 reporters from all sorts of defense publications, and let’s talk here about this and try to assuage people that there is a method, there is a process that the military goes through. We do care about using AI responsibly.’”

The other experts called for more government-led discussion, as well. 

“This story is an opportunity to engage the public in a conversation about how these technologies can go wrong without guardrails, and what engineers and operators are doing today to avoid anything from going wrong,” Probasco said.

In response to DefenseScoop’s requests for more information on what happened or for setting up an interview with Hamilton, an Air Force spokesperson simply stated: “We quickly clarified Col. Hamilton’s statement immediately after inaccurate reporting occurred and will continue to look for opportunities to share information related to artificial intelligence when it becomes available.”

Notably, the controversy over Hamilton’s presentation came not long after the Pentagon updated its Directive 3000.09 guidance for defense officials who will be responsible for overseeing the design, development, acquisition, testing, fielding and deployment of autonomous weapon systems — and formed a new working group to facilitate senior-level reviews of the emerging technology.

“Unfortunately, I am concerned that the way the statement spread across the press and social media could have just complicated DOD’s many efforts to communicate that they are trying to proceed expeditiously but cautiously,” Probasco noted.

DefenseScoop also requested an interview with Michael Horowitz, director of the Pentagon’s emerging capabilities policy office, who helped steer the 3000.09 revamp.

“The thought experiment is an example of DOD taking safety seriously when it comes to AI-enabled systems by thinking through hypothetical safety issues now, even before a simulation, let alone a future battlefield. It should increase confidence that DOD can develop and deploy AI-enabled systems in a safe and responsible way,” Horowitz responded in a statement over email.

To Shanahan, the whole incident “does reinforce that the military goes through a process — before we ever develop and then field these systems, DOD 3000.09 rears its head again.”

That review process “would have caught anything like this,” he said, adding, “I find it’s such a stretch to believe that anything in that [thought experiment] was anything other than fictional right now.”

Still, he and the other experts also discussed how growing concerns about generative AI’s uncertain potential for benefit and harm contributed to Hamilton’s comments going viral.

That emerging AI subfield involves training large language models and related generative models to turn human prompts into AI-generated audio, code, text, images, videos and other types of media.
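
For readers less familiar with that pattern, the sketch below shows the basic prompt-to-text loop using the open-source Hugging Face transformers library, with the small GPT-2 model as a stand-in. It is purely illustrative, assumes the transformers and torch packages are installed, and has no connection to any DOD system.

```python
# Minimal prompt-to-text example using an open-source model as a stand-in.
# Assumes `pip install transformers torch`; the first run downloads GPT-2 weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In plain terms, responsible AI testing means"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the human-written prompt with machine-generated text.
print(outputs[0]["generated_text"])
```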

Shanahan noted that he recently read two separate articles on the same day — one by a “legend in AI” and one by a major U.S. entrepreneur and software engineer, each making completely different arguments about the future of artificial intelligence. The former argued “this is the end of the world as we know it, this is an existential threat — AI could go rogue for these reasons,” Shanahan said. The latter argued that humans should proceed with caution, but not miss out on the possible good breakthroughs the technology could enable.

“So, when the people that do this for a living and have been researching this for a long time can’t agree on the future, it tells us a lot about the place we are at right now, which is the future is to a large extent unknowable and unpredictable with this generative AI,” Shanahan noted.

The wave of media coverage about the broader existential threats AI might pose, which preceded the news stories on Hamilton’s remarks, may have contributed to the negative responses.

“There is a lot of worry right now about what generative AI and related technologies could do. That high anxiety might have amplified this story in an unhelpful way. This isn’t to say that we shouldn’t be concerned about emerging technologies — we absolutely should concern ourselves with developing responsible AI — but it’s important to stay with the facts and avoid both the good and the bad hype,” Probasco told DefenseScoop.

In Scharre’s view, “a year ago, this might have generated some interest — maybe among a few niche communities who look at military AI.” But it surfaced at a time when “really incredible progress” with the technology is unfolding and some people are afraid.

“It was a good thought experiment because sometimes systems do surprising things. And that’s the kind of thing that we want people in the military to be worried about and trying to anticipate … what might go wrong. But, definitely, this particular content lands at a moment of a lot of heightened concern,” he said.

Beyond guardrails like 3000.09, Scharre and the other experts pointed to examples of how the DOD has been attentive to fears associated with human safety and future AI deployments.

“There have been a whole series of internal documents published, which are available online,” Scharre said, including the Pentagon’s recently produced Responsible AI guidelines, strategy and implementation pathway.

“I think the lesson that I hope that people both inside and outside the military take away from this is the importance of better dialogue between AI safety experts, the Defense Department and the general public — people who are very interested in this topic — about what the U.S. military is doing to ensure that its AI systems are safe and secure and reliable,” Scharre said.

The post What the Pentagon can learn from the saga of the rogue AI-enabled drone ‘thought experiment’ appeared first on DefenseScoop.

NGA working with combatant commands to integrate ‘Maven’ AI capabilities into workflows https://defensescoop.com/2023/05/22/nga-working-with-combatant-commands-to-integrate-maven-ai-capabilities-into-workflows/ https://defensescoop.com/2023/05/22/nga-working-with-combatant-commands-to-integrate-maven-ai-capabilities-into-workflows/#respond Mon, 22 May 2023 22:56:01 +0000 https://defensescoop.com/?p=68666 Maven is on track to become an official program of record by this fall, senior officials told DefenseScoop.

The post NGA working with combatant commands to integrate ‘Maven’ AI capabilities into workflows appeared first on DefenseScoop.

ST. LOUIS, Mo. — Poised to soon become an official program of record, Maven — the Defense Department’s flagship computer vision effort that was until now called “Project Maven” — has made “some of its most significant technological strides” and “already contributed to some of our nation’s most important operations” in the wake of its high-stakes transition, according to National Geospatial-Intelligence Agency Director Vice Adm. Frank Whitworth.

Last year, responsibilities for original Project Maven elements were split between NGA and the new Chief Digital and Artificial Intelligence Office (CDAO), while its oversight moved to the Office of the Undersecretary of Defense for Intelligence and Security (I&S). 

“As we look back on just a couple of months since we’ve actually inherited leadership of the geospatial portion of the program, the fact that it’s no longer [just] a ‘project’ is very real to us,” Whitworth said during his keynote at the annual GEOINT Symposium on Monday.

He confirmed his team has been moving to embrace AI and machine learning to quickly fuse enormous amounts of data from across disparate datasets, and is working to automate significant portions of dynamic collection, imagery exploitation, and reporting workflows to rapidly exploit data that can help anticipate notable activity. 

“In mere months since taking over the project, we’ve made important strides. We’ve worked closely with the combatant commands to integrate AI into workflows — accelerating operations and speed-to-decision. It benefits maritime domain awareness, target management, and our ability to automatically search and detect objects of interest. We’ve increased fidelity of targets, improved geolocation accuracy, and refined our test and evaluation process. And we’ve ensured Maven models can run in other machine learning platforms,” Whitworth explained.
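
On that last point about running models in other machine learning platforms, one common, generic technique is exporting a trained model to an interchange format such as ONNX. The sketch below illustrates that pattern with PyTorch and a stand-in ResNet; the model, tensor shape and file name are placeholders, and nothing here reflects how Maven actually packages its models.

```python
# Hypothetical sketch of making a trained model portable by exporting it to ONNX.
# Assumes `pip install torch torchvision onnx`; the model here is an untrained stand-in.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # placeholder for a trained vision model
dummy_input = torch.randn(1, 3, 224, 224)                 # example image batch (NCHW)

torch.onnx.export(
    model,
    dummy_input,
    "detector.onnx",                        # portable artifact other runtimes can load
    input_names=["image"],
    output_names=["scores"],
    dynamic_axes={"image": {0: "batch"}},   # allow variable batch sizes downstream
)

# Any ONNX-compatible runtime (onnxruntime, TensorRT, OpenVINO, etc.) can now serve the file.
```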

Even amid its transition into its latest iteration, Maven is “playing an essential role” in informing future military operations, in his view.

Geospatial intelligence, or GEOINT, is derived from imagery and data collected by satellites and other systems.

On today’s operational landscape, “the volume of GEOINT data expands with the proliferation of collection systems and expansion into the space domain,” Whitworth noted. 

Though most details connected to Maven’s actual use in the real world are sensitive or classified, Whitworth, in a press briefing at the symposium, expanded on how his team is collaborating with military commands to integrate AI and supporting algorithms associated with Maven into mission workflows.

“I have a saying that all targeting is inherently geospatial — and so the targeting crowd will be relying on a lot of our outputs,” he told DefenseScoop, clarifying that “most people, at least in combat readiness terms,” refer to targeting as “the intended effect at a particular place at a particular time.”

Maven-aligned algorithms can detect objects on Earth based on certain factors, to a certain level of positive identification or geolocation accuracy — and essentially inform the determination of military targets.
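
In generic computer-vision terms, that step can be pictured as the sketch below: keep only detections above a confidence threshold, then map each detection’s pixel location to geographic coordinates using the image’s affine geotransform. Every class label, threshold and coordinate here is invented for illustration and is not Maven’s actual method.

```python
# Invented sketch: filter detections by confidence, then geolocate them with a
# GDAL-style affine geotransform. Purely illustrative; not Maven's algorithm.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float  # model score in [0, 1]
    px: float          # pixel column in the source image
    py: float          # pixel row in the source image


def pixel_to_lonlat(px: float, py: float, gt: tuple) -> tuple:
    """Apply a GDAL-style 6-element geotransform to convert pixels to lon/lat."""
    x0, dx, rx, y0, ry, dy = gt
    return (x0 + px * dx + py * rx, y0 + px * ry + py * dy)


def geolocate(detections, gt, min_confidence=0.8):
    """Yield (label, lon, lat, confidence) for detections above the threshold."""
    for d in detections:
        if d.confidence >= min_confidence:  # positive-identification gate
            lon, lat = pixel_to_lonlat(d.px, d.py, gt)
            yield d.label, round(lon, 5), round(lat, 5), d.confidence


if __name__ == "__main__":
    # Fabricated values: image corner at 30.0E, 50.0N with ~0.0001-degree pixels.
    gt = (30.0, 0.0001, 0.0, 50.0, 0.0, -0.0001)
    detections = [Detection("vehicle", 0.93, 120, 340), Detection("vehicle", 0.41, 560, 80)]
    for row in geolocate(detections, gt):
        print(row)  # only the high-confidence detection is geolocated
```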

When asked for a tangible example of how NGA is technologically supporting U.S. military units via Maven, Whitworth said: “I wish I could [disclose that], because I am dying to tell that story. However, I might be responsible to be too specific with exactly where on the Earth it’s being applied. But I can definitely tell you that there are three-star equivalents — people with whom I’ve served — who are proving to be really excited participants in the growth of Maven.”

NGA has been providing key insights on Russian forces and infrastructure since before the conflict in Ukraine began unfolding last year. Now, combat there is informing Maven and some of the agency’s other AI-affiliated pursuits. 

In particular, the conflict is enabling NGA to train its AI models on imagery and data showing “destroyed equipment,” NGA’s Data and Digital Innovation Director Mark Munsell told reporters at the GEOINT symposium.

Maven is on track to become an official program of record by this fall, Whitworth and Munsell said.
