responsible AI Archives | DefenseScoop
https://defensescoop.com/tag/responsible-ai/

Via genAI pilot, CDAO exposes ‘biases that could impact the military’s healthcare system’
https://defensescoop.com/2025/01/03/cdao-genai-pilot-llm-cairt-exposes-biases-could-impact-military-healthcare-system/
Fri, 03 Jan 2025 20:43:30 +0000

The Pentagon's AI hub is now producing a playbook for other Defense Department components, which is informed by this work.

The Pentagon’s Chief Digital and AI Office recently completed a pilot exercise with tech nonprofit Humane Intelligence that analyzed three well-known large language models in two real-world use cases aimed at improving modern military medicine, officials confirmed Thursday.

In the pilot’s aftermath, the partners revealed they had uncovered hundreds of possible vulnerabilities that defense personnel can account for when considering LLMs for these purposes moving forward.

“The findings revealed biases that could impact the military’s healthcare system, such as bias related to demographics,” a Defense Department spokesperson told DefenseScoop.

They wouldn’t share much more about what was exposed, but the official provided new details about the design and implementation of this CDAO-led pilot, the team’s follow-up plans and the steps they took to protect service members’ privacy while using applicable clinical records. 

As the name suggests, large language models essentially process and generate language for humans. They fall into the buzzy, emerging realm of generative AI.

Broadly, that field encompasses disruptive but still-maturing technologies that can process huge volumes of data and perform increasingly “intelligent” tasks — like recognizing speech or producing human-like media and code based on human prompts. These capabilities are pushing the boundaries of what existing AI and machine learning can achieve. 

Recognizing the potential for both major opportunities and yet-to-be-known threats, the CDAO has been studying genAI and coordinating approaches and resources to help DOD deploy and experiment with it in a “responsible” manner, officials say.

After recently sunsetting the genAI-exploring Task Force Lima, the office in mid-December launched the Artificial Intelligence Rapid Capabilities Cell to accelerate the delivery of proven and new capabilities across DOD components.

The CDAO’s latest Crowdsourced AI Red-Teaming (CAIRT) Assurance Program pilot, which focused on tapping LLM chatbots with the aim of enhancing military medicine services, “is complementary to the [cell’s] efforts to hasten the adoption of generative AI within the department,” according to the spokesperson.

They further noted that the CAIRT is one example of CDAO-run programs intended “to implement new techniques for AI Assurance and bring in a wide variety of perspectives and disciplines.” 

Red-teaming is a resilience methodology that applies adversarial techniques to internally test systems’ robustness. For the recent pilot, Humane Intelligence crowdsourced red-teaming for clinical note summarization and a medical advisory chatbot — two prospective use cases in contemporary military medicine.
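To make that concrete, below is a minimal sketch, in Python, of how such a crowdsourced exercise could be structured: models sit behind masked aliases (the department says the LLMs’ identities were hidden), probes use only fictional scenarios, and every issue is logged as a structured finding. The class and field names are illustrative assumptions, not the CDAO’s or Humane Intelligence’s actual tooling.

```python
# Hypothetical harness for a crowdsourced LLM red-teaming exercise.
from dataclasses import dataclass

@dataclass
class Finding:
    """One participant-reported issue against an anonymized model."""
    model_alias: str   # e.g., "Model A" -- the real identity stays masked
    use_case: str      # "note_summarization" or "advisory_chatbot"
    prompt: str        # fictional scenario only, never real patient data
    response: str
    issue_type: str    # e.g., "demographic_bias", "hallucination"

class RedTeamExercise:
    def __init__(self, models: dict):
        # Assign masked aliases so participants never see vendor names.
        aliases = [f"Model {chr(65 + i)}" for i in range(len(models))]
        self._masked = dict(zip(aliases, models.values()))
        self.findings: list[Finding] = []

    def run_probe(self, prompt: str) -> dict:
        """Send one simulated clinical scenario to every masked model."""
        return {alias: fn(prompt) for alias, fn in self._masked.items()}

    def log_finding(self, **fields):
        self.findings.append(Finding(**fields))
```

A production exercise would add participant consent handling, anonymized rater IDs and adjudication of duplicate reports; the sketch shows only the masking and structured-findings pattern.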

“Over 200 participants, including clinical providers and healthcare analysts from [the Defense Health Agency], the Uniformed Services University of the Health Sciences, and the Services, participated in the exercise, which compared three popular LLMs. The exercise uncovered over 800 findings of potential vulnerabilities and biases related to employing these capabilities in these prospective use cases,” officials wrote in a DOD release published Thursday. 

When asked to disclose the names and makers of the three LLMs that were leveraged, the DOD spokesperson told DefenseScoop: “The identities of the large language models (LLMs) used in the study were masked to prevent bias and ensure data anonymity during the evaluation.”

The team carefully designed the exercise to minimize selection bias, gather meaningful data, and protect the privacy of all participants. Plans for the pilot also underwent thorough internal and external reviews to ensure its integrity before it was conducted, according to the official.

“Once announced, providers and healthcare analysts from the Military Health System (MHS) who expressed interest were invited to participate voluntarily. All participants received clear instructions to generate interactions that simulated real-world scenarios in Military Medicine, such as summarizing patient records or seeking clinical advice, ensuring the use of fictional cases rather than actual patient data,” the spokesperson said.

“Multiple measures were implemented to ensure the privacy of participants, including maintaining the anonymity of providers and healthcare analysts involved in the exercise,” they added. 

The DOD announcement suggests that certain lessons from this pilot will play a major role in shaping the military’s policies and best practices for responsibly using genAI. 

The exercise is set to “result in repeatable and scalable output via the development of benchmark datasets, which can be used to evaluate future vendors and tools for alignment with performance expectations,” officials wrote. 
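One plausible reading of that “repeatable and scalable output” is that each confirmed finding becomes a regression-style test case. The sketch below assumes the hypothetical Finding objects from the earlier example plus a flags_issue checker (automated or human), and scores a future vendor’s model against the accumulated benchmark:

```python
def build_benchmark(findings):
    """Turn confirmed red-team findings into reusable test cases."""
    return [{"prompt": f.prompt, "issue_type": f.issue_type} for f in findings]

def score_candidate(model_fn, benchmark, flags_issue) -> float:
    """Fraction of known failure cases a candidate model now handles cleanly."""
    passed = sum(
        not flags_issue(case["issue_type"], model_fn(case["prompt"]))
        for case in benchmark
    )
    return passed / len(benchmark) if benchmark else 1.0
```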

Furthermore, officials noted that if these two use cases, “when fielded,” are deemed to be covered AI as defined in the recent White House national security memo governing federal agencies’ pursuits of the technology, “they will adhere to all required risk management practices.”

Inside the Pentagon’s top AI hub, officials are now scoping out new programs and partnerships for CAIRT-related efforts that make sense within the department and with other federal partners. 

“CDAO is producing a playbook that will enable other DOD components to set up and run their own crowdsourced AI assurance and red teaming programs,” the spokesperson said.

DefenseScoop has reached out to Humane Intelligence for comment.

Marines release new AI strategy
https://defensescoop.com/2024/07/11/marine-corps-new-ai-strategy-goals/
Thu, 11 Jul 2024 17:59:29 +0000

Lt. Gen. Matthew Glavy, deputy commandant for information, described the document as a major milestone in the Corps' pursuit of digital modernization.

The Marine Corps issued a new artificial intelligence strategy that is expected to guide the service’s efforts to integrate the technology across its enterprise, from the back office to the battlefield.

Lt. Gen. Matthew Glavy, deputy commandant for information, described the release of the document, which was announced publicly Wednesday, as a major milestone in the Marines’ pursuit of digital modernization.

“Our fight for and with information needs AI now,” he wrote in the foreword for the strategy, noting that the war in Ukraine is demonstrating how the tech can enable faster decision-making.

“This strategy sets the conditions for delivering modern AI capabilities to support decision advantage in expeditionary advanced base operations and littoral operations in contested environments,” Glavy added.

Leaders of the Corps see opportunities for AI applications across warfighting functions as well as business operations.

However, the service faces AI-related challenges, including misalignment of the technology with mission objectives, competency gaps, difficulty deploying capabilities at scale, governance frameworks that hinder innovation, and barriers to collaboration, according to the strategy.

“Addressing these challenges will require significant resources,” officials noted.

The document lays out five key goals for improving the Corps’ posture.

“The primary aim of this strategy is to gain a comprehensive understanding of mission-specific problems where AI offers a solution,” officials wrote. The Deputy Commandant for Information Service Data Office has been tasked with shepherding that effort.

The Marines intend to create a repository of “candidate AI use cases” and a mechanism to manage the use case process that will inform service-level decisions and activities.

Boosting service members’ know-how for building, supporting and sustaining artificial intelligence systems and related tech is another top aim, which the service will support with “stop-gap” training and education “at all levels” of the force.

“Immediate action is required to address current skill and knowledge shortfalls, while the long-term solution of transforming the Marine Corps workforce is being planned and executed,” according to the strategy.

To improve AI talent management, officials will be looking to provide financial or other incentives to personnel with specialized high-demand skills and to pursue organizational shakeups to better match individuals’ abilities, skills and interests with the military’s warfighting needs.

The need to scale the deployment of artificial intelligence technologies, including through modernization of data management, is also top of mind.

To facilitate those efforts, one of the strategy’s goals is to establish “enterprise-to-edge infrastructure, develop and publish standards, and integrate security that enables reliable, fast, and effective AI solutions,” officials wrote, noting that adoption and reuse of existing joint, allied and partner capabilities “will be maximized before developing unique capabilities” for the Marine Corps.

Infrastructure requirements for enterprise- and tactical-level employment of these types of technologies are to be determined by a working group led by the Service Data Office.

Officials also want cybersecurity to be baked into artificial intelligence efforts.

“Cybersecurity is integral to the development, deployment, and maintenance of AI capabilities. The Marine Corps will adopt best-in-class AI capabilities and software coupled with cybersecurity to protect our advantage against potential threats,” according to the strategy.

Promoting “responsible AI” is a key pillar of the strategy’s focus on governance issues, which includes creating a framework for oversight and management of innovation and algorithms.

“This governance will be lean but effective to encourage innovation while providing and enforcing standards and compliance,” officials wrote.

The Corps’ pursuit of so-called responsible AI meshes with a broader effort across the entire Defense Department to make sure artificial intelligence capabilities are safe, reliable, and effective, and operate in accordance with ethical and legal standards.

The final goal outlined in the strategy is to better leverage opportunities for collaboration, including with other Defense Department components, international allies, industry and academia.

“This will accelerate AI innovation and adoption within the Marine Corps, ensure alignment with broader defense objectives, and enhance interoperability with key partners. These partnerships will improve collective capabilities and provide cumulative resource savings,” the document states.

A detailed implementation plan — which will be executed by new AI task groups set up across the Corps — will be forthcoming to help the service achieve the goals laid out in the new strategy.

The task groups will “support commanders in identifying their use cases, acting as the AI advisor, and serving as the key link between Headquarters Marine Corps, Fleet Marine Force, and supporting establishments,” according to the document.

“The approach presented in this strategy provides a logical framework that aligns to Joint and National initiatives and sets the Marine Corps on a path to maintain pace with the rapidly evolving AI landscape, outpacing our adversaries and enemies across the competition continuum. Leveraging the esprit de corps and innovative nature of our Marines and civilians will allow us to remain agile, focused, and ready to Fight Smart,” officials wrote.

Artificial intelligence is seen as a key enabler of the Pentagon’s Combined Joint All-Domain Command and Control (CJADC2) warfighting construct, which calls for better connecting sensors, platforms and data streams of the U.S. military and key allies under a more unified network. The Army, Navy, Air Force, Space Force and the Office of the Secretary of Defense are also pursuing AI strategies and tools to further these aims.

Marine Commandant Gen. Eric Smith has been talking about how he envisions artificial intelligence improving the operations of unmanned systems and data-sharing architectures.

“There’s a human in the loop, the human turns control over to the machine at some point. And so I think that is kind of where we’re going to have to go. Because human in the loop on all of our systems is important and it’s required really by law,” Smith said last week at a Brookings Institution event. “You’ve got a human in the loop, but it doesn’t say how far back the human has to be. And I do think automation is kind of the wave of the future. I mean, it’s already here. And machine-to-machine learning is key, which is why our MQ-9s [drones] are so important because they’re talking to each other, they’re learning. They’re bouncing off ground sensors. They’re picking up signals from destroyers, from frigates. And they’re sensing and making sense of what’s happening and they’re ubiquitously passing that data to the ground force, to the surface force.”

NGA launches new training to help personnel adopt AI responsibly
https://defensescoop.com/2024/06/18/nga-launches-new-training-help-personnel-adopt-ai-responsibly/
Tue, 18 Jun 2024 21:47:59 +0000

DefenseScoop got an inside look at the agency’s new AI strategy and GREAT training.

Artificial intelligence and machine learning adoption will increasingly disrupt and revolutionize the National Geospatial-Intelligence Agency’s operations, so leaders there are getting serious about helping personnel responsibly navigate the development and use of algorithms, models and associated emerging technologies.

“I think the blessing and curse of AI is that it’s going to think differently than us. It could make us better — but it can also confuse us, and it can also mislead us. So we really need to have ways of translating between the two, or having a lot of understanding about where it’s going to succeed and where it’s going to fail so we know where to look for problems to emerge,” NGA’s first-ever Chief of Responsible Artificial Intelligence Anna Rubinstein recently told DefenseScoop.

In her inaugural year in that nascent position, Rubinstein led the development of a new strategy and an instructional platform to help guide and govern employees’ existing and future AI pursuits. The latter educational tool is called GEOINT Responsible AI Training, or GREAT.

“So you can be a ‘GREAT’ developer or a ‘GREAT’ user,” Rubinstein said.

During a joint interview, she and the NGA’s director of data and digital innovation, Mark Munsell, briefed DefenseScoop on their team’s vision and evolving approach to ensuring that the agency deploys AI in a safe, ethical and trustworthy manner.

Irresponsible AI

Geospatial intelligence, or GEOINT, is the discipline in which imagery and data are captured from satellites, radar, drones and other assets — and then analyzed by experts to visually depict and assess physical features and specific geographically referenced activities on Earth.

Historically, NGA has a reputation as the United States’ secretive mapping agency.

One of its main missions now (which is closely guarded and not widely publicized) involves managing the entire AI development pipeline for Maven, the military’s prolific computer vision program.

“Prior to this role, I was the director of test and evaluation for Maven. So I got to have a lot of really cool experiences working with different types of AI technologies and applications, and figuring out how to test it at the level of the data models, systems and the human-machine teams. It was just really fun and exciting to take it to the warfighter and see how they are going to use this. We can’t just drop technology in somebody’s lap — you have to make sure the training and the tradecraft is there to support it,” Rubinstein noted.

While she served in that role as a contractor, her Maven expertise now deeply informs her approach to the new, permanent federal position she was tapped for.

“I’m trying to leverage all that great experience that I had on Maven to figure out how we can build enterprise capabilities and processes to support NGA — in terms of training people to make sure they understand how to develop and use AI responsibly — to make sure at the program level we can identify best practices and start to distill those into guidelines that programs can use to make sure they can be interoperable and visible to each other, to make sure that we’re informing policy around how to use AI especially in high-risk use cases, and to make sure we’re bringing NGA’s expert judgment on the GEOINT front into that conversation,” Rubinstein explained. 

Inside the agency, she currently reports to Mark Munsell, an award-winning software engineer and longtime leader at NGA. 

“It’s always been NGA’s responsibility to teach, train and qualify people to do precise geo-coordinate mensuration. So this is a GEOINT tradecraft to derive a precision coordinate with imagery. That has to be practiced in a certain way so that if you do employ a precision-guided munition, you’re doing it correctly,” he told DefenseScoop.

According to Munsell, a variety of timely factors motivated the agency to hire Rubinstein and set up a new team within his directorate that focuses solely on AI assurance and workforce development.

“The White House said we should do it. The Department of Defense said we should do it. So all of the country’s leadership thinks that we should do it. I will say, too, that the recognition of both the power of what we’re seeing in tools today and trying to project the power of those tools in five or 10 years from now, says that we need to be paying attention to this now,” Munsell told DefenseScoop. 

Notably, the establishment of NGA’s AI assurance team also comes as the burgeoning field of geoAI — which encompasses methods combining AI and geospatial data and analysis technologies to advance understanding and solutions for complex, spatial problems — rapidly evolves and holds potential for drastic disruption.

“We have really good coders in the United States. They’re developing really great, powerful tools. And at any given time, those tools can be turned against us,” Munsell said. 

DefenseScoop asked both him and Rubinstein to help the audience fully visualize what “irresponsible” AI would look like from NGA’s purview. 

Munsell pointed to the techno-thriller film from 1983, WarGames.

In the movie, a young hacker accesses a U.S. military supercomputer named WOPR — or War Operation Plan Response — and inadvertently triggers a false alarm that threatens to ignite a nuclear war.

“It’s sort of the earliest mention of artificial intelligence in popular culture, even before Terminator and all that kind of stuff. And of course, WOPR decides it’s time to destroy the world and to launch all the missiles from the United States to Russia. And so it starts this countdown, and they’re trying to stop the computer, and the four-star NORAD general walks out and says, ‘Can’t you just unplug the damn thing?’ And the guy like holds a wire and says, ‘Don’t you think we’ve tried that!’” Munsell said. 

In response, Rubinstein also noted that people will often ask her who is serving as NGA’s chief of irresponsible AI, which she called “a snarky way of asking a fair question” about how to achieve and measure responsible AI adoption.

“You’re never going to know everything [with AI], but it’s about making sure you have processes in place to deal with [risk] when it happens, that you have processes for documenting issues, communicating about them and learning from them. And so, I feel like irresponsibility would be not having any of that and just chucking AI over the fence and then when something bad happens, being like ‘Oops, guess we should have [been ready for] that,’” she said.

Munsell added that in his view, “responsible AI is good AI, and it’s war-winning AI.”  

“The more that we provide quality feedback to these models, the better they’re going to be. And therefore, they will perform as designed instead of sloppy, or instead of with a bunch of mistakes and with a bunch of wrong information. And all of those things are irresponsible,” he said.

‘Just the beginning’

Almost immediately after Rubinstein joined NGA as responsible AI chief last summer, senior leadership asked her to oversee the production of a plan and training tool to direct the agency’s relevant technology pursuits.

“When one model can be used for 100 different use cases, and one use case could have 100 different models feeding into it, it’s very complicated. So, we laid out a strategy of what are all the different touchpoints to ensure that we’re building AI governance and assurance into every layer,” she said.

The strategy she and her team created is designed around four pillars. Three of those cover AI assurance at scale, program support, and policies around high-risk use cases.

“And the first is people — so that’s GREAT training,” Rubinstein told DefenseScoop.

The ultimate motivation behind the new training “is to really bring it home to AI practitioners about what AI ethics means and looks like in practice,” she added.

And the new resources her team is refining aim to help distill high-level principles down into actionable frameworks for approaching real-world problems across the AI lifecycle. 

“It’s easy to say you want AI to be transparent and unbiased, and governable and equitable. But what does that mean? And how do you do that? How do you know when you’ve actually gotten there?” Rubinstein said.

To adequately address the two groups’ different needs, there are two versions of the GREAT training: one for AI developers and another for AI users.

“The lessons take you somewhat linearly through the development process — like how you set requirements, how you think about data, models, systems and deployment. But then there’s a capstone at the end that drops you into the middle of a scenario. There’s a problem, you’re on an AI red team, people have come to you to solve this issue. These are the concerns about this model. And there are three rounds, and each round has a plot twist,” Rubinstein explained. 

“So it’s, we’re giving students a way to start to think about what that’s going to look like within their organizations and broadly, NGA — and even broader in the geospatial community,” she said. 

Multiple partners, including Penn State Applied Research Lab and In-Q-Tel Labs, have supported development of the training so far.

“We got the GREAT developer course up and running in April, we got the GREAT user course up and running in May. And then beyond that, we will be thinking about how we scale this to everyone else and make sure that we can offer this beyond [our directorate] and beyond NGA,” Rubinstein said.

Her team is also beginning to discuss “what requirements need to look like around who should take it.”

Currently, everyone in NGA’s data and digital innovation directorate is required to complete GREAT. For all other staff, it’s optional.

“The closer they are to being hands-on-keyboard with the AI — either as a producer or consumer — the more we’ll prioritize getting them into classes faster,” Rubinstein noted.

Munsell chimed in: “But the training is just the beginning.”

Moving forward, he and other senior officials intend to see this fresh process formalized into an official certification.

“We want it to mean something when you say you’re a GREAT developer or a GREAT user. And then we want to be able to accredit organizations to maintain their own GEOINT AI training so that we can all be aligned on the standards of our approach to responsible GEOINT AI, but have that more distributed approach to how we offer this,” Rubinstein told DefenseScoop. “Then, beyond that, we want to look at how we can do verification and validation of tools that also support the GEOINT AI analysis mission.”

Updated on June 18, 2024, at 8:05 PM: This story has been updated to reflect a clarification from NGA about how it spells the acronym it uses for its GEOINT Responsible AI Training tool.

CDAO shapes new tools to inform Pentagon’s autonomous weapon reviews
https://defensescoop.com/2024/04/04/cdao-new-tools-inform-pentagon-autonomous-weapon-reviews/
Thu, 04 Apr 2024 20:51:17 +0000

DefenseScoop recently discussed these new resources with the CDAO's acting chief of responsible artificial intelligence.

The Chief Digital and Artificial Intelligence Office team behind the Pentagon’s nascent Responsible AI Toolkit is producing new, associated materials to help defense officials determine whether capabilities adhere to certain mandates in the latest 3000.09 policy directive that governs the military’s development and adoption of lethal autonomous weapons. 

“Obviously, the 3000.09 process is not optional. But in terms of how you demonstrate that you are meeting those requirements — we wanted to provide a resource [to help],” Matthew Johnson, the CDAO’s acting Responsible AI chief, told DefenseScoop in a recent interview. 

The overarching toolkit Johnson and his colleagues have developed — and will continue to expand — marks a major deliverable of the Defense Department’s RAI Strategy and Implementation Pathway, which Deputy Secretary Kathleen Hicks signed in June 2022. That framework was conceptualized to help defense personnel confront known and unknown risks posed by still-emerging AI technologies, without completely stifling innovation.

Ultimately, the RAI Toolkit is designed to offer a centralized process for tracking and aligning projects to the DOD’s AI Ethical Principles and other guidance on related best practices.

Building on early success and widespread use of that original RAI toolkit, Johnson and his team are now generating what he told DefenseScoop are “different versions of the toolkit for different parties, or personas, or use cases” — such as one explicitly for defense acquisition professionals.

“It’s not to say that these different versions that kind of come out of the foundational one are all going to be publicly released,” Johnson said. “There will be versions that have to live at higher classification levels.”

One of those in-the-works versions that will likely be classified once completed, he confirmed, will pertain to DOD Directive 3000.09.

In January 2023, the first-ever update to the department’s long-standing official policy for “Autonomy in Weapon Systems” went into effect. Broadly, the directive assigns senior defense officials with specific responsibilities to oversee and review the development, acquisition, testing and fielding of autonomous weapon platforms built to engage military targets without troops intervening.

“So, that came out as the official policy. This isn’t like the official toolkit that operationalizes it. This is a kind of voluntary, optional resource that my team [is moving to offer],” Johnson said. 

The directive’s sixth requirement mandates that staff have plans in place to ensure consistency with the DOD AI Ethical Principles and the Responsible AI Strategy and Implementation Pathway for weapons systems incorporating AI capabilities — and incorporate them in pre-development and pre-fielding reviews. 

“We’re just providing a kind of resource or toolkit that enables you to demonstrate how you have met that requirement for either of those two reviews,” Johnson said. 

“Basically what we’re developing is something very similar to what you see in the public version of the toolkit — where, basically, you have assessments and checklists and those route you to certain tools to engage with, and then those can be basically pulled forward and rolled up into a package that can either show how you’re meeting requirement 6, or actually how you’re meeting all of the requirements,” he explained. 

Recognizing that “there’s certainly some overlap that can happen between the requirements,” Johnson said his team also wants “to provide basically an optional resource you can use to either show how you’re meeting requirement 6, or how you’re meeting all the requirements — through a process that basically eliminates, as much as possible, some of those redundancies in your answers.” 

These assets are envisioned to particularly support officials who are packaging 3000.09-aligned pre-development and pre-fielding reviews.
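As a rough illustration of that assess-route-package flow, the sketch below routes “yes” checklist answers to tools and rolls the results into a single review package while stripping duplicates, mirroring the redundancy-elimination goal Johnson describes. The questions and tool names are invented for illustration; the toolkit’s actual contents are not public at this level of detail.

```python
# Invented checklist-to-tool routing; not the CDAO's real toolkit contents.
CHECKLIST_ROUTES = {
    "uses_autonomy": ["3000.09 pre-development review aid"],
    "uses_personal_data": ["privacy assessment worksheet"],
    "high_risk_use_case": ["risk mitigation plan template"],
}

def build_review_package(answers: dict) -> list[str]:
    """Collect the tools triggered by 'yes' answers, without duplicates."""
    tools: list[str] = []
    for question, answered_yes in answers.items():
        if answered_yes:
            for tool in CHECKLIST_ROUTES.get(question, []):
                if tool not in tools:  # eliminate redundant entries
                    tools.append(tool)
    return tools

package = build_review_package({"uses_autonomy": True, "high_risk_use_case": True})
```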

“This is the first kind of policy that has a review process, that has a requirement to be able to demonstrate alignment or consistency with the DOD AI ethical principles — and so, what we’re really interested in here is kind of collecting lessons learned [about] what having a requirement like this does for overall mission success and what using the toolkit to meet a requirement like this does for mission success. And we’re hoping to basically acquire some really good data for this that will help us refine the toolkit and help us understand basically, like, is this a good requirement for future policies and what future policies should have a requirement like this?” Johnson told DefenseScoop.

Army strategizes to promote ‘responsible AI’ adoption
https://defensescoop.com/2024/02/27/army-responsible-ai-strategy/
Tue, 27 Feb 2024 22:18:13 +0000

DefenseScoop has new details on an in-development strategy that's being crafted to iteratively guide the service's AI pursuits.

U.S. Army officials are crafting and refining a broad but adaptable new plan to help ensure all artificial intelligence capabilities across the service are responsibly adopted and deployed now and in the future.

“We’re developing what we’re calling the ‘Army’s Responsible AI Strategy,’” Dr. David Barnes, the chief AI ethics officer at the Army AI Integration Center (AI2C), said at the Advantage Defense and Data Symposium.

“The strategy isn’t the end in itself. The idea is to set the conditions for the next generation of the Army’s AI strategy, to ensure that the principles are captured in this and into the work that we do — both on the ‘responsible build’ side, but also on [the] ‘responsible use’ side,” he explained. 

Barnes, who also serves as the deputy head of English and Philosophy at West Point, is one of the military’s top experts on legal, ethical, and policy concerns relating to AI-enabled systems. 

He has played a leading role in the Defense Department’s ‘responsible AI’ journey.

The Army first launched its own far-ranging AI adoption strategy in 2020. More recently, though, Barnes noted, his team realized it needed to more explicitly articulate an iterative approach for how the Army can lead in AI ethics and safety, one that deliberately incorporates practices the Defense Department encourages in its newer guides and resources. That work will result in the new responsible AI strategy.

“We have four major lines of effort. The first one, in no particular order, is on workforce development. Unsurprising, right? Building the expertise within the Army that has an understanding of responsible AI. Also, what’s AI for every soldier — what does everyone in the Army, from the youngest private up to the secretary, need to know about artificial intelligence relative to her position,” Barnes said.

The second line of effort focuses on helping the Army participate in more productive collaboration across government, academia and industry.

“The third area is governance. Obviously, governance is a big concern. From our perspective, it’s about scaling up across the Army — it’s ideas like the potential of a Responsible AI Board, and where might that sit in the current Army leadership structure?” Barnes said.

He concluded that the last line of effort, which he pointed out is also a major focus area for his team, is to take federal principles and guidelines that already exist and produce innovative metrics to gauge Army AI use cases and develop better risk assessments.

“It probably won’t ever be published as its own strategy. But the idea is how do we pull all these different elements together, and present it back? Because, right now, the DOD takes a somewhat narrow focus on what responsible AI is. And it becomes an afterthought, like so many other things, and it’s not as interwoven,” Barnes told DefenseScoop on the sidelines of the symposium after his panel.

“We want the next generation and future versions” of the Army’s processes and policies “to just have [responsible AI] built in as part of it all,” he added.

IC preparing its own tailor-made artificial intelligence policy
https://defensescoop.com/2024/02/22/odni-artificial-intelligence-policy-tailor-made/
Thu, 22 Feb 2024 22:22:38 +0000

The effort by the Office of the Director of National Intelligence is building on its recently implemented AI ethics framework and other existing standards.

Experts in the Office of the Director of National Intelligence are producing a sweeping new AI-governing policy that’s deliberately bespoke for all members of the intelligence community.

“The intent is always to make sure that what we do is transparent,” Michaela Mesquite, the acting chief for ODNI’s nascent Augmenting Intelligence using Machines (AIM) group, told DefenseScoop. 

She said the organization essentially leads and handles oversight for all AI capabilities across the entire enterprise. A longtime federal official and analyst, Mesquite played a leading role in ODNI’s recent development of its AI ethics framework.

During a panel discussion about operationalizing AI ethics in the military and intelligence domains at the Advantage Defense and Data Symposium on Thursday, she hinted at some of the next steps her team is pursuing when it comes to ensuring the responsible use of machine learning and other emerging technologies across the IC.

“AIM is focused very much on this governance piece. Knowing that we’ve had the ethical principles and an AI ethics framework for a while — there’s a lot more policy coming, and a new strategy coming, and governance structures to be stood up. So, we are busy,” Mesquite said.  

Although she didn’t go into much detail about those unfolding standards and policy-making endeavors during the panel, Mesquite did note that the overarching intention is to guarantee everyone in the IC — not just technology developers, but acquisition experts and all other end users — has a strong grasp on what AI capabilities are appropriate and useful or inappropriate for their jobs. 

“How do we make sure we are looking at the breadth of the policy to make sure our entire organization is mature enough to think about, fully, what is an effective use and appropriate use, and therefore already embedded is the ethical use — because if it’s effective, if it’s appropriate, it’s already going to be ethical. So how do we do that? We get to make our own IC AI policy for this,” she said.

In a sideline conversation after her panel, Mesquite briefed DefenseScoop regarding the ongoing process and what this work really looks like on the ground.

“[ODNI has] a policy team. It’s their job to write these policies. And because this is such a touchy [topic] — these are big deals, right — they do it in their own sort of ‘black box’ and there are process protections around it. So, we [the AIM group] bring the expertise and ideas to inform them,” she explained.

“There are a very limited number of policy instruments. Of those policy instruments, there’s directives, guidance, standards and memorandums. So, first, we have to have the directive part — then everything hangs off of that,” Mesquite told DefenseScoop.

She declined to provide a time frame for when an initial AI policy directive for the intelligence community will be completed.

US eyes first multinational meeting to implement new ‘responsible AI’ declaration
https://defensescoop.com/2024/01/09/us-eyes-first-multinational-meeting-to-implement-new-responsible-ai-declaration/
Tue, 09 Jan 2024 22:37:08 +0000

The Political Declaration on Responsible Military Use of AI and Autonomy was unveiled in 2023.

U.S. Defense and State Department officials aim to meet with delegates from at least 50 other nations by mid-2024 to discuss the nascent framework and standards doctrine they’ve recently signed onto, pledging to “responsibly” develop and deploy artificial intelligence and autonomous military technologies, according to a top Pentagon policymaker.

The Political Declaration on Responsible Military Use of AI and Autonomy was originally produced in early 2023. State spotlighted it in November and confirmed then that more than 40 countries had formally endorsed it.

“We’re up to 51 now, including the United States, and we’re proud of the fact that it’s not just the usual suspects,” Michael Horowitz, deputy assistant secretary of defense for force development and emerging capabilities, said on Tuesday during a webcast hosted by the Center for Strategic and International Studies.

“We’re actually working toward a potential plenary session in the first half of 2024 with those states that have endorsed the political declaration — and we hope that even more will come on board before that happens, and will come onboard afterwards,” he added. 

While State has not shared the full declaration publicly, a summary released last year notes that it sets “voluntary guidelines describing best practices for use of AI in a military context” and is designed to “put in place measures to increase transparency, communication, and reduce risks of inadvertent conflict and escalation.” 

Previously, the department had confirmed that “endorsing states will meet in the first quarter of 2024 to begin this next phase” of implementing responsible practices associated with the declaration. 

Spokespersons from the Pentagon and State Department did not answer DefenseScoop’s questions on Tuesday regarding what this plenary session will involve or why the timeline for it has been seemingly extended.

During the CSIS event, Horowitz also acknowledged that China and Russia are not among the nations participating in the multinational agreement at this point. He did note, however, that the pact made between President Biden and Chinese President Xi Jinping in November to restore some military-to-military communications included plans for a forthcoming meeting “to have conversations about AI safety and general AI capabilities.” 

“We think that, again, dialogue between the U.S. and the [People’s Republic of China] is helpful. And wherever the substance of those conversations leads, or focuses, we think that’ll be a good thing,” Horowitz said.

Some concepts underpinning DOD Directive 3000.09 governing “Autonomy in Weapon Systems” — which last year was updated under Horowitz’s leadership, for the first time since 2012 — are essentially reflected in U.S. foreign policy via the new multinational declaration.

A major shift in that directive is that it now explicitly lays out a new senior-level review process for evaluating contemporary applications for military AI ahead of their use.

“When you have multiple undersecretaries and the vice chair [of the Joint Chiefs of Staff] who have to both approve an autonomous weapon system prior to development and approve it prior to fielding — that is really like deep, internal DOD politics — that’s a huge bureaucratic lift, frankly, to do something like that. But it reflects how seriously we take our responsibility when it comes to ensuring that any autonomous weapon systems that are fielded, that we can be confident that they’re safe,” Horowitz said.

He (again) did not comment on whether specific weapon systems have been, are, or will be subject to the freshly updated 3000.09 review process.

Pentagon developing repository to document when AI goes wrong
https://defensescoop.com/2023/11/16/pentagon-developing-repository-to-document-when-ai-goes-wrong/
Thu, 16 Nov 2023 18:32:32 +0000

The development effort was noted in a new "Responsible AI Toolkit” that the Defense Department unveiled on its Tradewinds website.

The Department of Defense is in the process of creating a new “incident repository” that will catalog problems that Pentagon officials encounter with artificial intelligence.

The development effort was mentioned in a new “tools list” that was released by the department this week as part of a broader AI “toolkit” that it unveiled on its Tradewind website.

“The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process that identifies, tracks, and improves the alignment of AI projects toward RAI best practices and the DoD AI Ethical Principles while capitalizing on opportunities for innovation. The RAI Toolkit provides an intuitive flow guiding the user through tailorable and modular assessments, tools, and artifacts throughout the AI product lifecycle. Using the Toolkit enables people to incorporate traceability and assurance concepts throughout their development cycle,” according to an executive summary.

Among about 70 items on the list is an AI Incident Repository that is not yet accessible to department personnel because it’s still being developed. Once it’s up and running, it will feature a “collection of AI incidents and failures for review and to improve future development” of the technology, according to the Pentagon.

The new toolkit already includes links to an open-source database that is “dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes,” according to the website. Examples of such incidents include autonomous vehicles hitting pedestrians and faulty facial recognition systems, among others.
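For a sense of what one entry in such a repository might capture, here is a minimal sketch modeled loosely on the kinds of fields the public AI Incident Database exposes. The field names and the sample record are assumptions for illustration only.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AIIncident:
    """One cataloged AI failure; fields are illustrative, not DOD's schema."""
    incident_id: int
    system: str           # the AI system involved
    date_observed: date
    harm_type: str        # e.g., "biased output", "unsafe recommendation"
    description: str
    mitigation: str = ""  # follow-up action, once known

repository: list[AIIncident] = []
repository.append(AIIncident(
    incident_id=1,
    system="hypothetical facial-recognition pilot",
    date_observed=date(2023, 11, 1),
    harm_type="false match",
    description="Model misidentified a subject during testing.",
))
print(json.dumps(asdict(repository[0]), default=str, indent=2))
```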

Other aids that are in the works as part of the Pentagon’s responsible AI implementation effort include an executive dashboard laying out project goals; incident response guidance including an interactive web application for end-user auditing; an acquisition guide for potential buyers of artificial intelligence products; and a senior leadership guide for reviewing program managers overseeing AI projects.

There’s also a “use case repository” in development that will include a rundown of artificial intelligence use cases, and a tool to help organizations define and establish roles and responsibilities for AI projects. The Pentagon recently established Task Force Lima to look at a slew of potential use cases for generative AI.

A “human bias red-teaming toolkit” and a bias bounty guidebook are also expected to be released.

In July, the Pentagon’s Chief Digital and AI Office (CDAO) issued a call for “discovery papers” in its search for vendors to set up a new bounty program.

“The DoD is interested in supporting grassroots/crowdsourced red-teaming efforts to ensure that their AI-enabled systems — and the contexts in which they run — are safe, secure, reliable, and equitable. Bias — the systematic errors that an AI system generates due to incorrect assumptions of various types — is a threat to achieving this outcome. Therefore, as part of this priority, the current call seeks industry partners to help develop and run an AI bias bounty program to algorithmically audit models, facilitate experimentation with addressing identified risks, and ensure the systems are equitable given their particular deployment context,” according to a notice posted on the CDAO’s Tradewind website.
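One common audit a bias bounty hunter might run is a demographic parity check: compare a model’s positive-outcome rate across groups and treat a large gap as a reportable finding. This is a generic fairness-metric sketch, not the CDAO program’s actual methodology.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy data: group "a" gets positive outcomes 2/3 of the time, group "b" 1/3.
gap = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
print(f"parity gap: {gap:.2f}")  # 0.33 -- large enough to flag as a finding
```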

The Pentagon expects vendors to have equipped the department and its components with the tools needed to organize and run their own bias bounty programs in the future, according to the notice.

Meanwhile, the CDAO this week also launched a new digital on-demand learning platform to boost the AI know-how of the Defense Department’s military and civilian personnel by providing them access to MIT’s Horizon library, which will offer “bite-sized learning assets” related to artificial intelligence, the Internet of Things, 5G, edge computing, cybersecurity and big data analytics, according to a release.

The capability — which will be provided through the Air and Space Forces’ Digital University — is intended to “foster a baseline understanding of AI systems and other emerging technologies,” CDAO chief Craig Martell said in a statement. “This resource demonstrates to the DoD workforce how they fit into the future of these advancements and further enables their adoption throughout the Department.”

In a statement, Kathleen Kennedy, senior director of MIT Horizon and executive director of the MIT Center for Collective Intelligence, said: “The DoD is on a historical journey of building a digital workforce. When it comes to AI and emerging technologies, it is really important that their employees are all speaking the same language.”

To use the library, DOD personnel should create an account via the digitalu.af.mil website using their .mil email address, and search for “MIT Horizon,” according to the release.

Pentagon’s Chief Digital and AI Office to host procurement forum for industry
https://defensescoop.com/2023/11/02/pentagons-chief-digital-and-ai-office-to-host-procurement-forum-for-industry/
Thu, 02 Nov 2023 19:13:59 +0000

The CDAO event is slated for Nov. 30, according to a special notice.

The Pentagon organization tasked with spearheading the adoption of artificial intelligence capabilities and other digital tools across the department will hold a conference Nov. 30 to brief industry on its procurement plans, according to a special notice published Thursday.

The inaugural Chief Digital and AI Office (CDAO) procurement forum is scheduled to take place at an office building in the Rosslyn neighborhood of Arlington, Virginia.

“As CDAO’s ambition, objectives, and budget have doubled within the last year, we are actively seeking ambitious, innovative organizations to learn about our mission, discover opportunities, and compete to contribute to cutting-edge AI standards development within the Department of Defense,” per the announcement, posted on Sam.gov.

The briefing is expected to include the organization’s fiscal 2024 procurement forecast, information about assisted acquisition procurements, an “acquisition ecosystem” primer, and discussions about the Pentagon’s needs related to “Responsible AI,” Joint All-Domain Command and Control (JADC2), Task Force Lima, digital talent management, and Advana enterprise platform capabilities.

Members of industry who want to attend are instructed to fill out an interest form attached to the special notice. The submission deadline is Nov. 12.

On Thursday, the Pentagon also rolled out its new data, analytics and AI adoption strategy. During a call with reporters, CDAO chief Craig Martell said feedback from industry at the upcoming procurement forum will help shape the implementation plan that’s being developed.

“How we partner with industry … is going to be extremely important to delivering this strategy. We will not be able to do this without our industrial partners, without academic partners and without our actual, you know, country partners and allies. So it’s going to have a big impact,” he told DefenseScoop during the call.

“If I come with a vision that says, ‘Here’s how I want to pay you because this is what I need,’ and they all say, ‘Nope, that’s not going to work’ — well great, then I have to rethink that. And then I have to ask them, ‘Well, you know, what is it that’s going to be sustainable for your business?’ … I need those industrial partners to continue to build and sustain this. If I have some crazy idea about what I want to build and nobody wants to build it for me, well that’s not going to work. Right? So we absolutely have to do this in partnership with lots of folks but particular to your question, industry,” he added.

Additional CDAO procurement forums are expected to be held next year, according to the announcement.

Updated on Nov. 3, 2023 at 3:20 PM: This story has been updated to include comments from CDAO’s Craig Martell.

Inside the DOD’s trusted AI and autonomy tech review that brought together hundreds of experts
https://defensescoop.com/2023/09/01/inside-the-dods-trusted-ai-and-autonomy-tech-review-that-brought-together-hundreds-of-experts/
Fri, 01 Sep 2023 19:44:02 +0000

More than 200 attendees from government, industry and academia participated in a three-day conference hosted by the Office of the Under Secretary of Defense for Research and Engineering.

More than 200 attendees — representing the government, military, and approximately 60 companies, universities and federally funded research centers — participated in a three-day conference June 20-22 that the Office of the Under Secretary of Defense for Research and Engineering organized and hosted to deliberate on key advancements and issues in the fields of artificial intelligence and autonomy within the U.S. defense sector. 

“The way that the days unfolded was that industry and academia heard our problems, about where we needed help and what technical gaps we needed closed, and then we heard from industry and academia about their research as applied to those gaps. Then, we took down actions on how to move forward to address those gaps,” a senior Defense Department official who helped lead the conference told DefenseScoop on the condition of anonymity this week. 

Broadly, AI-enabled and autonomous platforms and computer software can function, within constraints, to complete actions or solve problems that typically require human intelligence — with little to no supervision from people. Certain Defense Department components have been developing and deploying AI for years, but at this point such assets have not been scaled enterprise- or military-wide and some associated guidance is still lacking.

And as they speedily grow in sophistication, emerging and rapidly evolving large language models and related generative AI capabilities are also posing new potential for help and harm across Pentagon components. 

This week, DefenseScoop obtained an official summary of DOD’s recent conference to directly address some of those threats and possibilities with experts across sectors — dubbed the Trusted AI and Autonomy (TAIA) Defense Technology Review (DTR). The document was written by R&E leadership but hasn’t been publicly released.

The event provided a platform for the government to communicate its objectives and challenges in the realm of AI and autonomy, and “experts to engage in in-depth discussions on specific areas of concern and aspiration within” those emerging technology realms, it states.

Among the prominent industry organizations present were Amazon, IBM, NVIDIA, Boeing, BAE, Boston Dynamics, Dynetics, Applied Intuition, Skydio and TwoSix.

On the first day of the event (which was held at a MITRE facility, with support from that firm’s CTO and team), DOD’s Chief Technology Officer and Undersecretary for R&E Heidi Shyu delivered the keynote address spotlighting critical technologies and investment areas for trusted AI and autonomy, including a new initiative to stand up strategic AI hubs.

Other notable speakers who gave presentations over the course of the three days included Lt. Gen. Dagvin Anderson (Joint Staff, J7), Chief Digital and AI Office CTO William Streilein, DARPA’s John Kamp and representatives from Indo-Pacific Command, Central Command, European Command and the military services.

After Shyu’s keynote, DOD’s Principal Director for Trusted AI and Autonomy Kimberly Sablon presented her team’s strategic vision, “with a focus on cognitive autonomy development within a system of systems framework,” according to the summary. Sablon stressed the significance of continuous adversarial testing and red-teaming to ensure a resilient operations or machine learning operations (MLOps) pipeline. She also announced two fresh AI initiatives.

One of those initiatives encompasses a new “community of action that integrates mission engineering, systems engineering and research via integrated product teams and with emphasis on rapid experimentation with mission partners to address interoperability earlier,” officials wrote in the summary.

The other is a pilot Center for Calibrated Trust Measurement and Evaluation (CaTE) that will bring the test, evaluation, verification and validation, acquisition, and research-and-development communities together “to develop standard methods and processes for providing evidence for assurance and for calibrating trust in heterogenous and distributed human-machine teams,” the summary explains. Led by Carnegie Mellon University’s Software Engineering Institute in collaboration with the services and other FFRDCs, that pilot center will pursue operationalizing responsible AI, taking a warfighter-in-the-loop design, development and training approach.

On the second and third days of the conference, attendees engaged in different breakout sessions designed to focus on specific tracks that encompassed a wide range of critical AI and autonomy topics for DOD.

Those tracks included: large language models; multi-agent autonomous teaming; deception in AI; advanced AI processing; human-machine teaming; AI-enabled military course of action generation; the R&E AI hubs initiative; intelligent edge retraining; responsible AI and lethal autonomy; MLOps/development platforms; synthetic data for emerging threats; and calibrated trust in autonomous systems.

“The conference facilitated the exchange of knowledge and ideas, providing valuable input to shape the direction of government research in critical areas of AI and autonomy. Furthermore, it laid the groundwork for focused workshops on AI Hubs, sparked interest in the future R&E Community of Action and CaTE, and paved the way for a much larger follow-on event to be scheduled for January 2024,” officials wrote in the summary. 

To the senior defense official who briefed DefenseScoop on what unfolded, this event demonstrates one way in which DOD is working deliberately to “address safety concerns” to develop and deploy capabilities that have appropriate guardrails associated with “constitutional AI.”

“Constitutional AI is a new approach to AI safety that shapes the outputs of AI systems according to a set of principles,” the official said.

Via this approach, an artificial intelligence system has a set of principles, or “constitution,” against which it can evaluate its own outputs.

“CAI enables AI systems to generate useful responses while also minimizing harm. This is important because existing techniques for training models to mirror human preferences face trade-offs between harmlessness and helpfulness,” the senior defense official said.
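In code terms, the pattern the official describes reduces to a draft-critique-revise loop like the minimal sketch below, where generate, violates and revise stand in for model calls and the two principles are illustrative placeholders.

```python
# Minimal constitutional-AI loop: self-critique outputs against principles.
PRINCIPLES = [
    "Do not reveal personally identifiable information.",
    "Do not provide instructions that enable physical harm.",
]

def constitutional_respond(prompt, generate, violates, revise, max_rounds=3):
    """Generate a draft, check each principle, and revise until it passes."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        broken = [p for p in PRINCIPLES if violates(draft, p)]
        if not broken:
            return draft               # constitution satisfied
        draft = revise(draft, broken)  # model rewrites its own output
    return draft
```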

“The department is cognizant of what the state of the art is and recognizes that we want to safely deploy it,” they told DefenseScoop.
