<![CDATA[C4ISRNet]]>https://www.c4isrnet.comSat, 13 Jul 2024 05:33:29 +0000en1hourly1<![CDATA[Embrace AI to maintain global talent pool for US innovation, security]]>https://www.c4isrnet.com/opinions/2024/07/05/embrace-ai-to-maintain-global-talent-pool-for-us-innovation-security/https://www.c4isrnet.com/opinions/2024/07/05/embrace-ai-to-maintain-global-talent-pool-for-us-innovation-security/Fri, 05 Jul 2024 20:59:16 +0000The U.S. faces significant challenges, including climate change, cyber threats, resource scarcity, and global health crises. To address these and many other issues, the nation has long attracted international scientists to collaborate on scientific and technological breakthroughs.

Well-known examples include the Human Genome Project, space exploration, Silicon Valley, and biomedical research. Further, in the ever-evolving landscape of science and innovation, the U.S. stands as a beacon for global talent, attracting brilliant minds to its research institutions and universities. This influx of international scientists and engineers has undeniably enriched the nation’s scientific endeavors, leading to countless breakthrough discoveries, transformative innovations, and advancements.

However, amidst the celebration of diversity and collaboration lies a growing concern – the need to balance the openness of the U.S. research enterprise and maintain the nation’s competitive edge on the global stage with the imperative to safeguard national security interests.

Revelations and discussions from this year, as articulated in a GAO Report on Research Security released in January 2024; in remarks made by the Director of National Intelligence, Avril Haines, in her March 2024 testimony before the Senate Select Committee on Intelligence; and in widely reported revelations by the FBI, have highlighted the potential risks associated with foreign researchers working on sensitive projects.

Now, especially as we approach the election, the conversation has turned to how best to protect intellectual property and prevent the compromise and nefarious exploitation of critical technology by foreign adversaries and other bad actors, including terrorist groups and those that mean to do harm to our national well-being. While this is not an easy tension to navigate, it is one we can manage consistent with our national values as we continue to forge ahead with advanced scientific research and technology development to the benefit of all people, everywhere.

At the heart of the matter lies the challenge of finding a solution that does not stifle the flow of talent while ensuring that the nation’s interests are protected. This delicate balance requires a departure from conventional approaches to risk assessment, which often rely on simplistic country-based criteria or blanket restrictions that paint all foreign researchers with the same brush of suspicion.

Instead, what is needed is a nuanced and agile approach – one that leverages technology to identify and mitigate potential risks swiftly and effectively and supplements the work that teams focused on innovation and national security are already doing. Enter the concept of a triage tool – a sophisticated mechanism that incorporates AI capabilities designed to assess the risk posed by international scientists and engineers while expediting the entry of low-risk individuals into the U.S. research ecosystem.

The ideal triage tool, tied to policy that favors the fast tracking of the many scientists who will provide valuable contributions without jeopardizing national security, would possess several key attributes: it must operate without bias, respecting the diversity of backgrounds and nationalities among applicants; it should be automated, ensuring swift processing and scalability without compromising accuracy; and it must prioritize privacy, abstaining from intrusive data collection methods such as biometrics or personally identifiable information.

Furthermore, the tool should complement existing vetting processes, seamlessly integrating into the fabric of U.S. research institutions and agencies. It should be cost-effective, offering significant savings in both time and resources while delivering reliable results with minimal false positives or negatives.

By implementing such a tool, the U.S. can strike a delicate balance between fostering international collaboration and protecting its national security interests. By establishing robust mechanisms for protecting sensitive information, the U.S. can bolster trust and encourage fruitful collaboration between domestic and international partners. This would not only enhance the experience of foreign researchers seeking opportunities in the U.S. but also reassure domestic institutions and agencies tasked with safeguarding sensitive information.

The U.S. stands at a crossroads, where the nurturing of global talent is not merely an option but a strategic imperative. Embracing international scientists and engineers is not only a testament to our commitment to excellence but also a catalyst for driving forward the frontiers of human knowledge and ingenuity. In the end, it is not about erecting barriers or shutting the door to international talent – it is about striking a harmonious chord between openness and vigilance, rather than a forced tradeoff between the two.

By embracing innovation in risk assessment and vetting processes, and thereby welcoming the contributions of international talent, the U.S. can continue to lead the world in scientific discovery, close existing skill gaps, and foster global collaboration and cultural exchange, all while safeguarding its technological edge for generations to come and fortifying its position as a leader in innovation.

Donald (Don) J. Blersch is Clearspeed’s SVP of Government Innovation. With multi-agency experience, including NASA, the National Oceanic and Atmospheric Administration (NOAA), Central Intelligence Agency (CIA), the Office of the Director of National Intelligence (ODNI), the National Reconnaissance Office (NRO), the Missile Defense Agency (MDA), and the U.S. Department of State’s Bureau of Diplomatic Security, Blersch led the implementation of technology innovation while advising the executive leadership bench on a wide range of security disciplines, enabling the department to meet vital national security responsibilities with a well-vetted and trusted workforce, hyperfocused on the protection of sensitive, classified information.

]]>
Patrick Semansky
<![CDATA[Pentagon wants to make AI acceleration initiative a long-term fixture]]>https://www.c4isrnet.com/artificial-intelligence/2024/07/02/pentagon-wants-to-make-ai-acceleration-initiative-a-long-term-fixture/https://www.c4isrnet.com/artificial-intelligence/2024/07/02/pentagon-wants-to-make-ai-acceleration-initiative-a-long-term-fixture/Tue, 02 Jul 2024 15:41:13 +0000TAMPA, Fla. — Three years after launching an effort to help combatant commands adopt artificial intelligence tools and concepts, the Pentagon is crafting a long-term vision for the program.

Deputy Secretary of Defense Kathleen Hicks announced the AI and Data Acceleration initiative in May 2021, just months after she took office. The goal was to use experimentation and exercises to help combatant commands apply digital tools to operational concepts like joint all-domain command and control and other key functions, including maintenance and logistics.

As part of the effort, the department embedded teams of data scientists, engineers and coders in each of the 11 combatant commands. Those experts were tasked with assessing each command’s digital readiness and providing feedback about where the Defense Department should invest to speed up their progress.

Radha Plumb, the Pentagon’s chief digital and AI officer, told C4ISRNET during a recent visit to U.S. Central Command headquarters in Tampa, Fla., that the teams have been “wildly successful” — so much so that DoD leaders want to find the right model for making them a more permanent fixture.

“Having this capability in the COCOMs with connectivity back to the Pentagon headquarters and [the Office of the Secretary of Defense] is really valuable for a lot of reasons — highlighting data blockers, identifying where there are priority needs that we need to accelerate or invest in at a central level,” she said. “Now, we’re working through what is the long-term model.”

The initial funding for ADA ends this fiscal year, and the Pentagon plans to extend that through 2029 while it determines what a longer-term model might look like. The department requested $14 million for the program in its fiscal 2025 budget.

Plumb said she’s in the early phases of developing that plan. Part of her reason for traveling to CENTCOM was to meet with their ADA team, get a sense for how they’re organized within the command and hear about their successes and their needs.

That effort involves understanding how ADA teams fit within a command’s organizational construct, how big they are and how they’re being utilized. Those three areas, she said, will help the department determine how to manage and provide funding for the ADA hubs — whether through a centralized data team within CDAO, as it does today, or a different model.

Scaling tools

The team at CENTCOM is well integrated into the command, Plumb said. Since ADA began, they’ve completed their readiness assessment and have worked with the innovation office and operators to create multiple AI and data tools.

“They’re building tools they can experiment with and can make them operational,” she said. “And then if they want to mature and scale them, we at CDAO can get pathways for them.”

If the tools are adapted from commercial products, she said, that could mean finding resources to expand their use. For government-owned tools, CDAO may help provide the necessary authorizations to scale them more broadly.

Plumb’s office recently announced a new approach to scaling AI and analytics tools across the department that could make this process easier for the ADA teams. The Open Data and Applications Government-owned Interoperable Repositories construct, or Open DAGIR, aims to help DoD and industry bring together data platforms, development tools and applications.

The effort focuses on three types of capabilities: mature applications for which it wants to buy enterprise licenses; experimental apps that address high-priority needs but require funding to accelerate their development; and applications built within a combatant command.

While Open DAGIR is meant to provide a pathway to make digital tools more widely available, Plumb said she’s heard from the ADA teams that data access is a significant roadblock.

One reason for that is DoD’s own struggle to automate and centralize data. Having the ADA teams identify where those bottlenecks are and why that data is needed helps Plumb and her office know where to focus their efforts and devote more resources.

“We have work to do,” she said. “The demand signal they’re creating, while it can be frustrating for them, gives us the signal and confidence we need to go and make those back end improvements.”

]]>
Lisa Ferdinando
<![CDATA[How the military is preparing for AI at the edge]]>https://www.c4isrnet.com/opinion/2024/06/26/how-the-military-is-preparing-for-ai-at-the-edge/https://www.c4isrnet.com/opinion/2024/06/26/how-the-military-is-preparing-for-ai-at-the-edge/Wed, 26 Jun 2024 19:29:37 +0000The Defense Department has long used artificial intelligence to detect objects in battlespaces, but the capability has been mainly limited to identification. New advancements in AI and data analysis can offer leaders new levels of mission awareness with insights into intent, path predictions, abnormalities, and other revealing characterizations.

The DoD has an extensive wealth of data. In today’s sensor-filled theaters, commanders can access text, images, video, radio signals, and sensor data from all sorts of assets. However, each data type is often analyzed separately, leaving human analysts to draw — and potentially miss — connections.

Using AI frameworks for multimodal data analysis allows different data streams to be analyzed together, offering decision-makers a comprehensive view of an event. For example, Navy systems can identify a ship nearby, but generative AI could zero in on the country of origin, ship class, and whether the system has encountered that specific vessel before.

With an object of interest identified, data fusion techniques and machine learning algorithms could review all the data available for other complementary information. Radio signals could show that the ship stopped emitting signals and no crew members are using cell phones. Has the vessel gone dark to prepare for battle, or could it be in distress? Pulling in recent weather reports could help decide the next move.
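The fusion step described above can be sketched in a few lines. This is a toy late-fusion example; the modality names, scores, weights, and threshold are all illustrative assumptions, not any fielded Navy system:

```python
# Minimal late-fusion sketch: combine per-modality anomaly scores about a
# vessel into one assessment. All names, weights, and thresholds are
# hypothetical, for illustration only.

def fuse_modalities(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-modality anomaly scores, each in [0, 1]."""
    total_w = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_w

# Hypothetical readings: imagery recognizes the hull, RF shows emissions
# have stopped, and calm weather makes distress a less likely explanation.
scores = {"imagery": 0.4, "rf_emissions": 0.9, "weather": 0.2}
weights = {"imagery": 1.0, "rf_emissions": 2.0, "weather": 0.5}

risk = fuse_modalities(scores, weights)
print(f"fused risk score: {risk:.2f}")
if risk > 0.5:
    print("flag for analyst review")
```

In practice the per-modality scores would come from separate models (image classifiers, signal detectors, weather feeds), and the weighting itself could be learned rather than fixed.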

This enhanced situational awareness is only possible if real-time analysis happens at the edge instead of sending data to a central location for processing.

Keeping AI local is critical for battlefield awareness, cybersecurity, and healthcare monitoring applications requiring timely responses. To prepare, DoD must adopt solutions with significant computing power at the edge, find ways to reduce the size of its AI/ML models, and mitigate new security threats.

With most new AI tools and models being open, meaning that the information placed into these technologies is publicly available, agencies need to implement advanced security measures and protocols to ensure that this critical data remains secure.

Pushing processing power

Historically, tactical edge devices have collected information and sent data back to command data centers for analysis. Their limited computing and processing capabilities slow battlefield decision-making, but they don’t have to. Processing at the edge saves time and avoids significant costs by allowing devices to upload analysis results to the cloud instead of vast amounts of raw data.

However, AI at the edge requires equipment with sufficient computing power for today’s and tomorrow’s algorithms. Devices and sensors must be able to operate in a standalone manner to perform computing, analysis, learning, training, and inference in the field, wherever that may be. Whether on the battlefield or attached to a patient in a hospital, AI at the edge learns from scenarios to better predict and respond the next time. For the Navy crew, that could mean identifying what path a ship of interest may take based on previous encounters. In a hospital, sensors could flag the symptoms of a heart attack before arrest happens.

Connectivity will be necessary, but systems should also be able to operate in degraded or intermittent communication environments. Using 5G or other channels allows sensors to talk and collaborate while disconnected from headquarters or a command cloud.

Another consideration is orchestration: Any resilient system should include dynamic role assignments. For example, if multiple drones are flying and the leader gets taken out, another system component needs to assume that role.

Shrinking AI to manageable size

A battlefield is not an ideal environment for artificial intelligence. AI models like ChatGPT operate in climate-controlled data centers on thousands of GPU servers that consume enormous energy. They train on massive datasets, and their computing requirements increase exponentially in operational inference stages. The scenario presents a new size, weight, and power puzzle for what the military can deploy at the edge.

Some AI algorithms are now being designed for SWAP-constrained environments and novel hardware architectures. One option is miniaturizing AI models. Researchers are experimenting with multiple ways to make smaller, more efficient models through compression, model pruning, and other options.
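Magnitude pruning, one of the compression approaches mentioned, can be illustrated on a single weight matrix. This is a toy sketch, not a production pipeline, which would prune per layer and fine-tune afterward:

```python
import numpy as np

# Toy magnitude pruning: zero out the smallest-magnitude weights, keeping
# only the top fraction. Illustrative only.

def magnitude_prune(weights: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Zero all but the largest-magnitude `keep_fraction` of weights."""
    k = max(1, int(weights.size * keep_fraction))
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
sparse_w = magnitude_prune(w, keep_fraction=0.25)
print(f"nonzero weights: {np.count_nonzero(sparse_w)} of {w.size}")
```

The resulting sparse matrix needs less storage and, with hardware or libraries that exploit sparsity, less compute; the risk is that pruning too aggressively degrades accuracy, which is why pruned models are typically retrained.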

Miniaturization has risks. A trained model could undergo “catastrophic forgetting” when it no longer recalls something previously learned. Or it could increasingly generate unreliable information — called hallucinations — due to flaws introduced by compression techniques or training a smaller model pulled from a larger one.

Computers without borders

While large data centers can be physically walled off with gates, barriers, and guards, AI at the edge presents new digital and physical security challenges. Putting valuable, mission-critical data and advanced analytics capabilities at the edge requires more than protecting an AI’s backend API.

Adversaries could feed bad or manufactured data in a poisoning attack to taint a model and its outputs. Prompt injections could lead a model to ignore its original instructions, divulge sensitive data, or execute malicious code. However, defense-in-depth tactics and hardware features such as physical access controls, tamper-evident enclosures, secure boot, and trusted execution environments (confidential computing) can help prevent unauthorized access to sensitive equipment, applications, and data.

Still, having AI capabilities at the tactical edge can provide a critical advantage during evolving combat scenarios. By enabling advanced analytics at the edge, data can be quickly transformed into actionable intelligence, augmenting human decision-making with real-time information and providing a strategic advantage over adversaries.

Steve Orrin is Federal Chief Technology Officer at Intel.

]]>
Damian Dovarganes
<![CDATA[Safran in talks to buy French AI startup Preligens for €220 million]]>https://www.c4isrnet.com/global/europe/2024/06/24/safran-in-talks-to-buy-french-ai-startup-preligens-for-220-million/https://www.c4isrnet.com/global/europe/2024/06/24/safran-in-talks-to-buy-french-ai-startup-preligens-for-220-million/Mon, 24 Jun 2024 13:46:58 +0000PARIS — Aerospace firm Safran is in exclusive talks to buy French defense artificial-intelligence startup Preligens, whose algorithms are used to analyze satellite data for the French and U.S. militaries, for an enterprise value of €220 million, or $236 million.

Safran said the potential deal is a “unique opportunity” to add cutting-edge AI to its product offering. The transaction is subject to the usual regulatory approvals, and is expected to close in the third quarter of 2024, the Paris-based company said in a statement on Monday.

“The proposed acquisition of Preligens will boost the adoption of AI within the group,” Safran CEO Olivier Andriès said in the statement. “It will represent a step-change for our defense and space technology businesses.”

The acquisition would ensure French control of a technology that the country’s Armed Forces Ministry has identified as crucial in the competition between global powers. The French government owns 11.2% of Safran and 18.1% of voting rights. Other bidders for Preligens included Sweden’s Hexagon and the Leonardo-Thales joint venture Telespazio, Les Echos reported in April.

The Preligens AI has been trained specifically for detecting military equipment such as armored vehicles, aircraft and ships on satellite or drone images, and France’s military intelligence uses the technology to monitor activity at strategic sites. The startup also works with NATO, the U.S., the U.K. and the EU, and last month announced a new contract with an Asia-Pacific customer for AI analysis of high volumes of government satellite images.

The startup was approached in 2020 by the CIA-sponsored investment fund In-Q-Tel, prompting French government-owned defense investment fund Definvest to participate in a €20 million funding round that same year to keep ownership fully in France. The French armament agency DGA signed a framework contract for AI analysis with Preligens in 2022 with a value of as much as €240 million over seven years.

France in March announced plans to reallocate €2 billion of funding from its 2024-2030 defense budget to artificial intelligence. Armed Forces Minister Sébastien Lecornu last week announced plans to build Europe’s most powerful classified supercomputer to take the lead in AI for defense purposes, saying France will be the European power that will devote most resources to military AI.

Preligens had sales of €28 million in 2023 and employs about 220 people, including 140 engineers in research and development. The company’s products include Xerus, which uses AI to map terrain for military purposes such as mission planning, and Robin, which provides AI-based monitoring of activity at strategic sites such as air bases. The Paris-based startup is also working with the French Navy on AI-powered analysis of underwater acoustic signals.

Adding the Preligens technology will allow Safran to deploy AI-enabled digital inspection focused on flight safety and quality, the company said. Safran gets more than three-quarters of its revenue from civilian aerospace.

Safran Electronics & Defense presented an AI solution called Advanced Cognitive Engine (ACE) at the Eurosatory defense show here last week, adding AI-based target detection and tracking to the company’s optronics for land vehicles, naval sights and aircraft. The company plans to integrate ACE with its drones and robotic systems.

Preligens was founded in 2016 by Arnaud Guérin, a former executive at French government-owned nuclear-power technology firm Areva, and Renaud Allioux, previously an engineer at Airbus Defence and Space focusing on remote sensing for Earth observation.

]]>
EMMANUEL DUNAND
<![CDATA[The best way to counter bad artificial intelligence is using good AI]]>https://www.c4isrnet.com/opinion/2024/06/20/the-best-way-to-counter-bad-artificial-intelligence-is-using-good-ai/https://www.c4isrnet.com/opinion/2024/06/20/the-best-way-to-counter-bad-artificial-intelligence-is-using-good-ai/Thu, 20 Jun 2024 15:41:32 +0000Could terrorists or other bad actors use artificial intelligence to create a deadly pandemic? Scientists at Harvard and the Massachusetts Institute of Technology conducted an experiment to find out last year.

Researchers asked a group of students, none of whom had specialized training in the life sciences, to use AI tools, such as OpenAI’s ChatGPT-4, to develop a plan for how to start a pandemic. In just an hour, participants learned how to procure and synthesize deadly pathogens like smallpox in ways that evade existing biosecurity systems.

AI cannot yet manufacture a national security crisis. As Jason Matheny at Rand reiterates, while biological know-how is becoming more widely accessible through AI, it’s not currently at a level that would substitute for a lack of biological research training. But as biotechnology becomes more advanced -- think of Google DeepMind’s AlphaFold, which uses AI to predict how molecular structures will interact -- policymakers are understandably worried that it’ll be increasingly easy to create a bioweapon. So they’re starting to take action to regulate the emerging AI industry.

Their efforts are well-intentioned. But it’s critical that policymakers avoid focusing too narrowly on catastrophic risk and inadvertently hamstring the creation of positive AI tools that we need to tackle future crises. We should aim to strike a balance.

AI tools have enormous positive potential. For instance, AI technologies like AlphaFold and RFdiffusion have already made large strides in designing novel proteins that could be used for medical purposes. The same sort of technologies can also be used for evil, of course.

In a study published last year in the journal Nature Machine Intelligence, researchers demonstrated how the AI MegaSyn could generate 40,000 potential bioweapon chemicals in just six hours. Researchers asked the AI to identify molecules that are similar to VX, a highly lethal nerve agent. In some cases, MegaSyn devised compounds that were even more toxic.

It’s possible that bad actors could one day use such tools to engineer new pathogens far more contagious and deadly than any occurring in nature. Once a potential bioweapon is identified -- maybe with the help of AI -- a malicious actor could order a custom strand of DNA from a commercial provider, who would manufacture synthetic DNA in a lab and return it via mail. As experts at the Center for Security and Emerging Technology at Georgetown University have posited, perhaps that strand of genetic material “codes for a toxin or a gene that makes a pathogen more dangerous.”

It’s even possible that a terrorist could evade detection by ordering small pieces of a dangerous genetic sequence, and then assemble a bioweapon from the component parts. Scientists frequently order synthesized DNA for projects like cancer and infectious disease research. But not all synthetic DNA providers screen orders or verify their customers.

Closing such loopholes will help, but we can’t regulate away all of the risk. It’d be wiser to beef up our defenses by investing in AI-enabled early-detection systems.

Today, the Centers for Disease Control and Prevention’s Traveler-based Genomic Surveillance program partners with airports nationwide to gather and analyze wastewater and nasal swab samples to catch pathogens as they enter our borders. Other systems are in place for tracking particular pathogens within cities and communities. But existing detection systems are likely not equipped for novel agents designed with AI’s help.

The U.S. intelligence community is already investing in AI-powered capabilities to defend against next-generation threats. IARPA’s FELIX program, in partnership with private biotech firms, yielded first-in-class AI that can distinguish genetically engineered threats from naturally occurring ones, and identify what has been changed and how. A similar technology could be used for DNA synthesis screening -- with AI, we could employ algorithms that predict how novel combinations of genetic sequences might function.

We have barely begun to tap the potential of AI to detect and protect against biological threats. In the case of a novel infectious disease, these systems have the power to determine how and when a pathogen has mutated. That can enable the speedy development of vaccines and treatments specifically tailored to new variants. AI can also help predict how a pathogen is likely to spread.

For these technologies to play their vital role, leaders in Washington and around the world must take steps to build up our AI defenses. The best way to counter “bad AI” isn’t “no AI” -- it’s “good AI.”

Using AI to its full potential to protect against deadly pandemics and biological warfare demands an aggressive policy effort. It’s time for policymakers to adapt. With adequate foresight and resources, we can get ahead of this new class of threats.

Andrew Makridis is the former Chief Operating Officer of the CIA, the number-three position at the agency. Prior to his retirement from the CIA in 2022, he spent nearly four decades working in national security.

]]>
Eugene Mymrin
<![CDATA[US Army to launch AI pilot project for acquisition workforce]]>https://www.c4isrnet.com/artificial-intelligence/2024/06/19/us-army-to-launch-ai-pilot-project-for-acquisition-workforce/https://www.c4isrnet.com/artificial-intelligence/2024/06/19/us-army-to-launch-ai-pilot-project-for-acquisition-workforce/Wed, 19 Jun 2024 15:16:07 +0000The U.S. Army wants to better understand how its acquisition and contracting workforce could use generative AI to improve efficiency and is launching a pilot next month to explore those questions.

Jennifer Swanson, deputy assistant secretary of the Army for data, engineering and software, said the effort will shed light on how the service’s acquisition and logistics enterprise could take advantage of generative AI tools to make processes like contract writing and data analysis more efficient.

“The pilot’s not just about increasing our productivity, which will be great, but also — what are the other things that we can do and what are the other industry tools that are out there that we might be able to leverage or add on,” Swanson said June 18 at Defense One’s Tech Summit in Arlington, Va.

The Army is the latest Defense Department agency to announce efforts to experiment with generative AI. The Air Force and Space Force last week unveiled their own experimental tool — the Non-classified Internet Protocol Generative Pre-Training Transformer, or NIPRGPT. And in 2023, the Navy rolled out a conversational AI program called Amelia that sailors could use to troubleshoot problems or provide tech support.

Swanson said she’s optimistic about the potential for generative AI, especially for laborious specialties like contract writing and policy where automation could release some strain on the Army’s workforce.

“In the area of contracts and in the area of policy, I think there’s a huge return on investment for us,” she said. “Might [AI] one day be able to write a contract? We hope so. But we’ve got to pilot and test it and make sure everybody’s comfortable with it first.”

The large language model the service will use for the effort is different from systems like ChatGPT, Swanson said, because it is trained on Army data. It will also provide citations that indicate where the data it provides originated, a feature that will help the service fact-check that information.
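Returning citations alongside generated text is commonly done by retrieving supporting passages first and attaching their source identifiers. A minimal, model-free sketch of that retrieve-and-cite step follows; the corpus, document IDs, and word-overlap scoring are illustrative assumptions, not the Army's actual system:

```python
# Minimal retrieval-with-citations sketch: rank corpus passages by naive
# word overlap with a query and return their source IDs, so a generated
# answer can cite where its supporting text came from. Illustrative only.

def retrieve(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the IDs of the top_k documents with any word overlap."""
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), doc_id)
        for doc_id, text in corpus.items()
    ]
    scored.sort(reverse=True)
    return [doc_id for score, doc_id in scored[:top_k] if score > 0]

corpus = {  # hypothetical document IDs and snippets
    "FAR-15.2": "solicitation and receipt of proposals and information",
    "DFARS-204": "administrative and information matters for contracts",
    "AR-70-1": "army acquisition policy and program management",
}
citations = retrieve("contract proposals information", corpus)
print("cite:", citations)
```

Production systems replace the word-overlap score with dense embeddings and feed the retrieved passages into the language model's prompt, but the citation mechanism, carrying source IDs through retrieval, is the same.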

The pilot is part of a broader effort within the Army to identify both the pitfalls and the opportunities that come with widely adopting AI tools. In March, the service announced a 100-day plan focused on reducing the risk associated with integrating AI algorithms.

As part of that exercise, Swanson said, the Army reviewed its spending on AI research and found that testing and security are the two biggest gaps toward fielding these tools more broadly. The service also identified 32 risks and 66 mitigations it can implement to reduce their impact. Further, it created a generative AI policy that it will apply to the pilot in order to set parameters for the effort. That policy includes a requirement that there be a “human in the loop.”

The generative AI pilot will lead into the next phase of the effort — a 16-month focus on how to use the technology operationally. Findings from that work will inform the Army’s budget for fiscal 2026.

“So the 100 day plan is setting the conditions — where are we at — and then the 500 day plan is really about operationalizing it,” she said.

Florent Groberg, vice president of strategy and optimization at private investment firm AE Industrial Partners, said that as the Army moves through these review processes and experiments with AI, it should be transparent with industry about what it wants and then move quickly to leverage the tools companies are developing.

“To me, it’s really understanding the framework of what you want to accomplish,” he said during the same panel with Swanson. “Put some boundaries out there and then go do it.”

]]>
da-kuk
<![CDATA[France preps Europe’s fastest classified supercomputer for defense AI]]>https://www.c4isrnet.com/global/europe/2024/06/18/france-preps-europes-fastest-classified-supercomputer-for-defense-ai/https://www.c4isrnet.com/global/europe/2024/06/18/france-preps-europes-fastest-classified-supercomputer-for-defense-ai/Tue, 18 Jun 2024 11:44:31 +0000PARIS — France plans to build Europe’s most powerful classified supercomputer to take the lead in artificial intelligence for defense purposes, Armed Forces Minister Sébastien Lecornu announced at the Eurosatory defense show here.

The Armed Forces Ministry will make computing time available to the Higher Education Ministry and other government departments, and will also allow French defense firms to run AI solutions in a secure and protected environment, according to Lecornu. Lecornu said he’s not providing details on the capacity of the supercomputer for AI applications, which is planned for 2025.

France in March announced plans to reallocate €2 billion of funding from the 2024-2030 defense budget to artificial intelligence. Lecornu said AI will be a differentiating factor between countries, creating a divide between those left behind and those who manage to keep pace.

“The challenge for the French team is obviously to be among those that stand out in this field,” Lecornu said. “When it comes to military AI, we’ll be the European power that’s best prepared, that’s going to devote the most resources to it.”

France, with its army that deploys operationally, will be capable of developing “not theoretical AI but combat AI, and that fundamentally is going to change the game,” he said.


Lecornu said the technology is already being used in the French Army “everywhere,” with the Caesar howitzer using AI to aid with target acquisition by drone, a development resulting from the needs of Ukrainian gunners in their war against Russia, while the Air Force uses artificial intelligence in pilot training.

In terms of capacity sharing with civilian applications, the ministry will use the same model as that used by the French Atomic Energy Commission in the 1960s, Lecornu said. The Armed Forces Ministry in March announced the creation of a ministerial agency for artificial intelligence in defense, known by its French acronym Amiad.

“It’s such a revolution, artificial intelligence, in many respects as profound as the atom in the aftermath of World War II,” Lecornu said. “The particularity of our business is that it’s unthinkable to run AI on potentially classified material in a network that isn’t completely secure.”

France isn’t the only country to have identified AI as a military priority. U.S. Department of Defense spending on AI nearly tripled in the one-year period ending in August 2023 to $557 million, according to analysis by the Brookings Institution published in March.

The fastest supercomputer in Europe is Finland’s LUMI, with a performance rating of 380 Petaflop/s, or 380 quadrillion floating-point operations per second, which puts it in fifth position globally, according to the Top500 ranking as of June. Two of the world’s most powerful supercomputers use architecture by Eviden, a unit of French company Atos: Leonardo in Italy and MareNostrum in Spain.

One challenge France faces is retaining skilled engineers, with competition from tech companies such as Google and Apple that offer better salaries, Lecornu said. Retaining talent is a question of sovereignty, and it’s an issue the AI agency will tackle, according to the minister, who didn’t provide details on how that might be achieved.

]]>
<![CDATA[MBDA unveils AI-based Ground Warden tool for finding hidden targets]]>https://www.c4isrnet.com/artificial-intelligence/2024/06/17/mbda-unveils-ai-based-ground-warden-tool-for-finding-hidden-targets/https://www.c4isrnet.com/artificial-intelligence/2024/06/17/mbda-unveils-ai-based-ground-warden-tool-for-finding-hidden-targets/Mon, 17 Jun 2024 18:47:46 +0000PARIS — The missile manufacturer MBDA has unveiled an artificial intelligence-based capability to allow military forces to see hidden targets in challenging combat environments.

The multinational firm presented its Ground Warden beyond line-of-sight technology here during the Eurosatory defense and security conference.

In one scenario presented by MBDA, a firing post is located above a village, concealed within hilly terrain. As an enemy target is detected from the shooter’s point of view, that data is relayed to a control system.

When the first missile is launched, it processes images in real time during its flight trajectory and before hitting the identified threat. That data is then fed to the new man-portable module, dubbed Ground Warden, and used to inform any subsequent missiles of concealed threats.

The Ground Warden, shown, is based on MBDA’s combat-proven Akeron anti-tank guided missile system. (MBDA)

The Ground Warden is based on MBDA’s combat-proven Akeron anti-tank guided missile system and is the company’s answer to a “growing need for reactivity” in increasingly fast-paced combat.

A second simulation presented to reporters stemmed from an unmanned aerial vehicle’s perspective around an area heavily populated by trees, which company representatives said make it difficult for missiles to reach their targets.

This scenario was meant to show the Ground Warden can work alongside drones. In this context, the UAV can provide an overview and recording of the operational environment that it then relays to the AI module and command system. The Ground Warden can further provide insights to the gunner on where the target — in this case a tank — can be intercepted and when to fire.

MBDA makes a similarly named platform, the Sky Warden, which is a counter-drone system designed to control a large range of sensors and effectors.

]]>
<![CDATA[How to harness AI and Zero Trust segmentation to boost cyber defenses]]>https://www.c4isrnet.com/opinions/2024/06/11/how-to-harness-ai-and-zero-trust-segmentation-to-boost-cyber-defenses/https://www.c4isrnet.com/opinions/2024/06/11/how-to-harness-ai-and-zero-trust-segmentation-to-boost-cyber-defenses/Tue, 11 Jun 2024 19:32:22 +0000Modern cyber threats have become increasingly sophisticated, posing significant risks to federal agencies and critical infrastructure organizations alike. Critical infrastructure organizations face numerous challenges, including outdated systems and insufficiently patched software, which make them attractive targets for cyber attackers.

These weaknesses often arise due to the complexity of maintaining and updating legacy systems, which often lack basic security controls, as well as the challenges of ensuring comprehensive security measures across expansive and interconnected IT enterprises. As artificial intelligence continues to advance, its use in the federal space is becoming more prevalent, leading agencies to increase their use of the technology as part of their cyber defenses.

However, recent research reveals that although 80 percent of cybersecurity decision-makers believe accelerating AI adoption is vital for their organization’s resilience against emerging threats, only 31 percent report that their organization currently utilizes AI for cybersecurity. Notably, 54 percent of leaders who have implemented AI say that it has helped to accelerate incident response times, highlighting AI’s potential as a powerful defensive tool.

AI serves as both a formidable defense mechanism for protecting sensitive data and a potent tool for cyber attackers. AI’s ability to continuously learn and improve from each interaction makes it an invaluable asset in defending against evolving threats. However, malicious actors also exploit AI to develop sophisticated cyberattacks, targeting vulnerabilities and bypassing traditional defenses with alarming precision.

Maximizing AI’s defensive capabilities with ZTS

To combat evolving threats and address vulnerabilities, AI and Zero Trust Segmentation offer a path forward. AI rapidly automates tasks, detects threats, and provides predictive analytics, analyzing vast amounts of data in real time to identify and mitigate anomalies quickly. ZTS complements AI by ensuring continuous verification of every access request within an enterprise, segmenting applications with strict access controls and monitoring, thus limiting lateral movement by attackers and containing breaches.

AI’s defensive capabilities can be maximized when integrated with ZTS. Since ZTS involves continuously verifying and monitoring all user and device activities within an enterprise, no entity is trusted by default, even if it is already inside the enterprise. The integration of AI and ZTS means that even if an attacker manages to infiltrate the enterprise, their ability to move laterally and escalate privileges is severely limited.

While ZTS alone provides robust defenses by restricting access and enforcing strict verification protocols, the addition of AI enhances these capabilities by automating threat detection and response, identifying potential breaches in real time, and adapting to new attack vectors dynamically. Auto-labeling, for example, enhances AI’s effectiveness by streamlining data classification, reducing manual intervention, and allowing faster, more accurate anomaly detection. This leads to improved operational efficiency and heightened security as AI systems better recognize patterns, predict issues, and implement safeguards in real-time.
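The combination described above can be sketched in miniature. The snippet below is an illustrative toy, not any vendor's product API: the segment policy, request fields, scoring rules, and threshold are all invented for the example. It shows the core idea of denying a request unless it passes both a default-deny segmentation check and an AI-style anomaly score.

```python
# Hypothetical sketch: Zero Trust Segmentation policy check combined with
# an anomaly score. All names, fields, and thresholds are invented.

# Segment policy: which source segments may reach which destinations.
# Anything not listed is denied by default -- no implicit trust.
SEGMENT_POLICY = {
    ("web-tier", "app-tier"): True,
    ("app-tier", "db-tier"): True,
}

def anomaly_score(request):
    """Stand-in for a trained model: 0.0 (normal) to 1.0 (anomalous)."""
    score = 0.0
    if request["bytes"] > 10_000_000:   # unusually large transfer
        score += 0.5
    if request["hour"] < 6:             # off-hours access
        score += 0.3
    return min(score, 1.0)

def authorize(request, threshold=0.7):
    """Allow only if the segment pair is permitted AND behavior looks normal."""
    allowed = SEGMENT_POLICY.get((request["src"], request["dst"]), False)
    return allowed and anomaly_score(request) < threshold

# A web-tier host trying to reach the database directly is denied even
# though its behavior looks benign -- segmentation limits lateral movement.
req = {"src": "web-tier", "dst": "db-tier", "bytes": 1024, "hour": 14}
print(authorize(req))  # False
```

The design point is that the two checks fail independently: segmentation contains an attacker who behaves normally, while the anomaly score catches abuse of a legitimately permitted path.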

Together, AI and ZTS form a proactive, comprehensive defense strategy for critical infrastructure organizations, enhancing resilience against sophisticated cyber adversaries and helping organizations to stay one step ahead of attackers.

Pushing for responsible AI in critical infrastructure

Deploying AI across critical infrastructure organizations demands a strong commitment to ethics, focusing on transparency, fairness, and accountability. Transparency ensures AI systems are understandable and trustworthy. Fairness aims to prevent biases so that no one group is disadvantaged. Accountability requires organizations to take responsibility for AI outcomes, with protocols to address errors and mechanisms for stakeholders to raise concerns.

To deploy AI responsibly across critical infrastructures, organizations should adhere to several best practices. The Department of Homeland Security has ramped up its focus on AI with its new Artificial Intelligence Safety and Security Board, which offers recommendations for safely preventing and preparing for AI-related disruptions to critical services; addressing AI risks and threats; training on, deploying, and using AI; and responsibly leveraging AI while protecting individuals’ privacy, civil rights, and civil liberties. Proactive approaches like these are essential to stay ahead of adversaries exploiting AI for malicious purposes.

To effectively harness AI’s defensive capabilities and protect critical infrastructure, responsibly integrating AI with ZTS is essential. This integration creates a dynamic defense mechanism that is difficult for attackers to bypass. AI’s continuous monitoring and real-time threat analysis enhance the ability to swiftly identify and respond to threats, forming a robust cybersecurity posture.

Combining AI’s real-time data processing and predictive analytics with ZTS’ stringent access controls significantly boosts resilience against evolving cyber threats. This approach addresses current vulnerabilities and anticipates future challenges, ensuring the security of critical infrastructure in a complex threat landscape.

Gary Barlet is federal chief technology officer at Illumio.

]]>
<![CDATA[Air Force, Space Force unveil tool for AI experimentation]]>https://www.c4isrnet.com/artificial-intelligence/2024/06/10/air-force-space-force-unveil-tool-for-ai-experimentation/https://www.c4isrnet.com/artificial-intelligence/2024/06/10/air-force-space-force-unveil-tool-for-ai-experimentation/Mon, 10 Jun 2024 18:41:25 +0000The Air Force and Space Force launched a generative AI tool on Monday, encouraging airmen and guardians to experiment with using the technology for tasks like summarizing reports, IT assistance and coding.

The services want to use the tool, which they’re calling the Non-classified Internet Protocol Generative Pre-training Transformer, or NIPRGPT, to better understand how AI could improve access to information and to gauge whether there’s demand within the force for the capability.

Alexis Bonnell, chief information officer and director of digital capabilities at the Air Force Research Laboratory, said the overarching goal is to make data more accessible and customizable within the services and to determine whether generative AI can facilitate that.

“Our goal is always to be able to say what technologies are relevant to our mission now and in the future,” Bonnell told reporters in a June 10 briefing. “NIPRGPT will really provide a place where people can exercise that curiosity in a safe and appropriate manner, where they can understand what parts of their work can potentially be complemented.”

The Defense Department has been exploring how it might use generative AI tools like ChatGPT to make daily tasks like finding files and answering questions more efficient. The Navy in 2023 rolled out a conversational AI program called “Amelia” that sailors could use to troubleshoot problems or provide tech support.

General Dynamics Information Technology rolled out a conversational artificial intelligence known as Amelia, rendered here, as part of the U.S. Navy Enterprise Service Desk endeavor. (Photo provided/GDIT)

Collen Roller, senior computer scientist at AFRL, said in the briefing that his team at the lab has made a concerted effort in recent years to research how the Air Force and Space Force might use the technology for administrative tasks, but also for tactical operations.

“The area’s changing so rapidly and fast, we have to be able to adapt to these new things that are coming out,” he said. “It’s super important from a [research and development] standpoint that we’re able to adapt to whatever’s coming out so that we can evaluate these things for our specific use cases.”

AFRL developed NIPRGPT using publicly available AI models, and Bonnell noted that the service hasn’t committed to a particular approach or vendor as it builds on that baseline. As airmen and guardians begin using the system, AFRL will work with commercial partners to test and integrate their tools and determine whether they have utility for the services.

“We’re hoping that not only will this kick off the curiosity and experimentation that we can see in our users, but it will also, for those providers that have models, it will give us a way to actually test those,” she said. “We fully expect that some models are going to be great at some use cases and not so great at others.”

Along with helping companies experiment with different tools and models, the effort will also help the Air Force and Space Force determine the best approach for buying these capabilities, Bonnell noted. The right strategy, she said, will likely depend on how the services use NIPRGPT and whether there’s sufficient demand.

“I expect the interest to be robust and hopefully to be able to drive a lot of our learning in a very quick way,” Bonnell said. “This tool helps us understand what we want the end state to look like. And so as commercial tools come down and navigate our process or our system or security flow, then we are all the more smart when we buy them.”

]]>
<![CDATA[DARPA sees automated tools helping streamline software certification]]>https://www.c4isrnet.com/cyber/2024/06/05/darpa-sees-automated-tools-helping-streamline-software-certification/https://www.c4isrnet.com/cyber/2024/06/05/darpa-sees-automated-tools-helping-streamline-software-certification/Wed, 05 Jun 2024 19:08:33 +0000The Defense Advanced Research Projects Agency is working to push tools it’s developing to automatically prove that software is secure out to the commercial sector and help companies overcome the cumbersome Pentagon verification process, according to Benjamin Bishop, deputy director of transition in the agency’s Adaptive Capabilities Office.

“One of the things that we hear from the warfighter is we’ll have a technology solution that is available, but getting it through that process to have the authority to operate and be able to get it through the approval process is very laborious,” Bishop said Wednesday at the annual C4ISRNet conference. “I will add, for good reason, because in the past these DOD steps have shown improvement to generate a higher quality solution.”

While humans can mathematically prove that software works as designed, there are now tools that can examine file metadata containing proof that the software is secure and automatically verify its safety, he explained at the virtual event.

DARPA’s program Automated Rapid Certification of Software, or ARCOS, is working on developing that capability, Bishop said. The goal of the program is to automate the evaluation of software assurance evidence to enable certifiers to determine rapidly that system risk is acceptable, according to the agency.
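ARCOS's actual evidence formats and tooling are not described in the article, but the shape of the idea — machine-checkable assurance evidence bound to a specific build — can be sketched. Everything below (the manifest structure, claim names, and required set) is invented for illustration.

```python
# Hypothetical sketch of automated assurance-evidence checking, in the
# spirit of what ARCOS automates. The manifest format and claim names
# are invented; the real program's evidence is far richer.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_evidence(artifact: bytes, evidence: dict) -> bool:
    """Accept only if the evidence is bound to this exact build and every
    required assurance claim has been machine-verified."""
    if evidence["artifact_sha256"] != sha256(artifact):
        return False  # evidence refers to a different build
    required = {"memory-safety", "no-known-cves", "tests-passed"}
    passed = {c["claim"] for c in evidence["claims"] if c["verified"]}
    return required <= passed

binary = b"\x7fELF...example build..."
manifest = {
    "artifact_sha256": sha256(binary),
    "claims": [
        {"claim": "memory-safety", "verified": True},
        {"claim": "no-known-cves", "verified": True},
        {"claim": "tests-passed", "verified": True},
    ],
}
print(verify_evidence(binary, manifest))  # True
```

The point of the sketch is the labor savings the article describes: a certifier runs a check like this in seconds instead of manually reviewing the evidence behind each claim.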

The capability is viable, he added, but “what I’m really interested in is not just that it’s technically viable, but can we do it in a way that can be adopted across the DOD ecosystem?”

In order to do so, the DOD will need to provide incentive for commercial partners to use it, Bishop noted.

“We have seen big tech or large tech companies are embracing some of these tools and they’re moving out with it because they see the value in these methods,” according to Bishop.

Another element in developing such a capability is ensuring its user friendliness. “Are there ways that we can get these tools, not only to be able to be acceptable by the certifying organizations but can they be used by people that don’t have PhDs,” Bishop said, “to be able to navigate these tools.”

]]>
<![CDATA[AI regulators fear getting drowned out by hype of wars]]>https://www.c4isrnet.com/global/europe/2024/06/03/ai-regulators-fear-getting-drowned-out-by-hype-of-wars/https://www.c4isrnet.com/global/europe/2024/06/03/ai-regulators-fear-getting-drowned-out-by-hype-of-wars/Mon, 03 Jun 2024 09:33:18 +0000BERLIN — A fighter jet hurtles toward an adversary head-on. Mere moments before a collision, it swerves — but not before dealing a lethal blow to its opponent.

This risky maneuver would be reckless even for the most skilled pilot. But to artificial intelligence, such a simulation scenario showcases one of the most effective dogfighting techniques, scoring kill rates of nearly 100% against human pilots.

In a warfighting revolution turbocharged by the conflict in Ukraine, autonomous decision-making is quickly reshaping modern combat, experts told Defense News in a series of interviews.

Weapons that can decide for themselves whom or what to target — and even when to kill — are entering military arsenals. They have experts worried that an uncontrolled arms race is emerging, and that warfare could become so fast-paced that humans cannot keep up.

It is the speed, in particular, that may prove a “slippery slope,” said Natasha Bajema, a senior research associate at the James Martin Center for Nonproliferation Studies, a nongovernmental organization. As the speed of conflict increases with greater autonomy on the battlefield, the incentives to delegate even more functions to the machines could become ever stronger.

“Do we really think that in the middle of a battle between China and the U.S., someone is going to say: ‘Hold on, we can’t let the machine do that’?” Bajema asked, referring to the allure of what she described as war moving at machine speed.

“It’s the most competitive race for advantage that we’ve seen since the race for nuclear weapons,” she added.

The appetite for more autonomy in weapons, fanned by combat in Ukraine and Gaza, has drowned out long-standing calls for limits on AI in military applications. But they still exist.

Ambassador Alexander Kmentt, the director of the Disarmament, Arms Control and Non-Proliferation Department of the Austrian Foreign Ministry, called the scenario of trigger-pulling, AI-enabled robots a true “Oppenheimer moment,” a reference to the birth of the atomic bomb in the 1940s.

Austria has been leading an international push to bring governments from around the world to the table to draft the rules of war for a new era.

In late April, the country’s government hosted the first global conference on autonomous weapon systems in Vienna’s grand Hofburg Palace. Kmentt said it exceeded his expectations.

“At times during the preparations, I was concerned about attendance, that the room would be half empty,” the ambassador recalled in an interview with Defense News. Instead, there were more than 1,000 delegates from 144 countries present in Vienna.

“Even those states that used to see the topic as some sort of sci-fi now perceive it as being incredibly timely,” he said.

Much of the Global South — a term sometimes used to group countries that reject the hierarchy of world politics — now seems interested in restricting the technology, according to Kmentt, though little could be achieved without buy-in from the major global powers.

Unintended consequences

For all their military appeal, AI-enabled weapons come with the flaws of a technology still in its infancy. Machine vision, in particular, is still too prone to errors, said Zachary Kallenborn, lead researcher at Looking Glass USA, a consultancy that deals with questions surrounding advanced weapons systems.

“A single pixel is enough to confuse a bomber with a dog, a civilian with a combatant,” he said.

In the coming years, experts expect to see an accumulating number of autonomous weapons on the battlefield with increasingly sophisticated abilities. Even without technological mishaps, this might lead to a heightened risk of misunderstanding.

The disposable nature of drones, for example, could lead to more aggressive or risky behaviors, said Bajema. Intercepting an autonomous system would likely elicit a different reaction among adversaries than downing a crewed plane, she said, but where precisely the line falls is hard to determine.

The race toward AI is governed by what she called the “terminator problem” — if one state has it, all believe they need it to feel secure — an environment that makes regulating the technology so difficult.

Moreover, today’s geopolitical climate is not very amenable to multilateral arms control, she added.

Given those odds, Kmentt said he is merely looking for a compromise.

“It’s clear that there will be no universal consensus on the topic,” he noted. “There’s hardly any issue where this exists, and certain countries seem to have no interest in developing international law. So we have to accept that, and instead work together with those countries that are interested in developing these rules.”

But he admitted to being somewhat pessimistic about the chances of success.

“These weapons will majorly define the future of armed conflicts, and as a result the voices of militaries worldwide who want these weapons will become louder and louder,” Kmentt predicted.

For now, the target date of 2026 looms large for the community of AI nonproliferation advocates; it refers to the United Nations’ mandate of setting “clear prohibitions and restrictions on autonomous weapon systems,” in the words of U.N. Secretary-General António Guterres.

“So far, there is insufficient political will to make something happen due to the difficult geopolitical situation,” Kmentt said.

The 2026 target is not an arbitrary date, he added. “If we haven’t succeeded with anything by then, the window for preventive action has closed.”

]]>
<![CDATA[Leveraging AI, digital twins, AR/VR for military aircraft maintenance]]>https://www.c4isrnet.com/opinion/2024/05/31/leveraging-ai-digital-twins-arvr-for-military-aircraft-maintenance/https://www.c4isrnet.com/opinion/2024/05/31/leveraging-ai-digital-twins-arvr-for-military-aircraft-maintenance/Fri, 31 May 2024 11:56:32 +0000The integration of advanced technologies including artificial intelligence, digital twins, and augmented reality/virtual reality is drastically changing traditional approaches to aircraft maintenance and repair. Military and aerospace manufacturers are increasingly turning to innovative solutions to optimize maintenance procedures, enhance safety protocols, and reduce operational costs.

The aerospace, defense and other industrial sectors have a mission need to modernize their infrastructure to improve operational efficacy by using digital twin technologies. The existing processes of operation, training and maintenance rely heavily on two-dimensional, paper-based manuals, with minimal digital modeling available.


The lack of existing digital models severely hampers operational efficiency, mission planning and aircraft readiness. Digital twins now revolutionize the way we design, build, operate, and repair physical objects and systems. The digital transformation of the industrial processes requires it to incorporate digital twin technologies that help provide the best possible tools for decades to come.

Aerospace manufacturers and military aircraft service and repair crews still face a bevy of challenges, including a lack of extensive 3D CAD models. For legacy aircraft, very limited 3D models are available, and most of the models, requirements, and specifications are in 2D form. Generating accurate 3D models using dedicated scanners, and making digital modifications based on the 2D data using traditional methods, is very expensive and time consuming. Additionally, most 3D scanning software keeps models in proprietary formats, significantly limiting their usefulness due to restricted interoperability.

Additional challenges include the ability to incorporate the generated 3D models into existing SysML workflows and/or to create flexible workflows that are not tied to proprietary models and systems. To simulate the standalone behavior of each model and sub-system, as well as the interaction between different sub-systems, manufacturers need to incorporate the 3D models and their physical behavior into a system simulation model using SysML. This requires creating a framework for ingesting all the individual and combined system requirements into a SysML workflow, parameterizing the model configurations, and simulating and monitoring the behavior of individual components as well as their interactions.

AI-powered predictive maintenance

Maintenance for military and defense aircraft has traditionally relied on scheduled checks and reactive repairs based on reported issues. However, AI-driven predictive maintenance is now transforming this approach by leveraging data analytics and machine learning algorithms to predict potential failures before they occur. Airlines are harnessing AI to monitor vast amounts of data collected from sensors embedded within aircraft components, engines, and systems. This real-time data is analyzed to detect subtle patterns indicative of impending malfunctions or performance degradation.

AI algorithms can detect anomalies in data patterns, such as engine temperature fluctuations or irregular vibration signatures, which might indicate underlying issues. By continuously monitoring and analyzing this data, AI can accurately forecast when specific components might require maintenance or replacement, enabling airlines to schedule repairs proactively during routine maintenance intervals. This shift from reactive to predictive maintenance not only enhances safety by reducing the risk of unexpected failures but also optimizes operational efficiency and minimizes downtime.
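The anomaly detection described above — spotting temperature fluctuations or irregular vibration signatures against normal operation — can be illustrated with a minimal rolling z-score detector. The window size, threshold, and sample data are all invented for the example; fielded systems use far more sophisticated models trained on fleet-wide data.

```python
# Minimal sketch of sensor anomaly detection: flag readings that deviate
# sharply from a rolling baseline. Thresholds and data are illustrative,
# not drawn from any real aircraft.
from statistics import mean, stdev

def detect_anomalies(readings, window=10, z_threshold=3.0):
    """Return indices whose z-score against the preceding window of
    readings exceeds the threshold."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# A steady vibration signature with one sudden spike at index 15.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
             1.02, 0.98, 1.01, 0.99, 1.0, 4.7, 1.0, 1.01]
print(detect_anomalies(vibration))  # [15]
```

In a predictive maintenance pipeline, flags like these would feed a scheduling system rather than trigger an immediate repair, which is how the shift from reactive to proactive maintenance plays out in practice.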

The role of digital twins

Digital twins are virtual representations of physical assets, such as aircraft, created using real-time data collected from sensors, historical maintenance records, and operational inputs. This technology enables aerospace manufacturers and airlines to simulate and visualize the performance of aircraft components and systems in a virtual environment. By integrating AI algorithms into digital twin models, operators can gain valuable insights into the health and operational status of individual aircraft and their components.

For aircraft maintenance, digital twins offer a transformative approach by providing a comprehensive understanding of an aircraft’s condition and behavior. Maintenance crews can utilize digital twins to simulate different operational scenarios and assess the potential impact on aircraft performance and maintenance requirements. This allows for more accurate planning of maintenance activities, optimized spare parts inventory management, and enhanced decision-making based on predictive analytics.

Digital twins also facilitate remote monitoring and diagnostics, enabling maintenance teams to identify issues without physical inspection. For instance, using real-time data from digital twins, AI algorithms can recommend specific maintenance actions based on the current condition of critical components, thereby reducing the need for manual inspections and improving overall maintenance efficiency.
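A digital twin in this sense is a model that mirrors live telemetry and carries enough physics to recommend action. The toy below makes the idea concrete; the wear model, temperature limit, and figures are all invented, and real twins track far more state.

```python
# Toy digital-twin sketch for remote diagnostics: a virtual engine
# component mirrors telemetry and recommends maintenance when its
# modeled wear crosses a limit. All figures are invented.
class EngineTwin:
    def __init__(self, temp_limit_c=650.0, wear_limit=1.0):
        self.temp_limit_c = temp_limit_c
        self.wear_limit = wear_limit
        self.wear = 0.0           # accumulated modeled wear (arbitrary units)
        self.latest_temp_c = None

    def ingest(self, temp_c, hours):
        """Update the twin from a telemetry sample: running hot accrues
        wear faster than normal operation."""
        self.latest_temp_c = temp_c
        rate = 0.01 if temp_c <= self.temp_limit_c else 0.05
        self.wear += rate * hours

    def recommendation(self):
        return ("schedule inspection" if self.wear >= self.wear_limit
                else "no action")

twin = EngineTwin()
twin.ingest(temp_c=600.0, hours=50)   # normal ops: wear reaches 0.5
twin.ingest(temp_c=700.0, hours=12)   # hot run: wear climbs past the limit
print(twin.recommendation())  # schedule inspection
```

Because the twin runs off telemetry alone, the recommendation above is produced without anyone opening the engine — the remote-diagnostics benefit the paragraph describes.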

Incorporating 3D technology

Leading digital twin solution providers today are reshaping how industrial sectors utilize AI and spatial computing for digital twins, automation and robotics applications. These providers leverage the advancements in immersive XR interfaces, AI, and cloud technologies to provide an open, modular, high-precision, and scalable AI-powered cloud platform for fast, accurate and cost-effective 3D digital twin creation that boosts efficiency, automation and productivity in manufacturing, operations, training and sustainment.

With the proliferation of high-quality sensors built into commercial off-the-shelf (COTS) devices, namely high-resolution color cameras, depth sensors (such as lidar), motion sensors, and eye-trackers, providers have access to very high-quality spatial data to generate accurate 3D spatial maps in near real time. Companies are primarily limited by the computation and power (battery) of these mobile devices. Today’s platforms streamline 3D scanning and digital twin workflows while using cloud computing to enable affordable consumer hardware to exceed its standard capability.

These solutions overcome mobile device limitations in battery life and computation by processing data in the cloud (on premises/air gapped or remotely such as AWS GovCloud). This enables rapid generation of detailed 3D models with millimeter accuracy from sensors in mobile phones, tablets, and XR headsets with full fidelity of the model and no noticeable lag.

By moving the most intensive processing tasks to the cloud, AI-driven software produces high quality point clouds from inexpensive COTS devices. This significantly accelerates digital twin creation compared to traditional methods. Today’s newer commercial solutions enable fast and accurate 3D point cloud generation using an XR headset as the capture device, while processing all the data on a server PC.

AR/VR applications in maintenance

Augmented Reality (AR) and Virtual Reality (VR) technologies are reshaping aircraft maintenance procedures and technician training programs. AR overlays digital information onto the technician’s field of view, providing real-time guidance and instructions during maintenance tasks. For example, AR can superimpose schematics, checklists, or diagnostic data onto physical aircraft components, allowing technicians to perform complex repairs more accurately and efficiently.

VR, on the other hand, is revolutionizing technician training by offering immersive and interactive simulations of maintenance procedures in a virtual environment. Trainees can practice complex tasks, such as engine disassembly or wiring repairs, without the need for physical aircraft access. VR simulations can replicate different aircraft models and scenarios, providing hands-on experience in a safe and controlled setting.

The integration of AI, 3D spatial digital twins, and AR/VR technologies in military aircraft maintenance and repair functions offers a multitude of benefits for airlines and aerospace manufacturers. Enhanced predictive maintenance capabilities reduce operational disruptions, extend aircraft lifespans, and optimize maintenance costs. Digital twins provide a holistic view of aircraft health, enabling proactive decision-making and streamlined maintenance processes. AR/VR technologies improve technician efficiency and proficiency, ultimately enhancing overall safety and reliability. With these technologies at the forefront, aerospace manufacturers and airlines can greatly improve the process of aircraft maintenance and repair.

Dijam Panigrahi is co-founder and COO of GridRaster Inc., a provider of cloud-based AR/VR platforms.

]]>
<![CDATA[Palantir wins contract to expand access to Project Maven AI tools]]>https://www.c4isrnet.com/artificial-intelligence/2024/05/30/palantir-wins-contract-to-expand-access-to-project-maven-ai-tools/https://www.c4isrnet.com/artificial-intelligence/2024/05/30/palantir-wins-contract-to-expand-access-to-project-maven-ai-tools/Thu, 30 May 2024 16:36:04 +0000The Army awarded Palantir Technologies a $480 million contract to expand a data analysis and decision making tool to more military users across the globe.

The Maven Smart System is part of Project Maven, the Pentagon’s marquee artificial intelligence program, which ingests and processes data from multiple sources, like satellite imagery and geolocation data, and uses it to automatically detect potential targets.

Palantir, a Denver-based software and data analytics company, has been developing and experimenting with the prototype with a limited number of operators. The five-year contract, announced May 29, will allow the Defense Department to expand its use to thousands of users at five combatant commands: U.S. Central Command, European Command, Indo-Pacific Command, Northern Command and Transportation Command. The system will also be available to members of the Joint Staff.

“Users are going to span everyone from intel analysts and operators in some of the remote island chains across the world to leadership at the Pentagon,” Shannon Clark, Palantir’s head of defense growth, told reporters May 30. “This is taking what has been built in prototype and experimentation and bringing this to production.”

Clark described Maven as a contributor to the Pentagon’s Combined Joint All Domain Command and Control concept, which is driving efforts to integrate and share data across the many environments in which the military operates. The system has been involved in DOD exercises, including the Chief Digital and Artificial Intelligence Office’s Global Information Dominance Experiment series.

The CDAO is using these exercises, in part, to validate CJADC2 capabilities. The Defense Department announced in February that it had delivered a minimum viable capability for CJADC2, though it declined to provide details on which specific capabilities were included. Clark would not confirm whether the Maven Smart System was part of that initial delivery.

Maven’s applications range from battlefield awareness and visibility to contested logistics and joint fires, she said.

For example, the U.S. military has struggled in the past to visualize its location on the battlefield because that navigation data comes from a number of different sources. According to Andrew Locke, Palantir’s enterprise lead, Maven fuses that data and puts it on a map that leaders can use to provide direction to forces.

“Where this becomes powerful is when we start layering on additional sources of information to that, and so besides just seeing like where that formation is, we’re able to do really interesting things through data integration, and through joining with different datasets,” he said.

As part of the contract, Clark noted, Palantir will integrate AI capabilities from other firms to build on the baseline they’ve established.

Clark declined to confirm what DoD assets the Maven Smart System is integrated with or which systems it may connect to in the future, but noted that the firm’s intent is to incorporate any new data system or AI tool the government buys and wants to include in Maven.

“Should, tomorrow, a new sensor come online, should, tomorrow, a new AI capability come online, we want to be able to integrate with that,” she said.

]]>
PATRICK T. FALLON
<![CDATA[All that’s left: A self-defeating semiconductor export tactic for China]]>https://www.c4isrnet.com/opinion/2024/05/28/all-thats-left-a-self-defeating-semiconductor-export-tactic-for-china/https://www.c4isrnet.com/opinion/2024/05/28/all-thats-left-a-self-defeating-semiconductor-export-tactic-for-china/Tue, 28 May 2024 17:53:45 +0000The Biden administration has two tactics to slow — or ideally halt — China’s indigenous development of its advanced semiconductor sector. The core tactic is extensive, unprecedented, unilateral export controls on American advanced semiconductor technology to China.

Supporting this core approach is the ad hoc tactic of periodically badgering key semiconductor countries to follow our export controls so that their semiconductor businesses do not simply take over the significant market share in China that American semiconductor companies are required to abandon.

In January of this year, I predicted this supporting tactic would fail. In April, it did.

The Netherlands and Japan both refused senior U.S. officials’ requests for tighter export controls because these two key semiconductor supply chain countries wanted additional time to evaluate their existing export controls and to see who would win the upcoming U.S. election. Germany’s response was neutral, neither signaling support nor rejecting the U.S. officials’ requests.

Since then, crickets.

That leaves only the core tactic, which on its own is self-defeating. The competing semiconductor businesses of these three key countries, along with South Korea and Taiwan, will simply vacuum up the abandoned American semiconductor market share in China. And in those embarrassing parts of the advanced semiconductor supply chain where American companies are entirely absent, sales to China will simply continue.

China is the “pacing challenge” for the U.S. As previously explained, semiconductors are not simply another important technology or even a first-among-equals technology; semiconductors are alone in a class of the first order because they undergird all other advanced technologies.

Dishearteningly, the Biden administration doesn’t even have a strategy for this apex technology. Headlines, such as “Blacklisted China chipmaker SMIC becomes the world’s second-largest pure-play foundry by revenue — outsells GlobalFoundries and others,” confirm the real-world consequences of this strategy void.

As if this were not disastrous enough, consider the weapons themselves. Certain advanced Chinese armaments we currently face, such as hypersonic missiles, or might expect to face in the future, such as autonomous killer robots, contain many physical components that might be successfully caught by our own or other U.S. allies’ export controls precisely because they are physical things. When it comes to artificial intelligence, however, once you have the high-end silicon, you need only the software algorithms and large data sets.

While such advanced AI technology would technically be likely to be caught by our own or other U.S. allies’ export controls, the real-world, practical difficulty of stopping its transfer is almost insurmountable because no physical goods are involved. An electrical outlet, a laptop and internet access are all that is needed to transport this AI technology to China.

That is particularly the case when the AI software is open source instead of proprietary. For example, Meta Platforms, which owns Facebook, has adopted an open-source AI software business model that allegedly contains national security safeguards. That has not stopped China from using this advanced AI software as the base for a majority of its homegrown AI models. That makes the state of play for China’s advanced semiconductors all the more critical as a last line of defense.

So if China succeeds in buying from other countries the advanced semiconductor technology America has either banned its own companies from selling to China or would if they were domestically resident, expect China’s cyberattacks, biological weapons, robotic submarines, warships, fighter jets, swarming aerial drones and ground combat vehicles to be powered by AI.

And of course, when AI is considered an inflection point in human history as significant as fire, electricity, the internet or nuclear weapons, the implications for our and our allies’ economies relative to China’s cannot be exaggerated. As historian Michael Mastanduno succinctly explained, “military power rests upon a foundation of economic power both qualitatively and quantitatively.”

During the second presidential debate in 1992, Ross Perot famously said the North American Free Trade Agreement would create a “giant sucking sound” of American jobs to Mexico. China is creating a second giant sucking sound by buying from our foreign competitors as much advanced semiconductor technology as possible. This time, the giant sucking sound from China is because there is no export control treaty — only a failed, voluntary export control regime known as the Wassenaar Arrangement, with Russia as its most notorious member.

Thus, it is only a slight exaggeration to state that if we do not get our semiconductor export control strategy right, not much else matters when it comes to the technology arms race with China.

Given deepening cooperation between China and Russia, especially regarding semiconductors, our semiconductor export control strategy cannot be just about China. More than 80% of the semiconductors that Russia has purchased since its full-scale invasion of Ukraine in 2022 have come directly from China. This shockingly high number is yet another example of how critical it is that we get a strategy — and get the right one. This deep semiconductor linkage between China and Russia is another arrow in the quiver of U.S. officials pressing European countries such as the Netherlands and Germany, which are equivocating over tighter semiconductor export controls on China.

I have argued before in short and long form that we need a semiconductor export control strategy — not failed and self-defeating tactics. Our relationships with the Netherlands, Germany, South Korea, Japan and Taiwan are multifaceted and deep, and they contain within them many potential horse trades we could make if we were determined to create a binding semiconductor export control treaty among us that is focused on China, Russia, North Korea and Iran.

The People’s Liberation Army recently surrounded Taiwan’s main island, as well as smaller islands like Matsu and Kinmen, as “punishment” because of Taiwan’s recent inauguration of its democratically elected president, Lai Ching-te. According to Taiwanese military experts, these PLA military drills simulated for the first time a full-scale attack rather than an economic blockade. There is no time to waste.

André Brunel is an international technology attorney with Reiter, Brunel and Dunn. This commentary was adapted from his article published in the Journal of Business & Technology Law. The views and opinions expressed in this commentary are his and do not necessarily reflect the views or positions of the law firm or any clients it represents.

]]>
William_Potter
<![CDATA[How real-time search analytics and AI can help the DOD break down data]]>https://www.c4isrnet.com/opinion/2024/05/17/how-real-time-search-analytics-and-ai-can-help-the-dod-break-down-data/https://www.c4isrnet.com/opinion/2024/05/17/how-real-time-search-analytics-and-ai-can-help-the-dod-break-down-data/Fri, 17 May 2024 12:31:09 +0000From sensors on the ground to drones in the air, data is becoming more ubiquitous for military operations. The quality and accuracy of the insights derived from data depend on the ability to link and examine all the relevant data. However, transferring and storing massive amounts of data in a single place for analysis is not only costly in terms of time and bandwidth, but also prone to errors and delays. Consequently, finding the insights hidden in the vast amounts of information becomes more difficult than ever.

Artificial Intelligence (AI), including machine learning (ML), deep learning and Generative AI (Gen AI), offers new ways to discover and curate data. Marrying search with AI unites the best of both technologies to create a simpler way for everyone to access data. As the military services look to share and consume more data to make real-time decisions, service members need confidence in the insights generated from their data and must trust that their proprietary data will remain secure.

The Department of Defense’s (DoD) Chief Digital and Artificial Intelligence Office (CDAO) continues to move forward with the adoption of emerging technologies such as Gen AI. The CDAO has identified nearly 200 use cases for how the department could leverage the breakthrough technology across a variety of functions, according to the office’s chief technology officer. While Gen AI has commercial applications, the consequences within the DoD are higher, and the department must be responsible in how it uses the technology.

How search analytics help

Search-powered AI is one of the tools that can help the DoD achieve some of these goals. A distributed, real-time search and analytics engine that can centrally store data for fast search, fine-tuned relevancy and powerful analytics can help defense agencies break down data silos and securely share information. There is a pressing need for defense agencies to manage unstructured data such as text, images, and geospatial and time-series data, and to support complex queries and aggregations. Real-time search analytics can also integrate with other tools and platforms such as Apache Hadoop, Apache Spark, business intelligence applications, and AI solutions.
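To make that concrete, the core search-and-aggregation pattern such an engine provides can be sketched with a toy, in-memory inverted index. This is purely illustrative: the class and method names are this sketch’s inventions, not any vendor’s API, and a production engine adds sharding, relevancy scoring and the security controls discussed below.

```python
from collections import defaultdict

class MiniSearchIndex:
    """Toy inverted index: shows the search + aggregation pattern a
    real engine provides at scale."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> matching doc ids
        self.docs = {}                    # doc id -> document

    def index(self, doc_id, doc):
        self.docs[doc_id] = doc
        for term in doc["text"].lower().split():
            self.postings[term].add(doc_id)

    def search(self, term):
        # Direct lookup of the posting list, not a full scan of all docs.
        return [self.docs[i] for i in sorted(self.postings[term.lower()])]

    def aggregate(self, term, field):
        # A minimal "terms aggregation": count hits per field value.
        counts = defaultdict(int)
        for doc in self.search(term):
            counts[doc[field]] += 1
        return dict(counts)

idx = MiniSearchIndex()
idx.index(1, {"text": "sensor anomaly on drone feed", "source": "air"})
idx.index(2, {"text": "drone telemetry nominal", "source": "air"})
idx.index(3, {"text": "ground sensor offline", "source": "ground"})

hits = idx.search("sensor")                   # docs 1 and 3
by_source = idx.aggregate("drone", "source")  # counts per source
```

The point of the pattern is that queries and aggregations run against a precomputed index rather than rescanning raw data, which is what makes real-time analysis over large, siloed data sets feasible.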

Defense agencies must manage data on-premises, in the cloud, or within a hybrid environment. They need to accommodate remote devices and challenging terrain. Essential security features include encryption, authentication, authorization, and auditing.

A search analytics platform should offer more than real-time search. It must incorporate security functions with predictive analytics. Cloud monitoring and observability capabilities are crucial: they enable automation and anomaly detection, which, in turn, hasten problem resolution and the discovery of opportunities.

Breaking down silos

Tearing down data silos and achieving data interoperability is a complex and challenging task. To overcome these challenges, defense agencies must embrace a multifaceted approach that includes technical innovation, organizational restructuring, and cultural shifts. Some of the possible strategies are:

— Promoting inter-team collaboration and data sharing across different domains and partners.

— Establishing a centralized data repository or data lake that can store and manage unstructured data from various sources.

— Employing data integration tools and solutions that can connect, transform, and enrich data from disparate systems and formats.

— Leveraging cloud-based platforms and services that can offer scalable, secure, and flexible data storage and analysis capabilities.

Unlocking hidden value

The future of defense depends on the ability to access, analyze, and share data across different domains and platforms. But too often, data is trapped in silos that prevent effective collaboration and decision-making. For this reason, it’s essential for defense agencies to adopt methods to extract real value from data, facilitating secure sharing and achieving interoperability.

Real-time search analytics can be a valuable tool for the DoD to tear down data silos and harness the potential of their data. However, search analytics alone is not enough to solve the data silo problem. The DoD also needs to adopt a data strategy that aligns with their mission and vision, and foster a data culture that encourages data quality, accessibility, and interoperability, as outlined in the Pentagon’s data, analytics and AI adoption strategy.

By using search-powered AI, defense agencies can transform data from a liability to an asset and gain a strategic advantage in a complex and dynamic world.

Chris Townsend is Vice President of Public Sector at Elastic, a data research and analytics company.

]]>
Eugene Mymrin
<![CDATA[France turns to AI for signals analysis in underwater acoustics war]]>https://www.c4isrnet.com/global/europe/2024/05/17/france-turns-to-ai-for-signals-analysis-in-underwater-acoustics-war/https://www.c4isrnet.com/global/europe/2024/05/17/france-turns-to-ai-for-signals-analysis-in-underwater-acoustics-war/Fri, 17 May 2024 10:42:14 +0000PARIS — The French Navy is turning to artificial intelligence to help its submariners detect enemy vessels in a growing sea of underwater sounds.

The Navy’s acoustic recognition and interpretation center CIRA in Toulon is working with French startup Preligens on AI-powered analysis of underwater acoustic signals, the center’s head, Vincent Magnan, said in a presentation here Thursday. France expects to test the technology onboard its submarines by the end of the year, with operational deployment scheduled for 2025.

As France equips more and more vessels with increasingly powerful passive acoustic sensors, the amount of data collected for analysis is growing exponentially. The Navy is counting on AI to help its acoustics analysts, nicknamed “golden ears,” cut through the noise, both at the Toulon center and on board its submarines.

More sensors and greater detection ranges will result in “a massive flow of data,” Magnan said. “To be able to analyze all this data, and especially to be able to isolate from it the useful and decisive information for the conduct of our combat operations, we need to resort to technological innovations, including artificial intelligence.”

In addition to submarines, frigates and aircraft fitted with passive sensors, the near future will bring drones and underwater gliders that capture acoustic data, according to Magnan. The amount of such data gathered by CIRA has increased to around 10 terabytes in 2024 from 1 terabyte in 2020, and is expected to approach 100 terabytes or more by 2030.

Interest in “passive acoustic warfare” is growing because it allows surface vessels and submarines to detect underwater sounds during operations at sea and derive tactical elements in “all discretion,” without an adversary knowing about it, Magnan said. A particular propulsion pattern might allow the Navy to define a target’s speed, which can then in turn determine a tactical maneuver.

The Toulon center is using AI to filter out those acoustic signals of interest, after which humans can carry out high-value added analysis. The goal will be broadly similar at sea, with AI allowing human operators to focus on the useful signals.
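The shape of that first filtering pass can be sketched with a crude energy detector: frame the recording, estimate the ambient noise floor, and flag only frames that rise above it for human review. This is an illustrative stand-in only; the Navy’s actual models classify far subtler acoustic features than raw energy.

```python
import numpy as np

def flag_segments_of_interest(signal, frame_len, threshold_db):
    """Split a recording into frames and flag those whose energy
    rises above the ambient noise floor (a toy detector)."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    # Per-frame energy in dB; the tiny epsilon avoids log(0).
    energy = 10 * np.log10(np.mean(frames**2, axis=1) + 1e-12)
    noise_floor = np.median(energy)  # robust ambient estimate
    return np.where(energy > noise_floor + threshold_db)[0]

rng = np.random.default_rng(0)
sig = 0.01 * rng.standard_normal(8000)            # ambient sea noise
sig[3000:3400] += np.sin(np.arange(400) * 0.3)    # a loud tonal event
flagged = flag_segments_of_interest(sig, frame_len=400, threshold_db=6.0)
```

Only the flagged frames would be passed to the “golden ears,” which is the gain the article describes: machines discard the near-useless bulk of the signal so humans spend their time on the useful part.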

“So we use technology to discard or filter the standard part of the signal, the almost useless part, and we rely on humans to exploit the useful part,” Magnan said.

Sifting through 12 days of acoustic data recorded in the waters off Toulon takes two “golden ears” more than 40 working days, Magnan said. With the AI demonstrator from Preligens, extracting useful signals from those same recordings can be done in four to five hours, with an additional five to six days of human analysis. “So you can already see that the gain is enormous.”

Whereas in the 1990s and 2000s CIRA analyzed acoustic recordings of around five minutes targeted at a particular threat, the center now deals with data stretching over 40-day periods that requires “a great deal of human capacity” to process, according to the head of the center.

In the early 2000s, a sonar operator could see around 20 kilometers and would monitor 10 simultaneous acoustic contacts; by 2020 that had increased to more than 200 kilometers and a hundred tracks, Magnan said. France’s third-generation ballistic missile submarines will have even greater sensor capabilities, creating a real need to ease the detection task, the commander said.

France currently operates four Le Triomphant-class nuclear-powered ballistic missile submarines and is in the process of replacing its Rubis-class nuclear-powered attack submarines with six Suffren-class vessels.

The AI model has shown “very encouraging results,” able to distinguish hobbyist boats from commercial vessels, and identify propeller speed, propulsion systems and even the number of propeller blades, according to Magnan. A future step will be combining the AI models applied to acoustics with other sources of information, including satellite, radar, visual and electromagnetic.

The team working on acoustics detection has created a tool to automatically detect and identify various acoustic sources and sound emissions that will be demonstrated at the Viva Technology show in Paris next week, said Julian Le Deunf, an expert at the Armed Forces Ministry’s newly created agency for AI in defense.

“The promising results over these last few months also encourage us to test all these capabilities in real-life conditions, so to take the jump onboard the submarine and test these models directly at sea,” Le Deunf said. “The goal for the end of the year is really to succeed in plugging the model directly behind an audio stream, behind a sensor.”

The AI project has been running at CIRA since 2021, after Magnan met with Preligens executives in October of that year. French military intelligence was already using the company’s AI products to analyze satellite imagery, and Magnan said his discussions with Preligens led to the idea that the model could be replicated to make sense of underwater signals.

Eventually, the AI algorithms will be able to identify ambient noises such as a pump starting up or a wrench falling in a hold, according to Magnan.

“The idea in the long run is obviously to find models that are effective and efficient over the whole acoustic spectrum of the sources we encounter at sea,” he said.

]]>
AFP Contributor
<![CDATA[US aims to outpace China with AI on fighter jets, navigation and more]]>https://www.c4isrnet.com/news/your-air-force/2024/05/13/us-aims-to-outpace-china-with-ai-on-fighter-jets-navigation-and-more/https://www.c4isrnet.com/news/your-air-force/2024/05/13/us-aims-to-outpace-china-with-ai-on-fighter-jets-navigation-and-more/Mon, 13 May 2024 16:14:54 +0000Two Air Force fighter jets recently squared off in a dogfight in California. One was flown by a pilot. The other wasn’t.

That second jet was piloted by artificial intelligence, with the Air Force’s highest-ranking civilian riding along in the front seat. It was the ultimate display of how far the Air Force has come in developing a technology with its roots in the 1950s. But it’s only a hint of the technology yet to come.

The United States is competing to stay ahead of China on AI and its use in weapon systems. The focus on AI has generated public concern that future wars will be fought by machines that select and strike targets without direct human intervention. Officials say this will never happen, at least not on the U.S. side. But there are questions about what a potential adversary would allow, and the military sees no alternative but to get U.S. capabilities fielded fast.

“Whether you want to call it a race or not, it certainly is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “Both of us have recognized that this will be a very critical element of the future battlefield. China’s working on it as hard as we are.”

A look at the history of military development of AI, what technologies are on the horizon and how they will be kept under control:

From machine learning to autonomy

AI’s roots in the military are actually a hybrid of machine learning and autonomy. Machine learning occurs when a computer analyzes data and rule sets to reach conclusions. Autonomy occurs when those conclusions are applied to take action without further human input.

This took an early form in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Aegis was trained through a series of human-programmed “if/then” rule sets to be able to detect and intercept incoming missiles autonomously, and more rapidly than a human could. But the Aegis system was not designed to learn from its decisions and its reactions were limited to the rule set it had.

“If a system uses ‘if/then,’ it is probably not machine learning, which is a field of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to assist with the Air Force’s AI development.

AI took a major step forward in 2012 when the combination of big data and advanced computing power enabled computers to begin analyzing the information and writing the rule sets themselves. It is what AI experts have called AI’s “big bang.”

The new data created by a computer writing the rules is artificial intelligence. Systems can be programmed to act autonomously from the conclusions reached from machine-written rules, which is a form of AI-enabled autonomy.
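The distinction can be made concrete in a few lines. In the sketch below, the first function is a hand-written “if/then” rule in the Aegis mold; the second derives its own rule from labeled data. The numbers, labels and threshold logic are invented for illustration and drawn from no real system.

```python
def hand_written_rule(speed, altitude):
    # Human-programmed "if/then": behaves only as its authors anticipated.
    return speed > 600 and altitude < 10_000  # classify as a threat

def learn_rule(samples):
    # "Machine-written" rule: search for the speed threshold that best
    # separates labeled threats from non-threats in the training data.
    best_t, best_acc = None, -1.0
    for t in {speed for speed, _ in samples}:
        acc = sum((speed > t) == is_threat
                  for speed, is_threat in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda speed: speed > best_t

# (speed, is_threat) training pairs — invented toy data.
data = [(200, False), (300, False), (650, True), (900, True)]
learned = learn_rule(data)
```

The hand-written rule never changes; the learned one would shift its threshold if retrained on new data, which is the “computer writing the rules” step the paragraph above describes.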

Testing an AI alternative to GPS navigation

Air Force Secretary Frank Kendall got a taste of that advanced warfighting this month when he flew on Vista, the first F-16 fighter jet to be controlled by AI, in a dogfighting exercise over California’s Edwards Air Force Base.

While that jet is the most visible sign of the AI work underway, there are hundreds of ongoing AI projects across the Pentagon.

At MIT, service members worked to clean thousands of hours of recorded pilot conversations to create a data set from the flood of messages exchanged between crews and air operations centers during flights, so the AI could learn the difference between critical messages, like a runway being closed, and mundane cockpit chatter. The goal was to have the AI learn which messages are critical to elevate to ensure controllers see them faster.
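The triage task can be sketched in miniature: score each message and surface the critical ones first. This keyword heuristic is only an illustrative stand-in for the learned model the project trains; the term list and messages are invented.

```python
# Invented watch-list of terms that mark a message as critical.
CRITICAL_TERMS = {"closed", "emergency", "abort", "mayday", "fuel"}

def triage(messages):
    """Rank radio messages so the most critical surface first."""
    def score(msg):
        return sum(term in msg.lower() for term in CRITICAL_TERMS)
    return sorted(messages, key=score, reverse=True)

msgs = [
    "winds calm, nice view up here",
    "runway two-seven closed, divert to alternate",
    "request coffee on landing",
]
ranked = triage(msgs)  # the runway closure rises to the top
```

A trained classifier replaces the fixed term list with patterns learned from the cleaned recordings, but the output contract is the same: critical traffic elevated for controllers.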

In another significant project, the military is working on an AI alternative to GPS satellite-dependent navigation.

In a future war, high-value GPS satellites would likely be hit or interfered with. The loss of GPS could blind U.S. communication, navigation and banking systems, and make the U.S. military’s fleet of aircraft and warships less able to coordinate a response.

So last year the Air Force flew an AI program — loaded onto a laptop that was strapped to the floor of a C-17 military cargo plane — to work on an alternative solution using the Earth’s magnetic fields.

Experts already knew that aircraft could navigate by following the Earth’s magnetic fields, but so far that hasn’t been practical because each aircraft generates so much of its own electromagnetic noise that there hasn’t been a good way to filter out everything but the Earth’s emissions.

“Magnetometers are very sensitive,” said Col. Garry Floyd, director for the Department of Air Force-MIT Artificial Intelligence Accelerator program. “If you turn on the strobe lights on a C-17, we would see it.”

The AI learned, through the flights and reams of data, which signals to ignore and which to follow. The results “were very, very impressive,” Floyd said. “We’re talking tactical airdrop quality.”
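In miniature, that separation task resembles a regression: if the aircraft’s self-generated noise correlates with known onboard states (a strobe switching, a bus current), those contributions can be estimated and subtracted, leaving the Earth’s field. The toy least-squares version below is purely illustrative; the actual program uses learned models well beyond this, and all signals here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Synthetic "truth": a slowly varying geomagnetic reading.
earth_field = 50.0 + 0.5 * np.sin(np.linspace(0, 6, n))
# Known onboard states that generate electromagnetic self-noise.
strobe = rng.integers(0, 2, n).astype(float)
bus_current = rng.uniform(10, 20, n)
# The magnetometer sees Earth's field plus the aircraft's own noise.
measured = earth_field + 3.0 * strobe + 0.8 * bus_current

# Fit the noise coefficients from the known states (plus a constant
# term), then subtract the predicted self-noise from the measurement.
X = np.column_stack([strobe, bus_current, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, measured, rcond=None)
cleaned = measured - X[:, :2] @ coef[:2]
```

The cleaned trace tracks the Earth-field component far more closely than the raw measurement, which is the essence of filtering out everything but the Earth’s emissions.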

“We think we may have added an arrow to the quiver in the things we can do, should we end up operating in a GPS-denied environment. Which we will,” Floyd said.

The AI so far has been tested only on the C-17. Other aircraft will also be tested. If it works, it could give the military another way to operate if GPS goes down.

Safety rails and pilot-speak

Vista, the AI-controlled F-16, has considerable safety rails as the Air Force trains it. There are mechanical limits that keep the still-learning AI from executing maneuvers that would put the plane in danger. There is a safety pilot, too, who can take over control from the AI with the push of a button.

The algorithm cannot learn during a flight, so each time it goes up, it has only the data and rule sets created from previous flights. When a new flight is over, the algorithm is transferred back onto a simulator, where it is fed new data gathered in-flight to learn from, create new rule sets and improve its performance.

But the AI is learning fast. Because of the supercomputing speed at which AI analyzes data, and the ability to then fly those new rule sets in the simulator, its pace in finding the most efficient ways to fly and maneuver has already led it to beat some human pilots in dogfighting exercises.

Safety is still a critical concern, and officials said the most important way to take safety into account is to control what data is reinserted into the simulator for the AI to learn from. In the jet’s case, it’s making sure the data reflects safe flying. Ultimately the Air Force hopes that a version of the AI being developed can serve as the brain for a fleet of 1,000 unmanned warplanes under development by General Atomics and Anduril.

In the experiment training AI on how pilots communicate, the service members assigned to MIT cleaned up the recordings to remove classified information and the pilots’ sometimes salty language.

Learning how pilots communicate is “a reflection of command and control, of how pilots think. The machines need to understand that too if they’re going to get really, really good,” said Grady, the Joint Chiefs vice chairman. “They don’t need to learn how to cuss.”

]]>
Damian Dovarganes
<![CDATA[Capella Space automates vessel classification in satellite imagery]]>https://www.c4isrnet.com/industry/2024/05/09/capella-space-automates-vessel-classification-in-satellite-imagery/https://www.c4isrnet.com/industry/2024/05/09/capella-space-automates-vessel-classification-in-satellite-imagery/Thu, 09 May 2024 15:51:31 +0000ORLANDO — Capella Space is combining its space-based imagery with pattern recognition to more quickly detect and characterize vessels of interest.

The company, which builds and manages synthetic aperture radar satellites, or SAR, this week rolled out its Vessel Classification tool, promising streamlined analysis and an automated ability to track ships across its historical archives.

Governments, outside experts and observers are increasingly tapping overhead imagery to monitor faraway fighting or materiel buildup. Its applications were exemplified in the lead-up to Russia’s renewed invasion of Ukraine in 2022 and, more recently, in assessing damage stemming from the Israel-Hamas war.

Unlike traditional electro-optical satellite imagery, which can be hampered by poor weather and lighting, SAR generates images with radar that can penetrate cloud cover and other adverse conditions.

David Hemphill, a senior product manager at Capella, told C4ISRNET the classification capability takes data and turns it into actionable information.

“There might be a lot of different locations you’d be monitoring that you’re not really interested in — until a warship shows up,” he said in a May 7 interview at the GEOINT conference in Florida. “It gets you that much closer to the type of answers that an analyst is looking for, without necessarily having to do as much manual labor.”

The Department of Defense and intelligence community officials have stressed the importance of maritime domain awareness, or a deep understanding of what is happening on, below or near the water’s surface. Such awareness will be critical in the Indo-Pacific, where the U.S. is preparing for potential clashes with China.

Hemphill said ordering Vessel Classification is as easy as checking an additional box on the California-based company’s platform. The tool was made possible by EMSI, a company with years of geospatial intelligence experience.

“We can get more diverse looks at maritime, and it’s consistent,” he said. “We can now tell you if it’s a warship, a submarine, a tanker, those sorts of things. And then if you’re monitoring a certain location, you can start to get trends and graphs.”

]]>
<![CDATA[AI-controlled fighter jet takes Air Force secretary on historic ride]]>https://www.c4isrnet.com/news/your-air-force/2024/05/03/ai-controlled-fighter-jet-takes-air-force-secretary-on-historic-ride/https://www.c4isrnet.com/news/your-air-force/2024/05/03/ai-controlled-fighter-jet-takes-air-force-secretary-on-historic-ride/Fri, 03 May 2024 21:39:06 +0000EDWARDS AIR FORCE BASE, Calif. — With the midday sun blazing, an experimental orange-and-white F-16 fighter jet launched with a familiar roar that is a hallmark of U.S. airpower. But the aerial combat that followed was unlike any other: This F-16 was controlled by artificial intelligence, not a human pilot. And riding in the front seat was Air Force Secretary Frank Kendall.

AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes, the first of them operating by 2028.

This image from remote video released by the U.S. Air Force shows Air Force Secretary Frank Kendall flying inside the cockpit of an autonomous X-62A VISTA jet above Edwards Air Force Base, Calif., May 2, 2024. (U.S. Air Force via AP)

It was fitting that the dogfight took place at Edwards Air Force Base, a vast desert facility where Chuck Yeager broke the speed of sound and the military has incubated its most secret aerospace advances. Inside classified simulators and buildings with layers of shielding against surveillance, a new test-pilot generation is training AI agents to fly in war. Kendall traveled here to see AI fly in real time and make a public statement of confidence in its future role in air combat.

“It’s a security risk not to have it. At this point, we have to have it,” Kendall said in an interview with The Associated Press after he landed. The AP, along with NBC, was granted permission to witness the secret flight on the condition that it would not be reported until it was complete because of operational security concerns.

The AI-controlled F-16, called Vista, flew Kendall in lightning-fast maneuvers at more than 550 miles an hour that put pressure on his body at five times the force of gravity. It went nearly nose-to-nose with a second human-piloted F-16 as both aircraft raced within 1,000 feet of each other, twisting and looping to try to force their opponent into vulnerable positions.

Air Force Secretary Frank Kendall flies in the X-62 VISTA in the skies above Edwards Air Force Base, Calif., May 2, 2024. (Richard Gonzales/Air Force)

At the end of the hourlong flight, Kendall climbed out of the cockpit grinning. He said he’d seen enough during his flight that he’d trust this still-learning AI with the ability to decide whether or not to launch weapons in war.

There’s a lot of opposition to that idea. Arms control experts and humanitarian groups are deeply concerned that AI one day might be able to autonomously drop bombs that kill people without further human consultation, and they are seeking greater restrictions on its use.

“There are widespread and serious concerns about ceding life-and-death decisions to sensors and software,” the International Committee of the Red Cross has warned. Autonomous weapons “are an immediate cause of concern and demand an urgent, international political response.”

Kendall said there will always be human oversight in the system when weapons are used.

The military’s shift to AI-enabled planes is driven by security, cost and strategic capability. If the U.S. and China should end up in conflict, for example, today’s Air Force fleet of expensive, manned fighters will be vulnerable because of gains on both sides in electronic warfare, space and air defense systems. China’s air force is on pace to outnumber the U.S.’s, and it is also amassing a fleet of flying unmanned weapons.

Future war scenarios envision swarms of American unmanned aircraft providing an advance attack on enemy defenses to give the U.S. the ability to penetrate an airspace without high risk to pilot lives. But the shift is also driven by money. The Air Force is still hampered by production delays and cost overruns in the F-35 Joint Strike Fighter, which will cost an estimated $1.7 trillion.

Smaller and cheaper AI-controlled unmanned jets are the way ahead, Kendall said.

Vista’s military operators say no other country in the world has an AI jet like it, where the software first learns on millions of data points in a simulator, then tests its conclusions during actual flights. That real-world performance data is then fed back into the simulator, where the AI processes it to learn more.

China has AI, but there’s no indication it has found a way to run tests outside a simulator. And, like a junior officer first learning tactics, some lessons can only be learned in the air, Vista’s test pilots said.

Until you actually fly, “it’s all guesswork,” chief test pilot Bill Gray said. “And the longer it takes you to figure that out, the longer it takes before you have useful systems.”

Vista flew its first AI-controlled dogfight in September 2023, and there have only been about two dozen similar flights since. But the programs are learning so quickly from each engagement that some AI versions getting tested on Vista are already beating human pilots in air-to-air combat.

The pilots at this base are aware that in some respects, they may be training their replacements or shaping a future construct where fewer of them are needed.

But they also say they would not want to be up in the sky against an adversary that has AI-controlled aircraft if the U.S. does not also have its own fleet.

“We have to keep running. And we have to run fast,” Kendall said.

]]>
Damian Dovarganes
<![CDATA[Northrop’s colossal Manta Ray underwater drone passes at-sea tests]]>https://www.c4isrnet.com/unmanned/2024/05/02/northrops-colossal-manta-ray-underwater-drone-passes-at-sea-tests/https://www.c4isrnet.com/unmanned/2024/05/02/northrops-colossal-manta-ray-underwater-drone-passes-at-sea-tests/Thu, 02 May 2024 18:23:56 +0000Northrop Grumman’s massive Manta Ray underwater test bed completed at-sea trials this year, validating its ability to operate below the waves and with minimal human contact.

In a photo first shared with C4ISRNET, the prototype is seen just below the surface of the water, with one fin and part of its almond-shaped body breaking through. The grey unmanned underwater vehicle — considered to be “extra large,” in military parlance — dwarfs the boat trailing it as well as the person aboard.

Those working on the secretive project declined to provide specific measurements about its size, but said the UUV is modular. It was separated into pieces on the East Coast and shipped cross-country where “a small team was able to reassemble the vehicle on a standard Navy concrete pier using only common support equipment and a single crane,” according to Brian Theobald, a principal investigator and chief engineer at Northrop.

“I think it is striking in its size and scale, even for those folks that have been there throughout the design process,” he said in an interview May 2. “We see everything in model-based views, in CAD, et cetera, and it’s not until you see it actually built, full scale, that it truly hits home.”

Its first dunk and subsequent trials off the coast of California revealed “no leaks or ground faults or other build issues,” he added. “I think we can all relate: Water is going to go into every single crevice and place that it can get to.”

Northrop’s offering is years in the making. The Defense Advanced Research Projects Agency in 2020 kicked off the program with the thought of creating a large underwater drone that can operate independently of manned vessels and ports once underway. It was also meant to shepherd critical tech for a new class of what the agency called “payload-capable UUVs.”

Northrop last month teased its prototype during the Sea-Air-Space naval conference in Maryland. The image the company shared was darkened and showed little more than its rounded nose and glider-like body.

Key considerations for Manta Ray development include threat detection, classification and communications capabilities, high-efficiency propulsion systems, and the ability to withstand the rigors of undersea environments. Having a drone that can survive on its own for protracted periods of time would reduce logistical demands and free up manpower.

“The design goal is to be completely autonomous, requiring little human interaction or maintenance to achieve its mission goals,” Joseph Deane, Northrop’s Manta Ray program manager, said in an interview. “What makes it stand out is its low power usage, the ability to go very long distances, the autonomous aspect of it, to operate without human interaction for long periods of time. Those capabilities don’t exist right now.”

The Department of Defense is increasingly interested in uncrewed technologies and their battlefield application. The Navy is seeking to establish a so-called hybrid fleet, empowering sailors and Marines with smart machinery and the sensors or weapons they carry. The Manta Ray prototype features multiple bays for payloads of different types and sizes.

Defense News previously reported the service was fleshing out its manned-unmanned teams in three phases: prototyping and experimenting from fiscal 2024 to 2028; buying and using in fiscal 2029 through 2033; and becoming fully operational in the years thereafter.

The Navy and DARPA are expected to discuss next steps for the Manta Ray program, including additional testing and potential technology transfers.

]]>
Colin Demarest
<![CDATA[Saab unveils technology incubator using Enforcer 3 as test bed]]>https://www.c4isrnet.com/battlefield-tech/2024/04/25/saab-unveils-technology-incubator-using-enforcer-3-as-test-bed/https://www.c4isrnet.com/battlefield-tech/2024/04/25/saab-unveils-technology-incubator-using-enforcer-3-as-test-bed/Thu, 25 Apr 2024 14:39:39 +0000SAN DIEGO — When Saab’s Combat Boat 90 first entered service with the Swedish Navy in 1991, it was as a fully manual vessel — a crewed, amphibious landing craft designed for high-speed operations.

Today, the platform is playing a central role in the company’s vision of rapid testing and deployment of autonomous and digital-first technologies for the U.S. military.

Saab has for several years experimented with a variant of the CB90 called Enforcer 3, equipping it with navigation and communication systems and intelligence, surveillance and reconnaissance sensors. Key to its transformation was a conversion into a fully uncrewed system.

The company’s U.S. subsidiary announced April 23 it plans to use Enforcer 3 as a test bed for a new tech incubator, dubbed Skapa, which in Swedish means “to create.” The incubator, based in San Diego, south of Silicon Valley and home to multiple naval installations, aims to be an avenue for collaboration with military units seeking novel capabilities and startups in need of a platform to test their technology.

Erik Smith, the CEO of Saab’s American branch, told C4ISRNET the vision for Skapa is driven both by the pace of tech development and the growing threats from U.S. adversaries. The incubator’s focus on uncrewed systems and artificial intelligence comes as the Defense Department looks to increase its inventory of attritable drones through programs like Replicator and introduce additional autonomous materiel across the armed services.

“What we’ve done here with Skapa is create an accelerator, a laboratory where our end-users can come straight here with their problems and we can work with them in real time, unencumbered by some of the traditional processes and bureaucracy that sometimes you can get wrapped up in,” Smith said in an April 24 interview at the company’s San Diego office.

While it’s not uncommon for defense firms to establish hubs meant to mature technology the military may need in the future, Smith said Skapa’s focus is on the Defense Department’s near-term and quickly evolving requirements.

“We are trying to get capability to the fleet in months,” he said. “So, we’re taking a lot of existing technology and capability from across Silicon Valley, from industry, from our own portfolio and saying, ‘How do we put that together and get differentiated capability to the fleet faster?’”

Smith said establishing an accelerator like Skapa makes sense for Saab, whose defense portfolio includes fighter jets and submarines along with weapons, sensors and enabling systems. The company in recent years acquired CrowdAI and BlueBear Systems, expanding its footprint in the AI and simulation fields.

Skapa will concentrate on three areas: naval autonomy, digital and AI tech, and maritime domain awareness. Smith refused to say how much money Saab has invested in the incubator and declined to detail the size of the team, but Skapa’s leader, Chief Strategy Officer Michael Brasseur, characterized it as a “small, agile” group poised for growth.

Brasseur, the former commodore of the Navy’s Task Force 59, told C4ISRNET that Enforcer 3 will provide a “launching pad” for many of the incubator’s ideas and products.

Starting this summer, Saab plans to begin testing new capabilities on Enforcer 3, including weapons packages that would work in tandem with its ISR sensors. It also hopes to deploy the boat in upcoming military exercises, including the international Rim of the Pacific and a NATO event called Robotic Experimentation and Prototyping with Maritime Unmanned Systems.

Saab is also actively working to partner with tech firms interested in Enforcer 3, according to Brasseur. Down the line, that could mean the vessel serves as home base for smaller drones not designed to survive at sea or as the autonomous backbone of a swarm.

The company could also offer the platform’s kit, including its autonomy package, to customers who want to integrate it on their own systems.

“There’s no reason that you can’t take a capability from a partner or from internally to Saab, demonstrate it on [Enforcer] and then have another platform that has the right envelope and requirements to be able to host that capability,” Smith said. “It’s here to do both.”

]]>
Saab
<![CDATA[Army may swap AI bill of materials for simpler ‘baseball cards’]]>https://www.c4isrnet.com/artificial-intelligence/2024/04/23/army-may-swap-ai-bill-of-materials-for-simpler-baseball-cards/https://www.c4isrnet.com/artificial-intelligence/2024/04/23/army-may-swap-ai-bill-of-materials-for-simpler-baseball-cards/Tue, 23 Apr 2024 14:42:17 +0000The U.S. Army is revising its artificial intelligence bill of materials effort following meetings with defense contractors.

The service last year floated the idea of an AI BOM, which would be similar to existing software bills, or comprehensive lists of components and dependencies that comprise programs and digital goods. Such practices of transparency are championed by the Cybersecurity and Infrastructure Security Agency and other organizations.

The Army is now pivoting to an “AI summary card,” according to Young Bang, the principal deputy assistant secretary for acquisition, logistics and technology. He likened it to a baseball card with useful information available at a glance.

“It’s got certain stats about the algorithm, its intended usage and those types of things,” Bang told reporters at a Pentagon briefing April 22. “It’s not as detailed or necessarily threatening to industry about intellectual property.”

The Department of Defense is spending billions of dollars on AI, autonomy and machine learning as leaders demand quicker decision-making, longer and more-remote intelligence collection and a reduction of human risk on increasingly high-tech battlefields.

More than 685 AI-related projects are underway across the department, with at least 230 being handled by the Army, according to a Government Accountability Office tally. The technology is expected to play a key role in the XM30 Mechanized Infantry Combat Vehicle, formerly the Optionally Manned Fighting Vehicle, and the Tactical Intelligence Targeting Access Node, or TITAN.

The goal of an AI BOM or summary card is not to reverse engineer private-sector products or put a company out of business, Bang said.

Rather, it would offer greater understanding of an algorithm’s ins and outs — ultimately fostering trust in something that could inform life-or-death decisions.

“We know innovation’s happening in the open-source environment. We also know who’s contributing to the open source,” Bharat Patel, a project lead with the Army’s Program Executive Office for Intelligence, Electronic Warfare and Sensors, told reporters. “So it goes back to how was that original model trained, who touched that model, could there have been poisons or anything?”

Additional meetings with industry are planned, according to the Army.

]]>
BEN STANSALL
<![CDATA[US Air Force stages dogfights with AI-flown fighter jet]]>https://www.c4isrnet.com/air/2024/04/19/us-air-force-stages-dogfights-with-ai-flown-fighter-jet/https://www.c4isrnet.com/air/2024/04/19/us-air-force-stages-dogfights-with-ai-flown-fighter-jet/Fri, 19 Apr 2024 20:57:36 +0000An experimental fighter jet has squared off against an F-16 in the first-ever artificial intelligence-fueled dogfights, the Air Force and the Defense Advanced Research Projects Agency said.

And the successful effort to have the X-62A VISTA engage in practice aerial combat could help the Air Force further refine its plans for autonomous drone wingmen known as collaborative combat aircraft, officials told reporters Friday.

VISTA, which stands for Variable In-flight Simulator Aircraft, is a heavily modified F-16 operated by the U.S. Air Force Test Pilot School at Edwards Air Force Base in California. The service has used it to test cutting-edge aerospace technology for more than three decades, and in recent years it’s been used to test autonomous flight capabilities.

DARPA’s Air Combat Evolution, or ACE, program has been working for the last four years to refine how the military can use AI for air warfare and build airmen’s trust that autonomous technology can perform safely and reliably in combat.

Until now, the military has used autonomy for aspects of flight that are predictable and based on a set of known rules, such as the Auto Ground Collision Avoidance System that keeps jets such as the F-35 from crashing. But within-visual range dogfighting — perhaps the most dangerous, unpredictable form of flight a pilot can engage in — represented an entirely different set of skills for AI to learn, said Col. James Valpiani, commandant of the Air Force Test Pilot School.

“Dogfighting presents a very important challenge case for the question of trust” in autonomy, Valpiani said. “It’s inherently very dangerous. It’s one of the most difficult competencies that military aviators must master.”

The ACE program started by having AI agents control simulated F-16s while dogfighting in computers. Those AI-operated simulated F-16s went five for five against human pilots, DARPA said in a video posted online. But they weren’t yet trained to follow safety guidelines — including those that keep a pilot from breaking the jet — and other ethical requirements such as combat training rules and weapons engagement zones.

In December 2022 and April 2023, the Air Force and DARPA started actual flight tests with AI agents flying VISTA. And in September 2023, it was time for VISTA to go toe-to-toe with a human pilot.

For two weeks, VISTA flew against an F-16 in a variety of scenarios, including situations where it started at a disadvantage against the human-flown jet. VISTA started off by flying defensively to build up confidence in its flight safety, before switching to intense offensive maneuvers. Valpiani said the jets flew aggressively at speeds of up to 1,200 miles per hour and within 2,000 feet of one another, including carrying out nose-to-nose passes and vertical maneuvering.

William Gray, chief test pilot at the Air Force Test Pilot School, and other engineers conduct software updates to the X-62 VISTA jet at Edwards Air Force Base, California, Aug. 3, 2022. (Giancarlo Casem/Air Force photo)

Two pilots were in VISTA’s cockpit to monitor its systems and switch between different AI agents to test their performance, but they never had to take over flying. VISTA carried out 21 test flights between December 2022 and September 2023.

Lt. Col. Ryan Hefron, DARPA’s ACE program manager, and Valpiani said the AI-flown VISTA performed well and tested a variety of agents with multiple different capabilities. But they declined to say how many times VISTA beat the human-flown F-16.

“The purpose of the test was to demonstrate we can safely test these AI agents in a safety-critical air combat environment,” Hefron said.

Hefron and Valpiani said the ACE program learned multiple lessons from the dogfighting tests, including how to quickly adapt AI software and upload it to the jet, sometimes while already in flight.

Hefron said the program next plans to hold more VISTA-versus-F-16 matches to refine the technology and test out different scenarios.

They declined to say whether the ACE program’s dogfighting effort might one day lead to a future fighter fleet without pilots in the cockpit, saying those “long range vision” questions are better suited for Air Force leadership. But Valpiani noted that developments such as Auto-GCAS haven’t replaced pilots’ need to be continually aware of their terrain, and only serve as a backup failsafe.

And the lessons learned from ACE could apply to more than just dogfighting, they said. ACE will allow the service to create uncrewed CCAs that can autonomously fly alongside crewed fighters such as F-35s and the Next-Generation Air Dominance platform, carrying out missions such as airstrikes and reconnaissance operations.

“The X-62A program and DARPA’s ACE program are not primarily about dogfighting,” Hefron said. “They’re really about building trust in responsible AI. The key takeaway from our September event is that we can do that safely, we can do it effectively.”

Air Force Secretary Frank Kendall is confident enough in the ACE program’s progress that he plans to soon fly as a passenger in the AI-operated VISTA. DARPA and the Air Force declined to say more specifically when Kendall will fly in VISTA.

“There will be a pilot with me who will just be watching, as I will be, as the autonomous technology works,” Kendall told senators during a budget hearing April 9. “Hopefully neither he or I will be needed to fly the airplane.”

]]>
Mr. Alex R. Lloyd (GS-12)
<![CDATA[AUKUS allies developing undersea capabilities they can field this year]]>https://www.c4isrnet.com/unmanned/2024/04/18/aukus-allies-developing-undersea-capabilities-they-can-field-this-year/https://www.c4isrnet.com/unmanned/2024/04/18/aukus-allies-developing-undersea-capabilities-they-can-field-this-year/Thu, 18 Apr 2024 19:21:09 +0000Though the submarine portion of the AUKUS trilateral alliance will take decades to fully come to fruition, development of the advanced technology under the agreement is in full swing, as Australia, the U.K. and the U.S. seek quick wins for their fleets, officials said.

Pillar 2 of the agreement focuses on advanced tech the nations can develop and field together. There are eight working groups focused on cyber, quantum, artificial intelligence, electronic warfare, hypersonics, undersea warfare, information sharing, and innovation, each with a list of ideas to quickly test and push to operators.

Leaders told Defense News how this process is playing out in the undersea warfare working group and how they aim to bring new capabilities to the three navies as soon as this year.

Dan Packer, the AUKUS director for the Commander of Naval Submarine Forces who also serves as the U.S. lead for the undersea warfare working group, said April 4 that the group has four lines of effort: a torpedo tube launch and recovery capability for a small unmanned underwater vehicle; subsea and seabed warfare capabilities; artificial intelligence; and torpedoes and platform defense.

An Iver3-580 Autonomous Underwater Vehicle is put on display at Marine Corps Base Hawaii, Sept. 6, 2017. (Cpl. Jesus Sepulveda Torres/US Marine Corps)

On the small UUV effort, the U.S. Navy on its own in 2023 conducted successful demonstrations: one called Yellow Moray on the West Coast using HII’s Remus UUV; and another called Rat Trap on the East Coast using an L3Harris-made UUV.

L3Harris’ Integrated Mission Systems president Jon Rambeau told Defense News in March that his team had started with experiments in an office using a hula hoop with flashlights attached, to understand how sensors perceive light and sound. They moved from the office to a lab and eventually into the ocean, with a rig tethered to a barge that allowed the company’s Iver autonomous underwater vehicle to, by trial and error, learn to find its way into a small box that was stationary and then, eventually, moving through the water.

Torpedo tube launch

Rambeau said the UUV hardware is inherently capable of going in and out of the torpedo tube, but there’s a software and machine learning challenge to help the UUV learn to navigate various water conditions and safely find its way back into the submarine’s torpedo tube.

Virginia-class attack submarines can silently shoot torpedoes from their launch tubes without giving away their location. If submarines can also fill their tubes with small UUVs, they’d gain the ability to stealthily expand their reach and surveil a larger area around the boat.

During a panel discussion at the Navy League’s annual Sea Air Space conference on April 8, U.K. Royal Navy Second Sea Lord Vice Adm. Martin Connell told Defense News that his country, too, would accelerate its work on developing this capability. He said the U.K. plans to test it on an Astute-class attack submarine this year, and then based on what worked for the U.K. and the U.S., they’d determine how to scale up the capability.

Packer said this effort will “make UUV operations ubiquitous on any submarine. Today, it takes a drydock shelter. It takes divers. It takes a whole host of Rube Goldberg kinds of things. Once I get torpedo tube launch and recovery, it’s just like launching a torpedo, but they welcome him back in.”

He added that the team agreed not to integrate this capability onto Australia’s Collins-class conventionally powered submarines now, but Australia will gain this capability when it buys the first American Virginia-class attack submarine in 2032.

Commander Sean Heaton of the U.K. Royal Navy presents the capabilities used on board HMS Tamar during the Integrated Battle Problem 23.3 exercise near Sydney, Australia. The exercise tested a range of autonomous systems operating from the Royal Australian Navy's Mine Hunter Coastal HMAS Gascoyne, Undersea Support Vessel ADV Guidance and the UK's Off-Shore Patrol Vessel HMS Tamar. (LSIS David Cox/Royal Australian Navy)

On subsea and seabed warfare, Packer said all three countries have an obligation to defend their critical undersea infrastructure. He noted the U.K. and Australia had developed ships that could host unmanned systems that can scan the seabed and ensure undersea cables haven’t been tampered with, for example.

Connell said during the panel the British and Australian navies conducted an exercise together in Australia involving seabed warfare. This effort took just six months from concept to trial, he said, adding he hopes the team can continue to develop even greater expeditionary capability through this line of effort.

Packer said a next step would be collectively developing effectors for these seabed warfare unmanned underwater vehicles — “what are the hammers, the saws, the screwdrivers that I need to develop for these UUVs to get effects on the seabed floor, including sensors.”

A P-8A Poseidon assigned to Patrol Squadron (VP) 46 takes off from the runway at Naval Air Station (NAS) Sigonella, Italy, Jan. 17, 2024.  (MC2 Jacquelin Frost/US Navy)

The primary artificial intelligence effort today involves the three nations’ P-8 anti-submarine warfare airplanes, though it will eventually expand to the submarines themselves.

Packer said the nations created the first secure collaborative development environment, such that they can all contribute terabytes of data collected from P-8 sensors. The alliance, using vendors from all three countries’ industrial bases, is working now to create an artificial intelligence tool that can identify all the biological sources of sounds the P-8s pick up — everything from whales to shrimp — and eliminate those noises from the picture operators see. This will allow operators to focus on man-made sounds and better identify potential enemy submarines.

Packer said the Navy never had the processing power to do something like this before. Now that the secure cloud environment exists, the three countries are moving out as fast as they can to train their AI tools “to detect adversaries from that data … beyond the level of the human operator to do so.”

For now, the collaboration is focused on P-8s, since foreign military sales cases already exist with the U.K. and Australia to facilitate this collaboration.

Connell, without specifying the nature of the AI tool, said the U.K. would put an application on its P-8s this year to enhance their onboard acoustic performance.

Packer noted the U.S. is independently using this capability on an attack submarine today using U.S.-only vendors and algorithms, but the AUKUS team plans to eventually share the full automatic target recognition tool with all three countries’ planes, submarines and surface combatants once the right authorities are in place.

Sailors assigned to the U.S. Navy submarine Minnesota prepare an MK 48 torpedo at the Haakonsvern Naval Base in Bergen, Norway, in 2019. (Chief MC Travis Simmons/U.S. Navy)

And finally, Packer said the fourth line of effort is looking at the collective inventory of torpedoes and considering how to create more capability and capacity. Both the U.S. and U.K. stopped building torpedoes decades ago, and the U.S. around 2016 began trying to restart its industrial base manufacturing capability.

“The issue is that none of us have sufficient ordnance-, torpedo-building capability,” Packer said, but the group is looking at options to modernize British torpedoes and share in-development American long-range torpedoes with the allies — potentially through an arrangement that involves a multinational industrial base.

Vice Adm. Rob Gaucher, the commander of U.S. Naval Submarine Forces, said during the panel discussion that these AUKUS undersea warfare lines of effort closely match his modernization priorities for his own submarine fleet.

Pursuing these aims as part of a coalition, he said, strengthens all three navies.

“The more we do it, and the faster we do it, and the more we get it in the hands of the operators, the better. And then having three sets of operators to do it makes it even better,” Gaucher said.

]]>
LSIS David Cox