US Homeland Security Department reveals new AI guardrails as it deploys technology across agency


The seal of the U.S. Department of Homeland Security is seen after a news conference near the International Bridge between Mexico and the U.S., as U.S. authorities accelerate removal of migrants at border with Mexico, in Del Rio, Texas, U.S., September 19, 2021. REUTERS/Marco Bello/File photo

WASHINGTON (Reuters) - The Department of Homeland Security on Thursday plans to announce new limits on its use of artificial intelligence even as it highlights the agency's success using the technology to aid in drug busts and catch criminals.

"We must...ensure that our use of AI is responsible and trustworthy, that it is rigorously tested to be effective, that it safeguards privacy, civil rights, and civil liberties while avoiding inappropriate biases, and...that it is transparent and explainable to those whom we serve," DHS Secretary Alejandro Mayorkas writes in an AI policy memo to be released later today.

The new policy comes as the agency is rapidly adopting artificial intelligence technologies across a wide variety of sensitive missions, from border control to tracking the flow of fentanyl into the country.

In the future, DHS hopes to use AI to improve its ability to secure American supply chains and to enhance its digital forensic capabilities, even as it faces unique challenges with the technology, according to a senior official.

"I think the potential for unintended harm from the use of AI exists in any federal agency and in any use of AI," said DHS Chief Information Officer Eric Hysen. "We interact with more people on a daily basis than any other federal agency. And when we interact with people, it can be during some of the most critical times of their lives."

Academics have long flagged the dangers of AI, including its potential to enable racial profiling and its tendency to make errors when identifying relationships in complex data.

As part of the new policy, Americans will be able to decline the use of facial recognition technology in a variety of situations, including during air travel check-ins.

The guidelines will also require that facial recognition matches discovered using AI technology be manually reviewed by human analysts to ensure their accuracy, according to a new directive that the agency plans to release alongside the AI memo later on Thursday.

During a congressional hearing on Thursday, Hysen plans to highlight a recent case at California's San Ysidro Port of Entry, where agents with Customs and Border Protection used advanced machine learning (ML) models to flag an otherwise unnoteworthy car driving north from Mexico for having a "potentially suspicious pattern." Agents later discovered 75 kilograms of drugs in the car's gas tank and rear quarter panels.

Another area where DHS already uses AI technology extensively is the southern border, where the agency has deployed more than 200 surveillance cameras, Hysen said.

The cameras, sold by defense contractor Anduril, use AI to automatically detect and flag human crossings, Hysen said, helping to stop human and drug trafficking.

(Reporting by Alexandra Alper and Christopher Bing; Editing by Chizu Nomiyama)
