AI at DHS: A Deep Dive into our Use Case Inventory
Under the leadership of Secretary Alejandro N. Mayorkas, the Department of Homeland Security has a longstanding commitment to responsible use of Artificial Intelligence (AI) technologies that we employ to advance our missions, including combatting fentanyl trafficking, strengthening supply chain security, countering child sexual exploitation, and protecting critical infrastructure. We are building on that commitment by publishing an updated AI Use Case Inventory and demonstrating how we are exceeding government-wide standards on transparency, accountability, and responsible use.
Particularly given the sensitive nature of our Department’s work, it is critical that our use of AI is transparent and respects the privacy, civil rights, and civil liberties of everyone we serve; and our policies reflect that. We announced our first policies for responsible AI use in September 2023. In March 2024, the Office of Management and Budget (OMB) issued Memo M-24-10 with government-wide requirements for AI risk management, as directed in President Biden’s AI Executive Order. Where requirements differed between DHS’s internal AI policies and M-24-10, we met the higher standard.
This is the most comprehensive inventory of our AI use cases to date. It includes 158 active AI use cases, with more detail about each than we have previously released. It provides information about DHS’s use cases with potential impacts on safety and rights, including our compliance with required minimum risk management practices. It also clarifies potentially confusing information from previous iterations of the inventory.
Over the course of 2024, we re-reviewed every AI use case at DHS, searched out new and previously un-inventoried use cases across the Department, and identified safety- and/or rights-impacting AI use cases that required compliance with M-24-10 minimum risk management practices. In this process:
- We identified 39 safety- and/or rights-impacting use cases, 29 of which are deployed and 10 are pre-deployment as of December 16, 2024.*
- Of the 29 deployed use cases, 24 already comply with minimum risk management practices, while OMB approved short compliance extensions for the remaining 5 use cases.
- I determined that DHS did not need to issue any waivers of required risk management practices for any deployed use cases.
- I determined that 27 AI use cases do not meet the M-24-10 definitions for safety-and/or rights-impacting AI, despite falling under OMB’s presumed impacting categories.
From the beginning of our AI journey, DHS has engaged with civil society and the general public at every step. We are hosting a series of webinars and other stakeholder engagements this week and will continue to do so in the days to come. This blog post shares additional detail about our decision-making processes to help interested parties and the public understand our work, including:
- More information about use cases with short compliance extensions approved by OMB;
- Our process for determining whether use cases presumed to be safety- and/or rights-impacting under M-24-10 actually were safety- and/or rights-impacting; and
- A deeper dive into risk management and compliance for two immigration enforcement use cases that have drawn significant public interest.
I am proud of the hard work put in by teams across DHS to reach this milestone. We have implemented new and complex policies at a rapid pace to meet ambitious timelines as a step to increasing transparency and responsible AI use. We will continue to mature our approach to AI governance over time as technology evolves.
AI Use Case Inventory
We have been publicly disclosing our AI use cases annually since 2022, as first required by Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020). This year, OMB issued new guidance that requires agencies to disclose more use cases, and more information about each of them, than ever before, so this year’s inventory is more expansive than prior versions: it includes 158 active use cases, compared to 67 total use cases in 2023.
This total reflects a clear and consistent definition for an AI use case and clarifies potential misunderstandings about AI use at DHS, removing prior inconsistencies and confusing information. Previous inventories included some ideas for AI use that were never implemented and entries for technologies that were not actually AI. We now clearly indicate which use cases are resourced and approved, and which were only ideas or in research and development. For transparency, we still provide information about use cases from previous inventories, even if they were never deployed.
We consolidated closely related use cases that rely on similar technology in similar business contexts and are governed and tested the same way, to avoid flooding the inventory with near-duplicate entries. For example, CISA consolidated similar threat hunting use cases into one current use case: prior separate entries for Security Information and Event Management (SIEM) Alerting Models (DHS-103) and Advanced Network Anomaly Alerting (DHS-105) were consolidated under Security Operation Center (SOC) Network Anomaly Detection (DHS-2403).
We have also focused on expanding the inventory and maximizing transparency. In early 2024, the DHS AI Governance Board directed that DHS disclose as many AI use cases as possible, even if some details about certain use cases cannot be publicly shared. We re-reviewed previously undisclosed sensitive use cases and determined that we can now disclose at least some information about every AI use case subject to M-24-10. This increased disclosure means that every use case appears on our public inventory, making it unnecessary to provide aggregate reporting as directed in M-24-10 for use cases not individually inventoried.
The number of data fields with details about inventoried use cases increased from 17 in 2023 to 54 in 2024, aligned with the new OMB requirements. These fields describe each use case’s intended purpose, expected benefits, and outputs; identify whether it is rights- and/or safety-impacting; report compliance with required minimum risk management practices; and report on AI maturity, including AI infrastructure.
While this is our most comprehensive disclosure of DHS AI use ever and includes every AI use case subject to M-24-10, there are some AI use cases we did not include:
- AI use cases in our Intelligence Community elements and in National Security Systems are separately governed by the October 2024 National Security Memorandum on AI. We will release information in April 2025 on how we will meet those requirements.
- Certain commercial-off-the-shelf or freely available AI products are excluded, such as machine learning used for spam filters in email systems. We do not individually inventory commercial-off-the-shelf products, but we have included an inventory entry that covers DHS personnel’s use of commercial GenAI products in their day-to-day work.
For more information about exceptions to the DHS AI Use Case Inventory, see M-24-10 sections 2.b.iv, 2.c, and 5, and footnote 26, OMB’s Guidance for 2024 Agency AI Use Case Inventories, section d, and our DHS AI Use Case Inventory FAQ.
Safety- and/or Rights-Impacting AI
The 2024 update identifies all of DHS’s safety- and/or rights-impacting AI use cases and provides details about compliance with the risk management practices outlined in M-24-10. Agencies were required to implement these practices for safety- and/or rights-impacting AI by December 1, 2024, or stop using the AI, unless OMB approved an extension. M-24-10 provided definitions of safety- and/or rights-impacting AI and outlined certain types of use cases that were presumed to be safety- and/or rights-impacting unless determined otherwise by the agency’s Chief AI Officer. I also have the authority under M-24-10 to issue waivers of one or more minimum practices for a specific use case, but I ultimately determined it unnecessary to issue any such waivers.
We conducted an extensive Department-wide review to identify safety and/or rights-impacting AI use cases. Within each DHS agency and office, the senior official responsible for AI reviewed each AI use case with their legal counsel and subject matter experts and made initial determinations as to whether they met the definitions of rights- and/or safety-impacting AI. These initial determinations were then reviewed at the Department level by the Offices of the Chief Information Officer, Chief Privacy Officer, and Officer for Civil Rights and Civil Liberties. As the DHS CAIO, I then reviewed these initial recommendations and made final determinations on the safety- and/or rights-impacting status of each use case.
We identified 39 safety- and/or rights-impacting use cases with 29 deployed and 10 in pre-deployment as of December 16, 2024. All of DHS’s safety- and/or rights-impacting AI use cases are listed in the public inventory and identified as such.
Roughly half (14 of 29) of our deployed safety- and/or rights-impacting AI use cases involve face recognition and face capture (FR/FC) technologies. FR/FC use cases were previously reviewed and approved under DHS Directive 026-11, “Use of Face Recognition and Face Capture Technologies” (September 2023). This directive requires that all DHS use of FR/FC be rigorously tested to national and international standards and approved through a Department-wide review process. It also requires that U.S. citizens be afforded the right to opt out of face recognition for specific, non-law enforcement uses, and it prohibits face recognition from being used as the sole basis of any law or civil enforcement-related action. We re-reviewed these existing approvals to confirm compliance with M-24-10 for FR/FC use cases; the DHS requirements align with, and largely exceed, the minimum practices under M-24-10.
An additional 10 of the 29 deployed safety- and/or rights-impacting AI use cases also went through a Department-wide review process, including subject matter experts, technical experts, and our oversight offices, to confirm that they met the minimum risk management practices under M-24-10. I reviewed and approved each use case to ensure it met each practice, including testing performance in a real-world context, maintaining human oversight and accountability, and conducting ongoing monitoring and mitigation for AI-enabled discrimination, among other requirements.
OMB approved short compliance extensions for the 5 remaining deployed safety- and/or rights-impacting use cases that were unable to meet the extensive requirements in time. We will use these extensions to obtain additional information from vendors on data provenance and training, model accuracy, and ongoing monitoring of model performance. Each extension was approved until December 1, 2025, but we expect to be able to bring each use case into full compliance much sooner than that. Here are the five use cases that received compliance extensions:
- CBP Translate (DHS-2388) is used for informal interactions between CBP officers and individuals to break down communication barriers but is not used for formal interviews or proceedings. More information about this use case is available in the Privacy Impact Assessment [https://www.dhs.gov/publication/dhscbppia-069-cbp-translate-application].
- Babel (DHS-185) and Fivecast ONYX (DHS-186) are used by CBP to further investigate specific individuals officers have already identified as being of interest through other methods. These tools automate searches across publicly available social media and other open-source resources and aggregate the results for review under established CBP investigative processes. Use of Fivecast ONYX will be discontinued in 2025 due to budgetary constraints. More information about these use cases is available in the Privacy Impact Assessment [https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness].
- Passive Body Scanner (DHS-2380) is used to help identify concealed items such as weapons or dangerous objects on pedestrians at CBP facilities. Detection of an object that could be a safety issue results in a pat down; detection of an object that could be contraband or undeclared merchandise is referred for further consideration to determine if a pat down is necessary. More information about this use case is available in the Privacy Impact Assessment [https://www.dhs.gov/publication/non-intrusive-inspection-systems-program].
- Video Analysis Tool (DHS-172) is used by Homeland Security Investigations to investigate human rights violations, detect fraud, and counter transnational organized crime involved in synthetic opioids. Machine learning algorithms identify and crop human faces from lawfully obtained video evidence. Investigators can then query and compare the images against relevant Federal biometric and biographical databases, or share these images with other agency partners. This tool is not used in immigration enforcement. More information about this use case is available in the Privacy Impact Assessment [https://www.dhs.gov/publication/dhsicepia-055-repository-analytics-virtualized-environment-raven].
After consulting with experts and oversight offices across the Department, I determined that 27 AI use cases do not meet the M-24-10 definitions for safety- and/or rights-impacting AI, despite falling under the M-24-10 categories for presumed impact. M-24-10 required us to presume that any AI use in certain categories (such as in law enforcement, travel, or immigration contexts) would be safety- or rights-impacting, and to conduct a context-specific and system-specific risk assessment to determine whether a given use case actually met the definitions. Given the Department's missions, we had many use cases that fell under these presumed categories. In making these determinations, we carefully considered how systems were used in practice and whether the AI output was used as a principal basis for a decision that could impact someone’s safety or rights.
For example, I determined that CBP’s Autonomous Surveillance Towers (DHS-35), or ASTs, are not safety- or rights-impacting despite being used for physical location-monitoring in law enforcement contexts, a presumed rights-impacting category. ASTs are deployed along the border and use a variety of sensors to detect the presence of persons, animals, and vehicles. The specific AI in this use case is trained to determine whether an object is a human and not a similarly shaped object. CBP Agents then review all image alerts to determine if any action needs to be taken. This led me to determine that the AI in ASTs is not a principal basis for a rights-impacting decision. My assessment would change if the AI model were predicting whether a detected human was part of a smuggling group, or if flags from the system automatically deployed CBP Agents without human review. Even though ASTs are not rights- or safety-impacting, because of the significant public interest in this use case, I directed CBP to meet the minimum practices in M-24-10 anyway. They will do so by February 1, 2025.
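For illustration only, here is a minimal sketch of a human-in-the-loop detection workflow of this general shape; the names, labels, and structure below are invented for the example and are not the actual AST software:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    """A single sensor detection with the classifier's output."""
    image_id: str
    label: str          # e.g., "human", "animal", "vehicle"
    confidence: float   # model confidence in [0, 1]

def triage(detections: list[Detection],
           review_queue: Callable[[Detection], None]) -> None:
    """Route detections classified as human to a review queue.

    The model only filters and prioritizes; every alert is reviewed
    by an agent before any action is taken, so the AI output is never
    the principal basis for a decision.
    """
    for det in detections:
        if det.label == "human":
            review_queue(det)  # a human reviewer decides what, if anything, to do
        # non-human detections are logged but generate no alert

# Example usage with a stand-in review queue:
if __name__ == "__main__":
    pending: list[Detection] = []
    triage(
        [Detection("img-001", "human", 0.91),
         Detection("img-002", "animal", 0.87)],
        review_queue=pending.append,
    )
    print(f"{len(pending)} alert(s) awaiting agent review")
```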
Another example is ELIS Photo Card Validation (DHS-189). This use case helps people submitting employment authorization applications online to U.S. Citizenship and Immigration Services (USCIS) by determining whether the applicant’s uploaded ID photo is of high enough quality to be printed on their Employment Authorization Document card. If the AI model detects a quality issue, it alerts the applicant in real time, and the applicant may choose whether to resubmit a higher-quality photo. The AI does not affect USCIS’s adjudication of the application. This use case relates to determining individuals’ access to Federal immigration-related services through biometrics, another presumed rights-impacting category in M-24-10, but because of how the AI is used, I determined it does not meet the definition of rights-impacting AI.
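For illustration only, a minimal sketch of an advisory photo quality gate of this kind; the metrics, thresholds, and function names below are invented for the example and are not the actual ELIS implementation:

```python
import numpy as np
from PIL import Image

# Hypothetical thresholds for illustration only; a real system would be
# tuned and tested against documented photo quality standards.
MIN_BRIGHTNESS, MAX_BRIGHTNESS = 60, 200
MIN_SHARPNESS = 5.0

def photo_quality_issues(path: str) -> list[str]:
    """Return advisory quality warnings for an uploaded ID photo.

    The result is shown to the applicant in real time; it does not
    feed into adjudication of the underlying application.
    """
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    issues = []

    brightness = gray.mean()
    if not MIN_BRIGHTNESS <= brightness <= MAX_BRIGHTNESS:
        issues.append("photo appears too dark or too bright")

    # Mean absolute horizontal gradient as a crude sharpness proxy.
    sharpness = np.abs(np.diff(gray, axis=1)).mean()
    if sharpness < MIN_SHARPNESS:
        issues.append("photo appears blurry")

    return issues
```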
AI in Immigration Enforcement: Hurricane Score and RCA
We’ve seen significant public interest in how AI is used in immigration enforcement, specifically in two use cases at Immigration and Customs Enforcement (ICE): Hurricane Score (DHS-2408) and the Risk Classification Assessment (RCA). I’d like to share additional details on how we complied with the minimum risk management practices for Hurricane Score, and why we do not consider RCA to be AI.
Hurricane Score
Hurricane Score (DHS-2408) models the potential risk that a noncitizen, released from detention with the requirement to check in with ICE through monitoring technology (called Alternatives to Detention), will fail to comply with the program. The Hurricane Score is used to inform human decision-making on an individual’s case but does not itself make or suggest decisions on detention, deportation, or surveillance. The model considers several factors, including the individual’s number of violations and length of time in the program, and whether the person has a travel document. ICE officers are directed through policy and training to consider the score as one of many inputs when making individualized decisions about a noncitizen’s case, and do not rely solely on any single factor in making determinations. We determined that Hurricane Score is rights-impacting AI because it is one (of many) factors that inform officers' law enforcement and immigration decisions. Therefore, Hurricane Score needed to come into compliance with M-24-10 before December 1, 2024, or be shut down.
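For illustration only, a minimal sketch of how a factor-based score of this general shape could be computed; the weights, factor handling, and scale below are invented for the example and are not the actual Hurricane Score model:

```python
def compliance_risk_score(num_violations: int,
                          months_in_program: int,
                          has_travel_document: bool) -> float:
    """Combine case factors into a single advisory score in [0, 1].

    Illustrative weights only. The score is one input among many for
    an officer's individualized decision; it makes no decision itself.
    """
    score = 0.0
    score += min(num_violations, 5) * 0.12      # more violations -> higher risk
    score -= min(months_in_program, 24) * 0.01  # longer compliance -> lower risk
    score += 0.15 if not has_travel_document else 0.0
    return max(0.0, min(1.0, score))
```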
Hurricane Score went through the comprehensive review process for safety- and/or rights-impacting AI described above to examine the purpose and benefits of the AI, potential risks, the relevance and quality of the data, testing results, and user training. During this review, we found that the algorithm was not being sufficiently tested for bias in its output. To address this, my team worked with ICE to conduct testing comparing Hurricane Scores with real-world abscondment outcomes to ensure that the accuracy of the score did not vary meaningfully across different demographic values for age, gender, or nation of origin. The testing demonstrated that Hurricane Score does not exhibit demographic bias, and ICE will continue this bias testing on an ongoing basis.
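For illustration only, a minimal sketch of this kind of accuracy-parity testing on tabular data; the column names and data below are invented for the example:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def accuracy_by_group(df: pd.DataFrame, group_col: str,
                      score_col: str = "score",
                      outcome_col: str = "absconded") -> pd.Series:
    """Compute AUC of the risk score against observed outcomes,
    separately for each demographic group.

    Large gaps in AUC between groups would indicate that the score
    is meaningfully less accurate for some populations.
    """
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g[outcome_col], g[score_col])
    )

# Example with synthetic data:
df = pd.DataFrame({
    "score":     [0.2, 0.7, 0.4, 0.9, 0.1, 0.8, 0.3, 0.6],
    "absconded": [0,   1,   0,   1,   0,   1,   0,   1],
    "age_band":  ["18-30", "18-30", "18-30", "18-30",
                  "31-50", "31-50", "31-50", "31-50"],
})
print(accuracy_by_group(df, "age_band"))
```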
Based on our cross-functional review and the newly conducted bias testing, we concluded that Hurricane Score complies with all requirements of M-24-10.
Risk Classification Assessment (RCA)
RCA helps ICE evaluate the risk to public safety and the likelihood that someone will flee after being arrested for immigration violations. It uses information from a detained noncitizen’s record, including their ties to the community, criminal history, and special vulnerabilities, to assess public safety and flight risks. RCA generates a recommendation for certain detention decisions and is one source, among others, that an ICE officer uses to make an individualized decision, which is then manually reviewed and approved by a supervisor. Previously, ICE officers and contractors performed these assessments manually using paper forms. The RCA tool automates this process using existing ICE rules, but final decisions about custody are still made by ICE personnel.
We closely reviewed RCA and determined that it only automates human-defined rules and does not use machine learning or any other AI technique to learn or change based on data. M-24-10 says that its “definition of AI does not include robotic process automation or other systems whose behavior is defined only by human-defined rules.” Based on this assessment, I concluded that RCA is not AI, so it is not listed in our inventory.
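The distinction can be made concrete with a minimal sketch: a rules engine like the one below applies only fixed, human-written logic and never learns from data, which is what places it outside M-24-10’s definition of AI. The rules shown are invented for the example and are not the actual RCA logic:

```python
def rca_style_recommendation(criminal_history: bool,
                             community_ties: bool,
                             special_vulnerability: bool) -> str:
    """Apply fixed, human-defined rules to produce a recommendation.

    Every branch below was written by a person and never changes in
    response to data: no training, no learned parameters. The output
    is a recommendation only; an officer and supervisor decide.
    """
    if special_vulnerability:
        return "refer for supervisory review"
    if criminal_history and not community_ties:
        return "recommend detention"
    return "recommend release with conditions"
```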
Even though it is not subject to M-24-10, RCA continues to be subject to rigorous oversight, per existing DHS information technology risk management frameworks and policies for the protection of privacy, civil rights, and civil liberties.
Engagement with Civil Society on AI
DHS values engagement with civil society as we continue to develop and implement AI. M-24-10 requires us to consult with the public and with communities affected by AI, and to incorporate feedback where appropriate. Our ongoing dialogue with civil society helps meet this requirement.
We work with civil society in multiple ways. DHS consulted with affected communities to update our DHS Equity Action Plan in 2023 with details about equitable AI use. Secretary Mayorkas appointed civil rights leaders to the AI Safety and Security Board to learn from their expert opinions on the safe and secure development of AI in critical infrastructure. Representatives from civil rights and civil liberties organizations met with the Secretary in July 2024 to discuss responsible AI development, and with biometric identification experts at DHS’s Maryland Test Facility in October 2024 to learn about how DHS tests its face recognition/face capture AI use cases. We continue to build relationships with civil society organizations and affected communities, with leadership from our Office of Civil Rights and Civil Liberties and Office of Partnership and Engagement.
This week, I plan to discuss and answer questions about our AI disclosures with civil society organizations. Going forward, DHS is committed to quarterly engagements with civil society, at a minimum, on AI topics of interest to make sure that this critical dialogue continues.
Conclusion and Looking Ahead
DHS will continue to update the DHS AI Use Case Inventory on a rolling basis and complete at least one full update annually. We will continue to implement and monitor compliance with the M-24-10 minimum practices for every safety- and/or rights-impacting use case before and during its deployment. DHS will conduct ongoing monitoring, testing, and evaluation to make sure that we are living up to our commitments to use AI in safe, responsible, and trustworthy ways.
* On December 1, 2024, we had identified 40 safety- and/or rights-impacting use cases, 28 of which were deployed and 12 in pre-deployment. Two of the pre-deployment use cases moved into retirement after December 1, and we identified another deployed use case.