Happe, A., & Cito, J. (2025). Can LLMs Hack Enterprise Networks? Autonomous Assumed Breach Penetration-Testing Active Directory Networks. ACM Transactions on Software Engineering and Methodology. https://doi.org/10.1145/3766895
ISSN: 1049-331X
Date published: 24 Aug 2025
Number of pages: 45
Publisher: Association for Computing Machinery
Peer reviewed: Yes
Keywords: Large Language Models; Security; Enterprise Networks; Active Directory
Abstract:
Traditional enterprise penetration-testing, while critical for validating defenses and uncovering vulnerabilities, is often limited by high operational costs and the scarcity of human expertise. This paper investigates the feasibility and effectiveness of using Large Language Model (LLM)-driven autonomous systems to address these challenges in real-world Active Directory (AD) enterprise networks.
We introduce a novel prototype, cochise, designed to employ LLMs to autonomously perform Assumed Breach penetration-testing against enterprise networks. Our system represents the first demonstration of a fully autonomous, LLM-driven framework capable of compromising accounts within a real-life Microsoft Active Directory testbed, the Game of Active Directory (GOAD). The evaluation deliberately utilizes GOAD to capture the intricate interactions and sometimes nondeterministic outcomes of live network penetration-testing, moving beyond the limitations of synthetic benchmarks.
We perform our empirical evaluation using five LLMs, comparing reasoning and non-reasoning models and including open-weight models. Through comprehensive quantitative and qualitative analysis, incorporating insights from cybersecurity experts, we demonstrate that autonomous LLMs can effectively conduct Assumed Breach simulations. Key findings highlight their ability to dynamically adapt attack strategies, perform inter-context attacks (e.g., web application audits, social engineering, and unstructured data analysis for credentials), and generate scenario-specific attack parameters such as realistic password candidates. The prototype also exhibits robust self-correction mechanisms, automatically installing missing tools and rectifying invalid command generations.
Critically, we find that the associated costs are competitive with, and often significantly lower than, those incurred by professional human penetration testers, suggesting a path toward democratizing access to essential security testing for organizations with budgetary constraints. However, our research also illuminates existing limitations, including instances of LLMs "going down rabbit holes", challenges in comprehensive information transfer between planning and execution modules, and critical safety concerns that necessitate human oversight. Our findings lay the groundwork for future software engineering research into LLM-driven cybersecurity automation, emphasizing that the prototype's underlying LLM-driven architecture and techniques are domain-agnostic and hold promise for improving autonomous LLM usage in broader software engineering domains. The source code, traces, and analyzed logs are open-sourced to foster collective cybersecurity and future research.