This paper reviews the application of AI, specifically the fusion of Large Language Models (LLMs) and Reinforcement Learning (RL), to enhance network- and device-level cyber deception strategies in contested environments. It addresses the limitations of traditional, static deception methods by exploring AI-driven dynamic approaches that are more cost-effective and accurate. The review examines how LLMs and RL can be combined to optimally learn and deploy cyber deception strategies, and how such strategies can be validated against stealthy attacks on Operational Technology (OT) systems.
Cyber deception increases the cost an attacker incurs during reconnaissance and other early phases of an intrusion. Numerous deception methods have been adopted in the past, such as IP address randomization and the creation of honeypots and honeynets that mimic real sets of services and networks deployed within an enterprise or operational technology (OT) network. These strategies follow naive approaches of recreating services, which are expensive and require substantial human intervention. The advent of cloud services and the automation of containerized applications, for example through Kubernetes, has made cyber defense easier. Yet there remains significant potential to improve the accuracy and cost-effectiveness of these deception strategies with artificial intelligence (AI)-based solutions that make the deception more dynamic. Hence, in this work, we review AI-based solutions for building network- and device-level cyber deception methods in contested environments. Specifically, we focus on leveraging the fusion of large language models (LLMs) and reinforcement learning (RL) to optimally learn cyber deception strategies, and on validating the efficacy of such strategies against stealthy attacks on OT systems reported in the literature.
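To make the RL side of this fusion concrete, the sketch below shows a toy epsilon-greedy agent that learns which decoy service to deploy so as to maximize attacker dwell time. Everything here is a hypothetical illustration: the decoy names, the reward model, and the simulated environment are invented for this sketch and are not drawn from the surveyed works, which use far richer state, action, and reward formulations.

```python
import random

# Hypothetical decoy configurations an OT defender might deploy.
# These action names are invented for this illustration.
DECOYS = ["fake_plc", "fake_hmi", "fake_historian"]


def simulate_attacker_dwell(decoy: str) -> float:
    """Stand-in environment: reward is the (noisy) time an attacker
    spends probing the decoy. The payoffs are arbitrary; here the
    fake PLC is assumed to be the most convincing decoy."""
    base = {"fake_plc": 0.9, "fake_hmi": 0.5, "fake_historian": 0.3}
    return base[decoy] + random.uniform(-0.1, 0.1)


def train(episodes: int = 2000, epsilon: float = 0.1, alpha: float = 0.1) -> dict:
    """Epsilon-greedy bandit: estimate the value of each decoy and
    mostly exploit the best one, while occasionally exploring."""
    q = {d: 0.0 for d in DECOYS}  # estimated dwell time per decoy
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(DECOYS)      # explore
        else:
            action = max(q, key=q.get)          # exploit current best
        reward = simulate_attacker_dwell(action)
        q[action] += alpha * (reward - q[action])  # incremental update
    return q


if __name__ == "__main__":
    random.seed(0)
    values = train()
    print(max(values, key=values.get))
```

In a realistic setting the scalar reward would be replaced by telemetry from the deception environment, and an LLM could generate the decoy content (banners, file systems, protocol responses) that the RL policy chooses among.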