This paper introduces T2T, a system for captioning smartphone activities directly from encrypted mobile traffic, addressing limitations of traditional activity classification. T2T employs a flow feature encoder to extract latent features from traffic and a caption decoder to generate activity transcripts. To annotate traffic automatically, the system feeds synchronized screen capture videos to the Qwen-VL-Max vision-language model, and it trains with multi-stage losses for cross-modal alignment, achieving strong results on a dataset of 40,000 traffic-description pairs.
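As a concrete illustration of this encoder-decoder design, the sketch below pairs a Transformer-based flow feature encoder with an autoregressive caption decoder in PyTorch. All module names, dimensions, and the choice of per-packet features (size, direction, inter-arrival time) are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a traffic-captioning encoder-decoder in PyTorch.
# Feature set, dimensions, and vocabulary size are assumptions for
# illustration; the paper's actual design and losses may differ.
import torch
import torch.nn as nn

class FlowFeatureEncoder(nn.Module):
    """Maps a sequence of per-packet features (e.g., size, direction,
    inter-arrival time) to latent features with a Transformer encoder."""
    def __init__(self, n_feats=3, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.proj = nn.Linear(n_feats, d_model)  # lift raw features to d_model
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                  # x: (batch, n_packets, n_feats)
        return self.encoder(self.proj(x))  # (batch, n_packets, d_model)

class CaptionDecoder(nn.Module):
    """Autoregressive Transformer decoder that attends to the traffic
    latents and emits caption tokens."""
    def __init__(self, vocab_size=8000, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, memory):     # tokens: (batch, seq)
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(
            tokens.size(1)).to(tokens.device)
        h = self.decoder(self.embed(tokens), memory, tgt_mask=mask)
        return self.lm_head(h)             # (batch, seq, vocab_size)
```

Under this setup, a first training stage might minimize token-level cross-entropy against the VLM-generated captions, with later stages refining cross-modal alignment; the paper's multi-stage losses would slot in here.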
Forget tapping user data: T2T captions smartphone activities with impressive accuracy just by analyzing encrypted network traffic.
This paper studies the generation of textual descriptions of user activities and interactions on smartphones. Compared with traditional smartphone activity classification methods, inferring captions from encrypted mobile traffic offers better model scalability and output readability. The paper addresses two obstacles to realizing this idea: the semantic gap between traffic features and smartphone activity captions, and the lack of textually annotated traffic data. To overcome these challenges, we introduce a novel smartphone activity captioning system called T2T (Traffic-to-Text). T2T consists of a flow feature encoder that converts low-level traffic characteristics into meaningful latent features and a caption decoder that yields readable transcripts of smartphone activities. In addition, T2T automates the textual annotation of mobile traffic by feeding synchronized screen capture videos into the Qwen-VL-Max vision-language model, and adopts multi-stage losses for effective cross-modal training. We evaluate T2T on 40,000 traffic-description pairs collected in two real-world environments, involving 8 smartphone users and 20 mobile apps. T2T achieves a BLEU-4 score of 58.1, a METEOR score of 38.3, a ROUGE-L score of 70.5, and a CIDEr score of 108.7. The quantitative and qualitative analyses show that T2T can generate semantically accurate captions comparable to those of the vision-language model.
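For context on the reported numbers, BLEU-4, METEOR, ROUGE-L, and CIDEr are the standard COCO-style captioning metrics and can be computed with the pycocoevalcap package, as in this minimal sketch. The sample captions and ids are invented, and the package expects Java for METEOR and typically pre-tokenized text.

```python
# Minimal sketch: scoring generated captions against references with
# pycocoevalcap (the standard COCO captioning scorers). The example
# captions are illustrative, not data from the paper.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

# Both dicts map a sample id to a list of caption strings.
refs = {"flow_001": ["the user scrolls a news feed in the app"]}
hyps = {"flow_001": ["the user scrolls the news feed"]}

for name, scorer in [("BLEU", Bleu(4)), ("METEOR", Meteor()),
                     ("ROUGE-L", Rouge()), ("CIDEr", Cider())]:
    score, _ = scorer.compute_score(refs, hyps)
    print(name, score)  # Bleu(4) returns a list of BLEU-1..BLEU-4 scores
```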