Authors:
(1) Jinyu Cai, Waseda University ([email protected]);
(2) Jialong Li, Waseda University ([email protected]);
(3) Mingyue Zhang, Southwest University ([email protected]);
(4) Munan Li, Dalian Maritime University ([email protected]);
(5) Chen-Shu Wang, National Taipei University of Technology ([email protected]);
(6) Kenji Tei, Tokyo Institute of Technology ([email protected]).
V. CONCLUSION AND FUTURE WORK
Our study has introduced an LLM-based multi-agent simulation framework that captures the nuanced strategies individuals use to bypass social media regulations. Through this framework, we have demonstrated LLMs' ability to adapt communication tactics within regulated environments, reflecting the interplay between evolving language use and the constraints imposed by regulation. From abstract concepts to real-world scenarios, our results illustrate the versatile capabilities of LLMs and underscore their potential to illuminate the pathways of language evolution in the digital realm.
Nonetheless, the linguistic adaptations observed in our simulations may not fully reflect real human behavior, and their generalizability to other contexts remains uncertain. Future work should therefore pursue a broader and more detailed exploration: incorporating richer interaction models, scaling the simulations to larger user networks, and introducing dynamic, evolving regulatory frameworks that better represent the fluidity of social media. We also envision involving human participants in the simulation framework, either as dialogue participants or as supervisors, to enable more realistic evaluation. Finally, adopting a multimodal approach would more authentically capture social media, which blends textual, visual, and other forms of communication. Together, these directions are expected to enhance the realism of our simulations and yield richer insights into the language-evolution tactics deployed to elude regulatory detection.
ACKNOWLEDGEMENT
This study was partially supported by the Pioneering Research Program for a Waseda Open Innovation Ecosystem (W-SPRING), and the Special Research Projects of Waseda University (Grant Number 2024E-021).
This paper is available on arxiv under CC BY 4.0 DEED license.