Abstract:
Large Language Models (LLMs) are increasingly used in real-world applications, but as their capabilities grow, so do the risks of misuse. Despite their widespread adoption, the security of these models remains an area with many open questions. This paper explores these issues through a set of applied experiments carried out in a controlled testing environment. We designed a prototype application that demonstrates how an LLM security benchmarking tool could function in practice. The application allows users to simulate attacks and assess the effectiveness of several defense strategies, such as in-context defense and paraphrase-based defense. The experimental results show notable differences between the tested methods: some techniques were able to fully block attacks while preserving the model's ability to respond accurately to regular prompts. Our work supports more secure development of LLMs by evaluating their resilience to known attacks, and it provides a practical prototype that serves as a starting point for future research and can be extended with more advanced evaluation methodologies for the security of generative AI systems.