| dc.contributor.author | NITESCU, George | |
| dc.contributor.author | OUATU, Andrei | |
| dc.contributor.author | ȚURCANU, Dinu | |
| dc.date.accessioned | 2026-02-17T17:16:35Z | |
| dc.date.available | 2026-02-17T17:16:35Z | |
| dc.date.issued | 2025 | |
| dc.identifier.citation | NITESCU, George; Andrei OUATU and Dinu ȚURCANU. Evaluating large language models security and resilience: A practical testing framework. In: 24th RoEduNet International Conference Networking in Education and Research, Chisinau, Republic of Moldova, 17-19 September, 2025. Universitatea Politehnică din Bucureşti. IEEE Computer Society, 2025, pp. 1-6. ISBN 979-8-3315-5714-0, eISBN 979-8-331-55713-3, ISSN 2068-1038, eISSN 2247-5443. | en_US |
| dc.identifier.isbn | 979-8-3315-5714-0 | |
| dc.identifier.isbn | 979-8-331-55713-3 | |
| dc.identifier.issn | 2068-1038 | |
| dc.identifier.issn | 2247-5443 | |
| dc.identifier.uri | https://doi.org/10.1109/RoEduNet68395.2025.11208478 | |
| dc.identifier.uri | https://repository.utm.md/handle/5014/35267 | |
| dc.description | Acces full text: https://doi.org/10.1109/RoEduNet68395.2025.11208478 | en_US |
| dc.description.abstract | Large Language Models (LLMs) are increasingly used in real-world applications, but as their capabilities grow, so do the risks of misuse. Despite their widespread adoption, the security of these models remains an area with many open questions. This paper explores these issues through a set of applied experiments carried out in a controlled environment designed for testing. A prototype application that allows demonstrating how an LLM security benchmarking tool could function in practice was designed. The application allows users to simulate attacks and assess the effectiveness of several defense strategies as in-context defense and paraphrase-based. The experimental results show notable differences between the tested methods. Some techniques were able to fully block attacks while maintaining the model' ability to respond accurately to regular prompts. Our work paves the way for a more secure development of LLMs by evaluating their resilience to known attacks, while also providing a practical prototype that serves as a starting point for future research and can be extended to support more advanced evaluation methodologies in the context of security of generative AI systems. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | IEEE (Institute of Electrical and Electronics Engineers) | en_US |
| dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
| dc.subject | attacks | en_US |
| dc.subject | defenses | en_US |
| dc.subject | machine learning | en_US |
| dc.subject | security | en_US |
| dc.title | Evaluating large language models security and resilience: A practical testing framework | en_US |
| dc.type | Article | en_US |
The following license files are associated with this item: