Abstract:
Fake news is a problem of growing global importance, and artificial intelligence (AI) and machine learning (ML) are increasingly used to combat it by flagging potential hoaxes. This paper compares traditional models with deep learning techniques across a range of ML and natural language processing (NLP) strategies, from basic methods to sophisticated transformer architectures (e.g., BERT, GPT). Key challenges include limited and costly annotated datasets, model bias, and adversarial attacks on information. The findings show that transformer models outperform traditional methods but remain vulnerable to ethical concerns and adversarial manipulation. Combining linguistic and network-based approaches holds promise for further improving model performance. Future work should aim to increase adaptability, reduce bias, and keep a human in the loop in AI-assisted misinformation detection.