
Retrieval-Augmented Generation (RAG) has been widely regarded as a promising way to enhance Large Language Models (LLMs), offering a path to greater accuracy and contextual relevance in their outputs. However, recent research from Bloomberg reveals complex safety concerns that accompany the integration of RAG into these models, challenging the traditionally held belief that RAG inherently reinforces the safety of LLMs.