This study investigates the integration of the Llama 2 7B large language model (LLM) with the Google Query API to improve the model's accuracy and reduce the incidence of hallucinations. By leveraging real-time internet data, we aim to address the limitations of static training datasets and improve performance across a range of language processing tasks. The methodology augments Llama 2 7B's architecture to incorporate dynamic data retrieval from the Google Query API, followed by an evaluation of the impact on accuracy and hallucination reduction using the BIG-Bench benchmark. The results indicate significant improvements in both accuracy and reliability, demonstrating the effectiveness of coupling LLMs with external data sources. This integration marks a substantial advance in LLM capabilities, but it also raises important considerations regarding data bias, privacy, and the ethical use of internet-sourced information. The findings contribute to the ongoing discourse on enhancing LLMs and suggest a promising direction for future research and development in artificial intelligence.
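The retrieval step described above can be sketched as a simple retrieval-augmented prompting loop: fetch search results for the user's question, then prepend them to the prompt before generation. This is a minimal illustration only; `web_search` is a hypothetical stub standing in for the paper's search-API call, and no real Google endpoint or Llama 2 interface is invoked.

```python
def web_search(query, k=3):
    """Hypothetical stand-in for a search-API call (e.g. the paper's
    Google Query API); returns up to k text snippets for the query."""
    corpus = {
        "capital of australia": ["Canberra is the capital of Australia."],
    }
    return corpus.get(query.lower(), [])[:k]

def build_augmented_prompt(question):
    """Prepend retrieved snippets so the model can ground its answer
    in fresh data instead of relying only on static training knowledge."""
    snippets = web_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using the context below; say 'unknown' if it is insufficient.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

# The augmented prompt would then be passed to the LLM for generation.
prompt = build_augmented_prompt("capital of Australia")
```

In a full system the prompt would be sent to the model, so hallucination reduction depends on both retrieval quality and the model's willingness to defer to the supplied context.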