Rogue Scholar Posts

Published in Stories by Research Graph on Medium

A Unified and Collaborative Framework for LLM Author Qingqin Fang (ORCID: 0009-0003-5348-4264) Introduction In today's rapidly evolving field of artificial intelligence, large language models (LLMs) are demonstrating unprecedented potential. In particular, the Retrieval-Augmented Generation (RAG) architecture has become a hot topic in AI technology due to its unique technical capabilities.

Published in Stories by Research Graph on Medium

Transformative Advances in Language Models through External Knowledge Integration Author: Qingqin Fang (ORCID: 0009-0003-5348-4264) Introduction In the dynamic field of natural language processing, the integration of external knowledge has emerged as a pivotal strategy for enhancing the performance of language models.

Published in Stories by Research Graph on Medium

Improving the performance of Large Language Models Author Dhruv Gupta (ORCID: 0009-0004-7109-5403) ChatGPT, which first came out in late 2022, took the world by storm. Since then, various LLMs and LLM-based products, such as Meta's Llama and Google's Gemini, have emerged, demonstrating the power of these models.

Published in Stories by Research Graph on Medium

Improving the performance and application of Large Language Models Author Amanda Kau (ORCID: 0009-0004-4949-9284) Large language models (LLMs) like GPT-4, the engine of products like ChatGPT, have taken centre stage in recent years due to their astonishing capabilities. Yet, they are far from perfect.

Published in Stories by Research Graph on Medium

How to efficiently retrieve information for different applications Author Wenyi Pi (ORCID: 0009-0002-2884-2771) This article explores various ways in which Retrieval-Augmented Generation (RAG) can be utilised to retrieve information and generate responses effectively within dialogue systems. The rationale behind utilising RAG, as well as potential ways in which it can be employed effectively, will be covered.

Published in Stories by Research Graph on Medium

The AI Helper Turning Mountains of Data into Bite-Sized Instructions Author Aland Astudillo (ORCID: 0009-0008-8672-3168) LLMs have been changing the way the entire world deals with problems and day-to-day tasks. To make them better for specific applications, they need huge amounts of data and complex, expensive training approaches.

Published in Stories by Research Graph on Medium

Authors Nakul Nambiar (ORCID: 0009-0009-9720-9233) and Amir Aryani (ORCID: 0000-0002-4259-9774) Knowledge graphs, which offer a structured representation of data and its relationships, are revolutionising how we organise and access information.
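
The "structured representation of data and its relationships" mentioned in this excerpt is commonly modelled as subject-predicate-object triples. The snippet below is a minimal, illustrative sketch of that idea only; the entities, relations, and the simple subject index are invented for the example and are not drawn from the post.

```python
from collections import defaultdict

# Minimal sketch of a knowledge graph stored as subject-predicate-object triples.
# The entities and relations below are invented purely for illustration.
triples = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Nobel Prize in Physics", "awarded_in", "1903"),
]

# Index the triples by subject so related facts can be looked up together.
graph = defaultdict(list)
for subject, predicate, obj in triples:
    graph[subject].append((predicate, obj))

# Retrieve everything the graph records about one entity.
for predicate, obj in graph["Marie Curie"]:
    print(f"Marie Curie --{predicate}--> {obj}")
```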

Published in Stories by Amir Aryani on Medium

Authors: Hui Yin, Amir Aryani As we discussed in our previous article “A Brief Introduction to Retrieval Augmented Generation (RAG)”, RAG is an artificial intelligence framework that incorporates the latest reliable external knowledge and aims to improve the quality of responses generated by pre-trained language models (PLMs). Initially, it was designed to improve the performance of knowledge-intensive NLP tasks (Lewis et al., 2020). As
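
For readers new to the framework described in that excerpt, the sketch below illustrates the basic retrieve-then-generate flow of RAG in Python. It is a minimal illustration under stated assumptions rather than the pipeline from the article: the sample documents are invented, the word-overlap retriever is a toy stand-in for a real vector index, and the generation step is represented only by the assembled prompt that would be sent to a language model.

```python
# Minimal sketch of a retrieve-then-generate (RAG) loop.
# The documents, the word-overlap retriever, and the prompt template are all
# illustrative stand-ins, not components named in the article.

documents = [
    "RAG pairs a retriever with a generator to ground answers in external text.",
    "Pre-trained language models can hallucinate facts about recent events.",
    "Knowledge-intensive NLP tasks benefit from retrieved supporting passages.",
]

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    query_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, passages):
    """Augment the query with the retrieved passages before generation."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

query = "Why do language models hallucinate about recent events?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)  # In a real pipeline, this prompt would be passed to an LLM.
```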

Tools and Platform for Integration of Knowledge Graph with RAG pipelines. Authors Aland Astudillo (ORCID: 0009-0008-8672-3168) and Aishwarya Nambissan (ORCID: 0009-0003-3823-6609) Many users of chatbots such as ChatGPT have encountered the problem of receiving inappropriate or incompatible responses. There are several reasons why this might happen.

Published in Stories by Amir Aryani on Medium

Authors: Hui Yin, Amir Aryani With the increasing application of large language models in various scenarios, people realize that these models are not omnipotent. When generating dialogues (Shuster et al., 2021), the models often produce hallucinations, leading to inaccurate answers.