
100 Days of AI Day 6: Retrieval Techniques and Their Use Cases

by Nataraj, January 16th, 2024

Too Long; Didn't Read

Day 6 of 100 Days of AI delves into the pivotal role of retrieval in RAGs. From basic semantic similarity to cutting-edge algorithms like Maximal Marginal Relevance and LLM-Aided Retrieval, the article navigates the evolving landscape of retrieval techniques. Discover how precision in retrieval impacts the quality of AI-generated responses and stay tuned for upcoming insights on incorporating these advancements into chat-with-your-data applications.



What is Retrieval in the context of building RAGs?

In RAG (Retrieval Augmented Generation) applications, retrieval refers to the process of extracting the most relevant data chunks/splits from a vector database based on the question received from the user. If your retrieval technique is weak, the quality of the information you can give back to the user suffers. The chunks retrieved from the vector db are sent to the LLM as context to generate the final answer, which is then returned to the user as the output.
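
To make this concrete, here is a minimal sketch of the retrieve-then-generate flow, assuming Chroma as the vector db (the collection name and documents are hypothetical; any vector store with a query API follows the same pattern):

```python
import chromadb

# Build a toy index. In a real app the chunks come from your split documents.
client = chromadb.Client()
collection = client.create_collection(name="docs")  # hypothetical name
collection.add(
    ids=["c1", "c2", "c3"],
    documents=[
        "The death cap is an all-white mushroom with a large fruiting body.",
        "Death caps are among the most poisonous mushrooms known.",
        "Chanterelles are yellow, funnel-shaped edible mushrooms.",
    ],
)

# Retrieval: pull the chunks most similar to the user's question.
question = "Tell me about all-white mushrooms with large fruiting bodies"
results = collection.query(query_texts=[question], n_results=2)
chunks = results["documents"][0]

# The retrieved chunks become the context the LLM answers from.
prompt = (
    "Answer using only this context:\n"
    + "\n".join(chunks)
    + f"\n\nQuestion: {question}"
)
```

The final LLM call with `prompt` is omitted; the point is that answer quality is bounded by what ends up in `chunks`.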


Different types of retrieval techniques:

  1. Basic Semantic Similarity: In this algorithm, you retrieve the data chunks from the vector db that are semantically closest to the question asked by the user. For example, if the user's question is “Tell me about all-white mushrooms with large fruiting bodies”, a simple semantic similarity search will return relevant chunks but will miss the information that these mushrooms might also be poisonous.


    Here are some edge cases you could encounter using semantic similarity (the first one is sketched in code after this list):

    1. If we load duplicate files and our answer exists in both, the retriever will return both copies. So we need relevant and distinct results from the vector store when we ask a question.

    2. If we ask a question whose answer should come only from doc-2, we may still get chunks from doc-1, because this is a semantic search and we haven’t explicitly controlled which doc to look into and which to skip.
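
Here is the first edge case in code, again assuming Chroma (the file names and metadata are hypothetical): indexing the same sentence from two files returns both copies, because plain similarity search has no notion of “distinct”.

```python
import chromadb

client = chromadb.Client()
collection = client.create_collection(name="dup_demo")  # hypothetical name

# The same sentence indexed from two different (hypothetical) files.
collection.add(
    ids=["doc1-chunk1", "doc2-chunk1"],
    documents=["Death caps are highly poisonous."] * 2,
    metadatas=[{"source": "doc-1.pdf"}, {"source": "doc-2.pdf"}],
)

results = collection.query(query_texts=["Are death caps poisonous?"], n_results=2)
# Both near-identical chunks come back: deduplication (or MMR, below) is on us.
print(results["documents"][0])
```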


  2. Maximal Marginal Relevance (MMR): In many retrievals we want not just similarity but also diversity. In the mushroom example above, pure similarity search won’t surface the information about how poisonous the mushrooms are. We can instead use MMR to inject diversity into our retrievals.


    Here’s the MMR algorithm for picking the relevant data chunks (sketched in code below):


    1. Query the vector store.
    2. Choose the fetch_k most similar responses.
    3. Among those responses, choose the k most diverse.


[Figure: Maximal Marginal Relevance (MMR)]
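
A self-contained NumPy sketch of those three steps follows. The `lambda_mult` knob (my naming, borrowed from common MMR formulations) trades relevance against diversity; libraries such as LangChain expose a ready-made max-marginal-relevance search on their vector stores.

```python
import numpy as np

def mmr(query_vec, doc_vecs, fetch_k=20, k=5, lambda_mult=0.5):
    """Return indices of k chunks balancing relevance and diversity."""
    # Steps 1-2: cosine similarity to the query, keep the fetch_k best.
    norms = np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    sims = (doc_vecs @ query_vec) / norms
    candidates = list(np.argsort(sims)[::-1][:fetch_k])

    # Step 3: greedily pick k results, penalizing redundancy with
    # anything already selected.
    selected = [candidates.pop(0)]
    while candidates and len(selected) < k:
        best, best_score = None, -np.inf
        for i in candidates:
            redundancy = max(
                (doc_vecs[i] @ doc_vecs[j])
                / (np.linalg.norm(doc_vecs[i]) * np.linalg.norm(doc_vecs[j]))
                for j in selected
            )
            score = lambda_mult * sims[i] - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lambda_mult=1.0` this degrades to plain similarity search; lower values favor diversity, which is what would surface the “poisonous” chunk in the mushroom example.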



3. LLM-Aided Retrieval: We can also use the LLM itself to aid retrieval by splitting the question into a filter and a search term. Once the LLM splits the query, we pass the filter to the vector db as a metadata filter, which most vector DBs support (see the sketch below).


[Figure: LLM-Aided Retrieval]
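
Here is one way to sketch the idea, assuming an OpenAI chat model for the query-splitting step and a Chroma collection whose chunks carry a `lecture` metadata field (the model name, prompt, and field are all hypothetical; LangChain packages this pattern as its SelfQueryRetriever):

```python
import json
from openai import OpenAI

llm = OpenAI()
question = "What did the lecturer say about regression in Lecture 3?"

# Step 1: ask the LLM to split the question into a search term + metadata filter.
response = llm.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[{
        "role": "user",
        "content": (
            "Return JSON with keys 'search_term' (a string) and 'filter' "
            '(a metadata dict such as {"lecture": 3}) for this question: '
            + question
        ),
    }],
    response_format={"type": "json_object"},  # keep the output parseable
)
parsed = json.loads(response.choices[0].message.content)

# Step 2: pass the filter to the vector db as a metadata filter.
# `collection` is a Chroma collection whose chunks carry a "lecture" field.
results = collection.query(
    query_texts=[parsed["search_term"]],
    n_results=3,
    where=parsed["filter"],  # e.g. {"lecture": 3}
)
```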



Note that there are also retrieval techniques that do not use vector databases, such as SVMs and TF-IDF.
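
For instance, a TF-IDF retriever needs only scikit-learn and a list of chunks, with no embedding model or vector db (the corpus and query here are hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "The death cap is an all-white mushroom with a large fruiting body.",
    "Death caps are among the most poisonous mushrooms known.",
    "Chanterelles are yellow, funnel-shaped edible mushrooms.",
]

# Index: sparse TF-IDF vectors instead of dense embeddings.
vectorizer = TfidfVectorizer()
chunk_matrix = vectorizer.fit_transform(chunks)

# Retrieve: rank chunks by cosine similarity to the query's TF-IDF vector.
query_vec = vectorizer.transform(["all-white mushrooms with large fruiting bodies"])
scores = cosine_similarity(query_vec, chunk_matrix).ravel()
top = scores.argsort()[::-1][:2]
print([chunks[i] for i in top])
```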

Retrieval is where a lot of innovation is currently happening, and the space is changing rapidly. In an upcoming post I will use these retrieval techniques to build a chat-with-your-data application. Keep an eye out for it.


That’s it for Day 6 of 100 Days of AI.


I write a newsletter called Above Average where I talk about the second-order insights behind everything that is happening in big tech. If you are in tech and don’t want to be average, subscribe to it.

Follow me on Twitter or LinkedIn for the latest updates on 100 Days of AI, or bookmark this page. If you are in tech, you might be interested in joining my community of tech professionals here.


Also published here.