A Comprehensive Guide to LangChain: Building Powerful Applications with Large Language Models

by Mike Young, April 7th, 2023

Hey there! Let me introduce you to LangChain, an awesome library that empowers developers to build powerful applications using large language models (LLMs) and other computational resources. In this guide, I'll give you a quick rundown on how LangChain works and explore some cool use cases, like question-answering, chatbots, and agents. I'll also walk you through a quick-start guide to help you get going. Let's begin!

What is LangChain?

LangChain is a powerful framework designed to help developers build end-to-end applications using language models. It offers a suite of tools, components, and interfaces that simplify the process of creating applications powered by large language models (LLMs) and chat models. LangChain makes it easy to manage interactions with language models, chain together multiple components, and integrate additional resources, such as APIs and databases.


LangChain has a number of core concepts. Let's go through them one by one:

1. Components and Chains

In LangChain, components are modular building blocks that can be combined to create powerful applications. A chain is a sequence of components (or other chains) put together to accomplish a specific task. For example, a chain might include a prompt template, a language model, and an output parser, all working together to handle user input, generate a response, and process the output.
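
For instance, here's a minimal sketch of a chain that wires a prompt template into a language model (assuming the OpenAI integration and an OPENAI_API_KEY in your environment; the prompt and output are just illustrative):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A simple chain: prompt template -> language model
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)

chain.run("colorful socks")
# -> e.g. "Rainbow Threads Co." (output will vary)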

2. Prompt Templates and Values

A Prompt Template is responsible for creating a PromptValue, which is what's eventually passed to the language model. Prompt Templates help convert user input and other dynamic information into a format suitable for the language model. A PromptValue is an object with methods for converting it into the exact input type each model expects: plain text for an LLM, or a list of chat messages for a chat model.
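
Here's a quick sketch of that conversion in action (the prompt is just illustrative):

from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}.",
)
prompt_value = prompt.format_prompt(adjective="funny", content="chickens")

prompt_value.to_string()    # for LLMs: "Tell me a funny joke about chickens."
prompt_value.to_messages()  # for chat models: a list containing one HumanMessage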

3. Example Selectors

Example Selectors are useful when you want to include examples in a prompt dynamically. They take user input and return a list of examples to use in the prompt, making it more powerful and context-specific.
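
As a sketch, the built-in LengthBasedExampleSelector picks however many examples fit within a length budget (the example data here is made up):

from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)

# Selects only as many examples as fit within max_length
example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25,
)
dynamic_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(dynamic_prompt.format(input="big"))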

4. Output Parsers

Output Parsers are responsible for structuring language model responses into a more usable format. They implement two main methods: get_format_instructions, which returns a string telling the model how to format its response, and parse, which turns the model's raw response into a structured format. This makes it easier to work with the output data in your application.
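
For example, the built-in CommaSeparatedListOutputParser implements both:

from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()

parser.get_format_instructions()
# -> instructions telling the model to reply with comma-separated values
parser.parse("red, orange, yellow")
# -> ['red', 'orange', 'yellow']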

5. Indexes and Retrievers

Indexes are a way to organize documents to make it easier for language models to interact with them. Retrievers are interfaces for fetching relevant documents and combining them with language models. LangChain provides tools and functionality for working with different types of indexes and retrievers, like vector databases and text splitters.
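
Here's a minimal sketch of that flow, assuming the OpenAI embeddings integration, the FAISS vector store (pip install faiss-cpu), and a local state_of_the_union.txt file:

from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load a document and split it into chunks the model can handle
docs = TextLoader("state_of_the_union.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)

# Index the chunks in a vector store, then expose it as a retriever
db = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = db.as_retriever()

relevant_docs = retriever.get_relevant_documents("What did the president say about the economy?")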

6. Chat Message History

LangChain primarily interacts with language models through a chat interface. The ChatMessageHistory class is responsible for remembering all previous chat interactions, which can then be passed back to the model, summarized, or combined in other ways. This helps maintain context and improves the model's understanding of the conversation.
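
A quick sketch of the class in action:

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("Hi!")
history.add_ai_message("Hello! How can I help you?")

history.messages
# -> [HumanMessage(content="Hi!"), AIMessage(content="Hello! How can I help you?")]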

7. Agents and Toolkits

Agents are entities that drive decision-making in LangChain. They have access to a suite of tools and can decide which tool to call based on user input. Toolkits are sets of tools that, when used together, can accomplish a specific task. The Agent Executor is responsible for running agents with the appropriate tools.


By understanding and leveraging these core concepts, you can harness the power of LangChain to build advanced language model applications that are adaptable, efficient, and capable of handling complex use cases.

What is a LangChain Agent?

A LangChain Agent is an entity that drives decision-making in the framework. It has access to a set of tools and can decide which tool to call based on the user's input. Agents help build complex applications that require adaptive and context-specific responses. They are especially useful when there's an unknown chain of interactions that depend on the user's input and other factors.

How would someone use LangChain?

To use LangChain, a developer would start by importing the necessary components and tools, such as LLMs, chat models, agents, chains, and memory features. These components are combined to create an application that can understand, process, and respond to user inputs.

LangChain provides a variety of components for specific use cases, such as personal assistants, question answering over documents, chatbots, querying tabular data, interacting with APIs, extraction, evaluation, and summarization.

What's a LangChain model?

A LangChain model is an abstraction that represents different types of models used in the framework. There are three main types of models in LangChain:


  1. LLMs (Large Language Models): These models take a text string as input and return a text string as output. They are the backbone of many language model applications.
  2. Chat Models: Chat Models are backed by a language model but have a more structured API. They take a list of Chat Messages as input and return a Chat Message. This makes it easy to manage conversation history and maintain context.
  3. Text Embedding Models: These models take text as input and return a list of floats representing the text's embeddings. These embeddings can be used for tasks like document retrieval, clustering, and similarity comparisons (see the sketch just after this list).
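
To make the third type concrete, here's a minimal embedding sketch (assuming the OpenAI embeddings integration):

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("Hello world")
len(vector)
# -> 1536 with OpenAI's default text-embedding-ada-002 model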


Developers can choose the appropriate LangChain model for their use case and leverage the provided components to build their applications.

Key Features of LangChain

LangChain is designed to support developers in six main areas:


  1. LLMs and Prompts: LangChain makes it easy to manage prompts, optimize them, and create a universal interface for all LLMs. Plus, it includes some handy utilities for working with LLMs.
  2. Chains: These are sequences of calls to LLMs or other utilities. LangChain provides a standard interface for chains, integrates with various tools, and offers end-to-end chains for popular applications.
  3. Data Augmented Generation: LangChain enables chains to interact with external data sources to gather data for the generation step. For example, it can help with summarizing long texts or answering questions using specific data sources.
  4. Agents: An agent lets an LLM make decisions about actions, take those actions, check the results, and keep going until the job's done. LangChain provides a standard interface for agents, a variety of agents to choose from, and examples of end-to-end agents.
  5. Memory: LangChain has a standard interface for memory, which helps maintain state between chain or agent calls. It also offers a range of memory implementations and examples of chains or agents that use memory.
  6. Evaluation: It's tough to evaluate generative models with traditional metrics. That's why LangChain provides prompts and chains to help developers assess their models using LLMs themselves (see the sketch just after this list).
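
To ground that last point, here's a sketch of LLM-assisted evaluation using QAEvalChain (the data and the graded output are illustrative):

from langchain.llms import OpenAI
from langchain.evaluation.qa import QAEvalChain

examples = [{"question": "What is 2 + 2?", "answer": "4"}]
predictions = [{"result": "2 + 2 equals 4."}]

eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
graded = eval_chain.evaluate(examples, predictions, question_key="question", prediction_key="result")
# -> e.g. [{'text': 'CORRECT'}]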

Use Cases

LangChain supports a bunch of use cases, like:


  • Question Answering over specific documents: Answer questions based on given documents, using the info in those documents to create answers.
  • Chatbots: Build chatbots that can produce text by leveraging LLMs' capabilities.
  • Agents: Develop agents that can decide on actions, take those actions, observe the results, and keep going until they're done.

Quickstart Guide: Building an End-to-End Language Model Application with LangChain

Installation

First, let's get LangChain installed. Just run the following command:

pip install langchain

Environment Setup

Now, since LangChain often needs to integrate with model providers, data stores, APIs, and more, we'll set up our environment. In this example, we're going to use OpenAI's APIs, so we need to install their SDK:

pip install openai

Next, let's set up the environment variable in the terminal:

export OPENAI_API_KEY="..."

Or, if you prefer to work inside a Jupyter notebook or Python script, you can set the environment variable like this:

import os
os.environ["OPENAI_API_KEY"] = "..."

Building a Language Model Application: LLMs

With LangChain installed and the environment set up, we're ready to start building our language model application. LangChain provides a bunch of modules that you can use to create language model applications. You can combine these modules for more complex applications or use them individually for simpler ones.
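
The most basic building block is calling an LLM directly on a string. Here's a minimal sketch using the OpenAI wrapper (a higher temperature makes the output more random):

from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
llm("Tell me a joke")
# -> e.g. "Why did the chicken cross the road? To get to the other side!"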

Building a Language Model Application: Chat Models

In addition to LLMs, you can also work with chat models. These are a variation on language models: they use an LLM under the hood but expose a different interface. Instead of a "text in, text out" API, chat models work with chat messages as inputs and outputs. Chat model APIs are pretty new, so everyone is still figuring out the best abstractions.


To get chat completions, you'll need to pass one or more messages to the chat model. LangChain currently supports AIMessage, HumanMessage, SystemMessage, and ChatMessage types. You'll mostly work with HumanMessage, AIMessage, and SystemMessage.


Here's an example of using chat models:

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)


You can get completions by passing in a single message:

chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})


Or pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models:

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})


You can also generate completions for multiple sets of messages using generate. This returns an LLMResult whose ChatGeneration entries each carry the full response as an AIMessage in their message attribute:

batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})


And you can extract information like token usage from the LLMResult:

result.llm_output['token_usage']
# -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}


With chat models, you can also use templating by employing a MessagePromptTemplate. You can create a ChatPromptTemplate from one or more MessagePromptTemplates. The format_prompt method of ChatPromptTemplate returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.


Here's an example:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})


You can use the LLMChain with chat models as well:

from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."


You can also use agents with chat models. Initialize an agent using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type:

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
# (The `serpapi` tool requires the google-search-results package and a SerpAPI key in the SERPAPI_API_KEY environment variable.)
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")


In this example, the agent will interactively perform a search and calculation to provide the final answer.


Finally, let's explore using Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that we can keep previous messages as their own unique memory objects, rather than condensing them into a single string.


Here's an example of using a ConversationChain:

from langchain.prompts import (
    ChatPromptTemplate, 
    MessagesPlaceholder, 
    SystemMessagePromptTemplate, 
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)

conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'

conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"

conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"


In this example, we used a ConversationChain to maintain the context of the conversation across multiple interactions with the AI.


That's it! Now you have a solid understanding of how to use LangChain to build end-to-end language model applications. By following these examples, you can develop powerful language model applications using LLMs, chat models, agents, chains, and memory features.

Conclusion

In conclusion, LangChain is a powerful framework that simplifies the process of building advanced language model applications by providing a modular and flexible approach. By understanding the core concepts, such as components, chains, prompt templates, output parsers, indexes, retrievers, chat message history, and agents, you can create custom solutions tailored to your specific needs. LangChain's adaptability and ease of use make it an invaluable tool for developers, enabling them to unlock the full potential of language models and create intelligent, context-aware applications across a wide range of use cases.

