Machine Learning Spot

How to Use LangChain Chains: 3 Popularly Used Chains

LangChain Chains

Chains are the feature that likely gave LangChain its name. They are a powerful feature that lets us connect different components, such as prompt templates and LLMs, into a single pipeline.

LangChain chains remove the boundaries of creativity by giving the developer flexibility. LangChain is like a puzzle in which you can place any piece wherever you choose, and every piece fits well thanks to chains.

Let's get extraordinarily creative by learning how to use this awesome feature in this article.

By the end of this article, you will learn:

  1. How to chain an LLM with a prompt template
  2. How to chain two different chains together
  3. The difference between the three most used chains
  4. How to refine output using two different chains
  5. What a simple sequential chain is
  6. What a sequential chain is
  7. How to use LangChain to learn historical facts and generate reviews

Note: This blog on LangChain chains is part of our LangChain learning series, in which I am striving to make you an expert in LangChain. The list above is just a sneak peek at what you will learn in this blog.

Installing Required Packages

  • pip install langchain — we will use it for chains and prompt templates.
  • pip install -U langchain-openai — to interact with OpenAI's LLMs.

Remember, any LLM (even several different LLMs at once) can be used in LangChain, but here we are going to use OpenAI only.

Importing Everything Necessary

Let's import everything required to successfully run each line of code in this blog on LangChain chains!

# Importing OpenAI
from langchain_openai import OpenAI

# Importing every chain that we are going to use, in a single line
from langchain.chains import LLMChain, SimpleSequentialChain, SequentialChain
from langchain.prompts import PromptTemplate
import os

Setting Up the OpenAI Environment

Since we will be using OpenAI LLM, get your OpenAI API key and set the environment using the code below.

os.environ["OPENAI_API_KEY"] = "MY API" #Replace it with your own API Key

Creating a Prompt Template

Now we are creating a prompt template that we will chain with our LLM. If you want to learn in detail how a prompt template is crafted, read my blog on prompt templates.

prompt_template = ("""
I want you to act as a historian. I will provide you with a topic related to history.
Your task is to research, analyze, and provide a detailed account of the history of {topic}.
Please provide the information in a clear and concise manner, using bullet points where appropriate.
Do not include any personal opinions or speculations in your response.
""")

# Creating a PromptTemplate instance

# While creating a PromptTemplate instance, we just specify the input variable
# and the template text that contains it; here the string prompt_template has it.

prompt1 = PromptTemplate(input_variables=["topic"], template=prompt_template)

# Format the prompt with a topic; here I used "pakistan"

formatted_prompt = prompt1.format(topic="pakistan") # .format replaces {topic} with the assigned word

print(formatted_prompt) # Prints the instructions of the prompt template


So now, we have created the template. Let’s connect this with an LLM using the simplest chain to run this prompt.

LangChain Chain No. 1: The Simplest Chain in LangChain

This is going to be our first LangChain chain, which is the most basic one:

from langchain.chains import LLMChain # Written here just to explain; already imported above

llm = OpenAI(temperature=0.3) # Get output from OpenAI with randomness of 0.3
topic = "pakistan" # Assigning the topic we want to read about

chain1 = LLMChain(llm=llm, prompt=prompt1) # Chaining our template with the LLM
chain1.invoke(topic) # Invoking the chain with the topic "pakistan"

The output we got was:

{'topic': 'pakistan', 'text': "\n1. August 14th - Pakistan's Independence Day\n2. March 23rd - Pakistan Day\n3. September 6th - Defence Day of Pakistan\n4. December 25th - Quaid-e-Azam's Birthday\n5. July 5th - Kashmir Martyrs Day\n6. September 11th - Death Anniversary of Quaid-e-Azam\n7. October 27th - Black Day for Kashmir\n8. November 9th - Iqbal Day\n9. December 16th - Victory Day (also known as Army Day)\n10. April 21st - Youm-e-Takbeer (Day of Greatness) - commemorating Pakistan's nuclear tests in 1998."}
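Notice that invoke returns a dictionary containing the input variable plus a "text" key holding the generated answer. Here is a minimal pure-Python sketch of how you would pull out just the text, using a stand-in dict in place of a live OpenAI call (a real run needs a valid API key):

```python
# Stand-in for the dictionary returned by chain1.invoke(topic);
# the values are shortened from the output shown above.
result = {
    "topic": "pakistan",
    "text": "\n1. August 14th - Pakistan's Independence Day\n2. March 23rd - Pakistan Day",
}

# The generated answer lives under the "text" key by default.
answer = result["text"]
print(answer.strip().splitlines()[0])  # First bullet of the answer
```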

Okay, now let's build a simple sequential chain to go deeper into how to use LangChain chains. We will connect our prompt template's chain above, chain1, with a second prompt-template chain, which we will call chain2.

We will feed the output of chain1 as input to chain2 using a simple sequential chain, the second chain of our LangChain chains blog.

The chain we used above was a simple chain; now we have a sequential chain in which two simple chains are connected.

2. The Simple Sequential Chain: Combining Chain1 and Chain2

from langchain.chains import SimpleSequentialChain # We already imported it above

prompt_template = ("""
print the dates that are discussed above about {topic}
""") # Our prompt template just separates out the dates

# Create a PromptTemplate instance
prompt1 = PromptTemplate(input_variables=["topic"], template=prompt_template)

# Format the prompt with a topic
#formatted_prompt = prompt1.format(topic="pakistan") # Uncomment this line if you want to see how the prompt will appear to the LLM

llm = OpenAI(temperature=0.3)
topic="dates"

chain2 = LLMChain(llm=llm, prompt=prompt1)


chain = SimpleSequentialChain(chains=[chain1, chain2]) # Output of chain1 goes to chain2

chain.invoke("Pakistan") # Invoking the overall combined chain "chain"

Output:

In the second template, we asked for the dates mentioned by chain1; hence the output contains only the dates, as you can see below.

{'input': 'Pakistan', 'output': '\n1. August 14th\n2. March 23rd\n3. September 6th\n4. December 25th\n5. July 5th\n6. September 11th\n7. April 21st\n8. May 28th\n9. October 27th\n10. November 9th'}

Now let's move on to sequential chains, one of the most powerful tools among LangChain chains. The difference between a simple sequential chain and a sequential chain is flexibility: a simple sequential chain returns only the final output, so if we had not run chain1 separately, we could not have seen chain1's output.

Both simple sequential and sequential chains can be extended as far as we want; we can connect as many chains together as we like. However, a simple sequential chain allows only one input and one output, whereas a sequential chain can have multiple inputs and outputs.

Not only that, sequential chains have a lot more to offer than fits in this blog. Sequential chains also let you explicitly define, through output keys, which output from one step feeds into which input of the next.

Sequential chains can take outputs from different parts of the chain and combine them into a final result; furthermore, combined with the right components, LangChain's sequential chains can support conditional branching.

Such branching disrupts the normal, one-after-the-other execution flow by introducing a decision point: whenever a defined condition is met, you can change the entire flow of the program, giving more flexibility than the LangChain chains we have discussed so far.
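To make the contrast concrete, here is a plain-Python analogy of the two execution styles (not LangChain code, just an illustration): a simple sequential chain pipes a single string through each step, while a sequential chain works on a dictionary of named keys, so later steps can read any earlier output:

```python
# Plain-Python analogy of the two chain styles (not LangChain code).

# Simple sequential: one string in, one string out, piped step by step.
def history_step(topic: str) -> str:
    return f"history of {topic}"

def dates_step(history: str) -> str:
    return f"dates from the {history}"

simple_output = dates_step(history_step("pakistan"))

# Sequential: steps read and write named keys in a shared dictionary,
# similar to output_key in LangChain's SequentialChain.
def run_sequential(inputs: dict) -> dict:
    state = dict(inputs)
    state["happened"] = f"history of {state['topic']} in {state['period']}"
    state["review"] = f"review of: {state['happened']}"
    return state

seq_output = run_sequential({"topic": "pakistan", "period": "1947-1948"})
print(simple_output)
print(seq_output["review"])
```

The second style is why a sequential chain can accept multiple inputs and return multiple outputs, while the first is limited to one of each.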

Okay, enough talk; let's dive right into the code to level up our understanding of how to use LangChain chains.

How to Use LangChain Chains: Sequential Chains

Just like the simple sequential chain, a sequential chain can contain multiple chains, so let's build two different chains and then observe the power of this incredible LangChain chain!

Code for First Chain With Initialization of LLM

So, here we are going to create our first chain. Again, it will act as our own history teacher. I am sticking to the same type of prompt so that it is easier for you to compare the different LangChain chains by observing their outputs.

from langchain.chains import SequentialChain # Keep it at the top of your code

# Sequential chain starts here
llm = OpenAI(temperature=.7) # Initializing our OpenAI LLM with 0.7 randomness

# 1 - Below is the template, with two input variables in curly braces: topic and period

template = """I want you to act as a historian. I will provide you with a topic and a period of years related to the history of that topic.
Your task is to research, analyze, and provide a detailed account of the history of topics and periods with dates.
Please provide the information in a clear and concise manner, using bullet points where appropriate.
Do not include any personal opinions or speculations in your response.
Topic: {topic}
Period: {period}
Historian: This is what had happened::"""

# Below, we specify the input variables and the template they appear in, for chain1.

prompt_template = PromptTemplate(input_variables=["topic", "period"], template=template)

history_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="happened")

# The output_key above names the output that will be fed into the next chain as input.
# Here I have named it "happened".

Code for the Second Chain of the Sequential Chain


# 2 - This is an LLM chain that writes a review of the history received from the chain above.

llm = OpenAI(temperature=.7)

template = """You are a history critic from the New York Times. Given the history provided, it is your job to write a review for that historian's history.

history: {happened}

Review from a New York Times critic of the above history:"""

prompt_template = PromptTemplate(input_variables=["happened"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review")

Overall Chain That Brings the Output

Now let us conclude our sequential chain by combining the chains we made above.

# 3 - This is the overall sequential chain, where we run the two chains in sequence.

# Let's name this combination of chains overall_chain
overall_chain = SequentialChain(
    chains=[history_chain, review_chain],
    input_variables=["period", "topic"],
    # Here we return multiple variables
    output_variables=["happened", "review"],
    verbose=True) # When verbose is True, it explains in detail what it is doing

overall_chain.invoke({"period": "1947-1948", "topic": "Pakistan"})
# We sent the period of Pakistan's independence with the topic "Pakistan"; replace them as you like.

In the overall chain, we chained the history and review chains together, told it which input variables we were going to pass and which output variables we wanted back, and at the end we simply sent the period and topic using .invoke on the overall_chain variable.
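Because we listed both output_variables, the dictionary returned by overall_chain.invoke contains the inputs plus both outputs. Here is a sketch of how you would read them, using a hypothetical stand-in result dict with placeholder values, since a real run needs an API key:

```python
# Hypothetical shape of the dict returned by overall_chain.invoke(...);
# the "happened" and "review" values here are placeholders, not real model output.
result = {
    "period": "1947-1948",
    "topic": "Pakistan",
    "happened": "- 1947: Pakistan gains independence ...",
    "review": "A sweeping account of the founding years ...",
}

# Each name in output_variables becomes a key in the result.
for key in ("happened", "review"):
    print(f"--- {key} ---")
    print(result[key])
```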

Output we got

The output was too long to show in full, since it contained both the history of the chosen period and a review of that history. If you want to read it, you can also download the document from here.

Conclusion:

Hurray! Now you have learned how LangChain chains work, and you have seen three different types of LangChain chains.

This blog is part of the LangChain learning series; find more articles on LangChain here. Tell me what type of LangChain chain you want to learn about next.

I am waiting for your feedback; feel free to reach out. Happy learning!
