
How to Use LangChain: Get 3 Proven Use Cases


If you want to learn how to use LangChain in just a few minutes, this blog post is for you. We will cover the core LangChain components you need to get started with the framework.

By the end of the blog, you will see the power of LangChain while learning how to use it. Using simple chains, we are going to build a personal history teacher, and we are also going to connect our LLM to up-to-date information from Wikipedia.

LangChain is a popular open-source framework for building apps that work with LLMs (large language models). It makes it simpler for anyone to connect to multiple LLMs.

The framework is available in Python and JavaScript. We are going to code in Python using the OpenAI API, which provides some of the most capable LLMs available today.

So without any delay, let’s get started and learn how to use LangChain.

Installing Packages and Dependencies

Run the commands below to install the required packages.

  • pip install langchain: for chains, prompt templates, and agents.
  • pip install -U langchain-openai: to interact with OpenAI's LLMs.
  • pip install wikipedia: to fetch information from Wikipedia.

Essential Imports for LangChain Project

To make sure you don't run into errors, throughout this blog on How to Use LangChain I point out what we are importing and why. To make it even easier, you can import everything at once by copying the code below.

from langchain_openai import OpenAI
from langchain.chains import LLMChain  # SimpleSequentialChain and SequentialChain are not used here; import them when you need them
import os
from langchain.prompts import PromptTemplate
from langchain.agents import AgentType, initialize_agent, load_tools

os.environ["OPENAI_API_KEY"] = "MY API"  # replace it with your own API key

Getting the OpenAI API Key

Here, we are going to use the OpenAI API. You can get an API key here; once you have it, keep it handy so you can replace "My API" in the code with your own key.

After logging in, the API keys page will appear; click on "Create new secret key."
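If you would rather not hard-code the key in your script, one option (my own suggestion, not part of the original setup) is to ask for it at runtime:

import os
from getpass import getpass

# Prompt for the key at runtime so it never appears in the source file
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")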

1. How To Use LangChain with an OpenAI LLM

So here is your first piece of LangChain code. Here, we use LangChain to send whatever we want to ask to an OpenAI LLM, just like ChatGPT.

from langchain_openai import OpenAI  # this is to interact with OpenAI
import os
os.environ["OPENAI_API_KEY"] = "My API"  # replace My API with your API key

# Create an instance of the OpenAI model with the specified temperature
llm = OpenAI(temperature=0.5)

# Use the model to make a prediction
llm.invoke("tell me a random fact")  # invoking the LLM to get a random fact

The Output I Got:

The shortest war in history was between Great Britain and Zanzibar in 1896 and lasted only 38 minutes.

In the above code, temperature controls how random and creative you want your LLM to be; the higher the temperature, the more creative the output.
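If you want to see the effect of temperature yourself, here is a quick sketch of my own comparing a low and a high setting (the exact answers will vary from run to run):

# Lower temperature: more deterministic, predictable answers
factual_llm = OpenAI(temperature=0)
print(factual_llm.invoke("tell me a random fact"))

# Higher temperature: more varied, creative answers
creative_llm = OpenAI(temperature=1.0)
print(creative_llm.invoke("tell me a random fact"))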

So that was your first piece of code with LangChain; let's dive in further and move on to our second project to learn how prompt templates, structured outlines that guide user prompts, are made.

2. How to Use LangChain to Create a Personal History Teacher

from langchain_openai import OpenAI
import os  # no need to import again if running in the same file
from langchain.prompts import PromptTemplate
os.environ["OPENAI_API_KEY"] = "My API"  # no need to set the environment variable again if you are running the code in the same file

prompt = PromptTemplate.from_template("I want you to act as a historian and tell who is {name}")
print(prompt.format(name="Elon Musk"))

Output:

I want you to act as a historian and tell who is Elon Musk

This is the simplest way to write a prompt template. Since it's just a template, it doesn't work on its own; it has to be connected to an LLM to do anything. That is why you see the same template printed in the output. The .format method assigns the variable in the template to the input we want the prompt to receive.

We replaced {name} in the above template with Elon Musk using prompt.format(name="Elon Musk").
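A prompt template can also take more than one variable. Here is a small sketch of my own (the {era} variable is just an illustration, not something used later in this blog):

multi_prompt = PromptTemplate.from_template(
    "I want you to act as a historian and tell me about {name} during the {era} era"
)
print(multi_prompt.format(name="Elon Musk", era="modern"))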

Now let’s make another template for the history teacher, and this time we will run it to get the output from the LLM.

prompt_template = ("""
I want you to act as a historian. I will provide you with a topic related to history.
Your task is to research, analyze, and provide a detailed account of the history of {topic}.
Please provide the information in a clear and concise manner, using bullet points where appropriate.
Do not include any personal opinions or speculations in your response.
""")

# Create a PromptTemplate instance
prompt1 = PromptTemplate(input_variables=["topic"], template=prompt_template)

# Format the prompt with the topic "pakistan"
formatted_prompt = prompt1.format(topic="pakistan")

print(formatted_prompt)  # prints the instructions, just like the last prompt template

Keep running the code as you go; you can practice more by changing the variables and the template instructions for the sake of experimentation. Trust me, this is the best way to get a deeper understanding.

So we created the template. Now, let’s connect this with an LLM using the simplest chain to run this prompt.

from langchain.chains import LLMChain  # already imported at the top; shown here again for clarity

llm = OpenAI(temperature=0.3)  # get output from OpenAI with a randomness (temperature) of 0.3
topic = "pakistan"  # assigning the topic we want to read about

chain1 = LLMChain(llm=llm, prompt=prompt1)  # chaining our template with the LLM
chain1.invoke(topic)  # running the chain with the topic "pakistan"

The output we got was:

{'topic': 'pakistan', 'text': "\n1. August 14th - Pakistan's Independence Day\n2. March 23rd - Pakistan Day\n3. September 6th - Defence Day of Pakistan\n4. December 25th - Quaid-e-Azam's Birthday\n5. July 5th - Kashmir Martyrs Day\n6. September 11th - Death Anniversary of Quaid-e-Azam\n7. October 27th - Black Day for Kashmir\n8. November 9th - Iqbal Day\n9. December 16th - Victory Day (also known as Army Day)\n10. April 21st - Youm-e-Takbeer (Day of Greatness) - commemorating Pakistan's nuclear tests in 1998."}
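As a side note, newer LangChain versions favor the runnable (pipe) syntax over LLMChain; assuming your installed version supports it, the same chain can be sketched like this:

# Equivalent chain using the pipe (LCEL) syntax from recent LangChain versions
chain2 = prompt1 | llm
print(chain2.invoke({"topic": "pakistan"}))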

3. How to Use LangChain Agents Using Wikipedia

The biggest limitation of an LLM is that it is trained only up to a particular cutoff date. So, if your model was last trained on June 20, 2023, and you ask about anything from 2024, the model won't be able to give a reliable answer. But suppose you still want the most recent information in your application. In that case, agents provide an interface that connects you with various tools like Wikipedia, Google SERP API, Brave Search, DuckDuckGo Search, and the list goes on. You can see the list by clicking here.

In this blog on how to use LangChain, we will fetch information from Wikipedia as an example.

To use it, we will import the agent type, the agent initializer, and the tool loader from LangChain, and then set up the agent as shown in the code below.

from langchain.agents import AgentType, initialize_agent, load_tools

# Agents usage
llm = OpenAI(temperature=0.7)
tools = load_tools(["wikipedia"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.invoke("who won cricket worldcup 2023?")

Along with Wikipedia, we can load other tools too; you can read about them here. You just need to add their names to the list, separated by commas (see the sketch at the end of this section).

Output I Got

When verbose is set to True, the output shows every step of how the agent worked out the answer, and the final answer appears after the "Finished chain" indication. Set verbose to False if you just want a straightforward answer.

The Final Output:

Finished chain. {'input': 'who won cricket worldcup 2023?', 'output': 'Australia won the 2023 Cricket World Cup.'}
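As promised above, here is a hedged sketch of my own that loads a second tool alongside Wikipedia, using LangChain's built-in llm-math calculator tool (assuming it is available in your installed version):

# Load more than one tool by adding names to the list
tools = load_tools(["wikipedia", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)  # set verbose=False if you only want the final answer
agent.invoke("In which year did Pakistan gain independence, and what is that year multiplied by 2?")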

Concluding Our Blog

Congratulations! You just completed this blog on how to use LangChain, your first step in LangChain usage. You learned about chains, agents, and prompt templates; we built a personal history teacher and connected our LLM to Wikipedia.

LangChain is continuously evolving, and so are we. Join us on the journey: read our other blogs, subscribe to our newsletter so you never miss an update, and don't forget to contact us with your valuable suggestions and feedback.
