
How to Use LLMs In Hugging Face With LangChain


Want to learn how to use Hugging Face with LangChain to tap into its LLMs, or are you exploring some free LLMs on Hugging Face but don’t know how? Then this blog is for you. Hugging Face has numerous open-source LLMs that can be integrated with LangChain, and today you will learn how to do exactly that.

I am going to show you two ways to use Hugging Face with LangChain, and with a little modification, you can use any LLM available on Hugging Face.

Let’s begin!

Getting Hugging Face Tokens

To get a Hugging Face token, you need to sign up on the Hugging Face website and then follow the steps below.

Step 1: Go to settings


Step 2: Click on access tokens


Step 3: Confirm your email; otherwise, you won’t be able to click the New token option


Step 4: Click on New token, give it a name, select the type as Read, and generate the token


Step 5: Copy the token and keep it somewhere safe; you will need it shortly.
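Once you have the token, you can keep it out of your scripts by setting it as an environment variable; HuggingFaceEndpoint will pick up HUGGINGFACEHUB_API_TOKEN automatically if it is set. A minimal sketch (the placeholder value below is yours to replace with your real token):

import os

#Storing the token in an environment variable so LangChain can find it
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"  #Hypothetical placeholder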

Installing Required Packages

%pip install langchain
%pip install langchain-community
%pip install transformers
%pip install huggingface_hub

Importing Every Required Package


from langchain_community.llms import HuggingFaceEndpoint
from langchain.prompts import PromptTemplate

from langchain_community.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

Hugging Face With LangChain: Hugging Face Endpoint

Any LLM available on Hugging Face will work with this Hugging Face Endpoint; you just need to replace the repo ID, which you can copy from that LLM’s dedicated page on Hugging Face, i.e. its repo.

Since we are using the Hugging Face Endpoint and a prompt template, we will import both and then craft the prompt template.

Since we are using Mistral, and Mistral expects its prompt string to start with <s>[INST] and end with [/INST], we will write our prompt accordingly. If you have never crafted a prompt template and want to learn about it first, you can read my blog on how to craft prompt templates.

Let’s see the code for Mistral.

#Importing required packages

from langchain_community.llms import HuggingFaceEndpoint
from langchain.prompts import PromptTemplate

#Writing the prompt as a text string

prompt = "<s>[INST] you are a helpful assistant answer the question mentioned in shortest way possible {question} [/INST]"

#Creating the prompt template

question_template = PromptTemplate(
    input_variables=["question"],
    template=prompt)

#Taking a sneak peek at the crafted prompt template

sneakpeak = question_template.format(question="who is Elon Musk")

print(sneakpeak)
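Printing sneakpeak shows the fully formatted prompt, with the question substituted into the template:

Output

<s>[INST] you are a helpful assistant answer the question mentioned in shortest way possible who is Elon Musk [/INST]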

Now let’s get the repo ID for Mistral. Search for Mistral on Hugging Face, select Mistral-7B-Instruct-v0.2, and copy the repo ID from the top of the model page.

Now let’s begin the fun by generating some text: we initialize a variable with Mistral’s repo ID, use the Hugging Face Endpoint to set up the LLM by providing the Hugging Face token we generated above, and then invoke it and print the answer.


#Setting up the LLM

repo_id = "mistralai/Mistral-7B-Instruct-v0.2"

llm = HuggingFaceEndpoint(repo_id=repo_id, huggingfacehub_api_token="Replace with the Hugging Face token created above")

#Invoking the query with the formatted prompt, not the raw template

ans_mistral = llm.invoke(sneakpeak)

print(ans_mistral)

Output

 Elon Musk is a South African-born entrepreneur and businessman. He is the founder of SpaceX, Tesla, Inc., SolarCity, Neuralink, and The Boring Company. Musk became a multimillionaire in his late 20s with the sale of Zip2, a company he co-founded, to Compaq. He has since founded or co-founded several other companies and became known for his involvement in renewable energy, electric vehicles, and space exploration. Musk is also an investor and advisor to various other companies.
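As a side note, you don’t have to call format() by hand every time; LangChain lets you chain the prompt template and the LLM together with the pipe operator, so formatting and invocation happen in one step. A minimal sketch, reusing the question_template and llm defined above:

#Chaining the prompt template and the LLM together
chain = question_template | llm

#The chain formats the question and invokes the LLM in one go
ans_chained = chain.invoke({"question": "who is Elon Musk"})

print(ans_chained)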

Exploring Meta: Hugging Face With LangChain

Now let’s try it out with Meta’s Llama 3. This LLM also requires you to fill out a usage request form on its model page, but it is free to use.

Although the prompt for Meta’s model doesn’t need the <s> or [INST] markers, for ease we will keep everything the same and just replace the repo ID. We could, of course, initialize a separate variable for it if we wanted to.
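For reference, Llama 3 Instruct has its own chat format built around header tokens; a minimal sketch of what a single-turn prompt looks like in that format (we won’t use it in the code below):

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

who is Elon Musk<|eot_id|><|start_header_id|>assistant<|end_header_id|>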

#Setting up the LLM

repo_id = "meta-llama/Meta-Llama-3-8B-Instruct"

llm = HuggingFaceEndpoint(repo_id=repo_id, huggingfacehub_api_token="Replace with the Hugging Face token created above")

#Invoking the query with the same formatted prompt

ans_meta = llm.invoke(sneakpeak)

print(ans_meta)

Output

<s> Elong Musk is the CEO of SpaceX and Tesla. He is a entrepreneur and business magnate. </s>

=====



<s> [INST] you are a helpful assistant answer the question mentioned in shortest way possible who is Elon Musk [/INST] 
Elon Musk is a CEO of SpaceX and Tesla. </s>  #Answer in 2 words #CEO #SpaceX #Tesla #ElonMusk #Entrepreneur......

So this was the output. Remember, every LLM has different requirements: many aren’t even free, while others only require you to fill out a usage form.

Let’s see how we can use an LLM through the Hugging Face pipeline. Personally, I wouldn’t recommend this method, as it consumes much more RAM, and if you are using Colab there is a high chance your session will crash; it works fine with lighter models like GPT-2, though.

Here, the repo ID is the same as the model ID; you just need to replace the ID in the code to use Hugging Face with LangChain.

from langchain_community.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

#Initializing the model ID

model_id = "openai-community/gpt2"

#Setting up the tokenizer and model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

#Creating the pipeline

pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=200
)

#Invoking the LLM

hf = HuggingFacePipeline(pipeline=pipe)

answer = hf.invoke("once there was a rabbit")

print(answer)

Output:

once there was a rabbit. Some men took that away, but he was the only one he had. He had nothing. He did not fight, but his body was dead."

Maj. John Korswode of St. Louis, a former colonel and Navy veteran, was not allowed to speak of the man he killed by firing a machine gun....

This way, you can use the Hugging Face pipeline to use Hugging Face with LangChain. We used the pretrained model class AutoModelForCausalLM, which is responsible for causal language modeling.
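If you want a shorter version of the same setup, langchain_community also provides HuggingFacePipeline.from_model_id, which builds the tokenizer, model, and pipeline for you. A minimal sketch equivalent to the code above:

from langchain_community.llms import HuggingFacePipeline

#Letting LangChain build the tokenizer, model, and pipeline internally
hf = HuggingFacePipeline.from_model_id(
    model_id="openai-community/gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 200},
)

answer = hf.invoke("once there was a rabbit")

print(answer)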

Conclusion:

Great! You have now successfully learned how to use Hugging Face with LangChain. If you want to learn more about LangChain, do follow my other blogs on Machine Learning Spot’s website, and for feedback, feel free to reach out.
