Introduction to Langchain

March 22, 2024

LangChain simplifies AI development with Large Language Models (LLMs) by offering modular components and pre-designed templates for building applications like chatbots and summarizers. It integrates seamlessly with LLMs such as GPT-3.5, as well as chat models, enabling tasks like text completion and conversation generation. LangChain operates through chains of actions, ensuring structured, efficient implementation.

LangChain is an open-source framework that facilitates the development of applications powered by Large Language Models (LLMs) - the current pinnacle of Natural Language Processing (NLP) evolution.

This article will enable the reader to understand the core structure and key features of LangChain and how it has simplified the development of AI-driven linguistic solutions. It also delves into the details to help you build your own application leveraging LangChain.

LangChain and LLMs

At its core, an LLM is a deep learning model used for language-based tasks in the domain of NLP; the underlying architecture was originally created for language translation.

They use transformer models and are trained on large datasets, which empowers them to understand natural language and perform various language-related tasks. Prominent LLMs include GPT-3.5, LLaMA, Bard, Falcon, etc.

“LangChain is a Python framework that allows one to use LLMs easily and efficiently by providing a unified interface and modular components that can be 'chained' together.”

This, in turn, simplifies the creation of advanced systems such as chatbots, image augmenters, sentiment analyzers, etc. These systems can understand language, analyze code, retrieve information, and perform various other tasks.

LangChain's flexibility, extensibility, and integration with LLMs make it a valuable tool in the field of natural language processing and beyond.

Action and Agent

In any software framework, "action" and "agent" are like the building blocks. They're the basic ideas, but they work differently depending on implementation.

In LangChain:

  • Action: It refers to a specific task or operation that a piece of code performs. It could be anything from performing calculations to sending an email.
  • Agent: It is akin to a virtual assistant, a program or component that acts autonomously to perform tasks or make decisions on behalf of the user or another program.
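As a toy illustration (plain Python, not LangChain's actual API), actions can be thought of as callable tools, and an agent as the component that chooses among them. Here a simple rule stands in for the LLM's decision:

```python
# Toy sketch of the action/agent idea in plain Python (not LangChain's API).
# Each "action" is a callable tool; the "agent" decides which one to run.

def calculate(expression: str) -> str:
    """Action: evaluate a simple arithmetic expression."""
    return str(eval(expression))  # eval is fine for this toy example only

def send_email(recipient: str) -> str:
    """Action: pretend to send an email (stub for illustration)."""
    return f"Email sent to {recipient}"

ACTIONS = {"calculate": calculate, "send_email": send_email}

def agent(request: str) -> str:
    """A trivial 'agent': picks an action based on the request.
    A real LangChain agent would let an LLM make this decision."""
    if any(ch.isdigit() for ch in request):
        return ACTIONS["calculate"](request)
    return ACTIONS["send_email"](request)

print(agent("2 + 3"))              # chooses the calculate action -> "5"
print(agent("alice@example.com"))  # chooses the send_email action
```

In LangChain itself, the decision in `agent` would be made by a language model reasoning over tool descriptions rather than by a hardcoded rule.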

LangChain Ecosystem

There are five main sections in the LangChain ecosystem:

  1. LangChain Libraries: Think of it as building with LEGO bricks; you can combine them to create different structures. With LangChain Libraries, you can create chains of actions and agents without starting from scratch each time.
  2. LangChain Templates: These are pre-designed frameworks for various projects or tasks that you can immediately implement without starting from scratch. Think of it like a LEGO Death Star model and the libraries as LEGO bricks.
  3. LangServe: This is used to deploy LangChain chains as a REST API.
  4. LangSmith: It is a developer platform that allows debugging, testing, evaluating, and monitoring chains built using any framework, not just LangChain.
  5. LangGraph: It is a library for building stateful, multi-actor applications on top of LangChain. It models workflows as graphs, which allows cycles, so agents can repeat steps and coordinate with each other.

Key components of LangChain

Often, when people talk about LangChain, they are referring to the LangChain Libraries rather than the entire LangChain ecosystem.

LangChain Libraries

The LangChain Libraries help in building AI applications with two primary methods:

  1. Components: These are adaptable tools and connections designed for interaction with language models. They're easy to use and can be utilized independently or within the LangChain framework.
  2. Pre-made chains: These are pre-built combinations of components for achieving specific tasks. They streamline the process of getting started, while components allow for customization and the creation of new chains.

Components are further classified into three types:

Model I/O:

This component facilitates communication with the model by providing clear interfaces and utilities for constructing inputs and processing outputs, i.e., prompt management.

LangChain primarily integrates with two main types of models: LLMs and Chat Models. LLMs in LangChain focus on text completion, taking a string prompt and producing a string completion, while Chat Models are tailored for conversational use, accepting a list of messages as input and returning an AI-generated message. Prompting strategies vary between these models.

Messages, categorized into roles like HumanMessage and AIMessage, play a pivotal role in communicating with models, with additional parameters like function_call for specific functionalities.
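A minimal sketch of this message structure, using plain Python dicts rather than LangChain's message classes (a stub function stands in for the chat model):

```python
# Plain-Python sketch of the message roles a chat model consumes.
# LangChain wraps these roles in classes such as HumanMessage and AIMessage.

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "human", "content": "Summarize LangChain in one sentence."},
]

def fake_chat_model(messages):
    """Stand-in for a chat model: returns an AI message for the last human turn."""
    last = messages[-1]["content"]
    return {"role": "ai", "content": f"(model reply to: {last})"}

reply = fake_chat_model(conversation)
conversation.append(reply)  # the growing list is what preserves context
print(reply["content"])
```

The key point is that a chat model takes a list of role-tagged messages as input and returns a new AI message, whereas a plain LLM takes a single string and returns a string.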


Retrieval:

When using language models like LLMs, we sometimes need them to understand specific details about individual users, even if those details weren't part of their original training data. Retrieval Augmented Generation (RAG) is the technique that makes this possible: relevant information is fetched from outside sources and fed to the model when it is generating text.

LangChain is a toolkit that provides all the tools needed for building these kinds of applications.
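The RAG idea can be sketched in a few lines of plain Python (hypothetical helper names, keyword-overlap retrieval instead of real embeddings): retrieve the most relevant snippet, then stuff it into the prompt before calling the model.

```python
# Toy RAG sketch: keyword-overlap retrieval + prompt stuffing (no real LLM call).
# A production setup would use embeddings and a vector store instead.

DOCUMENTS = [
    "Alice's subscription renews on the 5th of every month.",
    "Bob prefers email notifications over SMS.",
]

def retrieve(query: str) -> str:
    """Pick the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(DOCUMENTS, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Augment the user's question with the retrieved context."""
    return f"Context: {retrieve(query)}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("When does Alice's subscription renew?"))
```

LangChain provides production-grade versions of each piece of this sketch: document loaders, text splitters, vector stores, and retrievers.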


Agents:

The agent is the brain behind decision-making. It uses a language model and instructions to figure out what to do next. Unlike a chain, where the sequence of actions is hardcoded, an agent is more flexible: it lets the language model decide the best course of action at each step.
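The contrast can be sketched in plain Python: a chain runs a fixed sequence, while an agent decides the next step at runtime (here a boolean flag stands in for the decision a LangChain agent would delegate to an LLM):

```python
# Chain vs. agent, sketched conceptually in plain Python (not LangChain's API).

def fetch(topic: str) -> str:
    return f"notes on {topic}"

def summarize(text: str) -> str:
    return f"summary of {text}"

def chain(topic: str) -> str:
    """Chain: the sequence of actions is hardcoded."""
    return summarize(fetch(topic))

def agent(topic: str, needs_summary: bool) -> str:
    """Agent: decides at runtime which step to take next.
    A boolean flag stands in for the LLM's decision here."""
    result = fetch(topic)
    if needs_summary:
        result = summarize(result)
    return result

print(chain("transformers"))         # always fetch -> summarize
print(agent("transformers", False))  # may stop after fetching
```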

How does LangChain work?

LangChain operates much like crafting a meal recipe. Just as you follow the steps in cooking up a dish, LangChain strings together a sequence of actions, called a "chain", to accomplish a particular AI-driven task.

Imagine you're looking for some movie suggestions. LangChain steps in by first understanding what you're asking for. Then, it gathers details about the movies you enjoy and those you've watched before. By examining your watch history and preferences with the help of language models, sophisticated algorithms and data processing techniques, LangChain generates personalized suggestions. Finally, it gives you a list of personalized recommendations to check out.

Each link in this "chain" holds significance, much like adhering to the steps in a recipe. LangChain streamlines the process, ensuring seamless execution from start to finish.
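The movie-recommendation flow above can be sketched as a chain of simple steps (plain Python, with stub functions standing in for the real model calls and data lookups):

```python
# The movie-recommendation "chain" as composed steps.
# Each function is a stub standing in for an LLM call or data lookup.

def understand_request(query: str) -> str:
    """Step 1: understand what the user is asking for."""
    return "movie suggestions"

def gather_preferences(user: str) -> list[str]:
    """Step 2: gather details from watch history and preferences (stubbed)."""
    return ["sci-fi", "thriller"]

def recommend(intent: str, preferences: list[str]) -> list[str]:
    """Step 3: generate personalized suggestions from the gathered details."""
    return [f"a {genre} pick" for genre in preferences]

def movie_chain(user: str, query: str) -> list[str]:
    """The chain: each step's output feeds the next, like steps in a recipe."""
    intent = understand_request(query)
    prefs = gather_preferences(user)
    return recommend(intent, prefs)

print(movie_chain("alice", "Suggest me something to watch"))
```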

Creating Prompts in LangChain

We'll explore how to set up a simple question-answering system using LangChain and integrate it with the Hugging Face Hub, which hosts many open-source LLMs, for text generation.

Installing LangChain

!pip install langchain

Setting up Prompt Template

from langchain import PromptTemplate

template = """Question: {question}

Answer: """
prompt = PromptTemplate(
    template=template,
    input_variables=["question"]
)

Using Hugging Face Hub LLM

First, ensure you have your Hugging Face API key ready. Then, set it up in your environment:

import os

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "<your_api_key>"


Install the Hugging Face Hub library

!pip install huggingface_hub

Initialize and use the Hugging Face Hub for text generation:

from langchain import HuggingFaceHub, LLMChain

# initialize Hub LLM
hub_llm = HuggingFaceHub(
    repo_id="google/flan-t5-large",  # example model; any text-generation repo on the Hub works
    model_kwargs={"temperature": 0.1}
)

Integration with LangChain

Combine the prompt template and the Hugging Face Hub model using LangChain:

# create prompt template > LLM chain
llm_chain = LLMChain(
    prompt=prompt,
    llm=hub_llm
)

Generating Answer

Now, let's ask a question about the IPL 2023 season and get the answer:

# user question about IPL 2023
question = "Which team won the IPL 2023 season?"

# run the chain and print the answer
print(llm_chain.run(question))

For this question, we get the correct answer, "Chennai Super Kings", in the output.

Applications of LangChain

LangChain offers specialized tutorials for crafting each of the applications below:

  1. Chatbot: Chatbots are widely embraced for their ability to engage in lengthy, ongoing conversations while retaining context. They excel at providing pertinent information in response to user inquiries.
  2. Synthetic data generation:  Synthetic data is data that's generated artificially, as opposed to being gathered from real-world occurrences. Its purpose is to mimic real data without infringing on privacy or being constrained by real-world constraints.
  3. Summarizer: If you've got a bunch of documents like PDFs, notes from Notion, or customer inquiries, and you need to distil their essence down into shorter summaries, LLMs are your best bet. They're good at understanding the meaning of the text and condensing it effectively.
  4. Interacting with APIs: Imagine you need an LLM to tap into external APIs. It's a powerful way to enrich the LLM's understanding by pulling in relevant context. Plus, it opens up the possibility of conversing with APIs in plain language, which can be incredibly handy!
