{ "cells": [ { "cell_type": "markdown", "id": "1c1ea03a-cc69-45b0-80d3-664e48ca6831", "metadata": {}, "source": [ "## This demo app shows:\n", "* How to run Llama2 in the cloud hosted on Replicate.\n", "* How to use LangChain to ask Llama general questions and follow up questions.\n", "* How to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and chat about it. This is the well known RAG (Retrieval Augmented Generation) method to let LLM such as Llama2 be able to answer questions about the data not publicly available when Llama2 was trained, or about your own data. RAG is one way to prevent LLM's hallucination. \n", "* You should also review the [HelloLlamaLocal](HelloLlamaLocal.ipynb) notebook for more information on RAG.\n", "\n", "**Note** We will be using Replicate to run the examples here. You will need to first sign in with Replicate with your github account, then create a free API token [here](https://replicate.com/account/api-tokens) that you can use for a while. \n", "After the free trial ends, you will need to enter billing info to continue to use Llama2 hosted on Replicate." ] }, { "cell_type": "markdown", "id": "61dde626", "metadata": {}, "source": [ "We start by installing the necessary packages:\n", "- sentence-transformers for text embeddings\n", "- chromadb gives us database capabilities \n", "- langchain provides necessary RAG tools for this demo\n", "\n", "And setting up the Replicate token." ] }, { "cell_type": "code", "execution_count": null, "id": "2c608df5", "metadata": {}, "outputs": [], "source": [ "!pip install langchain replicate sentence-transformers chromadb" ] }, { "cell_type": "code", "execution_count": 2, "id": "b9c5546a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " ········\n" ] } ], "source": [ "from getpass import getpass\n", "import os\n", "\n", "REPLICATE_API_TOKEN = getpass()\n", "os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n" ] }, { "cell_type": "markdown", "id": "3e8870c1", "metadata": {}, "source": [ "Next we call the Llama 2 model from replicate. In this example we will use the llama 2 13b chat model. You can find more Llama 2 models by searching for them on the [Replicate model explore page](https://replicate.com/explore?query=llama).\n", "You can add them here in the format: model_name/version" ] }, { "cell_type": "code", "execution_count": null, "id": "ad536adb", "metadata": {}, "outputs": [], "source": [ "from langchain.llms import Replicate\n", "\n", "llama2_13b = \"meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d\"\n", "llm = Replicate(\n", " model=llama2_13b,\n", " model_kwargs={\"temperature\": 0.01, \"top_p\": 1, \"max_new_tokens\":500}\n", ")" ] }, { "cell_type": "markdown", "id": "fd207c80", "metadata": {}, "source": [ "With the model set up, you are now ready to ask some questions. Here is an example of the simplest way to ask the model some general questions." ] }, { "cell_type": "code", "execution_count": 4, "id": "493a7148", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Hello! I'd be happy to help you with your question. The book \"The Innovator's Dilemma\" was written by Clayton Christensen, an American author and professor at Harvard Business School. It was first published in 1997 and has since become a widely influential work on innovation and business strategy. 
{ "cell_type": "markdown", "id": "f315f000", "metadata": {}, "source": [ "We will then try to follow up the response with a question asking for more information on the book. \n", "Since the chat history is not passed, Llama doesn't have the context and doesn't know this follow-up is about the book, so it treats it as a new query." ] },
{ "cell_type": "code", "execution_count": 5, "id": "9b5c8676", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Hello! I'm here to assist you with any questions or tasks you may have. I can provide information on a wide range of topics, from science and history to entertainment and culture. I can also help with practical tasks such as converting units of measurement or calculating dates and times. Is there something specific you would like to know or discuss?\n" ] } ], "source": [ "# chat history not passed so Llama doesn't have the context and doesn't know this is more about the book\n", "followup = \"tell me more\"\n", "followup_answer = llm(followup)\n", "print(followup_answer)" ] },
{ "cell_type": "markdown", "id": "9aeaffc7", "metadata": {}, "source": [ "To get around this, we will need to provide the model with the history of the chat. \n", "To do this, we will use [`ConversationBufferMemory`](https://python.langchain.com/docs/modules/memory/types/buffer) to pass the chat history to the model and give it the capability to handle follow up questions." ] },
{ "cell_type": "code", "execution_count": 6, "id": "5428ca27", "metadata": {}, "outputs": [], "source": [ "# using ConversationBufferMemory to pass memory (chat history) for follow up questions\n", "from langchain.chains import ConversationChain\n", "from langchain.memory import ConversationBufferMemory\n", "\n", "memory = ConversationBufferMemory()\n", "conversation = ConversationChain(\n", "    llm=llm,\n", "    memory=memory,\n", "    verbose=False\n", ")" ] },
{ "cell_type": "markdown", "id": "a3e9af5f", "metadata": {}, "source": [ "Once this is set up, let us repeat the steps from before and ask the model a simple question.\n", "Then we pass the question and answer back into the model for context along with the follow up question." ] },
{ "cell_type": "code", "execution_count": 7, "id": "baee2d22", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Ah, you're asking about \"The Innovator's Dilemma,\" that classic book by Clayton Christensen! He's a renowned author and professor at Harvard Business School, known for his work on disruptive innovation and how established companies can struggle to adapt to new technologies and business models.\n", "\n", "In fact, I have access to a wealth of information on this topic, as well as other areas of expertise. Would you like me to share some interesting facts or insights about Clayton Christensen or his book? For example, did you know that he coined the term\n" ] } ], "source": [ "# restart from the original question\n", "answer = conversation.predict(input=question)\n", "print(answer)" ] },
{ "cell_type": "code", "execution_count": 8, "id": "9c7d67a8", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Sure thing! Here are some additional details about Clayton Christensen and his book \"The Innovator's Dilemma\":\n", "\n", "1. The book was first published in 1997 and has since become a seminal work in the field of innovation and entrepreneurship.\n", "2. Christensen's central argument is that successful companies often struggle to adopt new technologies and business models because they are too focused on sustaining their existing businesses. This can lead to a \"dilemma\" where these companies fail to innovate and eventually lose market share to newer, more ag\n" ] } ], "source": [ "# pass context (previous question and answer) along with the follow up \"tell me more\" to Llama, which now has the conversation history\n", "memory.save_context({\"input\": question},\n", "                    {\"output\": answer})\n", "followup_answer = conversation.predict(input=followup)\n", "print(followup_answer)" ] },
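{ "cell_type": "markdown", "id": "a1f2c3d4", "metadata": {}, "source": [ "If you are curious what history the chain actually hands to Llama on each turn, you can inspect the memory object directly. Below is a minimal sketch using `ConversationBufferMemory.load_memory_variables`, which returns the accumulated history under its default `history` key; only the first 500 characters are printed to keep the output short." ] },
{ "cell_type": "code", "execution_count": null, "id": "e5f7a9b1", "metadata": {}, "outputs": [], "source": [ "# peek at the chat history ConversationBufferMemory will prepend to the next prompt;\n", "# load_memory_variables({}) returns it under the default \"history\" key\n", "print(memory.load_memory_variables({})[\"history\"][:500])" ] },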
{ "cell_type": "markdown", "id": "fc436163", "metadata": {}, "source": [ "Next, let's explore using Llama 2 to answer questions using documents for context. \n", "This gives us the ability to update Llama 2's knowledge, giving it better context without the need to fine-tune. \n", "For a more in-depth study of this, see the notebook on using Llama 2 locally [here](HelloLlamaLocal.ipynb).\n", "\n", "We will use the PyPDFLoader to load in a PDF, in this case, the Llama 2 paper." ] },
{ "cell_type": "code", "execution_count": 9, "id": "f5303d75", "metadata": {}, "outputs": [], "source": [ "from langchain.document_loaders import PyPDFLoader\n", "loader = PyPDFLoader(\"https://arxiv.org/pdf/2307.09288.pdf\")\n", "docs = loader.load()" ] },
{ "cell_type": "code", "execution_count": 10, "id": "678c2b4a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "77 Llama 2 : Open Foundation and Fine-Tuned Chat Models\n", "Hugo Touvron∗Louis Martin†Kevin Stone†\n", "Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra\n", "Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen\n", "Guillem Cucurull David Esiobu Jude Fernande\n" ] } ], "source": [ "# check docs length and content\n", "print(len(docs), docs[0].page_content[0:300])" ] },
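{ "cell_type": "markdown", "id": "b2e4f6a8", "metadata": {}, "source": [ "PyPDFLoader returns one `Document` per PDF page, which is why `len(docs)` above matches the paper's page count. Each `Document` also carries metadata - for `PyPDFLoader`, the source and page number - as the quick check below shows." ] },
{ "cell_type": "code", "execution_count": null, "id": "f6a8b0c2", "metadata": {}, "outputs": [], "source": [ "# each Document corresponds to one PDF page and records its source and page number\n", "print(docs[0].metadata)\n", "print(docs[-1].metadata)" ] },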
] }, { "cell_type": "code", "execution_count": 11, "id": "eecb6a34", "metadata": {}, "outputs": [], "source": [ "\n", "from langchain.vectorstores import Chroma\n", "\n", "# embeddings are numerical representations of the question and answer text\n", "from langchain.embeddings import HuggingFaceEmbeddings\n", "\n", "# use a common text splitter to split text into chunks\n", "from langchain.text_splitter import RecursiveCharacterTextSplitter" ] }, { "cell_type": "markdown", "id": "36d4a17c", "metadata": {}, "source": [ "To store the documents, we will need to split them into chunks using [`RecursiveCharacterTextSplitter`](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter) and create vector representations of these chunks using [`HuggingFaceEmbeddings`](https://www.google.com/search?q=langchain+hugging+face+embeddings&sca_esv=572890011&ei=ARUoZaH4LuumptQP48ah2Ac&oq=langchian+hugg&gs_lp=Egxnd3Mtd2l6LXNlcnAiDmxhbmdjaGlhbiBodWdnKgIIADIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCkjeHlC5Cli5D3ABeAGQAQCYAV6gAb4CqgEBNLgBAcgBAPgBAcICChAAGEcY1gQYsAPiAwQYACBBiAYBkAYI&sclient=gws-wiz-serp) to them before storing them into our vector database. \n", "\n", "In general, you should use larger chuck sizes for highly structured text such as code and smaller size for less structured text. You may need to experiment with different chunk sizes and overlap values to find out the best numbers." ] }, { "cell_type": "code", "execution_count": 12, "id": "bc65e161", "metadata": {}, "outputs": [], "source": [ "text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)\n", "all_splits = text_splitter.split_documents(docs)\n", "\n", "# create the vector db to store all the split chunks as embeddings\n", "embeddings = HuggingFaceEmbeddings()\n", "vectordb = Chroma.from_documents(\n", " documents=all_splits,\n", " embedding=embeddings,\n", ")" ] }, { "cell_type": "markdown", "id": "54ad02d7", "metadata": {}, "source": [ "We then use ` RetrievalQA` to retrieve the documents from the vector database and give the model more context on Llama 2, thereby increasing its knowledge.\n", "For each question, LangChain performs a semantic similarity search of it in the vector db, then passes the search results as the context to Llama to answer the question." ] }, { "cell_type": "code", "execution_count": 13, "id": "00e3f72b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Based on the provided text, Llama2 appears to be a language model developed by Meta AI that is designed for dialogue use cases. It is a fine-tuned version of the original Llama model, with improved performance and safety features. The model has been trained on a large dataset of text and has undergone testing in English, but it may not cover all scenarios or produce accurate responses in certain instances. As such, developers are advised to perform safety testing and tuning before deploying any applications of Llama2. 
{ "cell_type": "markdown", "id": "54ad02d7", "metadata": {}, "source": [ "We then use `RetrievalQA` to retrieve the documents from the vector database and give the model more context on Llama 2, thereby increasing its knowledge.\n", "For each question, LangChain performs a semantic similarity search for it in the vector db, then passes the search results as the context to Llama to answer the question." ] },
{ "cell_type": "code", "execution_count": 13, "id": "00e3f72b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Based on the provided text, Llama2 appears to be a language model developed by Meta AI that is designed for dialogue use cases. It is a fine-tuned version of the original Llama model, with improved performance and safety features. The model has been trained on a large dataset of text and has undergone testing in English, but it may not cover all scenarios or produce accurate responses in certain instances. As such, developers are advised to perform safety testing and tuning before deploying any applications of Llama2. Additionally, the model is released under a responsible use guide and code\n" ] } ], "source": [ "# use LangChain's RetrievalQA to associate Llama with the loaded documents stored in the vector db\n", "from langchain.chains import RetrievalQA\n", "\n", "qa_chain = RetrievalQA.from_chain_type(\n", "    llm,\n", "    retriever=vectordb.as_retriever()\n", ")\n", "\n", "question = \"What is llama2?\"\n", "result = qa_chain({\"query\": question})\n", "print(result['result'])" ] },
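{ "cell_type": "markdown", "id": "d4e6f8a0", "metadata": {}, "source": [ "To see what context the chain retrieved for that answer, you can run the underlying retrieval step yourself. The sketch below uses Chroma's `similarity_search` (k=4 is its default number of results) to show the chunks most similar to the question." ] },
{ "cell_type": "code", "execution_count": null, "id": "1b2c3d4e", "metadata": {}, "outputs": [], "source": [ "# run the semantic similarity search RetrievalQA performs under the hood\n", "for doc in vectordb.similarity_search(question, k=4):\n", "    print(doc.metadata, doc.page_content[:120].replace(\"\\n\", \" \"), \"\\n---\")" ] },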
{ "cell_type": "markdown", "id": "7e63769a", "metadata": {}, "source": [ "Now, let's bring it all together by incorporating follow up questions.\n", "First we ask a follow up question without giving the model the context of the previous conversation. \n", "Without this context, the answer we get does not relate to our original question." ] },
{ "cell_type": "code", "execution_count": 14, "id": "53f27473", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Based on the context provided, I don't see any explicit mention of \"its\" use cases. However, I can provide some possible inferences based on the information given:\n", "\n", "The text mentions a partnerships team and product and technical organization support, which suggests that the tool or approach being referred to is likely related to product development or customer support.\n", "\n", "The emphasis on prioritizing harmlessness over informativeness and helpfulness suggests that the tool may be used for moderation or content review purposes, where the goal is to avoid causing harm or offense while still providing useful information.\n", "\n", "The\n" ] } ], "source": [ "# no chat history passed, so Llama2 doesn't have enough context to answer and improvises instead\n", "result = qa_chain({\"query\": \"what are its use cases?\"})\n", "print(result['result'])" ] },
{ "cell_type": "markdown", "id": "833221c0", "metadata": {}, "source": [ "As we did before, let us use `ConversationalRetrievalChain` to give the model the context of our previous question so we can ask follow up questions." ] },
{ "cell_type": "code", "execution_count": 15, "id": "743644a1", "metadata": {}, "outputs": [], "source": [ "# use ConversationalRetrievalChain to pass chat history for follow up questions\n", "from langchain.chains import ConversationalRetrievalChain\n", "chat_chain = ConversationalRetrievalChain.from_llm(llm, vectordb.as_retriever(), return_source_documents=True)" ] },
{ "cell_type": "code", "execution_count": 16, "id": "7c3d1142", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Based on the provided text, Llama2 appears to be a language model developed by Meta AI that is designed for dialogue use cases. It is a fine-tuned version of the original Llama model, with improved performance and safety features. The model has been trained on a large dataset of text and has undergone testing in English, but it may not cover all scenarios or produce accurate responses in certain instances. As such, developers are advised to perform safety testing and tuning before deploying any applications of Llama2. Additionally, the model is released under a responsible use guide and code\n" ] } ], "source": [ "# let's ask the original question \"What is llama2?\" again\n", "result = chat_chain({\"question\": question, \"chat_history\": []})\n", "print(result['answer'])" ] },
{ "cell_type": "code", "execution_count": 17, "id": "4b17f08f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Based on the provided context, here are some potential use cases for Llama2, a language model developed by Meta AI for dialogue use cases:\n", "\n", "1. Assistant-like chat: Tuned models of Llama2 can be used for assistant-like chat applications, such as customer service or personal assistants.\n", "2. Natural language generation tasks: Pretrained models of Llama2 can be adapted for various natural language generation tasks, such as text summarization, machine translation, and content creation.\n", "3. Research use cases: Llama2 can be used in research studies to\n" ] } ], "source": [ "# this time we pass the chat history along with the follow up so the model can answer in context\n", "chat_history = [(question, result[\"answer\"])]\n", "followup = \"what are its use cases?\"\n", "followup_answer = chat_chain({\"question\": followup, \"chat_history\": chat_history})\n", "print(followup_answer['answer'])" ] },
{ "cell_type": "markdown", "id": "04f4eabf", "metadata": {}, "source": [ "Further follow ups can be made possible by updating chat_history.\n", "Note that results can get cut off. You may set \"max_new_tokens\" in the Replicate call above to a larger number (as shown below) to avoid the cut off.\n", "\n", "```python\n", "model_kwargs={\"temperature\": 0.01, \"top_p\": 1, \"max_new_tokens\": 1000}\n", "```" ] },
{ "cell_type": "code", "execution_count": 18, "id": "95d22347", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Based on the information provided, Llama2 can assist with various natural language generation tasks, particularly in English. The model has been fine-tuned for assistant-like chat and has shown proficiency in other languages as well, although its proficiency is limited due to the limited amount of pretraining data available in non-English languages.\n", "\n", "Specifically, Llama2 can be used for tasks such as:\n", "\n", "1. Dialogue systems: Llama2 can be fine-tuned for different dialogue systems, such as customer service chatbots, virtual assistants,\n" ] } ], "source": [ "# further follow ups can be made possible by updating chat_history like this:\n", "chat_history.append((followup, followup_answer[\"answer\"]))\n", "more_followup = \"what tasks can it assist with?\"\n", "more_followup_answer = chat_chain({\"question\": more_followup, \"chat_history\": chat_history})\n", "print(more_followup_answer['answer'])\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.18" } }, "nbformat": 4, "nbformat_minor": 5 }