{ "cells": [ { "cell_type": "markdown", "id": "1c1ea03a-cc69-45b0-80d3-664e48ca6831", "metadata": {}, "source": [ "## This demo app shows:\n", "* How to run Llama2 in the cloud hosted on OctoAI\n", "* How to use LangChain to ask Llama general questions and follow up questions\n", "* How to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and chat about it. This is the well known RAG (Retrieval Augmented Generation) method to let LLM such as Llama2 be able to answer questions about the data not publicly available when Llama2 was trained, or about your own data. RAG is one way to prevent LLM's hallucination\n", "* You should also review the [HelloLlamaLocal](HelloLlamaLocal.ipynb) notebook for more information on RAG\n", "\n", "**Note** We will be using OctoAI to run the examples here. You will need to first sign into [OctoAI](https://octoai.cloud/) with your Github or Google account, then create a free API token [here](https://octo.ai/docs/getting-started/how-to-create-an-octoai-access-token) that you can use for a while (a month or $10 in OctoAI credits, whichever one runs out first).\n", "After the free trial ends, you will need to enter billing info to continue to use Llama2 hosted on OctoAI." ] }, { "cell_type": "markdown", "id": "61dde626", "metadata": {}, "source": [ "Let's start by installing the necessary packages:\n", "- sentence-transformers for text embeddings\n", "- chromadb gives us database capabilities\n", "- langchain provides necessary RAG tools for this demo\n", "\n", "And setting up the OctoAI token." ] }, { "cell_type": "code", "execution_count": null, "id": "2c608df5", "metadata": {}, "outputs": [], "source": [ "!pip install langchain octoai-sdk sentence-transformers chromadb pypdf" ] }, { "cell_type": "code", "execution_count": null, "id": "b9c5546a", "metadata": {}, "outputs": [], "source": [ "from getpass import getpass\n", "import os\n", "\n", "OCTOAI_API_TOKEN = getpass()\n", "os.environ[\"OCTOAI_API_TOKEN\"] = OCTOAI_API_TOKEN" ] }, { "cell_type": "markdown", "id": "3e8870c1", "metadata": {}, "source": [ "Next we call the Llama 2 model from OctoAI. In this example we will use the Llama 2 13b chat FP16 model. You can find more on Llama 2 models on the [OctoAI text generation solution page](https://octoai.cloud/tools/text).\n", "\n", "At the time of writing this notebook the following Llama models are available on OctoAI:\n", "* llama-2-13b-chat\n", "* llama-2-70b-chat\n", "* codellama-7b-instruct\n", "* codellama-13b-instruct\n", "* codellama-34b-instruct\n", "* codellama-70b-instruct" ] }, { "cell_type": "code", "execution_count": null, "id": "ad536adb", "metadata": {}, "outputs": [], "source": [ "from langchain.llms.octoai_endpoint import OctoAIEndpoint\n", "\n", "llama2_13b = \"llama-2-13b-chat-fp16\"\n", "llm = OctoAIEndpoint(\n", " endpoint_url=\"https://text.octoai.run/v1/chat/completions\",\n", " model_kwargs={\n", " \"model\": llama2_13b,\n", " \"messages\": [\n", " {\n", " \"role\": \"system\",\n", " \"content\": \"You are a helpful, respectful and honest assistant.\"\n", " }\n", " ],\n", " \"max_tokens\": 500,\n", " \"top_p\": 1,\n", " \"temperature\": 0.01\n", " },\n", ")" ] }, { "cell_type": "markdown", "id": "fd207c80", "metadata": {}, "source": [ "With the model set up, you are now ready to ask some questions. Here is an example of the simplest way to ask the model some general questions." 
] }, { "cell_type": "code", "execution_count": null, "id": "493a7148", "metadata": {}, "outputs": [], "source": [ "question = \"who wrote the book Innovator's dilemma?\"\n", "answer = llm(question)\n", "print(answer)" ] }, { "cell_type": "markdown", "id": "f315f000", "metadata": {}, "source": [ "We will then try to follow up the response with a question asking for more information on the book. \n", "\n", "Since the chat history is not passed on Llama doesn't have the context and doesn't know this is more about the book thus it treats this as new query.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "9b5c8676", "metadata": {}, "outputs": [], "source": [ "# chat history not passed so Llama doesn't have the context and doesn't know this is more about the book\n", "followup = \"tell me more\"\n", "followup_answer = llm(followup)\n", "print(followup_answer)" ] }, { "cell_type": "markdown", "id": "9aeaffc7", "metadata": {}, "source": [ "To get around this we will need to provide the model with history of the chat. \n", "\n", "To do this, we will use [`ConversationBufferMemory`](https://python.langchain.com/docs/modules/memory/types/buffer) to pass the chat history to the model and give it the capability to handle follow up questions." ] }, { "cell_type": "code", "execution_count": null, "id": "5428ca27", "metadata": {}, "outputs": [], "source": [ "# using ConversationBufferMemory to pass memory (chat history) for follow up questions\n", "from langchain.chains import ConversationChain\n", "from langchain.memory import ConversationBufferMemory\n", "\n", "memory = ConversationBufferMemory()\n", "conversation = ConversationChain(\n", " llm=llm, \n", " memory = memory,\n", " verbose=False\n", ")" ] }, { "cell_type": "markdown", "id": "a3e9af5f", "metadata": {}, "source": [ "Once this is set up, let us repeat the steps from before and ask the model a simple question.\n", "\n", "Then we pass the question and answer back into the model for context along with the follow up question." ] }, { "cell_type": "code", "execution_count": null, "id": "baee2d22", "metadata": {}, "outputs": [], "source": [ "# restart from the original question\n", "answer = conversation.predict(input=question)\n", "print(answer)" ] }, { "cell_type": "code", "execution_count": null, "id": "9c7d67a8", "metadata": {}, "outputs": [], "source": [ "# pass context (previous question and answer) along with the follow up \"tell me more\" to Llama who now knows more of what\n", "memory.save_context({\"input\": question},\n", " {\"output\": answer})\n", "followup_answer = conversation.predict(input=followup)\n", "print(followup_answer)" ] }, { "cell_type": "markdown", "id": "fc436163", "metadata": {}, "source": [ "Next, let's explore using Llama 2 to answer questions using documents for context. \n", "This gives us the ability to update Llama 2's knowledge thus giving it better context without needing to finetune. \n", "For a more in-depth study of this, see the notebook on using Llama 2 locally [here](HelloLlamaLocal.ipynb)\n", "\n", "We will use the PyPDFLoader to load in a pdf, in this case, the Llama 2 paper." 
] }, { "cell_type": "code", "execution_count": null, "id": "f5303d75", "metadata": {}, "outputs": [], "source": [ "from langchain.document_loaders import PyPDFLoader\n", "loader = PyPDFLoader(\"https://arxiv.org/pdf/2307.09288.pdf\")\n", "docs = loader.load()" ] }, { "cell_type": "code", "execution_count": null, "id": "678c2b4a", "metadata": {}, "outputs": [], "source": [ "# check docs length and content\n", "print(len(docs), docs[0].page_content[0:300])" ] }, { "cell_type": "markdown", "id": "73b8268e", "metadata": {}, "source": [ "We need to store our documents. There are more than 30 vector stores (DBs) supported by LangChain.\n", "For this example we will use [Chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma) which is light-weight and in memory so it's easy to get started with.\n", "For other vector stores especially if you need to store a large amount of data - see https://python.langchain.com/docs/integrations/vectorstores\n", "\n", "We will also import the OctoAIEmbeddings and RecursiveCharacterTextSplitter to assist in storing the documents." ] }, { "cell_type": "code", "execution_count": null, "id": "eecb6a34", "metadata": {}, "outputs": [], "source": [ "from langchain.vectorstores import Chroma\n", "\n", "# embeddings are numerical representations of the question and answer text\n", "from langchain_community.embeddings import OctoAIEmbeddings\n", "\n", "# use a common text splitter to split text into chunks\n", "from langchain.text_splitter import RecursiveCharacterTextSplitter" ] }, { "cell_type": "markdown", "id": "36d4a17c", "metadata": {}, "source": [ "To store the documents, we will need to split them into chunks using [`RecursiveCharacterTextSplitter`](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter) and create vector representations of these chunks using [`OctoAIEmbeddings`](https://octoai.cloud/tools/text/embeddings?mode=api&model=thenlper%2Fgte-large) on them before storing them into our vector database.\n", "\n", "In general, you should use larger chuck sizes for highly structured text such as code and smaller size for less structured text. You may need to experiment with different chunk sizes and overlap values to find out the best numbers." ] }, { "cell_type": "code", "execution_count": null, "id": "bc65e161", "metadata": {}, "outputs": [], "source": [ "text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)\n", "all_splits = text_splitter.split_documents(docs)\n", "\n", "# create the vector db to store all the split chunks as embeddings\n", "embeddings = OctoAIEmbeddings(\n", " endpoint_url=\"https://text.octoai.run/v1/embeddings\"\n", ")\n", "vectordb = Chroma.from_documents(\n", " documents=all_splits,\n", " embedding=embeddings,\n", ")" ] }, { "cell_type": "markdown", "id": "54ad02d7", "metadata": {}, "source": [ "We then use ` RetrievalQA` to retrieve the documents from the vector database and give the model more context on Llama 2, thereby increasing its knowledge.\n", "\n", "For each question, LangChain performs a semantic similarity search of it in the vector db, then passes the search results as the context to Llama to answer the question." 
] }, { "cell_type": "code", "execution_count": null, "id": "00e3f72b", "metadata": {}, "outputs": [], "source": [ "# use LangChain's RetrievalQA, to associate Llama with the loaded documents stored in the vector db\n", "from langchain.chains import RetrievalQA\n", "\n", "qa_chain = RetrievalQA.from_chain_type(\n", " llm,\n", " retriever=vectordb.as_retriever()\n", ")\n", "\n", "question = \"What is llama2?\"\n", "result = qa_chain({\"query\": question})\n", "print(result['result'])" ] }, { "cell_type": "markdown", "id": "7e63769a", "metadata": {}, "source": [ "Now, lets bring it all together by incorporating follow up questions.\n", "\n", "First we ask a follow up questions without giving the model context of the previous conversation.\n", "Without this context, the answer we get does not relate to our original question." ] }, { "cell_type": "code", "execution_count": null, "id": "53f27473", "metadata": {}, "outputs": [], "source": [ "# no context passed so Llama2 doesn't have enough context to answer so it lets its imagination go wild\n", "result = qa_chain({\"query\": \"what are its use cases?\"})\n", "print(result['result'])" ] }, { "cell_type": "markdown", "id": "833221c0", "metadata": {}, "source": [ "As we did before, let us use the `ConversationalRetrievalChain` package to give the model context of our previous question so we can add follow up questions." ] }, { "cell_type": "code", "execution_count": null, "id": "743644a1", "metadata": {}, "outputs": [], "source": [ "# use ConversationalRetrievalChain to pass chat history for follow up questions\n", "from langchain.chains import ConversationalRetrievalChain\n", "chat_chain = ConversationalRetrievalChain.from_llm(llm, vectordb.as_retriever(), return_source_documents=True)" ] }, { "cell_type": "code", "execution_count": null, "id": "7c3d1142", "metadata": {}, "outputs": [], "source": [ "# let's ask the original question \"What is llama2?\" again\n", "result = chat_chain({\"question\": question, \"chat_history\": []})\n", "print(result['answer'])" ] }, { "cell_type": "code", "execution_count": null, "id": "4b17f08f", "metadata": {}, "outputs": [], "source": [ "# this time we pass chat history along with the follow up so good things should happen\n", "chat_history = [(question, result[\"answer\"])]\n", "followup = \"what are its use cases?\"\n", "followup_answer = chat_chain({\"question\": followup, \"chat_history\": chat_history})\n", "print(followup_answer['answer'])" ] }, { "cell_type": "markdown", "id": "04f4eabf", "metadata": {}, "source": [ "Further follow ups can be made possible by updating chat_history.\n", "\n", "Note that results can get cut off. 
You may set \"max_new_tokens\" in the OctoAIEndpoint call above to a larger number (like shown below) to avoid the cut off.\n", "\n", "```python\n", "model_kwargs={\"temperature\": 0.01, \"top_p\": 1, \"max_new_tokens\": 1000}\n", "```" ] }, { "cell_type": "code", "execution_count": null, "id": "95d22347", "metadata": {}, "outputs": [], "source": [ "# further follow ups can be made possible by updating chat_history like this:\n", "chat_history.append((followup, followup_answer[\"answer\"]))\n", "more_followup = \"what tasks can it assist with?\"\n", "more_followup_answer = chat_chain({\"question\": more_followup, \"chat_history\": chat_history})\n", "print(more_followup_answer['answer'])" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.6" } }, "nbformat": 4, "nbformat_minor": 5 }