{ "cells": [ { "cell_type": "markdown", "id": "30b1235c-2f3e-4628-9c90-30385f741550", "metadata": {}, "source": [ "\"Open\n", "\n", "## This demo app shows:\n", "* How to use LangChain's YoutubeLoader to retrieve the caption in a YouTube video\n", "* How to ask Llama 3 to summarize the content (per the Llama's input size limit) of the video in a naive way using LangChain's stuff method\n", "* How to bypass the limit of Llama 3's 8k context length limit by using a more sophisticated way using LangChain's `refine` and `map_reduce` methods - see [here](https://python.langchain.com/docs/use_cases/summarization) for more info" ] }, { "cell_type": "markdown", "id": "c866f6be", "metadata": {}, "source": [ "We start by installing the necessary packages:\n", "- [youtube-transcript-api](https://pypi.org/project/youtube-transcript-api/) API to get transcript/subtitles of a YouTube video\n", "- [langchain](https://python.langchain.com/docs/get_started/introduction) provides necessary RAG tools for this demo\n", "- [tiktoken](https://github.com/openai/tiktoken) BytePair Encoding tokenizer\n", "- [pytube](https://pytube.io/en/latest/) Utility for downloading YouTube videos" ] }, { "cell_type": "code", "execution_count": null, "id": "02482167", "metadata": {}, "outputs": [], "source": [ "!pip install langchain youtube-transcript-api tiktoken pytube" ] }, { "cell_type": "markdown", "id": "af3069b1", "metadata": {}, "source": [ "Let's first load a long (2:47:16) YouTube video (Lex Fridman with Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI) transcript using the YoutubeLoader." ] }, { "cell_type": "code", "execution_count": null, "id": "3e4b8598", "metadata": {}, "outputs": [], "source": [ "from langchain.document_loaders import YoutubeLoader\n", "\n", "loader = YoutubeLoader.from_youtube_url(\n", " \"https://www.youtube.com/watch?v=5t1vTLU7s40\", add_video_info=True\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "dca32ebb", "metadata": {}, "outputs": [], "source": [ "# load the youtube video caption into Documents\n", "docs = loader.load()" ] }, { "cell_type": "code", "execution_count": null, "id": "afba128f-b7fd-4b2f-873f-9b5163455d54", "metadata": {}, "outputs": [], "source": [ "# check how many characters in the doc and some content\n", "len(docs[0].page_content), docs[0].page_content[:300], len(docs)" ] }, { "cell_type": "markdown", "id": "81748b73-0187-4487-ae89-a9429b580faa", "metadata": {}, "source": [ "You should see 142689 returned for the doc character length, which is about 30k words or 40k tokens, beyond the 8k context length limit of Llama 3. You'll see how to summarize a text longer than the limit." ] }, { "cell_type": "markdown", "id": "4af7cc16", "metadata": {}, "source": [ "**Note:** We will be using [Replicate](https://replicate.com/meta/meta-llama-3-8b-instruct) to run the examples here. You will need to first sign in with Replicate with your github account, then create a free API token [here](https://replicate.com/account/api-tokens) that you can use for a while. 
You can also use other Llama 3 cloud providers such as [Groq](https://console.groq.com/), [Together](https://api.together.xyz/playground/language/meta-llama/Llama-3-8b-hf), or [Anyscale](https://app.endpoints.anyscale.com/playground) - see Section 2 of the Getting to Know Llama [notebook](https://github.com/meta-llama/llama-recipes/blob/main/recipes/quickstart/Getting_to_know_Llama.ipynb) for more info.\n", "\n", "If you'd like to run Llama 3 locally for the benefits of privacy, zero cost, and no rate limits (some Llama 3 hosting providers limit free plans to a certain number of queries or tokens per second or minute), see [Running Llama Locally](https://github.com/meta-llama/llama-recipes/blob/main/recipes/quickstart/Running_Llama2_Anywhere/Running_Llama_on_Mac_Windows_Linux.ipynb)." ] }, { "cell_type": "code", "execution_count": null, "id": "ab3ac00e", "metadata": {}, "outputs": [], "source": [ "# enter your Replicate API token, or use a local Llama - see the README for more info\n", "from getpass import getpass\n", "import os\n", "\n", "REPLICATE_API_TOKEN = getpass()\n", "os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n" ] }, { "cell_type": "markdown", "id": "6b911efd", "metadata": {}, "source": [ "Next you'll call the Llama 3 70b chat model from Replicate because it's more powerful than the Llama 3 8b chat model when summarizing long text. You may also try the Llama 3 8b model by replacing the `model` name with \"meta/meta-llama-3-8b-instruct\"." ] }, { "cell_type": "code", "execution_count": null, "id": "adf8cf3d", "metadata": {}, "outputs": [], "source": [ "from langchain_community.llms import Replicate\n", "llm = Replicate(\n", " model=\"meta/meta-llama-3-70b-instruct\",\n", " model_kwargs={\"temperature\": 0.0, \"top_p\": 1, \"max_new_tokens\": 1000}\n", ")" ] }, { "cell_type": "markdown", "id": "8e3baa56", "metadata": {}, "source": [ "Once everything is set up, you can prompt Llama 3 to summarize the first 4000 characters of the transcript.\n", "\n", "**Note:** The context length of 8k tokens in Llama 3 is roughly 6000-7000 words or 32k characters, so you should be able to use a number much larger than 4000." ] }, { "cell_type": "code", "execution_count": null, "id": "51739e11", "metadata": {}, "outputs": [], "source": [ "text = docs[0].page_content[:4000]\n", "summary = llm.invoke(f\"Give me a summary of the text below: {text}.\")\n", "print(summary)" ] }, { "cell_type": "markdown", "id": "5cb1fc2a-71cc-4fda-9d0b-2765a266dee7", "metadata": {}, "source": [ "You can try a larger text to see how the summary differs." ] }, { "cell_type": "code", "execution_count": null, "id": "f049cf9c-528e-4786-93b0-756575be92fb", "metadata": {}, "outputs": [], "source": [ "text = docs[0].page_content[:10000]\n", "summary = llm.invoke(f\"Give me a summary of the text below: {text}.\")\n", "print(summary)" ] }, { "cell_type": "markdown", "id": "d6f3ef40-7cce-4b74-9c66-7bb769650760", "metadata": {}, "source": [ "If you try the whole content, which has over 142k characters (about 40k tokens) and far exceeds the 8k limit, you'll get an empty result (Replicate used to return the error \"RuntimeError: Your input is too long.\")."
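, "\n", "\n", "As a sanity check, you can count the tokens yourself with tiktoken, which was installed above (a rough estimate only - tiktoken's cl100k_base encoding is not Llama 3's actual tokenizer, but it is close enough to show the transcript is far beyond 8k tokens):" ] }, { "cell_type": "code", "execution_count": null, "id": "f1a2b3c4-d5e6-4f70-8a9b-0c1d2e3f4a5b", "metadata": {}, "outputs": [], "source": [ "import tiktoken\n", "\n", "# rough token count of the full transcript; cl100k_base approximates but\n", "# is not identical to Llama 3's tokenizer\n", "enc = tiktoken.get_encoding(\"cl100k_base\")\n", "len(enc.encode(docs[0].page_content))" ] }, { "cell_type": "markdown", "id": "0a1b2c3d-4e5f-4a6b-8c7d-9e0f1a2b3c4d", "metadata": {}, "source": [ "Now try passing the whole transcript to the model:"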
] }, { "cell_type": "code", "execution_count": null, "id": "0ba1ecca-6d0c-495f-a398-df5be119b5cb", "metadata": {}, "outputs": [], "source": [ "# this will generate an empty result because the input exceeds Llama 3's context length limit\n", "text = docs[0].page_content\n", "summary = llm.invoke(f\"Give me a summary of the text below: {text}.\")\n", "print(summary)" ] }, { "cell_type": "markdown", "id": "1ad1881a", "metadata": {}, "source": [ "To fix this, you can use LangChain's `load_summarize_chain` method (detail [here](https://python.langchain.com/docs/use_cases/summarization)). \n", "\n", "First you'll create splits or sub-documents of the original content, then use the LangChain's `load_summarize_chain` with the `refine` or `map_reduce` type. \n", "\n", "Because this may involve many calls to Llama 3, it'd be great to set up a quick free LangChain API key [here](https://smith.langchain.com/settings), run the following cell to set up necessary environment variables, and check the logs on [LangSmith](https://docs.smith.langchain.com) during and after the run." ] }, { "cell_type": "code", "execution_count": null, "id": "d436a764-181f-459d-ac00-ea8ea4e9a831", "metadata": {}, "outputs": [], "source": [ "import os\n", "os.environ[\"LANGCHAIN_API_KEY\"] = \"your_langchain_api_key\"\n", "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", "os.environ[\"LANGCHAIN_PROJECT\"] = \"Video Summary with Llama 3\"" ] }, { "cell_type": "code", "execution_count": null, "id": "3be1236a-fe6a-4bf6-983f-0e72dde39fee", "metadata": {}, "outputs": [], "source": [ "from langchain.text_splitter import RecursiveCharacterTextSplitter\n", "\n", "# we need to split the long input text\n", "text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n", " chunk_size=1000, chunk_overlap=0\n", ")\n", "split_docs = text_splitter.split_documents(docs)" ] }, { "cell_type": "code", "execution_count": null, "id": "12ae9e9d-3434-4a84-a298-f2b98de9ff01", "metadata": {}, "outputs": [], "source": [ "# check the splitted docs lengths\n", "len(split_docs), len(docs), len(split_docs[0].page_content), len(docs[0].page_content)" ] }, { "cell_type": "markdown", "id": "0352a0ec-4d40-4ac1-a46a-b121eacb593e", "metadata": {}, "source": [ "The `refine` type implements the following steps under the hood:\n", "\n", "1. Call Llama 3 on the first sub-document to generate a concise summary;\n", "2. Loop over each subsequent sub-document, pass the previous summary with the current sub-document to generate a refined new summary;\n", "3. Return the final summary generated on the final sub-document as the final answer - the summary of the whole content.\n", "\n", "An example prompt template for each call in step 2 is:\n", "```\n", "Your job is to produce a final summary.\n", "We have provided an existing summary up to a certain point:\n", "\n", "Refine the existing summary (only if needed) with some more content below:\n", "\n", "```\n", "\n", "**Note:** The following call will make 33 calls to Llama 3 and genereate the final summary in about 10 minutes. The complete log of the the calls with inputs and outputs is [here](https://smith.langchain.com/public/7f23d823-926f-4874-bbd7-b509328a94bf/r)." 
] }, { "cell_type": "code", "execution_count": null, "id": "127f17fe-d5b7-43af-bd2f-2b47b076d0b1", "metadata": {}, "outputs": [], "source": [ "chain = load_summarize_chain(llm, chain_type=\"refine\")\n", "chain.run(split_docs)" ] }, { "cell_type": "markdown", "id": "c3976c92", "metadata": {}, "source": [ "You can also set `chain_type` to [`map_reduce`](https://python.langchain.com/docs/modules/chains/document/map_reduce) to generate the summary of the entire content using the standard map and reduce method, which works behind the scene by first mapping each split document to a sub-summary via a call to LLM, then combines all those sub-summaries into a single final summary by yet another call to LLM.\n", "\n", "**Note:** The following call takes about 3 minutes and all the calls to Llama 3 with inputs and outputs can be traced [here](https://smith.langchain.com/public/e54fad15-91ad-44a0-8d8f-f27a0d880b04/r)." ] }, { "cell_type": "code", "execution_count": null, "id": "8991df49-8578-46de-8b30-cb2cd11e30f1", "metadata": {}, "outputs": [], "source": [ "chain = load_summarize_chain(llm, chain_type=\"map_reduce\")\n", "chain.run(split_docs)" ] }, { "cell_type": "markdown", "id": "e1b87c24-207e-403b-b534-20f24a142f53", "metadata": {}, "source": [ "One final `chain_type` you can set is `stuff`, but it won't work with large documents because it stuffs all the split documents into one and uses it in a single prompt which exceeds the Llama 3 context length limit." ] }, { "cell_type": "code", "execution_count": null, "id": "ccd48de3-5387-4eba-8b2f-097bfac7dbbc", "metadata": {}, "outputs": [], "source": [ "# this will return nothing\n", "chain = load_summarize_chain(llm, chain_type=\"stuff\")\n", "chain.run(split_docs)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 5 }