@@ -41,7 +41,7 @@
 "id": "af3069b1",
 "metadata": {},
 "source": [
- "Next we load the YouTube video transcript using the YoutubeLoader."
+ "Let's load the YouTube video transcript using the YoutubeLoader."
 ]
 },
 {
@@ -140,6 +140,7 @@
 "metadata": {},
 "source": [
 "Next we call the Llama 2 model from Replicate. In this example we will use the llama 2 13b chat model. You can find more Llama 2 models by searching for them on the [Replicate model explore page](https://replicate.com/explore?query=llama).\n",
+ "\n",
 "You can add them here in the format: model_name/version\n",
 "\n",
 "If you using local Llama, just set llm accordingly - see the [HelloLlamaLocal notebook](HelloLlamaLocal.ipynb)"
@@ -253,7 +254,8 @@
 "source": [
 "\n",
 "Let's try some workarounds to see if we can summarize the entire transcript without running into the `RuntimeError`.\n",
- "We will use the `load_summarize_chain` package from LangChain and change the `chain_type`.\n"
+ "\n",
+ "We will use LangChain's `load_summarize_chain` and play around with the `chain_type`.\n"
 ]
 },
 {