
Updates to resolve reviewer comments

Updated minor typos/fixes from reviewer comments
Eissa Jamil 11 months ago
parent
commit
b61150d974

+ 1 - 1
examples/examples_with_aws/Prompt_Engineering_with_Llama_2_On_Amazon_Bedrock.ipynb

@@ -20,7 +20,7 @@
     "### Note about LangChain \n",
     "The Bedrock classes provided by LangChain create a Bedrock boto3 client by default. Your AWS credentials will be automatically looked up in your system's `~/.aws/` directory\n",
     "\n",
-    "#### Example `/.aws/config`\n",
+    "#### Example `/.aws/`\n",
     "    [default]\n",
     "    aws_access_key_id=YourIDToken\n",
     "    aws_secret_access_key=YourSecretToken\n",

+ 7 - 4
examples/examples_with_aws/ReAct_Llama_2_Bedrock-WK.ipynb

@@ -9,7 +9,7 @@
     "\n",
     "LLMs abilities for reasoning (e.g. chain-of-thought CoT prompting) and acting have primarily been studied as separate topics. **ReAct** [Shunyu Yao et al. ICLR 2023](https://arxiv.org/pdf/2210.03629.pdf) (Reason and Act) is a method to generate both reasoning traces and task-specific actions in an interleaved manner.\n",
     "\n",
-    "In simple words, we define specific patterns for the language model to follow. This allows the model to act (usually through tools) and reason. Hence the model create a squence of interleaved thoughts and actions. Such systems that act on an enviroment are usually called **agents** (borrowed from reinforcement learning).\n",
+    "In simple words, we define specific patterns for the language model to follow. This allows the model to act (usually through tools) and reason. Hence the model creates a squence of interleaved thoughts and actions. Such systems that act on an enviroment are usually called **agents** (borrowed from reinforcement learning).\n",
     "\n",
     "![image.png](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuuYg9Pduep9GkUfjloNVOiy3qjpPbT017GKlgGEGMaLNu_TCheEeJ7r8Qok6-0BK3KMfLvsN2vSgFQ8xOvnHM9CAb4Ix4I62bcN2oXFWfqAJzGAGbVqbeCyVktu3h9Dyf5ameRe54LEr32Emp0nG52iofpNOTXCxMY12K7fvmDZNPPmfJaT5zo1OBQA/s595/Screen%20Shot%202022-11-08%20at%208.53.49%20AM.png)"
    ]
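The cell above describes the interleaved reasoning-and-acting pattern only in prose. A minimal sketch of a ReAct-style prompt skeleton, using nothing beyond plain Python; the wording, tool name, and question are illustrative and not taken from the notebook:

    # Skeleton of the interleaved pattern: the model is asked to emit Thought and
    # Action lines, and the caller appends an Observation after running each Action.
    REACT_TEMPLATE = (
        "Answer the question by alternating Thought, Action and Observation steps.\n"
        "Available action: search[query]\n\n"
        "Question: {question}\n"
        "Thought:"
    )

    def build_react_prompt(question: str) -> str:
        return REACT_TEMPLATE.format(question=question)

    print(build_react_prompt("Which conference published the ReAct paper?"))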
@@ -79,7 +79,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -87,7 +87,7 @@
     "LLAMA2_13B_CHAT = \"meta.llama2-13b-chat-v1\"\n",
     "\n",
     "# We'll default to the smaller 13B model for speed; change to LLAMA2_70B_CHAT for more advanced (but slower) generations\n",
-    "DEFAULT_MODEL = LLAMA2_70B_CHAT\n",
+    "DEFAULT_MODEL = LLAMA2_13B_CHAT\n",
     "\n",
     "llm = Bedrock(credentials_profile_name='default', model_id=DEFAULT_MODEL)"
    ]
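With the default switched to the 13B model above, a hedged usage sketch of the resulting `llm` object (the prompt text is illustrative; `LLM` objects like this `Bedrock` instance are directly callable in the LangChain version the notebook targets):

    # Assumes `llm` was created exactly as in the cell above.
    prompt = "In one sentence, what does Amazon Bedrock provide?"
    completion = llm(prompt)   # sends the prompt to the Bedrock-hosted Llama 2 13B chat model
    print(completion)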
@@ -451,7 +451,9 @@
    ],
    "source": [
     "response_observation = next_step(response)\n",
-    "new_query = query + '\\033[32m\\033[1m' + response_observation\n",
+    "\n",
+    "# '\\033[32m\\033[1m' is the escape code to set the text that follows to be Bold Green\n",
+    "new_query = query + '\\033[32m\\033[1m' + response_observation \n",
     "print(new_query)"
    ]
   },
@@ -520,6 +522,7 @@
     }
    ],
    "source": [
+    "# '\\033[34m\\033[1m' is the escape code to set the text that follows to be Bold Blue\n",
     "print(new_query + '\\033[34m\\033[1m' + response)"
    ]
   },
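The two added comments name the green and blue escape codes but not the reset code. These are standard ANSI sequences; a small self-contained sketch (the sample strings are illustrative):

    # \033[32m = green foreground, \033[34m = blue foreground,
    # \033[1m  = bold, \033[0m = reset to the terminal default.
    GREEN_BOLD = "\033[32m\033[1m"
    BLUE_BOLD = "\033[34m\033[1m"
    RESET = "\033[0m"

    print("query " + GREEN_BOLD + "observation" + RESET + " " + BLUE_BOLD + "response" + RESET)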