This folder contains examples organized by topic:

| Subfolder | Description |
| --- | --- |
| quickstart | The "Hello World" of using Llama2; start here if you are new to using Llama2 |
| multilingual | Scripts to add a new language to Llama2 |
| finetuning | Scripts to finetune Llama2 on single-GPU and multi-GPU setups |
| inference | Scripts to deploy Llama2 for inference locally and using model servers |
| use_cases | Scripts showing common applications of Llama2 |
| responsible_ai | Scripts to use PurpleLlama for safeguarding model outputs |
| llama_api_providers | Scripts to run inference on Llama via hosted endpoints |
| benchmarks | Scripts to benchmark inference of Llama 2 models on various backends |
| code_llama | Scripts to run inference with the Code Llama models |
| evaluation | Scripts to evaluate fine-tuned Llama2 models using lm-evaluation-harness from EleutherAI |

Note on using Replicate To run some of the demo apps here, you'll first need to sign in to Replicate with your GitHub account, then create a free API token here that you can use for a while. After the free trial ends, you'll need to enter billing info to continue using Llama2 hosted on Replicate. According to Replicate's Run time and cost for the Llama2-13b-chat model used in our demo apps, the model "costs $0.000725 per second. Predictions typically complete within 10 seconds." This means each call to the Llama2-13b-chat model costs less than $0.01 if the call completes within 10 seconds. If you want to avoid any costs, refer to the section "Running Llama2 locally on Mac" above or "Running Llama2 in Google Colab" below.
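As a quick back-of-the-envelope check, the "less than $0.01 per call" claim follows directly from the quoted rate and typical prediction time:

```python
# Estimate the cost of one Llama2-13b-chat call on Replicate,
# using the quoted rate of $0.000725 per second and a typical
# prediction time of 10 seconds.
RATE_PER_SECOND = 0.000725  # USD, from Replicate's quoted pricing
TYPICAL_SECONDS = 10

cost_per_call = RATE_PER_SECOND * TYPICAL_SECONDS
print(f"~${cost_per_call:.5f} per call")  # ~$0.00725, i.e. under $0.01
```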

Note on using OctoAI You can also use OctoAI to run some of the Llama demos under OctoAI_API_examples. You can sign in to OctoAI with your Google or GitHub account, which gives you $10 of free credits, valid for one month. Llama2 on OctoAI is priced at $0.00086 per 1k tokens (roughly a 350-word LLM response), so $10 of free credits should go a very long way (about 10,000 LLM inferences).
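The "about 10,000 LLM inferences" figure follows from the quoted pricing, assuming each response is roughly 1k tokens:

```python
# Roughly how many ~1k-token responses $10 of OctoAI credits buys,
# at the quoted $0.00086 per 1k tokens.
PRICE_PER_1K_TOKENS = 0.00086  # USD, from the quoted OctoAI pricing
FREE_CREDITS = 10.0            # USD

responses = FREE_CREDITS / PRICE_PER_1K_TOKENS
print(f"~{responses:,.0f} responses of ~1k tokens each")  # ~11,628, i.e. about 10,000
```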

Running Llama2 in Google Colab

To run Llama2 in Google Colab using llama-cpp-python, download the quantized Llama2-7b-chat model here, or follow the instructions above to build it, then upload it to your Google Drive. Note that on the free Colab T4 GPU, the call to Llama can take more than 20 minutes to return; running the notebook locally on an M1 MacBook Pro takes about 20 seconds.
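The steps above can be sketched as a minimal Colab cell, assuming the quantized model has been uploaded to your Google Drive (the file name and path below are placeholders; replace them with wherever you saved the model):

```python
# Minimal sketch: run the quantized Llama2-7b-chat model in Colab
# with llama-cpp-python, loading the model file from Google Drive.
from google.colab import drive
from llama_cpp import Llama  # pip install llama-cpp-python

drive.mount("/content/drive")

# Hypothetical path -- point this at your uploaded model file.
llm = Llama(model_path="/content/drive/MyDrive/llama-2-7b-chat-quantized.bin")

output = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(output["choices"][0]["text"])
```

On the free T4 runtime this call can take well over the ~20 seconds you'd see locally, as noted above.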