
update llama-on-prem.md to Llama 3 - format fix

Jeff Tang 6 months ago
commit 43a28956d1
1 changed file with 0 additions and 1 deletion

+ 0 - 1
recipes/inference/model_servers/llama-on-prem.md

@@ -145,7 +145,6 @@ Then run the command below to deploy a quantized version of the Llama 3 8b chat
 
 ```
 docker run --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:2.0 --model-id $model
-
 ```
 
 After this, you'll be able to run the command below on another terminal:
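The follow-up command itself lies outside this hunk. Purely as an illustration (the exact request in the doc is not shown here), a test query against the TGI container started above could look like the following, assuming TGI's standard `/generate` endpoint on the mapped port 8080:

```
curl 127.0.0.1:8080/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs":"What is deep learning?","parameters":{"max_new_tokens":50}}'
```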