
update llama-on-prem.md to Llama 3 - format fix

Jeff Tang 5 months ago
parent
commit
43a28956d1
1 changed file with 0 additions and 1 deletion

+ 0 - 1
recipes/inference/model_servers/llama-on-prem.md

@@ -145,7 +145,6 @@ Then run the command below to deploy a quantized version of the Llama 3 8b chat
 
 ```
 docker run --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:2.0 --model-id $model
-
 ```
 
 After this, you'll be able to run the command below on another terminal:
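
Note on the change above: the `docker run` command maps TGI's internal port 80 to 8080 on the host (`-p 8080:80`), so the follow-up command referenced in the file is a request against that endpoint. As an illustrative sketch only (assuming TGI's standard `/generate` JSON API and a generic prompt, not necessarily the exact command in llama-on-prem.md), such a request would look roughly like:

```
# send a test prompt to the TGI server started by the docker run command above
curl 127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is Llama 3?", "parameters": {"max_new_tokens": 64}}'
```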