
Update llama-on-prem.md

Disable link check to bypass the lint checker, which mistakenly marked working links as dead.
Chester Hu, 1 year ago
parent commit 7dc9a11a1f
1 changed file with 2 additions and 0 deletions

+ 2 - 0
demo_apps/llama-on-prem.md

@@ -22,7 +22,9 @@ pip install vllm
 
 Then run `huggingface-cli login` and copy and paste your Hugging Face access token to complete the login.
 
+<!-- markdown-link-check-disable -->
 There are two ways to deploy Llama 2 via vLLM, as a general API server or an OpenAI-compatible server (see [here](https://platform.openai.com/docs/api-reference/authentication) on how the OpenAI API authenticates, but you won't need to provide a real OpenAI API key when running Llama 2 via vLLM in the OpenAI-compatible mode).
+<!-- markdown-link-check-enable -->
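As a sketch of what the two deployment modes described above look like in practice, the commands below launch vLLM's general API server and its OpenAI-compatible server. The model name and port values are illustrative assumptions, not part of this commit; consult the vLLM documentation for the flags supported by your installed version.

```shell
# General API server (vLLM's simple demo endpoint), assuming the
# meta-llama/Llama-2-7b-chat-hf weights are accessible after
# `huggingface-cli login`:
python -m vllm.entrypoints.api_server \
    --model meta-llama/Llama-2-7b-chat-hf

# OpenAI-compatible server: exposes /v1/completions and
# /v1/chat/completions, so OpenAI client libraries can point at it.
# No real OpenAI API key is needed; any placeholder string works.
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-chat-hf \
    --port 8000
```

Both commands block and serve until interrupted; clients then send requests to `http://localhost:8000` (or whatever host/port you configured).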
 
 ### Deploying Llama 2 as an API Server