# Inference

For inference we have provided an inference script, inference.py. The arguments it takes depend on the type of fine-tuning performed during training. If all model parameters were fine-tuned, pass the output directory of the training as the --model_name argument. If a parameter-efficient method such as LoRA was used, pass the base model as --model_name and the output directory of the training as the --peft_model argument. Additionally, a prompt for the model has to be provided as a text file, either piped through standard input or passed via the --prompt_file parameter.
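
For example, invocations might look like the following sketch. The output paths and the base-model name are illustrative placeholders; samsum_prompt.txt is the sample prompt file shipped alongside the script.

```bash
# Full-parameter fine-tuning: the training output dir is passed as the model,
# with the prompt piped through standard input.
# (path/to/finetune_output is an illustrative placeholder)
cat samsum_prompt.txt | python inference.py --model_name path/to/finetune_output

# Parameter-efficient fine-tuning (e.g. LoRA): base model plus PEFT weights,
# with the prompt passed as a file.
# (the model name and path/to/peft_output are illustrative placeholders)
python inference.py --model_name meta-llama/Llama-2-7b-hf \
    --peft_model path/to/peft_output \
    --prompt_file samsum_prompt.txt
```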

For other inference backends, the vLLM_inference.py script shows how to run the model with vLLM, and the hf-text-generation-inference folder covers serving with Hugging Face's text-generation-inference (TGI).
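
As a rough sketch of what the vLLM path does, the snippet below uses vLLM's public LLM and SamplingParams API directly. This is not the interface of vLLM_inference.py itself, and the model name is an illustrative placeholder; check that script for the actual arguments.

```python
# Minimal vLLM generation sketch (not the vLLM_inference.py interface).
from vllm import LLM, SamplingParams

# Illustrative placeholder; substitute your base or fine-tuned model.
llm = LLM(model="meta-llama/Llama-2-7b-hf")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

# generate() accepts a list of prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["Summarize the following dialog:\n..."], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```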

For more information, including inference safety checks, examples, and other available inference options, see the inference documentation here.