This folder contains the following files and directories:

- hf-text-generation-inference/
- README.md
- chat_completion.py
- chat_utils.py
- chats.json
- inference.py
- model_utils.py
- safety_utils.py
- samsum_prompt.txt
- vLLM_inference.py
For inference we have provided an inference script, inference.py. Depending on the type of fine-tuning performed during training, the inference script takes different arguments. If all model parameters were fine-tuned, the output dir of the training has to be given as the --model_name argument. If a parameter-efficient method like LoRA was used, the base model has to be given as --model_name and the output dir of the training as the --peft_model argument. Additionally, a prompt for the model has to be provided as a text file, which can either be piped through standard input or given as the --prompt_file parameter.
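As a minimal sketch, the two cases above correspond to invocations along these lines. The flags --model_name, --peft_model, and --prompt_file are the ones described above; the paths in angle brackets are placeholders, and samsum_prompt.txt is the sample prompt file included in this folder:

```bash
# Full-parameter fine-tuning: point --model_name at the training output dir
python inference.py --model_name <training_output_dir> --prompt_file samsum_prompt.txt

# Parameter-efficient fine-tuning (e.g. LoRA): base model plus the PEFT output dir
python inference.py --model_name <base_model_dir> --peft_model <training_output_dir> --prompt_file samsum_prompt.txt

# Alternatively, pipe the prompt through standard input instead of using --prompt_file
cat samsum_prompt.txt | python inference.py --model_name <base_model_dir> --peft_model <training_output_dir>
```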
For other inference options, you can use the vLLM_inference.py script for vLLM, or review the hf-text-generation-inference folder for TGI (Hugging Face's text-generation-inference).
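A vLLM run might look like the sketch below. Note that the --model_name flag here is an assumption mirroring inference.py; check the script's --help output for its actual arguments:

```bash
# Hypothetical invocation of the vLLM-backed script; verify flags with --help
python vLLM_inference.py --model_name <base_model_dir>
```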
For more information, including inference safety checks, examples, and other inference options available to you, see the inference documentation.