Beto 6f53f26e05 Merge branch 'main' into l3p/llama_guard 11 months ago
__init__.py 207d2f80e9 Make code-llama and hf-tgi inference runnable as module 1 year ago
chat_utils.py 6d9d48d619 Use apply_chat_template instead of custom functions 11 months ago
checkpoint_converter_fsdp_hf.py ce9501f22c remove relative imports 1 year ago
llm.py a404c9249c Notebook to demonstrate using llama and llama-guard together using OctoAI 1 year ago
model_utils.py d51d2cce9c adding sdpa for flash attn 1 year ago
prompt_format_utils.py bcdb5b31fe Fixing quantization config. Removing prints 11 months ago
safety_utils.py f63ba19827 Fixing tokenizer used for llama 3. Changing quantization configs on safety_utils. 11 months ago