Commit history

Author SHA1 Message Date
Matthias Reso 2717048197 Add vllm and pytest as dependencies 1 year ago
Matthias Reso c46be5f7a3 Bump version as 0.1.0 has been burned on name registration 1 year ago
Matthias Reso bd9f933c77 Exclude dist folder when creating source package 1 year ago
Matthias Reso 27e56bdfd3 Add llama_finetuning.py script to provide support for torchrun 1 year ago
Matthias Reso 6e327c95e1 Added install section to readme 1 year ago
Matthias Reso 38ac7963a8 Added pyproject.toml 1 year ago
Matthias Reso cf678b9bf0 Adjust imports to package structure + cleaned up imports 1 year ago
Matthias Reso 02428c992a Adding vllm as dependency; fix dep install with hatchling 1 year ago
Matthias Reso c8522eb0ff Remove peft install from src 1 year ago
Matthias Reso 4c9cc7d223 Move modules into separate src folder 1 year ago
Geeta Chauhan fbc513ec47 adding notes how to get the HF models (#151) 1 year ago
Hamid Shojanazeri bcfafd9a0b adding notes how to get the HF models 1 year ago
Geeta Chauhan cfba150311 adding llama code inference (#144) 1 year ago
Hamid Shojanazeri 6105a3f886 clarifying the infilling use-case 1 year ago
Hamid Shojanazeri 8b0008433c fix typos 1 year ago
Hamid Shojanazeri 564ef2f628 remove padding logic 1 year ago
Hamid Shojanazeri 277a292fbc adding autotokenizer 1 year ago
Hamid Shojanazeri 3f2fb9167e adding notes to model not supporting infilling 1 year ago
Hamid Shojanazeri c62428b99c setting defaults of temp and top_p 1 year ago
Hamid Shojanazeri c014ae7cb8 setting BT option to true 1 year ago
Hamid Shojanazeri 4fa44e16d9 add note for python llama not suited for llama infilling 1 year ago
Hamid Shojanazeri b18a186385 removing the option to take prompt from cli 1 year ago
Hamid Shojanazeri 75991d8795 fix the extra line added and remove take prompt from cli 1 year ago
Hamid Shojanazeri d28fc9898a addressing doc comments 1 year ago
Hamid Shojanazeri a234d1fe0c fix typos 1 year ago
Hamid Shojanazeri 2d9f4796e8 fixing the output format 1 year ago
Hamid Shojanazeri 1e8ea70b26 adding llama code inference 1 year ago
Geeta Chauhan 82e05c46e0 fix a bug in the config for use_fast_kernels (#121) 1 year ago
Hamid Shojanazeri 971c079aa6 bugfix: remove duplicate load_peft_model (#124) 1 year ago
hongbo.mo fcc817e923 bugfix: remove duplicate load_peft_model 1 year ago