Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Matthias Reso | 27e56bdfd3 | Add llama_finetuning.py script to provide support for torchrun | 1 year ago |
| Matthias Reso | 4c9cc7d223 | Move modules into separate src folder | 1 year ago |
| lchu | feaa344af3 | resolve conflicts | 1 year ago |
| Hamid Shojanazeri | 51269b816f | moving Bt to the try block | 1 year ago |
| lchu | 3d1e9cd58c | minor code optimization | 1 year ago |
| lchu | 41ffbcab52 | code cleanup to remove all unused imports | 1 year ago |
| lchu | 1cc9df19e6 | remove unused import | 1 year ago |
| Hamid Shojanazeri | b2a55022cb | clean up | 1 year ago |
| Hamid Shojanazeri | 44ef280d31 | adding flash attention and xformer memory efficient through PT SDPA | 1 year ago |
| lchu | 0c51b47262 | fix #90 | 1 year ago |
| lchu | c19c5c69aa | fix fsdp construction on low_cpu_fsdp | 1 year ago |
| lchu | 895dfcea30 | add nightly check for using low_cpu_fsdp mode | 1 year ago |
| lchu | 1e64fc98d9 | switch to simpler param_init_fn and meta device init | 1 year ago |
| lchu | 101391f46a | Revert "replace init_empty_weights with torch.device(meta)" | 1 year ago |
| lchu | c8d4f38d23 | replace init_empty_weights with torch.device(meta) | 1 year ago |
| lchu | d8a81bb531 | save cpu mem by leveraging FSDP rank0 broadcasting | 1 year ago |
| Andrew Gu | 71fdc4920a | Save memory and fix typos | 1 year ago |
| Hamid Shojanazeri | 7ec390bfc8 | aliging special tokens in toeknizer with HF latest | 1 year ago |
| Rohan Varma | d3d7a1656e | Update llama_finetuning.py | 1 year ago |
| chauhang | 4767f09ecd | Initial commit | 1 year ago |