Commit History

Author SHA1 Message Commit Date
  Hamid Shojanazeri 8776ceb833 update with the HF flash attention native 1 year ago
  Hamid Shojanazeri dbfea484c6 Feature : Enable Intel GPU/XPU finetuning and inference (#116) 1 year ago
  Beto d92226a873 Removing option for local model, it's not working as expected. Would need further testing with the models from HF 1 year ago
  Beto 7881b3bb99 Changing safety utils to use HF classes to load Llama Guard. Removing Llama plain inference code 1 year ago
  Beto 109b728d02 Adding Llama Guard safety checker. 1 year ago
  Abhilash Majumder 6a78b96764 Merge branch 'main' into ipex_feature 1 year ago
  Matthias Reso 8ac44ef3be Fix vocab size mismatch in inference due to added pad token 1 year ago
  abhilash1910 ad6b27d316 merge conflicts 1 year ago
  abhilash1910 33da341af5 upstream resolve conflict 1 year ago
  Matthias Reso ccda6fb8ca Move inference scripts into example folder 1 year ago