Hamid Shojanazeri 1 year ago
parent
commit
f8a524f506
3 changed files with 14 additions and 14 deletions
  1. README.md (1 addition, 1 deletion)
  2. eval/README.md (9 additions, 10 deletions)
  3. eval/eval.py (4 additions, 3 deletions)

+ 1 - 1
README.md

@@ -188,7 +188,7 @@ You can read more about our fine-tuning strategies [here](./docs/LLM_finetuning.
 
 # Evaluation Harness
 
-Here, we make use `lm-evaluation-harness` from `EleutherAI` for evaluation of fine-tuned Llama 2 models. This also can extend to evaluate other optimizations for inference of Llama 2 model such as quantization. Pleas use this get started [doc](./eval/README.md).
+Here, we make use of the `lm-evaluation-harness` from `EleutherAI` to evaluate fine-tuned Llama 2 models. This can also be extended to evaluate other inference optimizations for Llama 2, such as quantization. Please use this [doc](./eval/README.md) to get started.
 
 # Demo Apps
 This folder contains a series of Llama2-powered apps:

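For context, the evaluation flow referenced in the README change above typically reduces to a single `lm-evaluation-harness` call. A minimal sketch, assuming `lm_eval` v0.4+ is installed; the checkpoint path and task name are placeholders, not values from this commit:

```python
import lm_eval
from lm_eval.utils import make_table

# Evaluate a (hypothetical) fine-tuned Llama 2 checkpoint on a single task.
results = lm_eval.simple_evaluate(
    model="hf",                               # Hugging Face transformers backend
    model_args="pretrained=./ft-llama-2-7b",  # placeholder path to a fine-tuned model
    tasks=["hellaswag"],                      # example task; any harness task name works
    batch_size=8,
)
print(make_table(results))  # same table formatter that eval.py imports below
```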
+ 9 - 10
eval/README.md

File diff data not shown because the file is too large

+ 4 - 3
eval/eval.py

@@ -9,7 +9,7 @@ from pathlib import Path
 import numpy as np
 import lm_eval
 from lm_eval import evaluator, tasks
-from lm_eval.utils import make_table, load_yaml_config
+from lm_eval.utils import make_table
 
 
 def _handle_non_serializable(o):
@@ -47,7 +47,8 @@ def handle_output(args, results, logger):
     if args.show_config:
         logger.info(results_str)
 
-    with open(args.output_path, "w", encoding="utf-8") as f:
+    file_path = os.path.join(args.output_path, "results.json")
+    with open(file_path, "w", encoding="utf-8") as f:
         f.write(results_str)
 
     if args.log_samples:
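The hunk above changes `--output_path` from a file path into a directory that receives a fixed `results.json`. A minimal standalone sketch of that behavior, assuming `os` is already imported in eval.py; the directory name here is a placeholder:

```python
import os

output_dir = "./eval_results"           # stand-in for args.output_path
os.makedirs(output_dir, exist_ok=True)  # not in the diff; lets the sketch run on its own

# Serialized results now always land in <output_path>/results.json.
file_path = os.path.join(output_dir, "results.json")
with open(file_path, "w", encoding="utf-8") as f:
    f.write("{}")                       # stand-in for results_str
```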
@@ -103,7 +104,7 @@ def parse_eval_args():
         help="Comma-separated string arguments for model, e.g., `pretrained=EleutherAI/pythia-160m`.",
     )
     parser.add_argument(
-        "--open-llm-leaderboard-tasks",
+        "--open_llm_leaderboard_tasks",
         "-oplm",
         action="store_true",
         default=False,
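The rename above only changes how the flag is spelled on the command line: argparse converts dashes to underscores when deriving the attribute name, so `args.open_llm_leaderboard_tasks` resolves under either spelling. A minimal sketch of the renamed option in isolation (a stand-in parser, not the full one from eval.py):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--open_llm_leaderboard_tasks",  # underscore spelling introduced in this commit
    "-oplm",
    action="store_true",
    default=False,
)

# The attribute name matches the long option, minus the leading dashes.
args = parser.parse_args(["--open_llm_leaderboard_tasks"])
print(args.open_llm_leaderboard_tasks)  # True
```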