This folder contains code to run inference benchmarks for Llama 2 models served as cloud APIs by popular cloud service providers. The benchmark focuses on overall inference throughput when querying the API endpoint for output generation at different levels of concurrent requests. Keep in mind that to send queries to the API endpoint, you must hold a subscription with the cloud service provider, and fees will apply.
Disclaimer - The purpose of this code is to provide a configurable setup for measuring inference throughput. It is not representative of the performance of these API services, and we do not intend to compare different API providers.
To get started, there are certain steps to take to deploy the models. Once deployed successfully, you will be assigned an API endpoint and a security key for inference. For more information, consult Azure's official documentation here for model deployment and inference.
Now, replace the endpoint URL and API key in `azure/parameters.json`. For the parameter `MODEL_ENDPOINTS`, the suffix should be `v1/chat/completions` for chat models and `v1/completions` for pretrained models.
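As a rough sketch, the configuration might look like the following. Only `MODEL_ENDPOINTS`, `MAX_NEW_TOKEN`, and `CONCURRENT_LEVELS` are named in this README; the `API_KEY` key name, the overall schema, and all placeholder values are illustrative assumptions — check the shipped `azure/parameters.json` for the real layout.

```json
{
  "MODEL_ENDPOINTS": "https://<your-deployment>.<region>.example-azure-endpoint.com/v1/chat/completions",
  "API_KEY": "<your-security-key>",
  "MAX_NEW_TOKEN": 256,
  "CONCURRENT_LEVELS": [1, 2, 4, 8]
}
```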
Note that the API endpoint might enforce a rate limit on the number of tokens generated within a certain period of time. If you encounter a rate-limit error, you can try reducing `MAX_NEW_TOKEN` or starting with smaller `CONCURRENT_LEVELS`.
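The concurrency sweep the benchmark performs can be sketched as below. This is a minimal illustration, not the benchmark's actual code: `send_request` is a hypothetical stand-in that simulates a call (the real scripts would POST to the endpoint in `MODEL_ENDPOINTS` and count generated tokens), and the concurrency levels shown are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(prompt: str) -> int:
    """Hypothetical stand-in for a real call to the API endpoint.

    Simulates latency and returns a fixed token count; the real
    benchmark would POST the prompt and count tokens in the response.
    """
    time.sleep(0.01)  # simulated network/generation latency
    return 32         # pretend 32 tokens were generated

def measure_throughput(prompts, concurrency):
    """Overall tokens/second at a given number of concurrent requests."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        token_counts = list(pool.map(send_request, prompts))
    elapsed = time.time() - start
    return sum(token_counts) / elapsed

if __name__ == "__main__":
    prompts = ["Hello"] * 16
    for level in (1, 2, 4, 8):
        print(f"concurrency={level}: "
              f"{measure_throughput(prompts, level):.1f} tokens/s")
```

Higher concurrency levels should raise aggregate throughput until the endpoint saturates or rate-limits, which is the curve the benchmark is designed to expose.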
Once everything is configured, run the chat model benchmark:

```
python chat_azure_api_benchmark.py
```
To run the pretrained model benchmark:

```
python pretrained_azure_api_benchmark.py
```
Once finished, the results will be written to a CSV file in the same directory, which can later be imported into a dashboard of your choice.
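For a quick sanity check before importing into a dashboard, the CSV can be loaded generically. The column names and the output filename depend on the benchmark script and are not specified here, so this sketch reads whatever header the file declares.

```python
import csv

def load_results(path):
    """Read a benchmark result CSV into a list of dicts.

    Column names are taken from the file's own header row, since
    the exact schema depends on the benchmark script.
    """
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Example usage (the filename below is a guess, not the script's
# actual output name):
# for row in load_results("chat_benchmark_results.csv"):
#     print(row)
```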