@@ -6,9 +6,9 @@ The 'llama-recipes' repository is a companion to the [Llama 2 model](https://git
 > Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer).
 > | Token | Description |
 > |---|---|
-> `<|begin_of_text|>` | This is equivalent to the BOS token. |
-> `<|eot_id|>` | This signifies the end of the message in a turn. This is equivalent to the EOS token. |
-> `<|start_header_id|>{role}<|end_header_id|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. |
+> `<\|begin_of_text\|>` | This is equivalent to the BOS token. |
+> `<\|eot_id\|>` | This signifies the end of the message in a turn. This is equivalent to the EOS token. |
+> `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. |
 >
 > A multiturn-conversation with Llama 3 follows this prompt template:
 > ```
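
As a minimal sketch of how the special tokens in the table above compose into a multiturn prompt, the helper below assembles one from a list of role/content messages. The function name and message format are illustrative, not part of the repository's API:

```python
def format_llama3_prompt(messages):
    """Build a Llama 3 chat prompt from messages of the form
    {"role": "system" | "user" | "assistant", "content": str},
    using the special tokens described above."""
    # The prompt opens with the BOS-equivalent token.
    prompt = "<|begin_of_text|>"
    for msg in messages:
        # Each turn's role is enclosed in header tokens,
        # and each message ends with the EOS-equivalent <|eot_id|>.
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += msg["content"].strip() + "<|eot_id|>"
    # Cue the model to generate the assistant's next reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt
```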