unitorch.cli.models.bart
BartProcessor
Tip
core/process/bart is the configuration section for BartProcessor.
Bases: BartProcessor
Class for processing data with the BART model.
Initialize BartProcessor.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
vocab_path | str | The path to the vocabulary file. | required |
merge_path | str | The path to the merge file. | required |
special_input_ids | Dict | Special input IDs. Defaults to an empty dictionary. | dict() |
max_seq_length | int | The maximum sequence length. Defaults to 128. | 128 |
max_gen_seq_length | int | The maximum generation sequence length. Defaults to 48. | 48 |
Source code in src/unitorch/cli/models/bart/processing.py
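A minimal construction sketch using only the documented parameters; the tokenizer file paths are placeholders, and the import path follows the module name of this page.

```python
# Sketch: construct the processor directly from local tokenizer files.
# The paths below are placeholders, not files shipped with unitorch.
from unitorch.cli.models.bart import BartProcessor

processor = BartProcessor(
    vocab_path="path/to/vocab.json",   # BART vocabulary file
    merge_path="path/to/merges.txt",   # BPE merges file
    max_seq_length=128,                # maximum input sequence length
    max_gen_seq_length=48,             # maximum generation sequence length
)
```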
from_core_configure (classmethod)
from_core_configure(config, **kwargs)
Create an instance of BartProcessor from a core configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config | | The core configuration. | required |
**kwargs | | Additional keyword arguments. | {} |
Returns:
Name | Type | Description |
---|---|---|
BartProcessor | | An instance of BartProcessor. |
Source code in src/unitorch/cli/models/bart/processing.py
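For illustration, a sketch of building the processor from a core configuration. The loader name CoreConfigureParser and the config file name are assumptions; only from_core_configure(config, **kwargs) is documented on this page.

```python
# Hypothetical sketch: CoreConfigureParser is an assumed config loader;
# only from_core_configure(config, **kwargs) is documented here.
from unitorch.cli import CoreConfigureParser          # assumed loader class
from unitorch.cli.models.bart import BartProcessor

# The core/process/bart section is expected to supply the processor options
# (vocab_path, merge_path, max_seq_length, ...), as noted in the Tip above.
config = CoreConfigureParser("config.ini")
processor = BartProcessor.from_core_configure(config)
```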
BartForGeneration
Tip
core/model/generation/bart is the configuration section for BartForGeneration.
Bases: BartForGeneration
BART model for generation tasks.
Initialize the BartForGeneration model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config_path | str | Path to the model configuration file. | required |
gradient_checkpointing | bool | Whether to use gradient checkpointing for memory optimization. | False |
Source code in src/unitorch/cli/models/bart/modeling.py
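A minimal construction sketch using only the documented parameters; the configuration path is a placeholder.

```python
# Sketch: build the generation model from a BART model configuration file.
# The config path is a placeholder.
from unitorch.cli.models.bart import BartForGeneration

model = BartForGeneration(
    config_path="path/to/config.json",   # model architecture configuration
    gradient_checkpointing=False,        # enable to trade compute for memory
)
```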
forward
forward(
input_ids: Tensor,
attention_mask: Tensor,
decoder_input_ids: Tensor,
decoder_attention_mask: Tensor,
)
Forward pass of the BartForGeneration model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_ids | Tensor | Input IDs. | required |
attention_mask | Tensor | Attention mask. | required |
decoder_input_ids | Tensor | Decoder input IDs. | required |
decoder_attention_mask | Tensor | Decoder attention mask. | required |
Returns:
Name | Type | Description |
---|---|---|
GenerationOutputs | | The generated sequences and their scores. |
Source code in src/unitorch/cli/models/bart/modeling.py
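A sketch of a forward pass with dummy tensors. The batch shapes and the vocabulary size (50265, the standard BART vocabulary) are illustrative only; model is assumed to be the BartForGeneration instance built above.

```python
# Sketch: forward pass with dummy inputs; calling the module invokes forward.
import torch

batch_size, src_len, tgt_len = 2, 128, 48
input_ids = torch.randint(0, 50265, (batch_size, src_len))          # encoder token IDs
attention_mask = torch.ones(batch_size, src_len, dtype=torch.long)  # 1 = attend
decoder_input_ids = torch.randint(0, 50265, (batch_size, tgt_len))  # decoder token IDs
decoder_attention_mask = torch.ones(batch_size, tgt_len, dtype=torch.long)

outputs = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    decoder_input_ids=decoder_input_ids,
    decoder_attention_mask=decoder_attention_mask,
)
```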
from_core_configure (classmethod)
from_core_configure(config, **kwargs)
Create an instance of BartForGeneration from core configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config | | The core configuration object. | required |
**kwargs | | Additional keyword arguments. | {} |
Returns:
Name | Type | Description |
---|---|---|
BartForGeneration | | The initialized BartForGeneration instance. |
Source code in src/unitorch/cli/models/bart/modeling.py
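A config-driven sketch mirroring the processor example above; CoreConfigureParser is again an assumed loader name, and only from_core_configure(config, **kwargs) is documented here.

```python
# Hypothetical sketch: CoreConfigureParser is an assumed config loader.
from unitorch.cli import CoreConfigureParser           # assumed loader class
from unitorch.cli.models.bart import BartForGeneration

# The core/model/generation/bart section is expected to supply config_path,
# gradient_checkpointing, etc., as noted in the Tip above.
config = CoreConfigureParser("config.ini")
model = BartForGeneration.from_core_configure(config)
```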
generate
generate(
input_ids: Tensor,
num_beams: Optional[int] = 5,
decoder_start_token_id: Optional[int] = 2,
decoder_end_token_id: Optional[
Union[int, List[int]]
] = 2,
num_return_sequences: Optional[int] = 1,
min_gen_seq_length: Optional[int] = 0,
max_gen_seq_length: Optional[int] = 48,
repetition_penalty: Optional[float] = 1.0,
no_repeat_ngram_size: Optional[int] = 0,
early_stopping: Optional[bool] = True,
length_penalty: Optional[float] = 1.0,
num_beam_groups: Optional[int] = 1,
diversity_penalty: Optional[float] = 0.0,
do_sample: Optional[bool] = False,
temperature: Optional[float] = 1.0,
top_k: Optional[int] = 50,
top_p: Optional[float] = 1.0,
)
Generate sequences using the BartForGeneration model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_ids | Tensor | Input IDs. | required |
num_beams | int | Number of beams for beam search. | 5 |
decoder_start_token_id | int | ID of the decoder start token. | 2 |
decoder_end_token_id | int or List[int] | ID of the decoder end token. | 2 |
num_return_sequences | int | Number of generated sequences to return. | 1 |
min_gen_seq_length | int | Minimum length of generated sequences. | 0 |
max_gen_seq_length | int | Maximum length of generated sequences. | 48 |
repetition_penalty | float | Repetition penalty. | 1.0 |
no_repeat_ngram_size | int | Size of n-grams to avoid repeating. | 0 |
early_stopping | bool | Whether to stop generation early. | True |
length_penalty | float | Length penalty for generated sequences. | 1.0 |
num_beam_groups | int | Number of groups for diverse beam search. | 1 |
diversity_penalty | float | Diversity penalty for diverse beam search. | 0.0 |
do_sample | bool | Whether to use sampling for generation. | False |
temperature | float | Sampling temperature. | 1.0 |
top_k | int | Top-k sampling parameter. | 50 |
top_p | float | Top-p sampling parameter. | 1.0 |
Returns:
Name | Type | Description |
---|---|---|
GenerationOutputs | | The generated sequences and their scores. |
Source code in src/unitorch/cli/models/bart/modeling.py
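A beam-search generation sketch using only documented arguments. The dummy input IDs are illustrative; in practice they would come from the processor, and model is the BartForGeneration instance built above.

```python
# Sketch: beam-search generation with a few documented arguments spelled out.
import torch

input_ids = torch.randint(0, 50265, (2, 128))   # dummy encoder token IDs

outputs = model.generate(
    input_ids=input_ids,
    num_beams=5,                # beam search width
    num_return_sequences=1,     # sequences returned per input example
    max_gen_seq_length=48,      # cap on generated sequence length
    no_repeat_ngram_size=3,     # block repeated trigrams in the output
)
# outputs is a GenerationOutputs holding the sequences and their scores.
```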