Class ModelParameters

- Namespace: Oobabooga.APIHelper
- Assembly: DougWare.OobaboogaAPIHelper.dll

ModelParameters represents the parameters used to control a chat model's text generation process.

- Inheritance: ModelParameters
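The properties documented below correspond to the fields of a generation request. As an illustrative sketch only (the field names are taken from this page; the concrete values and the idea of a JSON payload are assumptions, not part of this API's documentation), a typical set of parameters might look like:

```python
# Hypothetical generation-parameter payload. Field names match the
# ModelParameters properties documented on this page; values are examples.
payload = {
    "max_new_tokens": 200,       # cap on generated tokens
    "do_sample": True,           # sample instead of deterministic decoding
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "repetition_penalty": 1.15,  # 1.0 = no penalty
    "seed": -1,                  # -1 = random seed
    "truncation_length": 2048,   # prompt truncated from the left beyond this
}
```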
Properties
add_bos_token
If true, adds a beginning-of-sentence token to the prompt. Some models need this.
Property Value
ban_eos_token
If true, forbids the model from ending the generation prematurely. Some models need this to be unset.
Property Value
do_sample
If true, tokens are sampled from the model's probability distribution; if false, decoding is deterministic (greedy search, or contrastive search when penalty_alpha and top_k are set).
Property Value
early_stopping
Used with beam search: if true, generation stops as soon as num_beams complete candidate sequences are found, instead of continuing to search for better ones.
Property Value
encoder_repetition_penalty
The "Hallucinations filter", penalizes tokens that are not in the prior text. Higher values make the model stay more in context, lower values allow it to diverge.
Property Value
length_penalty
Exponential penalty applied to a candidate's length when scoring it in beam search. Values greater than zero favor longer sequences; values less than zero favor shorter ones.
Property Value
max_context_length
This is the maximum string length for a prompt that the API will consume. If your prompt exceeds this length, it will be truncated from the top.
Property Value
max_new_tokens
Maximum number of new tokens to generate, which determines the length of the generated text. This is very important because the prompt and the response share the context window: the more tokens you reserve for the response, the fewer remain for the prompt. Make it small to leave room for a bigger prompt, large for a bigger response.
Property Value
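Because the prompt and the response share one context window, the usable prompt budget shrinks as max_new_tokens grows. A minimal sketch of that arithmetic, using the truncation_length and max_new_tokens parameters documented on this page (the helper function itself is made up for illustration):

```python
def prompt_token_budget(truncation_length: int, max_new_tokens: int) -> int:
    """Tokens left for the prompt once space for the response is reserved.

    Illustrative helper, not part of this API: the context window
    (truncation_length) must hold both the prompt and the new tokens.
    """
    return max(truncation_length - max_new_tokens, 0)

# With a 2048-token window, reserving 512 tokens for the reply leaves
# 1536 tokens of prompt before left-truncation would kick in.
budget = prompt_token_budget(2048, 512)
```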
min_length
The minimum length of the generated text in tokens.
Property Value
no_repeat_ngram_size
Specifies the length of token sequences that are blocked from repeating entirely. Higher values block longer phrases; only 0 (disabled) or high values are recommended, since small values also block harmless repeats of short, common phrases.
Property Value
num_beams
The number of beams in beam search. Higher values can provide better results but use more VRAM.
Property Value
penalty_alpha
The alpha parameter for contrastive search decoding; it takes effect when do_sample is false and top_k is set.
Property Value
repetition_penalty
Applies an exponential penalty to repeating tokens. 1 means no penalty, higher values reduce repetition, lower values increase repetition.
Property Value
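The repetition penalty is commonly applied in the CTRL style: logits of previously seen tokens are divided by the penalty when positive and multiplied by it when negative, so penalties above 1.0 always make repeats less likely. A sketch of that standard rule (illustrative only, not this library's code):

```python
def apply_repetition_penalty(logits, seen_token_ids, penalty):
    """Penalize tokens that already appeared, in the CTRL-paper style
    used by common samplers (sketch; not this library's implementation).

    penalty == 1.0 leaves logits unchanged; > 1.0 discourages repeats.
    """
    out = list(logits)
    for t in set(seen_token_ids):
        out[t] = out[t] * penalty if out[t] < 0 else out[t] / penalty
    return out

# Token 0: 2.0 -> 1.0; token 1: -1.0 -> -2.0; token 2 was never seen.
scores = apply_repetition_penalty([2.0, -1.0, 0.5], seen_token_ids=[0, 1], penalty=2.0)
```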
seed
The seed for random number generation. Use -1 for random.
Property Value
skip_special_tokens
If true, the model will skip special tokens in the output. Unsetting this can make replies more creative.
Property Value
temperature
Controls the randomness of outputs. 0 makes output deterministic, while higher values increase randomness.
Property Value
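Temperature works by scaling the logits before the softmax: dividing by a small temperature sharpens the distribution toward the most likely token, while a large temperature flattens it. A self-contained sketch of that standard technique (not this library's code; a temperature of exactly 0 is normally treated as "take the argmax" rather than a division by zero):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities after dividing by temperature.

    Low temperature concentrates probability on the argmax; high
    temperature spreads it out. Illustrative sketch only.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

sharp = softmax_with_temperature([2.0, 1.0, 0.5], temperature=0.1)   # near one-hot
flat = softmax_with_temperature([2.0, 1.0, 0.5], temperature=10.0)   # near uniform
```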
top_k
Selects only the top_k most likely tokens, similar to top_p. Higher values increase the range of possible random results.
Property Value
top_p
When set below 1, only the smallest set of most-likely tokens whose probabilities add up to at least top_p is kept; the rest are discarded. Higher values increase the range of possible random results.
Property Value
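top_k and top_p are typically applied together: first keep the k most likely tokens, then keep the smallest prefix of those whose cumulative probability reaches top_p. A sketch of that standard filtering step over an already-normalized distribution (illustrative only, not this library's code):

```python
def top_k_top_p_filter(probs, top_k, top_p):
    """Return the token ids that survive top-k then top-p (nucleus)
    filtering. Sketch of the standard technique; sampling would then
    pick randomly among the surviving ids.
    """
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:   # smallest set reaching top_p
            break
    return kept

# Four tokens: top_k=3 drops token 3, then top_p=0.8 keeps tokens 0 and 1.
kept = top_k_top_p_filter([0.5, 0.3, 0.15, 0.05], top_k=3, top_p=0.8)
```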
truncation_length
The maximum length of the prompt, in tokens. If a prompt exceeds this length, the leftmost tokens are removed. Most models require this to be at most 2048.
Property Value
typical_p
Selects only tokens that are at least this much more likely to appear than random tokens, given the prior text.
Property Value
Methods
FromEmbeddedResource<T>(string)
Parameters
resourceName
string
Returns
- T
Type Parameters
T
FromFile<T>(string)
Parameters
filePath
string
Returns
- T
Type Parameters
T