Parameters
This page contains descriptions of all parameters in LightGBM.
Parameters Format
The parameters format is key1=value1 key2=value2 ....
Parameters can be set both in config file and command line.
On the command line, parameters should not have spaces before and after =.
In config files, one line can contain only one parameter. You can use # for comments.
If one parameter appears in both command line and config file, LightGBM will use the parameter from the command line.
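For example, the same training run can be configured directly on the command line (a minimal sketch; the executable and file names are placeholders):

lightgbm task=train objective=regression data=train.txt valid=valid.txt num_iterations=200 learning_rate=0.05

or through a config file passed as lightgbm config=train.conf, where train.conf contains one parameter per line:

task=train
objective=regression
data=train.txt
valid=valid.txt
num_iterations=200
learning_rate=0.05
# this is a comment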
For the Python and R packages, any parameters that accept a list of values (usually they have multi-xxx type, e.g. multi-int or multi-double) can be specified in those languages’ default array types.
For example, monotone_constraints can be specified as follows.
Python
params = {
"monotone_constraints": [-1, 0, 1]
}
R
params <- list(
monotone_constraints = c(-1, 0, 1)
)
Core Parameters
config, default = "", type = string, aliases: config_file
path of config file
Note: can be used only in CLI version
task, default = train, type = enum, options: train, predict, convert_model, refit, aliases: task_type
train, for training, aliases: training
predict, for prediction, aliases: prediction, test
convert_model, for converting model file into if-else format, see more information in Convert Parameters
refit, for refitting existing models with new data, aliases: refit_tree
save_binary, load train (and validation) data then save dataset to binary file. Typical usage: save_binary first, then run multiple train tasks in parallel using the saved binary file
Note: can be used only in CLI version; for language-specific packages you can use the correspondent functions
objective, default = regression, type = enum, options: regression, regression_l1, huber, fair, poisson, quantile, mape, gamma, tweedie, binary, multiclass, multiclassova, cross_entropy, cross_entropy_lambda, lambdarank, rank_xendcg, aliases: objective_type, app, application, loss
regression application
regression, L2 loss, aliases: regression_l2, l2, mean_squared_error, mse, l2_root, root_mean_squared_error, rmse
regression_l1, L1 loss, aliases: l1, mean_absolute_error, mae
huber, Huber loss
fair, Fair loss
poisson, Poisson regression
quantile, Quantile regression
mape, MAPE loss, aliases: mean_absolute_percentage_error
gamma, Gamma regression with log-link. It might be useful, e.g., for modeling insurance claims severity, or for any target that might be gamma-distributed
tweedie, Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any target that might be tweedie-distributed
binary classification application
binary, binary log loss classification (or logistic regression)
requires labels in {0, 1}; see cross-entropy application for general probability labels in [0, 1]
multi-class classification application
multiclass, softmax objective function, aliases: softmax
multiclassova, One-vs-All binary objective function, aliases: multiclass_ova, ova, ovr
num_class should be set as well
cross-entropy application
cross_entropy, objective function for cross-entropy (with optional linear weights), aliases: xentropy
cross_entropy_lambda, alternative parameterization of cross-entropy, aliases: xentlambda
label is anything in interval [0, 1]
ranking application
lambdarank, lambdarank objective. label_gain can be used to set the gain (weight) of int label and all values in label must be smaller than number of elements in label_gain
rank_xendcg, XE_NDCG_MART ranking objective function, aliases: xendcg, xe_ndcg, xe_ndcg_mart, xendcg_mart
rank_xendcg is faster than and achieves the similar performance as lambdarank
label should be int type, and larger number represents the higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)
boosting, default = gbdt, type = enum, options: gbdt, rf, dart, goss, aliases: boosting_type, boost
gbdt, traditional Gradient Boosting Decision Tree, aliases: gbrt
rf, Random Forest, aliases: random_forest
dart, Dropouts meet Multiple Additive Regression Trees
goss, Gradient-based One-Side Sampling
Note: internally, LightGBM uses gbdt mode for the first 1 / learning_rate iterations
data, default = "", type = string, aliases: train, train_data, train_data_file, data_filename
path of training data, LightGBM will train from this data
Note: can be used only in CLI version
valid, default = "", type = string, aliases: test, valid_data, valid_data_file, test_data, test_data_file, valid_filenames
path(s) of validation/test data, LightGBM will output metrics for these data
support multiple validation data, separated by ,
Note: can be used only in CLI version
num_iterations, default = 100, type = int, aliases: num_iteration, n_iter, num_tree, num_trees, num_round, num_rounds, num_boost_round, n_estimators, max_iter, constraints: num_iterations >= 0
number of boosting iterations
Note: internally, LightGBM constructs num_class * num_iterations trees for multi-class classification problems
learning_rate, default = 0.1, type = double, aliases: shrinkage_rate, eta, constraints: learning_rate > 0.0
shrinkage rate
in dart, it also affects the normalization weights of dropped trees
num_leaves, default = 31, type = int, aliases: num_leaf, max_leaves, max_leaf, max_leaf_nodes, constraints: 1 < num_leaves <= 131072
max number of leaves in one tree
tree_learner, default = serial, type = enum, options: serial, feature, data, voting, aliases: tree, tree_type, tree_learner_type
serial, single machine tree learner
feature, feature parallel tree learner, aliases: feature_parallel
data, data parallel tree learner, aliases: data_parallel
voting, voting parallel tree learner, aliases: voting_parallel
refer to Distributed Learning Guide to get more details
num_threads, default = 0, type = int, aliases: num_thread, nthread, nthreads, n_jobs
number of threads for LightGBM
0 means default number of threads in OpenMP
for the best speed, set this to the number of real CPU cores, not the number of threads (most CPUs use hyper-threading to generate 2 threads per CPU core)
do not set it too large if your dataset is small (for instance, do not use 64 threads for a dataset with 10,000 rows)
be aware that a task manager or any similar CPU monitoring tool might report that cores are not being fully utilized. This is normal
for distributed learning, do not use all CPU cores because this will cause poor performance for the network communication
Note: please don’t change this during training, especially when running multiple jobs simultaneously by external packages, otherwise it may cause undesirable errors
device_type, default = cpu, type = enum, options: cpu, gpu, cuda, aliases: device
device for the tree learning; you can use GPU to achieve faster learning
Note: it is recommended to use a smaller max_bin (e.g. 63) to get a better speed-up
Note: for faster speed, GPU uses 32-bit floating point to sum up by default, so this may affect the accuracy for some tasks. You can set gpu_use_dp=true to enable 64-bit floating point, but it will slow down the training
Note: refer to the Installation Guide to build LightGBM with GPU support
seed, default = None, type = int, aliases: random_seed, random_state
this seed is used to generate other seeds, e.g. data_random_seed, feature_fraction_seed, etc.
by default, this seed is unused in favor of default values of other seeds
this seed has lower priority in comparison with other seeds, which means that it will be overridden, if you set other seeds explicitly
deterministic, default = false, type = bool
used only with cpu device type
setting this to true should ensure stable results when using the same data and the same parameters (and different num_threads)
when you use different seeds, different LightGBM versions, binaries compiled by different compilers, or different systems, the results are expected to be different
you can raise issues in the LightGBM GitHub repo when you meet unstable results
Note: setting this to true may slow down the training
Note: to avoid potential instability due to numerical issues, please set force_col_wise=true or force_row_wise=true when setting deterministic=true
Learning Control Parameters
force_col_wise, default = false, type = bool
used only with cpu device type
set this to true to force col-wise histogram building
enabling this is recommended when:
the number of columns is large, or the total number of bins is large
num_threads is large, e.g. > 20
you want to reduce memory cost
Note: when both force_col_wise and force_row_wise are false, LightGBM will first try both of them, and then use the faster one. To remove the overhead of testing, set the faster one to true manually
Note: this parameter cannot be used at the same time with force_row_wise, choose only one of them
force_row_wise, default = false, type = bool
used only with cpu device type
set this to true to force row-wise histogram building
enabling this is recommended when:
the number of data points is large, and the total number of bins is relatively small
num_threads is relatively small, e.g. <= 16
you want to use small bagging_fraction or goss boosting to speed up
Note: setting this to true will double the memory cost for the Dataset object. If you do not have enough memory, you can try setting force_col_wise=true
Note: when both force_col_wise and force_row_wise are false, LightGBM will first try both of them, and then use the faster one. To remove the overhead of testing, set the faster one to true manually
Note: this parameter cannot be used at the same time with force_col_wise, choose only one of them
histogram_pool_size, default = -1.0, type = double, aliases: hist_pool_size
max cache size in MB for historical histogram
< 0 means no limit
max_depth, default = -1, type = int
limit the max depth for tree model. This is used to deal with over-fitting when #data is small. Tree still grows leaf-wise
<= 0 means no limit
min_data_in_leaf, default = 20, type = int, aliases: min_data_per_leaf, min_data, min_child_samples, min_samples_leaf, constraints: min_data_in_leaf >= 0
minimal number of data in one leaf. Can be used to deal with over-fitting
Note: this is an approximation based on the Hessian, so occasionally you may observe splits which produce leaf nodes that have less than this many observations
min_sum_hessian_in_leaf, default = 1e-3, type = double, aliases: min_sum_hessian_per_leaf, min_sum_hessian, min_hessian, min_child_weight, constraints: min_sum_hessian_in_leaf >= 0.0
minimal sum hessian in one leaf. Like min_data_in_leaf, it can be used to deal with over-fitting
bagging_fraction, default = 1.0, type = double, aliases: sub_row, subsample, bagging, constraints: 0.0 < bagging_fraction <= 1.0
like feature_fraction, but this will randomly select part of data without resampling
can be used to speed up training
can be used to deal with over-fitting
Note: to enable bagging, bagging_freq should be set to a non zero value as well
pos_bagging_fraction, default = 1.0, type = double, aliases: pos_sub_row, pos_subsample, pos_bagging, constraints: 0.0 < pos_bagging_fraction <= 1.0
used only in binary application
used for imbalanced binary classification problem, will randomly sample #pos_samples * pos_bagging_fraction positive samples in bagging
should be used together with neg_bagging_fraction
set this to 1.0 to disable
Note: to enable this, you need to set bagging_freq and neg_bagging_fraction as well
Note: if both pos_bagging_fraction and neg_bagging_fraction are set to 1.0, balanced bagging is disabled
Note: if balanced bagging is enabled, bagging_fraction will be ignored
neg_bagging_fraction, default = 1.0, type = double, aliases: neg_sub_row, neg_subsample, neg_bagging, constraints: 0.0 < neg_bagging_fraction <= 1.0
used only in binary application
used for imbalanced binary classification problem, will randomly sample #neg_samples * neg_bagging_fraction negative samples in bagging
should be used together with pos_bagging_fraction
set this to 1.0 to disable
Note: to enable this, you need to set bagging_freq and pos_bagging_fraction as well
Note: if both pos_bagging_fraction and neg_bagging_fraction are set to 1.0, balanced bagging is disabled
Note: if balanced bagging is enabled, bagging_fraction will be ignored
bagging_freq, default = 0, type = int, aliases: subsample_freq
frequency for bagging
0 means disable bagging; k means perform bagging at every k iterations. Every k-th iteration, LightGBM will randomly select bagging_fraction * 100 % of the data to use for the next k iterations
Note: to enable bagging, bagging_fraction should be set to value smaller than 1.0 as well
bagging_seed, default = 3, type = int, aliases: bagging_fraction_seed
random seed for bagging
feature_fraction, default = 1.0, type = double, aliases: sub_feature, colsample_bytree, constraints: 0.0 < feature_fraction <= 1.0
LightGBM will randomly select a subset of features on each iteration (tree) if feature_fraction is smaller than 1.0. For example, if you set it to 0.8, LightGBM will select 80% of features before training each tree
can be used to speed up training
can be used to deal with over-fitting
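For example, a minimal Python sketch combining feature and row subsampling (the data and the parameter values below are arbitrary, chosen only for illustration):

Python

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.normal(size=1000)

params = {
    "objective": "regression",
    "feature_fraction": 0.8,  # 80% of features per tree
    "bagging_fraction": 0.8,  # 80% of rows ...
    "bagging_freq": 5,        # ... re-sampled every 5 iterations
    "verbosity": -1,
}
booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=50)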
feature_fraction_bynode, default = 1.0, type = double, aliases: sub_feature_bynode, colsample_bynode, constraints: 0.0 < feature_fraction_bynode <= 1.0
LightGBM will randomly select a subset of features on each tree node if feature_fraction_bynode is smaller than 1.0. For example, if you set it to 0.8, LightGBM will select 80% of features at each tree node
can be used to deal with over-fitting
Note: unlike feature_fraction, this cannot speed up training
Note: if both feature_fraction and feature_fraction_bynode are smaller than 1.0, the final fraction of each node is feature_fraction * feature_fraction_bynode
feature_fraction_seed, default = 2, type = int
random seed for feature_fraction
extra_trees, default = false, type = bool, aliases: extra_tree
use extremely randomized trees
if set to true, when evaluating node splits LightGBM will check only one randomly-chosen threshold for each feature
can be used to speed up training
can be used to deal with over-fitting
extra_seed, default = 6, type = int
random seed for selecting thresholds when extra_trees is true
early_stopping_round, default = 0, type = int, aliases: early_stopping_rounds, early_stopping, n_iter_no_change
will stop training if one metric of one validation data doesn't improve in last early_stopping_round rounds
<= 0 means disable
can be used to speed up training
first_metric_only, default = false, type = bool
LightGBM allows you to provide multiple evaluation metrics. Set this to true, if you want to use only the first metric for early stopping
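For example, a minimal Python sketch of early stopping on a validation set, judging improvement on the first metric only (data and parameter values are arbitrary):

Python

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 10)), rng.integers(0, 2, size=1000)
X_val, y_val = rng.normal(size=(200, 10)), rng.integers(0, 2, size=200)

params = {
    "objective": "binary",
    "metric": ["auc", "binary_logloss"],
    "early_stopping_round": 20,  # stop if no improvement for 20 rounds
    "first_metric_only": True,   # judge improvement on "auc" only
    "verbosity": -1,
}
train_set = lgb.Dataset(X, label=y)
valid_set = lgb.Dataset(X_val, label=y_val, reference=train_set)
booster = lgb.train(params, train_set, num_boost_round=500, valid_sets=[valid_set])
print(booster.best_iteration)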
max_delta_step, default = 0.0, type = double, aliases: max_tree_output, max_leaf_output
used to limit the max output of tree leaves
<= 0 means no constraint
the final max output of leaves is learning_rate * max_delta_step
lambda_l1, default = 0.0, type = double, aliases: reg_alpha, l1_regularization, constraints: lambda_l1 >= 0.0
L1 regularization
lambda_l2, default = 0.0, type = double, aliases: reg_lambda, lambda, l2_regularization, constraints: lambda_l2 >= 0.0
L2 regularization
linear_lambda, default = 0.0, type = double, constraints: linear_lambda >= 0.0
linear tree regularization, corresponds to the parameter lambda in Eq. 3 of Gradient Boosting with Piece-Wise Linear Regression Trees
min_gain_to_split, default = 0.0, type = double, aliases: min_split_gain, constraints: min_gain_to_split >= 0.0
the minimal gain to perform split
can be used to speed up training
drop_rate, default = 0.1, type = double, aliases: rate_drop, constraints: 0.0 <= drop_rate <= 1.0
used only in dart
dropout rate: a fraction of previous trees to drop during the dropout
max_drop, default = 50, type = int
used only in dart
max number of dropped trees during one boosting iteration
<= 0 means no limit
skip_drop, default = 0.5, type = double, constraints: 0.0 <= skip_drop <= 1.0
used only in dart
probability of skipping the dropout procedure during a boosting iteration
xgboost_dart_mode, default = false, type = bool
used only in dart
set this to true, if you want to use xgboost dart mode
uniform_drop, default = false, type = bool
used only in dart
set this to true, if you want to use uniform drop
drop_seed, default = 4, type = int
used only in dart
random seed to choose dropping models
top_rate, default = 0.2, type = double, constraints: 0.0 <= top_rate <= 1.0
used only in goss
the retain ratio of large gradient data
other_rate, default = 0.1, type = double, constraints: 0.0 <= other_rate <= 1.0
used only in goss
the retain ratio of small gradient data
min_data_per_group, default = 100, type = int, constraints: min_data_per_group > 0
minimal number of data per categorical group
max_cat_threshold, default = 32, type = int, constraints: max_cat_threshold > 0
used for the categorical features
limit number of split points considered for categorical features. See the documentation on how LightGBM finds optimal splits for categorical features for more details
can be used to speed up training
cat_l2, default = 10.0, type = double, constraints: cat_l2 >= 0.0
used for the categorical features
L2 regularization in categorical split
cat_smooth, default = 10.0, type = double, constraints: cat_smooth >= 0.0
used for the categorical features
this can reduce the effect of noises in categorical features, especially for categories with few data
max_cat_to_onehot, default = 4, type = int, constraints: max_cat_to_onehot > 0
when the number of categories of one feature is smaller than or equal to max_cat_to_onehot, the one-vs-other split algorithm will be used
top_k, default = 20, type = int, aliases: topk, constraints: top_k > 0
used only in voting tree learner, refer to Voting parallel
set this to larger value for more accurate result, but it will slow down the training speed
monotone_constraints, default = None, type = multi-int, aliases: mc, monotone_constraint, monotonic_cst
used for constraints of monotonic features
1 means increasing, -1 means decreasing, 0 means non-constraint
you need to specify all features in order. For example, mc=-1,0,1 means decreasing for the 1st feature, non-constraint for the 2nd feature and increasing for the 3rd feature
monotone_constraints_method, default = basic, type = enum, options: basic, intermediate, advanced, aliases: monotone_constraining_method, mc_method
used only if monotone_constraints is set
monotone constraints method
basic, the most basic monotone constraints method. It does not slow the library at all, but over-constrains the predictions
intermediate, a more advanced method, which may slow the library very slightly. However, this method is much less constraining than the basic method and should significantly improve the results
advanced, an even more advanced method, which may slow the library. However, this method is even less constraining than the intermediate method and should again significantly improve the results
monotone_penalty, default = 0.0, type = double, aliases: monotone_splits_penalty, ms_penalty, mc_penalty, constraints: monotone_penalty >= 0.0
used only if monotone_constraints is set
monotone penalty: a penalization parameter X forbids any monotone splits on the first X (rounded down) level(s) of the tree. The penalty applied to monotone splits on a given depth is a continuous, increasing function of the penalization parameter
if 0.0 (the default), no penalization is applied
feature_contri, default = None, type = multi-double, aliases: feature_contrib, fc, fp, feature_penalty
used to control feature's split gain, will use gain[i] = max(0, feature_contri[i]) * gain[i] to replace the split gain of the i-th feature
you need to specify all features in order
forcedsplits_filename, default = "", type = string, aliases: fs, forced_splits_filename, forced_splits_file, forced_splits
path to a .json file that specifies splits to force at the top of every decision tree before best-first learning commences
.json file can be arbitrarily nested, and each split contains feature and threshold fields, as well as left and right fields representing subsplits
categorical splits are forced in a one-hot fashion, with left representing the split containing the feature value and right representing other values
Note: the forced split logic will be ignored, if the split makes gain worse
see this file as an example
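As an illustration (a hypothetical example with made-up feature indices and thresholds), such a file could be produced like this and then passed via forcedsplits_filename=forced_splits.json:

Python

import json

# force a split on feature 0 at threshold 0.5 at the root,
# and a split on feature 1 at threshold 10.0 on its left child
forced_split = {
    "feature": 0,
    "threshold": 0.5,
    "left": {
        "feature": 1,
        "threshold": 10.0,
    },
}

with open("forced_splits.json", "w") as f:
    json.dump(forced_split, f, indent=2)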
refit_decay_rate, default = 0.9, type = double, constraints: 0.0 <= refit_decay_rate <= 1.0
decay rate of refit task, will use leaf_output = refit_decay_rate * old_leaf_output + (1.0 - refit_decay_rate) * new_leaf_output to refit trees
used only in refit task in CLI version or as argument in refit function in language-specific package
cegb_tradeoff, default = 1.0, type = double, constraints: cegb_tradeoff >= 0.0
cost-effective gradient boosting multiplier for all penalties
cegb_penalty_split, default = 0.0, type = double, constraints: cegb_penalty_split >= 0.0
cost-effective gradient-boosting penalty for splitting a node
cegb_penalty_feature_lazy, default = 0,0,...,0, type = multi-double
cost-effective gradient boosting penalty for using a feature
applied per data point
cegb_penalty_feature_coupled, default = 0,0,...,0, type = multi-double
cost-effective gradient boosting penalty for using a feature
applied once per forest
path_smooth, default = 0, type = double, constraints: path_smooth >= 0.0
controls smoothing applied to tree nodes
helps prevent overfitting on leaves with few samples
if set to zero, no smoothing is applied
if path_smooth > 0 then min_data_in_leaf must be at least 2
larger values give stronger regularization
the weight of each node is w * (n / path_smooth) / (n / path_smooth + 1) + w_p / (n / path_smooth + 1), where n is the number of samples in the node, w is the optimal node weight to minimise the loss (approximately -sum_gradients / sum_hessians), and w_p is the weight of the parent node
note that the parent output w_p itself has smoothing applied, unless it is the root node, so that the smoothing effect accumulates with the tree depth
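Restated as a small Python sketch of the relation above (just the arithmetic, not LightGBM's internal implementation):

Python

def smoothed_leaf_output(w, w_p, n, path_smooth):
    # w: raw optimal leaf weight, w_p: (already smoothed) parent weight,
    # n: number of samples in the leaf
    k = n / path_smooth
    return w * k / (k + 1) + w_p / (k + 1)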
interaction_constraints, default = "", type = string
controls which features can appear in the same branch
by default interaction constraints are disabled, to enable them you can specify
for CLI, lists separated by commas, e.g. [0,1,2],[2,3]
for Python-package, list of lists, e.g. [[0, 1, 2], [2, 3]]
for R-package, list of character or numeric vectors, e.g. list(c("var1", "var2", "var3"), c("var3", "var4")) or list(c(1L, 2L, 3L), c(3L, 4L)). Numeric vectors should use 1-based indexing, where 1L is the first feature, 2L is the second feature, etc.
any two features can appear in the same branch only if there exists a constraint containing both features
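For instance, the Python sketch below (with arbitrary data) lets features 0, 1, 2 interact with each other and features 2, 3 with each other, but never features 0 and 3 in the same branch:

Python

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 4)), rng.normal(size=500)

params = {
    "objective": "regression",
    "interaction_constraints": [[0, 1, 2], [2, 3]],
    "verbosity": -1,
}
booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=20)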
verbosity, default = 1, type = int, aliases: verbose
controls the level of LightGBM's verbosity
< 0: Fatal,= 0: Error (Warning),= 1: Info,> 1: Debug
input_model, default = "", type = string, aliases: model_input, model_in
filename of input model
for prediction task, this model will be applied to prediction data
for train task, training will be continued from this model
Note: can be used only in CLI version
output_model, default = LightGBM_model.txt, type = string, aliases: model_output, model_out
filename of output model in training
Note: can be used only in CLI version
saved_feature_importance_type, default = 0, type = int
the feature importance type in the saved model file
0: count-based feature importance (numbers of splits are counted); 1: gain-based feature importance (values of gain are counted)
Note: can be used only in CLI version
snapshot_freq, default = -1, type = int, aliases: save_period
frequency of saving model file snapshot
set this to positive value to enable this function. For example, the model file will be snapshotted at each iteration if snapshot_freq=1
Note: can be used only in CLI version
IO Parameters
Dataset Parameters
linear_tree, default = false, type = bool, aliases: linear_trees
fit piecewise linear gradient boosting tree
tree splits are chosen in the usual way, but the model at each leaf is linear instead of constant
the linear model at each leaf includes all the numerical features in that leaf’s branch
categorical features are used for splits as normal but are not used in the linear models
missing values should not be encoded as 0. Use np.nan for Python, NA for the CLI, and NA, NA_real_, or NA_integer_ for R
it is recommended to rescale data before training so that features have similar mean and standard deviation
Note: only works with CPU and serial tree learner
Note: regression_l1 objective is not supported with linear tree boosting
Note: setting linear_tree=true significantly increases the memory use of LightGBM
Note: if you specify monotone_constraints, constraints will be enforced when choosing the split points, but not when fitting the linear models on leaves
max_bin, default = 255, type = int, aliases: max_bins, constraints: max_bin > 1
max number of bins that feature values will be bucketed in
small number of bins may reduce training accuracy but may increase general power (deal with over-fitting)
LightGBM will auto compress memory according to max_bin. For example, LightGBM will use uint8_t for feature value if max_bin=255
max_bin_by_feature, default = None, type = multi-int
max number of bins for each feature
if not specified, will use max_bin for all features
min_data_in_bin, default = 3, type = int, constraints: min_data_in_bin > 0
minimal number of data inside one bin
use this to avoid one-data-one-bin (potential over-fitting)
bin_construct_sample_cnt, default = 200000, type = int, aliases: subsample_for_bin, constraints: bin_construct_sample_cnt > 0
number of data points sampled to construct feature discrete bins
setting this to larger value will give better training result, but may increase data loading time
set this to larger value if data is very sparse
Note: don’t set this to small values, otherwise, you may encounter unexpected errors and poor accuracy
data_random_seed, default = 1, type = int, aliases: data_seed
random seed for sampling data to construct histogram bins
is_enable_sparse, default = true, type = bool, aliases: is_sparse, enable_sparse, sparse
used to enable/disable sparse optimization
enable_bundle, default = true, type = bool, aliases: is_enable_bundle, bundle
set this to false to disable Exclusive Feature Bundling (EFB), which is described in LightGBM: A Highly Efficient Gradient Boosting Decision Tree
Note: disabling this may cause slow training speed for sparse datasets
use_missing, default = true, type = bool
set this to false to disable the special handling of missing values
zero_as_missing, default = false, type = bool
set this to true to treat all zeros as missing values (including the unshown values in LibSVM / sparse matrices)
set this to false to use na for representing missing values
feature_pre_filter, default = true, type = bool
set this to true (the default) to tell LightGBM to ignore the features that are unsplittable based on min_data_in_leaf
as the dataset object is initialized only once and cannot be changed after that, you may need to set this to false when searching parameters with min_data_in_leaf, otherwise features are filtered by min_data_in_leaf first if you don't reconstruct the dataset object
Note: setting this to false may slow down the training
pre_partition, default = false, type = bool, aliases: is_pre_partition
used for distributed learning (excluding the feature_parallel mode)
true if training data are pre-partitioned, and different machines use different partitions
two_round, default = false, type = bool, aliases: two_round_loading, use_two_round_loading
set this to true if data file is too big to fit in memory
by default, LightGBM will map data file to memory and load features from memory. This will provide faster data loading speed, but may cause run out of memory error when the data file is very big
Note: works only in case of loading data directly from text file
header, default = false, type = bool, aliases: has_header
set this to true if input data has header
Note: works only in case of loading data directly from text file
label_column, default = "", type = int or string, aliases: label
used to specify the label column
use number for index, e.g. label=0 means column_0 is the label
add a prefix name: for column name, e.g. label=name:is_click
if omitted, the first column in the training data is used as the label
Note: works only in case of loading data directly from text file
weight_column, default = "", type = int or string, aliases: weight
used to specify the weight column
use number for index, e.g. weight=0 means column_0 is the weight
add a prefix name: for column name, e.g. weight=name:weight
Note: works only in case of loading data directly from text file
Note: index starts from 0 and it doesn't count the label column when passing type is int, e.g. when label is column_0, and weight is column_1, the correct parameter is weight=0
group_column, default = "", type = int or string, aliases: group, group_id, query_column, query, query_id
used to specify the query/group id column
use number for index, e.g. query=0 means column_0 is the query id
add a prefix name: for column name, e.g. query=name:query_id
Note: works only in case of loading data directly from text file
Note: data should be grouped by query_id, for more information, see Query Data
Note: index starts from 0 and it doesn't count the label column when passing type is int, e.g. when label is column_0 and query_id is column_1, the correct parameter is query=0
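For example, a hypothetical training file with a header row query_id,is_click,weight,price could be described with the following config entries (a sketch; the column names are made up):

header=true
label_column=name:is_click
weight_column=name:weight
group_column=name:query_id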
ignore_column, default = "", type = multi-int or string, aliases: ignore_feature, blacklist
used to specify some ignoring columns in training
use number for index, e.g. ignore_column=0,1,2 means column_0, column_1 and column_2 will be ignored
add a prefix name: for column name, e.g. ignore_column=name:c1,c2,c3 means c1, c2 and c3 will be ignored
Note: works only in case of loading data directly from text file
Note: index starts from 0 and it doesn't count the label column when passing type is int
Note: despite the fact that specified columns will be completely ignored during the training, they still should have a valid format allowing LightGBM to load file successfully
categorical_feature, default = "", type = multi-int or string, aliases: cat_feature, categorical_column, cat_column, categorical_features
used to specify categorical features
use number for index, e.g. categorical_feature=0,1,2 means column_0, column_1 and column_2 are categorical features
add a prefix name: for column name, e.g. categorical_feature=name:c1,c2,c3 means c1, c2 and c3 are categorical features
Note: only supports categorical with int type (not applicable for data represented as pandas DataFrame in Python-package)
Note: index starts from 0 and it doesn't count the label column when passing type is int
Note: all values should be less than Int32.MaxValue (2147483647)
Note: using large values could be memory consuming. Tree decision rule works best when categorical features are presented by consecutive integers starting from zero
Note: all negative values will be treated as missing values
Note: the output cannot be monotonically constrained with respect to a categorical feature
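In the Python package, categorical columns can also be declared on the Dataset itself, as in the sketch below (the integer-coded column is arbitrary illustration data):

Python

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:, 0] = rng.integers(0, 5, size=500)  # integer-coded categorical column
y = rng.normal(size=500)

train_set = lgb.Dataset(X, label=y, categorical_feature=[0])
booster = lgb.train({"objective": "regression", "verbosity": -1}, train_set, num_boost_round=20)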
forcedbins_filename, default = "", type = string
path to a .json file that specifies bin upper bounds for some or all features
.json file should contain an array of objects, each containing the word feature (integer feature index) and bin_upper_bound (array of thresholds for binning)
see this file as an example
save_binary, default = false, type = bool, aliases: is_save_binary, is_save_binary_file
if true, LightGBM will save the dataset (including validation data) to a binary file. This speeds up data loading for the next time
Note: init_score is not saved in binary file
Note: can be used only in CLI version; for language-specific packages you can use the correspondent function
precise_float_parser, default = false, type = bool
use precise floating point number parsing for text parser (e.g. CSV, TSV, LibSVM input)
Note: setting this to true may lead to much slower text parsing
Predict Parameters
start_iteration_predict, default = 0, type = int
used only in prediction task
used to specify from which iteration to start the prediction
<= 0 means from the first iteration
num_iteration_predict, default = -1, type = int
used only in prediction task
used to specify how many trained iterations will be used in prediction
<= 0 means no limit
predict_raw_score, default = false, type = bool, aliases: is_predict_raw_score, predict_rawscore, raw_score
used only in prediction task
set this to true to predict only the raw scores
set this to false to predict transformed scores
predict_leaf_index, default = false, type = bool, aliases: is_predict_leaf_index, leaf_index
used only in prediction task
set this to true to predict with leaf index of all trees
predict_contrib, default = false, type = bool, aliases: is_predict_contrib, contrib
used only in prediction task
set this to true to estimate SHAP values, which represent how each feature contributes to each prediction
produces #features + 1 values where the last value is the expected value of the model output over the training data
Note: if you want to get more explanation for your model's predictions using SHAP values like SHAP interaction values, you can install the shap package
Note: unlike the shap package, with predict_contrib we return a matrix with an extra column, where the last column is the expected value
Note: this feature is not implemented for linear trees
predict_disable_shape_check, default = false, type = bool
used only in prediction task
control whether or not LightGBM raises an error when you try to predict on data with a different number of features than the training data
if false (the default), a fatal error will be raised if the number of features in the dataset you predict on differs from the number seen during training
if true, LightGBM will attempt to predict on whatever data you provide. This is dangerous because you might get incorrect predictions, but you could use it in situations where it is difficult or expensive to generate some features and you are very confident that they were never chosen for splits in the model
Note: be very careful setting this parameter to true
pred_early_stop, default = false, type = bool
used only in prediction task
used only in classification and ranking applications
if true, will use early-stopping to speed up the prediction. May affect the accuracy
Note: cannot be used with rf boosting type or custom objective function
pred_early_stop_freq, default = 10, type = int
used only in prediction task
the frequency of checking early-stopping prediction
pred_early_stop_margin, default = 10.0, type = double
used only in prediction task
the threshold of margin in early-stopping prediction
output_result, default = LightGBM_predict_result.txt, type = string, aliases: predict_result, prediction_result, predict_name, prediction_name, pred_name, name_pred
used only in prediction task
filename of prediction result
Note: can be used only in CLI version
Note: can be used only in CLI version
Convert Parameters
convert_model_language, default = "", type = string
used only in convert_model task
only cpp is supported yet; for conversion of the model to other languages, consider using the m2cgen utility
if convert_model_language is set and task=train, the model will also be converted
Note: can be used only in CLI version
convert_model, default = gbdt_prediction.cpp, type = string, aliases: convert_model_file
used only in convert_model task
output filename of converted model
Note: can be used only in CLI version
Note: can be used only in CLI version
Objective Parameters
objective_seed, default = 5, type = int
used only in rank_xendcg objective
random seed for objectives, if random process is needed
num_class, default = 1, type = int, aliases: num_classes, constraints: num_class > 0
used only in multi-class classification application
is_unbalance, default = false, type = bool, aliases: unbalance, unbalanced_sets
used only in binary and multiclassova applications
set this to true if training data are unbalanced
Note: while enabling this should increase the overall performance metric of your model, it will also result in poor estimates of the individual class probabilities
Note: this parameter cannot be used at the same time with scale_pos_weight, choose only one of them
scale_pos_weight, default = 1.0, type = double, constraints: scale_pos_weight > 0.0
used only in binary and multiclassova applications
weight of labels with positive class
Note: while enabling this should increase the overall performance metric of your model, it will also result in poor estimates of the individual class probabilities
Note: this parameter cannot be used at the same time with is_unbalance, choose only one of them
sigmoid, default = 1.0, type = double, constraints: sigmoid > 0.0
used only in binary and multiclassova classification and in lambdarank applications
parameter for the sigmoid function
boost_from_average, default = true, type = bool
used only in regression, binary, multiclassova and cross-entropy applications
adjusts initial score to the mean of labels for faster convergence
reg_sqrt, default = false, type = bool
used only in regression application
used to fit sqrt(label) instead of original values, and prediction result will also be automatically converted to prediction^2
might be useful in case of large-range labels
alpha, default = 0.9, type = double, constraints: alpha > 0.0
used only in huber and quantile regression applications
parameter for Huber loss and Quantile regression
fair_c, default = 1.0, type = double, constraints: fair_c > 0.0
used only in fair regression application
parameter for Fair loss
poisson_max_delta_step, default = 0.7, type = double, constraints: poisson_max_delta_step > 0.0
used only in poisson regression application
parameter for Poisson regression to safeguard optimization
tweedie_variance_power, default = 1.5, type = double, constraints: 1.0 <= tweedie_variance_power < 2.0
used only in tweedie regression application
used to control the variance of the tweedie distribution
set this closer to 2 to shift towards a Gamma distribution
set this closer to 1 to shift towards a Poisson distribution
lambdarank_truncation_level, default = 30, type = int, constraints: lambdarank_truncation_level > 0
used only in lambdarank application
controls the number of top-results to focus on during training, refer to "truncation level" in the Sec. 3 of LambdaMART paper
this parameter is closely related to the desirable cutoff k in the metric NDCG@k that we aim at optimizing the ranker for. The optimal setting for this parameter is likely to be slightly higher than k (e.g., k + 3) to include more pairs of documents to train on, but perhaps not too high to avoid deviating too much from the desired target metric NDCG@k
lambdarank_norm, default = true, type = bool
used only in lambdarank application
set this to true to normalize the lambdas for different queries, and improve the performance for unbalanced data
set this to false to enforce the original lambdarank algorithm
label_gain, default = 0,1,3,7,15,31,63,...,2^30-1, type = multi-double
used only in lambdarank application
relevant gain for labels. For example, the gain of label 2 is 3 in case of default label gains
separate by ,
Metric Parameters
metric, default = "", type = multi-enum, aliases: metrics, metric_types
metric(s) to be evaluated on the evaluation set(s)
"" (empty string or not specified) means that metric corresponding to specified objective will be used (this is possible only for pre-defined objective functions, otherwise no evaluation metric will be added)
"None" (string, not a None value) means that no metric will be registered, aliases: na, null, custom
l1, absolute loss, aliases: mean_absolute_error, mae, regression_l1
l2, square loss, aliases: mean_squared_error, mse, regression_l2, regression
rmse, root square loss, aliases: root_mean_squared_error, l2_root
quantile, Quantile regression
mape, MAPE loss, aliases: mean_absolute_percentage_error
huber, Huber loss
fair, Fair loss
poisson, negative log-likelihood for Poisson regression
gamma, negative log-likelihood for Gamma regression
gamma_deviance, residual deviance for Gamma regression
tweedie, negative log-likelihood for Tweedie regression
ndcg, NDCG, aliases: lambdarank, rank_xendcg, xendcg, xe_ndcg, xe_ndcg_mart, xendcg_mart
map, MAP, aliases: mean_average_precision
auc, AUC
average_precision, average precision score
binary_logloss, log loss, aliases: binary
binary_error, for one sample: 0 for correct classification, 1 for error classification
auc_mu, AUC-mu
multi_logloss, log loss for multi-class classification, aliases: multiclass, softmax, multiclassova, multiclass_ova, ova, ovr
multi_error, error rate for multi-class classification
cross_entropy, cross-entropy (with optional linear weights), aliases: xentropy
cross_entropy_lambda, "intensity-weighted" cross-entropy, aliases: xentlambda
kullback_leibler, Kullback-Leibler divergence, aliases: kldiv
support multiple metrics, separated by ,
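In the Python package, several metrics can be requested at once by passing a list (a minimal sketch):

Python

params = {
    "objective": "binary",
    "metric": ["auc", "binary_logloss"],  # equivalent to metric=auc,binary_logloss in the CLI
}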
metric_freq, default = 1, type = int, aliases: output_freq, constraints: metric_freq > 0
frequency for metric output
Note: can be used only in CLI version
is_provide_training_metric, default = false, type = bool, aliases: training_metric, is_training_metric, train_metric
set this to true to output metric result over training dataset
Note: can be used only in CLI version
eval_at, default = 1,2,3,4,5, type = multi-int, aliases: ndcg_eval_at, ndcg_at, map_eval_at, map_at
used only with ndcg and map metrics
NDCG and MAP evaluation positions, separated by ,
multi_error_top_k, default = 1, type = int, constraints: multi_error_top_k > 0
used only with multi_error metric
threshold for top-k multi-error metric
the error on each sample is 0 if the true class is among the top multi_error_top_k predictions, and 1 otherwise
more precisely, the error on a sample is 0 if there are at least num_classes - multi_error_top_k predictions strictly less than the prediction on the true class
when multi_error_top_k=1 this is equivalent to the usual multi-error metric
auc_mu_weights, default = None, type = multi-double
used only with auc_mu metric
list representing flattened matrix (in row-major order) giving loss weights for classification errors
list should have n * n elements, where n is the number of classes
the matrix co-ordinate [i, j] should correspond to the i * n + j-th element of the list
if not specified, will use equal weights for all classes
Network Parameters
num_machines, default = 1, type = int, aliases: num_machine, constraints: num_machines > 0
the number of machines for distributed learning application
this parameter needs to be set in both socket and mpi versions
local_listen_port, default = 12400 (random for Dask-package), type = int, aliases: local_port, port, constraints: local_listen_port > 0
TCP listen port for local machines
Note: don't forget to allow this port in firewall settings before training
time_out, default = 120, type = int, constraints: time_out > 0
socket time-out in minutes
machine_list_filename, default = "", type = string, aliases: machine_list_file, machine_list, mlist
path of file that lists machines for this distributed learning application
each line contains one IP and one port for one machine. The format is ip port (space as a separator)
Note: can be used only in CLI version
machines, default = "", type = string, aliases: workers, nodes
list of machines in the following format: ip1:port1,ip2:port2
GPU Parameters
gpu_platform_id, default = -1, type = int
OpenCL platform ID. Usually each GPU vendor exposes one OpenCL platform
-1 means the system-wide default platform
Note: refer to GPU Targets for more details
gpu_device_id, default = -1, type = int
OpenCL device ID in the specified platform. Each GPU in the selected platform has a unique device ID
-1 means the default device in the selected platform
Note: refer to GPU Targets for more details
gpu_use_dp, default = false, type = bool
set this to true to use double precision math on GPU (by default single precision is used)
Note: can be used only in OpenCL implementation; in CUDA implementation only double precision is currently supported
num_gpu, default = 1, type = int, constraints: num_gpu > 0
number of GPUs
Note: can be used only in CUDA implementation
Others
Continued Training with Input Score
LightGBM supports continued training with initial scores. It uses an additional file to store these initial scores, like the following:
0.5
-0.1
0.9
...
It means the initial score of the first data row is 0.5, second is -0.1, and so on.
The initial score file corresponds with the data file line by line, and has one score per line.
If the name of the data file is train.txt, the initial score file should be named train.txt.init and placed in the same folder as the data file.
In this case, LightGBM will automatically load the initial score file if it exists.
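For the Python package, the same information can also be passed directly to the Dataset instead of via a .init file (a minimal sketch with arbitrary values):

Python

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
init_score = np.full(100, 0.5)  # one initial score per training row

train_set = lgb.Dataset(X, label=y, init_score=init_score)
booster = lgb.train({"objective": "regression", "verbosity": -1}, train_set, num_boost_round=10)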
Weight Data
LightGBM supports weighted training. It uses an additional file to store weight data, like the following:
1.0
0.5
0.8
...
It means the weight of the first data row is 1.0, second is 0.5, and so on.
The weight file corresponds with the data file line by line, and has one weight per line.
If the name of the data file is train.txt, the weight file should be named train.txt.weight and placed in the same folder as the data file.
In this case, LightGBM will load the weight file automatically if it exists.
Also, you can include a weight column in your data file. Please refer to the weight_column parameter above.
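For the Python package, weights can also be passed directly to the Dataset instead of via a .weight file (a minimal sketch with arbitrary values):

Python

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)
weight = rng.uniform(0.5, 1.0, size=100)  # one weight per training row

train_set = lgb.Dataset(X, label=y, weight=weight)
booster = lgb.train({"objective": "binary", "verbosity": -1}, train_set, num_boost_round=10)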
Query Data
For learning to rank, query information is needed for the training data.
LightGBM uses an additional file to store query data, like the following:
27
18
67
...
For wrapper libraries like in Python and R, this information can also be provided as an array-like via the Dataset parameter group.
[27, 18, 67, ...]
For example, if you have a 112-document dataset with group = [27, 18, 67], that means that you have 3 groups, where the first 27 records are in the first group, records 28-45 are in the second group, and records 46-112 are in the third group.
Note: data should be ordered by the query.
If the name of the data file is train.txt, the query file should be named train.txt.query and placed in the same folder as the data file.
In this case, LightGBM will load the query file automatically if it exists.
Also, you can include a query/group id column in your data file. Please refer to the group_column parameter above.
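For the Python package, group sizes can also be passed directly to the Dataset (a minimal sketch matching the 112-document example above; the feature values are arbitrary):

Python

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(112, 5))
y = rng.integers(0, 4, size=112)  # integer relevance labels
group = [27, 18, 67]              # sizes of the 3 query groups, in order

train_set = lgb.Dataset(X, label=y, group=group)
booster = lgb.train({"objective": "lambdarank", "verbosity": -1}, train_set, num_boost_round=10)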