R Random Forest Tutorial with Example
What is a Random Forest in R?
Random forests are based on a simple idea: the "wisdom of the crowd". The aggregate of the predictions of many predictors is better than the prediction of the best individual predictor. A group of predictors is called an ensemble; hence, this technique is called Ensemble Learning.
In a previous tutorial, you learned how to use decision trees to make a binary prediction. To improve our technique, we can train a group of decision tree classifiers, each on a different random subset of the training set. To make a prediction, we simply collect the predictions of all the individual trees and then predict the class that gets the most votes. This technique is called a Random Forest.
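To make the voting idea concrete, here is a minimal sketch (an illustration, not the tutorial's method): it assumes a hypothetical data frame df with a two-level factor outcome y, grows a handful of rpart trees on bootstrap samples, and takes a majority vote.
library(rpart)

set.seed(42)
# Grow 25 trees, each on a bootstrap sample (a random subset drawn with replacement)
trees <- lapply(1:25, function(i) {
  boot <- df[sample(nrow(df), replace = TRUE), ]
  rpart(y ~ ., data = boot, method = "class")   # df and y are placeholder names
})
# Collect every tree's prediction, then keep the class with the most votes per row
votes <- sapply(trees, function(t) as.character(predict(t, df, type = "class")))
majority_vote <- apply(votes, 1, function(v) names(which.max(table(v))))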
Step 1) Import the data
To make sure you have the same dataset as in the decision tree tutorial, the train set and the test set are stored on the internet. You can import them without making any modification.
library(dplyr)
data_train <- read.csv("https://raw.githubusercontent.com/guru99-edu/R-Programming/master/train.csv")
glimpse(data_train)
data_test <- read.csv("https://raw.githubusercontent.com/guru99-edu/R-Programming/master/test.csv")
glimpse(data_test)
Step 2) Train the model
One way to evaluate the performance of a model is to train it on a number of different smaller datasets and evaluate it on a held-out set. This is called K-fold cross-validation. R can randomly split a dataset into k folds of nearly identical size. For instance, if k = 10, the model is trained on nine folds and evaluated on the remaining fold; the process is repeated until every fold has been used as the test set. This technique is widely used for model selection, especially when the model has parameters to tune.
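As an illustration (the caret functions used later do this for you), a hand-rolled 10-fold loop could look like the sketch below, reusing data_train from Step 1 and a plain decision tree as the model:
library(caret)   # for createFolds()
library(rpart)

set.seed(1234)
data_train$survived <- factor(data_train$survived)  # ensure the outcome is a factor
folds <- createFolds(data_train$survived, k = 10)   # 10 held-out index sets
accuracy <- sapply(folds, function(idx) {
  fit <- rpart(survived ~ ., data = data_train[-idx, ], method = "class")
  pred <- predict(fit, data_train[idx, ], type = "class")
  mean(pred == data_train$survived[idx])            # accuracy on the held-out fold
})
mean(accuracy)                                      # cross-validated accuracy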
Now that we have a way to evaluate the model, we need to figure out how to choose the parameters that generalize the data best.
A random forest chooses a random subset of features and builds many decision trees. The model averages out all the predictions of the decision trees.
The random forest has some parameters that can be changed to improve the generalization of the prediction. You will use the function randomForest() to train the model.
The syntax of randomForest() is:
randomForest(formula, ntree = n, mtry = FALSE, maxnodes = NULL)
Arguments:
- formula: formula of the fitted model
- ntree: number of trees in the forest
- mtry: number of candidate features drawn at each split. By default, it is the square root of the number of predictors (for classification).
- maxnodes: set the maximum number of terminal nodes in the forest
- importance = TRUE: whether the importance of the independent variables should be assessed
Note: the random forest can be trained with more parameters. You can refer to the vignette to see the different parameters.
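For reference, a direct call could look like the sketch below; it assumes data_train from Step 1 with survived stored as a factor, and the parameter values are arbitrary, not tuned:
library(randomForest)

set.seed(1234)
rf <- randomForest(survived ~ .,
                   data = data_train,
                   ntree = 300,        # grow 300 trees
                   mtry = 3,           # 3 candidate features per split
                   maxnodes = 20,      # cap the number of terminal nodes
                   importance = TRUE)  # assess variable importance
print(rf)  # shows the out-of-bag error estimate and confusion matrix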
Tuning a model is very tedious work. There are lots of possible combinations of parameters, and you don't necessarily have the time to try all of them. A good alternative is to let the machine find the best combination for you. There are two methods available:
- Random search
- Grid search
We will define both methods, but during the tutorial we will train the model using grid search.
Grid search definition
The grid search method is simple: the model will be evaluated, with cross-validation, over all the combinations you pass in the function.
For instance, you want to try the model with 10, 20 and 30 trees, and each number of trees will be tested with mtry equal to 1, 2, 3, 4 and 5. The machine will then test 3 × 5 = 15 different models:
   .mtry ntrees
1      1     10
2      2     10
3      3     10
4      4     10
5      5     10
6      1     20
7      2     20
8      3     20
9      4     20
10     5     20
11     1     30
12     2     30
13     3     30
14     4     30
15     5     30
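In R, a grid like this can be built with expand.grid(), which returns one row per combination:
grid <- expand.grid(mtry = 1:5, ntree = c(10, 20, 30))
nrow(grid)  # 15 rows, one per combination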
The algorithm will evaluate:
randomForest(formula, ntree = 10, mtry = 1)
randomForest(formula, ntree = 10, mtry = 2)
randomForest(formula, ntree = 10, mtry = 3)
randomForest(formula, ntree = 20, mtry = 2)
...
Each time, the random forest is evaluated with cross-validation. One shortcoming of grid search is the number of experiments: it explodes very quickly when the number of combinations is high. To overcome this issue, you can use random search.
Random search definition
The big difference between random search and grid search is that random search does not evaluate all the combinations of hyperparameters in the search space. Instead, it randomly selects a combination at every iteration. The advantage is a lower computational cost.
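With caret, random search is only a change of control settings; a sketch (variable names are mine, data_train as in Step 1) could look like this. Note that for method = "rf" caret only tunes mtry, so the random draws are random mtry values:
trControl_random <- trainControl(method = "cv", number = 10, search = "random")
set.seed(1234)
rf_random <- train(survived ~ .,
                   data = data_train,
                   method = "rf",
                   metric = "Accuracy",
                   tuneLength = 5,        # evaluate 5 randomly drawn mtry values
                   trControl = trControl_random)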
Set the control parameters
You will proceed as follows to construct and evaluate the model:
- Evaluate the model with the default settings
- Find the best number of mtry
- Find the best number of maxnodes
- Find the best number of ntree
- Evaluate the model on the test dataset
Before you begin the exploration of the parameters, you need to install two libraries:
- caret: R machine learning library. If you have installed R with r-essentials, it is already in the library.
- Anaconda: conda install -c r r-caret
- e1071: R machine learning library.
- Anaconda: conda install -c r r-e1071
You can import them along with randomForest:
library(randomForest)
library(caret)
library(e1071)
Default settings
K-fold cross-validation is controlled by the trainControl() function.
trainControl(method = "cv", number = n, search = "grid")
Arguments:
- method = "cv": the method used to resample the dataset
- number = n: the number of folds to create
- search = "grid": use the grid search method. For random search, use "random".
Note: you can refer to the vignette to see the other arguments of the function.
You can try to run the model with the default parameters and see the accuracy score.
Note: you will use the same controls during all the tutorial.
# Define the control
trControl <- trainControl(method = "cv",
                          number = 10,
                          search = "grid")
You will use the caret library to evaluate your model. The library has one function called train() that can evaluate almost all machine learning algorithms. Put differently, you can use this same function to train other algorithms.
The basic syntax is:
train(formula, df, method = "rf", metric = "Accuracy", trControl = trainControl(), tuneGrid = NULL)
Arguments:
- formula: define the formula of the algorithm
- method: define which model to train. Note: at the end of the tutorial, there is a list of all the models that can be trained
- metric = "Accuracy": define how to select the optimal model
- trControl = trainControl(): define the control parameters
- tuneGrid = NULL: return a data frame with all the possible combinations
Let's try to build the model with the default values.
set.seed(1234)
# Run the model
rf_default <- train(survived ~ .,
                    data = data_train,
                    method = "rf",
                    metric = "Accuracy",
                    trControl = trControl)
# Print the results
print(rf_default)
Code explanation
- trainControl(method = "cv", number = 10, search = "grid"): evaluate the model with a grid search over 10 folds
- train(...): train a random forest model. The best model is selected with the accuracy measure.
Output:
## Random Forest 
## 
## 836 samples
##   7 predictor
##   2 classes: 'No', 'Yes' 
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 753, 752, 753, 752, 752, 752, ... 
## Resampling results across tuning parameters:
## 
##   mtry  Accuracy   Kappa    
##    2    0.7919248  0.5536486
##    6    0.7811245  0.5391611
##   10    0.7572002  0.4939620
## 
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 2.
The algorithm uses 500 trees (the default) and tested three different values of mtry: 2, 6 and 10.
The final value used for the model was mtry = 2, with an accuracy of 0.79. Let's try to get a higher score.
Search the best mtry
You can test the model with values of mtry from 1 to 10.
set.seed(1234)
tuneGrid <- expand.grid(.mtry = c(1:10))
rf_mtry <- train(survived ~ .,
                 data = data_train,
                 method = "rf",
                 metric = "Accuracy",
                 tuneGrid = tuneGrid,
                 trControl = trControl,
                 importance = TRUE,
                 nodesize = 14,
                 ntree = 300)
print(rf_mtry)
Code explanation
- tuneGrid <- expand.grid(.mtry = c(1:10)): build a grid with values of mtry from 1 to 10
The final value used for the model is mtry = 4.
Output:
## Random Forest 
## 
## 836 samples
##   7 predictor
##   2 classes: 'No', 'Yes' 
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 753, 752, 753, 752, 752, 752, ... 
## Resampling results across tuning parameters:
## 
##   mtry  Accuracy   Kappa    
##    1    0.7572576  0.4647368
##    2    0.7979346  0.5662364
##    3    0.8075158  0.5884815
##    4    0.8110729  0.5970664
##    5    0.8074727  0.5900030
##    6    0.8099111  0.5949342
##    7    0.8050918  0.5866415
##    8    0.8050918  0.5855399
##    9    0.8050631  0.5855035
##   10    0.7978916  0.5707336
## 
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 4.
The best value of mtry is stored in:
rf_mtry$bestTune$mtry
You can store it and use it when you need to tune the other parameters.
max(rf_mtry$results$Accuracy)
Output:
## [1] 0.8110729
best_mtry <- rf_mtry$bestTune$mtry
best_mtry
Output:
## [1] 4
Step 3) Search the best maxnodes
You need to create a loop to evaluate the different values of maxnodes. In the following code, you will:
- Create a list to store the results
- Create a variable with the best value of the parameter mtry (compulsory)
- Create the loop
- Store the current value of maxnodes
- Summarize the results
store_maxnode <- list()
tuneGrid <- expand.grid(.mtry = best_mtry)
for (maxnodes in c(5:15)) {
  set.seed(1234)
  rf_maxnode <- train(survived ~ .,
                      data = data_train,
                      method = "rf",
                      metric = "Accuracy",
                      tuneGrid = tuneGrid,
                      trControl = trControl,
                      importance = TRUE,
                      nodesize = 14,
                      maxnodes = maxnodes,
                      ntree = 300)
  key <- toString(maxnodes)
  store_maxnode[[key]] <- rf_maxnode
}
results_mtry <- resamples(store_maxnode)
summary(results_mtry)
Code explanation:
- store_maxnode <- list(): the results of the model will be stored in this list
- expand.grid(.mtry = best_mtry): use the best value of mtry
- for (maxnodes in c(5:15)) { ... }: compute the model with values of maxnodes from 5 to 15.
- maxnodes = maxnodes: for each iteration, maxnodes is set to the current value of the loop, i.e. 5, 6, 7, ...
- key <- toString(maxnodes): store the value of maxnodes as a string variable.
- store_maxnode[[key]] <- rf_maxnode: save the result of the model in the list.
- resamples(store_maxnode): collect the results of the models for comparison
- summary(results_mtry): print the summary of all the combinations.
Output:
## 
## Call:
## summary.resamples(object = results_mtry)
## 
## Models: 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 
## Number of resamples: 10 
## 
## Accuracy 
##         Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
## 5  0.6785714 0.7529762 0.7903758 0.7799771 0.8168388 0.8433735    0
## 6  0.6904762 0.7648810 0.7784710 0.7811962 0.8125000 0.8313253    0
## 7  0.6904762 0.7619048 0.7738095 0.7788009 0.8102410 0.8333333    0
## 8  0.6904762 0.7627295 0.7844234 0.7847820 0.8184524 0.8433735    0
## 9  0.7261905 0.7747418 0.8083764 0.7955250 0.8258749 0.8333333    0
## 10 0.6904762 0.7837780 0.7904475 0.7895869 0.8214286 0.8433735    0
## 11 0.7023810 0.7791523 0.8024240 0.7943775 0.8184524 0.8433735    0
## 12 0.7380952 0.7910929 0.8144005 0.8051205 0.8288511 0.8452381    0
## 13 0.7142857 0.8005952 0.8192771 0.8075158 0.8403614 0.8452381    0
## 14 0.7380952 0.7941050 0.8203528 0.8098967 0.8403614 0.8452381    0
## 15 0.7142857 0.8000215 0.8203528 0.8075301 0.8378873 0.8554217    0
## 
## Kappa 
##         Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
## 5  0.3297872 0.4640436 0.5459706 0.5270773 0.6068751 0.6717371    0
## 6  0.3576471 0.4981484 0.5248805 0.5366310 0.6031287 0.6480921    0
## 7  0.3576471 0.4927448 0.5192771 0.5297159 0.5996437 0.6508314    0
## 8  0.3576471 0.4848320 0.5408159 0.5427127 0.6200253 0.6717371    0
## 9  0.4236277 0.5074421 0.5859472 0.5601687 0.6228626 0.6480921    0
## 10 0.3576471 0.5255698 0.5527057 0.5497490 0.6204819 0.6717371    0
## 11 0.3794326 0.5235007 0.5783191 0.5600467 0.6126720 0.6717371    0
## 12 0.4460432 0.5480930 0.5999072 0.5808134 0.6296780 0.6717371    0
## 13 0.4014252 0.5725752 0.6087279 0.5875305 0.6576219 0.6678832    0
## 14 0.4460432 0.5585005 0.6117973 0.5911995 0.6590982 0.6717371    0
## 15 0.4014252 0.5689401 0.6117973 0.5867010 0.6507194 0.6955990    0
The highest mean accuracy is obtained toward the top of the range (maxnodes = 14 and 15). You can try with higher values to see if you can get a higher score.
store_maxnode <- list()
tuneGrid <- expand.grid(.mtry = best_mtry)
for (maxnodes in c(20:30)) {
  set.seed(1234)
  rf_maxnode <- train(survived ~ .,
                      data = data_train,
                      method = "rf",
                      metric = "Accuracy",
                      tuneGrid = tuneGrid,
                      trControl = trControl,
                      importance = TRUE,
                      nodesize = 14,
                      maxnodes = maxnodes,
                      ntree = 300)
  key <- toString(maxnodes)
  store_maxnode[[key]] <- rf_maxnode
}
results_node <- resamples(store_maxnode)
summary(results_node)
Output:
## 
## Call:
## summary.resamples(object = results_node)
## 
## Models: 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30 
## Number of resamples: 10 
## 
## Accuracy 
##         Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
## 20 0.7142857 0.7821644 0.8144005 0.8075301 0.8447719 0.8571429    0
## 21 0.7142857 0.8000215 0.8144005 0.8075014 0.8403614 0.8571429    0
## 22 0.7023810 0.7941050 0.8263769 0.8099254 0.8328313 0.8690476    0
## 23 0.7023810 0.7941050 0.8263769 0.8111302 0.8447719 0.8571429    0
## 24 0.7142857 0.7946429 0.8313253 0.8135112 0.8417599 0.8690476    0
## 25 0.7142857 0.7916667 0.8313253 0.8099398 0.8408635 0.8690476    0
## 26 0.7142857 0.7941050 0.8203528 0.8123207 0.8528758 0.8571429    0
## 27 0.7023810 0.8060456 0.8313253 0.8135112 0.8333333 0.8690476    0
## 28 0.7261905 0.7941050 0.8203528 0.8111015 0.8328313 0.8690476    0
## 29 0.7142857 0.7910929 0.8313253 0.8087063 0.8333333 0.8571429    0
## 30 0.6785714 0.7910929 0.8263769 0.8063253 0.8403614 0.8690476    0
## 
## Kappa 
##         Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
## 20 0.3956835 0.5316120 0.5961830 0.5854366 0.6661120 0.6955990    0
## 21 0.3956835 0.5699332 0.5960343 0.5853247 0.6590982 0.6919315    0
## 22 0.3735084 0.5560661 0.6221836 0.5914492 0.6422128 0.7189781    0
## 23 0.3735084 0.5594228 0.6228827 0.5939786 0.6657372 0.6955990    0
## 24 0.3956835 0.5600352 0.6337821 0.5992188 0.6604703 0.7189781    0
## 25 0.3956835 0.5530760 0.6354875 0.5912239 0.6554912 0.7189781    0
## 26 0.3956835 0.5589331 0.6136074 0.5969142 0.6822128 0.6955990    0
## 27 0.3735084 0.5852459 0.6368425 0.5998148 0.6426088 0.7189781    0
## 28 0.4290780 0.5589331 0.6154905 0.5946859 0.6356141 0.7189781    0
## 29 0.4070588 0.5534173 0.6337821 0.5901173 0.6423101 0.6919315    0
## 30 0.3297872 0.5534173 0.6202632 0.5843432 0.6590982 0.7189781    0
The highest mean accuracy score is obtained with maxnodes equal to 24 (27 ties with it; the final model below uses 24).
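Rather than reading the table by eye, you could pull the best value out of the resamples summary; a small sketch (summary.resamples() stores the tables in its statistics element):
acc <- summary(results_node)$statistics$Accuracy  # matrix of accuracy statistics per model
rownames(acc)[which.max(acc[, "Mean"])]           # "24" for the run above (ties go to the first)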
Step 4) Search the best ntree
Now that you have the best values of mtry and maxnodes, you can tune the number of trees. The method is exactly the same as for maxnodes.
store_maxtrees <- list()
for (ntree in c(250, 300, 350, 400, 450, 500, 550, 600, 800, 1000, 2000)) {
  set.seed(5678)
  rf_maxtrees <- train(survived ~ .,
                       data = data_train,
                       method = "rf",
                       metric = "Accuracy",
                       tuneGrid = tuneGrid,
                       trControl = trControl,
                       importance = TRUE,
                       nodesize = 14,
                       maxnodes = 24,
                       ntree = ntree)
  key <- toString(ntree)
  store_maxtrees[[key]] <- rf_maxtrees
}
results_tree <- resamples(store_maxtrees)
summary(results_tree)
Output:
## 
## Call:
## summary.resamples(object = results_tree)
## 
## Models: 250, 300, 350, 400, 450, 500, 550, 600, 800, 1000, 2000 
## Number of resamples: 10 
## 
## Accuracy 
##           Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
## 250  0.7380952 0.7976190 0.8083764 0.8087010 0.8292683 0.8674699    0
## 300  0.7500000 0.7886905 0.8024240 0.8027199 0.8203397 0.8452381    0
## 350  0.7500000 0.7886905 0.8024240 0.8027056 0.8277623 0.8452381    0
## 400  0.7500000 0.7886905 0.8083764 0.8051009 0.8292683 0.8452381    0
## 450  0.7500000 0.7886905 0.8024240 0.8039104 0.8292683 0.8452381    0
## 500  0.7619048 0.7886905 0.8024240 0.8062914 0.8292683 0.8571429    0
## 550  0.7619048 0.7886905 0.8083764 0.8099062 0.8323171 0.8571429    0
## 600  0.7619048 0.7886905 0.8083764 0.8099205 0.8323171 0.8674699    0
## 800  0.7619048 0.7976190 0.8083764 0.8110820 0.8292683 0.8674699    0
## 1000 0.7619048 0.7976190 0.8121510 0.8086723 0.8303571 0.8452381    0
## 2000 0.7619048 0.7886905 0.8121510 0.8086723 0.8333333 0.8452381    0
## 
## Kappa 
##           Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
## 250  0.4061697 0.5667400 0.5836013 0.5856103 0.6335363 0.7196807    0
## 300  0.4302326 0.5449376 0.5780349 0.5723307 0.6130767 0.6710843    0
## 350  0.4302326 0.5449376 0.5780349 0.5723185 0.6291592 0.6710843    0
## 400  0.4302326 0.5482030 0.5836013 0.5774782 0.6335363 0.6710843    0
## 450  0.4302326 0.5449376 0.5780349 0.5750587 0.6335363 0.6710843    0
## 500  0.4601542 0.5449376 0.5780349 0.5804340 0.6335363 0.6949153    0
## 550  0.4601542 0.5482030 0.5857118 0.5884507 0.6396872 0.6949153    0
## 600  0.4601542 0.5482030 0.5857118 0.5884374 0.6396872 0.7196807    0
## 800  0.4601542 0.5667400 0.5836013 0.5910088 0.6335363 0.7196807    0
## 1000 0.4601542 0.5667400 0.5961590 0.5857446 0.6343666 0.6678832    0
## 2000 0.4601542 0.5482030 0.5961590 0.5862151 0.6440678 0.6656337    0
You have your final model. You can train the random forest with the following parameters:
- ntree = 800: 800 trees will be trained
- mtry = 4: 4 features are chosen for each iteration
- maxnodes = 24: a maximum of 24 terminal nodes (leaves)
fit_rf <- train(survived ~ .,
                data_train,
                method = "rf",
                metric = "Accuracy",
                tuneGrid = tuneGrid,
                trControl = trControl,
                importance = TRUE,
                nodesize = 14,
                ntree = 800,
                maxnodes = 24)
Step 5) Evaluate the model
The caret library has a function to make predictions.
predict(model, newdata = df)
Arguments:
- model: define the model evaluated before
- newdata: define the dataset on which to make the prediction
prediction <- predict(fit_rf, data_test)
You can use the prediction to compute the confusion matrix and see the accuracy score:
confusionMatrix(prediction, data_test$survived)
Output:
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  No Yes
##        No  110  32
##        Yes  11  56
## 
##                Accuracy : 0.7943         
##                  95% CI : (0.733, 0.8469)
##     No Information Rate : 0.5789         
##     P-Value [Acc > NIR] : 3.959e-11      
## 
##                   Kappa : 0.5638         
##  Mcnemar's Test P-Value : 0.002289       
## 
##             Sensitivity : 0.9091         
##             Specificity : 0.6364         
##          Pos Pred Value : 0.7746         
##          Neg Pred Value : 0.8358         
##              Prevalence : 0.5789         
##          Detection Rate : 0.5263         
##    Detection Prevalence : 0.6794         
##       Balanced Accuracy : 0.7727         
## 
##        'Positive' Class : No
You have an accuracy of 0.7943, which is higher than the default model's 0.7919.
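If you want the number programmatically rather than from the printout, confusionMatrix() returns it in its overall element:
cm <- confusionMatrix(prediction, data_test$survived)
cm$overall["Accuracy"]  # 0.7943 for the run above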
Step 6) Visualize the result
Finally, you can look at the feature importance with the function varImp(). It seems that the most important features are sex and age. This is not surprising: important features are likely to appear closer to the root of the tree, while less important features often appear close to the leaves.
varImp(fit_rf)
Output:
## rf variable importance
## 
##              Importance
## sexmale         100.000
## age              28.014
## pclassMiddle     27.016
## fare             21.557
## pclassUpper      16.324
## sibsp            11.246
## parch             5.522
## embarkedC         4.908
## embarkedQ         1.420
## embarkedS         0.000
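The varImp() result also has a plot method if you prefer a chart; for instance:
plot(varImp(fit_rf), top = 10)  # plot the 10 most important variables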
Summary
We can summarize how to train and evaluate a random forest with the table below:
Library | Objective | Function | Parameters |
---|---|---|---|
randomForest | Create a random forest | randomForest() | formula, ntree = n, mtry = FALSE, maxnodes = NULL |
caret | Create K-fold cross-validation | trainControl() | method = "cv", number = n, search = "grid" |
caret | Train a random forest | train() | formula, df, method = "rf", metric = "Accuracy", trControl = trainControl(), tuneGrid = NULL |
caret | Predict out of sample | predict() | model, newdata = df |
caret | Confusion matrix and statistics | confusionMatrix() | model, y test |
caret | Variable importance | varImp() | model |
Appendix
List of the models that can be used with caret:
names(getModelInfo())
Output:
## [1] "ada" "AdaBag" "AdaBoost.M1" ## [4] "adaboost" "amdai" "ANFIS" ## [7] "avNNet" "awnb" "awtan" ## [10] "bag" "bagEarth" "bagEarthGCV" ## [13] "bagFDA" "bagFDAGCV" "bam" ## [16] "bartMachine" "bayesglm" "binda" ## [19] "blackboost" "blasso" "blassoAveraged" ## [22] "bridge" "brnn" "BstLm" ## [25] "bstSm" "bstTree" "C5.0" ## [28] "C5.0Cost" "C5.0Rules" "C5.0Tree" ## [31] "cforest" "chaid" "CSimca" ## [34] "ctree" "ctree2" "cubist" ## [37] "dda" "deepboost" "DENFIS" ## [40] "dnn" "dwdLinear" "dwdPoly" ## [43] "dwdRadial" "earth" "elm" ## [46] "enet" "evtree" "extraTrees" ## [49] "fda" "FH.GBML" "FIR.DM" ## [52] "foba" "FRBCS.CHI" "FRBCS.W" ## [55] "FS.HGD" "gam" "gamboost" ## [58] "gamLoess" "gamSpline" "gaussprLinear" ## [61] "gaussprPoly" "gaussprRadial" "gbm_h3o" ## [64] "gbm" "gcvEarth" "GFS.FR.MOGUL" ## [67] "GFS.GCCL" "GFS.LT.RS" "GFS.THRIFT" ## [70] "glm.nb" "glm" "glmboost" ## [73] "glmnet_h3o" "glmnet" "glmStepAIC" ## [76] "gpls" "hda" "hdda" ## [79] "hdrda" "HYFIS" "icr" ## [82] "J48" "JRip" "kernelpls" ## [85] "kknn" "knn" "krlsPoly" ## [88] "krlsRadial" "lars" "lars2" ## [91] "lasso" "lda" "lda2" ## [94] "leapBackward" "leapForward" "leapSeq" ## [97] "Linda" "lm" "lmStepAIC" ## [100] "LMT" "loclda" "logicBag" ## [103] "LogitBoost" "logreg" "lssvmLinear" ## [106] "lssvmPoly" "lssvmRadial" "lvq" ## [109] "M5" "M5Rules" "manb" ## [112] "mda" "Mlda" "mlp" ## [115] "mlpKerasDecay" "mlpKerasDecayCost" "mlpKerasDropout" ## [118] "mlpKerasDropoutCost" "mlpML" "mlpSGD" ## [121] "mlpWeightDecay" "mlpWeightDecayML" "monmlp" ## [124] "msaenet" "multinom" "mxnet" ## [127] "mxnetAdam" "naive_bayes" "nb" ## [130] "nbDiscrete" "nbSearch" "neuralnet" ## [133] "nnet" "nnls" "nodeHarvest" ## [136] "null" "OneR" "ordinalNet" ## [139] "ORFlog" "ORFpls" "ORFridge" ## [142] "ORFsvm" "ownn" "pam" ## [145] "parRF" "PART" "partDSA" ## [148] "pcaNNet" "pcr" "pda" ## [151] "pda2" "penalized" "PenalizedLDA" ## [154] "plr" "pls" "plsRglm" ## [157] "polr" "ppr" "PRIM" ## [160] "protoclass" "pythonKnnReg" "qda" ## [163] "QdaCov" "qrf" "qrnn" ## [166] "randomGLM" "ranger" "rbf" ## [169] "rbfDDA" "Rborist" "rda" ## [172] "regLogistic" "relaxo" "rf" ## [175] "rFerns" "RFlda" "rfRules" ## [178] "ridge" "rlda" "rlm" ## [181] "rmda" "rocc" "rotationForest" ## [184] "rotationForestCp" "rpart" "rpart1SE" ## [187] "rpart2" "rpartCost" "rpartScore" ## [190] "rqlasso" "rqnc" "RRF" ## [193] "RRFglobal" "rrlda" "RSimca" ## [196] "rvmLinear" "rvmPoly" "rvmRadial" ## [199] "SBC" "sda" "sdwd" ## [202] "simpls" "SLAVE" "slda" ## [205] "smda" "snn" "sparseLDA" ## [208] "spikeslab" "spls" "stepLDA" ## [211] "stepQDA" "superpc" "svmBoundrangeString"## [214] "svmExpoString" "svmLinear" "svmLinear2" ## [217] "svmLinear3" "svmLinearWeights" "svmLinearWeights2" ## [220] "svmPoly" "svmRadial" "svmRadialCost" ## [223] "svmRadialSigma" "svmRadialWeights" "svmSpectrumString" ## [226] "tan" "tanSearch" "treebag" ## [229] "vbmpRadial" "vglmAdjCat" "vglmContRatio" ## [232] "vglmCumulative" "widekernelpls" "WM" ## [235] "wsrf" "xgbLinear" "xgbTree" ## [238] "xyf"