BP: Biological Process

CC: Cellular Component

MF: Molecular Function

Runtime 1 gene: ~40 secs

Runtime 10 genes: ~40 secs

Runtime 100 genes: ~1 min 4 secs

Runtime 10 genes: ~54 secs

Runtime 40 genes: ~55 secs

Runtime 100 genes: ~66 secs

Convergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance.
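The stopping rule above can be sketched as follows. This is a minimal NumPy illustration of soft-thresholded coordinate descent for the Gaussian lasso, not the tool's actual implementation; the function name `lasso_cd` and the quadratic approximation of the per-update objective change are assumptions for illustration.

```python
import numpy as np

def lasso_cd(X, y, lam, thresh=1e-7, max_iter=1000):
    """Soft-thresholded coordinate descent with a thresh-style stopping rule."""
    n, p = X.shape
    beta = np.zeros(p)
    null_dev = np.sum((y - y.mean()) ** 2)   # null deviance (Gaussian case)
    col_ss = (X ** 2).sum(axis=0)            # per-column sum of squares
    resid = y - X @ beta
    for _ in range(max_iter):
        max_change = 0.0
        for j in range(p):
            old = beta[j]
            z = X[:, j] @ resid + col_ss[j] * old                  # partial fit
            new = np.sign(z) * max(abs(z) - lam, 0.0) / col_ss[j]  # soft threshold
            if new != old:
                resid -= X[:, j] * (new - old)
                beta[j] = new
                # approximate objective change from this single coefficient update
                max_change = max(max_change, col_ss[j] * (new - old) ** 2)
        # stop when no update moved the objective by more than thresh * null deviance
        if max_change < thresh * null_dev:
            break
    return beta
```

Lowering `thresh` makes the inner loop run longer and yields a more precise solution at extra computational cost.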

Lasso is a regularization technique for linear regression that investigators can use to predict an outcome by selecting the subset of variables that minimizes prediction error. Lasso adds a penalty term that constrains the size of the estimated coefficients, and in that respect it resembles ridge regression. Lasso is a shrinkage estimator: it produces coefficient estimates that are biased toward zero. Nevertheless, a lasso estimator can have smaller mean squared error than an ordinary least-squares estimator when applied to new data, because the bias it introduces is offset by a larger reduction in variance; this is the bias-variance tradeoff. Unlike ridge regression, as the penalty term increases, lasso sets more coefficients exactly to zero, which results in a smaller model with fewer predictors.
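The sparsity behavior described above can be seen directly with scikit-learn (used here purely for illustration, not necessarily the tool's backend); the simulated data, with only two informative predictors out of ten, is an assumption for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 10))
# only the first two predictors carry signal; the other eight are noise
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=100)

counts = []
for alpha in (0.01, 0.1, 1.0):
    model = Lasso(alpha=alpha).fit(X, y)
    nonzero = int(np.sum(model.coef_ != 0))
    counts.append(nonzero)
    print(f"alpha={alpha}: {nonzero} nonzero coefficients")
```

As the penalty `alpha` grows, more coefficients are driven exactly to zero, leaving a sparser, more interpretable model.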

Elastic net is a hybrid of ridge regression and lasso regularization. The main difference between lasso and ridge is the penalty they use: ridge uses an L2 penalty, which limits the size of the coefficient vector, whereas lasso uses an L1 penalty, which imposes sparsity among the coefficients and thus makes the fitted model more interpretable. Elastic net was introduced as a compromise between these two techniques; its penalty is a mix of the L1 and L2 norms. Like lasso, elastic net can perform variable selection by shrinking some coefficients to zero. When a study includes highly correlated variables, ridge regression shrinks their coefficients toward one another, while lasso generally picks one variable over the other (and which one gets picked can depend on arbitrary details of the fit). Elastic net is a compromise between the two that attempts to shrink and perform sparse selection simultaneously. Empirical studies have suggested that elastic net can outperform lasso on data with highly correlated predictors.
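The contrast on correlated predictors can be demonstrated with a small scikit-learn sketch (an illustration under assumed, simulated data; the `alpha` and `l1_ratio` values are chosen only for the example):

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
z = rng.normal(size=200)
# columns 0 and 1 are nearly identical (highly correlated); column 2 is noise
X = np.column_stack([z + 0.01 * rng.normal(size=200),
                     z + 0.01 * rng.normal(size=200),
                     rng.normal(size=200)])
y = X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print("lasso coefficients:", lasso.coef_)
print("enet  coefficients:", enet.coef_)
```

Lasso tends to load most of the weight on one member of the correlated pair, whereas the elastic net's L2 component spreads the weight across both, which is often preferable when correlated predictors are jointly informative.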

**AESA paper** https://www.ncbi.nlm.nih.gov/pubmed/31607216

Advancing Pan-cancer Gene Expression Survival Analysis by Inclusion of Non-coding RNA. *RNA Biology,* 2019

Supported browsers: PC: Google Chrome 67.0 or later, Internet Explorer 11.0.70 or later.

(It works best with Chrome)

© 2018-2020

All rights reserved.