--- title: "Example_Walkthrough" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Example_Walkthrough} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` # Introduction In this walkthrough, we will go through the various use cases of TangledFeatures and how it outperforms traditional feature selection methods. We will also compare the raw accuracy of the package to other methods including standard black box approaches. TangledFeatures relies on the dataset input to be correct. If there are categorical features please ensure that it is input as a factor. If done correctly, the package will identify the correct correlation based relationships between the variables. In this example, we will be using the Boston housing prices data set. There has been some cleaning beforehand so that all the variables are in the correct orientation. # Getting Started TangledFeatures primarily relies on ggplot2 for visualization, data.table for data manipulation, correlation for various correlation methods and ranger for its Random Forest implementation. There are a few other packages used to run certain features. ```{r libraries} options(warn=-1) suppressMessages(suppressWarnings(library(ranger))) suppressMessages(suppressWarnings(library(igraph))) suppressMessages(suppressWarnings(library(correlation))) suppressMessages(suppressWarnings(library(data.table))) suppressMessages(suppressWarnings(library(fastDummies))) suppressMessages(suppressWarnings(library(ggplot2))) suppressMessages(suppressWarnings(library(randomForest))) suppressMessages(suppressWarnings(library(glmnet))) suppressMessages(suppressWarnings(library(caret))) suppressMessages(suppressWarnings(library(broom))) suppressMessages(suppressWarnings(library(broom.mixed))) suppressMessages(suppressWarnings(library(jtools))) ``` # Loading the Data To read the data we can use the simple: ```{r Data Loading} data <- TangledFeatures::Housing_Prices_dataset data <- data[,-1] ``` Let's call the package's data cleaning function. It sets dummy columns, cleans NAs and a host of other functions. You can check the 'Data Cleaning' tab for further information. We also need to define the dependent variable. ```{r Data Cleaning} Data <- TangledFeatures::DataCleaning(Data = data, Y_var = 'SalePrice') ``` Let's see the new columns ```{r EDA} colnames(Data$Cleaned_Data) ``` As we can see post cleaning, we have created many new columns. # Correlation and treating the data We are able to identify the variable class for each variable and assign it's appropriate correlation relationship with other variables. Read more in the Correlation section. Let's call the main package function. ```{r Function call} Results <- TangledFeatures::TangledFeatures(Data = TangledFeatures::Housing_Prices_dataset[,-1], Y_var = 'SalePrice', Focus_variables = list(), corr_cutoff = 0.85, RF_coverage = 0.95, plot = TRUE, fast_calculation = FALSE, cor1 = 'pearson', cor2 = 'polychoric', cor3 = 'spearman') ``` The function returns visualizations and metrics about the correlation as well as other interesting outputs. ```{r Correlation Heatmap} Heatmap_plot <- Results$Correlation_heatmap plot(Heatmap_plot) ``` For visualization purposes, we are only working with the top 20 variables. We can take the various relationships and visualize them on a single graph. TangledFeatures also returns the variables as well as the correlation metric used between them. 
```{r Correlation Graph}
Igraph_plot <- Results$Graph_plot
plot(Igraph_plot)
```

TangledFeatures uses graph theory algorithms to define clusters of interrelated variables. From the network plot we can broadly see the two clusters also visible in the heatmap, basement-related variables and garage-related variables, along with a few smaller clusters.

Let's look at the correlation between the sale price and the other variables. For space, we plot only correlations above 0.4 or below -0.4.

```{r Correlation Vs Sales}
# Correlation of every variable with the sale price
cors <- sort(cor(Data$Cleaned_Data)[, "sale_price"])
cor_df <- data.frame(Variable = names(cors), Correlation = as.numeric(cors))

# Keep only correlations stronger than +/- 0.4, ordered for plotting
cor_df_subset <- cor_df[abs(cor_df$Correlation) > 0.4, ]
cor_df_subset <- cor_df_subset[order(-cor_df_subset$Correlation), ]

p <- ggplot(data = cor_df_subset, aes(x = Correlation, y = Variable)) +
  geom_bar(stat = "identity") +
  scale_y_discrete(limits = cor_df_subset$Variable)
plot(p)
```

Although simple correlation is of course not indicative of a variable's behavior in a linear model, it is a good starting point for seeing the relationships. Both garage and basement variables show positive and negative relationships with the dependent variable. Note in particular the positive relationship between garage area and the sale price; we will return to it when comparing models.

# Features selected by TangledFeatures

```{r Final Variables}
TangledFeatures_variables <- Results$Final_Variables
```

Let's put the selected features into a linear model and observe their behavior.

```{r TangledFeatures LM}
formula_TangledFeatures <- as.formula(paste("sale_price ~", paste(TangledFeatures_variables, collapse = " + ")))
lm_TangledFeatures <- lm(formula_TangledFeatures, data = Data$Cleaned_Data)
summary(lm_TangledFeatures)
```

The selected features all appear to be significant and, more importantly, they agree with the findings from our correlation analysis.

# Let's see what a simple Lasso gives

```{r Other feature selection techniques}
control = caret::trainControl(method = "cv", number = 5)
set.seed(849)

lasso_caret <- caret::train(x = Data$Cleaned_Data[, -c("sale_price")],
                            y = Data$Cleaned_Data$sale_price,
                            method = "glmnet",
                            trControl = control,
                            preProc = c("center", "scale"),
                            tuneGrid = expand.grid(alpha = 1, lambda = 0))

lasso_coef <- coef(lasso_caret$finalModel, lasso_caret$bestTune$lambda)
```

Let's run a linear model with the variables we get from the Lasso.

```{r lasso regression}
lasso_coef <- as.data.frame(as.matrix(lasso_coef))
lasso_coef$Var_names <- rownames(lasso_coef)

# Keep the variables with positive Lasso coefficients
lasso_variables <- lasso_coef[which(lasso_coef$s1 > 0), ]$Var_names

# lasso_variables[-1] drops the "(Intercept)" term
formula_lasso <- as.formula(paste("sale_price ~", paste(lasso_variables[-1], collapse = " + ")))
lm_lasso <- lm(formula_lasso, data = Data$Cleaned_Data)
summary(lm_lasso)
```

There are clearly a lot of variables. Most of them are insignificant, and some coefficient directions differ wildly from what we saw in the correlation analysis. This is a fairly poor model that appears to be significantly overfitting.
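To make the overfitting comparison concrete, we can set the two fits side by side, counting model terms and comparing in-sample adjusted R-squared. A quick sketch:

```{r Model size comparison}
# Number of non-intercept terms and adjusted R-squared for each fit
data.frame(
  model   = c("TangledFeatures", "Lasso"),
  n_terms = c(length(coef(lm_TangledFeatures)), length(coef(lm_lasso))) - 1,
  adj_r2  = c(summary(lm_TangledFeatures)$adj.r.squared,
              summary(lm_lasso)$adj.r.squared)
)
```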
# Let's see what Random Forest RFE gives

```{r RFE RandomForest}
control <- caret::rfeControl(functions = rfFuncs,
                             method = "repeatedcv", # repeated cross-validation
                             repeats = 2,           # number of repeats
                             number = 2)            # number of folds

result_rfe <- caret::rfe(x = Data$Cleaned_Data[, -c("sale_price")],
                         y = Data$Cleaned_Data$sale_price,
                         sizes = c(1:35),
                         rfeControl = control)

result_rfe <- predictors(result_rfe)
```

This is about ten times slower than TangledFeatures. Let's see how the selected variables behave in a linear model.

```{r RFE lm}
formula_rfe <- as.formula(paste("sale_price ~", paste(result_rfe, collapse = " + ")))
lm_rfe <- lm(formula_rfe, data = Data$Cleaned_Data)
summary(lm_rfe)
```

Although a little better than the Lasso, we still have many insignificant variables. A negative coefficient on variables such as garage_area also does not make sense.

# Comparisons

If we look at the coefficient behavior across the three models for a few variables, we can see that it is largely the same, with one major exception.

```{r Plotting and Comparing Coefficents}
jtools::plot_summs(lm_TangledFeatures, lm_rfe, lm_lasso,
                   coefs = names(lm_TangledFeatures$coef)[!names(lm_TangledFeatures$coef) %in%
                             c("(Intercept)", "overall_qual", "bsmt_qual_ex")],
                   model.names = c("TangledFeatures", "Random Forest RFE", "Lasso"))
```

We can see the coefficients as well as the confidence intervals for each linear model. Both the Lasso and Random Forest RFE give garage_area a negative coefficient, because it is highly correlated with other variables. TangledFeatures takes these variable interrelationships into account and recovers the expected variable effect. The confidence intervals for the TangledFeatures model are tighter as well.
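As a final check before concluding, the three variable sets can also be compared out of sample. A minimal sketch using five-fold cross-validated RMSE of an ordinary linear model on each set of selected variables (exact numbers will vary with the folds):

```{r Cross validated RMSE}
set.seed(849)
cv_control <- caret::trainControl(method = "cv", number = 5)

# Cross-validated RMSE of a linear model on each set of selected variables
formulas <- list(TangledFeatures = formula_TangledFeatures,
                 RFE             = formula_rfe,
                 Lasso           = formula_lasso)
sapply(formulas, function(f) {
  caret::train(f, data = Data$Cleaned_Data, method = "lm",
               trControl = cv_control)$results$RMSE
})
```

# Conclusion

We have seen the effect that the variables chosen by TangledFeatures have on the outcome compared to a few other simple feature selection techniques. There are many ways to pick features for a linear or logistic model. As of right now, TangledFeatures is the best alternative to traditional dimensionality reduction techniques.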