Tag: prediction

Calibrating algorithmic predictions with logistic regression

I found this interesting blog by Guilherme Duarte Marmerola, where he shows how the predictions of algorithmic models (such as gradient boosted machines, or random forests) can be calibrated by stacking a logistic regression model on top of them: the predicted leaves of the algorithmic model serve as the features / inputs of a subsequent logistic model.

When working with ML models such as GBMs, RFs, SVMs or kNNs (any one that is not a logistic regression) we can observe a pattern that is intriguing: the probabilities that the model outputs do not correspond to the real fraction of positives we see in real life.

— Guilherme, in his blog post

This is visible in the predictions of the light gradient boosted machine (LGBM) Guilherme trained: its predictions range only between ~ 0.45 and ~ 0.55. In contrast, the actual fraction of positive observations in those groups is much lower or higher (ranging from ~ 0.10 to ~0.85).

Motivated by `sklearn`’s documentation on Probability Calibration and the paper Practical Lessons from Predicting Clicks on Ads at Facebook, Guilherme goes on to show how the output probabilities of a tree-based model can be calibrated, while simultaneously improving its accuracy.

I highly recommend you look at Guilherme’s code to see for yourself what’s happening behind the scenes, but basically it’s this:

• Train an algorithmic model (e.g., GBM) using your regular features (data)
• Retrieve the probabilities GBM predicts
• Retrieve the leaves (end-nodes) in which the GBM sorts the observations
• Turn the array of leaves into a matrix of (one-hot-encoded) features, showing for each observation which leaf it ended up in (1) and which it did not (many 0’s)
• Basically, until now, you have used the GBM to reduce the original features to a new, one-hot-encoded matrix of binary features
• Now you can use that matrix of new features as input for a logistic regression model predicting your target (Y) variable
• Apparently, those logistic regression predictions will show a greater spread of probabilities with the same or better accuracy
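The steps above can be sketched with `sklearn`. This is a minimal, hypothetical stand-in for Guilherme’s code: `GradientBoostingClassifier` replaces the LGBM he used, and the data come from `make_classification`; see his repository for the real thing.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 1. Train the tree ensemble on the regular features
gbm = GradientBoostingClassifier(n_estimators=50, random_state=42)
gbm.fit(X_train, y_train)

# 2. Retrieve, for each observation, the leaf it lands in per tree
leaves_train = gbm.apply(X_train).reshape(len(X_train), -1)
leaves_test = gbm.apply(X_test).reshape(len(X_test), -1)

# 3. One-hot encode the leaf indices into a sparse binary matrix
enc = OneHotEncoder(handle_unknown="ignore")
ohe_train = enc.fit_transform(leaves_train)
ohe_test = enc.transform(leaves_test)

# 4. Fit a logistic regression on the encoded leaves
lr = LogisticRegression(max_iter=1000)
lr.fit(ohe_train, y_train)

# Calibrated probabilities for the held-out set
calibrated_probs = lr.predict_proba(ohe_test)[:, 1]
```

Note that the encoder is fit on the training leaves only; `handle_unknown="ignore"` covers test observations that land in leaves never seen during training.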

Here’s a visual depiction from Guilherme’s blog, with the original GBM predictions on the X-axis, and the new logistic predictions on the Y-axis.

As you can see, you retain roughly the same ordering, but the spread of the logistic regression probabilities is much larger.

Now according to Guilherme and the Facebook paper he refers to, the accuracy of the logistic predictions should not be less than those of the original algorithmic method.

Much better. The calibration plot of `lgbm+lr` is much closer to the ideal. Now, when the model tells us that the probability of success is 60%, we can actually be much more confident that this is the true fraction of success! Let us now try this with the ET model.

— Guilherme, in https://gdmarmerola.github.io/probability-calibration/

In his blog, Guilherme shows the same process visually for an Extremely Randomized Trees model, so I highly recommend you read the original article. Also, you can find the complete code on his GitHub.

ROC, AUC, precision, and recall visually explained

A receiver operating characteristic (ROC) curve displays how well a model can classify binary outcomes. An ROC curve is generated by plotting the false positive rate of a model against its true positive rate, for each possible cutoff value. Often, the area under the curve (AUC) is calculated and used as a metric showing how well a model can classify data points.

If you’re interested in learning more about ROC and AUC, I recommend this short Medium blog, which contains this neat graphic:

Dariya Sydykova, graduate student at the Wilke lab at the University of Texas at Austin, shared some great visual animations of how model accuracy and model cutoffs alter the ROC curve and the AUC metric. The quotes and animations are from the associated GitHub repository.

ROC & AUC

The plot on the left shows the distributions of predictors for the two outcomes, and the plot on the right shows the ROC curve for these distributions. The vertical line that travels left-to-right is the cutoff value. The red dot that travels along the ROC curve corresponds to the false positive rate and the true positive rate for the cutoff value given in the plot on the left.

The traveling cutoff demonstrates the trade-off between trying to classify one outcome correctly and trying to classify the other outcome correctly. When we try to increase the true positive rate, we also increase the false positive rate. When we try to decrease the false positive rate, we decrease the true positive rate.

The shape of an ROC curve changes when a model changes the way it classifies the two outcomes.

The animation [below] starts with a model that cannot tell one outcome from the other, and the two distributions completely overlap (essentially a random classifier). As the two distributions separate, the ROC curve approaches the top-left corner, and the AUC value of the curve increases. When the model can perfectly separate the two outcomes, the ROC curve forms a right angle and the AUC becomes 1.

Precision-Recall

Two other metrics that are often used to quantify model performance are precision and recall.

Precision (also called positive predictive value) is defined as the number of true positives divided by the total number of positive predictions. Hence, precision quantifies what percentage of the positive predictions were correct: How correct your model’s positive predictions were.

Recall (also called sensitivity) is defined as the number of true positives divided by the total number of true positives and false negatives (i.e. all actual positives). Hence, recall quantifies what percentage of the actual positives you were able to identify: How sensitive your model was in identifying positives.
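Both definitions are one-liners in `sklearn`; a small sketch with hypothetical labels and predictions:

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical true labels and hard (0/1) predictions
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Precision = TP / (TP + FP): how correct the positive predictions were
precision = precision_score(y_true, y_pred)  # 3 TP, 1 FP → 0.75

# Recall = TP / (TP + FN): how many actual positives were identified
recall = recall_score(y_true, y_pred)        # 3 TP, 1 FN → 0.75
```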

Dariya also made some visualizations of precision-recall curves:

Precision-recall curves also display how well a model can classify binary outcomes, but they do so differently from ROC curves. A precision-recall curve plots the true positive rate (recall or sensitivity) against the positive predictive value (precision).

In the middle, here below, the ROC curve with AUC. On the right, the associated precision-recall curve.

Similarly to the ROC curve, when the two outcomes separate, precision-recall curves will approach the top-right corner. Typically, a model that produces a precision-recall curve that is closer to the top-right corner is better than a model that produces a precision-recall curve that is skewed towards the bottom of the plot.

Class imbalance

Class imbalance happens when the number of observations in one class differs from the number in another class. For example, one of the distributions has 1000 observations and the other has 10. An ROC curve tends to be more robust to class imbalance than a precision-recall curve.

In this animation [below], both distributions start with 1000 outcomes. The blue one is then reduced to 50. The precision-recall curve changes shape more drastically than the ROC curve, and the AUC value mostly stays the same. We also observe this behaviour when the other distribution is reduced to 50.
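The effect the animations show can be reproduced numerically: with the same underlying score distributions, shrinking one class barely moves the ROC AUC but drags average precision (a summary of the precision-recall curve) down. A sketch with synthetic normal score distributions (all numbers illustrative):

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)

# Scores from two overlapping distributions
neg = rng.normal(0.0, 1.0, 1000)  # negatives
pos = rng.normal(1.5, 1.0, 1000)  # positives

# Balanced case: 1000 vs 1000
y_bal = np.r_[np.zeros(1000), np.ones(1000)]
s_bal = np.r_[neg, pos]

# Imbalanced case: the positive class shrunk to 50 observations
y_imb = np.r_[np.zeros(1000), np.ones(50)]
s_imb = np.r_[neg, pos[:50]]

# ROC AUC is nearly unchanged; average precision drops sharply
auc_bal, auc_imb = roc_auc_score(y_bal, s_bal), roc_auc_score(y_imb, s_imb)
ap_bal, ap_imb = (average_precision_score(y_bal, s_bal),
                  average_precision_score(y_imb, s_imb))
```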

Here’s the same, but now with the red distribution shrinking to just 50 samples.

Dariya invites you to use these visualizations for educational purposes:

Please feel free to use the animations and scripts in this repository for teaching or learning. You can directly download the gif files for any of the animations, or you can recreate them using these scripts. Each script is named according to the animation it generates (i.e. `animate_ROC.r` generates `ROC.gif`, `animate_SD.r` generates `SD.gif`, etc.).

Privacy, Compliance, and Ethical Issues with Predictive People Analytics

November 9th 2018, I defended my dissertation on data-driven human resource management, which you can read and download via this link. On page 149, I discuss several of the issues we face when implementing machine learning and analytics within an HRM context. For the references and more detailed background information, please consult the full dissertation. More interesting reads on ethics in machine learning can be found here.

Privacy, Compliance, and Ethical Issues

Privacy can be defined as “a natural right of free choice concerning interaction and communication […] fundamentally linked to the individual’s sense of self, disclosure of self to others and his or her right to exert some level of control over that process” (Simms, 1994, p. 316). People analytics may introduce privacy issues in many ways, including the data that is processed, the control employees have over their data, and the free choice experienced in the work place. In this context, ethics would refer to what is good and bad practice from a standpoint of moral duty and obligation when organizations collect, analyze, and act upon HRM data. The next section discusses people analytics specifically in light of data privacy, legal boundaries, biases, and corporate social responsibility and free choice.

Data Privacy

Technological advancements continue to change organizational capabilities to collect, store, and analyze workforce data and this forces us to rethink the concept of privacy (Angrave et al., 2016; Bassi, 2011; Martin & Freeman, 2003). For the HRM function, data privacy used to involve questions such as “At what team size can we use the average engagement score without causing privacy infringements?” or “How long do we retain exit interview data?” In contrast, considerably more detailed information on employees’ behaviors and cognitions can be processed on an almost continuous basis these days. For instance, via people analytics, data collected with active monitoring systems help organizations to improve the accuracy of their performance measurement, increasing productivity and reducing operating costs (Holt, Lang, & Sutton, 2016). However, such systems seem in conflict with employees’ right to solitude and their freedom from being watched or listened to as they work (Martin & Freeman, 2003) and are perceived as unethical and unpleasant, affecting employees’ health and morale (Ball, 2010; Faletta, 2014; Holt et al., 2016; Martin & Freeman, 2003; Sánchez Abril, Levin, & Del Riego, 2012). Does the business value such monitoring systems bring justify their implementation? One could question whether business value remains when a more long-term and balanced perspective is taken, considering the implications for employee attraction, well-being, and retention. These can be difficult considerations, requiring elaborate research and piloting.

Faletta (2014) asked American HRM professionals which of 21 data sources would be appropriate for use in people analytics. While some were considered appropriate from an ethical perspective (e.g., performance ratings, demographic data, 360-degree feedback), particularly novel data sources were considered problematic: data of e-mail and video surveillance, performance and behavioral monitoring, and social media profiles and messages. At first thought, these seem extreme, overly intrusive data that are not and will not be used for decision-making. However, in reality, several organizations already collect such data (e.g., Hoffmann, Hartman, & Rowe, 2003; Roth et al., 2016) and they probably hold high predictive value for relevant business outcomes. Hence, it is not inconceivable that future organizations will find ways to use these data for personnel-related decisions – legally or illegally. Should they be allowed to? If not, who is going to monitor them? What if the data are used for mutually beneficial goals – to prevent problems or accidents? These and other questions deserve more detailed discussion by scholars, practitioners, and governments – preferably together.

Legal Boundaries

Although HRM professionals should always ensure that they operate within the boundaries of the law, legal compliance does not seem sufficient when it comes to people analytics. Frequently, legal systems are unprepared to defend employees’ privacy against the potential invasions via the increasingly rigorous data collection systems (Boudreau, 2014; Ciocchetti, 2011; Sánchez Abril et al., 2012). Initiatives such as the General Data Protection Regulation in the European Union somewhat restore the power balance, holding organizations and their HRM departments accountable to inform employees what, why, and how personal data is processed and stored. The rights to access, correct, and erase their information is returned to employees (GDPR, 2016). However, such regulation may not always exist and, even if it does, data usage may be unethical, regardless of its legality.

For instance, should organizations use all personnel data for which they have employee consent? One could argue that there are cases where the power imbalance between employers and employees negates the validity of consent. For instance, employees may be asked to sign written elaborate declarations or complex agreements as part of their employment, without being fully aware of what they consent to. Moreover, employees may feel pressured to provide consent in fear of losing their job, losing face, or peer pressure. Relatedly, employees may be incentivized to provide consent because of the perks associated with doing so, without fully comprehending the consequences. For instance, employees may share access to personal behavioral data in exchange for mobile devices, wellness, or mobility benefits, in which case these direct benefits may bias their perception and judgement. In such cases, data usage may not be ethically responsible, regardless of the legal boundaries, and HRM departments in general and people analytics specialists in specific should take the responsibility to champion the privacy and the interests of their employees.

Automating Historic Biases

While ethics can be considered an important factor in any data analytics project, it is particularly so in people analytics projects. HRM decisions have profound implications in an imbalanced relationship, whereas the data within the HRM field often suffer from inherent biases. This becomes particularly clear when exploring applications of predictive analytics in the HRM domain.

For example, imagine that we want to implement a decision-support system to improve the efficiency of our organization’s selection process. A primary goal of such a system could be to minimize the human time (both of our organizational agents and of the potential candidates) wasted on obvious mismatches between candidates and job positions. Under the hood, a decision-support system in a selection setting could estimate a likelihood (i.e., prediction) for each candidate that he/she makes it through the selection process successfully. Recruiters would then only have to interview the candidates that are most likely to be successful, and save valuable time for both themselves and for less probable candidates. In this way, an artificially intelligent system that reviews candidate information and recommends top candidates could considerably decrease the human workload and thereby the total cost of the selection process.

For legal compliance as well as ethical considerations, we would not want such a decision-support system to be biased towards any majority or minority group. Should we therefore exclude demographic and socio-economic factors from our predictive model? What about the academic achievements of candidates, the university they attended, or their performance on our selection tests? Some of those are scientifically validated predictors of future job performance (e.g., Hunter & Schmidt, 1998). However, they also relate to demographic and socio-economic factors and would therefore introduce bias (e.g., Hough, Oswald, & Ployhart, 2001; Pyburn, Ployhart, & Kravitz, 2008; Roth & Bobko, 2000). Do we include or exclude these selection data in our model?

Maybe the simplest solution would be to include all information, to normalize our system’s predictions within groups afterwards (e.g., gender), and to invite the top candidates per group for follow-up interviews. However, which groups do we consider? Do we only normalize for gender and nationality, or also for age and social class? What about combinations of these characteristics? Moreover, if we normalize across all groups and invite the best candidate within each, we might end up conducting more interviews than in the original scenario. Should we thus account for the proportional representation of each of these groups in the whole labor population? As you notice, both the decision-support system and the subject get complicated quickly.
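Mechanically, the “normalize within groups, invite the top per group” idea is a simple group-wise ranking. A toy sketch with entirely hypothetical candidates, groups, and scores (this illustrates the mechanics only, not a recommendation):

```python
import pandas as pd

# Hypothetical candidate scores from a decision-support model
df = pd.DataFrame({
    "candidate": ["A", "B", "C", "D", "E", "F"],
    "group":     ["x", "x", "x", "y", "y", "y"],
    "score":     [0.9, 0.7, 0.5, 0.6, 0.4, 0.3],
})

# Rank candidates within each group rather than across the full pool
df["group_rank"] = df.groupby("group")["score"].rank(ascending=False)

# Invite the top candidate per group for a follow-up interview
top_per_group = df[df["group_rank"] == 1]
```

As the text notes, the hard questions (which groups, which combinations, how many invitations) are not solved by the code; this only makes the trade-off concrete.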

Even more problematic is that any predictive decision-support system in HRM is likely biased from the moment of conception. HRM data is frequently infested with human biases as bias was present in the historic processes that generated the data. For instance, the recruiters in our example may have historically favored candidates with a certain profile, for instance, red hair. After training our decision-support system (i.e., predictive model) on these historic data, it will recognize and copy the pattern that candidates with red hair (or with correlated features, such as a Northwest European nationality) are more likely successful. The system thus learns to recommend those individuals as the top candidates. While this issue could be prevented by training the model on more objective operationalization of candidate success, most HRM data will include its own specific biases. For example, data on performance ratings will include not only the historic preferences of recruiters (i.e., only hired employees received ratings), but also the biases of supervisors and other assessors in the performance evaluation processes. Similar and other biases may occur in data regarding promotions, training courses, talent assessments, or compensation. If we use these data to train our models and systems, we would effectively automate our historic biases. Such issues greatly hinder the implementation of (predictive) people analytics without causing compliance and ethical issues.

Corporate Social Responsibility versus Free Choice

Corporate social responsibility also needs to be discussed in light of people analytics. People analytics could allow HRM departments to work on social responsibility agendas in many ways. For instance, people analytics can help to demonstrate what causes or prevents (un)ethical behavior among employees, to what extent HRM policies and practices are biased, to what extent they affect work-life balance, or how employees can be stimulated to make decisions that benefit their health and well-being. Regarding the latter case, a great practical example comes from Google’s people analytics team. They uncovered that employees could be stimulated to eat more healthy snacks by color-coding snack containers, and that smaller cafeteria plate sizes could prevent overconsumption and food loss (ABC News, 2013). However, one faces difficult ethical dilemmas in this situation. Is it organizations’ responsibility to nudge employees towards good behavior? Who determines what good entails? Should employees be made aware of these nudges? What do we consider an acceptable tradeoff between free choice and societal benefits?

When we consider the potential of predictive analytics in this light, the discussion gets even more complicated. For instance, imagine that organizations could predict work accidents based on historic HRM information: should they be forbidden, allowed, or required to do so? What about health issues, such as stress and burnout? What would be an acceptable accuracy for such models? How do we feel about false positives and false negatives? Could they use individual-level information if that resulted in benefits for employees?

In conclusion, analytics in the HRM domain quickly encounters issues related to privacy, compliance, and ethics. In bringing (predictive) analytics into the HRM domain, we should be careful not to copy and automate the historic biases present in HRM processes and data. The imbalance in the employment relationship puts the responsibility in the hands of organizational agents. The general message is that what can be done with people analytics may differ from what should be done from a corporate social responsibility perspective. The spread of people analytics depends on our collective ability to harness its power ethically and responsibly, to go beyond the legal requirements and champion both the privacy and the interests of employees and the wider society. A balanced approach to people analytics – with benefits beyond financial gain for the organization – will be needed to make people analytics accepted by society, and not just another management tool.

PyData, London 2018

PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The communities approach data science using many languages, including (but not limited to) Python, Julia, and R.

In April 2018, a PyData conference was held in London, with three days of super interesting sessions and hackathons. While I couldn’t attend in person, I very much enjoyed reviewing the sessions at home, as all of them are shared open access on the PyData YouTube channel!

In the following section, I will outline some of my favorites as I progress through the channel:

Winning with simple, even linear, models:

One talk that really resonated with me is Vincent Warmerdam’s talk “Winning with Simple, even Linear, Models”. Working at GoDataDriven, a data science consultancy firm in the Netherlands, Vincent is quite familiar with deploying deep learning models, but he is also mildly annoyed by all the hype surrounding deep learning and neural networks, particularly when less complex models perform equally well or only slightly worse. One of his quotes nicely sums it up:

“Tensorflow is a cool tool, but it’s even cooler when you don’t need it!”

— Vincent Warmerdam, PyData 2018

In only 40 minutes, Vincent shows the finesse of much simpler (linear) models in all kinds of production settings. Among other things, Vincent shows:

• how to solve the XOR problem with linear models
• how to win at timeseries with radial basis features
• how to use weighted regression to deal with historical overfitting
• how deep learning models introduce a new theme of horror in production
• how to create streaming models using passive aggressive updating
• how to build a real-time video game ranking system using mere histograms
• how to create a well performing recommender with two SQL tables
• how to rock at data science and machine learning using Python, R, and even Stan
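The XOR point from the first bullet is easy to demonstrate: XOR is famously not linearly separable in the raw inputs, but a single hand-crafted interaction feature makes a plain logistic regression solve it. This sketch is my own illustration of that idea, not Vincent’s code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# XOR: no line through (x1, x2) separates the two classes...
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# ...but adding the interaction term x1 * x2 as a third feature does
X_ext = np.column_stack([X, X[:, 0] * X[:, 1]])

# Weak regularization so the (now separable) data is fit cleanly
clf = LogisticRegression(C=1e6, max_iter=10000).fit(X_ext, y)
preds = clf.predict(X_ext)  # → [0, 1, 1, 0]
```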

Predicting Employee Turnover at SIOP 2018

The 2018 annual Society for Industrial and Organizational Psychology (SIOP) conference featured its first-ever machine learning competition. Teams competed for several months in predicting employee turnover (or churn) at a large US company. A more complete introduction, as presented at the conference, can be found here. All submissions had to be open source, and the winning submissions have been posted in this GitHub repository. The winning teams consisted of analysts working at Walmart, DDI, and HumRRO. They mostly built ensemble models, in Python and/or R, combining algorithms such as (light) gradient boosted trees, neural networks, and random forests.

R resources (free courses, books, tutorials, & cheat sheets)

Help yourself to these free books, tutorials, packages, cheat sheets, and many more materials for R programming. There’s a separate overview for handy R programming tricks. If you have additions, please comment below or contact me!


LAST UPDATED: 2020-02-16

Completely new to R? → Start learning here!

Cheat Sheets

Many of the above cheat sheets are hosted in the official RStudio cheat sheet overview.

Data Visualization

Interactive / HTML / JavaScript widgets

• R HTML Widgets Gallery***
• `plotly` – interactive plots
• `billboarder` – easy interface to billboard.js, a JavaScript chart library based on D3
• `d3heatmap` – interactive D3 heatmaps
• `altair` – Vega-Lite visualizations via Python
• `DT` – interactive tables
• `DiagrammeR` – interactive diagrams (DiagrammeR cheat sheet)
• `dygraphs` – interactive time series plots
• `formattable` – formattable data structures
• `ggvis` – interactive ggplot2
• `highcharter` – interactive Highcharts plots
• `leaflet` – interactive maps
• `metricsgraphics` – interactive JavaScript bare-bones line, scatterplot and bar charts
• `networkD3` – interactive D3 network graphs
• `scatterD3` – interactive scatterplots with D3
• `rbokeh` – interactive Bokeh plots
• `rCharts` – interactive JavaScript charts
• `rcdimple` – interactive JavaScript bar charts and others
• `rglwidget` – interactive 3d plots
• `threejs` – interactive 3d plots and globes
• `visNetwork` – interactive network graphs
• `wordcloud2` – interface to wordcloud2.js.
• `timevis` – interactive timelines

ggplot2

ggplot2 extensions

• ggplot2 extensions overview***
• `ggthemes` – plot style themes
• `hrbrthemes` – opinionated, typographic-centric themes
• `ggmap` – maps with Google Maps, Open Street Maps, etc.
• `ggiraph` – interactive ggplots
• `gghighlight` – highlight lines or values, see vignette
• `ggstance` – horizontal versions of common plots
• `GGally` – scatterplot matrices
• `ggalt` – additional coordinate systems, geoms, etc.
• `ggbeeswarm` – column scatter plots or violin scatter plots
• `ggforce` – additional geoms, see visual guide
• `ggrepel` – prevent plot labels from overlapping
• `ggraph` – graphs, networks, trees and more
• `geomnet` – network visualization
• `ggExtra` – marginal histograms for a plot
• `gganimate` – animations, see also the gganimate wiki page
• `ggpage` – page-styled visualizations of text-based data
• `ggpmisc` – useful additional `geom_*` and `stat_*` functions
• `ggstatsplot` – include details from statistical tests in plots
• `ggspectra` – tools for plotting light spectra
• `ggnetwork` – geoms to plot networks
• `ggpointdensity` – a cross between a scatter plot and a 2D density plot
• `ggradar` – radar charts
• `ggsurvplot (survminer)` – survival curves
• `ggseas` – seasonal adjustment tools
• `ggthreed` – (evil) 3D geoms
• `ggtech` – style themes for plots
• `ggtern` – ternary diagrams
• `ggTimeSeries` – time series visualizations
• `ggtree` – tree visualizations
• `treemapify` – wilkox’s treemaps
• `seewave` – spectrograms

Miscellaneous

• `coefplot` – visualizes model statistics
• `circlize` – circular visualizations for categorical data
• `clustree` – visualize clustering analysis
• `quantmod` – candlestick financial charts
• `dabestr` – Data Analysis using Bootstrap-Coupled ESTimation
• `devoutsvg` – an SVG graphics device (with pattern fills)
• `devoutpdf` – a PDF graphics device
• `cartography` – create and integrate maps in your R workflow
• `colorspace` – HSL based color palettes
• `viridis` – Matplotlib viridis color palette for R
• `munsell` – Munsell color palettes for R
• `Cairo` – high-quality display output
• `igraph` – Network Analysis and Visualization
• `graphlayouts` – new layout algorithms for network visualization
• `lattice` – Trellis graphics
• `tmap` – thematic maps
• `trelliscopejs` – interactive alternative for `facet_wrap`
• `rgl` – interactive 3D plots
• `corrplot` – graphical display of a correlation matrix
• `googleVis` – Google Charts API
• `plotROC` – interactive ROC plots
• `extrafont` – fonts in R graphics
• `rvg` – produces Vector Graphics that allow further editing in PowerPoint or Excel
• `showtext` – text using system fonts
• `animation` – animated graphics using ImageMagick.
• `misc3d` – 3d plots, isosurfaces, etc.
• `xkcd` – xkcd style graphics
• `imager` – CImg library to work with images
• `ungeviz` – tools for visualizing uncertainty
• `waffle` – square pie charts a.k.a. waffle charts
• Creating spectrograms in R with `hht`, `warbleR`, `soundgen`, `signal`, `seewave`, or `phonTools`

Markdown & Other Output Formats

• `tidystats` – automating updating of model statistics
• `papaja` – preparing APA journal articles
• `blogdown` – build websites with Markdown & Hugo
• `huxtable` – create Excel, html, & LaTeX tables
• `xaringan` – make slideshows via remark.js and markdown
• `summarytools` – produces neat, quick data summary tables
• `citr` – RStudio Addin to Insert Markdown Citations

Statistical Modeling & Machine Learning

Miscellaneous

• `corrr` – easier correlation matrix management and exploration

Integrated Development Environments (IDEs) & Graphical User Interfaces (GUIs)

Descriptions mostly taken from their own websites:

• RStudio*** – Open source and enterprise ready professional software
• Jupyter Notebook*** – open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text across dozens of programming languages.
• Microsoft R tools for Visual Studio – turn Visual Studio into a powerful R IDE
• R Plugins for Vim, Emacs, and Atom editors
• Rattle*** – GUI for data mining
• esquisse – RStudio add-in to interactively explore and visualize data
• R Analytic Flow – data flow diagram-based IDE
• RKWard – easy to use and easily extensible IDE and GUI
• Eclipse StatET – Eclipse-based IDE
• OpenAnalytics Architect – Eclipse-based IDE
• Tinn-R – open source GUI and IDE
• DisplayR – cloud-based GUI
• BlueSkyStatistics – GUI designed to look like SPSS and SAS
• Deducer – GUI for everyone
• R commander (Rcmdr) – easy and intuitive GUI
• JGR – Java-based GUI for R
• jamovi & `jmv` – free and open statistical software to bridge the gap between researcher and statistician
• Exploratory.io – cloud-based data science focused GUI
• Stagraph – GUI for ggplot2 that allows you to visualize and connect to databases and/or basic file types
• ggraptr – GUI for visualization (Rapid And Pretty Things in R)
• ML Studio – interactive Shiny platform for data visualization, statistical modeling and machine learning

R & other software and languages

R & SQL


Light GBM vs. XGBOOST in Python & R

XGBOOST stands for eXtreme Gradient Boosting. A big brother of the earlier AdaBoost, XGB is a supervised learning algorithm that uses an ensemble of gradient boosted decision trees. For those unfamiliar with boosting algorithms, here’s a 2-minute explanation video and a written tutorial. Although XGBOOST often performs well in predictive tasks, the training process can be quite time-consuming, as with other ensemble methods (e.g., random forests).

In a recent blog, Analytics Vidhya compares the inner workings as well as the predictive accuracy of the XGBOOST algorithm to an upcoming boosting algorithm: Light GBM. The blog demonstrates a stepwise implementation of both algorithms in Python. The table below reflects the main conclusion of the comparison: Although the algorithms are comparable in terms of their predictive performance, light GBM is much faster to train. With continuously increasing data volumes, light GBM, therefore, seems the way forward.

Laurae also benchmarked LightGBM against xgboost on a Bosch dataset, and the results show that, on average, LightGBM (binning) is between 11x and 15x faster than xgboost (without binning):

However, the differences get smaller as more threads are used, due to thread inefficiencies (idle time increases because threads are not scheduled their next task fast enough).

Light GBM is also available in R:

`devtools::install_github("Microsoft/LightGBM", subdir = "R-package")`

Neil Schneider tested the three algorithms for gradient boosting in R (`GBM`, `xgboost`, and `lightGBM`) and sums up their (dis)advantages:

• `GBM` has no specific advantages; its disadvantages include no early stopping, slower training, and decreased accuracy.
• `xgboost` has proven successful on Kaggle, and though traditionally slower than `lightGBM`, `tree_method = 'hist'` (histogram binning) provides a significant speed improvement.
• `lightGBM` has the advantages of training efficiency, low memory usage, high accuracy, parallel learning, corporate support, and scalability. However, its newness is its main disadvantage, because there is little community support.