Python resources (free courses, books, & cheat sheets)

Find more comprehensive Python repositories:
Vinta’s awesome Python GitHub repository, the easy Python docs, the Python Wiki Beginners Guide, or CourseDuck’s overview of free Python courses!

My list of Python resources is still quite short, so if you have additions, please comment below or contact me! There are separate overviews for Data Science, Machine Learning, & Statistics resources in general, and for R resources and SQL resources in particular.

LAST UPDATED: 11-11-2018

Cheat sheets:

Courses:

Books:

Data Science, Machine Learning, & Statistics resources (free courses, books, tutorials, & cheat sheets)

Welcome to my repository of data science, machine learning, and statistics resources. Software-specific material has largely been listed under the respective overviews: R Resources & Python Resources. I also host a list of SQL Resources and datasets to practice programming. If you have any additions, please comment or contact me!

LAST UPDATED: 21-05-2018

Courses:

Video:

Books:

Sentiment Lexicons:

Cheatsheets:

Other:

Must read: Computer Age Statistical Inference (Efron & Hastie, 2016)

Statistics, and statistical inference in particular, play an ever greater part in our daily lives. Models estimate anything from (future) consumer behaviour to optimal steering behaviour, and we need these models to be as accurate as possible. Trevor Hastie has contributed greatly to the development of the field, and I highly recommend the machine learning books and courses he developed together with Robert Tibshirani. You may find these in my list of R Resources (Cheatsheets, Tutorials, & Books).

Today I wanted to share another book Hastie wrote, together with Bradley Efron, another colleague of his at Stanford University. It is called Computer Age Statistical Inference (Efron & Hastie, 2016) and is a definite must-read for every aspiring data scientist, because it illustrates most algorithms commonly used in modern-day statistical inference. Many of these algorithms were developed by Hastie and his colleagues at Stanford themselves, and the book covers, among others:

  • Regression:
    • Logistic regression
    • Poisson regression
    • Ridge regression
    • Jackknife regression
    • Least angle regression
    • Lasso regression
    • Regression trees
  • Bootstrapping
  • Boosting
  • Cross-validation
  • Random forests
  • Survival analysis
  • Support vector machines
  • Kernel smoothing
  • Neural networks
  • Deep learning
  • Bayesian statistics

R learning: Neural Networks

Artificial neural networks (ANNs) are computing systems inspired by the human brain. They can teach themselves to perform tasks simply by considering examples of those tasks’ outcomes. For example, they can learn to identify images that contain cats by analyzing example images that have been tagged “cat” or “no cat”. Given enough examples, the neural network can then autonomously determine whether “untagged” images include cats or not (Wikipedia). If you want to learn more and have 20 minutes to spare, I can recommend this YouTube video by Brandon Rohrer.

Neural networks are commonly used for machine learning problems where a vast amount of (complex) data is available. Some toy examples include fingerprint recognition, language translation, car steering behaviour, object detection, text generation, and doodle recognition (by Google). Chances are pretty high that any system that makes complex recommendations these days (e.g., “Is this John in the picture?”, “Did you mean ‘South End Taco’s’ instead of ‘Sout En dTacos’?”) has a neural net running in the background.

http://www.r-exercises.com designs tutorials for beginning programmers in R. On their website, they host a learning series on neural networks, consisting of three sets of exercises: Part 1, Part 2, and Part 3. Afterwards, you can check your performance against the solutions: Solutions 1, Solutions 2, and Solutions 3.

Keep on learning!

P.S. Afterwards, you might want to check out this package and API for deep learning in R and Python.

Keras: Deep Learning in R or Python within 30 seconds

Keras is a high-level neural networks API that was developed to enable fast experimentation with deep learning in both Python and R. According to its creator François Chollet: “Being able to go from idea to result with the least possible delay is key to doing good research. The ideas behind deep learning are simple, so why should their implementation be painful?”

Keras comes with the following key features:

  • Allows the same code to run on CPU or on GPU, seamlessly.
  • User-friendly API which makes it easy to quickly prototype deep learning models.
  • Built-in support for convolutional networks (for computer vision), recurrent networks (for sequence processing), and any combination of both.
  • Supports arbitrary network architectures: multi-input or multi-output models, layer sharing, model sharing, etc. This means that Keras is appropriate for building essentially any deep learning model, from a memory network to a neural Turing machine.
  • Fast implementation of dense neural networks, convolutional neural networks (CNNs), and recurrent neural networks (RNNs) in R or Python, on top of TensorFlow or Theano.

R

R: Installation

The R interface to Keras uses TensorFlow™ as its underlying computation engine. First, you have to install the keras R package from GitHub:

devtools::install_github("rstudio/keras")

Using the install_tensorflow() function you can then install TensorFlow:

library(keras)
install_tensorflow()

This will provide you with a default installation of TensorFlow suitable for use with the keras R package. See the article on TensorFlow installation to learn about more advanced options, including installing a version of TensorFlow that takes advantage of Nvidia GPUs if you have the correct CUDA libraries installed.
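
For instance, if your machine has an Nvidia GPU with the CUDA libraries set up, a GPU-enabled build can be requested through the version argument (a minimal sketch; check the installation article for the exact options supported by your package version):

# assumes CUDA and cuDNN are already installed on your system
install_tensorflow(version = "gpu")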

R: Getting started in 30 seconds

Keras uses models to organize layers. Sequential models are the simplest structure: a linear stack of layers. More complex architectures require the Keras functional API, which allows you to build arbitrary graphs of layers (a short sketch of this API follows after the sequential example below).

Here is an example of a sequential model (hosted on this website):

library(keras)

model <- keras_model_sequential()

model %>% 
  layer_dense(units = 64, input_shape = 100) %>% 
  layer_activation(activation = 'relu') %>% 
  layer_dense(units = 10) %>% 
  layer_activation(activation = 'softmax')

model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_sgd(lr = 0.02),
  metrics = c('accuracy')
)

The above demonstrates how little effort is needed to define a model. Now, you can iteratively train your model on batches of training data:

model %>% fit(x_train, y_train, epochs = 5, batch_size = 32)
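
The call above assumes that x_train and y_train already exist. Purely as a sketch, you could generate random dummy data matching the input_shape = 100 and the 10 output units defined earlier:

# hypothetical dummy data, just to make the example self-contained
x_train <- matrix(runif(1000 * 100), nrow = 1000, ncol = 100)
y_train <- to_categorical(sample(0:9, 1000, replace = TRUE), num_classes = 10)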

Next, performance evaluation can be performed with a single line of code:

loss_and_metrics <- model %>% evaluate(x_test, y_test, batch_size = 128)

Similarly, generating predictions on new data is easily done:

classes <- model %>% predict(x_test, batch_size = 128)
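
Since predict returns a matrix of class probabilities (one row per observation, one column per class), a quick base-R sketch to recover the most likely class labels in the 10-class setup above would be:

# index of the highest probability per row, shifted to 0-based class labels
predicted_labels <- apply(classes, 1, which.max) - 1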

Building more complex models, for example, to answer questions or classify images, is just as fast.
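
As an illustration of the functional API mentioned earlier, here is a minimal sketch that rebuilds the same stack of layers as an explicit graph, with input and output tensors defined separately:

# define the graph explicitly: input tensor -> layers -> output tensor
inputs <- layer_input(shape = 100)
outputs <- inputs %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dense(units = 10, activation = 'softmax')
model <- keras_model(inputs = inputs, outputs = outputs)

The resulting model is then compiled, trained, and evaluated exactly like the sequential one above.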

Python

A step-by-step implementation of several neural network architectures with Keras in Python can be found on DataCamp. Similarly, one may use this quick cheat sheet to set up the most basic models.

Additional resources: