
Computerphile on Cyber Security

Computerphile is a YouTube sister channel of Numberphile. Where Numberphile’s videos are about the magic behind math and numbers, Computerphile’s videos are all about computers and how they work. I recommend both channels in general, and have watched many of their videos already.

Yet, over the past few weeks I have specifically enjoyed what seem to be several series of videos on Cyber Security-related topics.

What makes a good password?

One series is all about passwords.

Which passwords are strong, and which are weak? How can hackers crack yours? And how do websites secure user passwords?
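That last question, how websites should store passwords, largely comes down to salted, slow hashing. As a minimal sketch of the idea (my own illustration using Python's standard library, not code from the videos):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Store a random salt plus a slow, salted hash -- never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-hash the attempt with the stored salt and compare against the stored digest."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000) == digest

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein", salt, digest))                       # False
```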

The videos below are roughly in the right order and make for an interesting insight into the world of password management. They give you advice on how to pick your password, and even point to a handy tool to check whether your password has ever been leaked.

You will probably want to change your password afterwards!
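The leak-checking tool is, presumably, Troy Hunt's Have I Been Pwned. Assuming its public Pwned Passwords range API, a rough sketch of such a check could look like this; note that only the first five characters of the password's SHA-1 hash ever leave your machine:

```python
import hashlib
import urllib.request

def times_pwned(password):
    """Query the Pwned Passwords range API using k-anonymity:
    only the first 5 hex characters of the SHA-1 hash are sent."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as response:
        body = response.read().decode("utf-8")
    # The response lists "<hash suffix>:<times seen in known breaches>" per line
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("password123"))  # a huge count -- change it if this is yours!
```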

Hacking and attacking

If you are up to no good, please do not watch this second series, which revolves around hacks and attacks on computers.

How do people get access to a website’s database? How can we prevent it? How can we recognize security dangers?

You might know of SQL injections, but do you know what a Slowloris attack is? Or how ransomware works? Or what exploitX is?
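For the SQL injection part, the gist is that user input must never be glued into a query as raw SQL. A toy illustration of my own (using Python's built-in sqlite3, not anything from the videos):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret'), ('bob', 'hunter2')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload becomes part of the SQL and matches every row
query = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())  # [('alice',), ('bob',)]

# Safe: a parameterized query treats the payload as plain data, not as SQL
print(conn.execute("SELECT name FROM users WHERE name = ?", (user_input,)).fetchall())  # []
```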

These videos nicely continue the line of a previous post on Try Hack Me’s Cyber Security Challenges, where you can learn how computers work and where their vulnerabilities lie.

Video: Human-Computer Interactions in Reinforcement Learning

Reinforcement learning is an area of machine learning inspired by behavioral psychology, concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward (Wikipedia, 2017). Normally, reinforcement learning occurs autonomously: algorithms seek to minimize or maximize a score that is computed from predefined constraints. By repeatedly experimenting and assessing strategies, they learn to perform the most effective actions, namely those that optimize that score.
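To make that standard loop concrete, here is a deliberately tiny caricature of it in Python (my own sketch, not from the video): experiment, score against a predefined reward, and keep whichever strategy scores best.

```python
import random

def predefined_score(actions):
    """A hand-crafted reward, fixed up front by the programmer:
    the closer the summed actions are to a target of 10, the better."""
    return -abs(sum(actions) - 10)

def train(n_iterations=1000, sequence_length=5):
    best_actions, best_score = None, float("-inf")
    for _ in range(n_iterations):
        # Experiment: try a random sequence of actions
        actions = [random.choice([-1, 0, 1, 2, 3]) for _ in range(sequence_length)]
        # Assess: evaluate the sequence with the predefined reward
        score = predefined_score(actions)
        # Keep the most effective strategy found so far
        if score > best_score:
            best_actions, best_score = actions, score
    return best_actions, best_score

print(train())
```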

The approach in the video below is radically different. Instead of a predefined scoring function, human-computer interaction is used to assign each action sequence (each iteration/experiment) a score. This approach is particularly useful for complex behaviors, such as a back-flip, for which it is hard to pre-define the constraints and actions that lead to the “most effective” back-flip. For us humans, however, it is relatively easy to recognize a good back-flip when we see one. The video below shows how the researchers therefore integrated human-computer interaction into their reinforcement learning algorithm: after observing the algorithm perform a sequence of actions, a human actor indicates to what extent the goal (i.e., a back-flip) is achieved. This human assessment then functions as the score that the algorithm tries to optimize.
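In the paper itself, the human feedback consists of preference judgments between pairs of short clips, which are then used to fit a reward model that stands in for a predefined score like the one sketched above. A hugely simplified illustration of collecting such a judgment (again my own sketch, not the authors' code):

```python
def human_preference(actions_a, actions_b):
    """Show the observer two attempts and ask which looks more like the goal
    (e.g. a back-flip); the collected preferences train a learned reward model."""
    print("Attempt A:", actions_a)
    print("Attempt B:", actions_b)
    return input("Which attempt looked better, A or B? ").strip().upper() == "A"

# Such judgments replace the hand-crafted predefined_score() in the loop above:
# the agent is then trained against the reward model fitted to these preferences.
```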

This approach can be really valuable for organizations seeking to improve their machine learning applications. The paper on the principle (Deep Reinforcement Learning from Human Preferences) can be found here. The scholars conclude that this supervised approach based on human preferences achieves very good training results, while its costs are similar to those of the simple bulldozer approach of training a neural net from scratch using GPU servers.