Tag: mobile

Using OpenCV to win Mobile games

OpenCV logo

OpenCV is an open-source library with tools and functionality that support computer vision. It lets your computer use complex mathematics to detect lines, shapes, colors, text, and more.

OpenCV was originally developed by Intel in 2000, and some time later someone had the bright idea to build a Python module on top of it.

Using a simple…

pip install opencv-python

…you can now use OpenCV in Python to build advanced computer vision programs.

And this is exactly what many professional and hobby programmers are doing. Specifically, to get their computer to play (and win) mobile app games.

ZigZag

In ZigZag, you are a ball speeding down a narrow pathway and your only mission is to avoid falling off.

Using OpenCV, you can get your computer to detect objects, shapes, and lines.

This guy set up an emulator on his computer, so the computer can pretend to be a mobile device. Then he built a program using Python’s OpenCV module to get a top score.

You can find the associated code here, but note that you will need to set up an emulator yourself before you can run it.

Kick Ya Chop

In Kick Ya Chop, you need to stomp away parts of a tree as fast as you can, without hitting any of the branches.

This guy uses OpenCV to perform image pattern matching, which allows his computer to identify and avoid the tree’s branches. Find the code here.

Whack ‘Em All

We all know how to play Whack-a-Mole, and now this computer does too. Code here.

Pong

This last game also doesn’t need an introduction, and you can find the code here.

Is this machine learning or AI?

If you ask me, the videos above provide nice examples of advanced automation. But there’s no real machine learning or AI involved.

Yes, sure, the OpenCV package uses pre-trained neural networks under the hood, and you can definitely call those machine learning. But the programmers who use the opencv library here just leverage the knowledge stored in those networks to create very basic decision rules.

IF pixel pattern of mole
THEN whack!
ELSE no whack.

To me, it’s only machine learning when there’s really some learning going on. A feedback loop with performance improvement. And you may call it AI, IMO, when the feedback loop is more or less autonomous.

Fortunately, programmers have also been taking a machine learning/AI approach to beating games, specifically using reinforcement learning. Think of famous applications like AlphaGo and AlphaStar. But there are also hobby programmers who use similar techniques, for example to get their computer to obtain high scores in Trackmania.

In a later post, I’ll dive into those in more detail.

pix2code: teaching AI to build apps

Last May, Tony Beltramelli of Uizard Technologies presented his latest algorithm pix2code at the NIPS conference. Put simply, the algorithm looks at a picture of a graphical user interface (i.e., the layout of an app) and determines via an iterative process what the underlying code likely looks like.

Graphical user interface examples (Google Images)

Please watch Uizard’s pix2code demo video or the third-party summary at the bottom of this blog. My understanding is that pix2code is based on convolutional and recurrent neural networks (long explanation video) in combination with long short-term memory (short explanation video). Based on a single input image, pix2code generates code that is 77% accurate, and it works for three of the larger platforms (i.e., iOS, Android, and web-based technologies).

The input and output of pix2code

Obviously, this is groundbreaking technology. When developed further, pix2code will not only increase the speed with which society is automated/robotized, but also expand that automation to more complex and highly needed tasks, such as programming and web/app development.

Here you can read the full academic paper on pix2code.

Below is the official demo reviewed by another data enthusiast with commentary and some additional food for thought.

You can read some of my other blogs on neural networks and robotization here:

EAWOP 2017 – Takeaways

This past week, I attended the 2017 conference of the European Association of Work and Organizational Psychology (EAWOP), which was hosted by University College Dublin. There were many interesting sessions, the venue was amazing, and Dublin is a lovely city. Personally, I most enjoyed the presentations on selection and assessment test validity, and below are my main takeaways:

  • Professor Stephen Woods gave a most interesting presentation on the development of a periodic table of personality. You can find the related 2016 JAP article here. Woods compares the most commonly used personality indices, “plotting” each scale on a two-dimensional circumplex of the most strongly related Big-Five OCEAN scales. This creates a structure that closely resembles a periodic table, with which he demonstrates which elements of personality are well-researched and which require more scholarly attention. In the presentation, Woods furthermore reviewed several of these elements and their effects on job-related outcomes. You can find the abstracts of the larger personality & analytics symposium here.
  • One of the symposia focused on social desirability, impression management, and faking behaviors in personality measurement. The first presentation, by Patrick Dunlop, elaborated on the various ways in which to measure faking, such as with bogus items, social desirability scales, or by measuring blatant extreme responses. Dunlop’s exemplary study on repeat applicants to firefighter positions was highly amusing. Second, Nicolas Roulin demonstrated how the perceived competitive climate in organizations can cause applicants to positively inflate most of their personality scores, with the exception of self-reported Extraversion and Conscientiousness, which seemed quite stable regardless of the perceived competitiveness. Third, Pelt (Ph.D. candidate at Erasmus University and IXLY) demonstrated how (after some statistical corrections) the level of social desirability in personality tests can be reduced by using forced-choice instead of Likert scales. If practitioners catch on, this will likely become the new status quo. The fourth presentation was also highly relevant, proposing to use items that are less biased in their formulation towards specific personality traits (Extraversion is often promoted, whereas items on Introversion inherently have negative connotations, e.g., “shyness”). Fifth and most interestingly, Van der Linden (also Erasmus) showed how a higher-order factor analysis on the Big-Five OCEAN scales results in a single factor of personality – commonly referred to as the Big One or the general factor of personality. This one factor could represent some sort of social desirability, but according to meta-analytical results presented by Van der Linden, the factor correlates .88 with emotional intelligence! Moreover, it consistently predicts performance behaviors (also as rated by supervisors or in 360 assessments) better than the Big-Five factors separately, with only Conscientiousness retaining some incremental validity.
You can find the abstracts and the author details of the symposium here.

  • Schäpers (Free University Berlin) demonstrated with three independent experiments that the situational or contextual prompts in a situational judgment test (SJT) do not matter for its validity. In other words, excluding the work-related critical incidents from the items did not affect the predictive validity: not for general mental ability, personality dimensions, emotional intelligence, nor job performance. If anything, the validity improved a little for certain outcomes. These results suggest that SJTs may measure something completely different from what was previously posited. Schäpers found similar effects for written and video-based SJTs. The abstract of Schäpers’ paper can be found here.
  • Finally, assessment vendor cut-e was the main sponsor of the conference. They presented, among other things, their new tool chatAssess, which brings SJTs to a mobile environment. Via this link (https://maptq.com/default/home/nl/start/2tkxsmdi) you can run a demo using the password demochatassess. The abstract of this larger session on game-based assessment can be found here.

The rest of the 2017 EAWOP program can be viewed here.