Last May, Tony Beltramelli of Uizard Technologies presented his latest algorithm, pix2code, at the NIPS conference. Put simply, the algorithm looks at a picture of a graphical user interface (i.e., the layout of an app) and determines, via an iterative process, what the underlying code likely looks like.
This is potentially groundbreaking technology. When further developed, pix2code will not only increase the speed at which work is automated, but also extend automation to more complex and in-demand tasks, such as programming and web/app development.
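Conceptually, the iterative process described above can be sketched as an encoder-decoder loop: the screenshot is encoded into features, and tokens of a UI description language are predicted one at a time until an end token appears. The sketch below is a toy stand-in, not the authors' implementation; the encoder and decoder functions are hypothetical placeholders (in the real model they would be a trained CNN and LSTM), and the canned token sequence is made up for illustration.

```python
END = "<END>"

def encode_screenshot(image):
    """Hypothetical encoder: a CNN in the real model."""
    return {"image": image}  # stand-in feature representation

def predict_next_token(features, context):
    """Hypothetical decoder step: an LSTM in the real model.
    Here it just replays a tiny hard-coded 'UI DSL' sequence."""
    canned = ["header", "{", "btn-active", "}", END]
    return canned[len(context)] if len(context) < len(canned) else END

def image_to_code(image, max_tokens=50):
    """Iteratively build up the code: predict a token, append it,
    and feed the growing sequence back in until END (or a cap)."""
    features = encode_screenshot(image)
    tokens = []
    while len(tokens) < max_tokens:
        tok = predict_next_token(features, tokens)
        if tok == END:
            break
        tokens.append(tok)
    return tokens

print(image_to_code("screenshot.png"))  # → ['header', '{', 'btn-active', '}']
```

The feedback loop (each prediction conditions the next) is what makes the generation iterative rather than a single one-shot classification.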
This past week, I attended the 2017 conference of the European Association of Work and Organizational Psychology (EAWOP), hosted by University College Dublin. There were many interesting sessions, the venue was amazing, and Dublin is a lovely city. I particularly enjoyed the presentations on the validity of selection and assessment tests; below are my main takeaways:
Professor Stephen Woods gave a most interesting presentation on the development of a periodic table of personality. You can find the related 2016 JAP article here. Woods compared the most commonly used personality inventories, "plotting" each scale on a two-dimensional circumplex anchored by its most strongly related Big-Five (OCEAN) scales. This creates a structure that closely resembles a periodic table, with which he demonstrated which elements of personality are well researched and which require more scholarly attention. Woods also reviewed how several of these elements relate to job-related outcomes. You can find the abstracts of the larger personality & analytics symposium here.
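The "plotting" step can be illustrated with a small sketch: a scale's position on the circumplex follows from its correlations with the two anchoring Big-Five factors, converted to an angle and a distance from the origin. The example scale and correlation values below are purely hypothetical, not Woods' actual data.

```python
import math

def circumplex_position(r_axis_x, r_axis_y):
    """Place a scale on a two-dimensional circumplex given its
    correlations with the two axis factors. Returns (angle in
    degrees, distance from the origin)."""
    angle = math.degrees(math.atan2(r_axis_y, r_axis_x)) % 360
    radius = math.hypot(r_axis_x, r_axis_y)
    return angle, radius

# Hypothetical example: a "Dominance" scale correlating .60 with
# Extraversion (x-axis) and .20 with Emotional Stability (y-axis)
# lands close to the Extraversion pole.
angle, radius = circumplex_position(0.60, 0.20)
print(round(angle, 1), round(radius, 2))  # → 18.4 0.63
```

Scales clustering at similar angles would then reveal well-covered regions of personality space, while sparsely populated angles mark the under-researched "elements".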
One of the symposia focused on social desirability, impression management, and faking behaviors in personality measurement:

1. Patrick Dunlop elaborated on the various ways to measure faking, such as bogus items, social desirability scales, or flagging blatantly extreme responses. His exemplary study of repeat applicants to firefighter positions was highly amusing.
2. Nicolas Roulin demonstrated how a perceived competitive climate in organizations can lead applicants to inflate most of their personality scores, with the exception of self-reported Extraversion and Conscientiousness, which remained quite stable regardless of perceived competitiveness.
3. Pelt (Ph.D. candidate at Erasmus University and IXLY) showed how, after some statistical corrections, the level of social desirability in personality tests can be reduced by using forced-choice instead of Likert scales. If practitioners catch on, this may well become the new status quo.
4. The fourth presentation was also highly relevant, proposing items that are less biased in their formulation toward specific personality traits: Extraversion is often framed positively, whereas items on Introversion carry inherently negative connotations (e.g., "shyness").
5. Most interestingly, Van der Linden (also Erasmus) showed how a higher-order factor analysis of the Big-Five (OCEAN) scales yields a single factor of personality, commonly referred to as the Big One or the general factor of personality. This factor could represent some form of social desirability, but according to the meta-analytic results Van der Linden presented, it correlates .88 with emotional intelligence! Moreover, it consistently predicts performance behaviors (including supervisor ratings and 360 assessments) better than the separate Big-Five factors, with only Conscientiousness retaining some incremental validity.
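The higher-order factor idea can be illustrated with a minimal sketch: extract the first principal component of the Big-Five inter-correlations, on which all five traits load positively, as a stand-in for the general factor. The correlation matrix below is invented for illustration (Neuroticism reverse-scored as Emotional Stability, so all socially desirable poles correlate positively); it is not Van der Linden's data, and a real analysis would use proper factor-analytic software rather than plain power iteration.

```python
TRAITS = ["O", "C", "E", "A", "ES"]
R = [  # hypothetical inter-correlations of Big-Five scale scores
    [1.00, 0.20, 0.25, 0.15, 0.20],
    [0.20, 1.00, 0.20, 0.30, 0.25],
    [0.25, 0.20, 1.00, 0.20, 0.30],
    [0.15, 0.30, 0.20, 1.00, 0.25],
    [0.20, 0.25, 0.30, 0.25, 1.00],
]

def first_component(matrix, iters=200):
    """Power iteration: the dominant eigenvector of the correlation
    matrix gives each trait's loading on the single general factor."""
    v = [1.0] * len(matrix)
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

loadings = first_component(R)
# All five traits load positively on the one factor -- the pattern
# behind the "Big One" interpretation.
print([round(x, 2) for x in loadings])
```

Whether that single factor reflects substance (e.g., something akin to emotional intelligence) or response bias such as social desirability is exactly the debate the symposium addressed.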
You can find the abstracts and the author details of the symposium here.
Schäpers (Free University Berlin) demonstrated with three independent experiments that the situational or contextual prompts in a situational judgment test (SJT) do not matter for its validity. In other words, removing the work-related critical incidents preceding the items did not affect the validity: not for general mental ability, personality dimensions, emotional intelligence, or job performance. In fact, validity even improved slightly for certain outcomes. These results suggest that SJTs may measure something quite different from what was previously assumed. Schäpers found similar effects for written and video-based SJTs. The abstract of Schäpers' paper can be found here.
Finally, assessment vendor cut-e was the main sponsor of the conference. Among other things, they presented their new tool chatAssess, which brings SJTs to a mobile environment. Via this link (https://maptq.com/default/home/nl/start/2tkxsmdi) you can run a demo using the password demochatassess. The abstract of the larger session on game-based assessment can be found here.
The rest of the 2017 EAWOP program can be viewed here.