A simple program for Arduino that plays Timberman.
Find more about the code and circuit examples on GitHub.
Abstract – Over the last few decades, with the help of technological advancements in computational power and improvements in interaction design, video games have become prominent instruments for entertainment. With the increasing number of players, researchers have mainly focused on revealing the underlying psychological reasons behind gaming. By applying Self-Determination Theory (SDT) in the gaming context, it has been concluded that the satisfaction of three basic intrinsic needs, namely autonomy, competence, and relatedness, predicts the motivation to play video games.
However, only a few studies have focused on the game features supporting each of these three basic needs. Discovering which specific game features contribute to the satisfaction of each need could help game developers design games in which motivation and engagement are ensured.
In this thesis, the relations between time pressure, one of the most commonly used game design elements, and autonomy and competence need satisfaction are examined. In an experimental design, time pressure is manipulated to establish two conditions (no time pressure in the control group, time pressure in the experimental group) by implementing a countdown mechanic in a 3D survival shooting game. The mediating effects of autonomy and competence on the associations between time pressure and intrinsic motivation, flow, engagement, performance, and enjoyment are also examined.
Results showed that, although there was a significant difference in players' perceived time pressure, no significant differences were found in autonomy and competence need satisfaction between the two conditions. Similarly, no differences in intrinsic motivation, engagement, performance, and enjoyment were revealed between the two conditions. The only significant difference was found in flow between the control and experimental conditions, such that participants in the experimental condition experienced more flow than those in the control condition. However, there were significant differences in flow and engagement between a subgroup of the experimental condition who failed to complete the goal of the game within the specified time limit and the other subgroups (in both the control and experimental groups) who successfully completed the game in the given time. Competence and performance decreased as perceived time pressure increased within the experimental group, but the differences did not reach significance. On the other hand, flow and engagement were enhanced with the increase in perceived time pressure.
These findings suggest that there may be an optimal time limit at which autonomy and competence are maximized and positively correlated, and thus intrinsic motivation, flow, engagement, performance, and enjoyment are promoted throughout game play.
Download the full thesis.
Yıldırım, Irem Gökçe, Time Pressure As Video Game Design Element And Basic Need Satisfaction, MSc Thesis, Department of Modeling and Simulation, Supervisor, August 2015, 57 pages.
In a thought-provoking contribution published by The Atlantic, Ian Bogost discusses the role of characters in video games, our obsession with self-identification and self-representation, and why we should start considering a future where systems like video games could work without characters at all, renouncing “[…] our own selfish, individual desires in the interest of participating in systems larger than ourselves”.
Google DeepMind's scientists have built software that can play video games as well as a human being, or even better. By exploiting the theory of reinforcement learning, Google's software is able to improve its performance after hours of play. As stated in the abstract:
The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
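The deep Q-network described above is built on the temporal-difference update at the heart of reinforcement learning. As a rough illustration of that update rule only (not of the paper's deep network, pixel inputs, or experience replay), here is a minimal tabular Q-learning sketch on a toy corridor environment; the environment, function name, and parameter values are all my own assumptions for the example:

```python
import random

def q_learning(n_states=6, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy corridor (a hypothetical example
    environment): the agent starts at state 0; action 1 moves right,
    action 0 moves left; reaching the rightmost state yields reward 1
    and ends the episode."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # temporal-difference update:
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
# Greedy policy derived from the learned values, one entry per
# non-terminal state (1 = move right).
policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]
print(policy)
```

The DQN in the paper replaces the table `Q` with a deep neural network over raw pixels, but the learning signal is the same temporal-difference error shown in the update line.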