A Cognitively Inspired Heuristic for Two-armed Bandit Problems: The Loosely Symmetric (LS) Model☆
Open access under a Creative Commons license
Keywords
reinforcement learning
causal induction
biconditional reading
symmetry
mutual exclusivity
n-armed bandit problem
exploration–exploitation dilemma
speed-accuracy tradeoff
☆ Selection and peer-review under responsibility of the Program Committee of IES2013.
Copyright © 2013 The Authors. Published by Elsevier B.V.