Journal of Experimental Psychology:
Learning, Memory, and Cognition
2008, Vol. 34, No. 1, 167–185
Copyright 2008 by the American Psychological Association
0278-7393/08/$12.00 DOI: 10.1037/0278-7393.34.1.167
Decision Making and Learning While Taking Sequential Risks
Timothy J. Pleskac
Indiana University
This document is copyrighted by the American Psychological Association or one of its allied publishers.
This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.
A sequential risk-taking paradigm used to identify real-world risk takers invokes both learning and
decision processes. This article expands the paradigm to a larger class of tasks with different stochastic
environments and different learning requirements. Generalizing a Bayesian sequential risk-taking model
to the larger set of tasks clarifies the roles of learning and decision making during sequential risky choice.
Results show that respondents adapt their learning processes and associated mental representations of the
task to the stochastic environment. Furthermore, their Bayesian learning processes are shown to interfere
with the paradigm’s identification of risky drug use, whereas the decision-making process facilitates its
diagnosticity. Theoretical implications of the results in terms of both understanding risk-taking behavior
and improving risk-taking assessment methods are discussed.
Keywords: risk taking, learning, Bayesian, individual differences, cognitive psychometrics
Author Note

Timothy J. Pleskac, Cognitive Science Program, Indiana University.
A portion of this article is based on a dissertation submitted in partial fulfillment of the doctoral requirements at the University of Maryland–College Park. Parts of this article were also presented as posters at the 2004 annual meeting of the Society for Judgment and Decision Making, Minneapolis, Minnesota, and the 2005 biannual meeting of the European Association for Decision Making, SPUDM-20, Stockholm. National Institute on Drug Abuse Grant R21-DA14699 (principal investigator: Carl W. Lejuez) partially funded the study, and National Institute of Mental Health Research Service Award MH019879, awarded to Indiana University, supported the writing of this article.
I extend my gratitude to Thomas S. Wallsten, Carl W. Lejuez, and Ralph Hertwig for their continuous support throughout this project. I thank Neil Bearden, Ana Franco-Watkins, and Eldad Yechiam for constructive comments on an earlier version of this article. I also thank Anthony Bishara, Sarah Queller, Julie Stout, Ido Erev, members of the Clinical and Cognitive Neuroscience Laboratory at Indiana University, and members of the ABC Research Group at the Max Planck Institute for Human Development for their input on the project. Finally, I am indebted to Laura Wiles for detailed editing of the article.
Correspondence concerning this article should be addressed to Timothy J. Pleskac, who is now at the Department of Psychology, 282A Psychology Building, Michigan State University, East Lansing, MI 48824. E-mail: tim.pleskac@gmail.com

Learning and decision making are conceptually linked. Typically, only after decision makers (DMs) make a decision do they observe or experience the precise outcome of that decision. For example, only after commuters select a traffic route do they determine its effectiveness, and only after athletes use a steroid do they learn about the precise effects it has on their body. These observations better inform DMs about the precise properties of their choice options and shape their next decision among the same or similar options. Despite this natural association between decision and learning processes, most decision-making theories fail to incorporate or explicate a learning component (e.g., Busemeyer & Townsend, 1993; González-Vallejo, 2002; Kahneman & Tversky, 1979). Yet how DMs learn from experience has proven an important process in understanding risk-taking behavior. It can, for example, create an aversion toward risky alternatives in the gain domain and an attraction toward risky alternatives in the loss domain, a pattern typically attributed to how DMs evaluate outcomes (Denrell, 2007; March, 1996). The learning process can even produce the opposite pattern (Erev & Barron, 2005; Hertwig, Barron, Weber, & Erev, 2004; Weber, Shafir, & Blais, 2004).

Applying theories of decision making to the Balloon Analogue Risk Task (BART; Lejuez et al., 2002) or to the Iowa Gambling Task (Bechara, Damasio, Damasio, & Anderson, 1994) also exposes the necessity of learning. Clinicians use these laboratory-based gambling tasks to study and identify people with specific clinical or neurological deficits. Cognitive models of these tasks reveal that decision and learning processes are necessary to account for choices made by both clinical and normal populations (Busemeyer & Stout, 2002; Wallsten, Pleskac, & Lejuez, 2005). Besides describing behavior during the tasks, the models also show how the populations differ on the underlying cognitive dimensions captured within the models. For example, during the BART, Wallsten et al. (2005) found that people who take unhealthy and unsafe risks tend to differ from normal populations in how they evaluate payoffs and in the consistency of their responses.

What is unclear, however, is the extent to which real-world risk takers systematically differ in the learning process used during the BART. There are two possible roles the learning process could play. On the one hand, learning may not play a significant role at all. This prediction comes from studies showing that Slovic's (1966) devil task discriminates between risk takers without requiring learning (Hoffrage, Weber, Hertwig, & Chase, 2003). Slovic's devil task (henceforth, devil task) is of interest because it has an identical structure to the BART but does not require learning. On the other hand, the learning process may aid the BART's ability to identify risk takers. In the Iowa Gambling Task, for example, the learning process is necessary to understand how specific clinical deficits lead to individual differences in risky decision making (Stout, Rock, Campbell, Busemeyer, & Finn, 2005; Yechiam, Busemeyer, Stout, & Bechara, 2005). To directly address these conflicting predictions, this article develops a larger class of sequential risk-taking tasks: the Angling Risk Tasks (ART). This
larger class experimentally dissociates the contribution of learning from that of the decision processes in a model-free manner. Consequently, we can address whether the learning process gives rise to systematic individual differences between populations.
Equally important to understanding how individual DMs differ
in their learning process is to identify how the learning process
changes across different task environments. One possibility is that
DMs use the same learning process regardless of the task environment. This prediction is derived from the counterintuitive finding that DMs mentally model the BART, a nonstationary probabilistic environment, as a stationary or sampling-with-replacement
environment (Wallsten et al., 2005). The mental representation in
turn affects the specific learning process used. One would then
expect that during a task with a stationary stochastic environment
DMs would adopt the same stationary mental representation. Alternatively, akin to findings in multiattribute decision making,
DMs may use different processes in different environments (see
Gigerenzer, Todd, & the ABC Research Group, 1999; Payne,
Bettman, & Johnson, 1993). To address this issue, DMs completed
the ART with different probabilistic structures (stationary and
nonstationary probabilistic environments). Anticipating the results,
this article will show that DMs do adjust their mental representations to different stochastic environments. Furthermore, during the ART, in stark contrast to the BART, they adopt mental representations that match the task's actual probabilistic structure.
The article is structured as follows. First, I introduce the sequential risk-taking paradigm and compare two exemplars from it: the BART and Slovic's (1966) devil task. Next, to derive predictions about how learning may either aid or obstruct the identification of risk takers, I introduce Wallsten et al.'s (2005) Bayesian sequential risk-taking model (BSR) of the paradigm. I show
how the model can be adapted to account for different mental
representations of different stochastic environments. Then to test
the predictions, I develop the ART and present the results of a
study. During the experiment, participants played the four different
conditions of the ART, reported their past drug-use activity, and
completed the Domain-Specific Risk-Attitude Scale (DOSPERT;
Weber, Blais, & Betz, 2002). The results are analyzed in terms of
the cognitive processes at play during the four tasks and how risky
drug users differ on the underlying cognitive dimensions of the
model. Finally, in the Discussion section I address how the tandem
use of cognitive models and the sequential risk-taking paradigm
can lead to a more precise understanding of the underlying processes used in sequential risky choice.
The Sequential Risk-Taking Paradigm
Two exemplars of the sequential risk-taking paradigm are the
BART and devil task. Both identify real-world risk takers. Performance on the BART, for instance, correlates with a variety of
self-reported risky behaviors such as drinking alcohol, smoking
cigarettes, using illegal drugs, gambling, not wearing a seatbelt,
engaging in unprotected sex, and stealing (Lejuez, Aklin, Jones, et
al., 2003; Lejuez, Aklin, Zvolensky, & Pedulla, 2003; Lejuez et al.,
2002; Lejuez, Simmons, Aklin, Daughters, & Dvir, 2004). The
devil task distinguishes among risk takers at different developmental stages (Montgomery, 1974; Slovic, 1966) and discriminates children who cross the street dangerously from those who do not (Hoffrage et al., 2003).
Both the BART and the devil task require participants to play
the same structural game for up to a maximum of n choice trials.
On each trial they make a choice between two options, a stop and
a play option. If DMs choose to stop, then the game ends and they
collect the reward in the bank. If they choose the risky play option,
then with some probability one of two possible events occurs. A
successful event can happen in which a constant reward is deposited into the bank, and DMs proceed to the next trial. Alternatively,
a failure event can occur, ending the game, and participants lose
the accumulated reward in the bank for that round.1
In both the BART and the devil task, the same probabilistic
structure determines the probability of a failure event (and by
implication the successful event). For each round, one of the n
choice trials is randomly chosen to produce the failure event. Thus,
all possible trials are a priori equally likely to result in a failure
event. Yet, like drawing balls from an urn and not replacing them,
the probability of a failure increases with each successful play
option, making the probability of a successful event nonstationary.
The games are typically played anywhere between 1 and 90
rounds, and at the beginning of each round the bank is empty.
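The shared structure is easy to simulate. The sketch below is my own illustration, not code from the article: the failure trial is drawn uniformly from the n choice trials, so the chance of surviving i plays is (n − i)/n and the conditional failure probability rises with each success.

```python
import random

def play_round(n, reward=5, stop_after=None, rng=random):
    """Simulate one round: one of the n trials is secretly chosen to fail.

    `stop_after` is the trial at which the player banks (None = never stop).
    Returns the banked winnings (0 if the failure trial is hit first).
    """
    failure_trial = rng.randint(1, n)  # a priori uniform over the n trials
    bank = 0
    for trial in range(1, n + 1):
        if stop_after is not None and trial > stop_after:
            return bank          # player stops and collects the bank
        if trial == failure_trial:
            return 0             # failure event: round ends, bank is lost
        bank += reward           # success: constant reward deposited
    return bank

# The chance of surviving i plays is (n - i) / n, so the conditional
# probability of a failure grows with each successful play.
```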
Despite their underlying similarities, the BART and the devil
task differ in the amount of information participants know about
the task. The devil task is well defined for respondents. During the game, participants (often children) are presented with a board with n = 10 switches, making them visually aware of the number of choice trials available for each round.2 True to the structure of the paradigm, nine of the switches produce a sticker or some fixed amount of candy when pulled, and the remaining switch yields a failure event (a devil sticker). Again, players must choose when to stop the round and collect their accumulated reward; otherwise, if they pull the devil switch, the round ends and they lose their collected prizes for that round.
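For a risk-neutral player this structure implies a simple benchmark: planning to pull g of the n switches yields an expected gain proportional to g(n − g)/n, which peaks at g = n/2, that is, 5 pulls in the 10-switch devil task. A quick illustrative check (the function name is mine, not the article's):

```python
def expected_value(g, n=10, x=1.0):
    """Expected winnings for a player who plans to pull g switches:
    the plan survives with probability (n - g) / n and banks g * x."""
    return g * x * (n - g) / n

best = max(range(11), key=expected_value)
# For n = 10 the expected value peaks at g = 5.
```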
The BART, in comparison, is ill defined for players.3 During the game, a balloon is shown on a computer monitor and participants inflate it by pressing a button at the bottom of the screen. With each successful inflation they receive a fixed amount of money, x¢. If the balloon explodes (a failure), participants lose the money for the round. The task is usually constructed to allow a total of n = 128 possible inflations (choice trials) per round. The BART is ill defined because participants are unaware of the number of possible choice trials for each balloon, nor are they explicitly told that the chances of a failure (explosion) increase with each successful choice trial. Participants are told only that (a) they will earn x¢ with each successful pump; (b) the balloon will explode somewhere between the first pump and the point at which it fills the screen; (c) they must decide when to stop pumping and collect their reward
before an explosion ends their round; and (d) they will play the BART for 30 rounds (Lejuez et al., 2002).

Footnote 1. This is a different structure from bandit problems (see Berry & Fristedt, 1985), typically used to study how DMs learn in uncertain situations. During bandit problems, DMs make repeated choices between two or more initially unknown options, and unlike in the sequential risk-taking paradigm, DMs have to learn the possible payoffs as well as their distribution associated with each option.
Footnote 2. The children are also shown a graph explaining that the likelihood of the failure event increases with each pulled switch (see Hoffrage et al., 2003; Jamieson, 1969; Slovic, 1966).
Footnote 3. The terms ill-defined task and well-defined task come from experimental economics, where they are frequently used in discussions about deception (see Hertwig & Ortmann, 2001; Hey, 1998).
The success of both the BART and the devil task in identifying
real-world risk takers suggests that the ill-defined characteristic of
the BART may not be a necessary element to identify real-world
risk takers. In fact, one aspect of the BSR model suggests that not
fully informing respondents about the BART may even hurt its
clinical diagnosticity. At the same time, though, the model indicates that learning might help its diagnosticity. These conflicting
predictions are not intuitive, so in the next section I introduce the
BSR model and the underlying predictions.
The BSR Model

The BSR model was the best performing of the many models Wallsten et al. (2005) tested: it accounted for choice behavior during the BART, and its parameters correlated with self-reported unhealthy and unsafe risky behaviors.4 The model posits that DMs use three cognitive processes to complete a sequential risk-taking task: evaluation, response, and learning. Each component has at least one free parameter that is used to identify how each individual DM quantitatively differs on the underlying process. Each process is briefly described next, beginning with the evaluation process.

Footnote 4. The cognitive model is technically model number 3 in Wallsten et al. (2005).

Evaluation

To make a choice between playing or stopping, DMs within the model adopt a choice policy whereby prior to each round they identify how many trials they should play to maximize expected gains. That is, at the beginning of each round, DMs can earn x¢, 2x¢, . . ., ix¢, . . ., nx¢ on choice trials 1, 2, . . ., i, . . ., n with subjective probabilities πh(1), πh(2), . . ., πh(i), . . ., πh(n). Thus, the expected gain on round h for each choice trial i is

vh,i = πh(i)(ix)^γ+.   (1)

The exponent γ+ is akin to prospect theory's diminishing sensitivity parameter, where γ+ must be greater than 0 and typical participants have values less than 1 (Kahneman & Tversky, 1979; Tversky & Kahneman, 1992). Lower values of γ+ indicate less sensitivity to changes in payoffs, and higher values indicate greater sensitivity.5 The precise subjective probability of a success πh(i) depends on DMs' mental representations of the stochastic process (stationary or nonstationary) and their cumulative experience from the past h − 1 rounds with the task. The best fitting BSR in the BART assumes participants have a stationary representation, where the probability of a success on any given trial is q̂h and the subjective probability of i successes is πh(i) = q̂h^i. The Bayesian learning process (specified later) describes how q̂h changes with cumulative experience in the task.

Footnote 5. Because of the nature of the BART, a probability weighting function could not be estimated, and so no weighting function was assumed. However, the larger set of tasks developed here does allow both the value and weighting functions to be estimated for a given respondent; this possibility is addressed later in the article.

The model assumes DMs identify the choice trial that maximizes expected payoffs and use it as a targeted trial, Gh. Optimizing Equation 1, assuming a stationary stochastic process, produces the following closed-form solution:

Gh = −γ+/ln(q̂h).   (2)

Equation 2 illustrates how DMs with different values of γ+ will behave differently during sequential risk-taking tasks. Respondents with larger values of γ+ will have a larger target than people with lower values of γ+ and will typically choose the play option for more trials in a given round.

Response

DMs use the targeted trial Gh to probabilistically choose between playing or stopping on each trial. The response rule assumes that the probability of choosing the play option, rh,i, during round h on trial i strictly decreases with each choice. Formally, the response rule

rh,i = 1/{1 + exp[β(dh,i − μh)]}   (3)

captures these properties, where dh,i = i − Gh and β is a free parameter representing how consistently DMs follow their targeted evaluation. DMs with lower values of β will be more variable in their overall gambling behavior during a sequential risk-taking task.

Departing from the model in Wallsten et al. (2005), the response function in Equation 3 also contains a response bias module, μh, allowing DMs to have a bias in their response that changes over rounds. Some DMs, as Wallsten et al. (2005) report, have an exploratory bias whereby during the first few rounds they continue choosing the play option past the maximizing trial. Other participants nearing the end of the task tend to choose the play option much more than would be expected, perhaps exhibiting a house-money effect (e.g., Thaler & Johnson, 1990). To account for these two biases, μh is set equal to the following expression:

μh = exp[z(h − H/2)] − 1,

where H is the total number of rounds that DMs played during the task. The free parameter z identifies the type of bias a DM has. Negative and positive values of z characterize an early exploratory or a late bias, respectively. If z = 0, then the participant exhibits no round-dependent bias.

Learning

Reacting to participants' partial ignorance in ill-defined tasks like the BART, the BSR allows DMs to have different mental representations of the stochastic process and assumes they engage in a Bayesian learning process to discover the parameters of the task, given their mental representation. During the BART, participants typically represent the task as a stationary one (e.g., drawing successive balls from an urn and replacing them), despite its true nonstationary stochastic structure. This mental representation and the associated Bayesian learning process are described next.

Stationary representation. Before the first round of each game, DMs have a prior opinion about the probability of a success, q. The prior beliefs are modeled with a beta distribution, f1(q),
whereby the mean of the beta distribution, q̂1, represents the
estimated subjective probability of a success on the first round for
any given trial and is used in Equation 2. It is a free parameter in
the models.6 DMs with higher levels of q̂1 are more optimistic and
will tend to choose the play option more than would DMs with
lower levels of q̂1.
After each round, DMs observe the total number of successes, ch, and whether the round ended in a failure (dh = 1) or not (dh = 0). DMs then update their beliefs about the probability of a success using Bayes' rule:
fh+1(q | c1, d1, . . ., ch, dh) = p(c1, d1, . . ., ch, dh | q)f1(q) / ∫ p(c1, d1, . . ., ch, dh | q)f1(q) dq.   (4)
The impact that the observed data have during the updating process depends on the DMs' uncertainty in their prior beliefs, δ1. The free parameter δ1 is the variance of the prior beta distribution. DMs with low uncertainty will have a low δ1 that leads them to discount the observed data more than would DMs with a higher δ1. Once the distributions describing respondents' beliefs in q are updated, their new means are used as the estimates of q̂h+1 for Round h + 1.
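Because the beta prior is conjugate to the successes and failures observed each round, the update in Equation 4 reduces to count bookkeeping. The sketch below is my own minimal illustration of the stationary learner, including the reparameterization from (q̂1, δ1) into the beta parameters (a0, b0) mentioned in the footnotes; all names and numbers are assumptions, not the author's code.

```python
import math

def beta_params(mean, variance):
    """Reparameterize a beta prior from (mean, variance) into (a0, b0)."""
    s = mean * (1 - mean) / variance - 1  # s = a0 + b0
    return mean * s, (1 - mean) * s

def update(a, b, successes, failed):
    """Conjugate update after a round with `successes` plays and an
    optional failure event: Equation 4 reduces to adding counts."""
    return a + successes, b + (1 if failed else 0)

def target_trial(q_hat, gamma_plus):
    """Targeted trial from Equation 2: Gh = -gamma+ / ln(q_hat)."""
    return -gamma_plus / math.log(q_hat)

a, b = beta_params(0.9, 0.005)   # an optimistic, fairly certain prior
for successes, failed in [(12, True), (20, False), (7, True)]:
    a, b = update(a, b, successes, failed)
q_hat = a / (a + b)              # posterior mean feeds Equation 2
```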
The mean (q̂1) and the variance (δ1) of the prior distribution show how the learning process can either improve or impede the sequential risk-taking paradigm's clinical diagnosticity. On the one hand, real-world risk seekers and avoiders may have systematic differences in their prior beliefs (summarized by q̂1). Specifically, risk seekers could be more optimistic and believe the initial probability of a success to be higher (high q̂1) than their risk-avoiding counterparts. As a result, they would choose to play for more trials in a given round, producing a correlation between the adjusted ART scores and risky behaviors.

On the other hand, the uncertainty across DMs in their beliefs (δ1) may only hurt the diagnosticity of the task. In the model, δ1 moderates the degree to which respondents' updated beliefs are a compromise between their observed data from past rounds and their initial beliefs. Higher levels of uncertainty (δ1) mean that DMs will update their beliefs faster to reflect the observed data. And because the observed data are probabilistic, in the short run the beliefs of DMs will be more variable, reducing the reliability of correlations between performance and real-world behavior. Of course, this result is further compounded by differences in how far DMs continue to choose the play option in a given round.
Nonstationary mental representation. The stationary mental representation is surprising because both the BART and the devil task have a nonstationary probabilistic structure in which the a priori probability of a success, πs(i), decreases with each passing choice trial i:

πs(i) = (n − i)/n,

where n is the total possible number of choice trials. Examining performance in a larger class of tasks can address whether DMs use the same or a different mental representation in different tasks.
This question of same or different mental representations can be examined with an alternative BSR model that allows for a correct nonstationary mental representation of the task (see also Wallsten et al., 2005, model number 7). The alternative model assumes that DMs adopt a correct mental representation but remain uncertain about the precise properties of the task. That is, during ill-defined conditions, DMs are uncertain of the maximum number of possible trials, n. A discretized gamma distribution over n, p1(n), describes their prior opinion about n's value for Round 1. The mean and variance of the gamma distribution are free parameters and carry the same psychological interpretation as their stationary counterparts.7 The mean represents the best prior estimate of the maximum number of trials allowed on Round 1, n̂1, and scales how optimistic DMs are about the task. The variance indexes uncertainty and controls the impact that observed successes and failures have on the DMs' Bayesian updating process. In this mental representation, DMs learn and update their opinion of the likelihood of different values of n using Bayes' rule:

ph+1(n | c1, d1, . . ., ch, dh) = p(c1, d1, . . ., ch, dh | n)p1(n) / Σn′ p(c1, d1, . . ., ch, dh | n′)p1(n′).   (5)
A derivation of the revision process can be found in Appendix C
of Wallsten et al. (2005). The expected value for the updated
distribution after each round represents the new estimate of the
maximum number of trials, n̂h⫹1.
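The nonstationary learner can be sketched the same way. This is my own illustration: the discretization follows the footnoted recipe only approximately (a midpoint evaluation rather than a true integral), and the likelihood is one plausible reading of the urn structure, in which ch successes ending in a failure have probability 1/n and a voluntary stop after ch successes has probability (n − ch)/n. Names and settings are assumptions.

```python
import math

def discretized_gamma(shape, scale, n_max=500):
    """Prior p1(n): a gamma density evaluated near each n = 1..n_max
    (midpoint approximation of the footnoted integral), then normalized."""
    def density(x):
        return x ** (shape - 1) * math.exp(-x / scale)
    probs = [density(n) for n in range(1, n_max + 1)]
    total = sum(probs)
    return [p / total for p in probs]

def likelihood(n, successes, failed):
    """P(round data | n) under the one-failure-among-n urn structure."""
    if failed:
        return 1.0 / n if n >= successes + 1 else 0.0
    return (n - successes) / n if n >= successes else 0.0

def bayes_update(prior, successes, failed):
    """Equation 5: reweight p(n) by the round's likelihood and normalize."""
    post = [p * likelihood(n + 1, successes, failed)
            for n, p in enumerate(prior)]
    total = sum(post)
    return [p / total for p in post]

prior = discretized_gamma(shape=4.0, scale=16.0)       # prior mean near 64
post = bayes_update(prior, successes=30, failed=True)  # rules out n <= 30
n_hat = sum((n + 1) * p for n, p in enumerate(post))   # new estimate of n
```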
The remaining evaluation and response components are unchanged. DMs still identify the maximizing trial and probabilistically choose to play or stop based on the distance from that trial. However, the subjective probability in Equation 1 is set to πh(i) = (n̂h − i)/n̂h, and the maximizing trial is

Gh = n̂h γ+/(γ+ + 1),   (6)

where again n̂h is either obtained from the DMs' estimated beliefs via the learning process in ill-defined tasks or directly observed in well-defined games. Comparing Equation 2 with Equation 6 also shows that γ+ retains the same properties. In both cases, larger values of γ+ lead to larger target values and consequently to riskier play in a given round.
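Both closed-form targets can be checked numerically: a brute-force grid search over the expected-gain curves in Equation 1 recovers Equations 2 and 6. This is an illustrative sketch, not the author's estimation code.

```python
import math

def stationary_target(q_hat, gamma_plus):
    """Equation 2: maximizes q^i * (i x)^gamma+ over continuous i."""
    return -gamma_plus / math.log(q_hat)

def nonstationary_target(n_hat, gamma_plus):
    """Equation 6: maximizes ((n - i)/n) * (i x)^gamma+ over continuous i."""
    return n_hat * gamma_plus / (gamma_plus + 1)

def argmax_on_grid(value, lo, hi, steps=200000):
    """Return the grid point in [lo, hi] where `value` is largest."""
    best_i, best_v = lo, float("-inf")
    for k in range(steps + 1):
        i = lo + (hi - lo) * k / steps
        v = value(i)
        if v > best_v:
            best_i, best_v = i, v
    return best_i

q, g = 0.95, 0.6
grid = argmax_on_grid(lambda i: (q ** i) * i ** g, 0.01, 60.0)
# grid lands within the search resolution of -g / ln(q), about 11.7
```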
Footnote 6. The beta distribution was chosen for its nice mathematical properties as a conjugate distribution to the binomial (see Gelman, Carlin, Stern, & Rubin, 2003). Typically the beta distribution is specified with two free parameters, a0 and b0, where a0 > 0 and b0 > 0, and the mean and variance are functions of these two parameters. In fact, the models were estimated with a0 and b0. For ease of interpretation, a0 and b0 are reparameterized into the mean and variance in the text.
Footnote 7. The gamma distribution is a continuous distribution that is sometimes specified by the shape and scale parameters α and β, where μG = αβ and σG² = αβ² (see Gelman et al., 2003). It was chosen because it is distributed over the positive reals and is flexible enough to characterize a wide range of different opinions. To obtain the discrete approximation to the gamma distribution, I integrated the distribution from x = n − 0.5 to x = n + 0.5 for each n = 1, 2, . . ., ∞ and then normalized.

Summary of Model and Predictions

In summary, the BSR model uses at most five parameters to predict the decision between playing and stopping during round h on choice trial i. It assumes DMs evaluate options prior to beginning each round and identify a targeted choice trial that would
maximize their expected gains. They then probabilistically choose
to play or stop according to the distance from their target. Finally,
during ill-defined games they use their observed data from past
rounds to update their beliefs about the properties of the task. The
model can account for both stationary and nonstationary mental
representations.
There are two sets of predictions from the BSR that this article investigates. One set deals with the competing hypotheses regarding learning during ill-defined conditions. On the one hand, individual differences in prior beliefs, captured with the parameter q̂1 or n̂1, may aid the paradigm's ability to identify real-world risk takers. On the other hand, DMs' general uncertainty, captured with the parameter δ1 or its nonstationary counterpart, may do the opposite and hurt this ability. A second prediction is that DMs mentally model sequential risk-taking tasks as having a stationary probabilistic structure even when the stochastic structure is truly nonstationary. Next, I develop a class of four sequential risk-taking tasks, the Angling Risk Tasks, that experimentally tests these predictions (see Figure 1).
The Angling Risk Tasks (ART)
The ART uses fishing tournaments akin to decision theory’s prototypical “balls in the urn” laboratory task to aid participants’ understanding of the stochastic environment. The premise of the ART lends
itself well to manipulations of learning and the type of stochastic
process. During a particular game, participants fish in a tournament
for H rounds or trips (typically 30) in a pond that has 1 blue fish and
n – 1 red fish. With each cast of a computerized fishing rod, they hook
a fish (each fish is equally likely to be caught). If it is red, then they
earn 5¢ and can cast again. But if it is blue, then the round ends and
the money earned on that round is lost.
Different levels of learning can be manipulated by changing the
weather conditions of the fishing tournament. For example, the
tournament can take place on a sunny day, allowing participants to
see how many fish are swimming in the pond and eliminating the
need to learn their distribution. In contrast, the tournament can take
place on a cloudy day, concealing the fish in the pond, which in
turn forces participants to learn about how many potential fish are
in the pond. If real-world risk takers do systematically differ in their prior beliefs (q̂1 or n̂1), then the ill-defined cloudy conditions should correlate with real-world risk taking as well as, or even better than, the well-defined sunny conditions do.
Also, DMs’ use of the same or different mental representations
in different stochastic conditions can be tested by changing the
pond’s release law. Participants can catch and keep their fish (the catch ‘n’ keep tournament), a sampling-without-replacement process that is identical to the BART and the devil task. Or participants can release their fish back into the pond (the catch ‘n’ release tournament), a new task with a stationary sampling-with-replacement process that matches the assumptions DMs hold about the BART.
In the experiment detailed next, respondents completed all four
tournaments and also reported their past illegal drug use activity.
The activity of drug use was selected because it has been one of the
standard reported behaviors that the BART has been validated with
(see for example Lejuez et al., 2002). In addition, risky drug use is
an activity performed at different levels in and around college
campuses, where this study took place (Johnston, O’Malley, Bachman, & Schulenberg, 2005).
Figure 1. A screenshot of the Angling Risk Tasks’ catch ‘n’ keep tournament on a sunny day. The screen
changes for the three other tournaments. If the weather is cloudy, then the fish in the pond are not shown to the
participants and no information about their number is given in the information panel. During catch ‘n’ release,
the cooler is closed and the fish are returned to the pond rather than the cooler.
Method

Participants
A total of 72 participants were recruited from the University of
Maryland community with advertisements placed throughout the
campus. The sample consisted of 38 men and 34 women, ranging
in age from 18 to 34 years (M = 21.6, SD = 3.9). Fifty-six percent
described themselves as White, 18% as Black or African American, 17% as Asian or Southeast Asian, and 4% as Hispanic or
Latino; the remaining 6% marked “Other” or chose not to respond
to the question. They were paid $7 for their time. In addition,
participants could earn a bonus based on their winnings in the
games. On average, people received an extra $5, but a few won
nothing and one individual earned an extra $12.
Materials

The ART. In each of the four tournaments participants had H = 30 rounds during which they fished in a pond similar to the one shown in Figure 1. At the beginning of each round the pond had 1 blue fish and n – 1 red fish. Below the pond were two buttons and an information panel. One button was labeled “Go Fish.” Pressing it caused the rod on the left of the screen to cast a line into the pond and hook a fish. Each fish was equally likely to be caught on a given cast. If a red fish was caught, then 5¢ was placed into the “Trip Bank” shown on the information panel. What happened next depended on the release law. If the law was catch ‘n’ keep, then the red fish was placed in the cooler on the right of the screen, reducing the total number of red fish in the pond by one. If the law was catch ‘n’ release, then the red fish was placed back into the pond. Either way, the computerized fish swam around the pond and participants had another opportunity to cast the line into the pond for that round. However, if a blue fish was caught, then the round ended and participants lost their accumulated money in the “Trip Bank.” When participants decided to stop fishing, they pressed the “Collect” button to transfer the money to the “Tournament Bank” on the information panel and then began a new round.

In addition to the two release laws, there were two types of weather conditions. If the weather was sunny—as indicated by the weather forecast in the bottom right—the pond was clear and participants could see how many fish were in it at all times. Additionally, the information panel listed how many red and blue fish were in the pond before each cast. However, if the weather was cloudy, then the pond was murky, concealing the number of fish in the pond, and the information panel was left blank. Combining the two release laws with the two weather forecasts produced four different fishing tournaments (conditions). Participants completed all four tournaments.

Drug and alcohol questionnaire. As a measure of risky illegal/legal drug use, participants reported with yes/no responses the number of drug categories ever tried. The 11 different drug categories were cannabis, alcohol, cocaine, MDMA (ecstasy), stimulants (e.g., speed), sedatives/hypnotics, opiates, hallucinogens, PCP, inhalants, and nicotine. The sum of the number of categories tried (polydrug) is a validated measure of risky drug use (see Babor et al., 1992; Grant, Contoreggi, & London, 2000). After each yes/no question, participants were also asked to report about how often they used the drug at the time in their life when they used it the most frequently (i.e., never, one time, monthly or less, 2 to 4 times a month, 2 to 3 times a week, or 4 or more times a week). Weighting the categories that participants reported trying with the frequency rank and summing created the measure identified as weighted polydrug.

Domain-Specific Risk-Taking (DOSPERT) Scale. Participants also completed the DOSPERT Scale (Weber et al., 2002), which contains 40 items that assess the likelihood of engaging in risky behavior in six domains: ethics, investment, gambling, health/safety, recreational, and social. Two separate variants of the scale also assess the perception of the magnitude of the risk for and expected benefit from each of the 40 risks. The DOSPERT has also had success in identifying real-world risk takers (Hanoch, Johnson, & Wilke, 2006).
Design and Procedure

The study used a 2 (release law) × 2 (weather) within-subjects
design. Participants fished in all four tournaments (conditions) and
completed the drug and alcohol questionnaire along with the
DOSPERT Scale. Participants had 30 rounds per tournament to
cast for as many red fish as they chose, earning 5¢ per red fish
caught. Intertask comparisons were facilitated by setting the ART
parameter settings to be similar to those used in the BART. In the
BART (a nonstationary environment), the total number of possible
choice trials (pumps) is typically set at n = 128 (Lejuez et al.,
2002). If the goal of participants is to maximize expected value and
they have full knowledge of the structure of the task, then the
strategy that maximizes earnings is for participants to choose the
play option on approximately 64 of the choice trials. This setting
has been found to be the best for distinguishing between real-world
risk seekers and risk avoiders (Lejuez et al., 2002). Consequently,
in the catch ‘n’ keep conditions there were n = 128 total fish, or possible choice trials. Calibrating the catch ‘n’ release conditions in the ART so that choosing play 64 times per round was the maximizing strategy (assuming the same conditions as above) was done by setting the number of fish at n = 65 (see Equation 2 with γ+ = 1). Thus, if participants had a precise
understanding of all four tasks and sought to maximize expected
value, they should make an equal number of casts in all four
tournaments.
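The calibration logic above is easy to check numerically. The sketch below assumes a risk-neutral DM who earns 5¢ per safe cast; the helper names are mine, not the paper's.

```python
def ev_catch_n_keep(c, n=128):
    # Sampling without replacement with 1 blue fish among n: the blue fish
    # is equally likely in any draw position, so
    # P(no blue in first c casts) = (n - c) / n.
    return 0.05 * c * (n - c) / n

def ev_catch_n_release(c, n=65):
    # Sampling with replacement: each cast is independently safe with
    # probability (n - 1) / n.
    return 0.05 * c * ((n - 1) / n) ** c

# Expected value peaks at 64 intended casts in both tournaments
# (64 and 65 tie exactly in the catch 'n' release case).
best_keep = max(range(1, 128), key=ev_catch_n_keep)
best_release = max(range(1, 300), key=ev_catch_n_release)
```

This is why n = 128 in catch ‘n’ keep and n = 65 in catch ‘n’ release put the expected-value-maximizing strategy at roughly 64 casts per round in all four tournaments.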
The order in which participants completed each tournament and
the risk questionnaires was counterbalanced. All eight tasks were
programmed using Sun Microsystems’ Java language and are
available upon request. The entire experiment was administered on
PC computers in separate sound-attenuated laboratory cubicles.
After reading and signing the informed consent form, participants read an introductory set of instructions on the computer.
They were told that they would be playing four different fishing
tournaments with different rules and conditions. The instructions
described the two different release laws and the two different
weather conditions they would experience. Participants were also
informed that between each fishing tournament they would fill out
questionnaires assessing their own risky behavior. No further
instructions were given about the stochastic structure of the tasks.
Next, the participants completed four practice games, one for
each tournament. During the practice games participants were able
to select how many fish they would like in the pond to practice
with (1 to 360). This manipulation was done to demonstrate that
the ponds in each of the experimental conditions could have any
number of fish so as to minimize knowledge transfer from one
tournament to the next. During the practice round participants
played each condition for two rounds, during which they cast for
red fish as many times as they chose to.
After completing the four practice games, they began the experimental session, alternating between the four questionnaires and
the four tournaments. They started with a questionnaire. Before
each tournament, participants were briefly reminded of the rules
governing the pond they were about to visit. After completing all
of the tournaments, they completed a set of questions regarding the
strategy they used to fish in the tournaments. At the conclusion of
the session, the computer produced four tables showing how much
money participants earned on each round during the four tournaments. A round from each tournament was then chosen randomly
using a bingo basket (four rounds total), and participants were paid
based on their performance during these rounds.
Results
The Results section is organized into two subsections. The first
addresses the cognitive processes used during the four different
conditions of the ART (catch ‘n’ keep, sunny; catch ‘n’ keep,
cloudy; catch ‘n’ release, sunny; and catch ‘n’ release, cloudy).
The BSR model is used in this section to examine whether participants use the same mental representation in all four conditions of
the ART, or whether they change their mental representation
according to the stochastic structure of the task. Additionally, a set
of BSR models with a probability weighting function is compared with the models without one to examine whether DMs overweight rare
events and underweight common events in sequential risk-taking
tasks. The second section examines individual differences in the
cognitive processes and their association to the risky use of drugs.
One participant did not complete the experimental session and is
not included in subsequent analyses.
Cognitive Processes During the ART
Both the average adjusted ART scores and proportion of rounds
ending in a blue fish are listed in Table 1. The adjusted ART score
is the average number of casts taken on rounds when participants
chose to stop fishing. The adjusted score is the standard behavioral
measure of performance in sequential risk-taking tasks (Lejuez et
al., 2002; Pleskac, Wallsten, Wang, & Lejuez, 2007). Hence, it is
used in all subsequent analyses at the behavioral level.
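As a concrete illustration (my own sketch, not code from the article), the adjusted score for one tournament can be computed as:

```python
def adjusted_art_score(rounds):
    # rounds: (casts, caught_blue) pairs, one per round of a tournament.
    # The adjusted score averages casts over only those rounds on which the
    # participant chose to stop, i.e., rounds not ended by the blue fish.
    stopped = [casts for casts, caught_blue in rounds if not caught_blue]
    return sum(stopped) / len(stopped)
```

For example, a participant who collected after 40 and 30 casts and caught the blue fish on one other round has an adjusted score of 35.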
Table 1
Mean (SD) Adjusted ART Scores and Percentage of Rounds
Ending With a Participant Catching a Blue Fish

                          Adjusted ART scores          % of rounds ending with a blue fish
Release law           Sunny day       Cloudy day        Sunny day      Cloudy day
Catch ‘n’ keep        38.96 (18.77)   31.59 (15.35)     .34 (.16)      .25 (.15)
Catch ‘n’ release     32.19 (15.95)   28.06 (15.22)     .38 (.14)      .35 (.14)

Note. Calculations are based on a total of 30 rounds. ART = Angling Risk Tasks.
To analyze the effect of the weather conditions and release
conditions, I conducted an ANOVA on the adjusted ART scores
where each experimental condition was a within-groups variable.
To test for order effects, two other between-groups variables were
entered: (a) the order of completing sunny and cloudy tournaments; and (b) the order of completing the release conditions. The
ANOVA utilized the full model, allowing interactions between the
individual participant and the variables of interest. The appropriate
error terms are therefore the interactions between the participant
and the variable of interest, MSPar × Variable (see Howell, 1997).
The left panel in Figure 2 shows the average adjusted ART score in the four experimental conditions.8 Participants had a tendency to cast more frequently in the catch ‘n’ keep condition than during catch ‘n’ release, F(1, 67) = 17.70, p < .01, MSPar × Release Rule = 103.84. They also made more casts in the sunny condition than in the cloudy condition, F(1, 67) = 18.18, p < .01, MSPar × Weather Condition = 121.55. Finally, there was a significant interaction between release law and weather, F(1, 67) = 6.01, p < .05, MSPar × Release Rule × Weather Condition = 35.76. As Figure 2 indicates, the change from sunny to cloudy had a larger effect in the catch ‘n’ keep condition than in catch ‘n’ release, t(67) = 2.46, p < .05.
There were no main effects related to the order in which participants completed the four fishing conditions. Participants who
completed sunny tournaments before cloudy tournaments did not
have significantly different adjusted ART scores. Nor did the order
in which they completed the different release laws affect their
casting behavior. Despite all efforts to minimize information transfer from the sunny to the cloudy condition, there were two significant order interactions. The order of completing the different release laws interacted with the weather variable, F(1, 67) = 6.00, p < .05, MSPar × Weather Condition = 121.55. Paired comparisons revealed participants did cast significantly more in sunny conditions (M = 36.7) than in cloudy (M = 27.9) when completing catch ‘n’ release conditions first, t(67) = 4.8, p < .05. But when the catch ‘n’ keep conditions were completed before catch ‘n’ release conditions, there was not a significant difference in casting behavior between sunny (M = 34.2) and cloudy (M = 31.8) conditions, t(67) = 1.3. The order of completing the weather conditions also interacted with the weather manipulation, F(1, 67) = 7.55, p < .05, MSPar × Weather Condition = 121.55. The adjusted ART scores for the cloudy conditions were significantly greater when participants completed sunny conditions first (M = 33.4) than when they completed cloudy conditions first (M = 26.4), t(67) = 3.8, p < .05. But there was not a significant difference between adjusted ART scores in the sunny conditions based on the order in which participants completed the weather conditions (M = 35.4 and 35.5, respectively). The only other significant order effect was the four-way interaction, but it was due to the two significant two-way interactions. Neither significant interaction pertaining to the order in which participants completed the four conditions influenced any
8 The mean and median number of casts made on rounds that participants stopped on are approximately equal. All conclusions drawn from analyses based on the mean, including all the ANOVAs and all the correlations, are identical to those based on the median. Consequently, only the means are reported in the text.
Figure 2. Average adjusted Angling Risk Tasks (ART) scores across participants for the four different fishing
tournaments. In the left panel, the bars represent the average adjusted ART scores and the vertical lines denote
standard errors of the mean, estimated from the MSPar × Release Rule × Weather Condition interaction. The
right panel plots the predicted average adjusted ART score calculated from the 71 sets of individual parameter
estimates using the Bayesian sequential risk-taking model with correct mental representations (see Appendix B
for the analytic solutions for predicting the adjusted ART scores from the cognitive models). Stars indicate the
average number of casts that the decision makers intended to take on all rounds (see Appendix B).
of the conclusions drawn from all subsequent analyses and will not
be considered further.
A final observation concerning the adjusted ART score is that
choice behavior in the ART conditions appears relatively consistent with that of the BART. The average adjusted ART scores for
all four conditions were near or slightly greater than the adjusted
score typically observed with the BART (M ⫽ 30.3; N ⫽ 448;
SE ⫽ 2.2; 95% confidence interval ⫽ 26.0 ⬍ score ⬍ 34.6; see
Aklin, Lejuez, Zvolensky, Kahler, & Gwadz, 2005; Crowley,
Raymond, Mikulich-Gilbertson, Thompson, & Lejuez, 2006;
Jones & Lejuez, 2005; Lejuez, Aklin, Bornovalova, & Moolchan,
2005; Lejuez, Aklin, Jones, et al., 2003; Lejuez, Aklin, Zvolensky,
& Pedulla, 2003; Lejuez et al., 2002). Next I use the two versions
of the BSR model to test whether respondents use the same mental
representation in all four conditions or whether they adapt their
mental representation to the different conditions.
BSR Analyses
Two BSR models were fit to the data from all four ART
conditions at the individual level. Both predict the probability of
casting, rh,i, during round h on trial i. The first model—the best
fitting model for the BART—tests the hypothesis that participants
believe a stationary stochastic process governs the task. For cloudy
conditions it has one evaluation parameter, two response parameters, and two learning parameters. In sunny conditions there is no
learning and q̂h in Equation 2 is set to the observed parameters of
the task (e.g., 64/65 in catch ‘n’ release). The second model tests
whether participants mentally represent the structure as nonstationary. It has the same allocation of free parameters, and in sunny
conditions n̂h is set to the observed number of fish (e.g., 128 in
catch ‘n’ keep). In addition, a baseline statistical model was
estimated from the data (see Wallsten et al., 2005, for details). It
has one parameter, the probability of casting on trial i. It assumes
no cognitive processes and serves as an initial test for the BSR
models.
Model estimation and comparison. The models were fit to
each individual’s choice data for all rounds from each fishing
tournament using maximum likelihood estimation methods described in Appendix A. The data have a substantial number of
observations per participant, but the precise number of independent observations depends on the intercorrelation between trials
(captured within the model) and differs for each individual. As a
rough estimate, the number of independent observations for each
tournament ranged between 30 rounds and the total number of
trials on which participants actually chose between playing and
stopping, which was on average 840.4 trials per person per tournament.
Standard maximum likelihood ratio tests are not available to
evaluate and compare model fits due to the nonnested nature of the
models. Instead descriptive-level comparisons were made with the
Bayesian information criterion (BIC; Schwarz, 1978; Wasserman,
2000). The BIC is one method to compare the fit of nonnested
models with different numbers of parameters, where
BIC = [−2 × ML] + [k × log(N)].   (7)
In the expression, ML is the maximum log-likelihood of the data
given the model, k is the number of parameters in the model, and
N is the number of independent observations that contribute to the likelihood. The second bracketed term in Equation 7 penalizes model complexity (i.e., the more parameters, the more complex the model). The model with the lowest BIC is chosen as the best
fitting model. As a very liberal estimate of N, I used the total
number of choices a participant made in a given tournament. This
estimate puts the greatest handicap on models with more parameters.
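Equation 7 is simple to apply directly. The log-likelihoods and parameter counts below are made-up illustrations, not fits from the article; they only show how the penalty term can overturn a raw fit advantage.

```python
import math

def bic(max_loglik, k, n):
    # Equation 7: BIC = -2 * ML + k * log(N), where ML is the maximum
    # log-likelihood, k the parameter count, N the independent observations.
    return -2.0 * max_loglik + k * math.log(n)

# A 1-parameter baseline with a slightly worse fit can still beat a
# 5-parameter rival once the complexity penalty is charged (N = 840,
# roughly the average number of choices per tournament in the text):
simple = bic(-100.0, k=1, n=840)
flexible = bic(-97.0, k=5, n=840)
```

Here the simpler model wins (lower BIC) even though its log-likelihood is worse, which is exactly the handicap the text describes for models with more parameters.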
Table 2 shows how many participants were best fit by each
model in the two sunny tournaments using BIC. Recall that no learning parameters for the BSR models were fit in these conditions, and the number of free parameters reflects this. The model
fits in the sunny tournaments also reveal that the models with
congruent representations fit the data better than does the statistical
baseline model.
However, as is apparent in Table 2, in the sunny tournaments the
BSR model that assumes participants have an incorrect mental representation of the task could not fit the data. Here is why. Recall that
in the well-defined sunny tournaments participants knew how many
fish were in the pond. Consequently, in the catch ‘n’ release tournaments the model assuming a nonstationary representation would set
the value of n̂h to 65 in Equation 6 for all 30 rounds (the number of
fish in the pond at the beginning of the round). With this setting, the
model does not predict that DMs would make more than 65 casts on
a given round. That is because in their sample-without-replacement
model they would have caught all 65 fish in the pond, leaving none in
the pond. Yet, a third of the participants in the sunny catch ‘n’ release
tournaments cast more than 65 times for at least one round. For the
remaining participants the evaluation parameter would have to reflect extreme sensitivity to payoffs (γ+ > 1.5 in Equation 6) to capture their data—an estimate that is highly unlikely and inconsistent with the decision theory literature (e.g., Tversky & Kahneman, 1992).
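The hard cap that a sampling-without-replacement representation imposes is easy to see numerically. This is my own sketch under the same assumptions as the text (1 blue fish among n, sunny catch ‘n’ release with n = 65):

```python
def p_safe_without_replacement(c, n=65):
    # P(the single blue fish is not among the first c of n draws) = (n - c) / n,
    # clipped at zero once the pond would be exhausted.
    return max(n - c, 0) / n

# After 64 safe casts only the blue fish remains, so the probability of
# surviving a 65th cast is exactly zero; the model therefore assigns zero
# likelihood to any round with more than 65 casts.
```

This is why the nonstationary model cannot accommodate the third of participants who cast more than 65 times on at least one sunny catch ‘n’ release round.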
The same issue arises when fitting the model that posits a
stationary representation to the data from the sunny catch ‘n’ keep
condition. Here q̂h would be set to 127/128 in Equation 2. Using
this value, all participants would appear incredibly insensitive to outcomes (γ+ < 0.3)—an estimate that is quite incongruent with
the literature. Thus, these results paired with the superior performance of the BSR model over the baseline model offer preliminary
support that in nonlearning conditions DMs can correctly adapt to
and represent the probabilistic structure of the sequential task they
face. This result is in contrast to the hypothesis derived from the
BART that DMs use the same representation in both conditions.
The results from the cloudy tournaments support the same
conclusion of correct mental representations. Table 3 shows that
both BSR models fit the data better than do the baseline statistical
models in the catch ‘n’ keep and catch ‘n’ release conditions.
Moreover, the models that assume participants correctly represent
and learn about the task structure best fit a majority of participants
in both conditions. That is, the stationary model fits best in catch
‘n’ release, and the nonstationary model fits best in catch ‘n’ keep. These
results are in contrast with the BART results, where the model
Table 2
Model Comparisons of the Sunny Tournaments

                                            Catch ‘n’ keep                 Catch ‘n’ release
Model                               df   Mean BIC   No. of DMs         Mean BIC   No. of DMs
                                                    best fit with BIC             best fit with BIC
Baseline                             1    192.16          4             176.15          1
Stationary mental representation     3       –            –             145.53         70
Nonstationary mental representation  3    156.86         67                –            –

Note. During sunny conditions, only the Bayesian sequential risk-taking models that assumed the correct stochastic process could be fit to the respective conditions. BIC = Bayesian information criterion; DM = decision maker.
Table 3
Model Comparisons of the Cloudy Tournaments
Catch ‘n’ keep
Catch ‘n’ release
Model
df
Mean
BIC
Number of
DMs best
fit with
BIC
Baseline
Stationary mental
representation
Nonstationary mental
representation
1
206.00
4