= Graduate Research Day - February 11, 2021 =
 
Graduate Research Day 2021 will be held on the [https://www.airmeet.com Airmeet.com] platform. <br>
The link to the conference is: 
 
 
== Schedule ==
=== 9:30-10:05 AM Keynote 1: Dr. Mayu Nishimura ===
* Title
=== 10:05-10:50 AM Symposium 1 ===
==== Tovah Kashetsky ====
<p style="margin-left:40px">Effect of Experience on Collective Decision-Making and Social Organization <br>
''Tovah Kashetsky, Grant Doering, and Reuven Dukas''<br></p>
 
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Expertise built from experience allows individuals to perform significantly better than novices on a complex task. Social groups can also demonstrate expertise. Within social groups, collective decision-making is crucial for maintaining cohesion, but it is unknown whether a group’s collective decision-making skills can improve with experience. To investigate this, we tested whether repeated experience with choosing between multiple nests during emigration in house-hunting ants (Temnothorax ambiguus) would improve the speed and efficiency with which colonies reach consensus. We hypothesize that experience with decision-making would improve colony performance on future decisions. We first ran preliminary experiments to quantify nest features that colonies prefer in order to construct artificial nests of varying attractiveness. We will provide 20 colonies experience with a choice between a good- and poor-quality nest during emigration, and 20 colonies with no choice during emigration (a single nest). Lastly, we will test all colonies to decide between a good- and poor-quality nest during a final emigration. So far, we found that colonies with experience decision-making do indeed appear to be faster and more efficient at decisions than colonies without experience decision-making. We will also run a social network analysis on 3 colonies from both groups to examine temporal changes in social organization. This will provide us with a mechanistic explanation for how improvements in collective decision-making arise from the actions of individuals. Studying decision-making in ants will allow us to achieve an improved understanding of the development and mechanisms behind expertise.</p>
 
==== Hanna Haponenko ====
<p style="margin-left:40px"> Depth-specific IOR effect when attention shifts from far to near space relative to viewer <br>
''Hanna Haponenko, Hong Jin Sun <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px">
Inhibition of return (IOR) is a phenomenon in which responses to a peripheral target are delayed when the target appears at a previously cued location more than 300 ms after the cue. IOR has been shown extensively to operate in 2D scenes. It is not fully understood whether IOR is determined by the relative location of cue and target in retinal coordinates or in world coordinates. This question can be studied by examining IOR in 3D scenes. We compared IOR when cues and targets appeared at the same or different depth planes, with depth information provided by monocular cues. When the cue and target appeared at different depths, a vertical offset was created on-screen, a potential confound with depth. We removed the contribution of this confound by contrasting the 3D condition with a 2D control condition that matched cue and target positions but removed all context simulating 3D space. Results showed that IOR magnitude decreased in the different-depth condition compared to the same-depth condition in 3D displays. IOR magnitude also decreased as a function of vertical offset in the corresponding 2D displays. Most importantly, this magnitude reduction was larger in the 3D displays than in the 2D displays, but only when the difference in depth was caused by the target appearing at a nearer position than the cue. We have thus identified a depth-specific IOR effect in a setting consisting strictly of monocular depth cues, which occurs only when attention shifts from far to near space relative to the viewer. </p>
 
==== Joanna Spyra ====
<p style="margin-left:40px;">
Memory for global musical structures: Dissecting musical features for their contribution to memory for nonadjacent tonal centers <br>
''Joanna Spyra & Dr. Matthew Woolhouse'' <br> </p>
 
<p style="margin-left:40px;text-indent:40px;font-size:12px"> Memory for musical keys is exceptionally poor. Studies have found that participants maintain a memory for key for only 11-20 seconds after key-change occurs. But music is a complex stimulus with many features; how do these features, such as rhythmical activity or timbre, contribute to the maintenance of memory for past musical sequences? In the Digital Music Lab, we employ a paradigm called “nonadjacent key relationships” to tease apart these musical features and examine their unique effects on memory for key. This paradigm divides stimuli into three sections: (1) a key-defining nonadjacent section, (2) an intervening section in a different key, and (3) a probe cadence either in the original key or in a third key (forming an ABA or CBA relationship between the three sections). Participants are asked to rate the probe for its goodness-of-completion, the idea being that if a memory for the original key remains—despite intervening information—participants will rate the ABA condition higher than the CBA condition. Using this as a baseline, we can manipulate various musical features and compare the strength of completion ratings. If a feature boosts memory, goodness-of-completion should receive a similar boost when compared to CBA conditions. Indeed, this is a pattern we found in many musical features. Results confirm that though memory for key itself may be weak, it is supported by common features we use in music composition every day. </p>
 
=== 10:50-11:00 AM Break 1 ===
=== 11:00-11:45 AM Symposium 2 ===
==== Emily Wood ====
 
<p style="margin-left:40px"> Body sway reflects nonverbal communication in a string quartet learning to play unfamiliar music together <br>
''Emily Wood, Dobri Dotov, Andrew Chang, Dan Bosnyak, Lucas Klein & Laurel Trainor <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Ensemble musicians must anticipate their partners’ actions to coordinate playing a piece together. To achieve this, musicians attend to sensorimotor signals embedded in their partners’ body sway movements. Indeed, a musician’s body sway movements reveal their upcoming intent regarding phrasing, tempo, and dynamics, which helps their partners anticipate how and when to play next. We have previously measured the body sway of expert musicians in small ensembles with motion capture, and used Granger Causality (GC) to calculate bidirectional influence, or information flow, between the body sway of each musician in the ensemble. We showed that information flow was greater from assigned leaders to assigned followers than vice versa, and that group information flow was greater when musicians played with emotional expression than without. Here, we show how information flow changes in an ensemble that learns to play unfamiliar music together. A professional string quartet came into the LIVELab and played two unfamiliar pieces of music together eight times in succession while body sway motion data was recorded. Linear mixed effect modelling showed that information flow within the group decreased significantly across trials for both pieces, suggesting that musicians relied on body sway to help them play together when the pieces were most novel (trial 1), but this reliance decreased as they gained familiarity with playing the pieces together. We are currently completing cross-correlation analyses to examine how the similarity of group body sway movements changes across trials. Overall, our studies show that body sway reflects nonverbal communication in musical ensembles. </p>
 
==== Wei (Vivian) Fang ====
<p style="margin-left:40px"> Dominance pulls faces closer <br>
''Wei Fang, Cristina I. Galusca, Zhe Wang, Yu-Hao Sun, Olivier Pascalis, Naiqi G. Xiao <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Perceived social traits, such as dominance and trustworthiness, affect other people’s behaviors. While the impact of social traits has been consistently found in high-level cognitive processing, it is unclear whether social traits also modulates perceptual processing of faces. To this end, we investigated how facial dominance affects the perceived distance of faces.
 
We used an implicit but highly robust perceptual illusion to measure the perceived distance: when two identical faces are presented vertically (one above the other), relative to the top face, the bottom one appears closer. Observers exhibit a strong bias to indicate the bottom face is bigger.<br>
 
We examined how facial dominance influences the perceived distance with a set of computer-generated Dominant and Submissive faces. If facial dominance makes faces perceived closer, participants will likely report the bottom one is bigger. To probe the generality of this effect, we tested this effect in Canada, China, and France (N = 30/country) with faces from three races (African, Asian, and Caucasian).<br>
 
Across the three countries, participants showed a significant bias in choosing the bottom face as the bigger (Mean bottom responses = 72.03%, p < .001), replicating the illusion. Moreover, Dominant faces led to a stronger illusion than Submissive faces (p = .009), suggesting that facial dominance led faces to be perceived closer. No effect of face race or country were found.<br>
 
As facial dominance is often associated with negative signals, our finding suggests an evolutional mechanism in the visual system, which amplifies dangerous signals in the environment.<br> </p>
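For illustration only (not the presenters' analysis code), the sketch below shows how a bottom-bigger bias and a Dominant-versus-Submissive difference could be tested; all counts and proportions are invented.
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import binomtest, ttest_rel

# Hypothetical pooled counts: trials on which the bottom face was chosen as bigger.
n_bottom, n_trials = 860, 1200
print(binomtest(n_bottom, n_trials, p=0.5, alternative="greater").pvalue)

# Hypothetical per-participant proportions of "bottom is bigger" responses,
# separately for Dominant-face and Submissive-face trials.
dominant   = np.array([0.80, 0.74, 0.77, 0.69, 0.83, 0.71])
submissive = np.array([0.72, 0.70, 0.73, 0.66, 0.75, 0.68])
print(ttest_rel(dominant, submissive).pvalue)  # is the illusion stronger for Dominant faces?
</syntaxhighlight>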
 
==== Hannah M. Anderson ====
<p style="margin-left:40px"> Variation and correlations of behavioral lateralization <br>
''Hannah M. Anderson, David N. Fisher, Brendan L. McEwen, Justin Yeager, Jonathan N. Pruitt, James B. Barnett
<br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Sensory and behavioral laterality, or “handedness,” is widespread across the animal kingdom and is thought to increase neural efficiency and dual information processing. Historically, research on behavioral lateralization has focused on among-individual variation, but the importance of within-individual variation in behavior is being increasingly acknowledged. Among-individual laterality correlations can indicate both neural multitasking or linkage of stimuli and/or behaviors; however, within-individual correlations of lateralization have yet to be explored experimentally. We adopted a multivariate approach to investigate lateralization at both the population and individual level in two species of terrestrial frog: the poison frog Ameerega bilinguis and their Batesian mimic Allobates zaparo. In contrast to other research on the subject we found no evidence for among-individual correlations but did find evidence for within-individual correlations, a previously unexplored form of lateralization. We discuss possible meanings for these results and their broader implications to both lateralization and broader behavioral research. </p>
 
=== 1:00-2:00 PM Symposium 3 ===
==== Leigh Greenberg ====
<p style="margin-left:40px"> Visualizing critical information for the perception of androgyny
<br>
''Leigh Greenberg, Patrick J. Bennett, Allison B. Sekuler
<br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Androgynous face stimuli typically are generated by morphing strongly masculine and strongly feminine faces, based on an assumption that androgynous faces are equally masculine and feminine. Our past work challenged that assumption, finding that faces could be perceived simultaneously as androgynous and strongly gendered. The current study uses a reverse correlation technique (Dotsch & Todorov, 2011) to examine the stimulus characteristics that make a face look more or less androgynous. Observers viewed a pair of male or female faces embedded in Gaussian white noise and chose the face that appeared more androgynous. The two noise fields varied across trials, but were anti-correlated within each trial. Noise fields were sorted based on observer responses and averaged to create a Classification Image (CI) and antiCI. Preliminary results showed that the spatial structure in the CI was related to perceived androgyny: when the CI and antiCI were added to the base face images, the base+CI was clearly more androgynous than the base alone or the base+antiCI. Currently, we are investigating the similarity of CIs obtained from different observers and from different base male and female faces. To verify our findings, we plan to have new observers view CI and antiCI pairs and judge which is more androgynous. We also plan to use this technique to create additional CIs that represent concepts related to androgyny, such as masculinity and femininity. The results of these studies will shed light on our understanding of how the visual system processes face gender information.
</p>
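As a rough illustration of the reverse-correlation method described above (not the authors' implementation), the sketch below builds a classification image by averaging the noise fields an observer chose; the image size, trial count, and simulated responses are assumptions.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
size = (128, 128)   # resolution of each Gaussian noise field (assumed)
n_trials = 500      # number of two-alternative trials (assumed)

chosen = []
for _ in range(n_trials):
    noise = rng.standard_normal(size)
    # On each trial the observer sees base+noise and base-noise (anti-correlated fields)
    # and picks the more androgynous-looking face; here the choice is simulated at random.
    chose_positive = rng.integers(0, 2) == 0
    chosen.append(noise if chose_positive else -noise)

ci = np.mean(chosen, axis=0)   # classification image: average of the chosen noise fields
anti_ci = -ci                  # antiCI: the mirror image when the noise is perfectly anti-correlated
</syntaxhighlight>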
 
==== Jesse Pazdera ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Brendan McEwen ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Elizabeth Phillips ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
=== 2:00-2:45 PM Poster Session ===
==== Rachael Finnerty ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Cindy Tran ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Seyedbehrad Dehnadi ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Konrad Swierczek ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Emma Marsden ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Carly McIntyre-Wood ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Peter Najdzionek ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Maya Flannery ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
==== Jiali Song ====
<p style="margin-left:40px"> Title <br>
''Authors <br>'' </p>
<p style="margin-left:40px;text-indent:40px;font-size:13px"> Abstract Text </p>
 
=== 3:00-3:30 PM Lightning Talks ===
# Vidhi Patel
# Lucas Klein
# Jamie Cochrane
# Janice Yan
# James Mirabelli
 
== Prizes ==
Top three:
*Oral Presentation: $75
*Poster Presentation: $75
*Lightning Talks: $50
<!-----Most Popular Meme will be featured on this page!!----->
Winners to be announced...
