"Time is the substance I am made of.
Time is a river which sweeps me along, but I am the river."

— Borges


The body, long cast aside in academia, is now understood as integrally connected to cognition and learning. The upshot? The unfolding of the beautiful and intricate world of gesture and embodied learning. In my research, I study how students, through gesture, become what they study: they move as inanimate things and experience them from the inside. The Borges quote above, though (admittedly) metaphysical, offers a helpful analogy: the human body, in combination with language and play, is a wide-open canvas for representing things in the world. Movement becomes at once a tool for thought and the very thing we are. When we represent someone or something in gesture, no matter how strange, long ago, or far away, we are still our own bodies.

Science Through Technology-Enhanced Play

The Science Through Technology-Enhanced Play (STEP) project, conceived and directed by Noel Enyedy (UCLA) and Joshua Danish (Indiana University), creates an imaginative space for first- and second-grade students to playfully explore science through collective body movement within a mixed-reality learning environment. In past projects, children participating in STEP have played as microscopic particles changing states of matter (e.g., solid to liquid), bees collecting nectar, and inanimate objects exposed to different forces and friction. In each case, children adopt a first-person perspective, using their bodies as stand-ins for parts of the system. Our research addresses how students' joyful creativity and inquiry during play stretch across the body, the environment, multiple roles, and stories.

DeLiema, D., Enyedy, N., & Danish, J. A. (submitted). How play and games structure learning in different ways: A comparison of two collaborative, mixed-reality learning environments. 

Enyedy, N., Danish, J. A., & DeLiema, D. (2015). Constructing liminal blends in a collaborative augmented-reality learning environment. International Journal of Computer-Supported Collaborative Learning, 10(1), 7-34.

Enyedy, N., Danish, J. A., & DeLiema, D. (2013, June). Constructing and deconstructing materially-anchored conceptual blends in an augmented reality collaborative learning environment. In Proceedings of the Conference on Computer Supported Collaborative Learning, Madison, Wisconsin.

Multiple Viewpoints and Spatial Congruence

In this series of studies, I explore how students construct viewpointed gestural models of packet switching, the technology behind information transfer on the Internet. Here, a student, acting as a computer router, throws a receipt and then catches it with a different hand as if she were a different computer, all the while organizing the action meticulously in space.
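For readers unfamiliar with the underlying technology, a minimal sketch of the packet-switching idea the students enact (hypothetical names and routes; not part of the study materials): a message is split into sequence-numbered packets, each packet is forwarded hop by hop through routers, and the destination reorders and reassembles them even when they arrive out of order.

```python
def send_message(message, route, packet_size=4):
    """Illustrate packet switching: split, forward hop by hop, reassemble."""
    # Split the message into fixed-size packets tagged with sequence numbers,
    # much as each thrown receipt stands in for one piece of a larger message.
    packets = [(seq, message[i:i + packet_size])
               for seq, i in enumerate(range(0, len(message), packet_size))]

    # Simulate out-of-order arrival: each packet still traverses every
    # router on its route (store-and-forward), but arrival order differs.
    delivered = []
    for seq, payload in reversed(packets):
        for router in route:
            pass  # a real router would read the header and forward the packet
        delivered.append((seq, payload))

    # The destination reorders packets by sequence number and reassembles.
    delivered.sort(key=lambda p: p[0])
    return "".join(payload for _, payload in delivered)

print(send_message("Hello, Internet!", route=["router A", "router B"]))
```

The sequence numbers are what make the student's spatial organization meaningful: because each packet carries its place in the whole, the receiving "computer" can rebuild the message no matter the order in which the pieces arrive.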

DeLiema, D., Enyedy, N., Iacoboni, M., & Steen, F. F. (submitted). Blending viewpoint and space: Gestures in conceptual integration and learning.

DeLiema, D., & Steen, F. F. (2014). Thinking with the body: Conceptual integration through gesture in multiviewpoint model construction. In M. Borkent, B. Dancygier, & J. Hinnell (Eds.), Language and the Creative Mind (pp. 275-294). Stanford, CA: CSLI Publications.

DeLiema, D., & Steen, F. F. (2012, April). The evolution of gestural blends around learning a new technical system. Paper presented at the Eleventh Conceptual Structures and Discourse in Language Conference, Vancouver, Canada.