Video for CHI’2020: Adults’ and Children’s Mental Models for Gestural Interactions with Interactive Spherical Displays

Since I was not able to present our paper (Adults’ and Children’s Mental Models for Gestural Interactions with Interactive Spherical Displays) at CHI’2020 due to COVID-19 restrictions, I prepared a remote video presentation which can be watched here.

Please email us if you have any questions about this work. I hope to see everyone at CHI 2021!

Posted in Uncategorized | Leave a comment

Opening Up Access to TIDESS Research Data

I began working for the TIDESS project in March of 2019, the beginning of my first spring semester at UF. As a freshman in both school and research, I have spent much of my time compiling and structuring how we will share the data underlying our findings with both the education and computer science communities.

Our first internal goal was to have a single location where all of the data from our research participants – videos, audio files, transcripts, etc. – could be accessed with ease. My first step in data management was assembling an Excel spreadsheet detailing all of the files we had for each study. I created a column for each piece of information someone looking for the data would want to know, such as the file names, how many copies there were, the dates they were uploaded, etc. This gives anyone interested in learning more about the project one centralized place to find the exact information they are looking for, similar to a book’s index.

Once all of this was compiled, it was time to think about how our data should be shared and through what platform. This is when Dr. Stofer pointed me to the Open Science Framework (https://osf.io/), a platform made specifically for publishing project data that allows both public and private uploads and easy access for collaborators and other interested people. With the Open Science Framework, or OSF, we were able to create a private project that holds the video files, audio files, and transcripts until they are released for public view. We also use this feature internally: I was able to store files that will never be shared, such as videos showing a participant’s face, in a permanently private folder to keep participants’ privacy secure. This was incredibly helpful, as it allowed me to upload the data at different times without the risk of someone stumbling upon an incomplete set. It also let me organize the data logically by creating a separate project file within the overarching TIDESS project (https://osf.io/rjzkh), giving individuals access to museum deployment data, tabletop study data, and so on. Furthermore, I added subsections for each data type (video, audio, etc.), ensuring that each piece of uploaded data is easy to find and access. After all of the data has been uploaded and reviewed by the team, we plan to make the project public and available on the site so that our work can be referenced at any time.

To say that this was a daunting task at first would be an understatement, given how critical it is that all of the posted information be accurate and easily accessible. However, the more familiar I became with the platform, the less intimidating the job became. At first, I viewed the release of data as the last step in a research project, so I assumed that everything I posted would be the only representation of the study; this assumption was the cause of my stress. Once I spent more time with the project, I realized there were many opportunities for revision and feedback, which made me far more comfortable with the task.

As I stated earlier, this is my first research project. While I was a first year undergraduate student when I began working with TIDESS, I am now in my second year at the University of Florida, studying computer science. Being a part of the team has been such an incredible learning experience, and I am excited to dive deeper into the development side of research.

TIDESS Museum Learning Project Update: CHI 2020 Paper Accepted!

Our paper analyzing users’ gestural interaction mental models for multi-touch spherical displays, titled “Adults’ and Children’s Mental Models for Gestural Interactions with Interactive Spherical Displays,” was accepted to ACM SIGCHI 2020, a top conference for human-computer interaction! We previously reported in our PerDis’19 paper [1] a comparison of patterns in the physical characteristics (e.g., hand pose, number of fingers) of children’s and adults’ gestures on interactive spherical displays to those on flatscreen displays. In this CHI’20 paper, we analyzed think-aloud data from the same gesture elicitation study [1] to understand what differences may exist in children’s and adults’ gestural interaction mental models for spherical and flatscreen tabletop displays. During the gesture elicitation study, which we conducted in Summer and Fall 2018, we asked children (ages 7-11) and adults to suggest touchscreen gestures for different tasks on a multi-touch spherical display. To help us understand the underlying mental models that drive users’ interactions with spherical displays, we also asked users to think aloud while suggesting gestures. The CHI’20 paper reports our new understanding of users’ mental models for interacting with spherical displays.

Here is the abstract:

“Interactive spherical displays offer numerous opportunities for engagement and education in public settings. Prior work established that users’ touch-gesture patterns on spherical displays differ from those on flatscreen tabletops, and speculated that these differences stem from dissimilarity in how users conceptualize interactions with these two form factors. We analyzed think-aloud data collected during a gesture elicitation study to understand adults’ and children’s (ages 7 to 11) conceptual models of interaction with spherical displays and compared them to conceptual models of interaction with tabletop displays from prior work. Our findings confirm that the form factor strongly influenced users’ mental models of interaction with the sphere. For example, participants conceptualized that the spherical display would respond to gestures in a similar way as real-world spherical objects like physical globes. Our work contributes new understanding of how users draw upon the perceived affordances of the sphere as well as prior touchscreen experience during their interactions.”

Interested readers can find the camera-ready version (preprint) available here. The CHI 2020 conference will take place in Honolulu, Hawaiʻi from April 25 – April 30. I am really excited since this is my first first-author paper at CHI. I am looking forward to presenting our paper at the conference, as it will help us gain valuable feedback from the CHI community on our current and planned work designing interactions for multi-touch spherical displays.

[1] Soni, N., Gleaves, S., Neff, H., Morrison-Smith, S., Esmaeili, S., Mayne, I., Bapat, S., Schuman, C., Stofer, K.A., and Anthony, L. 2019. Do User-Defined Gestures for Flatscreens Generalize to Interactive Spherical Displays for Adults and Children? Proceedings of the International Symposium on Pervasive Displays (PerDis 2019), Palermo, Italy, June 12-14, Article No. 24, 7 pages.

Workshop paper at CSCL 2019

Check out our recent blog post about a CSCL workshop paper presented by a TIDESS team member, Nikita Soni, on the INIT Lab website!

PerDis 2019 Paper Accepted!

In addition to our main TIDESS project research questions of understanding how to design more effective learning environments on spherical displays [link], we have also been studying some more specific human-computer interaction research questions regarding interacting with these displays. In Summer and Fall 2018, we conducted a user-defined gesture elicitation study [1], in which we asked children (ages 7-11) and adults to propose touchscreen gestures for different tasks on the sphere, to help us understand their gesture preferences for touch-enabled spherical displays. Our paper on this study, titled “Do User-Defined Gestures for Flatscreens Generalize to Interactive Spherical Displays for Adults and Children?,” was accepted as a full paper to the International Symposium on Pervasive Displays (PerDis 2019). The paper reports our preliminary findings on the types of gestures children and adults find intuitive on spherical displays as opposed to flatscreen tabletop displays, as well as similarities and differences in children’s and adults’ gesture preferences for touch-driven spherical displays.

Here is the abstract:

“Interactive spherical displays offer unique opportunities for engagement in public spaces. Research on flatscreen tabletop displays has mapped the gesture design space and compared gestures created by adults and children. However, it is not clear if the findings from these prior studies can be directly applied to spherical displays. To investigate this question, we conducted a user-defined gestures study to understand the gesture preferences of adults and children (ages 7 to 11) for spherical displays. We compare the physical characteristics of the gestures performed on the spherical display to gestures on tabletop displays from prior work. We found that the spherical form factor influenced users’ gesture design decisions. For example, users were more likely to perform multi-finger or whole-handed gestures on the sphere than in prior work on tabletop displays. Our findings will inform the design of interactive applications for spherical displays.”

Interested readers can find the camera-ready version (preprint) available here. The PerDis 2019 conference will take place in Palermo, Italy from June 12 – June 14. I am really excited since this will be my first time attending PerDis, and my first official conference paper presentation. I am looking forward to presenting our paper at the conference as it will help us get feedback regarding our planned future work on designing for touchscreen spherical displays.

[1] Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson. 2009. User-Defined Gestures for Surface Computing. In Proceedings of the Conference on Human Factors in Computing Systems (CHI ’09), 1083–1092.

Iterating on Our Sphere Prototype – TIDESS Museum Deployment

As part of our ongoing studies with the PufferSphere spherical interface, the TIDESS team has decided to create a prototype that implements many of the same features present in our tabletop prototype, which we’ve discussed previously. This will allow us to directly compare the two platforms and see how the spherical nature of the interface affects how users interact with and learn from the device.

One of the primary additions we had to build for this prototype was a new gesture library: a set of gestures that our sphere will recognize and that will trigger relevant actions. Our tabletop prototype was developed using the built-in GestureWorks gesture library to allow users to manipulate objects in the prototype. However, the PufferSphere PufferPrime API does not currently include a gesture library for us to use. This means that, in its default state, the sphere does not allow objects to be dragged and does not support other basic gestures (zooming, swiping, long-tap). These gestures were all supported by the tabletop version of the prototype, so we chose them as the four main gestures for our gesture library.

To support the same set of gestures available on the tabletop, we defined each gesture in the following way:
• Drag – user moves an object while maintaining contact the entire time
• Swipe – user moves an object, and after they release contact the object continues to move
• Zoom – user enlarges an area by pinching outwards (from two contact points)
• Long-Tap – user holds their finger(s) in one place for an extended period

To create this gesture library, we looked at existing gesture libraries for flatscreen interfaces (including the one we used for our tabletop prototype) and determined how we could implement our own versions of these gestures on the spherical display. For example, to implement the long-tap gesture, we added a timer to measure how long a user had kept their finger in one area, and a radius to define that area. If the timer reached a certain threshold and the user had not moved their finger outside the radius, then we considered that a long-tap.
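The timer-plus-radius logic can be sketched roughly as follows. This is a minimal illustration in Python, not the actual prototype code (which was built against the PufferPrime API); the class name and threshold values here are hypothetical.

```python
import math
import time

# Hypothetical thresholds -- the actual values tuned for the sphere are not given here.
LONG_TAP_DURATION = 0.8   # seconds the finger must stay down
LONG_TAP_RADIUS = 20.0    # maximum movement allowed, in touch-surface units

class LongTapDetector:
    """Flags a long-tap when a touch stays within a small radius long enough."""

    def __init__(self, duration=LONG_TAP_DURATION, radius=LONG_TAP_RADIUS):
        self.duration = duration
        self.radius = radius
        self.start_time = None
        self.start_pos = None

    def touch_down(self, x, y, t=None):
        # Start the timer and remember where the touch began.
        self.start_time = time.monotonic() if t is None else t
        self.start_pos = (x, y)

    def touch_move(self, x, y):
        # Moving outside the radius cancels the long-tap candidate.
        if self.start_pos is None:
            return
        dx = x - self.start_pos[0]
        dy = y - self.start_pos[1]
        if math.hypot(dx, dy) > self.radius:
            self.start_time = None
            self.start_pos = None

    def is_long_tap(self, t=None):
        # True once the finger has stayed within the radius past the threshold.
        if self.start_time is None:
            return False
        now = time.monotonic() if t is None else t
        return now - self.start_time >= self.duration
```

The same pattern generalizes to the other gestures: drag and swipe, for instance, can be distinguished by whether the touch is still down when the object should keep moving.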

We tested this prototype at the Florida Museum of Natural History as a part of an unstructured study, where we recorded people naturally interacting with the sphere. The sphere was deployed for a week, during which we recorded participants’ interactions with the sphere through audio, video, and touch logs.

As a third-year undergraduate at the University of Florida, I have been able to work with many new and innovative technologies during my involvement on the TIDESS project, such as the PufferSphere spherical display. Developing applications for the sphere has allowed me to further improve my programming skills, especially my ability to work in a large, existing codebase. When we analyze the data, I look forward to seeing how users interacted with the prototype that we developed!

CHI 2019 Late Breaking Work accepted!

In spring 2018, we conducted an exploratory study in which we asked groups of children (ages 8-13) and adults to interact with spherical displays in a real-world setting to help us understand how to design effective interactions for these novel technologies. Our paper on this study, titled “Towards Understanding Interactions with Multi-Touch Spherical Displays,” was accepted as a late-breaking work to ACM SIGCHI 2019, a top conference for human-computer interaction! The paper reports our preliminary findings related to similarities and differences between children’s and adults’ interaction patterns and mental models for interaction around a sphere in a public setting.

Here is the abstract:

“Interactive spherical displays offer unique educational and entertainment opportunities for both children and adults in public spaces. However, designing interfaces for spherical displays remains difficult because we do not yet fully understand how users naturally interact with and collaborate around spherical displays. This paper reports current progress on a project to understand how children (ages 8 to 13) and adults interact with spherical displays in a real-world setting. Our initial data gathering includes an exploratory study in which children and adults interacted with a prototype application on a spherical display in small groups in a public setting. We observed that child groups tended to interact more independently around the spherical display, whereas adult groups interacted with the sphere in a driver-navigator mode and did not tend to walk around the sphere. This work will lay the groundwork for future research into designing interactive applications for spherical displays tailored towards users of all age groups.”

Interested readers can find the camera-ready version (preprint) available here. The CHI 2019 conference will take place in Glasgow, UK from May 4 – May 9. I am really excited since this will be my first time attending CHI. I am looking forward to presenting our poster at the conference, as it will help us gain valuable feedback on the next steps for this work. I would also like to thank ACM Student Travel Grants for providing me partial funding to attend the conference.

CSCL 2019 paper accepted!

In a recent post, we discussed that the TIDESS team was working on analyzing data from our tabletop lab study to understand the role of touchscreen gestures in facilitating collaborative learning around large interactive tabletop touchscreen displays. We are pleased to announce that our paper, “Analysis of Touchscreen Interactive Gestures During Embodied Cognition in Collaborative Tabletop Science Learning Experiences,” presenting findings from this analysis, was accepted to CSCL 2019: the International Conference of Computer-Supported Collaborative Learning. The paper outlines themes with respect to the types of direct-touch gestures that support collaborative learning experiences around interactive tabletops, and also proposes touchscreen interaction design guidelines to inform the design of future tabletop experiences for science learning. The abstract of the paper is as follows:

“Previous work has used embodied cognition as a theoretical framework to inform the design of large touchscreen interfaces for learning. We seek to understand how specific gestural interactions may be tied to particular instances of learning supported by embodiment. To help us investigate this question, we built a tabletop prototype that facilitates collaborative science learning from data visualizations and used this prototype as a testbed in a laboratory study with 11 family groups. We present an analysis of the types of gestural interactions that accompanied embodied cognition (as revealed by users’ language) while learners interacted with our prototype. Our preliminary findings indicate a positive role of cooperative (multi-user) gestures in supporting scientific discussion and collaborative meaning-making during embodied cognition. Our next steps are to continue our analysis to identify additional touchscreen interaction design guidelines for learning technologies, so that designers can capitalize on the affordances of embodied cognition in these contexts.”

Interested readers can find the camera-ready version (preprint) available here. The CSCL 2019 conference will take place in Lyon, France from June 17 – June 21. I am really excited since it will be my first time attending the CSCL conference and I will get to be in France on my birthday (June 20th). Also, this will be my first time presenting at any research conference. I am looking forward to presenting our paper and will post the talk slides when available.

Focusing on content: what do people want to learn about the ocean?

Photo Credit: Daria Shevtsova

I recently defended my dissertation and graduated from the University of Florida with my PhD in Interdisciplinary Ecology, as Katie alluded to in her post. I am now in a short-term post-doctoral position with the TIDESS team, working to finish some research we started last year.

I am currently spending most of my time on two tasks. The first is working with the team on revisions to our manuscript for the International Journal of Science Education. Katie previously mentioned our initial submission, though we have since shifted from Part B of the journal to Part A. The paper has undergone an initial review, and we are now making edits in response. Members of our team have discussed this process in more detail in earlier posts on the TIDESS blog.

Secondly, I am working with Katie on coding focus group data and starting to draft the associated journal paper. You might remember that Nikita described how we transitioned our prototype visualizing ocean sea surface temperature data from an interactive tabletop to an interactive sphere. Last year, we ran focus groups with the goal of understanding what adults and children want to learn about, especially with regard to the ocean. Ultimately, we want to design our prototype so that anyone interacting with it can have an engaging and informative experience without someone there to facilitate. The focus groups let us gather information on the content side, while other TIDESS work examines details like how users manipulate that content. For example, Nikita recently described our efforts to investigate the gestures users employ with the sphere. These efforts will give us ideas to incorporate into future iterations of our touch-interactive sphere prototype.

One of the things I particularly like about this study is the level of involvement I’ve had the whole way through. Because of the TIDESS team dynamic, team members often work on different portions of studies and come and go based on our academic timelines. However, I’ve been involved with TIDESS long enough to have sat in on the initial meetings for designing the focus groups. Hannah, a former education team member who has also written about her TIDESS experience, and I both facilitated the focus groups. I’m excited to be part of analyzing the resulting data and translating it into a paper.

Understanding the IRB: How Does It Affect Us?

When I joined the INIT Lab, one of the first things I was tasked with was completing a series of IRB trainings. Like many others who have never worked in research, I had no idea what the “IRB” was or what purpose it served; I thought it was just a set of rules and regulations we had to follow to do our research. However, as I worked with the TIDESS group on several projects, I realized that, even after my trainings, I did not fully understand the purpose of the IRB. Part of the reason is that much of the IRB material is hard to understand for someone unfamiliar with the field: the information presented in IRB trainings is verbose, and some of the terminology is specific to research. But what made understanding the IRB most difficult was my lack of context on what the IRB is and why it is important to our research. Topics such as privacy and safety are heavily emphasized in IRB trainings, but I could not fully understand the reasoning behind them, since I did not see why safety and privacy were such large concerns in non-medical research labs like ours. During my time as an undergraduate research assistant in the INIT Lab, however, I have vastly improved my understanding of the IRB and the purpose it serves. In this post, I am going to explain exactly what the IRB is and how it affects our projects.

The IRB, or Institutional Review Board, is an administrative body created to protect the rights of participants in research activities. Each institution that conducts research has its own IRB, which reviews all research proposals and either approves or denies them. Before anyone at an institution like UF can perform any study involving human participants, they must get approval from the IRB. Every IRB is given latitude to interpret the federal regulations and create its own set of procedures. This blog post covers what I have learned about UF’s IRB; your own institution may do things similarly, or it may have differences.

The IRB review process begins with a research proposal, where a summary of the planned research is given. Each proposal falls into one of three categories of review: Full Board, Expedited, or Exempt. An Exempt review means the proposal will be reviewed by one IRB member and is used when the proposal is low risk; an example would be an anonymous survey. Expedited reviews are done on proposals that pose somewhat more risk than an exempt proposal (e.g., collecting height and weight data) and require review by either the chair of the IRB or an experienced board member designated by the chair. A Full Board review involves the entire IRB panel and occurs when there is “greater than minimal risk” involved in the study, such as a study testing new medications. A proposal’s risk level is determined by how identifiable the data collected from participants is, what the real and likely physical, emotional, or psychological risks of participating are, and whether any of the participants belong to protected groups (like children) that may be vulnerable to coercion.

The research that we do as part of the TIDESS project (and most INIT Lab research) typically falls into the Expedited category, as it poses no more than minimal risk to participants. But what exactly does it mean for research to have “risk”? Research that many would consider perfectly safe, such as studying interactions with our touchscreen interfaces, can involve collecting many types of sensitive data from participants. Even contact information sheets can hold confidential identifiable data such as phone numbers, full names, and addresses. Following the IRB regulations ensures that this information is seen only by those who need to see it and is stored in a secure environment. If information like this were handled unsafely, personal data could be leaked to the public.

One of my first moments seeing these regulations in action came when analyzing data of participants interacting with our sphere prototype (see Pufferfish). As part of our analysis, we looked at videos of participants interacting with the prototype, alongside written transcripts of everything they said. By the time I received access to these transcripts as a member of the team, every participant name had already been replaced with a participant number, concealing any personal data that may have been present in the videos. In addition, the videos were stored on an encrypted hard drive set to erase itself if someone tried to access it without the correct password. Later, when making a presentation about the videos to the lab, we blurred the faces of all participants where possible. All these security measures were taken even though the information was only being used within the lab and was not in any way accessible to the public. If we were to give this presentation publicly, we would take even more care to remove anything that could be traced back to a participant: any instances of participant names would be replaced with numbers, all faces would be blurred or removed where possible, and all audio would be checked and edited so that no personal information (names, etc.) is revealed. This ensures that nobody can use the information to identify the participants, fulfilling the privacy and protection goals of the IRB.

Another important part of the IRB process is informed consent: the participant is told exactly what the purpose of the study is and what procedures they will complete to participate. The goal is to ensure that they understand all the potential risks and benefits of the study and can decide for themselves whether they would like to participate. The participant has the right to withdraw from the study at any time (or choose not to participate at all), and the informed consent process ensures that the participant is aware of these rights and understands them fully.

As a first-year student at UF and a newcomer to the world of research studies, I have found that my time with the lab has changed my thoughts on the IRB. I realized that what I once thought were just excessive rules (e.g., blurring faces for in-lab presentations) are vital to protecting the participants of our studies. Privacy and protection play a huge role in the research process, one I was not initially aware of. The data we have access to is sensitive, and the IRB is there to ensure it is kept secure, preventing harmful consequences for our participants. Overall, the IRB allows us to conduct research in a way that prioritizes the safety and privacy of participants and ensures that they understand the full scope of what they are participating in.
