Is the Laptop a Physical Barrier
Against Team Communication?
An Analysis into the Design and Implementation of Robot Reflector
Lucy He
Information Science
Cornell University
Ithaca, NY
lh486@cornell.edu
Pehuen Moore
Computer Science
Cornell University
Ithaca, NY
ppm44@cornell.edu
Alex Sinfarosa
Information Science
Cornell University
Ithaca, NY
as898@cornell.edu
Janani Subhashini Umamaheswaran
Information Science
Cornell University
Ithaca, NY
ju48@cornell.edu
Yu Meng (Kate) Zhou
Mechanical Engineering
Cornell University
Ithaca, NY
ykz2@cornell.edu
Keywords—robot, mirror, laptop, barrier, reflection, teamwork,
technology, communication, human-robot interaction
I. INTRODUCTION
Laptops can become a distracting factor rather than a useful
tool in team communication. When a group of students sits
around a round table with their laptops open, team members
often find it difficult to see what each person is doing on
his/her computer. Each team member may become distracted
by social media or other websites, especially when no one else
can tell whether they are paying attention and working productively.
In addition to laptops distracting students with internet content,
students might also view the “flipped-up” laptop screen as a
physical barrier. Lohnes & Kinzer (2007) found that students
often feel more distracted and less focused on the task at hand
when working in a group with open laptops [1]. These students
compared the secrecy and lack of focus they felt behind their
laptop screens to having dinner with an elaborate vase of
flowers in the center of the table [1].
Laptop screens might hide team members, but a mirror
reflection could help them avoid distraction. Maravita et al.
(2002) found that visual distractors produce stronger
interference on tactile judgments when placed close to the
stimulated hand, but observed indirectly as distant mirror
reflections [2]. Consequently, seeing their own reflection in a
mirror or in the glass of a laptop screen might make people
more aware of, and guilty about, their lack of focus [2]. Because
of all of these negative impacts around distraction and secrecy
surrounding laptops and teamwork efficiency, we proposed the
idea of building a small assistive robot that could let team
members focus while working together on laptops.
II. MOTIVATION AND RESEARCH QUESTION
The focus of our research was to explore how a laptop could
form a physical barrier to team communication. We wished to
understand how students got distracted and explore whether
any external sources might minimize their distraction and
increase work productivity. Specifically, this external source
could be a robot that politely interferes when team members
are distracted. The method of interference can vary from
physical contact with team members to subtle motions like
tapping, peering, and twirling that express emotions. In order
to increase adaptability and convenience of use, our robot
should be small and portable.
Our final research question was: How can a robot intervene to
help individual team members communicate better by
minimizing their laptop use and holding each team member
accountable to the group discussion, all without itself becoming
a source of distraction?
III. ROBOT DESIGN AND DEVELOPMENT
A. Brainstorming
We met to discuss how we could create an
expressive robot to stop team members from hiding behind
and getting distracted by their laptop screens. Specifically, we
focused on answering these three questions:
 How is the laptop a physical barrier to group
communication?
 How can we increase communication between
different members in a team (e.g., leader,
programmer, designer, business strategist)?
 How could the robot fit between computers?
Then we created a benchmarking mind map centered on
several factors for how we could build this robot, including
types of motion (tapping or peering), materials needed
(cardboard or paper), and environments (team collaboration
with laptops, students typing up a report together, or two- or
three-person groups that want to improve accountability with
fewer laptop distractions) (Fig 1).
Fig. 1. Benchmarking mind map
B. Interaction Video Analysis
Our team created a task and video-recorded how individuals
on laptops get distracted in group work. At the start, one of us
sent all individuals a link to Flockdraw, an online drawing
collaboration platform. Three team members had 10-15
minutes to solve two visual brain puzzles together. They could
use the online chat or talk to each other in person as well as
draw out their ideas (Fig 2). When they finished one problem,
they all agreed on the answer before typing it in the chat box.
Fig. 2. Flockdraw user interface
C. Experimental Setup
We recruited three senior college students to participate in a
social experiment at Duffield Hall. None of them had met each
other before. One of our participants was a business student,
while the others were engineers. At the start, we had each of
them sit at an empty table, open up their laptops to an email
with two questions and Flockdraw, and start discussing the
problem (Fig 3). Two of our team members video-recorded
their interactions from two different angles for ten minutes.
Fig. 3. Three participants discussing puzzles on their laptops
D. Analysis and Coding Scheme
Forty minutes later, our team watched the videos again and
started developing a coding scheme (Fig 4). We observed that
all our participants engaged in channel blending where they
alternated across different electronics like a calculator,
smartphone, and the Flockdraw interface to solve the
problems. They mainly used the calculator and their
smartphone to calculate values for the first question and the
Flockdraw interface to draw out their ideas. They rarely used
the online chat interface, but preferred to speak their thoughts
out loud. Interestingly, the two engineers talked the majority
of the time and often leaned towards each other. They
understood how to solve the problem, but the business student
felt left out and her opinion undervalued because she did not
understand what was going on and often avoided eye contact.
Fig. 4. Coding scheme
E. Initial Sketches
One week later, we made sketches to further explore what our
robot should look like (Fig 5). These initial design ideas ranged
from a robot peeking over to look at a team member’s
computer screen to robots that can tap on the laptop to get the
team member’s attention. We also thought about making the
robot rotate during its idling state as if it were watching users.
Moreover, we considered the idea of using a mirror or another
reflective surface that plays into the guilt factor and lets
distracted team members see their reflections [3].
According to Wittreich (1959), one's judgment of one's own
mirror image as seen from different views changes mainly in
detail, while judgments of another person's image change in
overall size and shape [3]. Consequently, when people look at
their own reflection, they are more likely to notice specific
details and realize that what they are doing is wrong.
Fig. 5. Top left: tapping pointer, top right: turning tapper, middle left:
peeking robot, middle right: tapping antennae, bottom: tapping tin man
F. First Paper Robot Prototypes
A few days later, we created two paper prototype robots to
explore how participants physically experience the tapping and
peering motions. The Pointer lightly taps against a participant’s
computer screen when he/she is not paying attention,
increasing in intensity depending on how distracted he/she is
and how long it must wait before he/she refocuses (Fig 6). When
we experimented with participants, we realized that the robot
was too small and its tapping too soft for individuals to
distinguish it from ambient noise.
Fig 6. Pointer paper prototype lightly taps against laptop screen
The Gazer resembles two anthropomorphic eyes that rotate
around to see who is paying attention (Fig 7). When someone
does not pay attention, it stops and peers over his/her laptop
screen. Moreover, it also rotates back and forth to get the
person’s attention. When we experimented with participants,
we realized that the robot was too annoying. Participants
disliked the eyes smacking against their laptop screens and
the vigorous eye motion across them.
Fig 7. Gazer paper prototype peering over a laptop screen
G. Second Cardboard Robot Prototype
One week later, we made our first technical prototype using
three Arduino motors, a cardboard body, two rulers, and a
mirror (Fig 8). There are two joints in the arm of this robot,
with one at the base and the other at the end of the bottom
ruler. The base (red cardboard box) can also rotate. We had
some trouble balancing the rotating base as the center of mass
is not directly on top of the bottom motor.
Fig 8. Cardboard and wired robot design
H. Three Motions in Interactive Flowchart
In order to explore and develop three expressive motions for
our robot, one of our team members created an interactive
flowchart depicting how the robot interacts based on how
distracted participants are (Fig 9). Originally, the robot simply
rotates 360 degrees to watch out for distracted participants.
When it notices a team member getting distracted online, it
stops in front of that person's screen. If the team member stays
distracted for more than five minutes, then the robot starts
tapping on the screen. If ten minutes pass, then the robot leans
over the laptop to show that person a mirror reflection. Once
the team member stops getting distracted, the robot resumes
rotating 360 degrees and surveilling the crowd.
Fig 9. Interactive flowchart depicting robot rotation, tapping, and gazing
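The flowchart's escalation logic can be summarized as a small state machine. The sketch below is a hypothetical Python rendering (the states and the five- and ten-minute thresholds follow the flowchart; the function and state names are illustrative, not the team's actual implementation):

```python
# Hypothetical sketch of the Reflector's escalation logic from the flowchart.
# States and time thresholds follow the text; names are illustrative.

ROTATE, WATCH, TAP, MIRROR = "rotate", "watch", "tap", "mirror"

def next_state(state, distracted, minutes_distracted):
    """Return the robot's next state given whether the watched
    team member is distracted and for how long (in minutes)."""
    if not distracted:
        return ROTATE            # resume 360-degree surveillance
    if minutes_distracted >= 10:
        return MIRROR            # lean over the laptop, show the mirror
    if minutes_distracted >= 5:
        return TAP               # tap on the laptop screen
    return WATCH                 # stop in front of that person's screen

# Example escalation over time for one continuously distracted member:
timeline = [next_state(ROTATE, True, t) for t in (0, 5, 10)]
# timeline == ["watch", "tap", "mirror"]
```

Once the member refocuses at any point, `next_state` falls back to `ROTATE`, matching the flowchart's return to surveillance.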
I. Third CAD 3D Printed Prototype
Two weeks later, we came back together and brainstormed
ideas for a more aesthetically appealing, 3D printed model
(Fig 10). We decided to go with a unique pear shape for the
base and conceal one continuous-rotation motor and the wires
inside. We would attach two other motors to the base, one of
which moves the arm up and down and another that pulls the
string on the second arm and flicks the mirror up and down.
Unfortunately, we could not get our parts 3D printed in time
because we sent them to Rhodes Hall too late. However, we did
end up using many ideas from our 3D printed model in our
final prototype, like our decision to conceal wires inside a
wooden base and the flicking arm with a mirror attached.
Fig 10. Left: CAD 3D printed design, right: 3D printed parts
J. Final Wireless Reflector Robot Prototype
The final design has one arm with one joint at the base. We
kept a rectangular prism base as it appeared neat and functional
to hide all electronics inside. Moreover, we used a Bluetooth
Mate Gold with a BLE application connected to an Arduino
101 board to wirelessly connect and remove extraneous wires
from our robot. The BLE application reads from the serial port
and writes to the Bluetooth port on the Arduino 101 board (Fig
11). We typed in letters which triggered commands on the
robot: o for open, c for close, r for right, l for left, d for
decrease, i for increase, and w for wiggle (Fig 11).
Fig 11. Left: Arduino 101 board, right: BLE application
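The single-letter protocol above amounts to a dispatch table keyed on the byte read from the Bluetooth serial port. The following is a minimal Python sketch of that idea (the letters match the paper; the handler actions are illustrative stand-ins for the servo and motor routines on the Arduino 101):

```python
# Hypothetical sketch of the one-letter command protocol described above.
# Letters match the paper; handlers are stand-ins for the motor routines.

def make_dispatcher(log):
    commands = {
        "o": lambda: log.append("open arm"),
        "c": lambda: log.append("close arm"),
        "r": lambda: log.append("rotate right"),
        "l": lambda: log.append("rotate left"),
        "d": lambda: log.append("decrease speed"),
        "i": lambda: log.append("increase speed"),
        "w": lambda: log.append("wiggle mirror"),
    }
    def dispatch(letter):
        # Silently ignore unknown bytes arriving over the serial link.
        action = commands.get(letter)
        if action:
            action()
    return dispatch

log = []
dispatch = make_dispatcher(log)
for ch in "orw":        # e.g., operator types o, r, w in the BLE app
    dispatch(ch)
# log == ["open arm", "rotate right", "wiggle mirror"]
```

A table like this keeps the Wizard-of-Oz interface trivial: the operator only ever sends one byte per action.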
We also strategically placed the motor at the base of the arm,
so that the weight was evenly distributed. Moreover, we used
two 450 capacitors, one for each of our two standard servo
motors so the current and voltage remained consistent (Fig 12).
We used a 9V battery to power the Arduino 101 board, wires,
two standard servo motors, and one continuous motor.
Fig 12. Motion 1: Initial state for rotating motion
Instead of using a second arm, we added a mirror at the end of
the arm. The mirror was one of our ideas from our initial
sketches where the addition of a mirror can play into the guilt
factor of distracted team members when they look at their own
reflections [3]. Furthermore, we mounted the mirror on a hinge
from the tip of the first arm, giving the robot one more degree
of freedom to extend the arm and flick the mirror (Fig 13). This
additional degree of freedom allowed extra expressive motions
like a taunting wiggle to be designed and programmed.
Fig 13. Motion 2 and 3: Arm extend and mirror flick
We also added a wiggle feature as our fourth motion in
addition to rotating 360 degrees, extending the arm, and
flicking the mirror for when distracted participants initially
ignore the mirror (Fig 14). The wiggle provides an extra
expressive motion subtly flicking up and down to taunt
participants to pay attention and focus on their work.
Fig 14. Motion 4: Taunting wiggle motion
IV. ONLINE SURVEY AND FIELD STUDY
A. Overview of Online Survey Process
One week later, we used Cornell Qualtrics to create a series of
pre-task discovery questions asking participants for their age,
student status, gender, and how frequently they meet in groups
and collaborate on laptops (Fig 15). Then, we presented
participants with two video clips: one with our final
Reflector robot rotating at a slow and fast speed and then
another with the robot flipping its arm up and flicking the
mirror at a distracted participant.
Fig 15. Qualtrics interface with one video clip
Then, we used Amazon Mechanical Turk to create a request
and recruit 50 unique workers to take the short survey and
watch a few videos for $0.15 (Fig 16). We gave each
worker 15 minutes to complete the assignment and did not
require workers to meet any additional qualifications.
Fig 16. Mechanical Turk worker interface
For each of our two video clips, we asked participants to
describe what they thought the robot was doing in the video
and the degree to which they agreed with the following
statements: the laptop formed a physical barrier against
communication, the robot encouraged group communication,
the robot reacted in a negative way, the robot reacted in a
positive way, and the robot was attentive towards team
members.
Afterwards, we asked them what feeling they thought the
robot expressed and how confident they were about their
answer. Lastly, we asked participants if they had any final
thoughts or comments.
B. Overview of Field Study Process
We ran two comparative field studies. In each of them, we led
three participants to an empty lab room in Gates Hall, 2nd
Floor. Before we turned on the robot, we asked participants
for informed consent then asked them to put their laptops
around the robot. We sat in a room five feet away from our
participants and controlled the robot through a series of
commands with the BLE application on a Nexus Android
phone, using the Wizard of Oz technique: participants
interacted with the Reflector robot believing it to be
autonomous, even though it was operated by one of us.
Study 1 - We brought in random participants and asked them to
work on the same task from our video interaction
analysis: solving brain puzzles together (Fig 17). We wished to
observe how people who had not worked together before
performed a task, and to what extent robotic intervention
would be required to stop participants from getting distracted
on their laptops. One team member participated in the field
study as a confederate, purposely getting distracted during the
activity so we could observe others’ responses to the robot’s
movement.
Fig 17. Study 1, confederate on far left and participants on a task
Study 2 - We brought in a team of 3 members who were
working on a semester-long class project (Fig 18). These
students were very comfortable with each other and dived into
work immediately. They had lighthearted fun moments while
working together on their project, which made it much clearer
where we could intervene in this team.
Fig 18. Study 2, three participants without an assigned task
V. RESULTS AND DISCUSSION
A. Overview on Online Survey Findings
Out of 52 total responses, we rejected 6 worker responses
based on fake information like Lorem Ipsum. As depicted in
Fig 19, most responders were senior students (13) or graduate
students (14). Ages ranged from 18-22 (17) to 36-64 (1). 24
participants identified as Male, while 20 identified as Female.
Fig 19. Online survey participant age range
29% of participants felt the statement that they meet with
others to perform collaborative work described them very well,
and another 29% believed it described them moderately well.
Interestingly, all of our responders varied in how much work
they believed they did on laptops (Fig 20).
Fig 20. Responses to amount of collaborative work done on laptops
B. Analysis on Fast and Slow Rotation Movement
Our first video clip showed the robot alternating between fast
and slow movement. Many participants (20) felt the robot was
scanning and searching. 15 participants felt the robot was
rotating around, while 4 participants had no idea. 12
participants did not notice the robot changed speeds, while 22
participants did notice the robot changed speeds. Out of those
22 participants, 2 participants wanted a medium speed, 10
liked the slower speed, and the rest preferred the fast speed.
One participant mentioned, “I wouldn't be able to tell you
anything but spin without the title. With the word "Reflector"
title, it probably distracts members by catching their attention,
like how reflective vests work for biker/runners at night.”
Interestingly, most participants varied in how they rated the
rotating robot. Most responses concentrated around somewhat
agree, neither agree nor disagree, and somewhat disagree over
whether the laptop formed a physical barrier against group
communication or whether the robot was attentive (Fig 21).
Fig 21. Participant ratings on robot rotation
On the other hand, 39% of participants felt that the robot
expressed attentiveness when it was rotating around (Fig 22).
Fig 22. Feelings expressed by robot rotation
C. Analysis on Arm and Mirror Tilt Motion
25 participants felt the robot that flicked its arm out and tilted
the mirror was drawing attention to the distracted team
member. 23 participants felt that one of the team members
was distracted and the robot was trying to wave to get his
attention. People thought the robot “tapped” or “pointed in one
direction” or “showed its arm” to the distracted team member.
One participant noted that “I don't think the members reacted
much but I think it would be distracting. It may be better if the
robot communicated attention back to the task in a more
discrete method such as a vibrating bluetooth watch strap or
light indicator on the object of interest. The sound of the robot
moving along with the visual view for me is distracting.”
Interestingly, most participants varied in how they rated the
robot extending its arm and flicking its mirror. Most responses
concentrated around agree, somewhat agree, and neither agree
nor disagree over whether the laptop formed a physical barrier
against group communication or whether the robot was
attentive (Fig 23).
Fig 23. Participant ratings on arm flick and mirror tilt
On the other hand, 57% of participants felt that the robot
expressed confrontation when it flicked its mirror out and 54%
of participants felt the robot expressed attentiveness to other
team members in the group (Fig 24).
Fig 24. Feelings expressed by arm flick and mirror tilt
D. Overview on Field Study Findings
Comparing Study 1 and 2, we observed that in cases where the
teams were random and the task assigned, participants worked
more and were less distracted. In Study 2, the participants
knew each other well and after a point started looking at
Snapchat and other such social media, which gave us cause to
use the robot more often.
In Study 1, we noticed that participants did not react much
when they saw the confederate purposefully trigger the robot
(Fig 25). Initially, they saw the arm flick and mirror tilt, but
were so focused on solving the brain puzzles that they forgot
about the robot. This might also be because participants
found the Flockdraw interface difficult to use, so
they closed their laptop screens and switched to scratch paper.
Fig 25. Study 1 – Disinterested participants focused on the task
In Study 2, our participants were very comfortable working
with each other as they were all already familiar with each
other (Fig 26). The robot grabbed their attention and
participants thought it was very cool. One of the participants
mentioned that he was startled by the sudden movement of the
arm. Mid-way through the discussion, another participant
started using Snapchat. Though the robot looked at him and
raised its arm, he was unsure what it meant.
Fig 26. Study 2 – Distracted participant blocks the mirror
VI. CONCLUSION AND FUTURE WORK
Laptops can form a physical barrier to team discussion and
distract team members from paying attention to each other.
Throughout our design and development process, we iterated
through several different prototypes starting from sketches to
our wireless Reflector robot programmed to respond to simple
keyboard commands from a Nexus Android phone. We learned
how important it is to iterate early and often throughout our
design process. Originally, we planned on creating a 3D
printed robot, but had to abandon our idea in favor of a wooden
prototype due to time constraints and a long waiting line.
Moreover, we learned to remain open to suggestions with our
online survey and data collection. Originally, we thought
sending a Cornell Qualtrics survey to our friends would gather
enough responses. When we received only six responses, one
of our team members suggested we put our survey onto
Amazon Mechanical Turk. From that point on, we received
many more responses than we originally expected.
In the future, given more material and time, we would like to
incorporate smoother transitions and lateral robot movement
across the table. With this in mind, we wish to create
different-sized robots and test them on different tables.
In our field study, we saw that participants often moved their
laptops further apart when they realized our large robot was
stuck between their laptops. A smaller robot or robot with
extendable features could create a more fluid, subtle interaction
within a group, especially when it is rotating around the table.
Moreover, we are also interested in quieter motors and
smoother movements of the arm. Our current arm flick and
mirror tilt are so abrupt that one participant even
put his arm up to stop the mirror from seeing him.
More importantly, we would like to make our robot completely
autonomous. Currently, our robot is controlled through the
Wizard of Oz technique where one researcher inputs different
commands based on what he/she observes on the video screen.
In the future, we would install an eye-tracking and screen
monitoring device on computer screens, integrated with our
robot, so that it changes movements based on whether or not
participants are distracted by social media or other websites
on their laptops.
Lastly, we wish to add mobility to our robot with a moving
base, so it can move towards distracted participants and avoid
getting stuck between laptops.
ACKNOWLEDGMENT
Thank you, Professor Malte Jung, Solace Shen, and Guy
Hoffman, for the wonderful research papers you provided and
the interesting ideas you gave us throughout our design process.
REFERENCES
[1] Lohnes, S., & Kinzer, C. (2007). Questioning assumptions about
students’ expectations for technology in college classrooms. Innovate,
3(5).
[2] Maravita, A., Spence, C., Sergent, C., & Driver, J. (2002). Seeing Your
Own Touched Hands in a Mirror Modulates Cross-Modal Interactions.
Psychological Science, 13(4), 350-355. doi:10.1111/j.0956-7976.2002.00463.x
[3] Wittreich, W. (1959). Visual perception and personality. Scientific
American, 200, 56-60.
قرآن کريم در مورد سياه چاله‌ها چه مي گويد؟
 
Cancel Cheque PAMB
Cancel Cheque PAMBCancel Cheque PAMB
Cancel Cheque PAMB
 
Geografische benaderingen
Geografische benaderingenGeografische benaderingen
Geografische benaderingen
 
Form alteration-pamb-baru
Form alteration-pamb-baruForm alteration-pamb-baru
Form alteration-pamb-baru
 
Numerical and analytical studies of single and multiphase starting jets and p...
Numerical and analytical studies of single and multiphase starting jets and p...Numerical and analytical studies of single and multiphase starting jets and p...
Numerical and analytical studies of single and multiphase starting jets and p...
 
Sw2012 pdf completo
Sw2012 pdf completoSw2012 pdf completo
Sw2012 pdf completo
 
Guide of Effecting Teaching
Guide of Effecting TeachingGuide of Effecting Teaching
Guide of Effecting Teaching
 
Formato n° 01 descripción de equipos
Formato n° 01   descripción de equiposFormato n° 01   descripción de equipos
Formato n° 01 descripción de equipos
 
hydraulic robotic arm
hydraulic robotic armhydraulic robotic arm
hydraulic robotic arm
 

Similar to Reflector_Robot_ResearchPaper

Mini thesis presentation
Mini thesis presentationMini thesis presentation
Mini thesis presentationYou-Wen Liang
 
Musstanser Avanzament 4 (Final No Animation)
Musstanser   Avanzament 4 (Final   No Animation)Musstanser   Avanzament 4 (Final   No Animation)
Musstanser Avanzament 4 (Final No Animation)Musstanser Tinauli
 
Rp 3 published
Rp  3 publishedRp  3 published
Rp 3 publishedAman Jain
 
The study of attention estimation for child-robot interaction scenarios
The study of attention estimation for child-robot interaction scenariosThe study of attention estimation for child-robot interaction scenarios
The study of attention estimation for child-robot interaction scenariosjournalBEEI
 
Introduction to Prototyping: What, Why, How
Introduction to Prototyping: What, Why, HowIntroduction to Prototyping: What, Why, How
Introduction to Prototyping: What, Why, HowAbdallah El Ali
 
David McKenzie, Darwin Muljono and Elizabeth B.-N. Sanders: Collective Dream...
David McKenzie, Darwin Muljono and Elizabeth B.-N. Sanders:  Collective Dream...David McKenzie, Darwin Muljono and Elizabeth B.-N. Sanders:  Collective Dream...
David McKenzie, Darwin Muljono and Elizabeth B.-N. Sanders: Collective Dream...RSD Relating Systems Thinking and Design
 
ROBOT HUMAN INTERFACE FOR HOUSEKEEPER ROBOT WITH WIRELESS CAPABILITIES
ROBOT HUMAN INTERFACE FOR HOUSEKEEPER ROBOT WITH WIRELESS CAPABILITIESROBOT HUMAN INTERFACE FOR HOUSEKEEPER ROBOT WITH WIRELESS CAPABILITIES
ROBOT HUMAN INTERFACE FOR HOUSEKEEPER ROBOT WITH WIRELESS CAPABILITIESIJCNCJournal
 
A WEARABLE HAPTIC GAME CONTROLLER
A WEARABLE HAPTIC GAME CONTROLLERA WEARABLE HAPTIC GAME CONTROLLER
A WEARABLE HAPTIC GAME CONTROLLERijgttjournal
 
Introduction to Artificial Intelligence
Introduction to Artificial IntelligenceIntroduction to Artificial Intelligence
Introduction to Artificial Intelligencesnehal_152
 
adrianorenzi_duxu2014
adrianorenzi_duxu2014adrianorenzi_duxu2014
adrianorenzi_duxu2014Adriano Renzi
 
Human Computer Interaction
Human Computer InteractionHuman Computer Interaction
Human Computer InteractionSandy Harwell
 
AtkinFVFsonDFDFFDFDFDFDFFADASADADADADAS.pdf
AtkinFVFsonDFDFFDFDFDFDFFADASADADADADAS.pdfAtkinFVFsonDFDFFDFDFDFDFFADASADADADADAS.pdf
AtkinFVFsonDFDFFDFDFDFDFFADASADADADADAS.pdfquickfix043
 
Usability Testing
Usability TestingUsability Testing
Usability TestingAndy Budd
 
To borg or not to borg - individual vs collective, Gavin Bell fowa08
To borg or not to borg - individual vs collective, Gavin Bell fowa08To borg or not to borg - individual vs collective, Gavin Bell fowa08
To borg or not to borg - individual vs collective, Gavin Bell fowa08Gavin Bell
 
Selected topics in Computer Science
Selected topics in Computer Science Selected topics in Computer Science
Selected topics in Computer Science Melaku Bayih Demessie
 
Bill Moggridge-Designing interactions-the mouse and the desktop
Bill Moggridge-Designing interactions-the mouse and the desktopBill Moggridge-Designing interactions-the mouse and the desktop
Bill Moggridge-Designing interactions-the mouse and the desktopdilemakiner
 
Bill Moggridge-Designing interactions-the mouse and the desktop
Bill Moggridge-Designing interactions-the mouse and the desktopBill Moggridge-Designing interactions-the mouse and the desktop
Bill Moggridge-Designing interactions-the mouse and the desktopdilemakiner
 
RMV Artificial Intelligence
RMV Artificial IntelligenceRMV Artificial Intelligence
RMV Artificial Intelligenceanand hd
 

Similar to Reflector_Robot_ResearchPaper (20)

Mini thesis presentation
Mini thesis presentationMini thesis presentation
Mini thesis presentation
 
Musstanser Avanzament 4 (Final No Animation)
Musstanser   Avanzament 4 (Final   No Animation)Musstanser   Avanzament 4 (Final   No Animation)
Musstanser Avanzament 4 (Final No Animation)
 
Rp 3 published
Rp  3 publishedRp  3 published
Rp 3 published
 
The study of attention estimation for child-robot interaction scenarios
The study of attention estimation for child-robot interaction scenariosThe study of attention estimation for child-robot interaction scenarios
The study of attention estimation for child-robot interaction scenarios
 
Introduction to Prototyping: What, Why, How
Introduction to Prototyping: What, Why, HowIntroduction to Prototyping: What, Why, How
Introduction to Prototyping: What, Why, How
 
David McKenzie, Darwin Muljono and Elizabeth B.-N. Sanders: Collective Dream...
David McKenzie, Darwin Muljono and Elizabeth B.-N. Sanders:  Collective Dream...David McKenzie, Darwin Muljono and Elizabeth B.-N. Sanders:  Collective Dream...
David McKenzie, Darwin Muljono and Elizabeth B.-N. Sanders: Collective Dream...
 
ROBOT HUMAN INTERFACE FOR HOUSEKEEPER ROBOT WITH WIRELESS CAPABILITIES
ROBOT HUMAN INTERFACE FOR HOUSEKEEPER ROBOT WITH WIRELESS CAPABILITIESROBOT HUMAN INTERFACE FOR HOUSEKEEPER ROBOT WITH WIRELESS CAPABILITIES
ROBOT HUMAN INTERFACE FOR HOUSEKEEPER ROBOT WITH WIRELESS CAPABILITIES
 
A WEARABLE HAPTIC GAME CONTROLLER
A WEARABLE HAPTIC GAME CONTROLLERA WEARABLE HAPTIC GAME CONTROLLER
A WEARABLE HAPTIC GAME CONTROLLER
 
Introduction to Artificial Intelligence
Introduction to Artificial IntelligenceIntroduction to Artificial Intelligence
Introduction to Artificial Intelligence
 
adrianorenzi_duxu2014
adrianorenzi_duxu2014adrianorenzi_duxu2014
adrianorenzi_duxu2014
 
Human Computer Interaction
Human Computer InteractionHuman Computer Interaction
Human Computer Interaction
 
AtkinFVFsonDFDFFDFDFDFDFFADASADADADADAS.pdf
AtkinFVFsonDFDFFDFDFDFDFFADASADADADADAS.pdfAtkinFVFsonDFDFFDFDFDFDFFADASADADADADAS.pdf
AtkinFVFsonDFDFFDFDFDFDFFADASADADADADAS.pdf
 
Ai and bots
Ai and botsAi and bots
Ai and bots
 
E0352435
E0352435E0352435
E0352435
 
Usability Testing
Usability TestingUsability Testing
Usability Testing
 
To borg or not to borg - individual vs collective, Gavin Bell fowa08
To borg or not to borg - individual vs collective, Gavin Bell fowa08To borg or not to borg - individual vs collective, Gavin Bell fowa08
To borg or not to borg - individual vs collective, Gavin Bell fowa08
 
Selected topics in Computer Science
Selected topics in Computer Science Selected topics in Computer Science
Selected topics in Computer Science
 
Bill Moggridge-Designing interactions-the mouse and the desktop
Bill Moggridge-Designing interactions-the mouse and the desktopBill Moggridge-Designing interactions-the mouse and the desktop
Bill Moggridge-Designing interactions-the mouse and the desktop
 
Bill Moggridge-Designing interactions-the mouse and the desktop
Bill Moggridge-Designing interactions-the mouse and the desktopBill Moggridge-Designing interactions-the mouse and the desktop
Bill Moggridge-Designing interactions-the mouse and the desktop
 
RMV Artificial Intelligence
RMV Artificial IntelligenceRMV Artificial Intelligence
RMV Artificial Intelligence
 

Reflector_Robot_ResearchPaper

Consequently, reflected glass on a laptop screen or a mirror might make people more aware of, and guiltier about, their lack of focus when they see their own reflections [2]. Because of these negative effects of laptops on distraction, secrecy, and teamwork efficiency, we proposed building a small assistive robot that helps team members stay focused while working together on laptops.

II. MOTIVATION AND RESEARCH QUESTION

The focus of our research was to explore how a laptop can form a physical barrier to team communication. We wanted to understand how students become distracted, and whether an external agent could minimize that distraction and increase productivity. Specifically, this external agent could be a robot that politely intervenes when team members are distracted. The method of intervention can vary from physical contact with team members to subtle motions, such as tapping, peering, and twirling, that express emotion. To maximize adaptability and convenience of use, the robot should be small and portable.

Our final research question was: How can a robot intervene to help individual team members communicate better, by minimizing their laptop use and holding each member accountable to the group discussion, without itself becoming a source of distraction?

III. ROBOT DESIGN AND DEVELOPMENT

A. Brainstorming

We met to discuss how we could create an expressive robot that stops team members from hiding behind, and getting distracted by, their laptop screens. Specifically, we focused on answering three questions:
 How is the laptop a physical barrier to group communication?
 How can we increase communication between different members of a team, e.g., leader, programmer, designer, business strategist?
 How could the robot fit between computers?

We then created a benchmarking mind map centered on the factors that would shape the robot, including types of motion (tapping or peering), materials (cardboard or paper), and environment (team collaboration with laptops; students typing up a report together; or two- to three-person groups that want to improve accountability and reduce laptop distractions) (Fig 1).

Fig. 1. Benchmarking mind map

B. Interaction Video Analysis

Our team created a task and video-recorded how individuals on laptops get distracted during group work. First, one of us sent all participants a link to Flockdraw, an online collaborative drawing platform. Three team members had 10-15 minutes to solve two visual brain puzzles together. They could use the online chat, talk to each other in person, or draw out their ideas (Fig 2). When they finished a problem, they all agreed on the answer before typing it in the chat box.

Fig. 2. Flockdraw user interface

C. Experimental Setup

We recruited three senior college students to participate in a social experiment at Duffield Hall. None of them had met before. One participant was a business student; the other two were engineers. At the start, we had each of them sit at an empty table, open their laptops to an email containing the two questions and Flockdraw, and start discussing the problems (Fig 3). Two of our team members video-recorded their interactions from two different angles for ten minutes.

Fig. 3. Three participants discussing puzzles on their laptops

D. Analysis and Coding Scheme

Forty minutes later, our team rewatched the videos and developed a coding scheme (Fig 4).
We observed that all of our participants engaged in channel blending: they alternated across different electronics, such as a calculator, a smartphone, and the Flockdraw interface, to solve the problems. They mainly used the calculator and their smartphones to compute values for the first question, and the Flockdraw interface to draw out their ideas. They rarely used the online chat, preferring to speak their thoughts out loud. Interestingly, the two engineers talked the majority of the time and often leaned toward each other. They understood how to solve the problem, but the business student felt left out and her opinion undervalued because she did not understand what was going on, and she often avoided eye contact.

Fig. 4. Coding scheme
E. Initial Sketches

One week later, we made sketches to further explore what our robot should look like (Fig 5). These initial design ideas range from a robot peeking over to look at a team member's computer screen to robots that tap on the laptop to get the team member's attention. We also considered making the robot rotate during its idle state, as if it were watching users. Moreover, we considered using a mirror or another reflective surface that plays into the guilt factor and lets distracted team members see their reflections [3]. According to Wittreich (1959), one's judgment of one's own mirror image, seen from different views, changes mainly in detail, while judgments of another person's image change in overall size and shape [3]. Consequently, when people look at their own reflection, they are more likely to notice specific details and realize what they are doing is wrong.

Fig. 5. Top left: tapping pointer; top right: turning tapper; middle left: peeking robot; middle right: tapping antennae; bottom: tapping tin man

F. First Paper Robot Prototypes

A few days later, we created two paper prototype robots to explore how participants physically experience the tapping and peering motions. The Pointer lightly taps against a participant's computer screen when he/she is not paying attention, increasing in intensity depending on how distracted the person is and how long it must wait before he/she refocuses (Fig 6). When we tested with participants, we realized the robot was too small and its sound too soft for individuals to notice anything beyond outside noise.

Fig 6. Pointer paper prototype lightly taps against laptop screen

The Gazer resembles two anthropomorphic eyes that rotate around to see who is paying attention (Fig 7). When someone does not pay attention, it stops and peers over his/her laptop screen.
It also rotates back and forth to get the person's attention. When we tested with participants, we found the robot too annoying: participants did not enjoy the eyes smacking against their laptop screens or the vigorous eye motion across their screens.

Fig 7. Gazer paper prototype peering over a laptop screen

G. Second Cardboard Robot Prototype

One week later, we made our first technical prototype using three motors driven by an Arduino, a cardboard body, two rulers, and a mirror (Fig 8). The robot's arm has two joints, one at the base and the other at the end of the bottom ruler. The base (the red cardboard box) can also rotate. We had some trouble balancing the rotating base, as the center of mass is not directly on top of the bottom motor.
Fig 8. Cardboard and wired robot design

H. Three Motions in Interactive Flowchart

To explore and develop three expressive motions for our robot, one team member created an interactive flowchart depicting how the robot interacts based on how distracted participants are (Fig 9). Initially, the robot simply rotates 360 degrees to watch for distracted participants. When it notices a team member getting distracted online, it stops in front of that person's screen. If the team member stays distracted for more than five minutes, the robot starts tapping on the screen. If ten minutes pass, the robot leans over the laptop to show that person a mirror reflection. Once the team member stops being distracted, the robot resumes rotating 360 degrees and surveying the group.

Fig 9. Interactive flowchart depicting robot rotation, tapping, and gazing

I. Third CAD 3D Printed Prototype

Two weeks later, we came back together and brainstormed ideas for a more aesthetically appealing, 3D-printed model (Fig 10). We chose a distinctive pear shape for the base, concealing one continuous-rotation motor and the wires inside. We would attach two other motors to the base: one to move the arm up and down, and another to pull the string on the second arm and flick the mirror up and down. Unfortunately, we could not get our parts 3D printed in time because we sent them to Rhodes Hall too late. However, we carried many ideas from the 3D-printed model into our final prototype, such as concealing the wires inside a wooden base and attaching a flicking arm with a mirror.

Fig 10. Left: CAD 3D printed design; right: 3D printed parts

J. Final Wireless Reflector Robot Prototype

The final design has one arm with one joint at the base. We kept a rectangular-prism base, as it looked neat and was functional for hiding all the electronics inside.
Moreover, we used a Bluetooth Mate Gold and a BLE application connected to an Arduino 101 board to communicate wirelessly and remove extraneous wires from the robot. The BLE application reads from the serial port and writes to the Bluetooth port on the Arduino 101 board (Fig 11). We typed letters that triggered commands on the robot: o for open, c for close, r for right, l for left, d for decrease, i for increase, and w for wiggle.

Fig 11. Left: Arduino 101 board; right: BLE application

We also strategically placed the motor at the base of the arm so that the weight was evenly distributed. Moreover, we used two 450 capacitors, one for each of the two standard servo motors, so the current and voltage remained consistent (Fig 12). A 9V battery powered the Arduino 101 board, the wires, the two standard servo motors, and the continuous-rotation motor.

Fig 12. Motion 1: Initial state for rotating motion

Instead of adding a second arm, we attached a mirror to the end of the arm. The mirror came from our initial sketches, where a reflective surface plays into the guilt factor of distracted team members when they see their own reflections [3]. Furthermore, we mounted the mirror on a hinge at the tip of the arm, giving the robot one more degree of freedom to extend the arm and flick the mirror (Fig 13). This additional degree of freedom let us design and program extra expressive motions, such as a taunting wiggle.

Fig 13. Motions 2 and 3: Arm extend and mirror flick
We also added the wiggle as a fourth motion, alongside rotating 360 degrees, extending the arm, and flicking the mirror, for cases where distracted participants initially ignore the mirror (Fig 14). The wiggle is an extra expressive motion that subtly flicks the mirror up and down to taunt participants into paying attention and focusing on their work.

Fig 14. Motion 4: Taunting wiggle motion

IV. ONLINE SURVEY AND FIELD STUDY

A. Overview of Online Survey Process

One week later, we used Cornell Qualtrics to create a series of pre-task discovery questions asking participants for their age, student status, gender, and how frequently they meet in groups and collaborate on laptops (Fig 15). Then we presented participants with two video clips: one showing our final Reflector robot rotating at slow and fast speeds, and another showing the robot flipping its arm up and flicking the mirror at a distracted participant.

Fig 15. Qualtrics interface with one video clip

Next, we used Amazon Mechanical Turk to create a request and recruit 50 unique workers to take the short survey and watch the videos for US$0.15 each (Fig 16). We gave each worker 15 minutes to complete the assignment and did not require workers to meet any additional qualifications.

Fig 16. Mechanical Turk worker interface

For each of the two video clips, we asked participants to describe what they thought the robot was doing, and to rate their agreement with the following statements: the laptop formed a physical barrier against communication; the robot encouraged group communication; the robot reacted in a negative way; the robot reacted in a positive way; and the robot was attentive toward team members. Afterwards, we asked them what feeling they thought the robot expressed and how confident they were in their answer. Lastly, we asked participants for any final thoughts or comments.

B. Overview of Field Study Process

We ran two comparative field studies. In each, we led three participants to an empty lab room on the second floor of Gates Hall. Before turning on the robot, we obtained informed consent and asked the participants to place their laptops around the robot. We sat in a room five feet away and controlled the robot through a series of commands in the BLE application on a Nexus Android phone, following the Wizard of Oz technique: participants interacted with a Reflector robot they believed to be autonomous, even though one of us was operating it.

Study 1 - We brought in random participants and asked them to work on the same task as in our interaction video analysis: solving brain puzzles together (Fig 17). We wished to observe how people who had not worked together before performed the task, and to what extent robotic intervention would be needed to stop participants from getting distracted on their laptops. One team member participated in the field study
as a confederate, purposely getting distracted during the activity so we could observe the others' responses to the robot's movement.

Fig 17. Study 1, confederate on far left and participants working on a task

Study 2 - We brought in a team of three members who were working on a semester-long class project (Fig 18). These students were very comfortable with each other and dove into work immediately. They had lighthearted, fun moments while working on their project, and it was much clearer where we could intervene in this team.

Fig 18. Study 2, three participants without an assigned task

V. RESULTS AND DISCUSSION

A. Overview of Online Survey Findings

Of 52 total responses, we rejected 6 worker responses containing fake information, such as Lorem Ipsum text. As depicted in Fig 19, most respondents were senior students (13) or graduate students (14). Ages ranged from 18-22 (17 respondents) to 36-64 (1). 24 participants identified as male, while 20 identified as female.

Fig 19. Online survey participant age range

29% of participants felt the statement that they meet with others to perform collaborative work described them very well, and another 29% felt it described them moderately well. Interestingly, respondents varied widely in how much of their work they said they did on laptops (Fig 20).

Fig 20. Responses to amount of collaborative work done on laptops

B. Analysis of Fast and Slow Rotation Movement

Our first video clip showed the robot alternating between fast and slow movement. Many participants (20) felt the robot was scanning and searching; 15 felt it was simply rotating; and 4 had no idea. 12 participants did not notice that the robot changed speeds, while 22 did. Of those 22, 2 wanted a medium speed, 10 preferred the slower speed, and the rest preferred the fast speed. One participant mentioned, “I wouldn't be able to tell you anything but spin without the title.
With the word "Reflector" title, it probably distracts members by catching their attention, like how reflective vests work for biker/runners at night.” Interestingly, participants varied in how they rated the rotating robot. Most responses concentrated around somewhat agree, neither agree nor disagree, and somewhat disagree on whether the laptop formed a physical barrier against group communication and whether the robot was attentive (Fig 21).

Fig 21. Participant ratings on robot rotation
On the other hand, 39% of participants felt that the robot expressed attentiveness while rotating (Fig 22).

Fig 22. Feelings expressed by robot rotation

C. Analysis of Arm and Mirror Tilt Motion

25 participants felt that the robot that flicked its arm out and tilted the mirror was drawing attention to the distracted team member. 23 participants felt that one of the team members was distracted and the robot was trying to wave to get his attention. People thought the robot “tapped,” “pointed in one direction,” or “showed its arm” to the distracted team member. One participant noted, “I don't think the members reacted much but I think it would be distracting. It may be better if the robot communicated attention back to the task in a more discrete method such as a vibrating bluetooth watch strap or light indicator on the object of interest. The sound of the robot moving along with the visual view for me is distracting.”

Interestingly, participants varied in how they rated the robot extending its arm and flicking its mirror. Most responses concentrated around agree, somewhat agree, and neither agree nor disagree on whether the laptop formed a physical barrier against group communication and whether the robot was attentive (Fig 23).

Fig 23. Participant ratings on arm flick and mirror tilt

On the other hand, 57% of participants felt the robot expressed confrontation when it flicked its mirror out, and 54% felt it expressed attentiveness to the other team members in the group (Fig 24).

Fig 24. Feelings expressed by arm flick and mirror tilt

D. Overview of Field Study Findings

Comparing Studies 1 and 2, we observed that when teams were random and the task assigned, participants worked more and were less distracted. In Study 2, the participants knew each other well, and after a point they started looking at Snapchat and other social media, which gave us cause to use the robot more often.
In Study 1, we noticed that participants did not react much when they saw the confederate purposefully trigger the robot (Fig 25). They initially saw the arm flick and mirror tilt, but were so focused on solving the brain puzzles that they forgot about the robot. This might also be because participants found the Flockdraw interface difficult to use, so they closed their laptop screens and switched to scratch paper.

Fig 25. Study 1 – Disinterested participants focused on the task

In Study 2, our participants were very comfortable working with each other, as they were already familiar with one another (Fig 26). The robot grabbed their attention, and participants thought it was very cool. One participant mentioned that he was startled by the sudden movement of the arm. Midway through the discussion, another participant
started using Snapchat. Though the robot looked at him and raised its arm, he was unsure what it meant.

Fig 26. Study 2 – Distracted participant blocks the mirror

VI. CONCLUSION AND FUTURE WORK

Laptops can form a physical barrier to team discussion and distract team members from paying attention to each other. Throughout our design and development process, we iterated through several prototypes, from sketches to our wireless Reflector robot programmed to respond to simple keyboard commands from a Nexus Android phone. We learned how important it is to iterate early and often. Originally, we planned to create a 3D-printed robot but had to abandon that idea in favor of a wooden prototype due to time constraints and a long waiting line. We also learned to remain open to suggestions in our online survey and data collection: we initially thought sending a Cornell Qualtrics survey to our friends would gather enough responses, but when we received only six, one team member suggested posting the survey on Amazon Mechanical Turk, and from that point on we received many more responses than we had originally expected.

In the future, we would like to push past our limitations in material and time by incorporating smoother transitions and lateral robot movement across the table. With this in mind, we wish to create robots of different sizes and test them on different tables. In our field study, we saw that participants often moved their laptops further apart when they realized our large robot was stuck between them. A smaller robot, or one with extendable features, could create a more fluid, subtle interaction within a group, especially while rotating around the table. We are also interested in quieter motors and smoother movements of the arm.
Our current arm flick and mirror tilt are so abrupt that one participant even put his arm up to stop the mirror from seeing him. More importantly, we would like to make our robot completely autonomous. Currently, the robot is controlled through the Wizard of Oz technique, in which one researcher inputs commands based on what he/she observes on the video screen. In the future, we would integrate an eye-tracking and screen-monitoring device on the computer screens with our robot, so that it changes its movements based on whether participants are distracted by social media or other websites on their laptops. Lastly, we wish to add mobility to our robot with a moving base, so it can move toward distracted participants and avoid getting stuck between laptops.

ACKNOWLEDGMENT

Thank you to Professor Malte Jung, Solace Shen, and Guy Hoffman for the wonderful research papers you provided and the interesting ideas you shared throughout our design process.

REFERENCES

[1] Lohnes, S., & Kinzer, C. (2007). Questioning assumptions about students' expectations for technology in college classrooms. Innovate, 3(5).

[2] Maravita, A., Spence, C., Sergent, C., & Driver, J. (2002). Seeing your own touched hands in a mirror modulates cross-modal interactions. Psychological Science, 13(4), 350-355. doi:10.1111/j.0956-7976.2002.00463.x

[3] Wittreich, W. (1959). Visual perception and personality. Scientific American, 200, 56-60.