MIT Enterprise Forum of NYC hosted The UX of Tomorrow: Designing for the Unknown on June 4th, 2015 at Shutterstock featuring Beverly May, Ryan Gossen, Jay Vidyarthi, and Jeff Feddersen. This is Jeff's presentation from the event.
Trained in computer science and music, Jeff works with software and hardware to make computers do new and unusual things. He is currently part of a team developing a sculptural reflection of energy and resource flows in what is being heralded as the world's greenest office building. His work for groups ranging from the Hayden Planetarium and the Connecticut Science Center to Sony and HBO has resulted in award-winning public interactive experiences.
Jeff teaches at NYU's graduate Interactive Telecommunications Program, where he has a residency to develop video curricula supporting physical computing and energy. His novel musical instruments and kinetic sound sculptures have been performed on and exhibited internationally, and he is the co-inventor of an electronic wind instrument based on the Japanese shakuhachi (US patent #7723605).
The next ten years of technology will see many of Ray Kurzweil's predictions come alive: embedded, invisible, unwired electricity and internet-based interactions will drive every aspect of our lived environment. The physical and digital worlds are merging, powered by incredible changes in computing, universal connectivity, and artificial intelligence (AI) and machine learning. This pending wave is certain to change every aspect of human-computer interaction.
Major technological leaps present interesting design and UX challenges and require a wholesale shift in perspective: designing for the as-yet unknown. Screens, keyboards, and mice dominated yesterday and today. Tomorrow, systems will be initiated, controlled, and tracked through location and environment, semantic context, a wave of the arm, a blink of an eye, a directed gaze, a heartbeat, a crowd-driven trend, even a brainwave.
Whole new approaches and design systems need to be considered for what the next wave of products do, what they look and feel like, and how they can be more meaningful, useful, relevant, and intuitive.
This talk discussed the UX of tomorrow for the next wave of product design, based on some of the very first products and services on the market that hint at the integration to come.
The UX of Tomorrow: Designing for the Unknown by Jeff Feddersen
1. The UX of Tomorrow:
Designing for the
Unknown
MIT Enterprise Forum of NYC
June 4, 2015
Jeff Feddersen
fddrsn.net
2. Background: three alternate UX projects
• Li Ning Sport Challenge
• HBO “Superwall”
• Target StyleScape
Physical Computing @ NYU
• What pcomp is
• How it is taught
• Example projects
16. 1. Interactive Overview
• Gizmos: simple hardware sensors strategically located throughout the space
• Eyes: video/depth streams processed to support interaction
• Mobile: guests use their devices
• Humans: event staff in the mix
Four broad categories of interaction tech have distinct infrastructure, execution, and cost implications. Any of the four can be mixed together, and each can be scaled from small and targeted to broad and comprehensive integration with the cinemagraph.
Design document for Stylescape
17. Gizmos
[Diagram: a proximity sensor, floor pad, switch, button, and motion detector feed a capture board; its CPU sends data to the cinemagraph on the display wall.]
In this scenario, the space will have small, simple sensors custom-built into the
environment or integrated with props. A single central computer reads the state of each
sensor, filters the data, and sends triggers to the video system.
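That read-filter-trigger loop can be sketched in a few lines of Python. This is a minimal sketch, not the production system: `read_sensors` and `send_trigger` stand in for whatever capture board and video system the installation actually used, and the sensor names are invented for illustration.

```python
def edge_triggers(prev, curr):
    """Report sensors that just changed from open to closed.

    prev and curr map sensor names to boolean states. Only rising
    edges become triggers, so a guest standing on a floor pad fires
    the video system once rather than continuously.
    """
    return [name for name, state in curr.items()
            if state and not prev.get(name, False)]

def run(read_sensors, send_trigger):
    """Central loop: read every sensor, filter to edges, notify the video system."""
    prev = {}
    while True:
        curr = read_sensors()      # e.g. poll the capture board
        for name in edge_triggers(prev, curr):
            send_trigger(name)     # e.g. a packet to the cinemagraph CPU
        prev = curr
```

The filtering here is deliberately simple (edge detection only); a real install would likely also debounce and rate-limit before triggering.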
18. Eyes
[Diagram: an array of cameras, each with its own CPU, sends data to the cinemagraph on the display wall.]
Cameras, either 2D or 3D and used singly or in an array, watch the crowd. Computers (approximately one per image stream) process the data into triggers for the video system.
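One minimal version of turning an image stream into triggers is frame differencing: compare successive frames and fire when enough pixels change. The sketch below uses plain lists of grayscale values so it stays self-contained; a real deployment would use a vision library on a 2D or depth camera feed, and the threshold value is an arbitrary placeholder.

```python
def motion_score(prev_frame, frame):
    """Mean absolute per-pixel difference between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def motion_trigger(prev_frame, frame, threshold=10.0):
    """True when successive frames differ enough to count as motion."""
    return motion_score(prev_frame, frame) > threshold
```

This is also where "lots of processing to extract smart triggers" comes from: distinguishing one waving guest from a shifting crowd takes far more than a global difference score.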
19. Mixed
[Diagram: a mix of capture hardware and per-camera CPUs sends data to the cinemagraph on the display wall.]
The four categories can be mixed together to best support specific interactions. However, cost and effort are cumulative because there is almost no infrastructure overlap.
20. Summary
• Gizmos: simple, scalable, many possibilities from the same components. Drawbacks: needs integration into props, lots of cabling, breakable. Low cost (scalable); medium effort (scalable).
• Eyes: could be cool and subtle; can cover a large space. Drawbacks: needs lots of processing to extract smart triggers, and optical cameras depend on lighting. High cost (1:1 CPU to camera); high effort.
• Mobile: familiarity, and contact beyond the event. Drawbacks: common. All costs in software/campaign; medium to high effort.
• Humans: flexible, open-ended, resilient. Drawbacks: requires staffing, training, management. Low cost; low effort.
21. 3. Plan
[Diagram: 6-12 sensor channels feed an ADC and a CPU with mic input; a custom or stock control surface also connects to the CPU. Starred channels might also be accomplished with sensors.]
Legend:
• IR: infrared, motion, or similar
• RE: rotary encoder or similar
• CC: contact closure
• Mic: microphone
22. Common attributes:
• Heterogeneous systems with distinct boundaries
• Components joined by “network glue”
• (Typically UDP/OSC in my case)
• Concept precedes solution
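For concreteness, here is what that UDP/OSC "network glue" can look like with only the standard library. OSC messages pack an address string and a type-tag string, each null-terminated and padded to a 4-byte boundary, followed by big-endian arguments, and usually travel over UDP. The `/trigger` address, host, and port below are invented examples, not anything from the talk.

```python
import socket
import struct

def osc_pad(raw):
    """Null-terminate and pad bytes to a 4-byte boundary, per the OSC 1.0 spec."""
    return raw + b"\x00" * (4 - len(raw) % 4)

def osc_message(address, value):
    """Encode an OSC message carrying a single int32 argument."""
    return osc_pad(address.encode()) + osc_pad(b",i") + struct.pack(">i", value)

def send_trigger(address, value, host="127.0.0.1", port=9000):
    """Fire-and-forget the message over UDP, OSC's usual transport."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message(address, value), (host, port))
```

UDP's fire-and-forget nature is part of why it works as glue between heterogeneous systems: the sensor side never blocks waiting on the video side.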
25. From: https://itp.nyu.edu/physcomp/
WHAT IS PHYSICAL COMPUTING?
Physical Computing is an approach to computer-human
interaction design that starts by considering how
humans express themselves physically. Computer
interface design instruction often takes the computer
hardware for granted — namely, that there is a keyboard, a
screen, speakers, and a mouse or trackpad or
touchscreen — and concentrates on teaching the
software necessary to design within those boundaries. In
physical computing, we take the human body and its
capabilities as the starting point, and attempt to
design interfaces, both software and hardware, that can
sense and respond to what humans can physically do.
27. Requires thinking about
• 1-bit: digital I/O, e.g. button, LED
• Many-bits: analog I/O, e.g. knob, fading LED
• Ways to transduce aspects of the physical world to varying electrical properties (typically changing resistance → changing voltage)
28. Handle messy “real-world” inputs
Derive meaning from input:
what did user do vs. what did user want
Reconnect to meaningful output
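One common pattern that handles both halves of that problem, smoothing a messy reading and then deciding what the user meant, is a moving average feeding a hysteresis threshold: the "on" level sits above the "off" level, so a wavering value cannot re-trigger. A sketch with invented threshold values:

```python
from collections import deque

class SmoothedTrigger:
    """Smooth raw samples, then report 'on'/'off' intent transitions."""

    def __init__(self, window=5, on_level=600, off_level=400):
        self.samples = deque(maxlen=window)
        self.on_level = on_level      # average must rise above this to trigger
        self.off_level = off_level    # and fall below this to release
        self.active = False

    def feed(self, sample):
        """Add one raw reading; return 'on', 'off', or None."""
        self.samples.append(sample)
        avg = sum(self.samples) / len(self.samples)
        if not self.active and avg >= self.on_level:
            self.active = True
            return "on"
        if self.active and avg <= self.off_level:
            self.active = False
            return "off"
        return None
```

The gap between the two levels is the "what did the user want" judgment: it absorbs sensor noise so a single jittery sample never flips the interaction state.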
29. Learn communication protocols like…
• Asynchronous Serial
• I2C
• SPI
…so you can connect to other “smart”
components such as:
• Accelerometers
• GPS
• Display drivers
• just about anything else…
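As a taste of what talking to one of those "smart" components involves, here is the framing-and-checksum layer of NMEA 0183, the plain-text protocol most GPS modules speak over asynchronous serial: each sentence starts with `$`, ends with `*` plus two hex digits, and the digits are the XOR of every character in between. (This covers only validation; splitting the comma-separated fields comes after.)

```python
def nmea_checksum(body):
    """XOR of every character between the '$' and the '*'."""
    value = 0
    for ch in body:
        value ^= ord(ch)
    return f"{value:02X}"

def nmea_valid(sentence):
    """Check the framing and checksum of one NMEA sentence."""
    sentence = sentence.strip()
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    body, _, claimed = sentence[1:].partition("*")
    return nmea_checksum(body) == claimed.upper()
```

In practice you would read lines from the serial port (e.g. with a serial library) and discard any that fail this check, since bytes over a wire do get corrupted.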