Unit 2: Physical Computing

Physical computing is an approach to human-computer interaction design that starts by considering how humans express themselves physically. Computer interface design instruction often takes the computer hardware as a given — namely, that there is a keyboard, a screen, speakers, and a mouse, trackpad, or touchscreen — and concentrates on teaching the software needed to design within those boundaries.

In physical computing, we take the human body and its capabilities as the starting point and attempt to design interfaces, both software and hardware, that can sense and respond to what humans can physically do. Starting with a person’s capabilities requires an understanding of how a computer can sense physical action. When we act, we cause changes in various forms of energy: speech generates the air pressure waves that are sound; gestures change the flow of light and heat in a space. Electronic sensors convert these energy changes into changing electrical signals that can be read and interpreted by computers. In physical computing, we learn how to connect sensors to the simplest of computers, called microcontrollers, in order to read these changes and interpret them as actions.

Finally, we learn how microcontrollers communicate with other computers in order to connect physical action with multimedia displays. Physical computing takes a hands-on approach, which means that you spend a lot of time building circuits, soldering, writing programs, building structures to hold sensors and controls, and figuring out how best to make all of these things relate to a person’s expression.

Examples in art and design

1. Jürg Lehni: Viktor, a robotic chalk-drawing machine.
2. David Bowen: Tele-Present Water, a tele-kinetic installation.
3. Ellie Harrison: Vending Machine.
4. Georg Reil & Kathy Scheuring: Fine Collection of Curious Objects, sound sculptures.

Class Structure