Archive for the 'product design' Category

16 Nov: The next step after Clocky, Catapy!


Go Catapy, go!

Catapy from Yuichiro Katsumoto on Vimeo.

11 Jan: The affective intelligent driving agent!

AIDA is part of the Sociable Car - Senseable Cities project which is a collaboration between the Personal Robots Group at the MIT Media Lab and the Senseable Cities Group at MIT. The AIDA robot was designed and built by the Personal Robots Group, while the Senseable Cities Group is working on intelligent navigation algorithms.


One of the aims of the project is to expand the relationship between the car and the driver, with the goal of making the driving experience more effective, safer, and more enjoyable. As part of this expanded relationship, the researchers plan to introduce a new channel of communication between automobile and driver/passengers. This channel would be modeled on fundamental aspects of human social interaction, including the ability to express and perceive affective/emotional state and key social behaviors.

In pursuit of these aims they have developed the Affective Intelligent Driving Agent (AIDA), a novel in-car interface capable of communicating with the car's occupants using both physical movement and a high resolution display. This interface is a research platform, which can be used as a tool for evaluating various topics in the area of social human-automobile interaction. Ultimately, the research conducted using the AIDA platform should lead to the development of new kinds of automobile interfaces, and an evolution in the relationship between car and driver.

Currently the AIDA research platform consists of a fully functional robotic prototype embedded in a stand-alone automobile dash. The robot has a video camera for face and emotion recognition, touch sensing, and an embedded laser projector inside its head. A driving simulator is now being developed around the AIDA research platform in order to explore this new field of social human-automobile interaction. The researchers' intention is that a future version of the robot, based on the current research, will be installed in a functioning test vehicle.

The robot is super cute, though I wonder how distracting it might be; maybe it should be installed in the back with the kids as a babysitter. Kids would have a blast with it! Don't miss this video!

20 Nov: Music making machines

I am such a fan of everyday objects with personality, like in the work of Yuri Suzuki, where music is constructed from daily domestic noises, or of technologically advanced machines that produce music, like the pneumatic quintet by Pe Lang and Zimoun. I recently discovered the stunning work of Felix Thorn: Felix's Machines, music-making sculptures.

Video


19 Nov: Gesture Objects: movie making at the extension of natural play

I passed my PhD critique successfully! My committee: Hiroshi Ishii, Edith Ackermann and Cynthia Breazeal. I will now focus on a few more studies and on building a few more projects, as much as I can, before graduating (in 9 months). A little bit about my presentation …


Gesture Objects: Play it by Eye - Frame it by Hand!

I started with my master's thesis, Dolltalk, where I established the ability to access perspective as part of gesture analysis built into new play environments. I then moved into a significant transition phase, where I researched the cross-modal interface elements that contribute to various perspective-taking behaviors. I also presented the new technologies I implemented to conduct automatic film assembly.


The structure of my presentation

At each step, I present the studies that allow me to establish the principles I use to build the final project, the centerpiece of my third phase of research: Picture This. In its final form, Picture This is a fluid interface, with seamless integration of gesture, object, audio and video interaction in open-ended play.


With Picture This!, children make a movie from their toys' views, using their natural gestures with the toys to animate the characters and drive the video-making assembly. I developed a filtering algorithm for gesture recognition through which angles of motion are detected and interpreted!
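To give a flavor of the idea only (this is not the actual Picture This code; it is a minimal sketch with made-up function names, thresholds, and gesture labels): smooth a stream of 2D motion samples to remove jitter, then map the dominant angle of motion to a coarse gesture label.

```python
import math

# Hypothetical sketch of angle-based gesture filtering.
# Assumes a stream of (x, y) motion samples from a tracked toy;
# thresholds and labels are illustrative only.

def smooth(samples, window=5):
    """Simple moving average to filter out jitter."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        xs = sum(p[0] for p in chunk) / len(chunk)
        ys = sum(p[1] for p in chunk) / len(chunk)
        out.append((xs, ys))
    return out

def classify(samples):
    """Map the dominant angle of motion to a coarse gesture label."""
    (x0, y0), (x1, y1) = samples[0], samples[-1]
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    if -45 <= angle < 45:
        return "pan-right"
    if 45 <= angle < 135:
        return "tilt-up"
    if angle >= 135 or angle < -135:
        return "pan-left"
    return "tilt-down"

if __name__ == "__main__":
    raw = [(0, 0), (1, 0.2), (2, -0.1), (3, 0.1), (4, 0.0)]
    print(classify(smooth(raw)))   # -> "pan-right"
```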

Finally, I developed a framework that I call "gesture objects," synthesizing the research as it relates to the field of tangible user interfaces.


Gesture Objects Framework: In a gesture object interface, the interface recognizes gestures while the user is holding objects, and the gestural control of those objects in physical space influences the digital world.
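As a loose illustration of that definition (not code from the thesis; the names and example bindings below are assumed), a gesture-object interface can be read as a binding from (held object, recognized gesture) pairs to digital actions.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative sketch: the system recognizes a gesture *while* the user
# holds a physical object, and the pair (object, gesture) drives a
# digital effect. Names are hypothetical, not from the thesis.

@dataclass
class GestureObject:
    object_id: str                             # the tracked physical toy
    bindings: Dict[str, Callable[[], None]]    # gesture label -> digital effect

    def on_gesture(self, gesture: str) -> None:
        action = self.bindings.get(gesture)
        if action:
            action()

# Example: tilting a held toy starts recording from its point of view.
camera_toy = GestureObject(
    object_id="toy-camera",
    bindings={"tilt-up": lambda: print("start recording toy's view"),
              "pan-right": lambda: print("cut to next shot")},
)
camera_toy.on_gesture("tilt-up")
```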

A .pdf of my slides!

01 Nov: Cup communicator


Cup communicator by Duncan Wilson. Tug the cord to activate, squeeze to talk, and hold it to the mouth and ear.

The design of the Cup Communicator is focused on the gesture of use and the relationship between the user and the object. I aim to explore the potential of the product as a medium for interaction and to reassess the way we use technology.

The form and function of the Cup Communicator refer to the ‘two-cans and string’ children’s toy and the physical factors involved with that device. This typology and its associations remind us of the magic and playfulness of our first communication devices.


