Archive for the ‘interaction design’ Category

Mushtari: wear a microbial factory

Wednesday, July 15th, 2015

Front view of Mushtari filled with chemiluminescent fluid. Image: Paula Aguilera and Jonathan Williams.

How can we design relationships between the most primitive and sophisticated life forms? Can we design wearables embedded with synthetic microorganisms that can enhance and augment biological functionality? Meet Mushtari, a 3D-printed wearable designed as a 58-meter-long microbial factory that uses synthetic biology to convert sunlight into useful products for humans and microbes.

Mushtari was created by William Patrick, Sunanda Sharma and Steven Keating from the Mediated Matter group at the MIT Media Lab, in collaboration with Stratasys.

They explored these questions through the creation of Mushtari, a 3D printed wearable with 58 meters of internal fluid channels. The wearable is designed to function as a microbial factory that uses synthetic biology to convert sunlight into useful products for the wearer.

More info here.

The next step after Clocky, Catapy!

Wednesday, November 16th, 2011

Go Catapy, go!

Catapy from Yuichiro Katsumoto on Vimeo.

At UIST this Monday: Scopemate, a robotic microscope!

Monday, October 17th, 2011

I am at UIST this Monday to present one of the projects I have worked on with my mentor Paul Dietz since joining the Microsoft Applied Sciences Group. It is a very quick but efficient solution for those who like to solder small components!

Summary
Scopemate is a robotic microscope that tracks the user for inspection microscopy. In this video, we propose a new interaction mechanism for inspection microscopy: a novel input device that combines an optically augmented webcam with a head tracker. The head tracker controls the inspection angle of a webcam fitted with appropriate microscope optics, allowing an operator full use of their hands while intuitively looking at the work area from different perspectives. This work was done by researchers Cati Boulanger and Paul Dietz in the Applied Sciences Group at Microsoft and will be presented at UIST 2011 this Monday as both a demo and a poster!
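To give a feel for the idea, here is a minimal sketch (in Python, and not the actual Scopemate code) of how head-tracker angles could be mapped onto a motorized camera mount; the function names, gain and servo limits are all invented for illustration.

```python
# Hypothetical sketch: map head-tracker angles to pan/tilt commands so the
# microscope camera roughly mirrors the operator's viewpoint. Not Scopemate's code.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def head_to_servo(head_yaw_deg, head_pitch_deg, gain=0.5):
    """Scale head motion down so small head movements make fine camera moves."""
    pan = clamp(gain * head_yaw_deg, -45.0, 45.0)     # assumed camera pan limit
    tilt = clamp(gain * head_pitch_deg, -30.0, 30.0)  # assumed camera tilt limit
    return pan, tilt

# Example: operator leans 20 degrees right and looks 10 degrees down
print(head_to_servo(20.0, -10.0))  # -> (10.0, -5.0)
```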

Video

The evolution of the architectural medium in engaging digital 3D

Friday, August 12th, 2011

In a pretty neat thesis from the Graduate School of Design at Harvard, Greg Tran argues that as the traditional mode of material production moves forward, three new forms of design emerge. Digital 3d immersion is the first and is the most similar to virtual reality (but has little to nothing to do with architecture): a simulated environment which is entirely digital and relies on material/site specificity as little as possible. Digital 3d renovation is where existing facilities are retrofitted with site-specific D3d software and environment recognition. The final condition is Digital 3d architecture, which bridges the design gap between the digital and the material.

The purpose of his thesis is not to design an architecture that works perfectly within this new medium, but rather to highlight the medium itself, research its potential, create kernel ideas and discover the implications that this type of reality would hold.

Video

More versions:

Final segment here (2.5 minutes): Mediating Mediums - The Digital 3d (Part 3)
Short version here (5.5 minutes): Mediating Mediums - The Digital 3d (Short Version)
Long version here (19 minutes): Mediating Mediums: The Digital 3d

220 small pixel-tiles

Tuesday, November 2nd, 2010

It’s really nice to see friends and co-workers from the MIT Media Lab making their way onto the contemporary art scene. Zigelbaum and Coelho keep winning awards! After celebrating their Design Miami/Basel Designers of the Future award, they are now exhibiting in New York; you can see their work at the Johnson Trading Gallery.

They will show their computational light installation which steals the pixel from the screen and re-introduces it to the physical world. An ambitious, pulsating LED installation completes itself only when touched by the visitor, each movement modifying and transforming the work itself.

The gun-testing vault at Riflemaker will house 220 luminescent pixel-tiles. Visitors to the gallery will be able to change the colours of the tiles, create a rhythmic pulse and re-arrange the overall form of the square, magnetic blocks.
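Purely as a toy illustration of the interaction described above (this is not Zigelbaum & Coelho's software), here is a small Python sketch of 220 tiles where touching a tile changes its colour while all tiles share a slow luminous pulse; everything except the tile count is made up.

```python
# Hypothetical sketch of the interaction idea only: touchable colour tiles
# sharing a sinusoidal pulse. Not the installation's actual firmware.
import math, time

class Tile:
    def __init__(self):
        self.color = (255, 255, 255)  # RGB, white by default
    def brightness(self, t, period=2.0):
        # simple shared sinusoidal pulse between 0 and 1
        return 0.5 + 0.5 * math.sin(2 * math.pi * t / period)

tiles = [Tile() for _ in range(220)]

def touch(index, new_color):
    """A visitor's touch recolours one tile."""
    tiles[index].color = new_color

touch(42, (255, 0, 0))
print(tiles[42].color, round(tiles[42].brightness(time.time()), 2))
```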

Zigelbaum & Coelho is a design studio founded by Jamie Zigelbaum and Marcelo Coelho. Their work utilises physical, computational, and cultural materials in the service of creating new, but fundamentally human, experiences.


The affective intelligent driving agent!

Monday, January 11th, 2010

AIDA is part of the Sociable Car - Senseable Cities project which is a collaboration between the Personal Robots Group at the MIT Media Lab and the Senseable Cities Group at MIT. The AIDA robot was designed and built by the Personal Robots Group, while the Senseable Cities Group is working on intelligent navigation algorithms.


One of the aims of the project is to expand the relationship between the car and the driver, with the goal of making the driving experience more effective, safer, and more enjoyable. As part of this expanded relationship, the researchers plan to introduce a new channel of communication between automobile and driver/passengers. This channel would be modeled on fundamental aspects of human social interaction, including the ability to express and perceive affective/emotional state and key social behaviors.

In pursuit of these aims they have developed the Affective Intelligent Driving Agent (AIDA), a novel in-car interface capable of communicating with the car's occupants using both physical movement and a high-resolution display. This interface is a research platform that can be used as a tool for evaluating various topics in the area of social human-automobile interaction. Ultimately, the research conducted using the AIDA platform should lead to the development of new kinds of automobile interfaces, and an evolution in the relationship between car and driver.

Currently the AIDA research platform consists of a fully functional robotic prototype embedded in a stand-alone automobile dash. The robot has a video camera for face and emotion recognition, touch sensing, and a laser projector embedded inside its head. A driving simulator is being developed around the AIDA research platform in order to explore this new field of social human-automobile interaction. The researchers' intention is that a future version of the robot, based on the current research, will be installed in a functioning test vehicle.
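As a rough illustration of what an affective channel could look like in code (this is not the AIDA implementation; the signal names and thresholds are invented), here is a tiny Python sketch that picks a coarse expression for a dashboard robot from a few context values.

```python
# Hypothetical sketch only: choose an affective display for an in-car agent
# from simple context signals. Signals and thresholds are invented examples.

def choose_expression(traffic_density, driver_stress, trip_on_schedule):
    """Return a coarse affective state for the dashboard robot to display."""
    if driver_stress > 0.7 or traffic_density > 0.8:
        return "calm_concern"      # de-escalate rather than mirror stress
    if not trip_on_schedule:
        return "encouraging"
    return "cheerful"

print(choose_expression(traffic_density=0.9, driver_stress=0.4, trip_on_schedule=True))
# -> "calm_concern"
```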

The robot is super cute; I do wonder how distracting it might be. Maybe it should be installed in the back with the kids as a babysitter, as kids would have a blast with it! Don't miss this video!

When building blocks meet craft …

Monday, November 30th, 2009


While looking for baby clothing at Muji Japan, as suggested by Kimiko, I came across this surprising Lego & Muji love affair. I don't understand Japanese, so I could not make much sense of the text, but it seems pretty neat:


You combine Lego bricks with craft materials to fluidly assemble creatures, people, or even Christmas cards. It's a great way to expand the way kids work with traditional Lego blocks, integrating unlimited paper-craft creations, meaning unlimited imagination.


As soon as this becomes available here, I’ll get myself a kit!!

Gesture Objects: movie making at the extension of natural play

Thursday, November 19th, 2009

I passed my PhD critique successfully! My committee: Hiroshi Ishii, Edith Ackermann and Cynthia Breazeal. I will now focus on a few more studies and build a few more projects, as much as I can, before graduating (in 9 months). A little bit on my presentation …


Gesture Objects: Play it by Eye - Frame it by Hand!

I started with my master's thesis, Dolltalk, where I establish the ability to access perspective as part of gesture analysis built into new play environments. I then move into a significant transition phase, where I research the cross-modal interface elements that contribute to various perspective-taking behaviors. I also present new technologies I implemented to conduct automatic film assembly.


The structure of my presentation

At each step, I present the studies that allow me to establish principles which I use to build the final project, the centerpiece of my third phase of research, Picture This. At its final point, Picture This is a fluid interface, with seamless integration of gesture, object, audio and video interaction in open-ended play.


With Picture This!, children make a movie from their toys' point of view, using their natural gestures with the toys to animate the characters and command the movie-making assembly. I developed a filtering algorithm for gesture recognition through which angles of motion are detected and interpreted!
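To make that idea concrete, here is a minimal Python sketch in the spirit of the description, not the actual thesis code: it smooths a stream of 2D positions, derives the angle of motion between consecutive samples, and buckets the angles into coarse gesture labels. The smoothing constant and angle ranges are arbitrary choices for the example.

```python
# Hypothetical sketch: filter a 2D motion trace, then detect and interpret
# angles of motion as coarse gestures. Not the dissertation's algorithm.
import math

def smooth(points, alpha=0.3):
    """Exponential moving average over (x, y) samples to suppress jitter."""
    out, prev = [], points[0]
    for p in points:
        prev = (alpha * p[0] + (1 - alpha) * prev[0],
                alpha * p[1] + (1 - alpha) * prev[1])
        out.append(prev)
    return out

def motion_angles(points):
    """Angle of motion (degrees) between consecutive smoothed samples."""
    return [math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
            for a, b in zip(points, points[1:])]

def classify(angle):
    if -45 <= angle <= 45:
        return "sweep right"
    if angle >= 135 or angle <= -135:
        return "sweep left"
    return "lift" if angle > 0 else "drop"

trace = [(0, 0), (1, 0.2), (2, 0.1), (3, 0.3)]
print([classify(a) for a in motion_angles(smooth(trace))])
```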

Finally, I developed a framework that I call "gesture objects", synthesizing the research as it relates to the field of tangible user interfaces.


Gesture Objects Framework: In a gesture object interface, the system recognizes gestures while the user is holding objects, and the gestural control of those objects in physical space influences the digital world.
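Read as pseudocode, that framework might look something like the toy Python sketch below, which is my own illustration rather than the dissertation software: the system looks up which toy is being held and the gesture performed with it, and maps the pair to a movie-making command. The mapping table is entirely invented.

```python
# Hypothetical illustration of a "gesture object" lookup: (held object, gesture)
# pairs drive digital movie-making commands. Not the actual Picture This! code.

COMMANDS = {
    ("dinosaur", "shake"): "record character roar",
    ("dinosaur", "sweep"): "pan the virtual camera",
    ("car",      "push"):  "start a tracking shot",
    ("any",      "lift"):  "cut to the next scene",
}

def gesture_object_command(held_object, gesture):
    """Resolve a specific mapping first, then fall back to an 'any object' rule."""
    return (COMMANDS.get((held_object, gesture))
            or COMMANDS.get(("any", gesture))
            or "no action")

print(gesture_object_command("dinosaur", "shake"))  # -> record character roar
print(gesture_object_command("car", "lift"))        # -> cut to the next scene
```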

A .pdf of my slides!

Cup communicator

Sunday, November 1st, 2009



Cup Communicator by Duncan Wilson. Tug the cord to activate, squeeze to talk, and hold it to the mouth and ear.

The design of the Cup Communicator is focused on the gesture of use and the relationship between the users and object. I aim to explore the potential of the product as a medium for interaction and reassess the way we use technology.

The form and function of the Cup Communicator refer to the ‘two-cans and string’ children’s toy and the physical factors involved with that device. This typology and its associations remind us of the magic and playfulness of our first communication devices.
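The interaction can be read as a small state machine. Here is a hypothetical Python sketch of it (tug to activate, squeeze to talk); it is not Duncan Wilson's actual firmware, and the event names are mine.

```python
# Hypothetical state machine for the described interaction: tug the cord to
# wake the device, squeeze to open the microphone, release to stop talking.

class CupCommunicator:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        if self.state == "idle" and event == "tug_cord":
            self.state = "active"      # device wakes up
        elif self.state == "active" and event == "squeeze":
            self.state = "talking"     # microphone open while squeezed
        elif self.state == "talking" and event == "release":
            self.state = "active"      # back to listening
        return self.state

cup = CupCommunicator()
for e in ("tug_cord", "squeeze", "release"):
    print(e, "->", cup.handle(e))
```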




Play-it-by-eye! Collect movies and improvise perspectives with tangible video objects

Monday, June 22nd, 2009

My journal paper "Play-it-by-eye! Collect movies and improvise perspectives with tangible video objects" has now been published by Cambridge University Press!

The paper in .pdf ->here<-

We present an alternative video-making framework for children with tools that integrate video capture with movie production. We propose different forms of interaction with physical artifacts to capture storytelling. Play interactions as input to video editing systems assuage the interface complexities of film construction in commercial software. We aim to motivate young users in telling their stories, extracting meaning from their experiences by capturing supporting video to accompany their stories, and driving reflection on the outcomes of their movies. We report on our design process over the course of four research projects that span from a graphical user interface to a physical instantiation of video. We interface the digital and physical realms using tangible metaphors for digital data, providing a spontaneous and collaborative approach to video composition. We evaluate our systems during observations with 4- to 14-year-old users and analyze their different approaches to capturing, collecting, editing, and performing visual and sound clips.