Archive for the ‘technology’ Category

Mushtari: wear a microbial factory

Wednesday, July 15th, 2015

Front view of Mushtari filled with chemiluminescent fluid. Image: Paula Aguilera and Jonathan Williams.

How can we design relationships between the most primitive and sophisticated life forms? Can we design wearables embedded with synthetic microorganisms that can enhance and augment biological functionality? Meet Mushtari, a 3D-printed wearable designed as a 58-meter-long microbial factory that uses synthetic biology to convert sunlight into useful products for humans and microbes.

Mushtari was created by William Patrick, Sunanda Sharma and Steven Keating, from the Mediated Matter group at the MIT Media Lab, in collaboration with Stratasys.

They explored these questions through Mushtari's design: the wearable's 58 meters of internal fluid channels form the microbial factory that converts sunlight into useful products for the wearer.

More info here.

Making the Invisible Visible in Video

Thursday, June 21st, 2012

MIT researchers — graduate student Michael Rubinstein, recent alumni Hao-Yu Wu ‘12, MNG ‘12 and Eugene Shih SM ‘01, PhD ‘10, and professors William Freeman, Fredo Durand and John Guttag — will present new software at this summer’s Siggraph, the premier computer-graphics conference. The software amplifies variations in successive frames of video that are imperceptible to the naked eye.

See the researchers’ full video and learn more on the project’s webpage: http://people.csail.mit.edu/mrub/vidmag/
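The core idea lends itself to a compact sketch: treat each pixel's intensity as a signal over time, band-pass filter it around the frequency of interest, amplify the filtered signal, and add it back to the frames. The Python below illustrates only that step, assuming grayscale frames and an illustrative filter band and gain; the researchers' full pipeline also involves spatial decomposition and more careful color handling.

    # Minimal sketch: temporally amplify tiny per-pixel changes in video.
    # Frame format, filter band, and gain are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def amplify_subtle_variation(frames, fps, low_hz=0.8, high_hz=1.2, alpha=50.0):
        """frames: float array of shape (T, H, W), grayscale values in [0, 1]."""
        # Band-pass each pixel's intensity over time to isolate the subtle
        # variation of interest (e.g. a pulse at roughly 1 Hz).
        b, a = butter(2, [low_hz, high_hz], btype="band", fs=fps)
        subtle = filtfilt(b, a, frames, axis=0)
        # Amplify the filtered signal and add it back to the original frames.
        return np.clip(frames + alpha * subtle, 0.0, 1.0)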

The next step after Clocky, Catapy!

Wednesday, November 16th, 2011

Go Catapy, go!

Catapy from Yuichiro Katsumoto on Vimeo.

At UIST this Monday: Scopemate, a robotic microscope!

Monday, October 17th, 2011

I am at UIST this Monday to present one of my projects, done with my mentor Paul Dietz since I joined the Microsoft Applied Sciences Group. It is a very quick but efficient solution for those who like to solder small components!

Summary
Scopemate is a robotic microscope that tracks the user for inspection microscopy. In this video, we propose a new interaction mechanism for inspection microscopy. The novel input device combines an optically augmented webcam with a head tracker. The head tracker controls the inspection angle of a webcam fitted with appropriate microscope optics. This allows an operator the full use of their hands while intuitively looking at the work area from different perspectives. This work was done by researchers Cati Boulanger and Paul Dietz in the Applied Sciences Group at Microsoft and will be presented at UIST 2011 this Monday as both a demo and a poster!
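Conceptually, the control loop is simple: read the operator's head pose, map it to a viewing angle within the mount's mechanical range, and command the actuator. The sketch below illustrates only that mapping; the gain, angle limit, and the commented tracker and servo calls are placeholders, not the actual Scopemate implementation.

    # Illustrative mapping from the operator's head yaw to the camera tilt.
    # The gain and mechanical limit are assumed values, not Scopemate's.
    def head_to_camera_angle(head_yaw_deg, gain=1.5, limit_deg=30.0):
        angle = gain * head_yaw_deg
        return max(-limit_deg, min(limit_deg, angle))

    # Hypothetical control loop (tracker and servo APIs are placeholders):
    # while True:
    #     yaw = tracker.read_head_yaw()
    #     servo.set_angle(head_to_camera_angle(yaw))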

Video

The secrets of a pop-up book!

Saturday, February 19th, 2011


My current favorite pop-up book for the iPad: The Three Little Pigs and the Secrets of a Pop-Up Book. Almost as interactive as a real pop-up book! Thank you Sumit!
Of course, my favorite tech pop-up book for the iPad comes from les éditions volumiques!

You can find it -> here

220 petites Pixel-tiles

Tuesday, November 2nd, 2010

It’s really nice to see friends and co-workers from the MIT Media Lab making their way onto the contemporary art scene. Zigelbaum and Coelho keep winning awards! After celebrating their Design Miami/Basel Designers of the Future award, they are now exhibiting in New York: you can see their work at the Johnson Trading Gallery.

They will show their computational light installation which steals the pixel from the screen and re-introduces it to the physical world. An ambitious, pulsating LED installation completes itself only when touched by the visitor, each movement modifying and transforming the work itself.

The gun-testing vault at Riflemaker will house 220 luminescent pixel-tiles. Visitors to the gallery will be able to change the colours of the tiles, create a rhythmic pulse and re-arrange the overall form of the square, magnetic blocks.

Zigelbaum & Coelho is a design studio founded by Jamie Zigelbaum and Marcelo Coelho. Their work utilises physical, computational, and cultural materials in the service of creating new, but fundamentally human, experiences.


The affective intelligent driving agent!

Monday, January 11th, 2010

AIDA is part of the Sociable Car - Senseable Cities project which is a collaboration between the Personal Robots Group at the MIT Media Lab and the Senseable Cities Group at MIT. The AIDA robot was designed and built by the Personal Robots Group, while the Senseable Cities Group is working on intelligent navigation algorithms.


One of the aims of the project is to expand the relationship between the car and the driver, with the goal of making the driving experience more effective, safer, and more enjoyable. As part of this expanded relationship, the researchers plan to introduce a new channel of communication between automobile and driver/passengers. This channel would be modeled on fundamental aspects of human social interaction, including the ability to express and perceive affective/emotional states and key social behaviors.

In pursuit of these aims they have developed the Affective Intelligent Driving Agent (AIDA), a novel in-car interface capable of communicating with the car's occupants using both physical movement and a high-resolution display. This interface is a research platform that can be used as a tool for evaluating various topics in the area of social human-automobile interaction. Ultimately, the research conducted using the AIDA platform should lead to the development of new kinds of automobile interfaces, and an evolution in the relationship between car and driver.

Currently the AIDA research platform consists of a fully functional robotic prototype embedded in a stand-alone automobile dash. The robot has a video camera for face and emotion recognition, touch sensing, and an embedded laser projector inside its head. A driving simulator is now being developed around the AIDA research platform in order to explore this new field of social human-automobile interaction. The researchers' intention is that a future version of the robot, based on the current research, will be installed in a functioning test vehicle.

The robot is super cute. I do wonder whether it might be distracting, though; maybe it should be installed in the back with the kids as a babysitter. Kids would have a blast with it! Don't miss this video!

Music making machines

Friday, November 20th, 2009

I am such a fan of everyday objects with personality, like in the work of Yuri Suzuki, where music is constructed from daily domestic noises, or of technologically advanced machines that produce music, like the pneumatic quintet by Pe Lang and Zimoun. I recently discovered the stunning work of Felix Thorn: Felix's Machines, music-making sculptures.

Video


Gesture Objects: movie making at the extension of natural play

Thursday, November 19th, 2009

I passed my PhD critique successfully! My committee: Hiroshi Ishii, Edith Ackermann and Cynthia Breazeal. I will now focus on a few more studies and on building as many more projects as I can before graduating (in 9 months). A little bit about my presentation …


Gesture Objects: Play it by Eye - Frame it by Hand!

I started with my master's thesis, Dolltalk, where I established the ability to access perspective as part of gesture analysis built into new play environments. I then moved into a significant transition phase, where I researched the cross-modal interface elements that contribute to various perspective-taking behaviors. I also presented new technologies I implemented to conduct automatic film assembly.


The structure of my presentation

At each step, I presented the studies that allowed me to establish principles, which I then used to build the final project and centerpiece of my third phase of research, Picture This. In its final form, Picture This is a fluid interface, with seamless integration of gesture, object, audio and video interaction in open-ended play.


With Picture This!, children make a movie from their toys' views, using their natural gestures with the toys to animate the characters and command the video-making assembly. I developed a filtering algorithm for gesture recognition through which angles of motion are detected and interpreted!
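To give a rough idea of what detecting angles of motion from a tracked toy can look like, here is a small Python sketch: it smooths a sequence of 2D positions and computes the direction of each motion step. This is only an illustration under assumed parameters (the moving-average window, the 2D position input), not the filtering algorithm actually used in Picture This.

    # Sketch: smooth a tracked toy's 2D positions and compute the direction
    # of each motion step. Not the actual Picture This algorithm.
    import math

    def motion_angles(positions, window=3):
        """positions: list of (x, y) samples; returns step directions in degrees."""
        # Simple moving-average filter to suppress hand jitter.
        smoothed = []
        for i in range(len(positions) - window + 1):
            xs = [p[0] for p in positions[i:i + window]]
            ys = [p[1] for p in positions[i:i + window]]
            smoothed.append((sum(xs) / window, sum(ys) / window))
        # Angle of each step between consecutive smoothed samples.
        return [math.degrees(math.atan2(y1 - y0, x1 - x0))
                for (x0, y0), (x1, y1) in zip(smoothed, smoothed[1:])]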

Finally, I developed a framework that I call “gesture objects” synthesizing the research as it relates to the field of tangible user interfaces.


Gesture Objects Framework: In a gesture object interface, the interface recognizes gestures while the user is holding objects, and the gesture control of those objects in physical space influences the digital world.

A .pdf of my slides!

When atoms become bits!

Saturday, November 7th, 2009


Directly from Japan, look at the Tuttuki Bako Virtual Finger Game! 100% real and fantastically crazy: simply stick your finger in the hole and a digital representation of it appears on the screen. Then you can use your virtual finger to play all kinds of cool mini games… from swinging a panda to having a karate fight with a tiny little man. It’s so odd yet so wonderful.

You can find it at ThinkGeek
