Archive for the 'research' Category

21 Jun: Making the Invisible Visible in Video


MIT researchers (graduate student Michael Rubinstein, recent alumni Hao-Yu Wu '12, MNG '12 and Eugene Shih SM '01, PhD '10, and professors William Freeman, Fredo Durand and John Guttag) will present new software at this summer's Siggraph, the premier computer-graphics conference, that amplifies variations in successive frames of video that are imperceptible to the naked eye.

See the researchers’ full video and learn more on the project’s webpage: http://people.csail.mit.edu/mrub/vidmag/
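
If you are curious what "amplifying imperceptible variations" can look like in code, here is a minimal sketch of the underlying idea: band-pass filter each pixel's intensity over time and add the amplified result back to the video. The function name, filter band and gain below are my own illustrative choices, not the researchers' values, and their actual method (described on the project page) is considerably more sophisticated.

import numpy as np
from scipy.signal import butter, filtfilt

def amplify_temporal_variations(frames, fps, low_hz=0.8, high_hz=1.5, gain=20.0):
    """Toy sketch: frames is a (T, H, W) or (T, H, W, C) float array in [0, 1]."""
    nyquist = fps / 2.0
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")
    # Band-pass filter every pixel's intensity over time (axis 0) to isolate
    # the small periodic variations we want to exaggerate.
    variations = filtfilt(b, a, frames, axis=0)
    # Add the amplified variations back and clip to a valid intensity range.
    return np.clip(frames + gain * variations, 0.0, 1.0)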

16 Nov: The next step after Clocky, Catapy!

Go Catapy, go!

Catapy from Yuichiro Katsumoto on Vimeo.

17 Oct: At UIST this Monday: Scopemate, a robotic microscope!

I am at UIST this Monday to present one of the projects I have worked on since joining the Microsoft Applied Sciences Group, along with my mentor Paul Dietz. It is a very quick but efficient solution for those who like to solder small components!

Summary
Scopemate is a robotic microscope that tracks the user for inspection microscopy. In this video, we propose a new interaction mechanism for inspection microscopy: a novel input device that combines an optically augmented webcam with a head tracker. The head tracker controls the inspection angle of a webcam fitted with appropriate microscope optics. This allows an operator the full use of their hands while intuitively looking at the work area from different perspectives. This work was done by researchers Cati Boulanger and Paul Dietz in the Applied Sciences Group at Microsoft and will be presented at UIST 2011 this Monday as both a demo and a poster!
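
To give a feel for the interaction, here is a tiny sketch of the mapping at its core: the operator's head pose drives the pan and tilt of the motorized camera, so leaning around the workpiece changes the inspection perspective. The gain, limits and function below are illustrative assumptions, not the actual Scopemate implementation.

def head_pose_to_camera_angles(head_yaw_deg, head_pitch_deg,
                               gain=1.5, pan_limit=45.0, tilt_limit=30.0):
    """Map head yaw/pitch (degrees, 0 = looking straight at the part)
    to pan/tilt commands for the camera, clamped to mechanical limits."""
    clamp = lambda value, limit: max(-limit, min(limit, value))
    pan = clamp(gain * head_yaw_deg, pan_limit)
    tilt = clamp(gain * head_pitch_deg, tilt_limit)
    return pan, tilt

# Example: the operator leans 10 degrees to the right and tilts 5 degrees down.
print(head_pose_to_camera_angles(10.0, -5.0))  # (15.0, -7.5)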

Video

11 Jan: The affective intelligent driving agent!

AIDA is part of the Sociable Car - Senseable Cities project, a collaboration between the Personal Robots Group at the MIT Media Lab and the Senseable Cities Group at MIT. The AIDA robot was designed and built by the Personal Robots Group, while the Senseable Cities Group is working on intelligent navigation algorithms.


One of the aims of the project is to expand the relationship between the car and the driver, with the goal of making the driving experience more effective, safer, and more enjoyable. As part of this expanded relationship, the researchers plan to introduce a new channel of communication between automobile and driver/passengers. This channel would be modeled on fundamental aspects of human social interaction, including the ability to express and perceive affective/emotional state and key social behaviors.

In pursuit of these aims, they have developed the Affective Intelligent Driving Agent (AIDA), a novel in-car interface capable of communicating with the car's occupants using both physical movement and a high-resolution display. This interface is a research platform that can be used as a tool for evaluating various topics in the area of social human-automobile interaction. Ultimately, the research conducted using the AIDA platform should lead to the development of new kinds of automobile interfaces, and an evolution in the relationship between car and driver.

Currently the AIDA research platform consists of a fully functional robotic prototype embedded in a stand-alone automobile dash. The robot has a video camera for face and emotion recognition, touch sensing, and an embedded laser projector inside the head. A driving simulator is currently being developed around the AIDA research platform in order to explore this new field of social human-automobile interaction. The researchers' intention is that a future version of the robot based on the current research will be installed into a functioning test vehicle.
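
To make the "affective channel" idea a bit more concrete, here is a purely illustrative sketch of the loop described above: perceive something about the driver's state and choose an expressive response. None of the state names or mappings below come from the AIDA project itself; they are placeholders.

# Placeholder mapping from a perceived driver state to an expressive response.
RESPONSES = {
    "stressed": {"expression": "calm", "suggestion": "propose a quieter route"},
    "drowsy":   {"expression": "alert", "suggestion": "suggest a coffee stop nearby"},
    "happy":    {"expression": "happy", "suggestion": None},
}

def respond_to_driver(perceived_state):
    """Pick the robot's facial expression and an optional projected suggestion."""
    return RESPONSES.get(perceived_state, {"expression": "neutral", "suggestion": None})

print(respond_to_driver("drowsy"))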

The robot is super cute; I do wonder how distracting it might be. Maybe it should be installed in the back with the kids as a babysitter, they would have a blast with it! Don't miss this video!

19 Nov: Gesture Objects: movie making at the extension of natural play

I passed my PhD critique successfully! My committee: Hiroshi Ishii, Edith Ackermann and Cynthia Breazeal. I will now focus on a few more studies and on building a few more projects, as much as I can, before graduating (in 9 months). A little bit on my presentation …


Gesture Objects: Play it by Eye - Frame it by Hand!

I started with my master's thesis, Dolltalk, where I establish the ability to access perspective as part of gesture analysis built into new play environments. I then move into a significant transition phase, where I research the cross-modal interface elements that contribute to various perspective-taking behaviors. I also present new technologies I implemented to conduct automatic film assembly.


The structure of my presentation

At each step, I present the studies that allow me to establish the principles I use to build the final project, the centerpiece of my third phase of research: Picture This. In its final form, Picture This is a fluid interface, with seamless integration of gesture, object, audio and video interaction in open-ended play.


With Picture This!, children make a movie from their toys' views, using their natural gestures with the toys to animate the characters and command the movie-making assembly. I developed a filtering algorithm for gesture recognition through which angles of motion are detected and interpreted!
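
For readers who want a concrete picture, here is a hedged sketch of what detecting and interpreting angles of motion can look like, assuming 2-D motion samples from a tracked toy. The smoothing window, thresholds and gesture labels are illustrative only; this is not the algorithm from my thesis.

import math

def smooth(samples, window=5):
    """Simple moving-average filter over (x, y) motion samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append((sum(p[0] for p in chunk) / len(chunk),
                    sum(p[1] for p in chunk) / len(chunk)))
    return out

def motion_angles(samples):
    """Angle (degrees) of each displacement between consecutive smoothed samples."""
    pts = smooth(samples)
    return [math.degrees(math.atan2(y2 - y1, x2 - x1))
            for (x1, y1), (x2, y2) in zip(pts, pts[1:])]

def interpret(angle):
    """Map an angle of motion to a coarse gesture label (illustrative labels only)."""
    if -45 <= angle <= 45:
        return "sweep right"
    if angle > 135 or angle < -135:
        return "sweep left"
    return "lift" if angle > 0 else "drop"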

Finally, I developed a framework that I call “gesture objects,” synthesizing the research as it relates to the field of tangible user interfaces.


Gesture Objects Framework: In a gesture object interface, the interface recognizes gestures while the user is holding objects, and the gestural control of those objects in physical space influences the digital world.
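
One way to picture the framework in code: a gesture only becomes meaningful when paired with the physical object being held, and it is the pair that drives the digital world. The event structure, object names and actions below are mine, purely for illustration.

from dataclasses import dataclass

@dataclass
class GestureObjectEvent:
    """Illustrative pairing of a recognized gesture with the object being held."""
    object_id: str   # which physical toy or prop the user is holding
    gesture: str     # gesture recognized while holding it, e.g. "sweep right"

def interpret_event(event):
    """The same gesture maps to different digital actions depending on the object."""
    actions = {
        ("toy_character", "sweep right"): "move the character right in the scene",
        ("toy_character", "lift"):        "make the character jump",
        ("camera_prop",   "sweep right"): "pan the virtual camera right",
        ("camera_prop",   "lift"):        "cut to a new shot",
    }
    return actions.get((event.object_id, event.gesture), "no digital effect")

print(interpret_event(GestureObjectEvent("camera_prop", "lift")))  # cut to a new shot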

A .pdf of my slides!

