Author: Julie Knight

  • 23 Jun: Your hand is the new magic wand!

    OnObject, designed by Keywon Chung, Michael Shilman, Chris Merrill, and Hiroshi Ishii, is a small device worn on the hand that lets you program physical objects to respond to gestural triggers.

    Attach an RFID tag to any object, grab it by the tag, and program its responses to your grab, release, shake, swing, and thrust gestures using the built-in microphone or the on-screen interface. Using OnObject, children, parents, teachers, and other end users can instantly create gestural object interfaces and enjoy them. Copy and paste the programming from one object to another to propagate interactivity through your environment.
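
    Just to make the interaction model concrete, here is a tiny, purely illustrative Python sketch of a tag-to-gesture-to-response mapping with copy-paste. The gesture names come from the description above, but the tag IDs and sound files are made up, and this is not the actual OnObject software.

    ```python
    # Hypothetical sketch of the OnObject interaction model: each RFID-tagged
    # object maps gestures to recorded responses, and a mapping can be
    # copy-pasted from one tagged object to another.

    GESTURES = {"grab", "release", "shake", "swing", "thrust"}

    class TaggedObject:
        def __init__(self, tag_id):
            self.tag_id = tag_id
            self.responses = {}  # gesture -> response, e.g. a recorded sound clip

        def program(self, gesture, response):
            if gesture not in GESTURES:
                raise ValueError(f"unknown gesture: {gesture}")
            self.responses[gesture] = response

        def trigger(self, gesture):
            response = self.responses.get(gesture)
            if response:
                print(f"[{self.tag_id}] {gesture} -> playing {response}")

        def copy_programming_to(self, other):
            # "Copy-paste" the interactivity onto another tagged object.
            other.responses = dict(self.responses)

    # Program a plush toy, then propagate the same behaviour to a mug.
    toy = TaggedObject("tag-001")
    toy.program("shake", "giggle.wav")
    toy.program("thrust", "whoosh.wav")

    mug = TaggedObject("tag-002")
    toy.copy_programming_to(mug)
    mug.trigger("shake")  # [tag-002] shake -> playing giggle.wav
    ```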

    Watch the videos!

    Applications

    Children recording on objects

    System


  • 19 Jun: SIGGRAPH 2010: Emerging Technologies Trailer


  • 17 Jun: OK Go magic numbers!

    Jeff Lieberman has created so many creative projects that I have blogged about, including a few collaborative projects we worked on together (lucky me!). Now he is making the coolest music videos: Eric Gunther and Jeff Lieberman recently made one with OK Go. You’ve seen OK Go on treadmills and in their backyard, but you’ve never seen them like this. With some fancy cameras and a little magic, they figured out how to dance with time.

    For those of you who like numbers…

    The fastest they go is 172,800x, compressing 24 hours of real time into a blazing 1/2 second. The slowest is 1/32x speed, stretching a mere 1/2 second of real time into a whopping 16 seconds. That gives a fastest-to-slowest ratio of about 5.5 million. If you like averages, the average speed-up factor of the band dancing is 270x. In total they shot 18 hours of the band dancing and 192 hours of LA skyline timelapse – over a million frames of video – and compressed it all down to 4 minutes and 30 seconds! Oh, and notice that it’s one continuous camera shot.
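
    If you want to check those figures yourself, a few lines of Python reproduce them (these are just the numbers quoted above, nothing new):

    ```python
    # Sanity check of the quoted speed factors.
    real_day = 24 * 60 * 60        # 24 hours of real time, in seconds
    fastest = real_day / 0.5       # compressed into half a second of video
    print(fastest)                 # 172800.0 -> the 172,800x figure

    slowest = 1 / 32               # 1/32x playback speed
    print(0.5 / slowest)           # 16.0 -> half a second stretched to 16 seconds

    print(fastest / slowest)       # 5529600.0 -> roughly 5.5 million
    ```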

    They also made a special friend in the process. Her name is Orange Bill and she’s a goose. You will agree that she clearly has a future in music videos.

    The song is called End Love and it’s off their new album.

    And they had hella fun making it and I believe them!


  • 17 Jun: Magic books

    Speaking of creative projects and amazing people: Etienne Mineur, who was my professor in Paris (lucky me again!!), and Bertrand Duplat have just released their éditions volumiques.

    They consider paper a new computing platform, envisioning an OS made of paper, video games on paper, and so on. They also research the relationship between the act of reading, the physical handling of a book, and new technologies. The core concept in this work is to stop opposing the digital world and the paper world and instead to find synergy and complementarity: tangible books that are connected and magical.

    Here it is: their new series of prototypes and research on this subject. Some of the magic books will be available in September 2010!

    Some preview videos!

    (i)Pawn

    (i)Pirates

    The Night of the Living Dead Pixels

    Meeting-Zombies

    Le livre qui tourne ses pages tout seul (the book that turns its own pages)

    Le livre qui disparaît (the book that disappears)

  • 02 Jun: A tribute to a love for books

    French artist Olivier Vaubourg, based in Zagreb, Croatia, explores the relationship between light, textures, and the chosen words of books he loves.

    The enlightened Machiavelli

    Baudrillard | The perfect crime | Blood

    Lyotard | Condition post-moderne | Ligne

    Guattari | Chaosmose | Slice

    Do you feel the power?


  • 11 May: Amazon’s reviews

    I was reading some Amazon reviews and stumbled onto this dude’s reviews, which are hilarious!

    Start with the Creative Fatality Gaming Headset, and you’ll move on to the review for Uranium Ore …

    I will spend more time reading Amazon’s reviews from now on, thank you Adam!!

  • 16 Feb: Picture This! into a product

    I have been working on tangible video capture, editing, and performance systems since 2000, so it is nice to see a product translated directly from my research! At the MIT Media Lab I was a Mattel fellow for four consecutive terms, and for my PhD I created Picture This, basically dolls with cameras integrated into their accessories to alternate viewpoints, record, and play back videos. Mattel will release its first doll with an integrated video recorder in July 2010!

    Pictures from Chick Chiplets

    So, more details about the product: Mattel developed a toy that features a video camera built directly into Barbie’s necklace, with an LCD video screen on her back, so you can record and view everything that Barbie has seen and experienced!

    You can record videos up to 30 minutes long and even edit them (add music and sound effects) on Barbie.com. The Barbie Video Girl Doll will cost around $50 and will be available in July 2010…


  • 16 Feb: Augmented (hyper)Reality: Domestic Robocop

    Augmented (hyper)Reality: Domestic Robocop from Keiichi Matsuda on Vimeo.

  • 02 Feb: A DIY multitouch keyboard & a smart glass without electronics!

    I had a chance to give a talk at Microsoft Research this January and to meet some fantastic researchers. I had very inspiring discussions about the future of HCI, bringing design, fabrication, and strong theoretical foundations into the mix. I also visited unique labs and saw some neat projects. For instance, Paul Dietz from the Applied Sciences Group showed me his team’s keyboard, which applies his technical contribution to the famous MERL DiamondTouch table to a regular keyboard, making it not only multitouch (you can press multiple keys at the same time as input and receive output accordingly) but also pressure sensitive! The keyboard was presented at UIST this year.
    Here’s a video:

    The research has already been partially integrated into a product that will be released by March 2010. This keyboard, the SideWinder X4, will be extremely nice for keyboard gamers (like myself) who suffer from the ghosting problem: the keyboard loses track of new key presses while another key is already held down. The new keyboard lets a gamer press up to 26 keys at the same time!

    Among other research projects, the team explored the possibility for a tabletop interface, such as Microsoft Surface, to recognize everyday objects without the use of any electronics. The team applied optics to a simple empty-versus-full glass detection problem, so a drinking glass can sense when a refill should be offered. The glass has to be modified in fabrication with a prism-like structure at the bottom that reflects light when it is not covered by liquid. The Surface table sends IR light directly up towards the prism, and when the glass is almost empty the IR light reflects back at a different angle than when the glass is full. Such a nice trick, and it allows the table to work with passive objects containing no electronic components or moving parts!
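
    For the curious, the physics is most likely the critical angle for total internal reflection, which depends on whatever is touching the glass. This is my own back-of-the-envelope reading of the trick, not Microsoft’s published design, and the refractive indices below are textbook approximations:

    ```python
    import math

    # Critical angle at a glass/medium interface: theta_c = arcsin(n_medium / n_glass).
    # An IR ray hitting the prism face at an angle above theta_c is reflected back
    # toward the table; below theta_c it escapes into the medium.
    n_glass = 1.5   # typical glass
    n_air = 1.0
    n_water = 1.33  # a watery drink

    def critical_angle(n_outside, n_inside=n_glass):
        return math.degrees(math.asin(n_outside / n_inside))

    print(critical_angle(n_air))    # ~41.8 degrees
    print(critical_angle(n_water))  # ~62.5 degrees

    # A prism face at 45 degrees would therefore reflect the table's IR when the
    # bottom of the glass meets air (45 > 41.8) but let it leak into the liquid
    # when the glass is full (45 < 62.5), so an empty and a full glass send
    # different reflections back to the table.
    ```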


  • 11 Jan: The affective intelligent driving agent!

    AIDA is part of the Sociable Car – Senseable Cities project, a collaboration between the Personal Robots Group at the MIT Media Lab and the Senseable Cities Group at MIT. The AIDA robot was designed and built by the Personal Robots Group, while the Senseable Cities Group is working on intelligent navigation algorithms.

    One of the aims of the project is to expand the relationship between the car and the driver, with the goal of making the driving experience more effective, safer, and more enjoyable. As part of this expanded relationship, the researchers plan to introduce a new channel of communication between the automobile and the driver and passengers. This channel would be modeled on fundamental aspects of human social interaction, including the ability to express and perceive affective/emotional states and key social behaviors.

    In pursuit of these aims they have developed the Affective Intelligent Driving Agent (AIDA), a novel in-car interface capable of communicating with the car’s occupants using both physical movement and a high-resolution display. The interface is a research platform that can be used as a tool for evaluating various topics in the area of social human-automobile interaction. Ultimately, research conducted on the AIDA platform should lead to new kinds of automobile interfaces and to an evolution in the relationship between car and driver.

    Currently the AIDA research platform consists of a fully functional robotic prototype embedded in a stand-alone automobile dash. The robot has a video camera for face and emotion recognition, touch sensing, and a laser projector embedded inside its head. A driving simulator is being developed around the AIDA platform in order to explore this new field of social human-automobile interaction. The researchers’ intention is that a future version of the robot, based on the current research, will be installed in a functioning test vehicle.

    The robot is super cute; I can’t imagine how it could be any more distracting, so maybe it should be installed in the back with the kids as a babysitter. Kids would have a blast with it! Don’t miss this video!