Category: research

  • 02AprAbsolut

    If you’re new here, you may want to subscribe to my RSS feed to receive the latest Architectradure’s articles in your reader or via email. Thanks for visiting!

The official Absolut Quartet ad, shot by Laurent Seroussi and designed by TBWA.

Jeff did it again. We followed his adventures right after he won the competition. He has now completed the proposal and is currently exhibiting his spectacular robotic work. Music and vodka work as a pair, and this time beautiful mechanics come into play. Jeff Lieberman and Dan Paluska worked together on Absolut Quartet.

Closeup of some of the 100 custom electronics boards fabricated, one for every note.

    Absolut Quartet, a commission for the Absolut Visionaries project, is a music making machine like no other. The audience becomes part of the performance, while watching something that appears impossible. You can log in to ABSOLUTMACHINES.COM for a chance to interact with the machine. You will enter a 4-8 second theme, and the machine will generate, in real-time, a unique musical piece based on the input melody you have provided.
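The post doesn't describe the machine's actual composition algorithm, but as a rough illustration of how a short input theme could seed a longer generated line, here is a minimal sketch; the function name, interval choices, and output length are all my own invention, not the project's design:

```python
import random

def vary_theme(theme: list[int], length: int = 16) -> list[int]:
    """Grow a longer line from a short input theme by repeatedly
    appending transposed (and sometimes reversed) copies of it --
    one simple way a machine could riff on a user-entered melody."""
    out = []
    while len(out) < length:
        # Half the time, play the theme backwards.
        fragment = theme[:] if random.random() < 0.5 else theme[::-1]
        # Transpose by a consonant interval (perfect fourth/fifth) or not at all.
        shift = random.choice([-7, -5, 0, 5, 7])
        out.extend(note + shift for note in fragment)
    return out[:length]

print(vary_theme([60, 62, 64, 65]))  # MIDI note numbers for C D E F
```

Real systems like this one typically do far more (rhythmic variation, harmonization, constraint solving), but the seed-and-transform loop is the common core idea.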

The marimba shooting mechanisms and closeup of the wine players. Photo by Sesse Lind.

You will see this melody played by three instruments. The main instrument is a ballistic marimba, which launches rubber balls roughly 2 m into the air, precisely aimed to bounce off 42 chromatic wooden keys. The second instrument is an array of 35 custom-tuned wine glasses, played by robotic fingers. Finally, an array of 9 ethnic percussion instruments rounds out the ensemble.
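Aiming balls so that each note lands at the right musical moment means every launch must lead its note by the ball's flight time. A back-of-the-envelope sketch, assuming a purely vertical shot to a 2 m apex, a key at launch height, and negligible air resistance (none of these numbers come from the project itself):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_lead_time(peak_height_m: float) -> float:
    """Time from launch until the ball falls back to launch height,
    for a vertical shot with no air resistance."""
    v0 = math.sqrt(2 * G * peak_height_m)  # launch speed needed to reach the apex
    return 2 * v0 / G                      # time up plus time down

# A note that must sound at time t has to be launched about
# launch_lead_time(2.0) seconds earlier -- roughly 1.28 s for a 2 m apex.
print(launch_lead_time(2.0))
```

The real machine also has to aim each ball at a specific key and cope with bounce dynamics, but this flight-time lead is the basic scheduling constraint any ballistic instrument faces.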



    Video

    Don’t forget to check the sound machines by Pe Lang and Zimoun, and by Festo.

    Posted by Cati Vaucelle @ Architectradure


  • 15MarThese little things in the dark …


Our childhood was filled with creatures hidden in the dark. The feeling that they existed outside of our imagination was a source of interaction with the physical world, as we created places for them to live. Our imaginary friends shared our secrets; they were our closest partners in discovering the world. One book that I recommend on the subject is The House of Make-Believe: Children’s Play and the Developing Imagination by Dorothy G. Singer and Jerome L. Singer, one of my favorite books on imagination and child development.



    Children interacting with Kage no Sekai

When I discovered Kage no Sekai, I immediately fell in love with it. The piece projects cute tiny creatures on shadows, and only on shadows, so that anyone can play with them: try to grab them, make them exist in specific places with shadows created just for them, or even trap them (see the video of children interacting with the system).



    Photo by the authors of Kage no Sekai

    “This device expresses this perspective not by using existing media but in the real world itself. The mechanism is concealed, giving the device the appearance of an ordinary piece of furniture. Although at first glance it looks like a regular wooden table, if you look at the shadows on its surface you’ll see the movement of mysterious life forms. When you approach it to have a better look, they sense your presence and hide away. They do not emerge while human shadows are cast over the table, but the life forms hiding within a distant shadow are watching them.”

    Video




  • 06FebPainting on real objects


Deepak Bandyopadhyay, Ramesh Raskar, and Henry Fuchs built a prototype system for virtual painting on real, movable objects. A project from 2001 that should by now be easier to democratize!

    Imagine a world where all the objects around you can be animated and augmented interactively in real time; where you can, for instance, paint virtual designs on objects in the environment, which then stay in place as you modify or move them around! This opens up new possibilities for interaction in augmented environments, and gives rise to new applications in tele-immersion, medicine, architecture, art and user interfaces.

    Check also Kimiko Ryokai’s digital paintbrush. Her brush allows artists to draw digitally with an “ink” they just picked up from their immediate environment.

    Posted by Cati Vaucelle

    Architectradure

    ………………………………………


  • 08SepHand-eye coordination at 22 months?


While researching hand-eye coordination, I learned that children around 5-7 are still supposed to be developing it. Yet I found this 22-month-old toddler pretty good at playing Wii Tennis!

    Hand-eye coordination – Definition

    Hand-eye coordination is the ability of the vision system to coordinate the information received through the eyes to control, guide, and direct the hands in the accomplishment of a given task, such as handwriting or catching a ball. Hand-eye coordination uses the eyes to direct attention and the hands to execute a task.

    Description

    Vision is the process of understanding what is seen by the eyes. It involves more than simple visual acuity (ability to distinguish fine details). Vision also involves fixation and eye movement abilities, accommodation (focusing), convergence (eye aiming), binocularity (eye teaming), and the control of hand-eye coordination. Most hand movements require visual input to be carried out effectively. For example, when children are learning to draw, they follow the position of the hand holding the pencil visually as they make lines on the paper.

From “Hand-Eye Coordination,” Encyclopedia of Children’s Health, ed. Kristine Krapp and Jeffrey Wilson, Gale Group, Inc., 2005. Accessed via eNotes.com, 8 Sep 2007.

    More description here

    .pdf of the paper


  • 05SepOperation for adults!




    Operation, game by Hasbro

Today, I met with TMG alumnus Paul Yarin. One of his latest projects, the interactive sensing module for laparoscopic trainer, developed with Wendy Plesniak, reminded me of the funniest childhood game, Operation, created by Hasbro. The child practices coordination skills by removing the patient’s ailments with tweezers.

The sophisticated and impressive interactive sensing module for laparoscopic trainer is a self-contained simulator for structured testing and training of the skills used in laparoscopic surgery. Digital video and electronic sensors capture user performance, and the system is approved for use by medical centers to train and test critical laparoscopic skills. This is such a clever implementation: the advantages of physical objects as tools and the power of computer simulation are combined at their best.

    “This interactive laparoscopic training simulator combines the best of physical and virtual simulation into a plug ‘n’ play solution. It combines validated physical reality exercises, computerized assessment, and validated McGill Metrics. Electronic sensors and digital video capture user performance with a PC interface.”



    An example of practice task

    Real Laparoscopic Simulation’s web site


  • 20AugThe hands-free controller by Nintendo


In 1989, Nintendo invented the hands-free controller: a player controls the video game without using the hands, using his/her neck muscles instead. Found on nesplayer. The exploration of physical limitations in game design is interesting, especially since in 2006 Nintendo invented the hand-necessity controller, the popular Nintendo Wii.


  • 27JulWhy Toys Shouldn’t Work “Like Magic”


    Mark D. Gross, Michael Eisenberg, “Why Toys Shouldn’t Work “Like Magic”: Children’s Technology and the Values of Construction and Control,” digitel, pp. 25-32, The First IEEE International Workshop on Digital Game and Intelligent Toy Enhanced Learning (DIGITEL’07), 2007

Abstract

The design and engineering of children’s artifacts, like engineering in general, exhibits a recurring philosophical tension between what might be called an emphasis on “ease of use” on the one hand, and an emphasis on “user empowerment” on the other. This paper argues for a style of technological toy design that emphasizes construction, mastery, and personal expressiveness for children, and that consequently runs counter to the (arguably ascendant) tradition of toys that work “like magic”. We describe a series of working prototypes from our laboratories, examples that illustrate new technologies in the service of children’s construction, and we use these examples to ground a wider-ranging discussion of toy design and potential future work.


  • 27JulVideo, toys and perspective taking


    I discovered this fabulous experimental research on perspective taking by developmental psychologist Masuo Koyasu.

    Masuo Koyasu’s web site (in Japanese only).

    In the 1980s, I was interested in studying the development of perspective-taking in young children. Piaget’s “three mountains task” had demonstrated that children find it difficult to understand how something looks to a person who is in a different position from themselves. In fact, younger children exhibit a strong tendency to choose their own view when asked to indicate how an object looks to someone in another position, a tendency that Piaget called “egocentrism.” I thought there are three dimensions of egocentrism (up and down, front and back, and left and right), and that children’s difficulty in understanding different perspectives might be because they do not receive feedback about other people’s perspectives. To test this hypothesis, I conducted a series of experiments with kindergarteners.


    Figure 1. Experimental Situation
A: Child, B: Experimenter, C: Sample photos, D: Place to put toy animal(s), E: Three toy animals, F: Still camera or video camera

    The task in the first experiment was to face a camera set up across from them and then to arrange one to three toy animals in a way that would produce a photograph like the sample (Figure 1). Forty-three percent of the four-year-olds exhibited front and back egocentrism by placing the toy animals’ backs to the camera. That tendency had mostly disappeared among the five-year-olds and six-year-olds, but it became clear that hardly any of the four- to six-year-olds could position two or three toy animals in the correct left-to-right order. In a second experiment, I used a video camera instead of a still camera and provided video feedback, showing an image of the toy animals as viewed from the opposite side on a color CRT monitor. In the control group, which was shown only the CRT monitor, the children were able to correct their front-back egocentrism on their own but were not informed of their errors. Even in the experimental group, which received instruction and practice in correcting left-right egocentrism, the effect on their post-test results was clearly small (Figure 2).


    Figure 2. Mean number correct in each condition

    Until the age of about seven, most children facing a teacher who says, “Let’s raise our right hands” while raising his or her own right hand will raise their left hands.
    Incidentally, research into perspective-taking abilities has traditionally focused on investigating how children understand other people’s viewpoints, but I have noticed a serious limitation in the paradigm commonly used to study this. In the case of the “three mountains task,” even if children can’t directly guess the viewpoint of a person in another position, they can solve the problem by conducting a mental simulation in which they imagine that they have gone to the other person’s position, or by a type of mental rotation, in which they imagine that the object has been placed on a lazy Susan and rotated to the correct position. The lack of methodological distinctions in the perspective-taking paradigm was a major problem. As I was worrying about how to think about this problem, I encountered research into “theory of mind.” In particular, I spent ten months as a visiting scholar in the Department of Experimental Psychology at the University of Oxford from 1994 to 1995, where I had the opportunity to come into contact with the front lines of British research into cognitive development. After returning to Japan, I began studying “theory of mind,” but at that time, hardly anyone else in the country was doing so. Without intending to, I have had to carry out the role of “missionary” in the field of “theory of mind” in Japan.
    The most famous experiment in “theory of mind” is the false belief task (the so-called “Sally and Anne task”) of Josef Perner and his colleagues. “Sally puts a doll in a basket. While Sally is away, Anne takes the doll out of the basket and puts it into a box nearby. Sally then returns and the child is asked where Sally will look for her doll.” In general, three-year-olds can’t pass this task, but they become able to do so between the ages of four and six. It has also been demonstrated that even high-functioning autistic children can’t pass this task. It is odd that most young children are easily deceived by this task, which is no problem at all for adults. I have been observing the daily lives of children at a Kyoto kindergarten once a week for three years, as well as conducting developmental research, including the false belief task. As a result, I have obtained longitudinal data on “theory of mind” (Figure 3).


    Figure 3. Results of a longitudinal study of “theory of mind”

    The data presented in this figure began with 15 children, with 4 more children transferring in at the ages of four and five, for a total of 19 children at the end. Only one child regressed from being able to pass the task to failing it, but he was a boy who became extremely nervous and made mistakes in the testing situation at age five and six. The fact that I was conducting experiments on children with whom I was in contact on a daily basis made me feel that I could interpret the results more broadly.

  • 04DecThe ambient peacock explorer


    The Ambient Peacock Explorer

I developed the Ambient Peacock Explorer as a framework for mobile units to document their environment and report back to a central hub.

    I believe that new work in this area can physically substantiate the documentation through tagging, the incorporation of physical communities or other conceptual redefinitions of the environment one seeks to capture.

structure: mobile units + headquarters

mobile units, independent from the headquarters
One shell per context of exploration. The shell is inflated on top of the structure to indicate where the mobile unit is going.
Context-based shells per unit:
Water: Jellyfish Organic Shell
Countryside: Wooden Structure
City: Inflatable Concrete
Air: Blimp

headquarters
Composed of four gathering areas (the air, the countryside, the city, and the water), plus a studio and an editing room. Each wall receives a live feed from the mobile units based on each unit’s context. Environmental data from the sensing mobile units are also projected onto the walls as meta-information. The headquarters itself retro-projects the live feed of its surroundings onto its roof and displays video from the mobile units on the external walls. The production center also invites discussion of the documentaries and of environmental issues, making it a showcase building as well.

    technology specs
Live video feed from mobile units to the headquarters. Each mobile unit carries one video camera connected via satellite to the headquarters. The live video feed is sent to the headquarters and projected onto the contextual area’s outside wall and inside wall (as part of the cafeteria gathering area).

Video recording and metadata from mobile units to the headquarters. Each mobile unit documents its surroundings by recording visual environmental elements and uses sensing technologies to combine the recorded video with environmental data for later post-production video retrieval: for instance, GPS for location data, temperature, wind information, and so forth. The production companies will retrieve the mobile units’ video recordings and the metadata associated with them.
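As a rough sketch of what pairing recorded video with sensor metadata for later retrieval might look like, here is a minimal example; the record fields, names, and JSON format are my own assumptions, not the project's actual design:

```python
import json
import time

def tag_frame(timestamp: float, gps: tuple, temperature_c: float,
              wind_ms: float) -> str:
    """Bundle one video frame's environmental readings into a metadata
    record, keyed by capture time, so footage can later be searched in
    post-production by location, temperature, or wind conditions."""
    record = {
        "t": timestamp,
        "gps": {"lat": gps[0], "lon": gps[1]},
        "temperature_c": temperature_c,
        "wind_ms": wind_ms,
    }
    return json.dumps(record)

# Example: a unit logs one record per captured frame.
print(tag_frame(time.time(), (48.8566, 2.3522), 14.2, 5.0))
```

Storing one such record per frame (or per second of footage) is enough to let an editor later query "all footage shot above 1000 m in high wind" without rewatching everything.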

Live video feed from mobile units onto mobile units. Each mobile unit retro-projects its live feed footage onto its semi-transparent fabric structure to blend into its environment.

Headquarters data exchange with mobile units. Information and requests come from the headquarters to define what to explore, through real-time exchange of video footage. Scenario: if a mobile unit is in the air and crosses a bird migration, the headquarters could visualize it and request more detailed footage or more environmental sensing data on the migration.

    communication system diagram



    visual scenario of the ambient peacock explorer



    the blimp
    the air mobile unit.
The shell inflated on top of the structure indicates that the mobile unit is going to document from above, in the air.



the mobile unit
Common to all contexts, it is controlled by two people and has real-time contact with the headquarters via satellite. One person controls the mobile unit and one gathers data, exchanges information, and prepares the unit for its environmental use. The unit consists of the inflatable blimp on top, the flotation device including the organic shell at the bottom, and a projector to display environmental data inside each shell.



    the mobile unit going into water.

The compressor is used for the flotation device, and an organic semi-transparent shell is added around the structure. The live environmental video feed is projected onto the shell. The mobile unit is waterproof.



    external view of the headquarter as a showcase.

Four walls: the air, the countryside, the city, and the water. Each wall receives a live feed from the mobile units based on each unit’s context.

The Ambient Peacock Explorer is a project I made with Philip Vriend for the Kinetic Architecture class, Assignment 2, November 2005.

    By Cati in kinetic architecture

  • 17OctAt UIST this Monday: Scopemate, a robotic microscope!


I am at UIST this Monday to present one of my projects, along with my mentor Paul Dietz, since joining the Microsoft Applied Sciences Group. It is a very quick but efficient solution for those who like to solder small components!

    Summary
Scopemate is a robotic microscope that tracks the user for inspection microscopy. In this video, we propose a new interaction mechanism for inspection microscopy. The novel input device combines an optically augmented webcam with a head tracker. The head tracker controls the inspection angle of a webcam fitted with appropriate microscope optics. This allows an operator the full use of their hands while intuitively looking at the work area from different perspectives. This work was done by researchers Cati Boulanger and Paul Dietz in the Applied Sciences Group at Microsoft and will be presented at UIST 2011 this Monday as both a demo and a poster!
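The actual control mapping used in Scopemate isn't given here, but a head tracker driving a camera's inspection angle can be as simple as a clamped proportional mapping: move your head to one side and the microscope tilts that way, up to a mechanical limit. A minimal sketch, where the gain and angle limit are illustrative guesses rather than the device's real parameters:

```python
def head_to_camera_angle(head_x_cm: float, gain: float = 2.0,
                         max_angle_deg: float = 30.0) -> float:
    """Map lateral head displacement (cm from center) to a camera tilt
    angle (degrees), proportionally, clamped to the mechanical range."""
    angle = gain * head_x_cm
    return max(-max_angle_deg, min(max_angle_deg, angle))

# Moving the head 10 cm to the right tilts the view 20 degrees;
# larger motions saturate at the 30-degree mechanical limit.
print(head_to_camera_angle(10.0))
print(head_to_camera_angle(100.0))
```

The appeal of this style of control is exactly what the summary describes: the mapping consumes only head pose, so both hands stay free for soldering while the viewpoint follows naturally.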

    Video