Category: interaction design and technology.

  • 07 May: Design a multi-touch pad in 15 min

    I love this type of video tutorial. Here is a tutorial on how to make your own multi-touch pad in 15 minutes using a webcam, a cardboard box, a piece of glass, and software. I think some regular ambient light is needed! The next step, and not the least difficult, is to get the software running. The idea is that when you place your fingers on the surface, they cast shadows; the webcam detects these shadows and sends the image to the tracking software, which tracks the shadows as they move around. (A minimal sketch of this tracking step follows the lists below.)

    Here is some basic info to get started:

    Materials

    * Cardboard Box

    * Piece of clear, flat, sturdy material (e.g. glass, acrylic, Plexiglas)

    * Paper (e.g. printer paper, tracing paper, almost any paper)

    * Webcam or Video Camera

    * Computer

    * Optional Picture Frame

    Finger Tracking Software

    * Touchlib Beta v2, written by David Wallen

    * Download, unzip and copy the config.xml into your touchlib directory
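
    If you are curious what the tracking software actually does, here is a minimal sketch of the idea in Python with OpenCV. This is not Touchlib’s code, and the camera index and darkness threshold are assumptions you would tune for your own box and lighting:

    ```python
    # Minimal shadow-blob tracker (a sketch, not Touchlib). Assumes a webcam
    # at index 0 looking up at the paper diffuser from inside the box.
    import cv2

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Fingers resting on the glass cast dark shadows on the paper:
        # keep only pixels darker than an assumed threshold of 60.
        _, shadows = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(shadows, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 100:        # ignore small noise blobs
                continue
            m = cv2.moments(c)
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (x, y), 8, (0, 0, 255), 2)   # mark each touch
        cv2.imshow("touches", frame)
        if cv2.waitKey(1) == 27:                # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()
    ```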

    More by AudioTouch. Via.

    Posted by Cati Vaucelle @ Architectradure


  • 29 Apr: New interaction technique for timeline control in video scenes

    Dragon is a research project by Thorsten Karrer, Malte Weiss, and others at the Media Computing Group, RWTH Aachen University in Germany.

    Objects in video scenes are used to control the timeline through their trajectories: basically, any object that appears in the video becomes a slider that can control the video timeline. The project aims at more frame-accurate in-scene video navigation than usual systems, and in user studies participants found this in-scene “slider” more natural than traditional timeline sliders. It seems to me a great WYSIWYG for video!

    Dragon

    Video

    Abstract

    We present DRAGON, a direct manipulation interaction technique for frame-accurate navigation in video scenes.

    This technique benefits tasks such as professional and amateur video editing, review of sports footage, and forensic analysis of video scenes.

    By directly dragging objects in the scene along their movement trajectory, DRAGON enables users to quickly and precisely navigate to a specific point in the video timeline where an object of interest is in a desired location. Examples include the specific frame where a sprinter crosses the finish line, or where a car passes a traffic light.

    Through a user study, we show that DRAGON significantly reduces task completion time for in-scene navigation tasks by an average of 19–42% compared to a standard timeline slider.

    Qualitative feedback from users is also positive, with multiple users indicating that the DRAGON interaction felt more natural than the traditional slider for in-scene navigation.
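
    To make the interaction concrete, here is a hedged sketch of the core mapping, not the authors’ implementation: assuming the system has already extracted a per-frame trajectory for the dragged object, a drag reduces to finding the frame whose object position is closest to the cursor.

    ```python
    import math

    # Hypothetical precomputed trajectory: frame index -> (x, y) pixel
    # position of the tracked object, one entry per frame of the clip.
    trajectory = {0: (10, 200), 1: (14, 198), 2: (19, 195)}

    def frame_for_drag(cursor_xy, trajectory):
        """Map a drag position to the frame where the object is nearest."""
        cx, cy = cursor_xy
        return min(trajectory,
                   key=lambda f: math.hypot(trajectory[f][0] - cx,
                                            trajectory[f][1] - cy))

    # Dragging the sprinter toward the finish line jumps to that frame:
    print(frame_for_drag((18, 196), trajectory))  # -> 2
    ```

    The hard part Dragon actually solves, robustly extracting the object trajectory from the video itself, is elided here.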

    Posted by Cati Vaucelle @ Architectradure



  • 16 Apr: Your email in the future

    After Fuzzmail, a program that lets you write a message that unfolds in time (with Fuzzmail you end up sending what you dynamically wrote, rather than a flat, clean email!), we now have Time Machiner, the email time machine that sends an email into the future! It does not go beyond 2030, so you cannot go too crazy, but that still leaves you time to prepare some freaky surprises. One can also imagine pre-sending birthday wishes, so that you will never seem to have forgotten; see the example below!

    Time Machine

    When the email is sent, you receive this nice confirmation screen. The team sure has a sense of humor!

    Time Machine 2
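
    Under the hood, a service like this presumably just stores each message with a future timestamp and delivers it once the clock catches up. Here is a minimal Python sketch of that idea; the outbox format, SMTP host, and addresses are all assumptions, not Time Machiner’s actual code:

    ```python
    import smtplib
    import time
    from email.message import EmailMessage

    # Hypothetical outbox: (unix time to send at, recipient, subject, body).
    outbox = [(time.time() + 5, "friend@example.com", "Happy birthday!",
               "Pre-written long ago, delivered right on time.")]

    def deliver_due(outbox, smtp_host="localhost"):
        """Send every queued message whose scheduled time has passed."""
        now = time.time()
        pending = []
        for send_at, to, subject, body in outbox:
            if send_at > now:
                pending.append((send_at, to, subject, body))
                continue
            msg = EmailMessage()
            msg["From"] = "timemachine@example.com"
            msg["To"], msg["Subject"] = to, subject
            msg.set_content(body)
            with smtplib.SMTP(smtp_host) as s:
                s.send_message(msg)
        return pending

    # A real service would run deliver_due() periodically, e.g. from cron.
    ```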

    Posted by Cati Vaucelle @ Architectradure



  • 01 Apr: April fools, the USB Pregnancy Test

    Jonah sent me this April fools! So happy first of April to you all!!! I’d love to hang tiny fishes on your backs, but since we are on a blog here, I will just propose this very nice high-tech pregnancy test! What can the digital do with my no-tech data? Like a little fish in the water, this pregnancy test can be plugged into your iPod. Wait, no. Your iChat! So you can update your friends instantly about your status. Be careful to select the right end of the test; other than that, it is pretty straightforward.

    The power from your USB port starts the electrospray ionization process, creating a spectrograph of the various masses for your analysis (…) The mass spectrometry software on the device comes with several sequenced hormones, including hCG (human Chorionic Gonadotropin), hCG-H (hyperglycosylated hCG – for detection before your first missed period), and LH (luteinizing hormone – for detection of your most fertile days). We like the fact that it does all three (…) While most home tests can detect a level of 15-50 mIU/mL of hCG, the enhanced methodology of the USB Pregnancy Test Kit can detect 5-50 mIU/mL, and will show you the exact concentration via its friendly onscreen interface. In addition, the LCD display on the device itself will light up and show you the symbol of a baby, no baby, or multiples and your Estimated Delivery Date based on the concentration of hCG, hCG-H, and LH in your urine – ThinkGeek

  • 24 Nov: An intelligent bar of soap!

    The Bar of Soap, created by Michael Bove, Stacie Slotnick and Brandon Taylor at the MIT Media Laboratory, is a handheld device that recognizes how it is being held and adjusts its functionality accordingly. For instance, if you want to make a phone call, just hold the bar of soap like a phone!

    The device senses the pattern of touch and orientation when it is held, and reconfigures to become one of a variety of devices, such as phone, camera, remote control, PDA, or game machine. Pattern-recognition techniques allow the device to infer the user’s intention based on grasp. We are now adding display surfaces across the entire device so that buttons and indicators can be created where needed for a particular mode.
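
    As a rough illustration of grasp-based mode inference (the sensor features and the trained centroids below are invented for the sketch, not taken from the MIT prototype):

    ```python
    import math

    # Hypothetical feature vector: a few capacitive touch readings plus a
    # tilt angle. The centroids stand in for trained grasp examples.
    GRASP_CENTROIDS = {
        "phone":  [0.9, 0.1, 0.8, 70.0],   # gripped on edges, held upright
        "camera": [0.2, 0.9, 0.9, 5.0],    # two-handed, held level
        "remote": [0.7, 0.3, 0.1, 20.0],
    }

    def classify_grasp(reading):
        """Nearest-centroid classification of the current grasp."""
        return min(GRASP_CENTROIDS,
                   key=lambda mode: math.dist(reading, GRASP_CENTROIDS[mode]))

    print(classify_grasp([0.85, 0.15, 0.75, 65.0]))  # -> "phone"
    ```

    A real prototype would train a proper classifier on many labeled grasps, but the pipeline, sense, featurize, classify, reconfigure, is the same.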


  • 12 Sep: Visualizing Audio Cues

    Today at Interact 2007 I discovered the work of Tony Bergstrom, a student of Karrie Karahalios. He presented the Conversation Clock table.

    On the Conversation Clock table, lapel microphones monitor the conversation while a visualization of its history is projected in the center. The Conversation Clock provides a visual history of interaction and communication. Each contribution is displayed as a bar colored to indicate the speaker’s identity, and the length of each bar indicates the degree of participation, measured by volume. As the conversation progresses, a history is built up in concentric rings reminiscent of the rings of a tree.
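
    Here is a minimal sketch of the bookkeeping behind such a display, assuming per-speaker volume samples from the lapel microphones; this illustrates the idea, not the Conversation Clock’s own code:

    ```python
    # One hypothetical RMS volume level per speaker per time slice.
    samples = [
        {"Alice": 0.7, "Bob": 0.1, "Carol": 0.0},
        {"Alice": 0.2, "Bob": 0.6, "Carol": 0.1},
        {"Alice": 0.0, "Bob": 0.0, "Carol": 0.0},  # shared silence
    ]

    def ring_segments(samples, min_level=0.05):
        """For each time slice, a bar length (0..1) per active speaker."""
        rings = []
        for levels in samples:
            loudest = max(levels.values())
            if loudest < min_level:
                rings.append({})            # silence leaves a visible gap
            else:
                rings.append({who: level / loudest
                              for who, level in levels.items()
                              if level >= min_level})
        return rings

    # Each dict becomes one ring of colored bars, oldest ring innermost:
    for i, ring in enumerate(ring_segments(samples)):
        print(f"ring {i}: {ring}")
    ```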

    The Conversation Clock displays various conversational cues such as turn taking, interruption, conversational dominance, silence, agreement, aural back-channels, mimicry, time spans, rhythm, and flow. If an individual has not been speaking, their lack of aural contribution is made clear in the rings. Of course, if an individual is speaking at length and dominating the conversation, one can easily observe this as well. Aspects such as interruption, silences, and argument also make visual impressions on the table.

    As a result, the Conversation Clock allows people to interpret their role in the interaction. The visualization of audio encourages those who speak the most to regulate themselves and speak less, and those who speak the least to speak more.

    “Live visualization of audio through social mirrors can provide influential cues for individual participation in conversation. Participants alter themselves in order to equalize the contribution of individuals.”

    Paper to download


  • 01 Jul: Motion and identity


    A few little moving dots can make a figure look male or female, heavy or light, relaxed or nervous, happy or sad. A fun tool to play with, by Bio Motion Lab.

    Another famous tool that I love is sodaconstructor, a Java applet that animates two-dimensional models made out of masses and springs. In this one, emotion and anthropomorphism take place at the spring level.
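
    For a taste of what such a mass-spring applet computes, here is one damped spring step in Python (sodaconstructor itself is a Java applet, and the constants here are arbitrary):

    ```python
    # One Euler integration step for two unit masses joined by a spring,
    # the basic ingredient sodaconstructor animates. 1-D for brevity.
    STIFFNESS, DAMPING, DT = 5.0, 0.1, 0.02
    REST_LENGTH = 1.0

    def step(p1, p2, v1, v2):
        dx = p2 - p1
        force = STIFFNESS * (abs(dx) - REST_LENGTH) * (1 if dx > 0 else -1)
        v1 = (v1 + force * DT) * (1 - DAMPING)   # spring pulls p1 toward p2
        v2 = (v2 - force * DT) * (1 - DAMPING)   # ...and p2 toward p1
        return p1 + v1 * DT, p2 + v2 * DT, v1, v2

    p1, p2, v1, v2 = 0.0, 1.5, 0.0, 0.0          # stretched spring, at rest
    for _ in range(3):
        p1, p2, v1, v2 = step(p1, p2, v1, v2)
        print(round(p1, 3), round(p2, 3))        # masses drift together
    ```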

    A while back, Le ciel est bleu designed very successful moving creatures based on gravity points.

    In the robotic sphere, Guy Hoffman’s lamp is designed as a collaborative desk assistant, following your intentions around your desk. Even if the lamp is non-anthropomorphic per se, its motion certainly gives it an anthropomorphic credibility that serves its function as a desk assistant. Video of its first steps!


  • 26 Jun: Undercover

    I spent these past two days with my friends Jean-Baptiste Labrune and Dana Gordon, and we had awesome research discussions about the future of HCI! Dana Gordon explained her fascinating projects to me, and I loved her Undercover blanket.

    The Undercover contains 24 wireless speakers and provides a special physical sound experience: it lets you enjoy the vibrations of the speakers on your body and gives you a private mobile soundscape.

    The blanket has an embedded array of small speakers that receive a wireless audio signal over a Bluetooth connection. The signal can be beamed from any kind of audio device, such as an mp3 player, television, computer, or radio. The volume controllers were designed in a way that suits the blanket’s natural cuddling behaviour: the upper corners (a.k.a. ‘the blanket’s ears’) control the volume (pull the right one for higher volume and the left one for lower).
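
    As a loose sketch of the corner-pull volume logic (the stretch-sensor inputs are entirely hypothetical; the blanket’s real electronics are not documented here):

    ```python
    def adjust_volume(volume, right_pull, left_pull, step=0.05):
        """Raise volume when the right 'ear' is pulled, lower it for the left.

        right_pull / left_pull are hypothetical booleans from stretch
        sensors sewn into the blanket's upper corners.
        """
        if right_pull:
            volume += step
        if left_pull:
            volume -= step
        return max(0.0, min(1.0, volume))   # clamp to the mixer's 0..1 range

    volume = 0.5
    volume = adjust_volume(volume, right_pull=True, left_pull=False)
    print(volume)  # -> 0.55
    ```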


  • 02 Jun: A translator between human gestures and machine functions

    SmartRetina: a translator between human gestures and machine functions.

    SmartRetina is a fast, lightweight gesture-tracking platform written in Macromedia Flash 8, utilizing its flash.geom package, flash.display package, Video class, Camera class, and their motion-tracking capabilities.

    Video of SmartRetina as a navigation tool.
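
    SmartRetina itself is ActionScript, but the frame-differencing trick behind this kind of motion tracking is easy to sketch in Python with OpenCV; the camera index and motion threshold below are assumptions:

    ```python
    import cv2

    # Frame-differencing motion detector, the same basic trick Flash's
    # motion-tracking offers. Assumes a webcam at index 0.
    cap = cv2.VideoCapture(0)
    _, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)       # pixels that changed = motion
        _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        # The fraction of moving pixels could drive a gesture score,
        # as in the game installation described below.
        score = cv2.countNonZero(motion) / motion.size
        print(f"motion: {score:.2%}")
        prev = gray
        if cv2.waitKey(1) == 27:             # Esc quits
            break
    cap.release()
    ```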

    Based on this technology, the company created Mossalibra, an interactive game installation operated solely by intuitive human gestures. “While dancing to surrounding music, the user (represented in pixelated form as a pure gesture) can mimic a given set of gestures in order to gain points.”