Radio channels simulation: Unity scripting with C#

We learned C# scripting basics for Unity, and a quick exercise in class was to develop a radio scene that plays a new audio clip on keypress. I created the scene with an Audio Source GameObject and assigned a script called radioChannelSwitch to it. The GameObject looks like this:

[Screenshot: the Audio Source GameObject in the Inspector]

The Audio Source GameObject has an array of audio clips called My Clips. I defined the size and contents of this array in the Unity GUI by dragging and dropping the five audio files from the project assets into the Element 0–4 fields.

The AudioClip field on top refers to the file STE-015, which is the 0th element of the My Clips array. When triggered, the Audio Source plays whichever file AudioClip currently points to. So a keypress (the Space bar in this case) needs to switch AudioClip to point to the next file (or Element) in the My Clips array.

The script radioChannelSwitch.cs looks like this:

using UnityEngine;
using System.Collections;

public class radioChannelSwitch : MonoBehaviour {

    public AudioClip[] myClips;
    private int currentClip;
    private AudioSource audioSource;

    // Use this for initialization
    void Start () {
        // cache the Audio Source component and initialize its clip
        // to the 0th clip from the array
        audioSource = GetComponent<AudioSource> ();
        audioSource.clip = myClips[0];
    }

    // Update is called once per frame
    void Update () {
        if (Input.GetKeyDown (KeyCode.Space))
        {
            Debug.Log ("Updated " + currentClip);
            // point the Audio Source at the current clip and play it
            audioSource.clip = myClips[currentClip];
            audioSource.Play ();
            // move on to the next clip, wrapping back to 0 at the end
            currentClip++;
            if (currentClip == myClips.Length)
                currentClip = 0;
        }
    }
}
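
As a design note, the wrap-around logic at the end of the keypress block could be collapsed into a single line with the modulo operator; the behavior is identical:

    // advance and wrap back to 0 once the end of the array is reached
    currentClip = (currentClip + 1) % myClips.Length;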

Unity VR scenes

Intro to Unity was an exciting class; we learned new techniques such as adding terrain, lights, objects, and textures to scenes. I created this space from my recurring dreams, where I see myself in the middle of a desert with a beautiful night sky. There are huge chess pieces carved into rocks, all in a formation that looks like an ongoing game at an extremely slow pace. As if time is frozen while the pieces are deciding their next moves. As if I have this magical ability to pause the things around me, walk around leisurely, and take a look at all those giant pieces.

[Screenshot: desert night scene with giant chess pieces carved into rocks]

I worked on another scene to practice these tools and try a few more, like particle systems and flickering lights. This scene has an old castle wall constructed in the middle of snow-clad hills, with some light and smoke behind the wall. I can't wait to experience this one on Cardboard, perhaps with ambient sound, as the light and smoke might create areas of attraction in this space.

[Screenshot: castle wall amid snow-clad hills, with light and smoke behind it]
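
Flickering lights also lend themselves to a simple script. Here is a minimal sketch of one way to do it (my own illustration, not the exact setup in this scene), assuming the script is attached to a GameObject with a Light component; the intensity range and speed values are placeholders:

    using UnityEngine;

    public class flickeringLight : MonoBehaviour {

        public float minIntensity = 0.5f;   // placeholder values
        public float maxIntensity = 2.0f;
        public float flickerSpeed = 3.0f;

        private Light lightSource;

        void Start () {
            lightSource = GetComponent<Light> ();
        }

        void Update () {
            // sample smooth Perlin noise over time so the flicker looks
            // organic rather than jumpy, then map it to the intensity range
            float noise = Mathf.PerlinNoise (Time.time * flickerSpeed, 0f);
            lightSource.intensity = Mathf.Lerp (minIntensity, maxIntensity, noise);
        }
    }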

Presence and Immersion

A year ago I worked on a pair of headphones that responds to 3-axis head tracking and produces corresponding real-time variations in channel panning and volume levels. It was experienced by around 300 people at the ITP Winter Show in December 2015, and though the aim was to apply learnings from the Physical Computing class, I was amazed to see the immense potential of DIY head-mounted devices in creating virtual environments. Back then I had not thought much about the difference between presence and immersion as pointed out by Philipp Maas in this article, nor did I have a clear idea of the characteristics of VR described by Brenda Laurel.
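
For illustration only, since the original project ran on its own hardware rather than in Unity: the core mapping takes a yaw angle from the head-tracking sensor and turns it into stereo pan and volume. A hypothetical Unity sketch of that mapping, assuming yawDegrees is fed in from the sensor and 0 means the listener is facing the sound source:

    using UnityEngine;

    public class headTrackedPanning : MonoBehaviour {

        public float yawDegrees;   // assumed to be updated from the head-tracking sensor

        private AudioSource audioSource;

        void Start () {
            audioSource = GetComponent<AudioSource> ();
        }

        void Update () {
            float yawRad = yawDegrees * Mathf.Deg2Rad;
            // turning the head right shifts the sound toward the left ear, and vice versa
            audioSource.panStereo = -Mathf.Sin (yawRad);
            // loudest when facing the source, fading out as the listener turns away
            audioSource.volume = 0.5f * (1f + Mathf.Cos (yawRad));
        }
    }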

The exhibition demo played the Star Wars theme on loop (a timely piece of music, as the movie was released the same week that year), with a simple thought from a presentation perspective: the engagement shouldn't last beyond a couple of minutes, since there would be a huge number of visitors. It worked, in the sense that no one had to skip this experience because too many people were waiting ahead of them. As Maas suggests, presence was the default state of this experience; users were invited to wear the headphones through a looping video that showed people merrily using the same device, which also gave them cues to tilt and turn.

Kids and teenagers immersed themselves in the experience far more than most adults did. Many people had no expressive reactions beyond a friendly nod acknowledging that they had ‘understood’ what the experience was. A dramatic cultural element was observed in achieving spatial immersion versus narrative immersion, because the music was from a strikingly American movie. Star Wars fans engaged in further chitchat with me and with the people around them. Lastly, people who didn't know or follow Star Wars were not as amazed by the experience, although in my opinion they enjoyed at least some degree of spatial immersion.

Questions:

  1. For a given VR experience, users will always first go through some set of instructions (either presented directly by the creator of that experience, or recalled subconsciously from earlier encounters with similar-looking devices that need head-mounting, etc.). Now that the user knows (s)he is going to experience something virtual, does that make immersion more difficult to achieve?
  2. “To keep the audience immersed in content, we need to anticipate and design for their emotional state at any given moment.” How can this be implemented when designing performances for a group of people, where each individual in the audience might be in a different emotional state, especially in the beginning, when the narration is yet to actively drive their states?