It has been a while since my last post. In that time I have been writing my dissertation and conducting interviews on how archaeologists react to immersive and non-immersive representations of virtual archaeological data. It has been a year since we started Lh3.x, and in that time technology has, as always, eclipsed the originally intended platform. Lh3.x was built from assets modelled in Autodesk Maya and imported into Unity 4.5–5, with some additional modelling and texture-mapping changes made within Unity. A year ago, the only immersive virtual platform we felt could handle the complexity and detail of the (re)imagined data was the Oculus Rift DK2.
However, as the assets came together in Unity towards February, I noticed that there were considerable issues with frame-rate latency in the DK2. A substantial portion of people, including myself, are unable to use VR headgear because of these frame-rate issues. If I spent more than two minutes with the DK2 on, I felt immediately sick. So I was stuck developing an environment in which I was unable to participate, and which could quite possibly cause others issues as well. At the time, however, Google Cardboard would have been unsuitable for the level of detail we were attempting, and the HTC Vive still hadn't arrived, so it was decided to continue along the DK2 path. We did try to acquire the commercial release of the Oculus Rift early, but were unsuccessful.
Sustainable Archaeology (SA) had early access to the new HTC Vive, and although the original Lh3.x wasn't built for the Vive platform, Colin Creamer from the SA started hacking together an HTC Vive version of Lh3.x. Even with the hack, it was clear that the new technology was far superior to what the DK2 was providing. After a discussion with Craig Barr, the key technical partner on this project, we decided to attempt to convert the Oculus Rift (OR) Unity version of Lh3.x into an HTC Vive version. Craig had his own HTC Vive system, so he was able to rapidly test what worked and what didn't. The conversion was not easy, but Craig was able to port a large portion of what we had in the OR Unity version over to the HTC Vive environment. The Vive consists of a headset, two hand controllers and two motion sensors. The DK2 requires a single motion sensor (to detect head movement) and an Xbox game controller to allow for movement within the virtual space.
The HTC Vive requires a more powerful graphics card and processor to run. For my interviews, we have been using an Alienware Aurora 5 with an Nvidia GTX 970 graphics card. From a cost perspective, the combination of the HTC Vive and the AW Aurora 5 is roughly $5K CDN, so it is cost-prohibitive and very difficult to deploy to larger crowds. Unlike the AW laptop and DK2 setup we used previously, the HTC Vive also requires more time and equipment to set up.
As you can see, just to set up the environment I needed to bring along the AW Aurora, a monitor, light stands for the motion sensors and the HTC Vive itself. The image below is my version of the classic "back of the trunk" shot of archaeological equipment going out on a dig, taken on my trip over to ASI to conduct the first setup and interviews.
Just getting the equipment into the demonstration space, whether across town or in the lab, was time consuming. Ideally, one should have at least two people to move the equipment around; the HTC Vive's digital calibration, however, is easily done by one individual. Physically setting up and digitally calibrating the equipment took about 45 minutes. The HTC Vive requires its two sensors to be elevated above head height. Unlike the OR DK2, the HTC Vive tracks the physical space, allowing users to physically walk while in the digital environment. Kudos to HTC for making the Vive's digital calibration and tracking setup so easy! Whether you choose the limited-space option or "map out" your usable space, both setup procedures are quick and easy.
If you would like more information on how to set up the HTC Vive, please consult the Steam website. Once the physical space has been mapped digitally, the user puts on the headset and can use the hand controllers to navigate the virtual desktop space and, if controls are provided within an application, affect objects or the environment within the simulation. In our case, Craig provided a "teleporting" tool to allow users to move from one section of the digital environment to another when their physical space ran out. Teleporting allows users to explore the whole environment, not just the space determined by the room-scale setup.
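For readers curious what such a teleport tool involves under the hood, the sketch below shows one common way this kind of mechanic is scripted in Unity's C#. This is an illustrative sketch only, not Craig's actual implementation: the class, field and button names are all hypothetical, and it assumes a SteamVR-style rig where moving the rig's root transform moves the player.

```csharp
// Hypothetical sketch, not the project's actual code. Assumes a Unity scene
// with a VR camera rig whose root transform represents the play area.
using UnityEngine;

public class SimpleTeleport : MonoBehaviour
{
    public Transform cameraRig;   // root of the VR play area (assumed setup)
    public Transform controller;  // hand-controller transform used as a pointer

    void Update()
    {
        // On a button press, cast a ray from the controller into the scene;
        // if it hits geometry, jump the whole play area to that point,
        // keeping the rig's original height.
        if (Input.GetButtonDown("Fire1") &&
            Physics.Raycast(controller.position, controller.forward,
                            out RaycastHit hit, 20f))
        {
            cameraRig.position =
                new Vector3(hit.point.x, cameraRig.position.y, hit.point.z);
        }
    }
}
```

Because the whole rig moves rather than just the camera, the user's room-scale tracking remains intact after each jump, which is what lets teleporting extend, rather than replace, physical walking.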
The difference between the HTC Vive and the Oculus Rift is that with the HTC Vive you are physically engaged within the digital environment: when you walk physically, you are walking within the virtual environment. If you want to pick something up with the controllers (your digital hands), that action must be programmed into the game engine. The OR is similar, but you are stationary, standing or sitting, using a game controller to walk within digital space and/or pick up items, functionality which also needs to be programmed. I'm hesitant to use the term "immersive"; however, of the two platforms, the HTC Vive is a highly physically interactive toolset that can convey immersive-like qualities.
Once the head-mounted display is on and the virtual environment is activated, users can interact with the environment in much the same manner as they would within the physical environment. Again, however, picking items up or effecting change within the digital space requires those actions to be programmed. The monitor is primarily used by the non-Vive participants to see what the user is experiencing and to interact with them. This proved very useful when the user and I discussed features represented in the virtual space.
In Longhouse 4.0, I will be going into depth on the interviews conducted with archaeologists and heritage professionals as they use the immersive and non-immersive longhouse experiences. Some of the key take-aways from the interview process have been: a) users want to interact with the environment but are somewhat constrained to being passive participants (Oculus Story Studio has called this the Swayze Effect, where you can be within the environment but cannot effect change); b) users would prefer immersive experiences over highly detailed, photorealistic desktop interactions; and c) there is a technological fetish for innovative tools, and users have to go through this stage first before gaining insight into knowledge construction within virtual space.
Stay tuned for the next blog but if you have any questions or comments, please do not hesitate to post them here!