Wednesday, 30 April 2008

Group A - Introduction and abstract

Our definition of pervasive computing is technology advancing beyond the traditional personal computing environment (laptops and desktops) into embedded systems, where the connectivity of many devices is invisible to the user. These devices can take the form of any natural or man-made product, such as clothing, tools, appliances, cars, homes and the human body. Pervasive computing combines wireless computing, voice recognition, the Internet and AI to weave all these aspects together so that the connectivity is "invisible" and non-intrusive.

As a group, we arrived at this definition by discussing the eight different examples from our original post and agreeing upon which was the most suitable and descriptive of our group's view on the subject.

These technologies are explored in more detail in our posts below, which contain our research material and our understanding of the technologies.

Our group posts cover how pervasive computing breaks the desktop computing paradigm, our analysis and evaluation of these technologies, and the HCI issues that may arise as the technology reaches a wider audience.

Monday, 28 April 2008

Usability Testing of Virtual and Augmented Realities

(From Scott Willmott)

Virtual and augmented realities are both non-command user interfaces, which makes it more difficult to evaluate and assess their levels of usability. The standard usability tests used for command user interfaces are difficult to apply, as many of their principles either do not hold or are out of context.

Johnson (1998) claims that summative evaluations are inappropriate for the evaluation of desktop VR because "it is hard to find valid statistical measures that support summative evaluation of desktop VR".

At present there are various usability tests which can be adapted and incorporated into evaluating the usability of these systems. I believe the most effective and useful for these technologies are the heuristics adopted to test the usability of video games. Although there are no specific pre-defined heuristics for evaluating the usability of games, my research suggests that most of them are based upon the following three areas, defined by Clanton (1998):

- The Game Interface

- The Game Play

- The Game Mechanics

The game interface covers the visual representation of the system. It is therefore responsible for considerations such as the screen layout, what is contained within the screen, how feedback is relayed back to the user, how items are displayed within the screen, how the user is able to view those items, and how much the user is required to know or learn in order to use the system effectively.

The game mechanics are concerned with how the system works, how the user is required to interact with the system, and what the outcome of the user's interactions with it is. They are also responsible for producing feedback to the user about the actions they perform or the outcomes of the system.

The game play is, in this case, concerned with how the user feels when interacting with the system. The user should always feel in control of the system and confident in what they are doing or trying to achieve, as well as confident of being able to cope with any errors should they occur.
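To illustrate how these three areas could drive an evaluation session, here is a minimal sketch of a heuristic checklist in Python. The individual heuristics are illustrative examples, not Clanton's own wording; see the linked papers below for full sets.

```python
# A simple heuristic checklist organised by Clanton's three areas.
# `findings` maps a heuristic to True (pass) or False (fail).
HEURISTICS = {
    "game interface": [
        "screen layout is clear and uncluttered",
        "feedback is relayed promptly to the user",
        "little prior knowledge is needed to operate the system",
    ],
    "game mechanics": [
        "interactions produce predictable outcomes",
        "every user action yields feedback",
    ],
    "game play": [
        "the user feels in control",
        "errors can be recovered from with confidence",
    ],
}

def report(findings):
    """Print a pass/fail line for every heuristic in the checklist."""
    for area, items in HEURISTICS.items():
        for heuristic in items:
            status = "PASS" if findings.get(heuristic, False) else "FAIL"
            print(f"[{status}] {area}: {heuristic}")

report({"the user feels in control": True})
```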

For a full set of example heuristics covering these three main topic areas, see the following white paper links:

http://melissafederoff.com/heuristics_usability_games.html#heuristics_literature

www.behavioristics.com/downloads/usingheuristics.pdf

www.ipsi.fraunhofer.de/ambiente/pergames2006/final/PG_Roecker_Usability.pdf

Another method of measuring usability would be to take an ethnographic approach: observe users as much as possible while they use the proposed technologies, and include these findings within the design stage, in order to highlight exactly what they want and how they want it to be achieved.

There are various techniques that can be adopted to achieve this, some of which are:

- Shadowing – Watch target users, gather information on their requirements and how they achieve certain tasks


- Diary Studies – Ask users to keep diaries commenting on their usage and good and bad points of the system


- Activity Studies – Ask users to achieve tasks as they would normally and highlight any issues or views


Another usability test that should be considered is the pervasive usability test, as it directly relates to the technologies concerned with pervasive computing, and thus to virtual and augmented reality.

Pervasive Usability

Pervasive usability is the evaluation of a design's usability at each stage of the design process. It differs from other usability evaluations in that it is conducted not only at the start of a project's lifecycle but throughout the whole process.

There are three main steps within pervasive usability testing:

- Analyze

- Conceptualize

- Final Design, Hosting and Maintenance


A full definition of what is carried out at each of the above stages can be found at the following link:

http://www.sitepoint.com/article/planning-uncertain-future

Usability Findings

Due to the lack of usability testing carried out on next-generation interfaces and technologies, there are only very limited result sets to analyse and comment on. However, below are some usability issues highlighted by Jakob Nielsen within his paper on non-command user interfaces.

Feedback – Providing feedback to users is one of the predefined usability heuristics, and achieving this in next-generation interfaces may be much more difficult. It relates to how a person interacts with the system: in virtual reality, for example, a user's movement may not be recognised by the system, so feedback is required to inform the user of this. Also, if a user's movement or gesture is required for system interaction, there will be a delay in providing feedback, as the system cannot respond until the movement and/or gesture has been completed.
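A minimal sketch of that delay, assuming a stream of hand positions and a toy swipe detector (not a real recogniser): the system can only give definitive feedback once the gesture is complete, so any earlier feedback has to be interim.

```python
# Toy gesture recogniser: a "swipe" is only confirmed once the hand path
# is long enough, so feedback before that point can only say "in progress".
def recognise(path, min_length=10):
    """Return a gesture name once the path is long enough, else None."""
    if len(path) < min_length:
        return None
    dx = path[-1][0] - path[0][0]
    return "swipe right" if dx > 0 else "swipe left"

path = []
for t in range(12):                      # hand positions arriving over time
    path.append((t, 0))                  # hand moves steadily to the right
    gesture = recognise(path)
    if gesture is None:
        print("...")                     # interim feedback: still recognising
    else:
        print(f"recognised: {gesture}")  # final feedback, only on completion
        break
```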

Navigation was another area of concern highlighted within this resource. A study conducted by two interface specialists evaluated virtual reality using a head-mounted display and a glove to move around. The findings were that movement by walking was very accurate, whereas movement initiated by a hand gesture was less accurate and on many occasions caused confusion when the gesture was not intentional.

The full study of these usability issues can be found at:

http://www.useit.com/papers/noncommand.html

References

- Melissa Federoff (2002) Heuristics and Usability Guidelines for the Creation and Evaluation of Fun in Video Games (thesis) [online] available from http://melissafederoff.com/heuristics_usability_games.html#heuristics_literature [25th April 2008]

- Desurvire, H., Caplan, M. and Toth, J. (n.d.) Using Heuristics to Evaluate the Playability of Games [online] available from http://www.behavioristics.com/downloads/usingheuristics.pdf [25th April 2008]

- Carsten Röcker, Maral Haar (n.d.) Exploring the Usability of Video Game Heuristics for Pervasive Game Development in Smart Home Environments [online] available from http://www.ipsi.fraunhofer.de/ambiente/pergames2006/final/PG_Roecker_Usability.pdf [28th April 2008]

- Suneet Kheterpal (n.d.) Pervasive Usability [online] available from http://www.sitepoint.com/article/planning-uncertain-future [28th April 2008]

- Jakob Nielsen (n.d.) Noncommand User Interfaces [online] available from http://www.useit.com/papers/noncommand.html [26th April 2008]

Pervasive computing - an overview

(From Shaun Wilson)

A clear and concise description and explanation of the chosen technology

The group's chosen topic for discussion is 3D interaction (virtual reality / augmented reality). Virtual reality is a technology that allows a human to interact in a virtual, computer-simulated environment. This sits in opposition to Mark Weiser's idea of what the core of ubiquitous computing is. Weiser says the following:

“Ubiquitous computing is roughly the opposite of virtual reality. Where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horse power problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences.”

(Weiser 1996)

What he means by this is that the human should not be thrown into a virtual environment; instead, we should work towards the computer bending to our world and bringing the computing experience into it.

The idea of 3D interaction follows on from this, creating spaces and using new styles of input device to create a more open and flowing interaction experience for the user. The input methods it centres around are usually either streaming video from augmented spaces or three-dimensional digitisation of physical objects.

Interactions that involve issuing commands to the application in order to change the system's mode or activate some functionality fall under the category of system control. Techniques that support system control tasks in three dimensions are classified as follows (a sketch of how such commands might be dispatched appears after the list):


  • Graphical menus
  • Voice commands
  • Gestural interaction
  • Virtual tools with specific functions
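Whatever the technique, the recognised command ends up routed to the same application action. Here is a minimal, illustrative sketch of such a dispatcher (the command names and handlers are made up for the example):

```python
# A tiny system-control dispatcher: menu picks, voice commands, gestures
# and virtual tools all normalise to the same named application actions.
from typing import Callable, Dict

actions: Dict[str, Callable[[], None]] = {
    "undo":        lambda: print("undoing last edit"),
    "change_mode": lambda: print("switching to sculpt mode"),
}

def dispatch(command: str, source: str) -> None:
    """Route a recognised command, regardless of which technique produced it."""
    handler = actions.get(command)
    if handler is None:
        print(f"unrecognised command {command!r} from {source}")
    else:
        handler()

dispatch("undo", source="voice")           # spoken "undo"
dispatch("change_mode", source="gesture")  # recognised hand gesture
```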

An insight into how your chosen technology ‘breaks’ the paradigm of desktop computing.

The first thing that breaks the paradigm is the way users interact with the systems. The world of desktop computing uses the long-standing interaction devices of mice and keyboards. 3D augmented virtual reality uses a wide variety of interaction tools, specifically tailored to a user's environment and usage.

The second way it breaks the paradigm is the way it lets users interact with multiple parts of the system, in the case of virtual reality allowing them to perceive a whole world and environment, which adds to the overall user experience.

An analysis of the usability and HCI problems still to be overcome before the chosen technology becomes widely adopted in the market.

Just some of the problems that need to be overcome before this technology is adopted are:

  • Long-standing habits of computer users
  • Manufacturing costs
  • Usage problems, such as space at home to house the necessary equipment
  • Manufacturing changes to support the new interaction methods

References

Mark Weiser (1996) Ubiquitous Computing [online] available from http://www.ubiq.com/hypertext/weiser/UbiHome.html [1st May 2008]

Bibliography

http://en.wikipedia.org/wiki/Ubiquitous_computing

http://www.dcs.gla.ac.uk/~johnson/papers/validation/3dpaper.html

http://www.se.rit.edu/~jrv/research/ar/introduction.html

http://www.sics.se/dive/

http://tele-immersion.citris-uc.org/

Thoughts on pervasive computing

(From Shyam Raithatha)

The group talked about all the different types of computer equipment that people in general use on a day-to-day basis. This included devices used in people's cars and other types of technology, for example mobile phones, PDAs, the Internet and games consoles.

An insight into how your chosen technology ‘breaks’ the paradigm of desktop computing.

I looked into the virtual reality aspect and how it is different from desktop computing. The companies that are developing 3D interaction with virtual reality are trying to make it easier for surgeons to practise and to help them understand different concepts. They show how it could be brought into universities that train doctors and surgeons. This technology is also being used for virtual gaming, for people who like to play Internet games and like the way it makes the images look so realistic.

At the moment this interaction is created using computers, but virtual reality headsets are beginning to be introduced. For example, in virtual reality video games where you feel like you are in the game, the experience becomes more interesting for the user. Users are able to physically see what they are doing and how they have to try to fix the problem; for example, surgeons can carry out operations on a virtual body and demonstrate how they would do it in real life. Lecturers can also observe what the student surgeon is doing, making it easier to give advice on how to improve.

Examples of this technology in the real world

Exploratory system: conducted through the use of a computer screen to exhibit the virtual world (used in games).

Video mapping: this is where video input of the user is combined with 2D graphical images (used in news/weather presenting).

Immersive system: this is where the user wears a helmet or 3D glasses, allowing them to interact with the virtual world and the different environments they can visibly explore.

Cave system: this is where a 3D image is incorporated within a cube that has different screens around the user. For example, "The Jaguar car manufacturer uses this to promote new models to businesses before manufacture."

Telepresence: this is where sensors in the real world are linked to a human operator's senses, so that the user can experience a different environment. For example, NASA plans to use telepresence for future space exploration, to help astronauts experience what it will be like to be out in space.

Mixed reality: this is where two different technologies are used, taking computer-generated input and compounding it with telepresence.

Surgery: “A surgeon can see the brain, with a CAT scan super-imposed on it along with a real-time ultrasound.” (Every 2005)

The different types of software that are out there at the moment:
· Driving simulators
· Flight simulators
· NASA
http://www.coventry.ac.uk/ec/~pevery/306is/T6/cnurds/3d_interaction.htm#Potential%20Applications

I agree with the other posts and their opinions concerning augmented perspectives. I have also found some things to add.

This is a small example of how virtual reality and augmented reality work together:

“We demonstrate basic 2D and 3D interactions in both a Virtual Reality (VR) system, called the Personal Space Station, and an Augmented Reality (AR) system, called the Visual Interaction Platform. Since both platforms use identical (optical) tracking hardware and software, and can run identical applications, users can experience the effect of the way the systems present their information to them (as VR or AR). Since the systems use state-of-the-art tracking technology, the users can also experience the opportunities and limitations offered by such technology at first hand. Such hands-on experience is expected to enrich the discussion on the role that VR and AR systems (with optical tracking) could and/or should play within Ambient Intelligence.”
(http://portal.acm.org/citation.cfm?id=1031419.1031425)

As shown above, both technologies can work together and are used for the same sort of job. There is a good video showing how the VR is made:
http://www.youtube.com/watch?v=Jd3-eiid-Uw&feature=related.

Examples of the technologies being used:

Lexus

This is an example of how virtual reality works within the car industry. The new Lexus has a computerised system which lets it park the car automatically. This shows how the technology is improving. (http://gizmodo.com/gadgets/clips/lexus-self-parking-car-video-and-review-196551.php)

Toyota, VW

There are also examples of new VW and Toyota models out in the US that have this parking facility:
http://motoring.sky.com/news_features/vws-selfparking-car-story.aspx
http://www.engadget.com/2006/04/05/toyotas-self-parking-car-coming-soon-to-us/.

These show how 3D virtual reality software can help new technologies. There are other organisations that also use virtual reality to create new designs within their field of work. For example, architects use the software to create new designs and to see virtually what the structure will look like once construction is done. Car designers use the software to look at their designs before creating a prototype, to make sure they have the angles they require and that the car looks the way the company wants it to.

Here are a few examples of this:
  1. http://news.bbc.co.uk/1/hi/technology/3472589.stm
  2. http://www.carbodydesign.com/articles/2005/2005-09-08-digital-car-design/2005-09-08-digital-car-design.php

An analysis of the usability and HCI problems still to be overcome before your chosen technology becomes widely adopted in the market.

Helmets are too heavy: this can cause users to get headaches.

Computers are slow: all the graphics processing required can cause the computer to crash, because it doesn't have enough power or speed to run what the user wants.

Touch feedback systems: these are an issue due to the lack of experience that users have with such systems.

Side-effects of using helmets: nausea, headaches and claustrophobia.

Full details of your research sources, including working links.

  1. http://www.cwi.nl/research/2005/31MulderEs.pdf
  2. http://graphics.tudelft.nl/~vrphobia/dissertation.pdf
References:

Jean-Bernard Martens (n.d.) Experiencing 3D Interactions in Virtual Reality and Augmented Reality [online] available from http://portal.acm.org/citation.cfm?id=1031419.1031425 [1st May 2008]

Dr. J.D. Mulder (n.d.) Virtual Reality: 3D-Interaction in a Virtual World [online] available from http://www.cwi.nl/research/2005/31MulderEs.pdf [1st May 2008]

Prof.dr.ir. F.W. Jansen (n.d.) Human-Computer Interaction and Presence in Virtual Reality Exposure Therapy [online] available from http://graphics.tudelft.nl/~vrphobia/dissertation.pdf [1st May 2008]

Peter Every (n.d.) 3D Interaction - Virtual Reality [online] available from http://www.coventry.ac.uk/ec/~pevery/306is/T6/cnurds/3d_interaction.htm#Potential%20Applications [1st May 2008]

Virtual Reality Used in Training

(from Kimberly Scott)

Using a virtual reality training program that can simulate a dangerous situation allows trainees to experience a realistic potential situation without putting themselves in danger. The advantage of this technology is that it gives trainees the experience they need to handle these situations safely; without it, the first time they encountered such a situation would be in real life, as it was happening.

In a terrorist attack:
In early 2003 the University of Missouri-Rolla researched the possibility of a virtual reality simulation system. The system would help first responders train for terrorist attack situations, including those involving weapons of mass destruction.
"The goal of this project is to examine the feasibility for development of a virtual reality training system where people such as policemen, firefighters and hazardous material technicians can be trained effectively." (Dr. Ming Leu, 2003)
The simulation would be programmed with various scenarios that the trainee could run through, some involving attacks with chemical agents. The trainees would be required to administer first aid while in a high-pressure situation, and the simulation could be run over and over again.

Flight simulators:
Flight simulators are used to give pilots the experience of flying a real plane using visual and auditory stimuli to give the most realistic experience possible, sometimes even movement will be used to increase the realism.

Driving simulators:
Driving simulators are used to teach students to drive, and also to train emergency services employees to drive ambulances, fire engines and police cars. More sophisticated versions of the simulator involve sitting in an actual car while a video projector shows a 2D projection of the environment.

References:
Science Daily (2003) Virtual reality training [online] available from .
Ascent (2007) What are flight simulators [online] available from .
E-safety support (n.d.) Virtual devices in driving simulators [online] available from

Sunday, 27 April 2008

3D Interaction Devices

(From Erko Aaberg)

Description and explanation of your chosen technology

(Following text could be an addition to what has already been posted by Domenico under the same heading.)

3D Interaction devices

3D interaction in virtual or augmented reality environments also requires more sophisticated user interfaces (input and output devices) than standard desktop computers provide. Here is a broad list of the main types of devices for use in 3D environments, with some examples.

(Lee and Simon have already provided good examples of some of the most recently developed 3D interaction devices).

3D Output devices

  • 3D glasses - Most commonly one of the following:
    • LCD shutter glasses – synchronised with a display monitor. The left and right lenses alternately become dark or transparent, while the display correspondingly switches between images to be seen by the left or right eye (a sketch of this alternation follows the list). http://www.berezin.com/3D/ShutterFAQ.htm
    • Polarised glasses – two projectors project different images onto the same screen; the polarised lenses of the goggles then separate the images for the left and right eye. This technology is widely used in IMAX 3D movies.
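A minimal sketch of the frame-sequential ("active") stereo idea behind shutter glasses, with made-up numbers and a print in place of real rendering:

```python
# Frame-sequential stereo: the display alternates left/right-eye images
# while the glasses darken the opposite lens in sync with each frame.
import time

REFRESH_HZ = 120                 # display refresh rate; each eye sees 60 Hz

def render_eye(eye: str, frame: int) -> None:
    # Stand-in for real rendering from one eye's viewpoint.
    other = "right" if eye == "left" else "left"
    print(f"frame {frame}: show {eye}-eye image, {other} shutter closed")

for frame in range(6):           # a few frames, for illustration
    eye = "left" if frame % 2 == 0 else "right"
    render_eye(eye, frame)
    time.sleep(1 / REFRESH_HZ)   # wait for the next vsync
```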

3D Input devices

  • Eye trackers – Measure eye position and movements as an input for moving in 3D space.
  • Motion trackers with camera – track the motion of part or all of the human body, or of external devices, without intrusive hardware, using only a camera (a simple sketch of the idea follows this list). The following equipment allows moving an avatar in Second Life just with body movements (includes a video): http://dvice.com/archives/2008/04/second_life_cre.php
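As an illustration of camera-based tracking, here is a sketch using simple frame differencing with OpenCV (this assumes the opencv-python package and a webcam; real trackers are far more sophisticated, but the principle of finding the moving region and using it as input is similar):

```python
# Naive motion tracking: difference consecutive frames, threshold the
# changed pixels, and report the centre of the largest moving region.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera available")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(300):                            # a few seconds of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)         # pixels that changed
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        print(f"motion centre: ({x + w // 2}, {y + h // 2})")  # e.g. drive an avatar
    prev_gray = gray
cap.release()
```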

As the technologies become more immersive, especially in augmented and mixed realities, the input and output devices may become unified into single devices. One example is the data glove mentioned in earlier posts. Different simulators can include different sets of input and output devices.

Neural stimulation could be a step even further beyond currently imaginable 3D user interfaces, possibly providing totally immersive cognition and control over an external system. This has been thoroughly explored in science fiction such as The Matrix trilogy.

Neural stimulation can be used for both input and output devices. Currently, the more usable devices have been invented for input functionality. A direct neural interface, or brain-computer interface, can be used to control a device just by using thoughts. This kind of interface can be either invasive (brain implants) or non-invasive (an electrode headset). With a brain implant, scientists have come very close to transmitting speech into computers: http://newsvote.bbc.co.uk/2/hi/health/7094526.stm
A headset for use in computer games has recently been developed by Emotiv Systems: http://uk.gear.ign.com/articles/772/772295p1.html. There are still many difficulties to overcome before such technologies are widely available; these include the correct interpretation of the neural signals and the right positioning of the electrodes or implants.

Neural output could be used to restore vision for blind people (http://ieet.org/index.php/IEET/more/1633/), but also for direct vision without monitors. Domenico has already referenced in his post a similar study about using Virtual Reality to help people with visual impairment (http://www.dcs.gla.ac.uk/%7Estephen/visualisation/).

As neural stimulation creates a direct channel between human and computer, it is definitely possible to reverse the direction of commands, making humans act based on computer input. Research has been done about controlling human movement: http://www.forbes.com/personaltech/2005/08/04/technology-remote-control-humans_cx_lh_0804remotehuman.html.


Thoughts on Augmented Reality

(From Simon Woodward)

We have also looked into aspects of Augmented Reality, and have found some useful information that confirms some of the points made previously.

Continuing the theme of humans interacting with machines (computers) through more advanced technologies and techniques, we have been looking into new technology from Microsoft, in the form of Microsoft Surface.

Microsoft Surface is a unique system based around a glass-topped table (a touch- and object-sensitive display screen) that in effect becomes an augmented reality "hub" for all user interaction. It completely removes the user's traditional input via a keyboard or mouse, replacing it with the use of the hands to mimic the way the user would interact with the corresponding objects in real life, and giving them a virtual output.

The videos on the Microsoft Surface website show how this would be achieved. In one of the best examples, the user places a digital camera upon the table top, which is recognised as a digital imaging device, and the photos from the camera are then displayed in a "fanned out" effect across the screen. The layout is representative of what a real table might look like if real photos were feathered around on top of it. The user then interacts with these photos using only fingers and hands: dragging, pulling, moving, stretching and modifying the photos' layout, placing, size, order and so on. It blurs the line between how a user would interact with real-life objects and how they would interact with the same objects in their digital format on a computer. In fact, this technology goes as far as to completely remove the line that defines the two activities as separate entities, making the digital experience as much a part of real life as its more traditional counterparts.

See the videos "The Possibilites", "The Power", "The Magic" on the Microsoft Surface website - http://www.microsoft.com/surface/index.html

All the videos on the website show off the stunning power and future technology of Surface.

These exciting and ground-breaking technological advances strengthen the points made previously about the "paradigm of desktop computing" being broken by users interacting with machines and computers in a way in which no prior or advanced knowledge of the technology is needed. This is shown at its best by the example discussed earlier of the photos being displayed on the "screen" (table top): a user with no (or only basic) knowledge of computers, who may not have a strong idea of how to manipulate or organise digital photos on a computer, would find it immensely easy using the Surface technology, because all they would have to do is organise the photos as they would in real life. If they were sorting through a collection of photos in real life (from a photo album etc.), it is highly likely they would in fact use a real table-top surface to lay all the photos out, change their ordering, move them into piles for keeping or throwing away, and so on. Surface incorporates and mirrors these traditional actions and processes, and also adds extra advancement and features (being able to then manipulate the photos to a greater extent), so it provides an overwhelming bonus.

To a greater extent, the outcome and purpose of this technology is to minimise the level of step-by-step, unnatural interaction that a user has with a computer in order to complete a task. The less interaction that needs to take place, the more adaptable and usable the technology becomes for a much wider audience. Organising photos using only intuitive, natural processes and responses is much easier and quicker than having to link these processes and responses to a set of computer-input interactions.
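For a concrete flavour of the kind of natural manipulation involved, here is a hedged sketch of the geometry behind a two-finger "pinch" on a photo: the zoom factor is simply the ratio of the current to the initial distance between the touch points (the function names are illustrative, not Surface's actual API):

```python
# Two-finger pinch: the photo's scale factor is the ratio of the current
# to the starting distance between the two touch points.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_touches, current_touches):
    """Return the zoom factor implied by two moving touch points."""
    return distance(*current_touches) / distance(*start_touches)

# Fingers start 100 px apart and spread to 150 px: the photo grows 1.5x.
print(pinch_scale(((0, 0), (100, 0)), ((0, 0), (150, 0))))  # 1.5
```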

Another fascinating look at VR comes from a company called EON Reality, who use state-of-the-art technology to step away from the traditional forms of communication into a more augmented state. They use a mixture of 3D displays, super-imposed video and intuitive interaction (similar to that used within the realms of Microsoft Surface) to provide outstanding data communication, which really does set the pace for future and emerging technologies.

The video linked below shows a quick demonstration of how a 3D projected render of a man at a conference or event can be used to give a speech or talk from anywhere in the world, yet still have his human presence felt in the room.

http://www.eonreality.com/products_teleimmersion.html

EON also have some extremely interesting ideas on VR and how humans interact with it. They want to step away from the more commonly accepted notion that VR must involve a user headset or other aids, by delving into immersive 3D that "floats" the content in an area of space, letting the user interact with whatever is in front of them simply by using their hands. This links back to the Microsoft Surface technology, and even takes it a step further: where the Surface system requires some sort of touch-based interaction with the table-top surface, the EON technology is completely touch-free, literally letting the user control the content in mid-air. In terms of the photo-organising example from the Surface video, this would mean that the user's photos would be projected onto a screen (like Surface) BUT the user would simply place their hands in appropriate positions in the air space in front of them to move the photos around and organise them.

A video of the EON technology in action can be viewed here -
http://www.eonreality.com/products_3dholodisplays.html

We think that the EON technology would be harder to adapt to in the first instance because it is still considered an "out there" technology. It is also less representative of how a normal user would interact and operate with objects in the real world. Again, to use the photo example, a user would be able to pick up the operation of the Surface technology a lot faster because of the way it more directly mimics the actions of real life (moving the images with your hands), whereas the EON technology requires no hands-on interaction. Whilst this technology is very impressive, it may be considered one step ahead of itself when trying to increase the user-friendliness of systems. Non-familiar users would be less inclined to instinctively and intuitively use the in-air EON system than the Surface system, which is much more closely linked to traditional operation.

However, as the speed of technology grows faster, we think it is quite likely that the relationship between the "traditional" methods of performing day-to-day tasks and performing them in VR will become somewhat irrelevant, because the emerging technology will become familiar. It will become the traditional way, as the times change and move forward. Users will be familiar with not having to touch or physically interact with a system; these actions will become memories of the past.

The affordance of using this future technology would most likely be intuitive because it will be a non-command interface. Going back to the Microsoft Surface technology, the digital photographs afford moving and arranging by touching. But in the future, like automatic doors, this raises issues with usability and HCI testing for this technology, where it will be difficult to define how to test HCI suitability. The photos will no longer afford physical feedback, only visible feedback (seeing the photos move). In terms of synchronous and asynchronous environments, Surface would be a synchronous technology because it provides instant interaction and feedback with devices, i.e. instantly placing your digital camera on the Surface table and having the digital photos displayed. You no longer have to download the photos from the digital camera and then display them; they automatically appear.

How Augmented Perspectives break the paradigm of desktop computing

(From Lee Shakespeare)

Augmented Perspectives


Currently there is a technology called augmented perspective, which allows humans to interact with computers via different patterned physical objects. Each patterned object, usually a black and white card, has a distinct shape which the computer will recognise. Using a camera, the computer can detect these different patterns and project a real-time 3D model onto the surface.

Currently this can only be seen on the computer screen; however, by physical movement alone the user is able to alter the 3D model. Eliminating the user's contact with the computer itself makes the technology more user-friendly, allowing people who have no previous computer experience to use it without training. All they would have to do is pick up and move a physical object, and the computer would do the rest.


This breaks the paradigm of traditional desktop computing because only a screen and a small camera need to be visible to the user. This technology would be very effective for people who are not computer literate (i.e. children or the elderly) because it requires no previous knowledge.
The ultimate goal of this technology is for the mirror world (the image on the computer screen) to become the real world. This would mean that the 3D image would be projected directly onto the patterned card. Currently, augmented perspective only uses equipment which is accessible to most users, making this technology increasingly popular. It is being developed by companies as well as open source groups.
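To make the marker-tracking step concrete, here is a hedged sketch using OpenCV's ArUco module as a stand-in for the black-and-white patterns described above (this assumes opencv-contrib-python 4.7 or later and a webcam; the original systems used ARToolKit-style markers, but the idea is the same):

```python
# Detect square fiducial markers in webcam frames; each marker's corners
# give the position (and, with a calibrated camera, the pose) at which a
# real-time 3D model could be drawn.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

cap = cv2.VideoCapture(0)
for _ in range(300):                      # a few seconds of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        print(f"found markers {ids.ravel().tolist()}, "
              f"first corner at {corners[0][0][0]}")
cap.release()
```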
_________________________

Metaio is a company which is heavily investing in augmented solutions for business. Augmented Perspective is one of their technologies.

Video 1: Here is an example of how this technology could work.
Link:
http://www.metaio.com/

Video 2: Here is another example of this technology in action. This focuses on the setup of the hardware and then shows the image projected on a monitor, from a perspective which shows how the technology might work when using 3D Projection in the future.
Link:
http://www.metaio.com/flvplayer.php?video=flv_62_5-719_9947.flv$320$240

Example: A book was used as an example in the previous video; here is some software produced by The Human Interface Technology Laboratory New Zealand which will output the same effect. All that is needed is a compatible webcam.
Link:
http://www.hitlabnz.org/wiki/BlackMagic_Book



Saturday, 26 April 2008

Research Into VR/Augmented Reality

(From Domenico Buonocore)

A clear and concise description and explanation of your chosen technology

There are many different aspects of virtual reality, such as 'immersive VR', 'desktop VR', 'command and control VR' and 'augmented reality'. I will briefly explain the differences between these various aspects below. (Dix, Finlay, Abowd and Beale 2004)


Immersive VR

Immersive VR allows the user to be fully "immersed" in the virtual world. This could mean using equipment such as 'VR goggles', a 'VR helmet', a 'VR full body kit' or a 'VR dataglove'. (Dix, Finlay, Abowd and Beale 2004) Being fully immersed in the virtual world allows the user to be completely inside it, where they can interact with the objects around them.


Desktop VR

Desktop VR allows the user to interact with 3D objects using the mouse and the keyboard. Examples of where this has been used are in football games and other games like "Flight Simulator", "DOOM" and "Quake 3"; with Quake 3, the default maps can be transformed using 'VRML', which stands for Virtual Reality Modeling Language. Figure 1 illustrates a transformed Quake 3 map in VRML, and Figures 2 and 3 illustrate the use of VRML in a football game (a minimal VRML example follows the figures). VRML allows virtual worlds to be spread across the Internet and integrated with other virtual worlds. The user has the option of navigating through these worlds and interacting with the objects in front of them using both the keyboard and the mouse; these interactions can also take the user from one virtual world to another. (Dix, Finlay, Abowd and Beale 2004)

Figure 1 - Quake 3 VRML (Grahn)

Figure 2 - Football Game VRML (Virtual Reality Laboratory 2004)

Figure 3 - Football Game VRML (Virtual Reality Laboratory 2004)
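To give a flavour of what VRML looks like, here is a minimal world written out from Python: a red box that a VRML browser of the era would let the user orbit and inspect with mouse and keyboard (the file name is illustrative):

```python
# Write a one-shape VRML 2.0 world to disk; any VRML-capable browser or
# plug-in can then load and navigate it.
VRML_WORLD = """#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }   # red
  }
  geometry Box { size 2 2 2 }
}
"""

with open("world.wrl", "w") as f:
    f.write(VRML_WORLD)
```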

Command and Control VR

Command and control VR allows the user to be put in a virtual world while surrounded by real physical surroundings, for example flight simulators. The user sits in a pretend cockpit whose windows are replaced with large screens onto which the terrain is projected, while the cockpit moves around to simulate a real flight. (Dix, Finlay, Abowd and Beale 2004)

Augmented Reality

Augmented reality is where VR and the real world meet. Virtual images are projected over the user's view as an overlay, whereby the user can interact with the objects in front of them. (Dix, Finlay, Abowd and Beale 2004) Similar technology was depicted in 'X-Men: The Last Stand', in the war simulation at the beginning of the film.

The disadvantage of augmented reality is that the overlay of the virtual world and the physical objects must be exactly aligned; otherwise the interaction with objects could be miscalculated, which would most definitely confuse the user and could even be fatal, depending on the interaction carried out. The advantage of such technology is that the user's gaze and position are detected by the virtual world, keeping the environment safe. (Dix, Finlay, Abowd and Beale 2004)
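A small numerical sketch of why registration matters, using a simple pinhole camera model with made-up numbers: even a few millimetres of pose error displaces the overlay by several pixels.

```python
# Project a virtual anchor point with and without a small pose error and
# measure how far the overlay lands from where it should be, in pixels.
import numpy as np

def project(point_3d, focal_px, pose_error):
    """Pinhole projection of a camera-space point, with a pose error added."""
    x, y, z = point_3d + pose_error
    return np.array([focal_px * x / z, focal_px * y / z])

anchor = np.array([0.10, 0.05, 1.0])          # a point one metre from the camera
true_px = project(anchor, 800.0, np.zeros(3))
off_px = project(anchor, 800.0, np.array([0.005, 0.0, 0.0]))  # 5 mm error

print(f"overlay offset: {np.linalg.norm(off_px - true_px):.1f} px")  # ~4 px
```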

An insight into how your chosen technology ‘breaks’ the paradigm of desktop computing

In relation to the other aspects of VR, I have decided to focus on desktop VR, in particular the use of VRML. One example I have found which is supported with evidence is its use in surgery. Operations can be carried out by surgeons in a virtual world; carrying out surgery in this way helps perfect the technique for a certain procedure. The patient's body is scanned and the data is transformed into the virtual world. What's more, haptic feedback is incorporated into this simulation, so that the surgeon can feel the texture and the resistance as the incision is made in the "virtual body". See Figures 4–6 for examples of where this has been used. (Dix, Finlay, Abowd and Beale 2004)

Figure 4 - Surgery VRML (State and Ilie 2004)

Figure 5 - Surgery VRML (State and Ilie 2004)

Figure 6 - Surgery VRML (State and Ilie 2004)


An analysis of the usability and HCI problems still to be overcome before your chosen technology becomes widely adopted in the market

Immersive VR can be costly, as it requires a lot of processing power, and thus it is still not ready for the mass market. (Dix, Finlay, Abowd and Beale 2004) Furthermore, the gear that comes with immersive VR can be uncomfortable to wear. (Prashanth)

Furthermore, the user in the virtual world could suffer from 'motion sickness' if there is latency in the system relaying the images to the user, whereby the user becomes disorientated from the dizziness. (Dix, Finlay, Abowd and Beale 2004)

With augmented VR, the registration of the overlay and the physical objects needs to be exact, as discussed above, as it could be disastrous if these images are not correctly aligned. (Dix, Finlay, Abowd and Beale 2004)

References

Websites

Grahn, H. (n.d.) [online] available from <http://home.snafu.de/hg/vrml/q3bsp/q3mpteam3_shot.jpg> [25 April 2008] – uses VRML

Prashanth, B.R. (n.d.) An Introduction to Virtual Reality in Surgery [online] available from <http://www.edu.rcsed.ac.uk/lectures/Lt12.htm#Applications> [25 April 2008]

State, A. and Ilie, A. (2004) 3D+Time Reconstructions [online] available from <http://www.cs.unc.edu/Research/stc/Projects/ebooks/reconstructions/indext.html> [25 April 2008] – uses VRML

Virtual Reality Laboratory (2004) The Virtual Football Trainer [online] available from <http://www-vrl.umich.edu/project/football/> [25 April 2008] – uses VRML

Books


Dix, A., Finlay, J., Abowd, G. and Beale, R. (2004) Human-Computer Interaction. 3rd ed. Essex: Pearson Education Limited

Other related resources found but not used above:

YouTube Videos

YouTube (2008) Physics and Augmented Reality – Part 1 [online] available from <http://www.youtube.com/watch?v=enXTKvhE7yk> [24 April 2008]

YouTube (2008) Physics and Augmented Reality – Part 2 [online] available from <http://www.youtube.com/watch?v=umbTreYhidM> [24 April 2008]

YouTube (2008) Augmented Reality Encyclopedia [online] available from <http://www.youtube.com/watch?v=oHkUOpYNhoM&feature=related> [24 April 2008]

YouTube (2008) Virtual Museum powered by Augmented Reality [online] available from <http://www.youtube.com/watch?v=mzMvpTT-h3w> [24 April 2008]

YouTube (2008) 2D/3D Helicopter (Augmented Reality) [online] available from <http://www.youtube.com/watch?v=jV5ODoMs2TI> [24 April 2008]
– Can definitely be used for desktop screens, especially flat screens


Movies that have incorporated different views on VR:

Websites

IMDB (2008) X-Men: The Last Stand [online] available from <http://www.imdb.com/title/tt0376994/> [24 April 2008]
– The war simulation at the beginning of the film

IMDB (2008) The Lawnmower Man [online] available from <http://www.imdb.com/title/tt0104692/> [24 April 2008]
– Uses the full body gear, i.e. 'VR Goggles', 'VR Body Suit' and 'VR Dataglove'

IMDB (2008) TRON [online] available from <http://www.imdb.com/title/tt0084827/> [24 April 2008]
– Uses the full body gear too, although I was too young to remember this film ;)

Online Papers For VR

Brewster, S. and Pengelly, H. (n.d.) Visual Impairment, Virtual Reality and Visualisation [online] available from <http://www.dcs.gla.ac.uk/~stephen/visualisation/> [25 April 2008]
– VR for blind people

Villanueva, R., Moore, A. and Wong, W. (2004) Usability evaluation of non-immersive, desktop, photo-realistic virtual environments [online] available from <http://eprints.otago.ac.nz/152/01/28_Villanueva.pdf> [25 April 2008]

Weaver, A., Kizakevich, P., Stoy, W., Magee, H., Ott, W. and Wilson, K. Usability Analysis of VR Simulation Software [online] available from <http://www.rti.org/pubs/Usability.PDF> [25 April 2008]

YouTube (2008) Pay Check 1 [online] available from <http://www.youtube.com/watch?v=JiDmvNW8K2w> [24 April 2008]
– The beginning of the movie Paycheck definitely "breaks" desktop computing


Lecture Notes

Jonathan Cohen (2000) 600.450 Virtual Worlds, 12, ‘Introduction to Virtual Reality’, Johns Hopkins University <http://www.cs.jhu.edu/~cohen/VW2000/Lectures/Introduction_to_VR.color.pdf>

Thursday, 24 April 2008

Definitions of "Pervasive Computing"

(From Domenico Buonocore)

These are the definitions I have come across that describe "pervasive computing". I feel that each of these definitions relates to the others, and each is supported by its source.
  1. A definition: http://www.webopedia.com/TERM/p/pervasive_computing.htm
  2. “Computing devices become so commonplace that we do not distinguish them from the ‘normal’ physical surroundings.” (Dix, Finlay, Abowd and Beale 2004)
  3. “Attempt to break away from the traditional desktop interaction paradigm and move computation power into the environment that surrounds the user.” (Dix, Finlay, Abowd and Beale 2004)
  4. “The goal of pervasive computing is to create ambient intelligence where network devices embedded in the environment provide unobtrusive connectivity and services all the time, thus improving human experience and quality of life without explicit awareness of the underlying communications and computing technologies.” (Elsevier B.V. 2007) (http://www.elsevier.com/wps/find/journaldescription.cws_home/704220/description#description)
  5. “Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user.” (Weiser 1993) (http://www.ubiq.com/hypertext/weiser/UbiCACM.html)
  6. “Ubiquitous computing, or calm technology, is a paradigm shift where technology becomes virtually invisible in our lives. Instead of having a desk-top or lap-top machine, the technology we use will be embedded in our environment.” (Riley) (http://www.cc.gatech.edu/classes/cs6751_97_fall/projects/say-cheese/marcia/mfinal.html)
  7. The opposite of VR: rather than putting users inside virtual reality, the computer is brought out into the real world with the users.
  8. Integrating technology into objects other than those used for computing, i.e. clothing and everyday objects, to allow connectivity with other objects.