

Horrible Hauntings: An Augmented Reality Collection of Ghosts and Ghouls

Horrible Hauntings: An Augmented Reality Collection of Ghosts and Ghouls, by Shirin Yim Bridges and William Maughan, provides a very basic overview of ten historical ghost stories. Each entry begins with a narrative hook, followed by a summary of the historical account. Each double-page spread includes a full-page illustration that at first seems to be just an empty picture of a setting. But these empty settings are the backdrops a phone app uses to create an augmented reality. The first is probably the best: hovering the app over the book makes a 3-D image of a ghost ship appear.

I haven't seen many augmented reality books, but my 8-year-old daughter sometimes plays with the AR on her Nintendo 3DS. The games aren't very well developed, so she doesn't play with them often. I'd like to see more of this.

The technology makes it more of an activity book than an aesthetically experienced piece of artwork. The experience comes from what the book prompts you to do, more than from the book itself.

It looks like Bridges collaborated with a relative to get the app developed. The concept makes it easy to discuss the modern and the postmodern. Making a book interact with a smartphone or tablet is a clear use of recent technology. But one of the things that happens with the 3-D images in the app is that they appear to pop off the page. On the Headless Horseman page, for example, I can rotate the phone and see the image extend past the edges of the page. It makes me attend to the frame of the page in new ways, and also notice what the background image looked like before and after the app interacted with it.

This all raises the question of how illustrator Maughan collaborated with Jason Yim in developing the app and painting the illustrations. Do the illustrations have to meet some kind of technical specs so that the app can 'recognize' which animation it is supposed to bring forward? How is this information encoded in the illustration? Or did the coders simply take the existing illustrations and write code to recognize them?
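I don't know how this particular app works, but a common approach in marker-based AR is the "image target": each illustration is reduced offline to a set of compact binary feature descriptors, and at runtime the descriptors extracted from the camera frame are matched against each stored set; whichever page scores the most matches triggers its animation. Here is a toy sketch of that matching step, with hypothetical descriptor values and page names (nothing here is taken from the actual Horrible Hauntings app, and real systems use feature detectors like ORB rather than hand-made bit patterns):

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_page(frame_descriptors, page_library, max_dist=2):
    """Return the page whose stored descriptors best match the camera frame."""
    best_page, best_score = None, 0
    for page, stored in page_library.items():
        # Count frame descriptors that lie close to some stored descriptor.
        score = sum(
            1 for d in frame_descriptors
            if min(hamming(d, s) for s in stored) <= max_dist
        )
        if score > best_score:
            best_page, best_score = page, score
    return best_page

# Offline "training": descriptors precomputed from each illustration.
library = {
    "ghost_ship":        [0b10110010, 0b01100111, 0b11110000],
    "headless_horseman": [0b00011101, 0b10101010, 0b01010101],
}

# Runtime: descriptors extracted from the current camera frame
# (here, slightly noisy copies of the ghost-ship set).
frame = [0b10110011, 0b01100110, 0b11110001]
print(match_page(frame, library))  # -> ghost_ship
```

This would suggest an answer to the question above: the illustrations wouldn't need special codes embedded in them, just enough visual detail (edges, corners, texture) for the feature detector to fingerprint reliably, which may be exactly the kind of technical spec the illustrator would have to meet.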