Categories
Augmented Reality Business Strategy

Augmented Real(ity) Estate

I would like to live in a world in which the real estate agent, the information finder (an AR-powered "explorer"), and the transaction platform are all (or nearly all) digital.

Funda Real Estate, one of the largest real estate firms in the Netherlands, was (to the best of my knowledge) the first Layar customer (and partner). Initially developed in collaboration with our friend Howard Ogden 3 years ago, the Funda layer in the Layar browser permits people to "see" the properties for sale or rent around them, to get more information, and to contact an agent to schedule a visit.

A few hours ago, Jacob Mullins, a self-proclaimed futurist at Shasta Ventures, shared with the world on TechCrunch how he came to the conclusion that real estate and Augmented Reality go together! Bravo, Jacob! I think the saying is "In real estate there are three things that matter: Location. Location. Location." Unfortunately, none of the companies he cites as "lighthouse" examples are in the real estate industry.

Despite the lack of proper research in his contribution, property searching with AR is definitely one of the best AR use cases in terms of tangible results for the agent and the user. It's not exclusively an urban AR use case (you could do it in an agricultural area as well), but a property in the city center will certainly have greater visibility on an AR service than one in the country. The problem with doing this in most European countries is that properties are represented privately by the seller's agent, and there are thousands of seller agents, few of whom have the time or motivation to pursue new technology alternatives (read "opportunity").

In the United States, most properties appear (are "listed") in a nationwide Multiple Listing Service and a buyer's agent does most of the work. Has any company focused on this and developed an easy-to-use application on top of one of the AR browsers (or an AR SDK) using the US Multiple Listing Service?
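To make the idea concrete, here is a minimal sketch (in Python) of the kind of geolocation filter such an AR property layer would run before drawing markers on the camera view. The listing fields and the default radius are my own invented assumptions; a real MLS feed would look different.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_listings(listings, user_lat, user_lon, radius_km=1.0):
    """Return the listings within radius_km of the user, nearest first."""
    hits = []
    for item in listings:
        d = haversine_km(user_lat, user_lon, item["lat"], item["lon"])
        if d <= radius_km:
            hits.append((d, item))
    return [item for d, item in sorted(hits, key=lambda t: t[0])]
```

An AR browser would then project each surviving listing onto the live camera view using the bearing from the user's position to the property.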

My hypothesis is that at about the time the mobile location-based AR platforms were introduced (mid-2009), the US real estate market was imploding or had already imploded. People were looking to sell, but not to purchase, property.

This brings up the most important question neither raised nor answered in Jacob's opinion piece on TechCrunch: what's the value proposition for the provider of the AR feature? Until there are strong business models that incentivize technology providers to share in the benefits (most likely through transactions), there's not going to be a lot of innovation in this segment.

Are there examples in which the provider of an AR-assisted experience for Real Estate is actually receiving a financial benefit for accelerating a sale or otherwise being part of the sales process? Remember, Jacob, until there are real incentives, there's not likely to be real innovation. Maybe, if there's a really sharp company out there, they will remove the agents entirely from the system.

Looking for property is an experience that begins at a location (remember the three rules of real estate?), with the informational parts delivered using AR. Help the buyer find the property of their dreams, then help the seller close the sale, and YOU are the agent.

Categories
Augmented Reality Events

Aurasma at GDC12 and SXSW12

I was unable to attend the Game Developers Conference last week in San Francisco, but it sounds like it was a good event. I enjoyed reading Damon Hernandez's post on Artificial Intelligence. Damon and I are working together on the AR in Texas Workshops March 16 and 17.

At GDC12, Aurasma was in the ARM booth showing Social AR experiences. During this video interview David Stone gave some numbers, and his excitement about the platform nearly left him speechless.

The SXSW event is going on this week and Aurasma is there as well. In Austin, Aurasma broke the news about their partnership with Marvel Comics. This could have been good news for the future of AR-enhanced books. Unfortunately, the creative professionals who worked on this demonstration let us down. Watch the movie of this noisy animation showing what the character is capable of doing, and ask yourself: how many times does a "reader" want to watch this?

I fear the answer is: Zero. Is there any aspect of this experience sufficiently valuable for a customer to return? I could be wrong.

What more could the character have done? Well, something related to the story of the comic book, for starters!

Categories
Augmented Reality Events Standards

Interview with Neil Trevett

In preparation for the upcoming AR Standards Community Meeting March 19-20 in Austin, Texas, I’ve conducted a few interviews with experts. See my interview with Marius Preda here. Today’s special guest is Neil Trevett.

Neil Trevett is VP of Mobile Content at NVIDIA and President of the Khronos Group, where he created and chaired the OpenGL ES working group, which has defined the industry standard for 3D graphics on embedded devices. Trevett also chairs the OpenCL working group at Khronos, defining an open standard for heterogeneous computing.

Spime Wrangler: When did you begin working on standards and open specifications that are or will become relevant to Augmented Reality?

NT: It’s difficult to say because so many different standards are enabling ubiquitous computing and AR is used in so many different ways. We can point to graphics standards, geo-spatial standards, formatting, and other fundamental domains. [editor’s note: Here’s a page that gives an overview of existing standards used in AR.]

The lines between computer vision, 3D, graphics acceleration and use are not clearly drawn. And, depending on what type of AR you’re talking about, these may be useful, or totally irrelevant.

But, to answer your question, I’ve been pushing standards and working on the development of open APIs in this area for nearly 20 years. I first assumed a leadership role in 1997 as President of the Web3D Consortium (a post I held until 2005). In the Web3D Consortium, we worked on standards to bring real-time 3D to the Internet, and many of the core enablers for 3D in AR have their roots in that work.

Spime Wrangler: You are one of the few people who has attended all previous meetings of the International AR Standards Community. Why?

NT: The AR Standards Community brings together people and domains that otherwise don’t have opportunities to meet. So, getting to know the folks who are conducting research in AR, designing AR, implementing core enabling technologies, even artists and developers was a first goal. I need to know those people in order to understand their requirements. Without requirements, we don’t have useful standards. I’ve been taking what I learn during the AR Standards community meeting and working some of that knowledge into the Khronos Group.

The second incentive for attending the meetings is to hear what the other standards development organizations are working on that is relevant to AR. Each SDO has its own focus and we already have so much to do that we have very few opportunities to get an in depth report on what’s going on within other SDOs, to understand the stage of development and to see points for collaboration.

Finally, the AR Standards Community meetings permit the Khronos Group to share with the participants in the community what we’re working on and to receive direct feedback from experts in AR. Not only are the requirements important to us, but also the level of interest a particular new activity receives. If, during the community meeting, I detect a lot of interest and value, I can be pretty sure that there will be customers for these open APIs down the road.

Spime Wrangler: Can you please describe the evolution you’ve seen in the substance of the meetings over the past 18 months?

NT: The evolution of this space has been rapid, by standards development standards! This is probably because a lot of folks have thought about the potential of AR as just another way of interfacing with the world. There’s also been decades of research in this area. Proprietary silos are just not going to be able to cover all the use cases and platforms on which AR could be useful. 

In Seoul, it wasn’t a blank slate. We were picking up on and continuing the work begun in prior meetings of the Korean AR Standards community that had taken place earlier in 2010. And the W3C POI Working Group had just been approved as an outcome of the W3C Workshop on AR and the Web.

Over the course of 2011 we were able to bring in more of the SDOs. For example, the OGC and Web3D Consortium started presenting their activities during the Second community meeting. The OMA MobAR Enabler work item was presented, and ISO SC24 WG9 chair Gerry Kim participated in the Third Meeting, held in conjunction with the Open Geospatial Consortium’s meeting in Taiwan.

We’ve also established and been moving forward with several community resources. I’d say the initiation of work on an AR Reference Architecture is an important milestone.

There’s a really committed group of people who form the core, but many others are joining and observing at different levels.

Spime Wrangler: What are your goals for the meeting in Austin?

NT: During the next community meeting, the Khronos Group expects to share the progress made in the newly formed StreamInput WG. We’re just beginning this work but there are great contributions and we know that the AR community needs these APIs.

I also want to contribute to the ongoing work on the AR Reference Architecture. This will be the first meeting in which MPEG will join us and Marius Preda will be making a presentation about what they have been doing as well as initiating new work on 3D Transmission standards using past MPEG standards.

It’s going to be an exciting meeting and I’m looking forward to participating!

Categories
Internet of Things Research & Development Social and Societal

City WalkShop

Adam Greenfield is one of the thought leaders I follow closely on urban technology topics. Adam and his network (including but going beyond the Urbanscale consulting practice) are far ahead of most people when it comes to understanding and exploring the future of technology in cities.

In this post I'm capturing information about this small event conducted in November 2010 in collaboration with Do Projects (in the context of the Drumbeat Festival) because it inspires me. I've also found documentation about two more of these done in spring of 2011 (Bristol and London). On March 11, there will be another one taking place in Cologne, Germany in collaboration with Bottled City.

City WalkShop experiences are "Collective, on-the-field discovery around city spots intensive in data or information, analyzing openness and sharing the process online."

I discovered the concept of WalkShops when I was exploring Marc Pous' web page. Marc founded the Internet of Things Munich meetup group a few weeks ago. In addition to being eager to meet other IoT group founders (disclosure: I founded the IoT Zurich meetup in October 2011), I learned that he is a native of Barcelona (where the IoT-Barcelona group meets).

I got acquainted with Marc's activities and came across the Barcelona WalkShop done with Adam.

The WalkShop Barcelona is documented in several places. There's the wiki page on the UrbanLabs site that describes the why and the what, and I visited the Posterous page. Here's the stated goal:

What we’re looking for are appearances of the networked digital in the physical, and vice versa: apertures through which the things that happen in the real world drive the “network weather”, and contexts in which that weather affects what people see, confront and are able to do.

Here's a summary of Systems/Layers process:

Systems/Layers is a half-day “walkshop” organized by Citilab and Do projects held in two parts. The first portion of the activity is dedicated to a slow and considered walk through a reasonably dense and built-up section of the city at hand. This portion of the day will take around 90 minutes, after which we gather in a convenient “command post” to map, review and discuss the things we’ve encountered.

I'd love to participate in or organize another of these WalkShops in Barcelona in 2012, going to the same places and, as one of the outcomes of the process, comparing how the city has evolved. Could we do it as a special IoT-Barcelona meeting or in the framework of Mobile World Capital?

I also envisage getting WalkShops going in other cities. Maybe, as spring is nearing and people are outside more, this could be a side project for members of other IoT Meetup Groups?

Categories
3D Information Augmented Reality Innovation

Playing with Urban Augmented Reality

AR and cities go well together. One of the reasons is that, by comparison with rural landscapes, the environment is quite well documented (with 3D models, photographs, maps, etc.). A second reason is that some features of the environment, like the buildings, are stationary, while others, like the people and cars, are moving. Another reason these fit naturally together is that there's a lot more information that can be associated with places and things than those of us passing through can see with our "naked" eyes. There's also a mutual desire: both the people moving about in urban landscapes and those who have information about the spaces need or want to make these connections more visible and more meaningful.

The applications for AR in cities are numerous. Sometimes the value of the AR experience is just to have fun. Let's imagine playing a game that involves the physical world and information encoded with (or developed in real time for use with) a building's surface. Mobile Projection Unit (MPU) Labs is an Australian start-up doing some really interesting work that demonstrates this principle. They've taken the concept of the popular mobile game "Snake" and, by combining it with a small projector, a smartphone and the real world, made something new. Here's the text from their minimalist web page:

"When ‘Snake the Planet!’ is projected onto buildings, each level is generated individually and based on the selected facade. Windows, door frames, pipes and signs all become boundaries and obstacles in the game. Shapes and pixels collide with these boundaries like real objects. The multi-player mode lets players intentionally block each other’s path in order to destroy the opponent."

Besides this text, there's a quick motivational "statement" by one of the designers (it does not play in the page for me, but click on the Vimeo logo to open it):


And here is a 2-minute video clip of the experience in action:

I'd like to take this out for a test drive. Does anyone know these guys?

Categories
Augmented Reality

Augmented Vision 2

It’s time, following my post on Rob Spence’s Augmented Vision and the recent buzz in the blogosphere on the topic of eyewear for hands-free AR (on TechCrunch Feb 6, Wired Feb 13, and Augmented Planet Feb 15), to return to this topic.

I could examine the current state of the art of the technology for hands-free AR (the hardware, the software and the content). But there’s too much information I cannot reveal, and much more I have yet to discover.

I could speculate about whether, what and when Google will introduce its Goggles, as has been rumored for nearly 3 months. By the way, I didn’t need a report to shed light on this. In April 2011, when I visited the Google campus, one of the people with whom I met (complete with his personal display) was wearable computing guru and director of the Georgia Institute of Technology Contextual Computing Group, Thad Starner. A matter of months later, he was followed to Google by Rich deVaul, whose 2003 dissertation on The Memory Glasses project certainly qualifies him on the subject of eyewear. There could, in the near future, be some cool new products rolling out for us, “ordinary humans,” to take photos with our sunglasses and transfer them to our smartphones. There might be tools for creating a log of our lives with these, which would be very helpful. But these are not, strictly speaking, AR applications.

Instead, let me focus on who, in my opinion, is most likely to adopt the next generation of non-military see-through eyewear with AR capabilities. It will not be you or I, nor the early technology adopter next door, who will have the next generation of see-through eyewear for AR.

It will be those for whom having certain very specific pieces of additional information available in real time (with the ability to convey them to others), while also having use of both hands, is life-saving or performance-enhancing. In other words, professional applications are going to come first. In the life-saving category, those who engage in the most dangerous field in the world (i.e., military action) probably already have something close to AR.

Beyond defense, let’s assume that those who respond to a location new to them for the purpose of rescuing people endangered by fire, flooding, earthquakes, and other disasters, need both of their hands as well as real time information about their surroundings. This blog post on the Tanagram web site (where the image above is from), makes a very strong case for the use of AR vision.

People who explore dark places, such as underwater crevices near a shipwreck or a mine shaft, already have cameras on their heads and suits that monitor heart rate, temperature, pressure and other ambient conditions. The next logical step is to have helpful information superimposed on the immediate surroundings. By using cameras to recognize natural features in buildings (with or without the aid of markers) and altimeters to determine the depth underground or height above ground to which the user has gone, floor plans and readings from local activity sensors could be overlaid, and that could be very valuable for saving lives.

I hope never to have to rely on these myself, but I won’t be surprised if one day I find myself rescued from a dangerous place by a professional wearing head-mounted gear with Augmented Reality features.

Categories
Augmented Reality Business Strategy

Between Page, Screen, Lake and Life

In my post entitled Pop-up Poetry I wrote about the book/experience Between Page and Screen. Print, art and AR technology mix in very interesting ways, this project among them, but I pointed out (with three brilliant examples) that it is not the first case of a "magic book."

Like many similar works of its genre, Between Page and Screen uses FLARToolKit to project (display) images over a live video coming from the camera that is pointed at the book's pages. Other tools used in Between Page and Screen include the Robot Legs framework, Papervision for 3D effects, BetweenAS3 for animation and JibLib Flash. Any computer (whose user has first downloaded the application) with a webcam can play the book, which will be published in April. Ah ha! I thought it was available immediately, but now learn that one can only pre-order it from SiglioPress.com.

And, as Joann Pan suggests in her post about the book on Mashable, "combining the physicality of a printed book with the technology of Adobe Flash to create a virtual love story" is different. Pan interviewed the author and the writer of the AR code. She writes, "Borsuk, whose background is in book art and writing, and Bouse, developing his own startup, were mesmerized by the technology. The married duo combined their separate love of writing and technology to create this augmented reality art project that would explore the relationship between handmade books and digital spaces."

The more I've thought about this project and read various posts about Between Page and Screen in recent days, the more confident I am that, while I might experience a magic book once or twice, my preferred reading experience is holding a well-written, traditional book. I decided to come back to this topic after I read about another type of "interactive" book on TechCrunch.

The first thing that caught my eye was the title. Fallen Lake. Fallen Leaf Lake! Of course! I used to live in the Sierra Nevada mountains, where the action in this novel is set, and Fallen Leaf Lake is an exceptionally beautiful body of water. But the post by John Biggs points out that the author of Fallen Lake, Laird Harrison, is going to be posting clues and "extra features" about the characters in the book by way of a password-protected blog.

All these technology embellishments on books seem complicated. Their purpose, Biggs believes, is to differentiate the work in order to get some tech blogger to write about the book and then, maybe, sell more copies.

Finally, Biggs points out that what he really wants in a book, what we all want and what will "save publishing," is good (excellent) writing. Gimmicks like AR and blog posts might add value, but first let's make sure the content is well worth the effort.

Categories
3D Information Augmented Reality Innovation

Improving AR Experiences with Gravity

I’m passionate about the use of AR in urban environments. However, having tested some simple applications, I have been very disappointed, because the sensors on the smartphone I use (a Samsung Galaxy S) and the commercially available algorithms for feature detection are not well suited to showing me stable or precise augmentations over the real world.

I want to be able to point at a building and get specific information about the people or activities (e.g., businesses) within at a room-by-room/window-and-door level of precision. Instead, I’m lucky if I see small 2D labels that jiggle around in space, and don’t stay “glued” to the surface of a structure when I move around. Let’s face it, in an urban environment, humans don’t feel comfortable when the nearby buildings (or their parts) shake and float about!

Of course, this is not the only obstacle to urban AR use, and I’m not the first to discover this challenge; it’s been clear to researchers for much longer. To overcome it, some developers have used logos on buildings as markers. This certainly helps with recognizing which building I’m asking about and, based on the size of the logo, estimating my distance from it, but there’s still low precision and poor alignment with edges.

In 4Q 2011 metaio began to share what its R&D team has come up with to address this, among other issues associated with blending digital information into the real world in more realistic ways. In the October 27 press release, the company described how, by combining gravity awareness with camera-based feature detection, it is able to improve the speed and performance of detecting real-world objects, especially buildings.

The applications for gravity awareness go well beyond urban AR. “In addition to enabling virtual promotions for real estate services, the gravity awareness in AR can also be used to improve the user experience in rendering virtual content that behaves like real objects; for example, virtual accessories, like a pair of earrings, will move according to how the user turns his or her head.”

The concept of Gravity Alignment is very simple. It is described and illustrated in this video:
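For flavor, the core of the idea can be sketched in a few lines of Python. This is my own illustration, not metaio’s implementation: a low-pass filter isolates the slowly varying gravity component of the raw accelerometer signal, and the resulting vector gives the device’s tilt, which a renderer can use to keep augmentations aligned with the real-world vertical. The axis conventions (x right, y up, z out of the screen) and the filter constant are assumptions.

```python
import math

def isolate_gravity(prev_g, sample, alpha=0.8):
    """Exponential low-pass filter: keeps the slowly varying gravity
    component of raw accelerometer samples, discarding hand shake."""
    return tuple(alpha * g + (1.0 - alpha) * s for g, s in zip(prev_g, sample))

def gravity_tilt(gx, gy, gz):
    """Pitch and roll (radians) of the device relative to gravity,
    assuming the filtered reading contains gravity only."""
    pitch = math.atan2(-gx, math.sqrt(gy * gy + gz * gz))
    roll = math.atan2(gy, gz)
    return pitch, roll
```

With the tilt known, a virtual object such as a pair of earrings can simply be counter-rotated by the pitch and roll so that it always hangs toward the real-world "down," whatever the camera is doing.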

Earlier this week (on January 30, 2012), metaio released a new video about what they’ve done over the past 6 months to bring this technology closer to commercial availability. The video below and some insights about when gravity aligned AR will be available on our devices have been written up in Engadget and numerous general technology blogs in recent days.

I will head right over to the Khronos Group-sponsored AR Forum at Mobile World Congress later this month to see if ARM will be demonstrating this on stage and to learn more about the value they expect to add to make Gravity Aligned AR part of my next device.

Categories
Augmented Reality Social and Societal

AR-4-Teens

Changing human behavior is difficult. Go ahead, try it! Exercise more! Eat tomatoes for breakfast!

Changing behavior with new technology is right up there amongst the world's greatest challenges, after world hunger and a few other issues. In my post on AR-4-Kids, I concluded that if children were introduced to AR during infancy, adopting it would never be in question. Certainly using it in daily life would seem natural. Do companies providing AR today really need to wait for the crop of children born in 2010 or later (introduced to tablets with Ernie and Bert speaking to one another) to play with these technologies and keep them as part of their behaviors? 

Suppose you learned as an infant to adopt new technology? "Millennials" are among those who, from their earliest memories, have had mobile, Internet-connected technology in their hands and pockets and, in some cases, intuitively figured out the role it plays in their lives.

Results of a study covered in several mobile marketing portals (but surprisingly difficult to find on the Ypulse site) are not encouraging. The high school and college-age participants of Ypulse's survey are "baffled" by augmented reality technology, particularly when it's infused in mobile apps on leading smartphones. Neither of the news bulletins I've read about this study (here and here) describes the Ypulse methodology or sample size. However, below are the findings at a glance from the Mobile Marketing Watch portal:

  • Only 11% of high schoolers and collegians have ever used an augmented reality app.
  • Retailers like Macy’s and brands like Starbucks have come out with mobile AR apps. They’re fun and clever, but as with QR codes, Millennials don’t always get the point.

Among students who have used AR apps:

  • 34% think they’re easy and useful;
  • 26% think they’re easy but not useful;
  • 18% think they’re useful but not easy; and
  • 9% think AR apps are neither useful nor easy to use.
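Note that those four answers account for only 87% of the students who have used AR apps; the coverage doesn't say, but the remaining 13% presumably gave no clear answer. A quick arithmetic check:

```python
# Reported shares among students who have used AR apps.
shares = {
    "easy and useful": 34,
    "easy but not useful": 26,
    "useful but not easy": 18,
    "neither useful nor easy": 9,
}

accounted = sum(shares.values())   # 87% of respondents
unaccounted = 100 - accounted      # 13%: presumably undecided / no answer
```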

We have a long way to go before the technology put in the hands of consumers, even the magnificent millennials today, meets true needs, adds value and gives satisfaction. Heck, we're not even near a passing grade! 

Those who design and produce AR experiences must reduce their current reliance on advertising agencies and gimmicks. Or at least they must reduce the emphasis on a "wow" factor that clearly has no purpose (except to engage the potential customer with a brand or logo).

Utility, especially utility in the here and now, is more important than anything else to change behavior and increase the adoption of AR. 

How useful is your AR experience?

Categories
Augmented Reality Events

Augmented Olympics

The London 2012 Olympic Games are fast approaching. I'm eager to see the extent to which Augmented Reality could be applied to this global celebration of human athleticism. I'm keeping a list of all the campaigns and applications being developed for this special event. Today the first instance was entered in my list. If you want access to this list, send me a message.

BP America recently launched, as a component of its Team USA support, a campaign using AR to raise public awareness of the US Olympic team. They've worked with rising stars in archery, cycling, gymnastics, track & field and swimming, as well as some athletes with disabilities shown in the graphic below, to develop content (video clips and photographs). In addition to populating their web site, they had the help of New York City-based Augme to package the content into AR experiences triggered by trading cards.


The system seems a bit of a stretch. There are a lot of steps for users, even if they are sports fans.

Imagine this:

  1. BP America will need to spend a lot of money (certainly five figures) and weeks letting people know that there are trading cards in upcoming issues of Bloomberg BusinessWeek magazine.
  2. Then, after buying BusinessWeek and finding a card, the user will need to follow instructions leading them to a web page where they can launch the image recognition. A press release says that the app is also available for mobile (I was unable to find it, but let's assume for the moment that it's available on iTunes).
  3. Finally, they will need a web cam, software they can get to work, a high-speed Internet connection and a sufficiently powerful computer. All included in a smartphone, of course.
  4. Those with all the components will then raise the trading card in front of the webcam or smartphone.

I'm not a BusinessWeek subscriber and will probably not find these cards, but I'd really like to know how the use of AR in this scenario is going to bring more value to sports fans than watching the videos and looking at photos already on BP America's web site.

I'll contact the people at Augme who designed the campaign and ask if I can see the statistics on this experiment in a few months' time.