Categories: Augmented Reality, Business Strategy

Augmented Real(ity) Estate

I would like to live in a world in which the real estate agent's information-finding role (an "explorer" that uses AR) and the transaction platform are all (or nearly all) digital.

Funda Real Estate, one of the largest real estate firms in the Netherlands, was (to the best of my knowledge) the first Layar customer (and partner). Initially developed in collaboration with our friend Howard Ogden three years ago, the Funda layer in the Layar browser permits people to "see" the properties for sale or rent around them, to get more information, and to contact an agent to schedule a visit.
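
Mechanically, a layer like Funda's boils down to a web service: the AR browser sends the user's position, and the service returns the points of interest nearby. Here is a minimal sketch in Python of such a service, assuming a Layar-style getPOIs request; the endpoint shape, field names and listings are illustrative assumptions, not Funda's (or Layar's) actual implementation.

```python
# Minimal sketch of a location-based AR "layer" service, loosely modeled on
# Layar-style getPOIs endpoints. Endpoint shape, fields and listings are
# hypothetical illustrations, not Funda's (or Layar's) actual API.
from math import asin, cos, radians, sin, sqrt

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical listings: (latitude, longitude, title, agent phone)
LISTINGS = [
    (52.3702, 4.8952, "Canal-side apartment, EUR 450,000", "+31 20 555 0101"),
    (52.3667, 4.9000, "2-bedroom flat for rent", "+31 20 555 0102"),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

@app.route("/getPOIs")
def get_pois():
    # The AR browser supplies the user's position; we return nearby hotspots.
    lat = float(request.args["lat"])
    lon = float(request.args["lon"])
    radius = float(request.args.get("radius", 1000))  # meters
    hotspots = [
        {"title": title, "lat": p_lat, "lon": p_lon, "phone": phone,
         "distance": round(distance_m(lat, lon, p_lat, p_lon))}
        for p_lat, p_lon, title, phone in LISTINGS
        if distance_m(lat, lon, p_lat, p_lon) <= radius
    ]
    return jsonify({"hotspots": hotspots, "errorCode": 0})

if __name__ == "__main__":
    app.run()
```

The browser then draws each returned hotspot at its real-world position in the camera view; everything property-specific lives on the server, which is why a listings portal can launch a layer without shipping an app of its own.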

A few hours ago, Jacob Mullins, a self-proclaimed futurist at Shasta Ventures, shared with the world on TechCrunch how he came to the conclusion that real estate and Augmented Reality go together! Bravo, Jacob! I think the saying is "In real estate there are three things that matter: Location. Location. Location." Unfortunately, none of the companies he cites as "lighthouse" examples is in the real estate industry.

Despite the lack of proper research in his contribution, property searching with AR is definitely one of the best AR use cases in terms of tangible results for the agent and the user. It's not exclusively an urban AR use case (you could do it in an agricultural area as well), but a property in a city center will certainly have greater visibility on an AR service than one in the country. The problem with doing this in most European countries is that properties are represented privately by the seller's agent, and there are thousands of seller agents, few of whom have the time or motivation to explore new technology alternatives (read "opportunity").

In the United States, most properties appear (are "listed") in a nationwide Multiple Listing Service and a buyer's agent does most of the work. Has a company focused on and developed an easy-to-use application on top of one of the AR browsers (or an AR SDK) using the Multiple Listing Service in the US?

My hypothesis is that at about the time the mobile location-based AR platforms were introduced (mid-2009), the US real estate market was on its way to imploding or had already imploded. People were looking to sell, not buy, property.

This brings up the most important question neither raised nor answered in Jacob's opinion piece on TechCrunch: what's the value proposition for the provider of the AR feature? Until there are strong business models that incentivize technology providers to share in the benefits (most likely through transactions), there's not going to be a lot of innovation in this segment.

Are there examples in which the provider of an AR-assisted experience for Real Estate is actually receiving a financial benefit for accelerating a sale or otherwise being part of the sales process? Remember, Jacob, until there are real incentives, there's not likely to be real innovation. Maybe, if there's a really sharp company out there, they will remove the agents entirely from the system.

Looking for property is an experience that begins at a location (remember the three rules of real estate?) and whose informational components can be delivered using AR. Help the buyer find the property of their dreams, then handle the seller's side as well, and YOU are the agent.

Categories: Events, Internet of Things

IoT via Cloud Meetup in Zurich

The other day I traveled 2 hours and 45 minutes from Montreux to Zurich, and 2 hours and 50 minutes home, for a 2-hour meetup group meeting at the ETHZ. It was a classic case of my desire to meet and speak with interesting people being strong enough to outweigh my feeling that I have too much to do in too little time. See Time Under Pressure. Fortunately, I could work while on the train and, in keeping with my thinking about Air Quality, I (probably) didn't contribute to the total Swiss CO2 emissions for the day. And what is really amazing is that the meetup was worth my investment. I previously mentioned that I was looking forward to catching up with Dominique Guinard, co-founder and CTO of EVRYTHNG, a young Zurich start-up, and co-founder of the Web of Things portal.

Dom did not disappoint me or the 20 people who joined the meetup. In addition to great content, he is an excellent presenter. He started out at a very high level and yet was quickly able to get into the details of implementations. He included a few demonstrations during the talk and a couple of interesting anecdotes. We learned that his sister doesn't really see the point of his sharing (via Facebook) the temperature readings from his Sun SPOT gadget. And how he was inspired when WalMart IT management came to MIT for a visit and mentioned that they were considering a $200,000 project to connect security cameras to tags in objects in order to reduce theft. In two days, Dom (and others, I presume) had a prototype showing that the Web of Things could address the issue with open interfaces. My favorite story during the talk brought up the problems that can arise when you don't have sufficient security. Dom was giving a demonstration of the Web of Things once when a hacker in the audience saw the IP address. He got into Dom's server and, within minutes (during Dom's talk), Dom's laptop powered off!

In addition to Dom's stage-setting talk, we had the pleasure of having Matthias Kovatsch, researcher in the Institute for Pervasive Computing at ETHZ and the architect of Copper, a generic browser for the IoT based on the Constrained Application Protocol (CoAP). Matthias presented the status of the projects on which he is working and the results of an ETSI/IETF plugfest he attended in Paris. The consolidated slides of the IETF-83 CoRE meeting include the Plugtests wrap-up slides (slightly edited). It's really exciting to see how this project is directly contributing to part of the standards-proving process!
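
For readers who haven't yet seen CoAP in action: it is a REST-style protocol for constrained devices, so fetching a sensor resource looks much like an HTTP GET. Below is a minimal client sketch using the Python aiocoap library (my choice for illustration, not the tooling Matthias presented); the host and resource path are made up.

```python
# Hedged sketch of fetching a sensor resource over CoAP, the protocol behind
# the Copper browser. Uses the aiocoap library (an illustrative choice, not
# the tooling presented at the meetup); host and path are made up.
import asyncio

from aiocoap import GET, Context, Message

async def main():
    # A CoAP client context manages the UDP transport for us.
    context = await Context.create_client_context()
    request = Message(code=GET, uri="coap://sensors.example.org/temperature")
    response = await context.request(request).response
    print("Response code:", response.code)
    print("Payload:", response.payload.decode())

asyncio.run(main())
```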

In addition to these talks, Benjamin Wiederkehr, co-founder of Interactive Things, an experience design and architecture services firm based in Zurich, gave us great insights into the process and the tools they used to create the new interactive visualization of cell phone use in Geneva, produced in collaboration with the City of Geneva. Learn all about this project by visiting the Ville Vivante web site.

Valuable evening, folks! Thank you for making another trip to Zurich worth the effort!

Categories: Internet of Things, Research & Development

The Air We Breathe

In IoT circles, air is a popular topic. There is so much of it and, at the same time, it is so fundamental to the quality of life on our planet.

During the IoT-4-Cities event, Andrea Ridolfi, co-founder of SensorScope, presented the use of sensors mounted on buses and trams to measure air quality in the cities of Lausanne and Zurich as part of the OpenSense project.

This is a really interesting collaboration that I hope will develop into systems for commercial deployment, using an architecture similar to the one Ridolfi presented.

Since deploying these systems widely will be expensive, going to scale will probably require getting citizens involved in air quality sensing. The citizen participation component of air quality sensing was the topic of presentations by Michael Setton, VP of Marketing at Sensaris, and Jan Blom, User Experience Researcher at Nokia Research.


On March 30, the same day as the IoT-4-Cities meeting, the IoT-London meetup group held a workshop at which 10 people built their first sensors. The web site with the materials shared during the workshop would be a great basis for people to get started.

In parallel, Ed Borden of Pachube (LogMeIn) has put the Air Quality Egg project up on Kickstarter.com and it took off like a rocket, meeting its financial goal of $39,000 in less than 10 days. There are still three weeks before the project closes on Thursday, April 26, 2012.
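
The data side of the Egg is refreshingly simple: each egg pushes its readings to a feed on Pachube over HTTP. Here is a hedged sketch of what one update might look like, written against my reading of Pachube's v2 feed API; the feed ID, API key, datastream IDs and values are all placeholders.

```python
# Hedged sketch of pushing one air-quality reading to Pachube, the service
# behind the Air Quality Egg. Based on my reading of Pachube's v2 feed API;
# the feed ID, API key, datastream IDs and values are placeholders.
import requests

FEED_ID = "12345"          # hypothetical feed
API_KEY = "YOUR_API_KEY"   # per-account key from pachube.com

payload = {
    "version": "1.0.0",
    "datastreams": [
        {"id": "NO2", "current_value": "41"},  # e.g., ppb from the egg's sensor
        {"id": "CO", "current_value": "0.6"},
    ],
}

# One HTTP PUT updates the feed's current values.
response = requests.put(
    "https://api.pachube.com/v2/feeds/%s.json" % FEED_ID,
    json=payload,
    headers={"X-PachubeApiKey": API_KEY},
)
response.raise_for_status()
print("Feed updated:", response.status_code)
```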

I want to get some people from Switzerland involved in building a prototype of the Air Quality Egg as a DIY project for the IoT Zurich meetup community but, unfortunately, another enthusiast, JP de Vooght, and I don't have all the necessary skills between us.

  • Are you interested in leading an AQE workshop or getting involved?
  • Do you have a venue where about 10 people can meet for a half day (with benches where soldering is convenient)? What else is needed? A 3D printer?

Join the Air Quality Egg Project and contact JP before April 25! We can promote the activity on the IoT-Zurich meetup list and page.

Categories: Augmented Reality, Business Strategy

Augmented Reality SDK Confusion

I don't feel confused about AR SDKs but I wonder if some of those who are releasing new so-called AR SDKs have neglected to study the AR ecosystem. In my depiction of the Augmented Reality ecosystem, the "Packaging" segment is at the center, between delivery and three other important segments. 

Packaging companies are those that provide tools and services to produce AR-enriched experiences. Think of it this way: once content has been "processed" through the packaging segment, a user whose sensors detect the right context receives ("experiences") that content in context. More specifically, the content arrives in "camera view" (i.e., visually inserted over the physical world), as an "auditory" enrichment (i.e., a sound is produced for the user at a specific location or context) or as a "haptic" enrichment (i.e., the user feels something on their body when a sensor connects with a published augmentation that sends a signal to the user). That's AR in a nutshell.

In the packaging segment we find many sub-segments. This includes at least the AR SDK and toolkit providers, the Web-hosted content publishing platforms and the developers that provide professional services to content owners, brands and merchants (often represented by their appointed agencies).

Everyone, regardless of the segment, is searching for a business model that will work for Augmented Reality in the long run. In order for value (defined for the moment as "something to which you pay attention or for which you pay money") to flow through an ecosystem segment, it's simple: you must have those who buy (with their time or their money) and those who sell to the buyers. With the packaging segment in the middle, the likelihood is high that the things that matter in the long run, the things that generate revenues, will involve this segment.

The providers of software development tools for producing AR-enriched experiences (aka AR SDKs) all have the same goal (whether they announce it or not). The "game" today, while exploring all possible revenue streams, is to get the maximum number of developers on your platform. If you have more developers, you might get the maximum number of projects executed on/with your platform. The number of projects (or augmentations) is the metric that matters most. The SDK providers reach for this goal by attracting developers to their tools (directly or indirectly, using contests and other strategies) and/or by doing projects with content providers themselves (and thus competing with the developers). Cutting the developer segment out is not scalable, and cannibalizing your buyers is not recommended either, but those are separate subjects.

For some purposes, and since it drives the use of their products, packaging companies rely on and frequently partner with the providers of enabling technologies, the segment represented in the lower left corner of the figure. More about that below.

Since we are in the early days and no one is confident about what will work, virtually all the packaging segment players have multiple products or a mix of products and services to offer. They use/trade technologies among themselves and are generally searching for new business models. And the enabling technology providers get in the mix as well.

The assumption is that if a company is using an SDK, it is somehow "locked in" and the provider will be able to charge for something in the future; or that, if you are a hardware provider, your chips will have an advantage accelerating experiences developed with your SDK. If manufacturers of devices learn that experiences produced using a very popular SDK are always accelerated by a certain chipset, they might sell more devices, hence order more of these chips, or pay a premium for them. This logic probably holds as long as there aren't standards or open source alternatives to a proprietary SDK.

Let's step back a few years, to when AR SDKs were licensed on an annual or per-project basis to developers. Per-project licensing to third-party developers was the primary revenue generator for computer vision-based SDK provider ARToolworks, and annual licensing was relatively successful for the two largest companies (in terms of AR-driven revenues pre-2010), Total Immersion and metaio. These were also the largest revenue-generating models for over a dozen other less well-known companies until mid-2011. That's approximately when the "simple" annual or per-project licensing model was washed away, primarily by Qualcomm.

Although it is first an enabling technology provider (blue segment), Qualcomm released its computer vision-based SDK, Vuforia, with a royalty- and cost-free license in the last days of 2010, and more widely in early 2011. To compound the issue, Aurasma (an activity of Hewlett-Packard since the Q3 2011 HP acquisition of Autonomy) came out in April 2011 with its no-cost SDK. Qualcomm and Aurasma aren't the first ones to do this. No one ever talks about it any more, but Nokia Point & Find (officially launched in April 2009 after a long closed beta) was the pre-smartphone-era (Symbian) version. It contained and exposed via APIs all the various (visual) search capabilities within Nokia and was released as a service platform/SDK. It didn't catch on, for a variety of reasons.

So, where are we? Still unclear on why there are so many AR SDKs, or companies that say they offer them.

AR SDKs are easily and frequently confused with Visual Search SDKs. Visual Search SDKs permit a developer to use algorithms that match what's in the camera's view with images on which the algorithm was "trained," a machine learning term for processing an image or a frame of video and extracting/storing natural features in a unique arrangement (a pattern) which, when detected again in the same or a similar arrangement, will produce a match. A Visual Search SDK leaves what happens after the match up to the developer. A match could bring up a Web page, as a match in a QR code scanner does. Or it could produce an AR-enriched experience.
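
To make the "training" and matching steps concrete, here is a minimal sketch using ORB features in OpenCV. This illustrates the general technique only, not the algorithm of any particular Visual Search SDK, and the thresholds are guesses.

```python
# Minimal sketch of the "train then match" loop behind visual search, using
# ORB natural features in OpenCV. This illustrates the general technique,
# not any particular Visual Search SDK; thresholds are guesses.
import cv2

trained = cv2.imread("magazine_page.jpg", cv2.IMREAD_GRAYSCALE)  # "trained" image
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)     # camera frame

orb = cv2.ORB_create()
kp1, desc1 = orb.detectAndCompute(trained, None)  # offline training step
kp2, desc2 = orb.detectAndCompute(frame, None)    # per-frame extraction

# Brute-force Hamming matcher; cross-checking keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)

# Declare a match if enough strong correspondences survive.
good = [m for m in matches if m.distance < 50]
if len(good) > 25:
    print("Match: hand off to the developer (open a page, start an AR scene)")
else:
    print("No match in this frame")
```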

Therefore, Visual Search can be used by, and is frequently part of, an AR SDK. Many small companies are providing "just" the Visual Search SDKs: kooaba, Mobile Acuity, String Labs, Olaworks, milpix and eVision, among others. And apparently there's still room for innovation here. Catchoom, a Telefonica I&D spin-off that's going to launch at ARE2012, is providing the Visual Search for junaio and Layar's Vision experiences.

Another newcomer that sounds like it is aiming for the same space that Catchoom has in its crosshairs (it provides "visual search for brands") is Serge Media Corporation, a company founded (according to its press release) by three tech industry veterans and funded by a Luxembourg-based consortium. The company introduced the SergeSDK. Here's where the use of language is fuzzy and the confusion is clear. The SergeSDK Web page says that Aurasma is a partner. Well, maybe HP is where they are getting the deep pockets for the $1M prize for the best application developed using their SDK! If Aurasma is the provider of the visual search engine, then the SergeSDK is actually only a search "carousel" that appears at the top of the application. Sounds like a case where Aurasma is going to get more developers using its engine.

Hard to say how well this will work in the long run, or even over just the next year. There are few pockets deeper than those of Google and Apple when it comes to Visual Search (and AR). These companies have repeatedly demonstrated that they are incubating the technologies, and big plans are in store for us.

All right. Let's summarize. By comparison with other segments, the packaging segment of the AR ecosystem is a high-risk zone. It will either completely disappear or explode. That's why there are so many players and everyone wants to get in on the action!

Stay tuned: over the next 6 months, this segment will undergo the most rapid and unpredictable changes as Google and Apple make their entries.

Categories: Social and Societal

Time Under Pressure

In my most recent post I wrote a bit about what happens when I leave my office. At events I meet a lot of new people, and when out on the road I encounter objects that aren't familiar to me. It can be enlightening, but it can also be dangerous and costly if time is your most precious resource (and time is the most limiting resource for populating this blog).

Here's an example of what we all try to avoid. Eric Picker came to Lausanne to give a 20-minute talk about the use of sensors and telecommunications to monitor water quality and quantity during our IoT-4-Cities workshop. His trip to Lausanne was not as smooth as it could have been, but he arrived with a few hours to spare. The talk was very rewarding, he met some new people, and he took the opportunity to get in a few hikes in Switzerland.

It was on the return trip that his flight was cancelled due to strikes by the French air traffic controllers' trade union and, well, the French train system failed him as well (he missed every connection). It took two days for him to travel back to Cannes. Without incident, Geneva is one hour from Nice (by air).

Last week in San Francisco I was nine time zones out of synch with home base and (on the record) was there only to attend the New Digital Economics Brainstorm and chair the AR Innovator's Showcase on the same evening (March 27). I knew that in a hotbed of activity like the Bay Area, I couldn't miss the opportunity to connect with others. In the end, there were people whom I couldn't catch, but almost every precious minute was accounted for. Among the meetings, I had a great philosophical session with Gene Becker, another with Erik Wilde, visited the quiet offices of Quest Visual, and had lunch with the founder of Vizor (a project in stealth mode). I caught up with spirit sister Kaliya Hamlin, and we learned about converting communities of interest into consortia with Global Inventures. I had quality sessions with representatives of Total Immersion, metaio, PRvantage, NVIDIA and The Eye Cam.

As a consultant, my value is a mixture of my knowledge about subjects and the time I have available to dispense it, to use it or to increase it. I do everything I can to manage my time. I've visited many portals and have read books on the topic of time management. Like everyone, I suppose, I try to avoid wasting time, and I use some software tools to save time. It's a topic of much interest to me, but here's the ironic twist I've been reading and hearing more about recently: the more you stress about anything, including the time you have, the less of it you (may) have! For example, here, here, here and here. I hate to leave you with this negative thought, but it's what's on my mind!

Categories: Events, Internet of Things

Where EVRYTHNG Connects

Over the past 10 days I've been traveling and participating in important workshops and events in the US, so writing and posting to this blog has been infrequent. My recent face-to-face meetings involved those attending the AR-in-Texas workshops, followed by the participants of the Fifth AR Standards Community Meeting that I chaired in Austin. Then I participated in the Open Geospatial Consortium's quarterly Technical Committee meetings. I'm currently in San Francisco to attend the New Digital Economics Brainstorm.

I haven't counted but I estimate that within a week's time, during and between these events, I've met with over 100 people individually or in small groups. During the trip just prior to this one, the five days of Mobile World Congress in Barcelona, I met and spoke with at least that many and probably closer to 200 people.

A significant slice of these (the majority, I am guessing) are people with whom I have a history, meaning simply that we may have spoken by Skype, phone or in person, or exchanged some e-mail. Our meetings in the physical space, however, differ from those we conduct virtually. We all know that the Internet has formed far more links between people than physical contacts could ever hope to make; however, meeting in person still brings us value. How much? Well, that's difficult to measure in time and in terms of revenue. Certainly these meetings provide me sufficient value to warrant leaving my office to attend them! I could probably ramble on and reflect further about this interpersonal on-line/in-person communication dichotomy, but one tangent I want to explore with you is slightly different.

When I'm traveling I also come into contact with many, many objects. Products, places, things. I wonder how many objects (new ones, old ones, ones I've seen/encountered before) I come into contact with in a day. What value do these bring to me? How would I discover this?

Think of a ‘Facebook for Things’ with apps, services and analytics powered by connected objects and their digital profiles. With billions of product and other objects becoming connected, tagged and scannable, there’s a massive opportunity for a company that can provide the trusted engine for exchanging this active object information.

One of the companies responding to this opportunity is EVRYTHNG. I hope to see many new and familiar faces in the room on April 3 in Zurich, when I'll be chairing the next Internet of Things face-to-face meeting, featuring the start-up EVRYTHNG. Why should you be there?

One reason is that co-founder Dominique Guinard will be talking from his company's perspective about:

– What is the Web of Things?
– Web of Things: How and Why?
– Problem Statement: Hardware and Cloud Infrastructures for Web-augmented Things
– Web-enabling Devices and Gateways
– Active Digital Identities (ADIs)
– EVRYTHNG as a storage engine
– Problem Solved: Connecting People & Products
– Vision: Every Thing Connected
– Projects and Concrete Example of How and Why ADIs are Useful.
– Using our cloud services and APIs to build your next internet of things / web of things application (see the sketch after this list).
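
To ground the "Active Digital Identities" idea before the meeting, here is a hedged sketch of what resolving a tagged object's ADI over plain HTTP and JSON might look like. The host, path and authentication header are illustrative assumptions, not EVRYTHNG's documented API.

```python
# Hedged sketch of resolving a tagged object's "Active Digital Identity"
# over plain HTTP and JSON. Host, path and auth header are illustrative
# assumptions, not EVRYTHNG's documented API.
import requests

API_KEY = "YOUR_API_KEY"
THNG_ID = "abc123"  # hypothetical identifier scanned from a tagged product

response = requests.get(
    "https://api.example-wot-cloud.com/thngs/%s" % THNG_ID,
    headers={"Authorization": API_KEY, "Accept": "application/json"},
)
response.raise_for_status()

adi = response.json()
# The ADI is just a web resource: properties, history, links to services.
print(adi.get("name"), adi.get("properties"))
```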

Let's connect in Zurich!

Categories: Innovation, Social and Societal

In digital anima mundi

Each year the producers of TED bring beautifully articulated, thought-provoking content to the world. Those of us who are not invited or choose not to attend the event in person get free access to these talks in the weeks that follow the end of the live production. My first session from the 2012 TED program was by Peter Diamandis, about our fascination with negative trends and the concepts he has captured in his book, Abundance.

An example of abundance in the world/on the web is the page on which Gene Becker of Lightning Laboratories shares his notes and slides from an inspirational talk he gave last week on an SXSW stage. Thank you for sharing these, Gene!


Categories: 2020, Social and Societal

Anticipatory Services (1)

It's March 18, 2012. I've entered the doors of a nice hotel on the outskirts of Austin, Texas. As I greet the agent and approach the reception desk, I get my photo ID and my credit card out and lay them on the counter. After a moment of looking at a box on the counter, the agent at the desk informs me that the fitness center and the swimming pool are on the ground floor. In response to this information (which I will not use), I ask how I can connect to the Internet service in the room and the procedure to follow to have something printed out.

I enter my room and, in anticipation of my arrival, the fan and air conditioner are running full tilt. I immediately find the thermostat on the wall and turn everything off. The temperature is to my liking. I open the curtains to let in the natural light.

If I were an author, I would write a poem or a short story about a time in the future when all the people, places and things around me are able to detect who I am, what I'm saying (and to whom), and every gesture I make. The environment will be organized in such a way that my every need will have been considered and the options made available.

I will be able to choose how to be reached (since there won't be these antiquated devices such as telephones and computers any more). Though the inner workings will be invisible to my naked eye, the "alert" surfaces of my surroundings and the objects I carry with me will be the interfaces by which I receive suggestions and make my choices known. When I arrive at the restaurant for an evening cocktail, I'll be served a bowl of freshly made tortilla chips accompanied by dishes of guacamole and salsa.

For a small monthly or annual fee, my preferred provider of premium anticipatory services will be tracking and logging my every move and anticipating my future. When the experiences I have exceed expectations, I will be happy to pay yet more!

I've recently learned, from an article written by Bruce Sterling in the April 2012 issue of the Smithsonian Magazine on the origins of futurism, of the H.G. Wells essays of 1902 on the topic of life in 2000. The "Anticipations," as the author entitled them, are available as part of Project Gutenberg. I am really looking forward to digesting these, in particular how he thought of the urban life we lead today.

I wonder if he also thought of anticipatory services.

I regret that my visibility into the near future confirms I will be unable to digest these works in the coming days. And that at the next place I stay, an agent will similarly greet me as a guest with expectations they seek, but will fail, to meet. When I am home, my expectations are lower and the astounding reliability of my world in delivering beyond what I need is much appreciated.

Categories: Augmented Reality

GOING OUTSIDE

Spring (and the Greenville Avenue St. Patrick’s Day Parade) has brought Dallasites out in droves today. Locals tell me that this is unusual. I’m reminded of this wonderful short article, originally published in The New Yorker’s March 28, 2011 issue.

JUST IN TIME FOR SPRING

By Ellis Weiner

Introducing GOING OUTSIDE, the astounding multipurpose activity platform that will revolutionize the way you spend your time.

GOING OUTSIDE is not a game or a program, not a device or an app, not a protocol or an operating system. Instead, it’s a comprehensive experiential mode that lets you perceive and do things firsthand, without any intervening media or technology.

GOING OUTSIDE:

1. Supports real-time experience through a seamless mind-body interface. By GOING OUTSIDE, you’ll rediscover the joy and satisfaction of actually doing something. To initiate actions, simply have your mind tell your body what to do—and then do it!

Example: Mary has one apple. You have zero apples. Mary says, “Hey, this apple is really good.” You think, How can I have an apple, too? By GOING OUTSIDE, it’s easy! Simply go to the market—physically—and buy an apple. Result? You have an apple, too.

Worried about how your body will react to GOING OUTSIDE? Don’t be—all your normal functions (respiration, circulation, digestion, etc.) continue as usual. Meanwhile, your own inboard, ear-based accelerometer enables you to assume any posture or orientation you wish (within limits imposed by Gravity™). It’s a snap to stand up, sit down, or lie down. If you want to lean against a wall, simply find a wall and lean against it.

2. Is completely hands-free. No keyboards, mice, controllers, touch pads, or joysticks. Use your hands as they were meant to be used, for doing things manually. Peeling potatoes, applauding, shooting baskets, scratching yourself—the possibilities are endless.

3. Delivers authentic 3-D, real-motion video, with no lag time or artifacts. Available colors encompass the entire spectrum to which human eyesight is sensitive. Blacks are pure. Shadows, textures, and reflections are beyond being exactly-like-what-they-are. They are what they are.

GOING OUTSIDE also supports viewing visuals in a full range of orientations. For Landscape Mode, simply look straight ahead—at a real landscape, if you so choose. To see things to the left or the right, shift your eyes in their sockets or turn your head from side to side. For Portrait Mode, merely tilt your head ninety degrees in either direction and use your eyes normally. Vision-correcting eyeglasses not included but widely available.

4. Delivers “head-free” surround sound. No headphones, earbuds, speakers, or sound-bar arrays required—and yet, amazingly, you hear everything. Sound is supported over the entire audible spectrum via instantaneous audio transmission. As soon as a noise occurs and its sound waves are propagated to your head, you hear it, with stunning realism, with your ears.

Plus, all sounds, noises, music, and human speech arrive with remarkable spatial-location accuracy. When someone behind you says, “Hey, are you on drugs, or what?,” you’ll hear the question actually coming from behind you.

5. Supports all known, and all unknown, smells. Some call it “the missing sense.” But once you start GOING OUTSIDE you’ll revel in a world of scent that no workstation, media center, 3-D movie, or smartphone can hope to match. Inhale through your nose. Smell that? That’s a smell, which you are experiencing in real time.

6. Enables complete interactivity with inanimate objects, animals, and Nature™. Enjoy the texture of real grass, listen to authentic birds, or discover a flower that has grown up out of the earth. By GOING OUTSIDE, you’ll be astounded by the number and variety of things there are in the world.

7. Provides instantaneous feedback for physical movement in all three dimensions. Motion through 3-D environments is immediate, on-demand, and entirely convincing. When you “pick up stuff from the dry cleaner’s,” you will literally be picking up stuff from the dry cleaner’s.

To hold an object, simply reach out and grasp it with your hand. To transit from location to location, merely walk, run, or otherwise travel from your point of origin toward your destination. Or take advantage of a wide variety of available supported transport devices.

8. Is fully scalable. You can interact with any number of people, from one to more than six billion, simply by GOING OUTSIDE. How? Just go to a place where there are people and speak to them. But be careful—they may speak back to you! Or remain alone and talk to yourself.

9. Affords you the opportunity to experience completely actual weather. You’ll know if it’s hot or cold in your area because you’ll feel hot or cold immediately after GOING OUTSIDE. You’ll think it’s really raining when it rains, because it is.

10. Brings a world of cultural excitement within reach. Enjoy access to museums, concerts, plays, and films. After GOING OUTSIDE, the Louvre is but a plane ride away.

11. Provides access to everything not in your home, dorm room, or cubicle. Buildings, houses, shops, restaurants, bowling alleys, snack stands, and other facilities, as well as parks, beaches, mountains, deserts, tundras, taigas, savannahs, plains, rivers, veldts, meadows, and all the other features of the geophysical world, become startlingly and convincingly real when you go to them. Take part in actual sporting events, or observe them as a “spectator.” Walk across the street, dive into a lake, or jump on a trampoline surrounded by happy children. After GOING OUTSIDE, you’re limited not by your imagination but by the rest of Reality™.

Millions of people have already tried GOING OUTSIDE. Many of your “friends” may even be GOING OUTSIDE right now! Why not join them and see what happens?

Categories: Augmented Reality, Standards

Open and Interoperable AR

I’ve been involved in and observed technology standards for nearly 20 years. I’ve seen the boom that came about because of the W3C's work and the Web standards that were put in place early. The standards for HTTP and HTML made publishing content for a wide audience much more attractive to the owners and developers of content than having to format their content for each individual end-user application.

I’ve also seen standards introduced in emerging industries too early. For example, the ITU H.320 standards of the late 1980s were too limiting and stifled innovation in the videoconferencing industry for the following decade. Even though there was an effort to correct the problem in the mid-1990s with H.323, the architectures were too limiting and eventually much of the world went to SIP (the IETF Session Initiation Protocol). But even SIP has had only limited impact when compared with Skype for the adoption of video calling. So this is an example where, although good standards are available and are implemented by large companies, the mass market just wants things that work, the first time and every time. AR is a much larger opportunity and probably closer to the Web than videoconferencing or video calling.

With AR, there’s more than just a terminal and a network entity, or two terminals talking to one another. As I wrote in my recent post about the AR Standards work, AR is starved for content, and without widespread adoption of standards, publishers are not going to bother with making their content available. In addition to it being just too difficult to reach audiences on fragmented platforms, there’s no clear business model. If, however, we have easy ways to publish to massive audiences, traditional business models, such as premium content subscriptions and paying to watch or experience, become viable.

I don’t anticipate that mass market AR can happen without open AR content publishing and management as part of other enterprise platforms. The systems have to be open and to interoperate at many levels. That's why in late 2009 I began working with other advocates of open AR to bring experts in different fields together. We gained momentum in 2011 when the Open Geospatial Consortium and the Khronos Group recognized our potential to help. These two standards development organizations see AR as very central to what they provide. The use of AR drives the adoption of faster, high performance processors (which members of the Khronos Group provide) and location-based information.

There are other organizations very consistently participating and making valuable contributions to each of our meetings. In terms of other SDOs, in addition to OGC and Khronos, the W3C, two subcommittees of ISO/IEC, the Open Mobile Alliance, the Web3D Consortium and the Society for Information Display report regularly about what they’re doing. The commercial and research organizations that attend include, for example, Fraunhofer IGD, Layar, Wikitude, Canon, Opera Software, Sony Ericsson, ST-Ericsson and Qualcomm. We also really value the dozens of independent AR developers who come and contribute their experience as well. Mostly they’re from Europe, but at the meeting in Austin we expect to have a new crop of US-based AR developers showing up.

Each meeting is different and always very valuable. I'm very much looking forward to next week!

Categories: Augmented Reality, Events

Aurasma at GDC12 and SXSW12

I was unable to attend the Game Developers Conference last week in San Francisco, but it sounds like it was a good event. I enjoyed reading Damon Hernandez's post on Artificial Intelligence. Damon and I are working together on the AR in Texas Workshops, March 16 and 17.

At GDC12, Aurasma was in the ARM booth showing Social AR experiences. During this video interview, David Stone gave some numbers, and his excitement about the platform nearly left him speechless.

The SXSW event is going on this week and Aurasma is there as well. In Austin, Aurasma broke the news about its partnership with Marvel Comics. This could have been good news for the future of AR-enhanced books. Unfortunately, the creative professionals who worked on this demonstration let us down. Watch the movie of this noisy animation showing what the character is capable of doing, and ask yourself: how many times does a "reader" want to watch this?

I fear the answer is: Zero. Is there any aspect of this experience sufficiently valuable for a customer to return? I could be wrong.

What more could the character have done? Well, something related to the story of the comic book, for starters!

Categories: Augmented Reality, Events, Standards

Interview with Neil Trevett

In preparation for the upcoming AR Standards Community Meeting, March 19-20 in Austin, Texas, I’ve conducted a few interviews with experts. See my interview with Marius Preda here. Today’s special guest is Neil Trevett.

Neil Trevett is VP of Mobile Content at NVIDIA and President of the Khronos Group, where he created and chaired the OpenGL ES working group, which has defined the industry standard for 3D graphics on embedded devices. Trevett also chairs the OpenCL working group at Khronos, defining an open standard for heterogeneous computing.

Spime Wrangler: When did you begin working on standards and open specifications that are or will become relevant to Augmented Reality?

NT: It’s difficult to say because so many different standards are enabling ubiquitous computing and AR is used in so many different ways. We can point to graphics standards, geo-spatial standards, formatting, and other fundamental domains. [editor’s note: Here’s a page that gives an overview of existing standards used in AR.]

The lines between computer vision, 3D, graphics acceleration and use are not clearly drawn. And, depending on what type of AR you’re talking about, these may be useful, or totally irrelevant.

But, to answer your question, I’ve been pushing standards and working on the development of open APIs in this area for nearly 20 years. I first assumed a leadership role in 1997 as President of the Web3D Consortium (until 2005). In the Web3D Consortium, we worked on standards to bring real-time 3D on the Internet and many of the core enablers for 3D in AR have their roots in that work.

Spime Wrangler: You are one of the few people who has attended all previous meetings of the International AR Standards Community. Why?

NT: The AR Standards Community brings together people and domains that otherwise don’t have opportunities to meet. So a first goal was getting to know the folks who are conducting research in AR, designing AR experiences and implementing core enabling technologies, even the artists and developers. I need to know those people in order to understand their requirements. Without requirements, we don’t have useful standards. I’ve been taking what I learn during the AR Standards community meetings and working some of that knowledge into the Khronos Group.

The second incentive for attending the meetings is to hear what the other standards development organizations are working on that is relevant to AR. Each SDO has its own focus and we already have so much to do that we have very few opportunities to get an in-depth report on what’s going on within other SDOs, to understand the stage of development and to see points for collaboration.

Finally, the AR Standards Community meetings permit the Khronos Group to share with the participants in the community what we’re working on and to receive direct feedback from experts in AR. Not only are the requirements important to us, but also the level of interest a particular new activity receives. If, during the community meeting I detect a lot of interest and value, I can be pretty sure that there will be customers for these open APIs down the road.

Spime Wrangler: Can you please describe the evolution you’ve seen in the substance of the meetings over the past 18 months?

NT: The evolution of this space has been rapid, by standards development standards! This is probably because a lot of folks have thought about the potential of AR as just another way of interfacing with the world. There have also been decades of research in this area. Proprietary silos are just not going to be able to cover all the use cases and platforms on which AR could be useful.

In Seoul, it wasn’t a blank slate. We were picking up on and continuing the work begun in prior meetings of the Korean AR Standards community that had taken place earlier in 2010. And the W3C POI Working Group had just been approved as an outcome of the W3C Workshop on AR and the Web.

Over the course of 2011 we were able to bring in more of the SDOs. For example, the OGC and the Web3D Consortium started presenting their activities during the Second Community Meeting. The OMA Mobile AR Enabler work item was presented, and ISO SC24 WG9 chair Gerry Kim participated in the Third Meeting, held in conjunction with the Open Geospatial Consortium’s meeting in Taiwan.

We’ve also established and been moving forward with several community resources. I’d say the initiation of work on an AR Reference Architecture is an important milestone.

There’s a really committed group of people who form the core, but many others are joining and observing at different levels.

Spime Wrangler: What are your goals for the meeting in Austin?

NT: During the next community meeting, the Khronos Group expects to share the progress made in the newly formed StreamInput WG. We’re just beginning this work, but there are great contributions and we know that the AR community needs these APIs.

I also want to contribute to the ongoing work on the AR Reference Architecture. This will be the first meeting in which MPEG joins us; Marius Preda will present what MPEG has been doing, as well as initiate new work on 3D transmission standards that builds on past MPEG standards.

It’s going to be an exciting meeting and I’m looking forward to participating!