Categories
Augmented Reality Policy, Legal, Regulatory Social and Societal

Where is the line?

If, when using the camera and microphone of my future AR-assisted display, I were to record an experience including a conversation, where is the line between what is yours and what is mine? Or suppose I had my personal cultural assistant operating while visiting a gallery, or my digital shopping assistant in a store: what parts of the data belong to the museum or the retail merchant?

In the past six months there's been a lot of attention paid in the media to the subjects of personal privacy (and personal data protection) and the use of Project Glass. No, Glass isn't AR. But Glass has enough in common with products that do (and will) provide AR features that we can use this post by Robert Scoble as an example of the dialog that is, and will be, taking place.

We've probably all read articles like this one and had discussions with people about how to manage personal privacy when sensors are always (or almost always) on. Scoble is of the opinion that privacy advocates sounding the alarm now will look like fools in a year's time.

I don't think it's that simple. Sure, there are already laws to protect personal privacy. In California, it's illegal to record another person's voice without consent or to capture video or photos where there is an expectation of privacy. I'm told that there will be ways to address concerns with on-device (on-display) software and/or hardware (e.g., Glass' projector light is on when you are recording, a flap over your camera). I'd like to learn about approaches to address and control privacy that might be more discreet and more reliable, or that would work without relying on the manufacturer of the device, perhaps using cloud-based services.

These still don't get to the question of the law. I'm going to be giving a presentation to an association of Law Librarians tomorrow during which I will raise this question of capture in public spaces. Although the managers of law libraries don't themselves argue cases of this type, they might know of precedent. I look forward to an interesting dialog about the direction of change and how those who manage public spaces will address this delicate subject in the future.

When there is precedent in the public parts of our lives, we will all be wiser about where to draw the line in our personal spaces.

Categories
Augmented Reality Social and Societal

The Contextually-Aware Age

I am writing a position paper about how position (ironically) can reduce the amount of data sent to a mobile device and make what is sent more likely to be of interest to the user. Since the paper will be distributed as part of a campaign to promote an OGC event about mobile standards later this month, I’m using the phrase “Next Generation Location-based Services” to describe this, but I’m exploring other closely-related ideas and topics.

A lot of people, especially academics (here's an example), developed excellent thinking around such services as far back as a decade ago, and a very comprehensive CSC Grant report about NGLBS was published in 2010.

I was watching the recently-posted video by Robert Scoble on the new real-time information feed that TagWhat released the other day. About 8 minutes into the video, I heard that with Forbes writer Shel Israel, Robert is co-writing a book about contextually-aware services called the Age of Context. I look forward to reading it.

The Scobleizer tweet stream is packed with excellent examples of how mobile sensors on devices are going to impact the services we use. So, while the concept of Next Generation LBS isn’t new and it really hasn’t become common (yet), it’s fair to say that it is trendy.

What Scoble hasn’t discussed, at least in what I’ve read of his writing on the topic to date, is the problem of proprietary APIs.

Scoble is a huge fan of social media, especially Google+. On the day that I was watching the TagWhat video interview, I also learned that Robert was going to be using the new Smith Optics Elite Division ski goggles with Recon Instruments technology. I haven’t read Robert’s review of these yet but I hope he does a special about Augmented Reality when skiing.

In the social media "universe" there are few, if any, open standards. Unfortunately, many of the services to which applications send requests for contextually-relevant information are social media services using proprietary interfaces. Application developers must either develop their own data, limiting their users' access to other appropriate data sets, or write to the unique features of each service interface in order to relay the data to their servers, where they can filter it as needed by the user and according to the preferences and context settings of the mobile client application.

Lack of standards for contextually-relevant data places a heavy burden on developers. In contrast, if the burden on developers can be reduced significantly with the use of open data, linked data and, above all, open Web-based services, more developers will use more diverse data sets in their services and the era of contextually-aware mobile services will blossom.
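To make the developer burden described above concrete, it can be sketched as an adapter layer: each proprietary service gets a thin wrapper that maps its unique interface onto one common shape, so the context-filtering logic is written only once. Everything here (service names, field names, response format) is an invented placeholder for illustration, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Placemark:
    """Common shape every adapter normalizes to."""
    name: str
    lat: float
    lon: float
    source: str

class ServiceAdapter:
    """One subclass per proprietary service interface."""
    def fetch(self, lat: float, lon: float, radius_m: float) -> list:
        raise NotImplementedError

class SocialServiceA(ServiceAdapter):
    def fetch(self, lat, lon, radius_m):
        # Imagine a proprietary JSON response with its own field names.
        raw = [{"title": "Cafe du Lac", "coords": [46.20, 6.15]}]
        return [Placemark(r["title"], r["coords"][0], r["coords"][1], "ServiceA")
                for r in raw]

def nearby(adapters, lat, lon, radius_m):
    """The one service-agnostic place where merging/filtering logic lives."""
    results = []
    for adapter in adapters:
        results.extend(adapter.fetch(lat, lon, radius_m))
    return results

print(nearby([SocialServiceA()], 46.20, 6.15, 500)[0].name)
```

Each new proprietary service costs one more adapter class; an open standard interface would make most of those adapters unnecessary.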

My position paper describes how some standards already exist to address the problem. The Open GeoSpatial Consortium provides the most widely-implemented set of open standards for geospatially-referenced information for use in the development of contextually-aware enterprise and consumer services. I’m looking forward to learning more about these during the upcoming Next Generation LBS event the OGC is holding on February 27. I hope to meet many of you there!

Categories
3D Information Business Strategy

Disrupting How People Collaborate

I've recently co-founded with Ignacio Mondine a new company, Two Way View. Two Way View will develop, manufacture and sell new products that enable "Transparent Collaboration."

Transparent Collaboration is what people do when they use Two Way View products to share, annotate and modify data, co-surf or otherwise have an experience that combines the digital and physical worlds. You might think that I’m talking about Augmented Reality, and there are related concepts, but Transparent Collaboration products don’t overlay digital data on the physical world. They allow me to use the physical world, directly, by way of a light pen or my fingertips on the touch-sensitive screen, to modify the digital world in real time.

The crucial difference and where we are disruptive, as you can see in this video, is how Two Way View products also allow the two people who are sharing and collaborating to see one another in real size, see where the other is looking, and to work together only an arm's length away, just as if they were on opposite sides of the same sheet of glass.

It's the fusion of a digital whiteboard and a telepresence system.

No one has proven that people will change their behaviors to use it, but I'm really excited about the highly disruptive technology this company is bringing to market. We might be in the position described by Clayton Christensen in his now-classic book, The Innovator's Dilemma. The giant companies that currently provide telepresence (Cisco and Polycom) are feeling the slowdown in sales but, more importantly, they may have neglected to continue innovating within their markets; that is the opening through which an upstart with a different approach and higher value comes in.

What lessons can we use from Christensen's masterpiece? I found this short essay on TechCrunch and it helped me to formulate Two Way View's answers with respect to Christensen's four key takeaways:

  1. Understand the source of your disruption. Is it a new product or a new way to distribute an existing product? Two Way View will use, to the best of its ability, the existing IT and telecommunications product distribution channels to introduce at least one new product.
  2. Pay attention to opportunities in new distribution channels. Two Way View could also distribute its products outside the traditional telepresence channels and, as an OEM, go through vertical market distribution channels. We will explore this.
  3. Start by marketing to the group of customers for which the incumbent in your industry has the lowest margin or the lowest interest to defend. Two Way View is keen to explore the markets in which large, complex 3D models are commonplace. The "traditional" data collaboration systems may be inappropriate in these use cases and lack the human element of collaboration between creative professionals.
  4. Remember these lessons when you are at the top. Stay tuned!

We are going to make an impact in 2013!

Categories
Events Innovation

Sizing Up CES 2013

The Consumer Electronics Association (CEA) marketing team chose to brand CES2013 as the "world's largest innovation event." With 1.92 million net square feet of exhibit space and over 150,000 visitors (half of whom could be Korean), it is indisputably the largest trade show. The word "Innovation" was on banners and headlines everywhere. My eyes rolled and I sighed in pain as I waited in long lines for everything. Despite the express service shuttle buses and the noise being made about driver-less automobiles (although Google was not exhibiting), no one has innovated a new way to move so many individual people easily through spaces of these sizes. Perhaps patience is a cultural virtue of companies that exhibit at CES.

Finally, when I dove into the middle of Central Hall, I found over-sized televisions and cameras, but not what I was looking for. I spent hours crawling around the enormous, sprawling booths of the Japanese and Korean consumer electronics "gorillas" including Sharp, Samsung (clearly the largest presence at the show in terms of footprint), Panasonic, Sony, Canon and Mitsubishi, showing screens that would not fit in any Swiss dwelling. These were interspersed with slightly smaller but still outlandishly large booths of Polaroid (I thought they were out of business!), automotive companies (in North Hall), HiSense and others. CNET published this album of photos taken in the biggest CES2013 booths.

Only a year before, in January 2012, many of these same companies were showing Augmented Reality-enhanced products. But, in 2013, AR was not in evidence. I looked at dashboards and windshields, tablets and camera viewfinders. A crowd gathered around the Vuzix booth where the M100 was being shown. Vuzix received some attention from the media (here and here), but we didn't learn or see anything new. Motorola Solutions' HC1, a head-mounted computer aimed at people in the construction trades, was reportedly being shown but I didn't find it. Intel's demonstration of Perceptual Computing was the highlight in Central Hall. In an interview with the BBC, Mooly Eden, president of Intel Israel, speaks about how gestures, gaze and voice will replace the touch-based human-computer interfaces we have today. Patient, perceptive people staffing information booths assured me that the strategic planners who decide what warrants showing at CES simply had different priorities this year: these new human-computer interfaces. In the LG Electronics stand I ignored the curved TV screens and, with a little effort, found gesture-based interfaces for the AR-assisted dressing room solutions provided by Zugara. It looked precisely the same as when I saw it over two years ago. Contrary to plan, and despite my interest in new mobile displays (Samsung was showing Youm, the flexible display technology about which it has also spoken for several years), I didn't linger in the Samsung Electronics booth.

In South Hall the situation and my mood improved. Here, AR was a little more in evidence, as were key enabling technologies. I caught up with Kopin showing the Golden-i solution that it has partnered with Verizon to provide for first responders. In the Qualcomm booth, Vuforia's news items covered their cloud-based service that now permits users to add their own content as targets in a mobile AR experience (for example, the ModelAN3D application) and new text (as opposed to "simply" image) recognition capabilities. The first application to be enabled with this feature, Big Bird's Words, helps users find and learn new words in their environment.

The crowds around NVIDIA's Project Shield were thick and the reviews by Slashgear, Wired, Mashable, Droid Life and others were enthusiastic, with only a few exceptions. It certainly merits the many awards received. Why doesn't NVIDIA make this the first big AR-assisted power gamer platform?

A little further on I met Limitless Computing, a small company that had escaped my attention before, even though it has been featured in the media showing its AR capabilities. It launched its VYZAR 2D and 3D AR engine for producing mobile AR experiences at CES, but I'm not sure what it really does differently from SiteSpace 3D, which is highly valuable for industrial AR use cases where SketchUp is used with KML. This merits further investigation. Limitless marketing folks need a tutorial in AR terminology.

South Hall is also where I found the AR-assisted games by Sphero, which made it onto the PC World editors' list of the 10 best CES2013 technology products (in 31 slides), on which virtual reality and devices for 3D experiences were also featured. I found it odd that the night vision companies I found in South Hall had never heard of Augmented Reality!

I didn't have to explain mobile AR's benefits to the many young companies I found in CES2013's Eureka Park in the Venetian. There I met with folks from Innovega, Luminode, Precision Augmented Reality Works, 3DeWitt, among dozens of others. Of course, it was stimulating to see so many new products and people in such a short time, but there remains a lot of follow up before I can assess if CES2013 was truly worth the effort.

Categories
Business Strategy Events

The Horsemen from Above

As I prepare for CES2013, I have themes with which to search for companies I'll visit and to organize what I'll learn there. One of these filters I use when thinking about a company's relative importance is its size. Another is the value proposition (to me and to the world).

A third way to think of companies is their culture. This one goes in many dimensions (East/West, Open/Proprietary, Off the books R&D vs. InHouse, etc), and is the topic of this post because I read a piece posted yesterday on TechCrunch about the importance of Samsung Electronics.

It's no surprise that business culture is very closely tied to the culture in which the company's employees work and live. For most people who have not lived and worked there, Korean business culture is very difficult to decode. This summary, on a website for Danes working in Korea, really seems to capture the essence:

The Confucian mind-set is a fundamental part of Korean culture. In accordance with Confucian principles, people of higher rank or age are treated with an explicit respect, both socially and in business matters. Employees of Korean companies have a strong sense of loyalty towards their employer and in any situation of conflict they are expected to seek confirmation or take the side of the employer regardless of the logic behind the arguments.

Confucian emphasis on education can be felt throughout Korean society. Koreans are in general very well educated and attach much importance to academic excellence and degrees obtained. The admission examinations for Korean universities are important events as the result of the examinations determine the future of thousands of young Koreans. Networks established during the high-school and college years often play a big role in the following career and throughout life.

It seems that spirituality is a theme in this post!

In the New Testament, the four horsemen of the apocalypse ride white, red, black, and pale horses. They represent Conquest, War, Famine, and Death, respectively. In 2011, as some predicted the Apocalypse in 2012, the metaphor was used in the business and financial press to analyze the impacts of businesses on the future of technology.

The TechCrunch post suggests that Samsung Electronics is reaching the same level of importance in terms of influencing emerging technology trends and developments as are Google, Apple, Amazon and Facebook (some analysts I read on this topic do not include Facebook on the list and, in its place, have found IBM to be a top technology influencer in 2013). Hence, Samsung would be the fifth horseman. If the goal is to emphasize how important a company is on mobile platforms, then Samsung is more important than Amazon and this list could be kept to four. And, in terms of size, they hit the mark. The New York Times reports that Samsung will reach $8.3B in profits in the quarter that ended December 31, 2012.

The problem that Western analysts face in deciding where to position Samsung is that they have little insight into how the company has achieved its success to date, and what Samsung is planning. Given the difference in corporate culture, it is difficult, as difficult as predicting how spirituality impacts other domains, for those surveyed for these rankings to calibrate how Samsung will shape mobile platforms in the next 12 to 18 months.

While at CES, I will be spending a lot of time in the Samsung Electronics booth, as well as those of many other Asian companies, and will continue my quest to better understand how they, despite their business cultures being very different from American and European, are going to impact mobile business opportunities, particularly in Augmented Reality fields, in the next year.

Categories
2020 Augmented Reality

Viewing Small Parts of Big Data

2013 is only 32 hours "old." Can I view how my place on the planet has evolved in those hours? No.

I can't see how many people went skiing on the mountain at which I'm looking or what the temperature was (or is) as I walk along the edge of Lake Geneva. But the information is "there." The data to answer these questions are somewhere, already part of Big Data.

(Photo: Dawn of 2013 on Lake Geneva)

It could be displayed on my laptop screen, but I'm convinced it would be more valuable to see the same data on and in the world. It's getting easier with cloud-based tools to make small, select data sets available in AR view, but we are far from being surrounded by this data.

One of the issues stems from needing a human, a developer, to associate an action with every published data set. Click on the digital button to go to the coupon. Click on the flag to see the height of the mountain.

What if some data were just data? Could we, without associating an action to them, have data automatically made visible?

The high cost and effort still required to make "just data" about a user's context and surroundings accessible on the fly are major impediments to our being able to use the digital universe more fully. Overcoming these obstacles will require open interfaces, many of which could be defined by standards, upon which new services will be offered.
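The "just data" idea above can be illustrated with a minimal structure: today a publisher typically hand-wires an action to each data point, while plain data would carry only a value and a location, leaving presentation to the viewing device. All names and fields here are hypothetical, chosen only to make the distinction visible.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DataPoint:
    label: str
    value: float
    lat: float
    lon: float
    on_tap: Optional[Callable[[], str]] = None  # today's hand-wired action

def render(point: DataPoint) -> str:
    # A display could visualize any DataPoint automatically,
    # whether or not a developer attached an action to it.
    return f"{point.label}: {point.value}"

temperature = DataPoint("Air temperature (C)", 4.5, 46.45, 6.60)  # no action
print(render(temperature))
```

The point is that `render` needs no per-dataset developer work; the action binding becomes optional rather than mandatory.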

But there also must be deeper thought and research into how we are presented with data, what it looks like, and the dimensions of the data provided to us. What is the appropriate resolution? How do I adjust this?

These questions are not trivial. I'm disappointed that the new IDC study on Big Data in 2020 doesn't go into how we will visualize Big Data in the future. I hope a future study will examine how small a data set is still valuable to different tasks, different modes of a professional or consumer user. This research could help us better understand the Big Data opportunities as well as helping us better quantify the value of Augmented Reality.

Let's hope that we will see this topic raised by others (with the data to define it better) before the end of this young year!

Categories
Augmented Reality Business Strategy

Get Quantitative

Models developed by industry analysts frequently predict astronomically large growth for Augmented Reality. A good report has to have a lot of illustrative examples, describing the richness that AR is capable of delivering and forecasting how this will evolve over the years in the forecast period.

I don't intend to publish a full market research report about AR because, at the speed the business is evolving, it would be out-of-date before it was completed. Beyond that, market and trend forecasting aren't the only services I provide, and my projects don't all need quantitative proof to be successful. That said, regularly and reliably measuring the growth of AR and sharing those metrics with the world are very much on my list of goals for the near future.

In May 2012 I began the process of getting quantitative about mobile AR by inviting a half-dozen of my friends in AR companies to share a little of what they are doing. In a survey I asked questions about:

  • What AR metrics are available?
  • When did you begin capturing mobile AR metrics?
  • How are these metrics captured?
  • Who uses (what is the purpose of gathering) these metrics?
  • Who has access to them?
  • How long are they stored?
  • What are the conditions or agreements with the users or content publishers?

I learned that, in 2012, there was no consistency in how metrics were collected or used. Maybe that's good news because it permits us to develop new methods and avoid having to re-engineer systems.

In preparation for a new campaign on mobile AR metrics, their importance and methods for acquiring and comparing them, I've been gathering my own examples of how metrics are gathered and communicated.

I like the infographics approach. The one I provide here was issued by Blippar about the campaign they did with Shortlist magazine, but I learned about it when it was published in this post by Onno Hansen, the author of the IDentifEYE blog about AR and Education.

The fact that nearly 10% of the audited circulation (over 51K out of 529K) actually used the application/features is not all that surprising when you understand that the target audience of this publication, "high-class city-dwelling men," is highly technology-centric. They also like a good deal. The weekly print publication is distributed free at newsstands in the UK.

One more indicator that these guys like a good deal is the click-through rate of 13.4%. This is phenomenal for a magazine, I think. If you have other examples of high CTR for print media, I'd love to hear about them.
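As a sanity check, the arithmetic behind the figures above is easy to reproduce. Note that I'm assuming the 13.4% CTR is measured against the readers who used the app, which the infographic doesn't make explicit.

```python
circulation = 529_000  # audited weekly circulation
app_users = 51_000     # readers who used the AR features
ctr = 0.134            # reported click-through rate (assumed base: app users)

usage_rate = app_users / circulation
clicks = app_users * ctr

print(f"usage rate: {usage_rate:.1%}")  # just under 10% of circulation
print(f"implied click-throughs: {clicks:,.0f}")
```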

Categories
Augmented Reality Business Strategy Standards

Three Giants (and the Hobbit)

In Greek Mythology, the Hekatonkheires were children of Gaia (Earth) and Uranus (sky). They were three incredibly strong giants, each had 100 hands and 50 heads, and their ferocity surpassed that of all the Titans.

In today's information universe, at least in this version of the story, they are reincarnated and will have the option of doing good, defeating evil and helping the planet to become a better, richer place for human beings. There are many beautiful analogies between Greek Mythology and my current thinking, but I will limit myself to only the Giants. The three can be none other than Apple, Microsoft and Google.

For years we've heard rumors and seen signs in the patent filings (first in Feb 2010, then here, and more recently, here and here) that at least a few people within Apple are working on displays that would provide users the ability to view 3D without glasses or to wear displays on their heads at eye level, otherwise known (in my lingo) as "hands-free displays." Of course, these could be useful for a variety of applications not limited to Augmented Reality, but depending on their features, and how/when the content provided to them appeared, we can assume that these devices would (will) at least be suitable for certain mobile AR applications.

A year ago, when rumors began to circulate about Google's now widely acknowledged and publicized eyewear project (Project Glass), my anxiety lessened about Apple coming out with a new product that would once again transform the form factor of mobile computing (a good thing) while further entrenching (bad news) its closed and proprietary, though very valuable and useful, developer ecosystem.

At least the end users, I told myself, will have a choice: an Android alternative to iOS-based hands-free displays and, instead of one proprietary platform controlling the universe, we will have two. Google will provide Android-friendly and Android-based eyewear for content and experiences that are published in what will probably begin as a Google proprietary format, while Apple will provide different (or some similar) content and experiences published only in its proprietary format.

At least with two Giants there's no doubt there will be a war! My fear remained that one of the two would win and we would not benefit from a future of open and interoperable AR content and experiences, based on open interfaces, standards and wild flashes of innovation emerging out of nowhere but catching on (one can suppose this will happen more easily in open ecosystems than in closed ones).

My hopes were boosted when, on November 13, Vuzix finally disclosed its answer to Project Glass, the M100. The announcement that the M100 won the CES 2013 Design and Engineering award (in the Wireless Handset Accessory category) got picked up by some big bloggers here, here and here, as well as hundreds of smaller readership sites. I think of Vuzix as the Hobbit in this case. Don't worry, there were few giants in Tolkien's mythology so I'm not going to go far here!

When, earlier this week the news broke (see Guardian's article and the Slate.com piece) that Microsoft has been granted a patent on its own development project (no doubt with the help and support of Nokia) resembling those of Apple and Google, I shrieked with delight!

A third giant entering the ring has two impacts for all end users, for all the content in existence already and to come, and for our AR Standards Community activity.

First, and most directly, it puts content publishers, Goliaths like CondeNast as well as the micro-publishers, in the position of having to support, in their future AR-ready content catalogs, multiple display platforms, which is (I hope) prohibitively expensive. The content publishers will be able to unite and encourage the display providers to open at least some interfaces to common code and, over time, maybe even achieve full interoperability. In an ideal scenario, the owners of content around the world, beyond the three giants themselves, will simply ignore the three competing platforms until a set of simple tags and functionality are agreed upon and implemented across them.

Second, the work our community began in the summer of 2012 on the requirements of hands-free AR devices will have the benefit of more open minds: people who are working on their own hardware and software and who want to be heard and to express their vision of an open world while the war of the Titans is raging. The parallel, I believe, is that today's innovators and entrepreneurs who want to develop totally new information experiences in the real world, unlike anything we've had or seen before and for the benefit of mankind, are like the Olympians of Greek mythology. Perhaps, if the three Giants agree on some level, if not completely, about open Augmented Reality, the doors to the future of AR for the "rest of us" will open.

And there's another reason for mentioning Greek Mythology and my hope that the myth of the Giants is not entirely re-enacted in our days. In my modern version, the three giants are allowed to come out and play nice with Cronus. If your Greek Mythology is as rusty as mine, I will share with you that during the War of the Titans, the Giants helped the Olympians overthrow the Titans, of whom Cronus was king. In modern days, the Khronos Group is a strong supporter of open AR through open, royalty-free specifications of interfaces for the hardware in most of our mobile devices.

Categories
Augmented Reality Research & Development

Physical World as an Interface

Although there are a growing number of excellent examples and even reports of positive return on investment on mobile AR, I shudder at the thought that the first applications the term "Augmented Reality" will bring to the minds of most people will be a game played with the wrapping on a candy bar. OK. I get it. The power of engagement.

AR experiences that fail to bring value to the user (beyond a quick thrill) in return for their attention are unhealthy for our image. People fail to think sufficiently, or are not paid enough to think, about the impact this technology will have and how to use it.

In my opinion, one profound impact of AR will be to turn the user's immediate environment into the interface for search and interactivity with digital information. Time for a new term: turning the physical world into the interface for the digital one is an extension of skeuomorphism.

According to the Wikipedia definition, a skeuomorph is a physical ornament or design on an object copied from a form of the object when made from another material or by other techniques. It's a principle that Apple, while under the direction of Steve Jobs, was known for. The debate over the merits of Apple's extensive use of skeuomorphism became the subject of substantial media attention in October 2012, a year after Jobs' death, largely as the result of the reported firing of Scott Forstall, described as "the most vocal and high-ranking proponent of the visual design style favored by Mr. Jobs".

There are already examples of AR permitting the physical world to become the interface for the digital one. One I'm hoping will be repeated in many public spaces is the interactive lobby. If you are not already aware of this Interactive Spaces project, developed earlier this year for an Experience Center on the Mountain View Google campus, I highly recommend getting acquainted with the goals and people behind it on this blog post.

In this example, the cameras in the ceiling detect the user's presence and moving around in the space causes objects to move, sounds to be produced and more.

Expect many more examples in 2013.

Categories
3D Information Events

IndoorGML Workshop

In-person meetings with domain experts are extremely important to my continued growth and to my contribution to the advancement of others. In Korea this week I'm enjoying a full week with OGC members and others in the Korean technology community. I could write at length about all that I learned in the first day during the public opening sessions hosted by the Korean Ministry of Land, Transport and Maritime Affairs (but my time is so heavily booked I must choose the topics on which I prepare a post carefully!).

The first IndoorGML Workshop, which I chaired today, was worth the many hours of travel. There were approximately 40 people in the workshop. Only 20% of those in the room said that they were researchers. Another 10% said that they considered indoor topics to be the focus of their work. And, only one person raised a hand when I asked if there were any users in the room. That was very interesting considering that, in my opinion, we are all users of indoor technologies every day. Perhaps my definition of a "user" was not clear. 

The goals of the workshop were clear and simple to express: we wanted to brief people about the status of the IndoorGML specification and to hear from six invited speakers about what they are doing that could benefit from IndoorGML or contribute to its greater utilization. Having clear goals doesn't necessarily make them easy to achieve, but in this case the contributions fit the bill.

Each speaker spoke excellent English (they were all based in Korea) and was well prepared. They spanned the gamut from describing a new tool to edit IndoorGML files to the requirements of maritime management services for defining the use of indoor spaces in ships carrying passengers. Between these were two mobile application projects (one for use in the Coex Center, where the 82nd OGC Technical Committee meetings are being conducted) and two projects using indoor navigation with robotics.

After each talk I thought of connections between these speakers, their projects (all but one new to me) and some of my past and current projects. I look forward to following up with each and using the workshop as a springboard to new dialog in the future.

The presentations will be available in the next few days on the IndoorGML Workshop web site so that others may also benefit, without the travel time and cost.

Categories
Augmented Reality Events

InsideAR 2012

InsideAR, held earlier this week in Munich, was an outstandingly well-balanced event. It was also a sizable “AR industry insiders” gathering produced without a professional event organizer. We were nearly 500 participants for two jam-packed days. Thanks to metaio for producing this exceptional human experience!

What made it different and sufficiently exceptional for me to be thinking about it for days, even to the point of inspiring me to dedicate this post to the event? First, the people. There were people from every continent and segment of the ecosystem. For example, I had the pleasure of being introduced to Ennovva, an experience design and AR development company from Bogotá, Colombia. There was a smattering of American companies that don’t frequently make it to European AR industry events: Second Site, Vuzix, and Autodesk. Of course, the European community of AR developers was well represented, and there were many loyal metaio customers who have been using AR with highly quantifiable results, such as Lego, Volkswagen and IKEA. Smaller companies were also well represented, and there were Asian partners and customers in attendance. There were newbies seeking an introduction to AR as well as founders of the industry, such as Daniel Wagner and Ron Azuma.

metaio's announcements were also important and impressive. Junaio is coming along nicely but so are Creator and metaio Engineer.

Representing the technologies for AR, many of metaio's large partners were there—ARM, NVIDIA, ST Ericsson among them. And, notably, the hosts even welcomed their competitors. I chatted with representatives from Qualcomm, Wikitude and MOB Labs; however, I didn’t see any Layar folks in attendance (Martin Adam, of mCRUMBS, was showing Layar-based experiences).

The presentations were (with only one exception) outstanding. Each day there were several sessions featuring metaio products. Watch the keynote here. On stage, the balance between live demonstrations and slideware was admirable, making the new product announcements compelling and strategies easy to understand. Clearly, engineering at metaio has been very busy over the past year, but so have those who operate the company’s communications systems. The company even launched a new industry magazine!

Audience attention was still high before lunch on Tuesday when I shared the community's vision for open and interoperable AR and how this group of dedicated people is working together to approach the diverse challenges. See slides here and video of the Open AR talk here. I expect to see some of the new faces who came up to me after the talk at future community meetings.

In the exhibition space, metaio and its partners showcased AR through many fantastic demonstrations, permitting visitors to touch and use AR in specific use cases and domains, such as automotive, games and packaging. The Augmented City, one of my favorite domains for AR to bring value to citizens and managers of urban settlements, was highly featured in sessions and in the demonstration area.

I thoroughly enjoyed watching, speaking with, and catching up with the whole metaio team—from the co-founders to the very newest employees (wave to Anton Fedosov, and congratulations on the smooth landing!). They all moved like a well-oiled event production machine, from the front desk to the staff meeting areas in the loft, and made us feel like part of their family. It was also an opportunity to put faces to names I recognized. Irina Gusakova, who was an invaluable resource by e-mail prior to InsideAR 2012, made me feel like we were long-lost friends.

Finally, it may seem trivial to some, but in my experience it is important to fuel the body as well as the mind. Beverages were always plentiful and the food was authentic and available when needed. A visit to Munich would not be complete without Oktoberfest, and metaio saw to it that we finished in style under a tent in the center of the city’s annual festivities. This event is definitely on my calendar for 2013!

Categories
Augmented Reality

Is it Augmented Reality?

Television. From a former life I vaguely remember this broadcast medium that was (and still is for some people) delivered on a screen as a defined sequence of segments called "shows," in an order defined by something called a "program." The content is professionally produced and sometimes approaches the real world. Then there is this genre of television called "reality TV," but that's something else.

Companies that prepare content for broadcast sometimes mix a video signal from a television camera with a digital data stream in such a way that one (the digital data) overlays the signal from the camera, synchronized in real time so well that the viewer can imagine that the line is "drawn" in perspective in the real world. The clearest case of this is the first-down line in American football. A line appears on the television over the video to show the viewer where the ball stopped on its way to the goal. Those who are in the stadium cannot see the line.

A recent article about Augmented Reality (principally about the use of AR in medical use cases), published on the National Science Foundation web site, described the experience of seeing the first-down line on television as an example of Augmented Reality. Unfortunately, the differences between composing a video in the studio and sending it out to millions of viewers over a broadcast medium, and composing an AR experience in real time on a user's device for viewing from precisely one pose, are too numerous to be overlooked.

Here are a number of ways the two differ:

#1 pose: the content captured by the television camera is destined to be broadcast to a mass audience. It may be broadcast globally or locally, but it is still a one-to-many signal. In a broadcast studio, the viewer's pose (context and position with respect to reality) is in no way used to create the experience (remember "AR Experiences"). In television there are "viewers"; in AR there are "users."

Test: If the viewer looks 180 degrees away from where the composed scene is rendered, no longer viewing the television at all, the scene (the first-down line overlaid on the video signal) is still there. If an AR user looks away from the point of interest, the augmentation no longer appears.
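The test above can be sketched in a few lines of code. This is a simplified, hypothetical 2D check (not any real AR engine's API, and the function and parameter names are my own): an augmentation is rendered only while the point of interest falls within the user's horizontal field of view, so looking away makes it disappear.

```python
def in_field_of_view(user_heading_deg, bearing_to_poi_deg, fov_deg=90.0):
    """Return True if the point of interest lies within the user's
    horizontal field of view (a simplified 2D check)."""
    # Smallest signed angular difference between the user's heading
    # and the bearing to the point of interest, normalized to [-180, 180)
    diff = (bearing_to_poi_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# Facing the point of interest: the augmentation is rendered
print(in_field_of_view(0.0, 10.0))   # True
# Looking 180 degrees away: the augmentation disappears
print(in_field_of_view(0.0, 180.0))  # False
```

A broadcast overlay, by contrast, never evaluates such a check: the first-down line is composed into the signal regardless of where any individual viewer is looking.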

#2 real time: see point #1. Broadcast looks like it's real time, but it's delayed with respect to what the viewer does. If you replay the same sequence of frames captured by the television camera, say 5 milliseconds, an hour, or a year later, the same overlay will be possible. Reproducing an AR experience, by contrast, requires that all the elements be exactly the same. By definition, every AR experience is unique, because we are unable to travel backwards in time to repeat a moment in the past.

#3 reality: What the viewer sees on the TV screen is a digital overlay on digital media. It is composed centrally by software in the studio. Is the television camera operator's point of view the "reality"? Yes, but only for the operator of the camera.

I understand that the mobile device-based AR experiences we have today suffer from the same weakness in the definition of "reality".

My conclusion is that "broadcast AR" is a misnomer. It may be helpful for introducing the concept of digital overlay, but it should not be confused with "real" AR. Over time, as more people have AR experiences of their own, we will have less need for poor analogies to define AR, and there may even come a day when we can drop the label "AR" entirely.