Categories: Augmented Reality, Business Strategy, Standards

Three Giants (and the Hobbit)

In Greek mythology, the Hekatonkheires were children of Gaia (Earth) and Uranus (Sky). They were three incredibly strong giants, each with 100 hands and 50 heads, whose ferocity surpassed that of all the Titans.

In today's information universe, at least in this version of the story, they are reincarnated and will have the option of doing good, defeating evil and helping the planet become a better, richer place for human beings. There are many beautiful analogies between Greek mythology and my current thinking, but I will limit myself to the Giants. The three can be none other than Apple, Microsoft and Google.

For years we've heard rumors and seen signs in the patent filings (first in February 2010, then here, and more recently, here and here) that at least a few people within Apple are working on displays that would let users view 3D without glasses or wear displays on their heads at eye level, otherwise known (in my lingo) as "hands-free displays." Of course, these could be useful for a variety of applications not limited to Augmented Reality, but depending on their features and on how and when content appears on them, we can assume that these devices would (will) at least be suitable for certain mobile AR applications.

A year ago, when rumors began to circulate about Google's now widely acknowledged and publicized eyewear project (Project Glass), my anxiety lessened about Apple coming out with a new product that would once again transform the form factor of mobile computing (a good thing) while further entrenching (bad news) its closed, proprietary, and very valuable and useful developer ecosystem.

At least the end users, I told myself, will have a choice: an Android alternative to iOS-based hands-free displays and, instead of one proprietary platform controlling the universe, we will have two. Google will provide Android-friendly and Android-based eyewear for content and experiences that are published in what will probably begin as a Google proprietary format, while Apple will provide different (or some similar) content and experiences published only in its proprietary format.

At least with two Giants there's no doubt there will be a war! My fear remained that one of the two would win and we would not benefit from a future of open and interoperable AR content and experiences, based on open interfaces, standards and wild flashes of innovation emerging out of nowhere but catching on (one can suppose this happens more easily in open ecosystems than in closed ones).

My hopes were boosted when, on November 13, Vuzix finally disclosed its answer to Project Glass, the M100. The announcement that the M100 won the CES 2013 Design and Engineering award (in the Wireless Handset Accessory category) got picked up by some big bloggers here, here and here, as well as hundreds of smaller readership sites. I think of Vuzix as the Hobbit in this case. Don't worry, there were few giants in Tolkien's mythology, so I'm not going to take this too far!

When, earlier this week, the news broke (see the Guardian's article and the Slate.com piece) that Microsoft has been granted a patent on its own development project (no doubt with the help and support of Nokia) resembling those of Apple and Google, I shrieked with delight!

A third Giant entering the ring has two impacts for all end users, for all the content already in existence and still to come, and for our AR Standards Community activity.

First, and most directly, it puts content publishers, from Goliaths like Condé Nast to the micro-publishers, in the position of having to support multiple display platforms in their future AR-ready content catalogs, which is (I hope) prohibitively expensive. The content publishers will be able to unite and encourage the display providers to open at least some interfaces to common code, and over time perhaps even reach full interoperability. In an ideal scenario, the owners of content around the world, beyond the three Giants themselves, will simply ignore the three competing platforms until a set of simple tags and functionality is agreed upon and implemented across them.

Second, the work our community began in the summer of 2012 on requirements for hands-free AR devices will have the benefit of more open minds: people working on their own hardware and software who want to be heard and to express their vision of an open world while the war of the Titans is raging. The parallel, I believe, is that today's innovators and entrepreneurs who want to develop totally new information experiences in the real world, unlike anything we've had or seen before and for the benefit of mankind, are like the Olympians of Greek mythology. Perhaps, by having the three Giants agree on some level, if not completely, about open Augmented Reality, the doors to the future of AR for the "rest of us" will open.

And there is another reason for mentioning Greek mythology and my hope that the myth of the Giants is not entirely re-enacted in our days. In my modern version, the three Giants are allowed to come out and play nice with Cronus. If your Greek mythology is as rusty as mine, I will share with you that during the War of the Titans, the Giants helped the Olympians overthrow the Titans, of whom Cronus was king. In the modern day, the Khronos Group is a strong supporter of open AR through open, royalty-free specifications of interfaces for the hardware in most of our mobile devices.

Categories: Events, Internet of Things, Standards

Sensors, Their Observations and Uses

There was a splash when my suitcase fell into the puddle next to the taxi outside the Exeter St. David's rail station last night. Although it was sunny under blue skies for both days in London, while I was indoors attending the Open IoT Assembly, I was expecting rain in England and came prepared, or so I thought; it was the force of gravity that I had underestimated, and it caught me (and my suitcase) by surprise.

"More rain" announced the lonely receptionist at the White Hart hotel when I inquired about today's forecast. Instead the sky could not be bluer or clearer of clouds. Is the unpredictability of the weather an omen for the day? At least I won't be traveling with wet belongings on my way to the UK Met Office for the open session of the OGC meeting and back to the rail station this afternoon. 

I'm attending the meeting for only a few hours so that I can conduct in-person meetings with the chairs and conveners of the IndoorGML Standards Working Group, the Sensors 4 IoT Standards Working Group and other luminaries in the geospatial realm. I find it fitting that the sensors I've relied on for weather have been so inaccurate today! I trust that my internal confidence about my meetings will serve me better today than it did last night.

Categories: Augmented Reality, Standards

Open and Interoperable AR

I've been involved in and observed technology standards for nearly 20 years. I've seen the boom that came about because of the W3C's work and the Web standards that were put in place early. The standards for HTTP and HTML made publishing content for a wide audience much more attractive to the owners and developers of content than having to format their content for each individual end user application.

I've also seen standards introduced in emerging industries too early. For example, the ITU H.320 standards of the late 1980s were too limiting and stifled innovation in the videoconferencing industry for the decade that followed. Even though there was an effort to correct the problem in the mid-1990s with H.323, the architectures were still too limiting, and eventually much of the world went to SIP (the IETF Session Initiation Protocol). But even SIP has had only limited impact when compared with Skype on the adoption of video calling. So this is an example where, although good standards are available and implemented by large companies, the mass market just wants things that work, the first time and every time. AR is a much larger opportunity and probably closer to the Web than to video conferencing or video calling.

With AR, there's more than just a terminal and a network entity, or two terminals talking to one another. As I wrote in my recent post about the AR Standards work, AR is starved for content, and without widespread adoption of standards, publishers are not going to bother making their content available. In addition to it being just too difficult to reach audiences on fragmented platforms, there's no clear business model. If, however, we have easy ways to publish to massive audiences, traditional business models, such as premium content subscriptions and pay-to-watch (or pay-to-experience), become viable.

I don't anticipate that mass market AR can happen without open AR content publishing and management as part of other enterprise platforms. The systems have to be open and to interoperate at many levels. That's why, in late 2009, I began working with other advocates of open AR to bring experts in different fields together. We gained momentum in 2011 when the Open Geospatial Consortium and the Khronos Group recognized our potential to help. These two standards development organizations see AR as central to what they provide: the use of AR drives the adoption of faster, high-performance processors (which members of the Khronos Group provide) and of location-based information.

There are other organizations consistently participating and making valuable contributions to each of our meetings. In terms of other SDOs, in addition to OGC and Khronos, the W3C, two subcommittees of ISO/IEC, the Open Mobile Alliance, the Web3D Consortium and the Society for Information Display report regularly about what they're doing. The commercial and research organizations that attend include, for example, Fraunhofer IGD, Layar, Wikitude, Canon, Opera Software, Sony Ericsson, ST-Ericsson and Qualcomm. We also really value the dozens of independent AR developers who come and contribute their experience as well. Mostly they're from Europe, but at the meeting in Austin we expect to have a new crop of US-based AR developers showing up.

Each meeting is different and always very valuable. I'm very much looking forward to next week!

Categories: Augmented Reality, Events, Standards

Interview with Neil Trevett

In preparation for the upcoming AR Standards Community Meeting, March 19-20 in Austin, Texas, I've conducted a few interviews with experts. See my interview with Marius Preda here. Today's special guest is Neil Trevett.

Neil Trevett is VP of Mobile Content at NVIDIA and President of the Khronos Group, where he created and chaired the OpenGL ES working group, which has defined the industry standard for 3D graphics on embedded devices. Trevett also chairs the OpenCL working group at Khronos, defining an open standard for heterogeneous computing.

Spime Wrangler: When did you begin working on standards and open specifications that are or will become relevant to Augmented Reality?

NT: It's difficult to say because so many different standards enable ubiquitous computing, and AR is used in so many different ways. We can point to graphics standards, geospatial standards, formatting, and other fundamental domains. [editor's note: Here's a page that gives an overview of existing standards used in AR.]

The lines between computer vision, 3D, graphics acceleration and use are not clearly drawn. And, depending on what type of AR you’re talking about, these may be useful, or totally irrelevant.

But, to answer your question, I’ve been pushing standards and working on the development of open APIs in this area for nearly 20 years. I first assumed a leadership role in 1997 as President of the Web3D Consortium (until 2005). In the Web3D Consortium, we worked on standards to bring real-time 3D on the Internet and many of the core enablers for 3D in AR have their roots in that work.

Spime Wrangler: You are one of the few people who has attended all previous meetings of the International AR Standards Community. Why?

NT: The AR Standards Community brings together people and domains that otherwise don't have opportunities to meet. So, getting to know the folks who are conducting research in AR, designing AR, implementing core enabling technologies, and even the artists and developers, was a first goal. I need to know those people in order to understand their requirements. Without requirements, we don't have useful standards. I've been taking what I learn during the AR Standards Community meetings and working some of that knowledge into the Khronos Group.

The second incentive for attending the meetings is to hear what the other standards development organizations are working on that is relevant to AR. Each SDO has its own focus, and we already have so much to do that we have very few opportunities to get an in-depth report on what's going on within other SDOs, to understand the stage of development and to see points for collaboration.

Finally, the AR Standards Community meetings permit the Khronos Group to share with the participants in the community what we're working on and to receive direct feedback from experts in AR. Not only are the requirements important to us, but also the level of interest a particular new activity receives. If, during the community meeting, I detect a lot of interest and value, I can be pretty sure that there will be customers for these open APIs down the road.

Spime Wrangler: Can you please describe the evolution you’ve seen in the substance of the meetings over the past 18 months?

NT: The evolution of this space has been rapid, by standards development standards! This is probably because a lot of folks have thought about the potential of AR as just another way of interfacing with the world. There have also been decades of research in this area. Proprietary silos are just not going to be able to cover all the use cases and platforms on which AR could be useful.

In Seoul, it wasn’t a blank slate. We were picking up on and continuing the work begun in prior meetings of the Korean AR Standards community that had taken place earlier in 2010. And the W3C POI Working Group had just been approved as an outcome of the W3C Workshop on AR and the Web.

Over the course of 2011 we were able to bring in more of the SDOs. For example, the OGC and the Web3D Consortium started presenting their activities during the Second Community Meeting. The OMA Mobile AR Enabler work item was presented, and the ISO SC 24 WG 9 chair, Gerry Kim, participated in the Third Meeting, held in conjunction with the Open Geospatial Consortium's meeting in Taiwan.

We’ve also established and been moving forward with several community resources. I’d say the initiation of work on an AR Reference Architecture is an important milestone.

There’s a really committed group of people who form the core, but many others are joining and observing at different levels.

Spime Wrangler: What are your goals for the meeting in Austin?

NT: During the next community meeting, the Khronos Group expects to share the progress made in the newly formed StreamInput working group. We're just beginning this work, but there are great contributions, and we know that the AR community needs these APIs.

I also want to contribute to the ongoing work on the AR Reference Architecture. This will be the first meeting in which MPEG joins us: Marius Preda will make a presentation about what MPEG has been doing, as well as about the new work it is initiating on 3D transmission standards that builds on past MPEG standards.

It’s going to be an exciting meeting and I’m looking forward to participating!

Categories: Augmented Reality, Events, Standards

Interview with Marius Preda

On March 19 and 20, 2012 the AR Standards Community will gather in Austin, Texas. In the weeks leading up to the next (the fifth) International AR Standards Community meeting, sponsored by Khronos Group and the Open Geospatial Consortium, experts are preparing their position papers and planning contributions.

I am pleased to be able to share a recent interview with one of the participants of the upcoming meeting, Marius Preda. Marius is Associate Professor at Institut TELECOM, France, and the founder and head of GRIN – Graphics and Interactive Media. He is currently the chairperson of the MPEG 3D Graphics group. He has been actively involved in MPEG since 1998, focusing especially on video and 3D graphics coding. He is the main contributor to the new animation tools dedicated to generic synthetic objects. More recently, he has led the MPEG-V and MPEG AR groups. He is also the editor of several MPEG standards.

Marius Preda's research interests include 3D graphics compression, virtual characters, rendering, interactive rich multimedia and multimedia standardization. He also leads several research projects at the institutional, national, European and international levels.

Spime Wrangler:  Where did you first learn about the work going on in the AR Standards community?

MP: In July 2011, during preparations for the 97th MPEG meeting, held in Turin, Italy, I had the pleasure of meeting Christine Perey. She came to the meeting of the MPEG-V AhG, a group that is creating, under the umbrella of MPEG, a series of standards dealing with sensors, actuators and, in general, the frontier between the physical and virtual worlds.

Spime Wrangler:  What sorts of activities are going on in MPEG (ISO/IEC JTC1 SC29 WG11) that are most relevant to AR and visual search? Is there a document or white paper you have written on this topic?

MP: Since 1998, when the first edition of MPEG-4 was published, the concept of mixed (natural and synthetic) content has been possible in an open and rich standard, relatively advanced for its time. MPEG-4 not only advanced the compression of audio and video but also introduced, for the first time, compression for graphics assets. Later on, MPEG revisited the framework for 3D graphics compression and grouped into Part 16 of MPEG-4 several tools allowing compact representation of 3D assets.

Separately, MPEG published in 2011 the first edition of the MPEG-V specification, a standard defining the representation format for sensors and actuators. Using this standard, it is possible to deal with data from the simplest sensors, such as temperature, light, orientation and position, to very complex ones, such as biosensors and motion cameras. Similarly for actuators: from the simple vibration effect embedded today in almost all mobile phones to complex motion chairs such as the ones used in 4D theatres, these can all be specified in standards-compliant libraries.
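
[editor's note: To make the idea of one representation format covering very different sensors concrete, here is a minimal sketch in Python. It is an illustration only: MPEG-V (ISO/IEC 23005) actually defines these structures as XML elements, and the names below are hypothetical rather than drawn from its schemas.]

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class SensedInfo:
        # Illustrative stand-in for a sensed-information record. The real
        # MPEG-V standard expresses this in XML; these names are invented.
        sensor_id: str       # which device produced the reading
        sensor_type: str     # e.g. "temperature", "light", "orientation"
        values: tuple        # the reading(s); shape depends on sensor_type
        unit: str            # e.g. "celsius", "lux", "degrees"
        timestamp: datetime  # when the observation was made

    # One container carries data from very different sensors:
    now = datetime.now(timezone.utc)
    readings = [
        SensedInfo("thermo-01", "temperature", (21.5,), "celsius", now),
        SensedInfo("imu-02", "orientation", (0.0, 45.0, 90.0), "degrees", now),
    ]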

Finally, several years ago, MPEG standardized MPEG-7, a standard for attaching descriptors to media content. This work is currently being extended: with a set of compact descriptors for natural objects, we are working on Visual Search. MPEG also has ongoing work on the compression of 3D video, a key technology for realistic augmentation of the captured image to be provided and rendered in real time.

Based on these specifications and the significant know-how in domains highly relevant to AR, MPEG decided in December 2011 to publish an application format for Augmented Reality, grouping together relevant standards in order to build a deterministic, solid and useful model for AR applications and services.

More information related to MPEG standards is available here.

Spime Wrangler:  Why are you going to attend the meeting in Austin? I mean, what are your motivations and what do you hope to achieve there?

MP: The objective of AR standards is laudable but, at the same time, relatively difficult to achieve. There are currently several, probably too many, standardization bodies claiming to deliver relevant standards for AR to the industry. Our duty, as standards organizations, is to provide interoperable solutions. This is not new. Organizations, including standards development bodies, always try to use mechanisms such as liaisons in order to cooperate rather than compete.

A recent, very successful example of this is the work on video coding jointly created by ISO/IEC MPEG and ITU-T VCEG and published individually under the names MPEG-4 AVC and H.264, respectively. In fact, it is exactly the same document, and a product compliant with one is implicitly compliant with the other. My motivation for participating in the Austin meeting is to verify whether such a collaborative approach is also possible in the field of AR.

Spime Wrangler: Can you please give us a sneak peek into what you are going to present and share with the community on March 19-20?

MP: I'll present two aspects of MPEG work related to AR. In the first presentation, I'll talk about MPEG-4 Part 16 and Part 25. The first proposes a set of tools for 3D graphics compression; the second, an approach for applying these tools to scene graph representations other than the one proposed by MPEG-4, e.g., COLLADA and X3D. So, as you can see, there are several AR-related activities going on in parallel.

In the second presentation, I'll talk about the MPEG Augmented Reality Application Format (ARAF) and MARBle, an MPEG browser for AR developed by the TOTEM project (currently available for use on Android phones). ARAF is an ongoing activity in MPEG, and early contact with other standards bodies may help us all work towards the same vision of providing a one-stop solution for AR applications and services.

Categories: Internet of Things, Standards

Can We Define IoT (continued)?

Ovidiu Vermesan is Chief Scientist at SINTEF Information and Communication Technology, Oslo, Norway, and co-editor (with Peter Friess, EC Coordinator, DG Information Society and Media) of a book of essays on the Internet of Things. It is a real masterpiece for those seeking a comprehensive look at all the different trends around IoT.

Vermesan recently contributed to a mailing list discussion on the topic of how we will define the IoT (see my posts on the topic from January 2012 here and from June 2011 here). This was in response to a suggestion that IoT should be limited to those applications enabled with RFID. I find the historical perspective of this debate interesting, so I obtained permission to publish the memo here (with a few edits).

"Internet of Things is much more than M2M communication and wireless sensor networks, 2G/3G/4G, RFID, etc. These are enabling technologies that will make "Internet of Things" applications possible.

So, what is the Internet of Things? Let's look at some statements made in the past 90 years:
 
1926 – Nikola Tesla, in an interview with Collier's magazine: "When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole… and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket."
 
1991 – Mark Weiser's Scientific American article on ubiquitous computing, "The Computer for the 21st Century," where he stated: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."
 
1999 – Neil Gershenfeld published his book "When Things Start to Think" and stated: "in retrospect it looks like the rapid growth of the World Wide Web may have been just the trigger charge that is now setting off the real explosion, as things start to use the Net."
 
2004 – Neil Gershenfeld, Raffi Krikorian and Danny Cohen, in the article "The Internet of Things," stated: "The principles that gave rise to the Internet are now leading to a new kind of network of everyday devices, an 'Internet-0.'" (Unfortunately, the article is not accessible without a subscription to Scientific American.)
 
2009 – Kevin Ashton, in an article in RFID Journal: "I could be wrong, but I'm fairly sure the phrase 'Internet of Things' started life as the title of a presentation I made at Procter & Gamble (P&G) in 1999."

Thank you to Vermesan and all those who foresaw and shared this trend, and to those who continue to seek a definition that will hold firmly into the future while also serving us today!

Categories: Internet of Things, Standards

Does IoT Include Virtual Things?

Marco Carugi, one of ZTE's standards "agents," is presiding over the fourth day of the Fourth Meeting of the Global Standards Initiative on the Internet of Things.

In case it has escaped your attention, here is a short description of the activity, from the IoT-GSI Web page:

"The Global Standards Initiative on Internet of Things (IoT-GSI) promotes a unified approach in ITU-T for development of technical standards (Recommendations) enabling the Internet of Things on a global scale. ITU-T Recommendations developed under the IoT-GSI by the various ITU-T Questions – in collaboration with other standards developing organizations (SDOs) – will enable worldwide service providers to offer the wide range of services expected by this technology. IoT-GSI also aims to act as an umbrella for IoT standards development worldwide."

One of the recurring topics on the agenda of past meetings and this week's meeting, and the focus of the first two hours of today, is the definition of the Internet of Things. The definition, and my opinion about the work invested in this topic, have evolved since my post about it last June. It would not be appropriate for me to share the current ITU draft definition before it is published by the IoT-GSI group, so I will limit my remarks to my views on the process and on one of the core decisions regarding the scope of the IoT.

The tenacity and dedication demonstrated by the contributors is impressive. The systematic decomposition of each and every word used in the definition, and of the possible qualifiers and interpretations of these terms, has produced dozens of internal documents and figures, tracing who contributed what. This traceability is fundamental to all standards processes. What impresses me is the way these engineers have systematically attacked so many fuzzy concepts using a language that is, for nearly all those involved, not their mother tongue (English).

One delicate point on which there has been some debate (though far less than I anticipated) is whether the IoT definition will include in its scope, in addition to physical things, something called "virtual things." These virtual things are files, such as eBooks, photographs, songs and 3D objects, that can be stored, processed and accessed.

In the past I felt that virtual things should not be in scope but, after listening to the contributions, I can see an argument for the IoT to include virtual things. Virtual things will not have attributes such as physical dimensions or communications protocols, nor will they be able to measure the physical world. And physical things will not have a digital dimension (kilobytes of storage). But both virtual and physical things have attributes such as a date and time of creation, an author or manufacturer, and an authority to which we will address requests. And, perhaps the most important attribute of all, the physical world might be modified as a result of communication with either virtual or physical things.
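
To make this shared-attribute argument concrete, here is a minimal sketch, of my own invention and not from any ITU draft, of how the common attributes and the type-specific differences might be modeled; all names are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Thing:
        # Attributes shared by physical and virtual things alike.
        thing_id: str
        created: datetime  # date and time of creation
        author: str        # author or manufacturer
        authority: str     # the authority to which requests are addressed

    @dataclass
    class PhysicalThing(Thing):
        # Physical things have dimensions and speak a communications protocol.
        dimensions_mm: tuple = (0.0, 0.0, 0.0)
        protocol: str = "unspecified"

        def measure(self) -> float:
            # Physical things can sense the physical world (stubbed here).
            raise NotImplementedError

    @dataclass
    class VirtualThing(Thing):
        # Virtual things have a digital dimension instead of a physical one,
        # and no measure() method: they cannot sense the physical world.
        size_kb: int = 0

Communication with an instance of either kind could still result in changes to the physical world, which is, as noted above, perhaps the most important shared attribute of all.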

The inclusion of virtual things is important in my mind, but it is not the most hotly debated assertion. Other topics, such as whether there are "personalities" attributable to things, and how to describe them, are even more unclear and sensitive.

Being an observer of these discussions has certainly confirmed my interest in developing, as early as possible, an Augmented Reality glossary of terms on which SDOs will be able to base their own definitions. It has also opened my eyes to the careful editing process defined by the ITU for its work. The results of this process will be published later this year and I hope, for the sake of progress towards standards development, that the definitions will stand the test of time.

Categories: Innovation, Internet of Things, Standards

ITU Initiatives Help It Remain Relevant

The International Telecommunication Union is a standards development organization based in Geneva, Switzerland, which over the past 30 years has published very important specifications for telephony and networking. Over the past decade, and especially the last five years, as the Internet expanded and overtook telephony as the principal vehicle for personal communications, the IETF and other standards bodies seemed to take the leadership role in communications standards.

Membership attrition drove the ITU to search for new agendas and redefine itself in the new world. Periodic examination of goals and missions is necessary in any organization, but particularly important for one whose members or customers must pay fees and seek justification for their expenses. I think that, for the ITU, the results of this process of soul searching are beginning to bear fruit.

I'm currently following the ITU Joint Coordination Activity on Internet of Things standards, which began in May 2011, and have attended two meetings of this group. Its survey of the IoT standards landscape will be very valuable to many groups when published. The motivations and the process are very complementary to the work the AR Standards Community is doing. I'm also highly impressed by the Internet of Things Global Standards Initiative (IoT-GSI) and seek to attend and observe its future meetings. In this group, representatives from these seven ITU Study Groups work together:

  • SG 2 – Operational aspects of service provision and telecommunications management
  • SG 3 – Tariff and accounting principles including related telecommunication economic and policy issues
  • SG 9 – Television and sound transmission and integrated broadband cable networks
  • SG 11 – Signalling requirements, protocols and test specifications
  • SG 13 – Future networks including mobile and NGN
  • SG 16 – Multimedia coding, systems and applications
  • SG 17 – Security

This cross-Study-Group approach is very effective for addressing such a fundamental "cross-domain" topic as standardization for the Internet of Things.

Recently the ITU TSAG (Telecommunication Standardization Advisory Group) made two announcements that caught my eye and demonstrate other results of their efforts to stay relevant as a standards body. The first is the formation of a new group focusing on Bridging the Gap: from Innovation to Standardization. One of the common objections to standards is that they stifle innovation, so confronting this head-on is an excellent initiative. The focus group's results will be released during an event in November 2012.

Second, the ITU TSAG announced that it is initiating another "landscape" analysis activity (parallel to the IoT JCA) on the topic of machine-to-machine communications. The charter for this new activity (pasted below from the announcement page for convenience) is currently open for comment.

"The Focus Group will initially focus on the APIs and protocols to support e-health applications and services, and develop technical reports in these areas. It is suggested that the Focus Group establish three sub-working groups:

  1. “M2M use cases and service models”,
  2. “M2M service layer requirements”, and
  3. “M2M APIs and protocols.”

Strong collaboration with stakeholders such as Continua Health Alliance and World Health Organization (WHO) is foreseen. The Focus Group concept allows for greater operational flexibility and crucially allows non-ITU members and other interested organizations to participate."

Although e-health applications are not all that interesting to me, I believe the concept of developing technical reports focusing on different areas will be very productive. And, as with the IoT-GSI, the M2M focus group will be complementary to other ITU-T Study Groups, especially Study Groups 13 and 16, and to other relevant UN agencies, SDOs, forums/consortia, regulators, policy makers, industry and academia. I'll be observing this activity when the group meets and works closely with the IoT-GSI in Geneva next month.

Categories: Augmented Reality, Events, Standards

AR Standards Community

When we place a phone call, we don't insert a prefix to call a BlackBerry phone, a different prefix to call someone who uses an iPhone, and another for Android users. A call request is placed and connected regardless of the device and software of the receiver's handset. When people publish content for the Web (aka "to be viewed using a browser"), they don't need to use a special platform for Internet Explorer, a special content management system or format for Opera Software users, another for Firefox users, and another for those who prefer Safari. And, as a result of substantial effort on the part of the mobile ecosystem, the users of mobile Web browsers can view the same content as on a stationary device, adapted for the constraints of the mobile platform.

With open standards, content publishers can reach the largest potential audiences and end users can choose from a wealth of content sources.

Augmented Reality content and experiences should be available to anyone using a device meeting minimum specifications. If we do not have standards for AR, all that can and could be added to reality will remain stuck in proprietary technology silos.

In the ideal world, where open standards triumph over closed systems, the content a publisher releases for use in an AR-enabled scenario will not need to be prepared differently for different viewing applications (software clients running on a hardware platform).

The community working towards open and interoperable AR will be meeting March 19-20 in Austin, Texas to continue the coordination activities it performs on behalf of all content publishers, AR experience developers and end users.

Whether or not you can come to meet in person with the leaders of this community, you can influence the discussion by submitting a position paper according to our guidelines.
 

Categories: Research & Development, Standards

Virtual Worlds and MPEG-V

Virtual Reality is not a domain on which I focus; however, I recognize that VR is at the far end of Milgram's continuum from Augmented Reality, so there are interesting developments in VR that can be borrowed for wider application. For example, Virtual Reality has a long history of using three-dimensionality, from which AR practitioners and designers have much to learn.

I'm particularly attentive to standards that could be shared between VR and AR. The current issue (vol. 4, number 3) of the Journal of Virtual Worlds Research is entirely dedicated to MPEG-V, the standard for virtual worlds developed in ISO/IEC JTC 1 SC 29 (ratified one year ago, in January 2011).

This journal issue is the most comprehensive resource I've found on the standard. It is written and edited by some of those leading the specification's development, including:

Jean H.A. Gelissen, Philips Research, Netherlands
Marius Preda, Institut TELECOM, France
Samuel Cruz-Lara, LORIA (UMR 7503) / University of Lorraine, France
Yesha Sivan, Metaverse Labs and the Academic College of Tel Aviv-Yaffo, Israel

I will need to digest its contents carefully. Not much more to say about it than this at the moment!

Categories: Augmented Reality, Social and Societal, Standards

Virtual Public Art Project

Some believe that experiencing art in digital forms, interacting with or set in real world settings, will be a widely adopted use case for Augmented Reality. People will be able to experience more examples of artistic expression, in more places, and to contribute by expressing themselves through their software and mobile devices. Projects exploring the interaction of digital and physical objects are quite popular at the annual SIGGRAPH event.

One of the earliest projects using the Layar AR browser for artistic expression in public (and private) spaces is the Virtual Public Art Project begun in March 2010 by Christopher Manzione, a New York City artist and sculptor. Manzione created the VPAP by publishing his creations in a dedicated layer. The site says:

VPAP is the first mobile AR outdoor art experience ever, and maximizes public reception of AR art through compatibility with both iPhone 3GS and Android phones using the free Layar application.

Artists around the world have invested in putting their digital work into the VPAP layer. Projects like this one certainly have the potential to dramatically change how people interact with one another and with art, especially if they are also able to leave their comments or opinions about the artist's work.

One of the limitations of the current VPAP, and perhaps a reason it has not received attention since the fall/winter of 2010-2011, is that it is viewable on only one browser. If there were standards for AR formatting, as there are today for formatting content viewed in a Web browser, then any viewer application capable of detecting the user's context, such as the AR browsers from Wikitude and metaio (junaio), would also provide access to the artists' work. In an ideal world, one source of content could offer all users the same or similar experiences, using their software of choice.

In the face of multiple proprietary technology silos (and client applications), some AR experience developers whose projects require wide, browser-neutral audiences offer services based on a single back end with interfaces to each of the publishing platforms. Examples include Hoppala Augmentation by Hoppala Agency, BuildAR by MOB Labs and MARways by mCRUMBS. In each case, these platforms streamline the publishing process so that the content creator has the widest possible reach.
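
The "single back end, many platform interfaces" approach is essentially an adapter pattern. Here is a minimal sketch of the idea in Python, assuming hypothetical interfaces; none of these class or method names come from Hoppala, BuildAR or MARways.

    from abc import ABC, abstractmethod

    class PlatformAdapter(ABC):
        # One adapter per proprietary AR browser platform.
        @abstractmethod
        def publish_poi(self, title: str, lat: float, lon: float) -> None: ...

    class LayarAdapter(PlatformAdapter):
        def publish_poi(self, title, lat, lon):
            # Translate the point of interest into Layar's format and push it.
            print(f"[layar] {title} @ ({lat}, {lon})")

    class JunaioAdapter(PlatformAdapter):
        def publish_poi(self, title, lat, lon):
            # Translate the same point of interest into junaio's format.
            print(f"[junaio] {title} @ ({lat}, {lon})")

    class PublishingBackEnd:
        # The content creator enters a point of interest once; the back end
        # fans it out to every registered platform adapter.
        def __init__(self, adapters):
            self.adapters = adapters

        def publish(self, title, lat, lon):
            for adapter in self.adapters:
                adapter.publish_poi(title, lat, lon)

    backend = PublishingBackEnd([LayarAdapter(), JunaioAdapter()])
    backend.publish("Sculpture No. 5", 40.7128, -74.0060)

The cost of this design is visible in the sketch: every new AR browser requires writing one more adapter.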

Will they also need to write interfaces to the next AR browsers? What will these platforms be capable of when Web browsers also support AR?

I guess these are not questions on which artists should be spending their time.

Categories: Internet of Things, Standards

Can We Define IoT?

The Internet of Things can be understood, at its simplest, as the phenomenon whereby more things or objects are connected to the Internet than people. I feel this statement is useful for a lay person but insufficient for business or technical purposes.

How do we define the Internet of Things in a way that is concise, yet clear, general, yet specific enough to be meaningful?

The question of the precise definition of the Internet of Things has commanded considerable intellectual debate in recent years. Getting the definition right is important for regulatory, legal, legislative and, let's not forget, funding purposes.

Several presentations explored this question at the European Commission-backed IoT day in Budapest in May 2011. Exchanges on a mailing list continue a debate among members of the IoT Joint Coordination Activity.

The definition proposed by Monique Morrow of Cisco suggests:

"The Internet of Things consists of networks of sensors attached to
objects and communications devices, providing data that can be analyzed
and used to initiate automated actions. The data also generates vital
intelligence for planning, management, policy and decision-making."

Olivier Dubuisson of Orange FT Group defines the Internet of Things as:

"A global ICT infrastructure linking physical objects and virtual objects
(as the informational counterparts of physical objects) through the
exploitation of sensor & actuator data capture, processing and transmission
capabilities. As such, the IoT is an overlay above the 'generic' Internet,
offering federated physical-object-related services (including, if relevant,
identification, monitoring and control of these objects) to all kinds of applications."

In my opinion, the IoT definition should not specify the purpose. It must describe only what the IoT IS, and it probably will need to be a living definition, frequently updated to reflect the evolution of the state of the art.

As with the definition of Augmented Reality, however, there should and will come a day when the definition of the Internet of Things or of AR no longer matters, because the "thing itself" is so ubiquitous and easily understood that a definition is unnecessary. Between now and that day, expect these debates to continue. At least outside the events which I am responsible for organizing!