Categories
Augmented Reality Events

AR@MWC12

I'm heading to Barcelona for Mobile World Congress, the annual gathering of the mobile industry. It's always an exciting event and I meet a lot of interesting companies: some with which I'm already acquainted, including industry leaders; some new to the segments on which I focus but well known in mobile; and others I've never heard of.

When I arrive in Barcelona on Sunday, I'm going to begin using the MWC AR Navigator, an application developed by mCRUMBS in collaboration with GSMA, the organizer of MWC, to make getting around the city and the event as easy and efficient as possible (and with the assistance of AR, of course).

On Monday Feb 27 my first priority will be the Augmented Reality Forum. This half-day conference is sponsored by Khronos Group and four Khronos member companies: Imagination Technologies, ST Ericsson, ARM and Freescale. Through the AR Forum presentations, these companies are driving mobile AR awareness and sharing how their new hardware and open APIs from Khronos will improve AR experiences.

After the presentations, I will be moderating the panel discussion with the speakers. Join us!

In the following days, my agenda includes meetings with over 50 companies developing devices, software and content for AR experiences. Many of those with whom I will meet don't have an actual booth. Finding these people among the 60,000 attendees would be impossible without appointments scheduled in advance (and the aid of the MWC AR Navigator)! If you are attending MWC and would like to set aside a quick briefing with me, please contact me as soon as possible.

If you haven't booked meetings but want to see for yourself what's new for AR in 2012, I recommend that you at least drop by these booths for demonstrations:

  • Imagination Technologies – Hall 1 D45
  • ST Ericsson partner zone – Hall 7 D45
  • ARM – Hall 1 C01
  • Freescale – AV27
  • Qualcomm – Hall 8
  • Nokia/NAVTEQ – Hall 7
  • Alcatel-Lucent – Hall 6
  • Aurasma (HP) – Hall 7
  • Texas Instruments – Hall 8 A84
  • Intel – Hall 8 B192
  • VTT – Hall 2 G12
  • mCRUMBS and metaio – Hall 2.1 C60
  • HealthAlert App – Hall 2.1 E65
  • Augmented Reality Lab – Hall 2 H47
  • Blippar – Avenue AV35
  • BRGR Media – Hall 2 F49
  • Pordiva – Hall 2 E66
  • wöwbile Mobile Marketing – Hall 7 D85

These are not the only places you will see AR. If you would like me to add others to this list, please leave a comment below.

Categories
Augmented Reality Innovation

Square Pegs and Round Holes

For several years I've attempted to bring AR (more specifically, mobile AR) to the attention of a segment of media companies: those that produce and sell print and digital content of all kinds (aka "publishers"), as a way of bringing value to both their physical and digital media assets.

My investments have included writing a white paper and a position paper. I've mused on the topic in blog posts (here and here), conducted workshops, and traveled to North America to present my recommendations at meetings where the forward-looking segment of the publishing industry gathers (e.g., Tools of Change 2010, NFAIS 2011).

I've learned a lot in the process but I do not think I've had any impact on these businesses. As far as the publishers of books, newspapers, magazines and other print and digital content (and those who manage them) are concerned, visual search is moderately interesting but mobile AR technology is a square peg. It just has not fit the geometry of their needs (a round hole).

With their words and actions, publishers have demonstrated that they are moving as quickly as they possibly can (and it may not be fast enough) towards “all digital.” Notions of extending the life expectancy of their print media assets by combining them with interactive digital annotations are misplaced. They don’t have these thoughts. I was under the illusion that there would be a fertile place, at least worthy of exploration, between page and screen, so to speak. Forget it.

After digesting this and coming back to the topic (almost a year since having last pushed it) I’ve adjusted my thinking. Publishers are owners and managers of assets that are (and increasingly will be) used interactively. The primary difference between the publisher and other businesses that have information assets is that the publisher has the responsibility to monetize the assets directly, meaning by charging for the asset, not some secondary product or service. The relative size and complexity of digital archives could also be greater in a company that I would label a “publisher,” but publishers come in all sizes so this distinction is, perhaps, not valuable.

Publishers are both feeding and reliant upon the digital media asset production and distribution ecosystem. Some parts of the ecosystem are the same companies that served the publishers when their medium was print. For example, companies like MarkLogic and dozens of others (one other example here) provide digital asset management systems. When I approached a few companies in the asset management solution segment, they made it clear that if there’s no demand for a feature, they’re not going to build it.

Distribution companies, like Barnes & Noble and Amazon, are key to the business model in that they put both the print (via shipments to customers and bookstores) and digital (via eReaders) assets in the hands of readers (the human type).

Perhaps this is where differentiation and innovation with AR will make a difference. I hope to explore if and how the eReader product segment could apply AR technology to sell more and more often.

Categories
Augmented Reality Events Standards

Interview with Marius Preda

On March 19 and 20, 2012 the AR Standards Community will gather in Austin, Texas. In the weeks leading up to the next (the fifth) International AR Standards Community meeting, sponsored by Khronos Group and the Open Geospatial Consortium, experts are preparing their position papers and planning contributions.

I am pleased to be able to share a recent interview with one of the participants of the upcoming meeting, Marius Preda. Marius is Associate Professor with the Institut TELECOM, France, and the founder and head of GRIN – Graphics and Interactive Media. He is currently the chairperson of the MPEG 3D Graphics group. He has been actively involved in MPEG since 1998, focusing especially on video and 3D graphics coding. He is the main contributor of the new animation tools dedicated to generic synthetic objects. More recently, he has led the MPEG-V and MPEG AR groups. He is also the editor of several MPEG standards.

Marius Preda’s research interests include 3D graphics compression, virtual characters, rendering, interactive rich multimedia and multimedia standardization. He also leads several research projects at the institutional, national, European and international levels.

Spime Wrangler:  Where did you first learn about the work going on in the AR Standards community?

MP: In July 2011, during the preparations of the 97th MPEG meeting, held in Turin, Italy, I had the pleasure to meet Christine Perey. She came to the meeting of MPEG-V AhG, a group that is creating, under the umbrella of MPEG, a series of standards dealing with sensors, actuators and, in general, the frontier between physical and virtual world.

Spime Wrangler:  What sorts of activities are going on in MPEG (ISO/IEC JTC1 SC29 WG11) that are most relevant to AR and visual search? Is there a document or white paper you have written on this topic?

MP: Since 1998, when the first edition of MPEG-4 was published, the concept of mixed – natural and synthetic – content was made possible in an open and rich standard, relatively advanced for that time. MPEG-4 was not only advancing the compression of audio and video, but also introducing, for the first time, the compression for graphics assets. Later on, MPEG revisited the framework for 3D graphics compression and grouped in Part 16 of MPEG-4 several tools allowing compact representation of 3D assets.

Separately, MPEG published in 2011 the first edition of MPEG-V specification, a standard defining the representation format for sensors and actuators. Using this standard, it is possible to deal with data from simplest sensors such as temperature, light, orientation, position to very complex ones such as biosensors and motion cameras. Similarly for actuators. From the simple vibration effect today embedded in almost all the mobile phones to complex motion chairs such the ones used in 4D theatres, these can all be specified in standard-compliant libraries.

Finally, several years ago, MPEG standardized MPEG-7, a method for attaching descriptors to media content. This work is currently being extended: with a set of compact descriptors for natural objects, we are working on visual search. MPEG also has ongoing work on the compression of 3D video, a key technology for realistic augmentation of the captured image to be provided and rendered in real time.

Based on these specifications and the significant know-how in domains highly relevant to AR, MPEG decided in December 2011 to publish an application format for Augmented Reality, grouping together relevant standards in order to build a deterministic, solid and useful model for AR applications and services.

More information related to MPEG standards is available here.

Spime Wrangler:  Why are you going to attend the meeting in Austin? I mean, what are your motivations and what do you hope to achieve there?

MP: The objective of AR Standards is laudable but, at the same time, relatively difficult to achieve. There are currently several, probably too many, standardization bodies that claim to deliver relevant standards for AR to the industry. Our duty, as standards organizations, is to provide interoperable solutions. This is not new. Organizations, including standards development bodies, always try to use mechanisms such as liaisons, and to cooperate rather than compete.

A recent, very successful example of this is the work on video coding jointly created by ISO/IEC MPEG and ITU-T VCEG and published individually under the names MPEG-4 AVC and H.264, respectively. In fact, it is exactly the same document, and a product compliant with one is implicitly compliant with the other. My motivation in participating in the Austin meeting is to verify whether such a collaborative approach is possible in the field of AR as well.

Spime Wrangler: Can you please give us a sneak peek into what you are going to present and share with the community on March 19-20?

MP: I’ll present two aspects of MPEG work related to AR. In the first presentation, I’ll talk about MPEG-4 Part 16 and Part 25. The first proposes a set of tools for 3D graphics compression, the second an approach for applying these tools to scene graph representations other than the one proposed by MPEG-4, e.g. COLLADA and X3D. So, as you can see, there are several AR-related activities going on in parallel.

In the second presentation I’ll talk about the MPEG Augmented Reality Application Format (ARAF), and MARBle, an MPEG browser developed by the TOTEM project for AR (currently available for use on Android phones). ARAF is an ongoing activity in MPEG, and early contact with other standards bodies may help us all to work towards the same vision of providing a one-stop solution for AR applications and services.

Categories
Internet of Things Standards

Can We Define IoT (continued)?

Ovidiu Vermesan is Chief Scientist at SINTEF Information and Communication Technology, Oslo, Norway, and co-editor (with Peter Friess, EC Coordinator, DG Information Society and Media) of a book of essays on the Internet of Things. It is a real masterpiece for those seeking a comprehensive look at all the different trends around IoT.

Vermesan recently contributed to a mailing list discussion on the topic of how we will define the IoT (see my post in January 2012 on the topic here and from June 2011 here). This was in response to a suggestion that IoT should be limited to those applications enabled with RFID. I find the historical perspective of this debate interesting so I obtained permission to publish the memo here (with a few edits).

"Internet of Things is much more than M2M communication and wireless sensor networks, 2G/3G/4G, RFID, etc. These are enabling technologies that will make "Internet of Things" applications possible.

So, what is the Internet of Things? Let's look at some statements made in the past 90 years:
 
1926 – Nikola Tesla in an interview with Colliers magazine:  "When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole… and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket."
 
1991 – Mark Weiser's Scientific American article on ubiquitous computing ‘The Computer for the 21st Century’, where he stated “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it”.
 
1999 – Neil Gershenfeld published his book "When Things Start to Think" and stated “in retrospect it looks like the rapid growth of the World Wide Web may have been just the trigger charge that is now setting off the real explosion, as things start to use the Net.”
 
2004 – Neil Gershenfeld, Raffi Krikorian and Danny Cohen in the article "The Internet of Things" stated "The principles that gave rise to the Internet are now leading to a new kind of network of everyday devices, an 'Internet-0'" (unfortunately, the article is not accessible without a subscription to Scientific American).
 
2009 – Kevin Ashton in an article in RFID Journal: "I could be wrong, but I'm fairly sure the phrase "Internet of Things" started life as the title of a presentation I made at Procter & Gamble (P&G) in 1999".

Thank you to Vermesan and all those who foresaw and shared this trend, and to those who continue to seek a definition that will hold firmly into the future while also serving us today!

Categories
Augmented Reality

Augmented Vision 2

It’s time, following my post on Rob Spence’s Augmented Vision and the recent buzz in the blogosphere on the topic of eyewear for hands-free AR (on TechCrunch Feb 6, on Wired on Feb 13, on Augmented Planet Feb 15), to return to this topic.

I could examine the current state of the art of the technology for hands-free AR (the hardware, the software and the content). But there’s too much information I could not reveal, and much more I have yet to discover.

I could speculate about whether, what and when Google will introduce its Goggles, as has been rumored for nearly 3 months. By the way, I didn’t need a report to shed light on this. In April 2011, when I visited the Google campus, one of the people with whom I met (complete with his personal display) was wearable computing guru and director of the Georgia Institute of Technology Contextual Computing Group, Thad Starner. A matter of months later, he was followed to Google by Rich deVaul, whose 2003 dissertation on The Memory Glasses project certainly qualifies him on the subject of eyewear. There could, in the near future, be some cool new products rolling out for us, “ordinary humans,” to take photos with our sunglasses and transfer them to our smartphones. There might be tools for creating a log of our lives with these, which would be very helpful. But these are not, strictly speaking, AR applications.

Instead, let me focus on who, in my opinion, is most likely to adopt the next generation of non-military see-through eyewear with AR capabilities. It will not be you or I, nor the early technology adopter next door.

It will be those for whom having certain, very specific pieces of additional information available in real time (with the ability to convey them to others) while also having use of both hands, is life saving or performance enhancing. In other words, professional applications are going to come first. In the life saving category, those who engage in the most dangerous field in the world (i.e., military action) probably already have something close to AR.

Beyond defense, let’s assume that those who respond to a location new to them for the purpose of rescuing people endangered by fire, flooding, earthquakes, and other disasters, need both of their hands as well as real time information about their surroundings. This blog post on the Tanagram web site (where the image above is from), makes a very strong case for the use of AR vision.

People who explore dark places, such as underwater crevices near a shipwreck or a mine shaft, already have cameras on their heads and suits that monitor heart rate, temperature, pressure and other ambient conditions. The next logical step is to have helpful information superimposed on the immediate surroundings. Using cameras to recognize natural features in buildings (with or without the aid of markers), and altimeters to determine the depth underground or height above ground the user has reached, floor plans and readings from local activity sensors could be very valuable for saving lives.

I hope never to have to rely on these myself, but I won’t be surprised if one day I find myself rescued from a dangerous place by a professional wearing head-mounted gear with Augmented Reality features.

Categories
2020 Business Strategy

When the Title Fits, Wear It

A GigaOM blog post that Gene Becker tweeted about today reminded me that many of those who were "traditionally employed" in past years are setting up their web sites, starting their blogs, joining the ranks and giving the "independent" way of life a try. According to MBO Partners there are over 16 million freelancers, consultants and other independently employed people in America today. By 2020, the number is expected to quadruple to approximately 65 million.

Who are these people, I ask myself? The CEO of MBO Partners (quoted in the GigaOM blog) reports that seven out of ten of those interviewed in a survey believe they are "experts in their field" and have advanced skills and education. Unfortunately, this statement is not supported with hard data or a report.

In my opinion, only clients (i.e., those who hire consultants) are in a position to decide if a consultant is an expert worthy of their fee. Are these people who MBO Partners interviewed successful? To be a successful consultant requires a lot of focus on clients while, at the same time, being vigilant, reading the tea leaves, to detect emerging trends and to explore new directions.

After 21 years working as an independent, it's a topic about which I am qualified to have opinions.

The most important qualifications of an independent in any field are to be passionate about a domain, to pay attention to detail, to be discreet about what is said (and what is omitted) and, for a consultant, to have a strong desire for the client to succeed. The last of these, to prioritize the client's success, is an essential component of a consultant's character that carries risk. Often clients can describe what they need and, together, the goal can be reached, the product shipped, the service proven valuable. But sometimes the client fails for reasons outside a consultant's control, regardless of the consultant's contribution to a company or project. Sometimes the client succeeds but doesn't recognize or feel the need to acknowledge the role of the consultant. Few of the other consultants who I've helped get started and with whom I've worked can live with these and other risks of being independent.

Having grown accustomed to managing risk (and balancing other aspects of professional independence that I won't go into here), the title of "consultant" no longer reflects how I feel, who I am. To me, the title of "Spime Wrangler" captures better the element of uncertainty with which I am comfortable and the strong desire to pursue the unknown, to tackle or wrestle with the Spimes.

How many of those who decide to hang out their shingle between now and 2020 will share the sense of adventure that makes me leap out of bed each morning? I hope that those who do, whoever they are, will assume the title of Spime Wrangler as well.

Categories
Internet of Things Research & Development

The Big Data Bandwagon

Big Data and I go way back. How can I get on the Big Data Bandwagon?

It's not a domain into which I regularly stray but nuclear physics was the focus of both my parents' careers so their use of computers comes to mind whenever the topic of Big Data comes up. I'm stepping out of my comfort zone but I am going to hypothesize that the study of physics and the Internet of Things share certain attributes and that both are sexy because they are part of Big Data. You're looking at Big Data in the illustration here.

Both physics and IoT begin with the assumption that there's more to the world than what the naked eye can see or any of our other human senses can detect. You can't see atoms, or electrons or quarks or any of those smaller particles. All you can use to know they are there are sensor measurements of their impact on other particles.

And sensors are also at the heart of the Internet of Things. Beyond human-detectable phenomena, sensors embedded where we can't see them detect attributes that are invisible, odorless, silent, or otherwise too small, too large, too fast or too far away for our "native" human senses. The sensors monitoring the properties of materials in physics (like the sensors in our environment monitoring air quality, temperature, or the number of cars passing over a pressure sensor in the roadbed) communicate their readings with time stamps, and these readings accumulate into large data sets.

You get the rest: the raw data then become the building material on which analyses can be performed. It's difficult for the common man to discern patterns from the illustration above or from millions of sensor readings from a nuclear power plant. Machine learning algorithms extract the patterns from the data for us, and we use these patterns to gain insights and make decisions.
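As a toy illustration of that pipeline (the readings and the threshold rule are my own invention, not from any real deployment), timestamped sensor data can be reduced to a pattern as simple as "flag the outliers":

```python
from statistics import mean, stdev

# Hypothetical timestamped readings: (unix_time, temperature in C)
readings = [(1, 21.0), (2, 21.2), (3, 20.9), (4, 21.1), (5, 35.0), (6, 21.0)]

def anomalies(readings, k=2.0):
    """Flag readings more than k standard deviations from the mean,
    a toy stand-in for the pattern extraction described above."""
    values = [v for _, v in readings]
    m, s = mean(values), stdev(values)
    return [(t, v) for t, v in readings if abs(v - m) > k * s]

print(anomalies(readings))  # → [(5, 35.0)]
```

Real systems replace the threshold with trained models, but the shape is the same: raw timestamped readings in, patterns out.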

So, my point is that the concept of using computers to analyze large data sets to answer all kinds of questions (the core of Big Data) has been around in the research community for decades and applies to many, if not all, fields. IBM has long been leading the charge on this. Here's an interesting project led by Jeff Jonas, Chief Scientist of IBM's Entity Analytics Group, that just celebrated its one-year anniversary. A January 2012 HorizonWatching Trend Report presentation on Big Data points to lots of resources.

What's new with Big Data in 2012 is the relative ease with which these very large data sets can be reliably collected, communicated, stored and processed, and, in some cases, visualized.

A feature article about Big Data's relevance in our lives in the New York Times frames the subject well and then explains why Big Data is trending: everyone wants to see the past and the present, and to understand the world more clearly. With our improved "visibility" we might be able to make better decisions. The "text book" example is the Oakland Athletics baseball team comeback on which the book and movie, Moneyball, are based.

With the help of coverage in books, motion pictures, major news media and tech blogs, Big Data is one of the big memes of 2012. Trends like the widespread adoption of Big Data usually lead to large financial gains.

Let's see if I can use this data to make better decisions! Maybe I should re-brand everything I do so that the relationships of my activities to Big Data are more clear to others. Big Spimes? What do you think?

Categories
Internet of Things Standards

Does IoT Include Virtual Things?

Marco Carugi, one of ZTE's standards "agents," is presiding over the fourth day of the Fourth Meeting of the Global Standards Initiative on the Internet of Things.

In case it has escaped your attention, this is a short description of the activity, from the GSI-IoT Web page:

"The Global Standards Initiative on Internet of Things (IoT-GSI) promotes a unified approach in ITU-T for development of technical standards (Recommendations) enabling the Internet of Things on a global scale. ITU-T Recommendations developed under the IoT-GSI by the various ITU-T Questions – in collaboration with other standards developing organizations (SDOs) – will enable worldwide service providers to offer the wide range of services expected by this technology. IoT-GSI also aims to act as an umbrella for IoT standards development worldwide."

One of the recurring topics on the agenda of past meetings, this week's meeting, and the focus of the first two hours of today, is the definition of the Internet of Things. The definition, and my opinion about the work invested in this topic, have evolved since my post about it last June. It would not be appropriate for me to share/post the current ITU draft definition before it is published by the IoT-GSI group, so I will limit my remarks to my views on the process and one of the core decisions regarding the scope of IoT.

The tenacity and dedication demonstrated by the contributors are impressive. The systematic decomposition of each and every word used in the definition, and of the possible qualifiers and interpretations of these terms, has produced dozens of internal documents and figures tracing who contributed what. This traceability is fundamental to all standards processes. What impresses me is the way these engineers have systematically attacked so many fuzzy concepts using a language that is, for nearly all those involved, not their mother tongue (English).

One delicate point on which there has been some debate (though far less than I anticipated) is whether the IoT definition will include in its scope, in addition to physical things, something called "virtual things." These virtual things are files, such as eBooks, photographs, songs and 3D objects, that can be stored, processed and accessed.

In the past I felt that virtual things should not be in scope but, after listening to the contributions, I can see an argument for the IoT to include them. Virtual things will not have attributes such as physical dimensions or communications protocols, nor will they be able to measure the physical world. And physical things will not have a digital dimension (kB of storage). But both virtual and physical things have attributes such as a date and time of creation, an author or manufacturer, and an authority to which we will address requests. And, perhaps most important of all, the physical world might be modified as a result of communication with either virtual or physical things.
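One way to picture this distinction (the class and attribute names below are my own illustration, not taken from the ITU draft) is as two types of thing sharing common metadata while each keeping attributes the other lacks:

```python
from dataclasses import dataclass

@dataclass
class Thing:
    # Attributes shared by virtual and physical things
    created: str      # date and time of creation
    author: str       # author or manufacturer
    authority: str    # whom we address requests about the thing to

@dataclass
class PhysicalThing(Thing):
    dimensions_mm: tuple  # physical dimensions (virtual things lack these)
    protocol: str         # communications protocol

@dataclass
class VirtualThing(Thing):
    size_kb: int          # "digital dimension" (physical things lack this)

ebook = VirtualThing("2012-02-14T09:00", "A. Author", "publisher.example", 512)
print(ebook.authority)  # → publisher.example
```

The shared base class is what makes a single IoT definition covering both categories plausible: requests can be addressed to any Thing through its authority, whatever its nature.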

The inclusion of virtual things is important in my mind, but it is not the most hotly debated assertion. Other topics, such as whether there are, and how to describe, "personalities" attributable to things, are even more unclear and sensitive.

Being an observer of these discussions certainly has confirmed my interest in developing, as early as possible, an Augmented Reality glossary of terms on which SDOs will be able to base their own definitions. It has also opened my eyes to the careful editing process defined by the ITU for its work. Results of this process will be published later this year and I hope, for the sake of progress towards standards development, that the definitions will stand the test of time.

Categories
Augmented Reality Business Strategy

Between Page, Screen, Lake and Life

In my post entitled Pop-up Poetry I wrote about the book/experience Between Page and Screen. Print, art and AR technology mix in very interesting ways, including this one, but I point out (with three brilliant examples) that this project is not the first case of a "magic book."

Like many similar works of its genre, Between Page and Screen uses FLARToolKit to project (display) images over a live video coming from the camera that is pointed at the book's pages. Other tools used in Between Page and Screen include the Robot Legs framework, Papervision for 3D effects, BetweenAS3 for animation and JibLib Flash. Any computer (whose user has first downloaded the application) with a webcam can play the book, which will be published in April. Ah ha! I thought it was available immediately, but now learn that one can only pre-order it from SiglioPress.com.

And, as Joann Pan suggests in her post about the book on Mashable, "combining the physicality of a printed book with the technology of Adobe Flash to create a virtual love story" is different. Pan interviewed the author and the writer of the AR code. She writes, "Borsuk, whose background is in book art and writing, and Bouse, developing his own startup, were mesmerized by the technology. The married duo combined their separate love of writing and technology to create this augmented reality art project that would explore the relationship between handmade books and digital spaces."

The more I’ve thought about this project and read various posts about Between Page and Screen in recent days, the more confident I am that while I might enjoy a magic book once or twice, my preferred reading experience is to hold a well-written, traditional book. I decided to come back to this topic after I read about another type of "interactive" book on TechCrunch.

The first thing that caught my eye was the title. Fallen Lake. Fallen Leaf Lake! Of course! I used to live in the Sierra Nevada mountains, where the action in this novel is set, and Fallen Leaf Lake is an exceptionally beautiful body of water. But the post by John Biggs points out that the author of Fallen Lake, Laird Harrison, is going to be posting clues and "extra features" about the characters in the book by way of a password-protected blog.

All these technology embellishments on books seem complicated. Their purpose, Biggs believes, is to differentiate the work in order to get some tech blogger to write about the book and then, maybe, sell more copies.

Finally, Biggs points out that what he really wants in a book, what we all want and what will "save publishing," is good (excellent) writing. Gimmicks like AR and blog posts might add value, but first let's make sure the content is well worth the effort.

Categories
3D Information Augmented Reality Innovation

Improving AR Experiences with Gravity

I’m passionate about the use of AR in urban environments. However, having tested some simple applications, I have been very disappointed because the sensors on the smartphone I use (Samsung Galaxy S) and the commercially available algorithms for feature detection are not well suited to showing me stable or very precise augmentations over the real world.

I want to be able to point at a building and get specific information about the people or activities (e.g., businesses) within at a room-by-room/window-and-door level of precision. Instead, I’m lucky if I see small 2D labels that jiggle around in space, and don’t stay “glued” to the surface of a structure when I move around. Let’s face it, in an urban environment, humans don’t feel comfortable when the nearby buildings (or their parts) shake and float about!

Of course, this is not the only obstacle to urban AR use, and I’m not the first to discover this challenge. It’s been clear to researchers for much longer. To overcome it, some developers have in the past used logos on buildings as markers. This certainly helped with recognizing which building I’m asking about and, based on the size of the logo, estimating my distance from it, but there’s still low precision and poor alignment with edges.

In Q4 2011, metaio began to share what its R&D team has come up with to address this, among other issues associated with blending digital information into the real world in more realistic ways. In its October 27 press release, the company described how, by combining gravity awareness with camera-based feature detection, it is able to improve the speed and performance of detecting real-world objects, especially buildings.
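The intuition can be sketched in a few lines (this is my own simplification, not metaio's implementation): once the accelerometer gives you the direction of gravity, the device's roll and pitch are fixed, and the vision pipeline only has to resolve the remaining heading (yaw), a one-dimensional search instead of a three-dimensional one:

```python
import math

def gravity_to_roll_pitch(ax, ay, az):
    """Recover roll and pitch (in degrees) from an accelerometer
    reading of a device at rest. Gravity alone cannot give yaw,
    which is what feature detection still has to resolve."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return math.degrees(roll), math.degrees(pitch)

angles = gravity_to_roll_pitch(0.0, 0.0, 9.81)  # device lying flat: roll and pitch are zero
```

Knowing two of the three orientation angles up front is what lets the detector run faster and keep augmentations from drifting off building edges.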

The applications for gravity awareness go well beyond urban AR. “In addition to enabling virtual promotions for real estate services, the gravity awareness in AR can also be used to improve the user experience in rendering virtual content that behaves like real objects; for example, virtual accessories, like a pair of earrings, will move according to how the user turns his or her head.”
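metaio has not published its algorithm, but the core idea of gravity alignment can be sketched. The minimal example below, a simplification under my own assumptions (function names are mine, not metaio’s), uses an accelerometer reading projected into the camera’s image plane to compute the roll correction that keeps an overlay’s “down” direction matched to real-world gravity, so labels stay upright as the device tilts:

```python
import math

def gravity_alignment_angle(ax, ay):
    """Roll angle (radians) to rotate an overlay so its 'down'
    direction matches gravity measured in the image plane.

    ax, ay: accelerometer components along the device's x/y axes
    (at rest, the accelerometer measures gravity)."""
    return math.atan2(ax, ay)

def rotate_point(x, y, angle):
    """Rotate a 2D overlay point by `angle` around the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

# Device held upright: gravity lies along +y, so no correction is needed.
print(round(gravity_alignment_angle(0.0, 9.81), 3))                 # 0.0
# Device rolled 90 degrees: gravity appears along +x; correct by 90.
print(round(math.degrees(gravity_alignment_angle(9.81, 0.0)), 1))   # 90.0
```

The real system fuses this inertial cue with camera-based feature detection, which is what makes building detection faster: knowing which way is down prunes the orientations the feature matcher has to consider.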

The concept of Gravity Alignment is very simple. It is described and illustrated in this video:

Earlier this week (on January 30, 2012), metaio released a new video about what it has done over the past six months to bring this technology closer to commercial availability. The video below, along with some insights about when gravity-aligned AR will be available on our devices, has been covered by Engadget and numerous general technology blogs in recent days.

I will head right over to the Khronos Group-sponsored AR Forum at Mobile World Congress later this month to see if ARM will be demonstrating this on stage and to learn more about the value they expect to add to make Gravity Aligned AR part of my next device.

Categories
2020 Internet of Things Social and Societal

My Refrigerator

It's convenient to store your white wine and perishables outside when your refrigerator is small. On our balcony, there are no predators to come and take our food, and the outside temperature remains relatively stable and cool. But Western Switzerland is experiencing a cold snap, so I brought in the items I had been keeping outside so that they wouldn't freeze. I put them in the refrigerator, but it was not easy finding room.

[side note: I'll never forget the remark made by an American I once visited shortly after she arrived in the country. She asked me, "What is it with these Barbie-size appliances?" In many parts of the world, Barbie-size appliances are all you need when you can easily and frequently stop at retail stores. We don't drive a pick-em-up-truck to the grocery store. We walk there, buy what we need and carry it home.]

When I need to stock up, I ask everyone in my family to pick up a few items, or I use the online shopping service LeShop. It's time-consuming to go through the catalog, but it is convenient to have the products delivered to your door for approximately 4% of the purchase price (LeShop charges 7.90 CHF to deliver a 200 CHF order).

Taking a break while perusing LeShop's catalog, I read this article in the New York Times about smart appliances of the future. "Is this the next step in the evolution of my Barbie-size appliance?", I asked myself.

I would find it terrifically useful if my next refrigerator not only kept an inventory of its (small but tightly packed cold box) contents, but also connected tightly (or even loosely) with my LeShop order.

What if I could select a recipe the night before, ask my refrigerator (including my balcony shelves) and pantry if there was any ingredient missing, and then have whatever I was missing brought to me? Almost as easy as going to a restaurant and ordering from a menu!

A fridge that synchronizes with my store would be very useful to me, but maybe not to everyone. Society may not want this time-saving feature; some people like to shop for food. But consider: before there were roads leading to every door, people questioned the benefit of the automobile. Until everyone had one in their home, office and pocket (or pocketbook), people questioned the utility of the telephone. Why have a camera in a telephone when you have both separately? Many other innovations have become essential components of daily life.

In a recent post, RCR Wireless writer Marshall Kirkpatrick took this whole question of machines talking to one another further, identifying a topic that resembles my posts about new-technology adoption and the use of AR technology among kids and teens. Kirkpatrick points out how quickly technology has evolved since our parents and grandparents were born (television, the Internet, etc.) and asks:

How do we talk to children about such a radically new relationship with technology that will characterize the world they’ll work and play in as adults? Machine-to-Machine connectivity is not as easy to grasp as the prospect of people communicating with new devices.

And he brought in an illustration from an article co-authored by Dominique Guinard, one of the young Swiss IoT entrepreneurs. Please click on the illustration to enlarge! Under the illustration are Kirkpatrick's translations to English of all the things these connected devices are saying to the teen pictured on the right.


Freezer: I was thinking about defrosting today.

Clock: Aren’t you supposed to have left the house already by this time?

Faucet: I dripped all night! You should call the plumber.

Toaster: Don't give me too big a slice of toast this time, eh?

Cooking utensils: I remind you that you have not eaten any greens for three days.

Washing machine: And my clothes? Who's going to hang them out to dry?

Categories
Augmented Reality Social and Societal

AR-4-Teens

Changing human behavior is difficult. Go ahead, try it! Exercise more! Eat tomatoes for breakfast!

Changing behavior with new technology is right up there amongst the world's greatest challenges, after world hunger and a few other issues. In my post on AR-4-Kids, I concluded that if children were introduced to AR during infancy, adopting it would never be in question. Certainly using it in daily life would seem natural. Do companies providing AR today really need to wait for the crop of children born in 2010 or later (introduced to tablets with Ernie and Bert speaking to one another) to play with these technologies and keep them as part of their behaviors? 

Suppose you had learned as an infant to adopt new technology? "Millennials" are among those who, from their earliest memories, have had mobile, Internet-connected technology in their hands and pockets and, in some cases, intuitively figure out the role it plays in their lives.

The results of a study covered in several mobile marketing portals (but surprisingly difficult to find on the Ypulse site itself) are not encouraging. The high school and college-age participants of Ypulse's survey are “baffled” by augmented reality technology, particularly when it's infused in mobile apps on leading smartphones. Neither of the news bulletins I've read about this study (here and here) describes the Ypulse methodology or sample size. However, below are the findings at a glance from the Mobile Marketing Watch portal:

  • Only 11% of high schoolers and collegians have ever used an augmented reality app.
  • Retailers like Macy’s and brands like Starbucks have come out with mobile AR apps. They’re fun and clever, but as with QR codes, Millennials don’t always get the point.

Among students who have used AR apps:

  • 34% think they’re easy and useful;
  • 26% think they’re easy but not useful;
  • 18% think they’re useful but not easy; and
  • 9% think AR apps are neither useful nor easy to use.

We have a long way to go before the technology put in the hands of consumers, even today's magnificent Millennials, meets true needs, adds value and gives satisfaction. Heck, we're not even near a passing grade!

Those who design and produce AR experiences must reduce their current reliance on advertising agencies and gimmicks, or at least reduce the emphasis on a "wow" factor that serves no purpose except to engage the potential customer with a brand or logo.

Utility, especially utility in the here and now, is more important than anything else to change behavior and increase the adoption of AR. 

How useful is your AR experience?