Categories
Internet of Things, Social and Societal

Where are the users?

Information technology is supposed to benefit people. Not all people, necessarily. But some people, some of the time. Very frequently, engineers are looking to solve a problem, but there are few or no end users involved in the design of solutions. There are many explanations for why end users are rare or absent during the design of systems, including lack of knowledge of the options, political agendas and financial considerations.

Lack of flesh and blood end users is not always an impediment to progress or impact. In the first meeting of the ITU Machine-2-Machine Focus Group (notice that there are no humans in the chain), which I attended earlier this week, end users' needs were considered as part of use case descriptions and the final user was represented by actors in those use cases. This approach works well for many situations. But not for all.

Usman Haque's guest post on Wired's new (since mid-January 2012) Ideas Bank blog makes the case that citizen participation is not optional when technology providers work on intelligent city information architecture. In fact, Usman argues that citizens, not corporate giants like IBM, Cisco or General Electric, should drive and be at the "center" of the activity.

He writes, "We, citizens, create and recreate our cities with every step we take, every conversation we have, every nod to a neighbour, every space we inhabit, every structure we erect, every transaction we make. A smart city should help us increase these serendipitous connections. It should actively and consciously enable us to contribute to data-making (rather than being mere consumers of it), and encourage us to make far better use of data that's already around us.

"The "smartness" of smart cities will not be driven by orders coming from the unseen central government computers of science fiction, dictating the population's actions from afar. Rather, smart cities will be smart because their citizens have found new ways to craft, interlink and make sense of their own data."

As I mentioned above, there are frequently excellent reasons for users or citizens to be represented by their proxies. In fact, isn't that what a representative democracy does? It elects people to represent citizens in decision-making processes, just like actors in use cases represent the end users of technology systems.

That said, I agree with Usman on two points. First, it's not easy to bring about a major transformation from brick-and-mortar cities to smart cities, and people have to drive, or at least participate in, the innovation or else the outcomes could be rejected by those they are intended to serve. Second, the infrastructure for smart cities may serve citizens, but it requires investments at such enormous scales that only cities (or national governments in some cases) can fund it, and only very large companies, like Cisco and IBM, can realistically be expected to build it.

Perhaps the absence of users in the case of city development, or at least in the planning of city change, is an issue that could be addressed by increasing the use of AR-assisted information delivery and retrieval programs.

Here's the scenario: Joe Citizen is sent a device with instructions (a tutorial) and asked to use it to review a proposed project and give his feedback. He takes the device out into the neighborhood, turns around, explores a proposed project from different angles and answers questions on the screen. He goes to the nearest city office and turns in the system complete with his ideas and feedback. If he doesn't return the system or perform the task on time, an invoice is sent for payment (similar to a fine if a citizen doesn't show up for jury duty).

For very modest investments, the citizens of a city could take an incremental step towards a smart city just by "seeing" the information that their city already has and can make available using Open Data systems. I haven't forgotten my plan to repeat the Barcelona City Walkshop this year.

Categories
Augmented Reality, News

Skin in the Game

Those who read this blog are more than likely among the group that shares some level of conviction (maybe not as strongly as I do!) that Augmented Reality is going to be important. We are a small minority, but April 2012 will represent an important historical milestone. Perhaps it is as important as the June 2009 release of Layar's AR browser in terms of demonstrating that AR has traction, has graduated from the laboratory, captured imaginations and will have huge commercial impacts.

In the past three weeks many more people have finally begun to see the signs that indicate how important Augmented Reality will be in the future. The curtain that formerly prevented people from knowing how much investment was truly happening is dropping. Major companies are now following Qualcomm and putting resources into AR.

They are putting "skin in the game" and that's what it takes to convince many (including mobile pundits such as Tomi Ahonen and Ajit Jaokar) that AR has passed a threshold. In case you didn't catch it, Tomi posted on his blog, Communities Dominate Brands, on April 11, 2012 (probably the shortest post he's ever written!) that he has seen the light and can now believe that AR is the planet's 8th mass medium.

Twenty-eleven was a slow year for AR revenue growth, but those who were paying attention could see small signs of the growing capital influx. Intel had already demonstrated its interest in AR by investing in Layar in late 2010 and in Olaworks in early 2007, and it expanded its investments (e.g., in Total Immersion). Texas Instruments, Imagination Technologies, ARM, ST Ericsson and Freescale all revealed that they have established programs, on their own or in partnership with AR companies, to accelerate AR on mobile platforms.

But, with only a few exceptions, these announcements by semiconductor companies were "me too," forced by the apparent (media) successes of Qualcomm's Vuforia and Hewlett Packard's Aurasma. These last two companies have heavily contributed to the media's awareness of mobile AR, but, sadly, also contributed to the perceived image that AR is a gimmick.

We can now pinpoint the catalytic event after which AR is being taken more seriously: the 9:00 AM PDT April 4, 2012 announcement by Google that confirmed prolific rumors that it is indeed working on see-through head-mounted displays (posted on TechCrunch here). Many jumped to the conclusion that these are AR glasses, although the applications are not, strictly speaking, limited to AR. A blog called Glasses from Google calls them "a new paradigm in computing."

While the figures have not been (and never will be) disclosed by Google, I estimate that the company has already invested well over $4M, approximately ten percent of the entire AR industry revenues in 2011, to get to its current prototype (even one that reboots regularly). And (most likely) without reaching far beyond its internal intellectual property and a few consultants. Note that, in light of the Project Glass announcement, the Google acquisition of Motorola Mobility for a total of about $12.5 billion is very strategic. It surpasses the October 2011 $10 billion Hewlett Packard acquisition of Autonomy by only a few billion dollars. Very big Skin in the Game.

While the global media has spent the past 15 days steadily reporting on and speculating about Project Glass, and the video has broken social media records, this is not the only example of new investments being made in mobile AR in April 2012. Tiny by financial standards, but significant developments nevertheless, are the Olaworks acquisition and today's announcement by Total Immersion that Peter Boutros, a former Walt Disney VP, will be its new president. And Google isn't the only company that's working on eyewear for information. For example, Oakley's April 17 announcement that it has been working on an eyewear project shouldn't have taken anyone by surprise but managed to make the headlines.

What and who is next?!

And how much skin will the next announcement be worth?

Without strong business cases this second question will be the most difficult to answer and, for this reason, it is a topic to which I have recently dedicated another post.

Categories
Augmented Reality, News

Intel’s First Full Acquisition of Korean Firm, Olaworks

The Korea Herald is not where I normally get my news. Nor do I regularly visit The Register (whose tag line is "Biting the Hand that Feeds IT"). But today I visited both in order to learn more about Intel's $30.7M acquisition of Olaworks.

In case you are not familiar with it, Olaworks was one of the early companies to dedicate itself first to computer vision (primarily face recognition) and then to apply its intellectual property to solve Augmented Reality challenges. The founder of Olaworks, Dr. Ryu Jung-hee, has been a long-standing friend and colleague and one of the most outgoing Koreans I've met. Ryu has attended at least four out of the past five AR Standards Community meetings and miraculously shows up at other events (e.g., he accepted my invitation to come to the first Mobile Monday Beijing meeting and showcase on the topic of mobile AR, and presented about Olaworks during the first AR in China meeting, one year ago).

Not only am I pleased for Ryu and the 60 employees who work for Olaworks, I'm also impressed that an analyst concluded that one reason for the acquisition might be Olaworks' facial recognition technologies. At present LG Electronics, Pantech, and HTC make use of Olaworks' face recognition technology in their phones. Gartner analyst Ken Dulaney told The Reg that Intel's decision to acquire was probably informed by the growing popularity of face recognition software in the consumer space. In fact, Texas Instruments recently shared with me that they are very proud of the facial recognition performance they have on the OMAP. Face recognition could be used for a lot of different applications (not just AR) when it is embedded into the SoC, as an unnamed source suggested might be Intel's intention, since Olaworks seems to be heading for integration with another Intel acquisition, Silicon Hive.

Another analyst speculating on the acquisition, Bryan Ma of IDC, sees the move as one of many steps Intel is taking to "prove it’s better than market leader ARM in the mobile space. It has been trying to position Medfield as a better performance processor using the same power consumption as ARM,” he told The Reg. “In the spirit of this it would make sense for Intel to move for technology and apps which can harness that horsepower to differentiate it from ARM.”

I'm not familiar with the Korean investment landscape but it may be important that the Private Equity Korea article on the acquisition makes a point about Intel's acquisition of Olaworks being the first full Korean acquisition the chip giant has made. It seems that we rarely hear about Korean startups in the West and I suspect that one reason is that the most common exit strategy of a young Korean company is acquisition by one of the global handset manufacturers (LG Electronics, HTC, or Samsung), or one of the large network operators. It's perfectly logical, not only from a cultural point of view but also because the Korean mobile market is large and has a long history of having its own national telecommunications standards.

After NTT-DoCoMo's launch of its 3G service in October 2001, the second 3G network to go commercially live was SK Telecom in South Korea on the CDMA2000 1xEV-DO technology in January 2002 (10 years ago). By May 2002 the second South Korean 3G network was launched by KTF on EV-DO and thus the Koreans were the first to see competition among 3G operators.

I hope that the Olaworks exit signals the opening of Korean technology silos and an opportunity for other regions of the world to benefit from the advances the Koreans have managed to make in their controlled 3G network environment.

Categories
Business Strategy, True Stories

Searching for a Viable Business Model

Customers value what they pay for and (usually) pay for what they value. If customers do not pay for value, is the business model at fault?

This basic question is at the root of a current polemic facing Facebook. Companies in other markets as well, for example AR, must answer the question or die. I feel that the business model is not always the root of the problem. The take-home message of this post is that just because a business model doesn't work for one company the first time it's tried doesn't mean the same or a similar system will fail for another company in a different geographic context, or at another point in time, after users have been through an educational process.

Three years ago, around 2 PM on February 11, 2009, I sat in the audience of a session on social networking at the Mobile World Congress. I had already been focusing on mobile social networking for several years (since mid-2006) and remember Facebook mobile's "birth" as a mobile Web site and an iPhone application that made it easier to upload photos and notes, to exchange messages on the site, and to look up phone numbers.

Facebook mobile was a year "old" and I had been following along since its birth in 2008. I wasn't alone. The rest of the world read about Facebook's rapid mobile growth in a Mashable blog post, a GigaOM blog post and an article on BusinessWeek.com.

At that time, Facebook operated several mobile services worldwide, including the mobile Web site and iPhone application mentioned above.

The number of users and the options for accessing the social network were quite similar, in size and approach, to where the mobile AR industry is today.

In the nine months from January 2009, when there were a mere 20M mobile Facebook users, to September 2009, when the network had reached 65M mobile users, Facebook implemented what it called the "Facebook Credits" platform. It is now available using 80 payment methods in over 50 countries.

Facebook was alert to the fact that it was not monetizing user traffic and open to experimenting with business models. The PaymentsViews post on August 25, 2009 (which I've preserved below in case of catastrophic meltdown), shows just how hard it was to use on mobile and how creatively Facebook pushed its Credits program.

For a variety of reasons, Facebook failed in its first attempts to monetize the mobile platform. First, there was and there remains friction in the mobile payments system. We've learned since the introduction of smartphone applications that even if it is phenomenally easier on a smartphone than on a feature phone, users don't want to take time to click through multiple screens to authorize a payment. There are dozens of companies that are working to make payments easier on mobile.

Then there was just a lack of creativity in the goods that were offered. In Japan, precisely the same business model worked relatively well and mobile social networks flourished where Facebook could not break in, because the Japanese youth culture was more mobile savvy and the digital goods were far more innovative, dynamic and valuable to the customers. Other reasons for abandoning the Facebook Credits system include the fact that the desktop business model is so lucrative.

In February 2012, a piece in the New York Times reminded us that although Facebook now has more than half of its 845 million members logging in daily via a mobile device, it is still not monetizing its mobile assets.

If a company with an estimated pre-IPO valuation of $104 billion is unable to figure out a strong business model for mobile social media, where are AR companies going to look for their cash cow?

I predict that the answer will include something very close to Facebook's original mobile Facebook Credits concept.

The reasons I feel strongly about the future of a mobile commerce-for-information model are that the presentation of the digital asset will be more refined and the timing for the business model will be completely different. When companies offer their information assets in a contextually-sensitive, AR-assisted package for a fixed increment of time and in a limited location, users will know what they're getting and will purchase only what they need.
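
To make the idea concrete, here is a minimal sketch of the kind of check such a service might perform; every name and parameter below is hypothetical, chosen only to illustrate a time- and location-bounded entitlement:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class InfoPurchase:
    """A hypothetical time- and location-bounded content purchase."""
    content_id: str
    valid_from: datetime
    valid_until: datetime
    center_lat: float   # center of the area where the content may be used
    center_lon: float
    radius_m: float     # allowed radius around that center, in meters

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def entitled(p: InfoPurchase, now: datetime, lat: float, lon: float) -> bool:
    """True only while the user is inside both the purchased time window and area."""
    in_time = p.valid_from <= now <= p.valid_until
    in_area = distance_m(lat, lon, p.center_lat, p.center_lon) <= p.radius_m
    return in_time and in_area
```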

In addition, users will have been through several generations of commercial "education" on mobile platforms. In the months and years prior to digital assets (information, games, content) being sold in small increments, users will have learned that they can purchase their transportation fares with their devices, their movie tickets and even their beverages in a nightclub.

While it will certainly not be the only business model on which the mobile AR services and content providers will need to rely, the tight relationship between the user's willingness to pay for an experience and the value provided will bring about repeat usage and, eventually, widespread adoption.

———————————————————-

In case of meltdown of this company's site, I have taken the liberty of putting the full text of PaymentsViews post below:

Purchasing Facebook Credits with Zong Mobile Payments

by Erin McCune on August 25, 2009

At Glenbrook we believe that social eCommerce and virtual currencies are the new frontier of payments. Person-to-person transfers, charity donations, and micropayments for virtual goods (e.g. games, music, e-books, etc.) are exploding within social networks and as the 800-pound-gorilla in the social networking space, all eyes are on Facebook. Estimates vary, but $300-500 million in transactions may happen within Facebook in 2009 (note 1), although thus far precious few of those transactions are funded by a native Facebook payment mechanism.

A couple days ago I decided to send my colleague Bryan a birthday gift on Facebook and was startled to discover that Facebook now has an option to buy Facebook Credits, Facebook’s fledgling virtual currency, via mobile phone using Zong (more from Payments Views on Zong here). Developers on Facebook have accepted mobile payments for some time now, from Zong as well as other mobile payment providers, but the Facebook Gift Shop and Facebook Credits are Facebook services, not a developer product. And up until now (note 2) Facebook has only accepted credit card payments.

Being the payment geek that I am, I opted for the mobile phone payment option and took screen prints of the process flow. And then I wanted to compare the check out process via phone to the credit card check out process, so I bought Bryan a second gift (lucky Bryan) and took more screen prints. Continue reading to see a comparison of the check out process for the two payment methods.

But first, a little background…

The (Continuing) Evolution of Facebook Payments

  • There has been a long standing Facebook Gift Shop where users can purchase virtual gifts for one another for $1 each. (Some sponsored gifts are free.) Users purchase “gifts” with a credit card: MasterCard, Visa, AmEx.
  • December 2007:  Rumored that beta test of payments system for applications was imminent. Developers were instructed to sign up to participate (and had to sign an NDA).
  • November 2008: Converted gift shop dollars to “credits.” Each $1 buys 100 credits, so gifts that used to cost $1 are now priced at 100 credits. Still pay for Facebook Credits with a credit card.
  • March 2009: Facebook claims to be “looking at” a virtual currency system.
  • April 2009: Facebook introduces a limited pilot program whereby users can give credit to one another. If one user “likes” content that one of their friends has posted, the user can give them a virtual tip, using Facebook Credits. The only thing you can do with the credits is buy Gifts or give them to your other friends.
  • May 2009: Facebook announces “Pay With Facebook” a new feature that will enable users to make purchases from Facebook application developers. Funding is via Facebook Credits, which can be purchased only via credit card.
  • June 2009: Facebook began testing payment for virtual goods within Facebook using Pay With Facebook and Facebook Credits, starting with the GroupCard, Birthday Calendar, and MouseHunt applications.
  • August 2009: Facebook announces that the Gift Store is conducting an “alpha test” of non-Facebook gifts in the Facebook Gift Shop, including some physical goods (e.g. flowers, candy).
  • August 2009: It is now possible to purchase Facebook Credits with your mobile phone, via Zong.

Purchasing Facebook Credits via Mobile Phone


When I clicked on Bryan’s Facebook profile I was reminded to wish him Happy Birthday, and optionally, buy him a “gift”

[Screenshot: Picture 1]

Up until now, Facebook has only accepted payment via Credit Card for Gift Credits. But now it is possible to pay with your mobile phone. Note that the “pay with mobile” option is listed first.

[Screenshot: Picture 2]

I could select whether to purchase 15 ($2.99), 25 ($6.99), or 50 ($9.99) Facebook Credits and was then prompted to enter my mobile phone number.

[Screenshot: FB Payments Picture 3]

Meanwhile, I received an SMS text message from Zong providing a PIN number, confirming a payment of $2.99 to Facebook, and instructions on how to stop the payment or get help:

[Screenshot: Zong Text Msg 1]

I entered the PIN number provided, waited a few moments, and then got a confirmation screen.

[Screenshot: FB Payments Picture 4]

Finally, I received two confirmation SMS text messages from Zong (not from Facebook):

[Screenshot: Zong Text Msg 2]

[Screenshot: Zong Text Msg 3]

Purchasing Facebook Credits with a Credit Card

For Bryan’s second gift (a virtual beer, I am sure he would have preferred a real one!) I opted to pay with a credit card. Note the difference in price per Facebook Credit (more on that in a minute).

[Screenshot: FB CC Payment Picture 1]

Next I entered my card details and was immediately presented with a confirmation screen. The process is definitely quicker (and cheaper) if you purchase via credit card.

[Screenshot: FB CC Payment Picture 2]

Finally, when I purchased via credit card I received a confirmation email directly from Facebook (whereas with the mobile phone payment I received the confirmation via SMS text from Zong, rather than Facebook).

[Screenshot: FB CC Payment Picture 3]

Pricing Varies by Payment Method

When I paid with my mobile phone the price per Facebook Credit was twenty cents. I only paid ten cents per Facebook Credit when I made my purchase with a credit card. Zong charges the merchant (in this case Facebook) a higher processing fee than the credit card companies do. This is not uncommon. Payments via mobile phone are typically for virtual goods (ring tones, avatar super powers, games, etc.) with relatively low cost of goods, thus merchants are less price sensitive. Once they’ve done the coding, every incremental sale above and beyond development costs is profit. Mobile payments for virtual goods cost between 20-50% of the transaction amount, with most of the fee being passed on to mobile phone carriers. Given this pricing structure, it is not surprising that Facebook charges more per Facebook Credit when you buy with your mobile phone. It is unclear how much of the net fee application developers receive and how much Facebook retains, and if the split varies depending on payment method.

Other Forms of Payment Within Facebook

Keep in mind that Facebook Credits are just one way of purchasing goods within Facebook. Today, Facebook application developers monetize their games and other applications by accepting payment directly using PayPal, Google, Amazon FPS, or SocialGold. Or developers may opt to receive direct payment via mobile phone via Zong, Boku, or another mobile payment provider. Virtual currencies that can be used across a variety of social networks and game sites include Spare Change and SocialGold. It is also possible to earn virtual currency credit by taking surveys and participating in trials offered via Super Rewards, OfferPal Media, Peanut Labs and many others. And finally, game developers in particular, often accept payment via a prepaid card sold in retail establishments, such as the Ultimate Game Card. The social and gaming web is exploding with virtual currency offerings, yet thus far no one model or payment brand dominates.

We’ll continue to monitor Facebook’s payment evolution and track the development of social eCommerce here at Payments Views. In the meantime, you might enjoy these related Payments Views posts:

Notes

  1. Facebook transaction value estimates here.
  2. Caveat: I am not quite sure when Facebook started accepting Zong – sometime after June, as that was when I last checked in. I suspect, but haven’t confirmed, that the change was made in conjunction with last week’s announcement that the Gift Store is conducting an “alpha test” of non-Facebook gifts in the Facebook Gift Shop, including some physical goods (e.g. flowers, candy). If anyone out there knows for sure, please let us know in the comments.

end of full text here.

Categories
Augmented Reality, Research & Development

Project Glass: The Tortoise and The Hare

Remember Aesop’s fable about the Tortoise and the Hare? 11,002,798 viewers as of 9 AM Central European Time, April 10, 2012. Since April 4, 2012, Noon Pacific Time, in five and a half days, over the 2012 Easter holiday weekend, the YouTube “vision video” of Google’s Project Glass has probably set a benchmark for how quickly a short, exciting video depicting a cool idea can spread through modern, Internet-connected society. [update April 12, 2012: here’s an analysis of what the New Media Index found in the social media “storm” around Project Glass.]

The popularity of the video (and the Project Glass Google+ page with 187,000 followers) certainly demonstrates that beyond a few hundred thousand digerati who follow technology trends, there’s a keen interest in alternative ways of displaying digital information. Who are these 11M viewers? Does YouTube have a way to display the geo-location of where the hits originate?

Although the concepts shown in the video aren’t entirely new, the digerati are responding and engaging passionately with the concept of hands-free, wearable computing displays. I’ve seen (visited) no fewer than 50 blog posts on the subject of Project Glass. Most are simply reporting on the appearance of the concept video and asking if it could be possible. There are those who have invested a little more thought.

Blair MacIntyre was one of the first to jump in with his critical assessment less than a day after the announcement. He fears that the race to new computing experiences will be compromised by Google going too quickly, when slow, methodical work would lead to a more certain outcome. Based on the research in Blair’s lab and those of colleagues around the world, Blair knows that the state of the art in many of the core technologies necessary for the Project Glass vision to be real is too primitive to deliver (reliably in the next year) the concepts shown in the video. He fears that by setting the bar as high as the Project Glass video has, expectations will be set too high and failure to deliver will create a generation of skeptics. The “finish line” for all those who envisage a day when information is contextual and delivered in a more intuitive manner will move further out.

In a similar “not too fast” vein, my favorite post (so far; we are still less than a week into this) is Gene Becker’s April 6 post (48 hours after the announcement) on his The Connected World blog. Gene shares my fascination with the possibility that head-mounted sensors like those proposed for Project Glass would lead to continuous life capture. Continuous life capture has been shown for years (Gordon Bell has spent his entire career exploring it and wrote Total Recall; other technologies are actively being developed in projects such as SenseCam), but we’ve not had all the right components in the right place at the right price. Gene focuses on the potential for participatory media applications. I prefer to focus on the Anticipatory services that could be furnished to users of such devices.

It’s not explicitly mentioned, but Gene points out something I’ve also raised, and it is my contribution to the discussion about Project Glass with this post: think about the user inputs needed to control the system. More than my fingertips, more of the human body (e.g., voice, gesture) will be necessary to control a hands-free information capture, display and control system. Gene writes “Glass will need an interaction language. What are the hands-free equivalents of select, click, scroll, drag, pinch, swipe, copy/paste, show/hide and quit? How does the system differentiate between an interface command and a nod, a word, a glance meant for a friend?”

All movement away from keyboards and mice as input and user interface devices will need a new interaction language.

The success of personal computing in some ways leveraged a century of experience with the typewriter keyboard, to which a mouse and graphical (2D) user interface were late (recent) but fundamental additions. The success of using sensors on the body and in the real world, and of using objects and places as interaction (and display) surfaces for data, will rely on our intelligent use of more of our own senses, on many more metaphors between the physical and digital worlds, and on highly flexible, multi-modal and open platforms.
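
To make the point about an interaction language more tangible, here is a purely speculative sketch (none of these mappings come from Google or anyone else) of what a first, minimal hands-free vocabulary might look like, expressed as a simple lookup from multi-modal events to the commands we take for granted with a keyboard and mouse:

```python
# Speculative only: a toy mapping from multi-modal input events to commands.
HANDS_FREE_COMMANDS = {
    ("voice", "select this"):         "select",
    ("gaze", "dwell_800ms"):          "select",      # look at an item long enough to pick it
    ("gesture", "head_nod"):          "confirm",
    ("gesture", "head_shake"):        "cancel",
    ("voice", "scroll down"):         "scroll_down",
    ("gesture", "swipe_temple_fwd"):  "next_item",   # e.g., a touch strip on the frame
    ("voice", "share this"):          "share",
}

def interpret(modality: str, event: str) -> str:
    """Translate a raw input event into an interface command, or ignore it.

    The hard problem Gene raises is hidden in this one lookup: deciding whether
    a nod or a word was meant for the interface at all, rather than for a
    friend, requires context that a simple table cannot capture.
    """
    return HANDS_FREE_COMMANDS.get((modality, event), "no_op")

print(interpret("gesture", "head_nod"))   # -> "confirm"
```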

Is it appropriate for Google to define its own hands-free information interaction language? I understand that the Kinect camera point of view is 180 degrees different from that of a head-mounted device, and that it is a depth camera, not the simple, small camera on the Project Glass device, but what can we reuse and learn from Kinect? Who else should participate? How many failures before we get this one right? How can a community of experts and users be involved in innovating around and contributing to this important element of our future information and communication platforms?

I’m not suggesting that 2012 is the best or the right time to codify and put standards around voice and/or gesture interfaces, but rather recommending that, when Project Glass comes out with a first product, it include an open interface permitting developers to explore different strategies for controlling information. Google should offer open APIs for interactions, at least to research labs and qualified developers, in the same manner that Microsoft has with Kinect, as soon as possible.

If Google is the hasty hare, as Blair suggests, is Microsoft the “tortoise” in the journey to provide handsfree interaction? What is Apple working on and will it behave like the tortoise?

Regardless of the order of entry of the big technology players, there will be many others who notice the attention Project Glass has received. The dialog on a myriad of open issues surrounding the new information delivery paradigm is very valuable. I hope Project Glass doesn’t release too soon, but with virtually all the posts I’ve read closing by asking when the blogger can get their hands on (and nose under) a pair, the pressure to reach the first metaphorical finish line must be enormous.

Categories
Augmented Reality, Business Strategy

Augmented Real(ity) Estate

I would like to live in a world in which the real estate agent [information finder (an "explorer" that uses AR)] and the transaction platform are all (or nearly all) digital.

Funda Real Estate, one of the largest real estate firms in the Netherlands, was (to the best of my knowledge) the first Layar customer (and partner). Initially developed in collaboration with our friend Howard Ogden 3 years ago, the Funda layer in the Layar browser permits people to "see" the properties for sale or rent around them, to get more information and contact an agent to schedule a visit.

A few hours ago, Jacob Mullins, a self-proclaimed futurist at Shasta Ventures, shared with the world on TechCrunch how he came to the conclusion that real estate and Augmented Reality go together! Bravo, Jacob! I think the saying is "In real estate there are three things that matter: Location. Location. Location." Unfortunately, none of the companies he cites as "lighthouse" examples are in the real estate industry.

Despite the lack of proper research in his contribution, property searching with AR is definitely one of the best AR use cases in terms of tangible results for the agent and the user. It's not exclusively an urban AR use case (you could do it in an agricultural area as well) but a property in city-center will certainly have greater visibility on an AR service than one in the country. The problem with doing this in most European countries is that properties are represented privately by the seller's agent and there are thousands of seller agents, few of whom have the time or motivation to provide new technology alternatives (read "opportunity").

In the United States, most properties appear (are "listed") in a nationwide Multiple Listing Service and a buyer's agent does most of the work. Has a company focused on this and developed an easy-to-use application on top of one of the AR browsers (or an AR SDK) using the Multiple Listing Service in the US?
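
As a thought experiment, here is a rough sketch of what the core of such an application might do; the listing feed, field names and point-of-interest format below are hypothetical stand-ins, loosely modeled on the JSON "hotspot" responses that location-based AR browsers of the Layar generation expect from a content layer:

```python
import json
import urllib.request

# Hypothetical listings endpoint and field names; a real integration would use
# an MLS/IDX feed under the appropriate license.
LISTINGS_URL = "https://example.com/api/listings?lat={lat}&lon={lon}&radius={radius}"

def fetch_listings(lat, lon, radius_m=1000):
    """Fetch for-sale properties near the user from the (hypothetical) feed."""
    with urllib.request.urlopen(LISTINGS_URL.format(lat=lat, lon=lon, radius=radius_m)) as resp:
        return json.load(resp)

def listings_to_hotspots(listings):
    """Convert listings into POI-style "hotspots" an AR browser could render."""
    hotspots = []
    for item in listings:
        hotspots.append({
            "id": item["mls_id"],
            "title": item["address"],
            "description": "{} beds, asking ${}".format(item["beds"], item["price"]),
            "lat": item["lat"],
            "lon": item["lon"],
            "actions": [{"label": "Contact agent", "uri": "tel:" + item["agent_phone"]}],
        })
    return hotspots

if __name__ == "__main__":
    nearby = fetch_listings(40.7484, -73.9857)   # example coordinates
    print(json.dumps({"hotspots": listings_to_hotspots(nearby)}, indent=2))
```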

My hypothesis is that at about the time the mobile location-based AR platforms were introduced (mid-2009), the US real estate market was on its way to imploding or had already imploded. People were looking to sell, but not to purchase, property.

This brings up the most important question neither raised nor answered in Jacob's opinion piece on TechCrunch: what's the value proposition for the provider of the AR feature? Until there are strong business models that incentivize technology providers to share in the benefits (most likely through transactions), there's not going to be a lot of innovation in this segment.

Are there examples in which the provider of an AR-assisted experience for Real Estate is actually receiving a financial benefit for accelerating a sale or otherwise being part of the sales process? Remember, Jacob, until there are real incentives, there's not likely to be real innovation. Maybe, if there's a really sharp company out there, they will remove the agents entirely from the system.

Looking for property is an experience beginning at a location (remember the three rules of real estate?), the information parts of which are delivered using AR. Help the buyer find the property of their dreams, then help on the seller's side as well, and YOU are the agent.

Categories
Events, Internet of Things

IoT via Cloud Meetup in Zurich

The other day I traveled 2 hours and 45 minutes from Montreux to Zurich and 2 hours and 50 minutes home following a 2-hour meetup group meeting at the ETHZ. It was a classic case of my desire to meet and speak with interesting people being sufficiently strong to outweigh my feeling that I have too much to do in too little time. See Time Under Pressure. Fortunately, I could work while on the train and, in keeping with my thinking about Air Quality, I (probably) didn't contribute to the total Swiss CO2 emissions for the day. And what is really amazing is that the meetup was worth my investment. I previously mentioned that I was looking forward to catching up with Dominique Guinard, co-founder and CTO of EVRYTHNG, a young Zurich startup, and co-founder of the Web of Things portal.

Dom did not disappoint me or the 20 people who joined the meetup. In addition to great content, he is an excellent presenter. He started out at a very high level and yet was quickly able to get into the details of implementations. He included a few demonstrations during the talk and a couple of interesting anecdotes. We learned that his sister doesn't really see the point of him sharing (via Facebook) the temperature readings from his Sun SPOT gadget. And how he was inspired when WalMart IT management came to MIT for a visit and mentioned that they were considering a $200,000 project to connect security cameras to tags in objects in order to reduce theft. In 2 days, Dom (and others, I presume) had a prototype showing that the Web of Things could address the issue with open interfaces. My favorite story during the talk brought up the problems that can arise when you don't have sufficient security. Dom was once giving a demonstration of the Web of Things when a hacker in the audience saw the IP address. He was able to get into Dom's server and, within minutes (during Dom's talk), shut off the power on his laptop!

In addition to Dom's stage-setting talk, we had the pleasure of hearing from Matthias Kovatsch, researcher in the Institute for Pervasive Computing at ETHZ and architect of Copper, a generic browser for the IoT based on the Constrained Application Protocol (CoAP). Matthias presented the status of the projects on which he is working and the results of an ETSI/IETF plugfest he attended in Paris. The consolidated slides of the IETF-83 CoRE meeting include the Plugtests wrap-up slides (slightly edited). It's really exciting to see how this project is directly contributing to part of the standards proving process!
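
For readers who haven't met CoAP: it is essentially a very compact, binary, HTTP-like request/response protocol carried over UDP, so that constrained devices can expose RESTful resources. Below is a minimal, hand-rolled sketch of a CoAP GET request; it is illustrative only (a real client should use a CoAP library), and the coap.me test server is an assumption on my part:

```python
import socket

HOST, PORT, PATH = "coap.me", 5683, "hello"   # assumed public CoAP test server

def build_get(message_id: int, path: str) -> bytes:
    """Build a bare-bones confirmable CoAP GET with a single Uri-Path option."""
    header = bytes([
        0x40,                      # version=1, type=0 (confirmable), token length=0
        0x01,                      # code 0.01 = GET
        (message_id >> 8) & 0xFF,  # message ID, high byte
        message_id & 0xFF,         # message ID, low byte
    ])
    path_bytes = path.encode()
    # One Uri-Path option (option number 11); assumes the path segment is < 13 bytes.
    option = bytes([(11 << 4) | len(path_bytes)]) + path_bytes
    return header + option

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(build_get(0x1234, PATH), (HOST, PORT))
response, _ = sock.recvfrom(1024)
print("raw CoAP response bytes:", response.hex())
```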

In addition to these talks, Benjamin Wiederkehr, co-founder of Interactive Things, an experience design and architecture services firm based in Zurich, gave us great insights into the process and the tools they used to achieve the new interactive visualization of cell phone use in Geneva. Learn all about this project, done in collaboration with the City of Geneva, by visiting the Ville Vivante web site.

Valuable evening, folks! Thank you for making another trip to Zurich worth the effort!

Categories
Internet of Things, Research & Development

The Air We Breathe

In IoT circles, air is a popular topic. There is so much of it and, at the same time, it is so fundamental to the quality of life on our planet.

During the IoT-4-Cities event Andrea Ridolfi, co-founder of SensorScope, presented about the use of sensors mounted on buses and trams to measure air quality in the cities of Lausanne and Zurich as part of the OpenSense project.

This is a really interesting collaboration that I hope will develop systems for commercial deployments using an architecture similar to the one presented.

Since deploying these systems widely will be expensive, going to scale will probably require getting citizens involved in air quality sensing. The citizen participation component of air quality sensing was the topic of presentations by Michael Setton, VP of Marketing of Sensaris and Jan Blom, User Experience Researcher at Nokia Research.

On March 30, the same day as the IoT-4-Cities meeting, the IoT-London meetup group held a workshop and 10 people built their first sensors. The web site with materials shared during the workshop would be a great basis for people to get started.

In parallel, Ed Borden of Pachube (LogMeIn) has put the Air Quality Egg project up on Kickstarter.com and it took off like a rocket, meeting its financial goal of $39,000 in less than 10 days. There are still three weeks before the project closes on Thursday, April 26, 2012.
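
At its core, an Air Quality Egg-style node just reads a sensor and pushes the value to a Pachube feed over HTTP. Here is a rough sketch of that push; the feed ID, datastream ID, sensor-reading function and the exact endpoint and header name are assumptions on my part and should be checked against the Pachube API documentation:

```python
import json
import urllib.request

PACHUBE_API_KEY = "YOUR_API_KEY"   # placeholder
FEED_ID = "12345"                  # hypothetical feed ID

def read_no2_ppb():
    """Placeholder for reading the gas sensor; a real Egg reads real hardware."""
    return 21.5

def push_reading(value):
    """PUT the current value to a Pachube v2 feed (endpoint and header assumed)."""
    body = json.dumps({
        "version": "1.0.0",
        "datastreams": [{"id": "NO2", "current_value": str(value)}],
    }).encode()
    req = urllib.request.Request(
        "https://api.pachube.com/v2/feeds/{}".format(FEED_ID),
        data=body,
        method="PUT",
        headers={"X-PachubeApiKey": PACHUBE_API_KEY,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print("HTTP status:", push_reading(read_no2_ppb()))
```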

I want to get some people from Switzerland involved in building a prototype of the Air Quality Egg as a DIY project for the IoT Zurich meetup community, but, unfortunately, I and another enthusiast, JP de Vooght, lack all the necessary skills.

  • Are you interested in leading an AQE workshop or getting involved?
  • Do you have a venue where about 10 people can meet for a half day (with benches where the use of soldering tools is convenient)? What else is needed? A 3D printer?

Join the Air Quality Egg Project and contact JP before April 25! We can promote the activity on the IoT-Zurich meetup list and page.

Categories
Augmented Reality, Business Strategy

Augmented Reality SDK Confusion

I don't feel confused about AR SDKs but I wonder if some of those who are releasing new so-called AR SDKs have neglected to study the AR ecosystem. In my depiction of the Augmented Reality ecosystem, the "Packaging" segment is at the center, between delivery and three other important segments. 

Packaging companies are those that provide tools and services to produce AR-enriched experiences. Think of it this way: when content has been "processed" through the packaging segment, a user whose sensors detect the right context receives ("experiences") that content in context; more specifically, in "camera view" (i.e., visually inserted over the physical world), as an "auditory" enrichment (i.e., a sound is produced for the user at a specific location or in a specific context) or as a "haptic" enrichment (i.e., the user feels something on their body when a sensor connects with some published augmentation that sends a signal to the user). That's all AR in a nutshell.
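
To illustrate what the packaging segment actually produces, here is a small, purely illustrative sketch; the field names are mine, not any vendor's format. Each packaged experience pairs a trigger (the context a sensor must detect) with a presentation channel and the content to deliver:

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class Trigger:
    """The context that must be detected before the content is experienced."""
    kind: Literal["geo", "image", "marker"]
    lat: Optional[float] = None            # used when kind == "geo"
    lon: Optional[float] = None
    radius_m: Optional[float] = None
    reference_image: Optional[str] = None  # used when kind == "image" or "marker"

@dataclass
class Enrichment:
    """One packaged AR experience: a trigger plus content on a given channel."""
    trigger: Trigger
    channel: Literal["camera_view", "auditory", "haptic"]
    payload_uri: str                       # 3D model, audio clip, or haptic pattern

# Examples: a 3D overlay anchored to a poster, and a sound played at a place.
poster_overlay = Enrichment(
    trigger=Trigger(kind="image", reference_image="poster_v3.jpg"),
    channel="camera_view",
    payload_uri="https://example.com/models/poster_scene.obj",
)
plaza_sound = Enrichment(
    trigger=Trigger(kind="geo", lat=46.43, lon=6.91, radius_m=50),
    channel="auditory",
    payload_uri="https://example.com/audio/plaza_history.mp3",
)
```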

In the packaging segment we find many sub-segments. This includes at least the AR SDK and toolkit providers, the Web-hosted content publishing platforms and the developers that provide professional services to content owners, brands and merchants (often represented by their appointed agencies).

Everyone, regardless of the segment, is searching for a business model that will work for Augmented Reality in the long run. In order for value (defined for the moment as "something you either pay attention to or pay money to use") to flow through an ecosystem segment it's simple: you must have those who are buying (with their time or their money) and those who sell to the buyers. With the packaging segment in the middle, the likelihood is high that things that matter in the long run, that generate revenues, will involve this segment.

The providers of software development tools for producing AR-enriched experiences (aka AR SDKs) all have the same goal (whether they announce it or not). The "game" today, while exploring all possible revenue streams, is to get the maximum number of developers on your platform. If you have more developers, you might get the maximum number of projects executed on or with your platform. It's the number of projects (or augmentations) that's the real metric that matters most. The SDK providers reach for this goal by attracting developers to their tools (directly or indirectly, using contests and other strategies) and/or by doing projects with content providers themselves (and thus competing with the developers). Cutting the developer segment out is not scalable and cannibalizing your buyers is not recommended either, but those are separate subjects.

For some purposes, and since it drives the use of their products, packaging companies rely on and frequently partner with the providers of enabling technologies, the segment represented in the lower left corner of the figure. More about that below.

Since we are in the early days and no one is confident about what will work, virtually all the packaging segment players have multiple products or a mix of products and services to offer. They use/trade technologies among themselves and are generally searching for new business models. And the enabling technology providers get in the mix as well.

The assumption is that if a company is using an SDK, they are somehow "locked in" and the provider will be able to charge for something in the future, or that, if you are a hardware provider, your chips will have an advantage accelerating experiences developed with your SDK. If manufacturers of devices learn that experiences produced using a very popular SDK are always accelerated with a certain chipset, they might sell more devices, hence order more of these chips, or pay a premium for them. This logic probably holds true as long as there aren't standards or open source alternatives to a proprietary SDK.

Let's step back a few years to when AR SDKs were licensed to developers on an annual or per-project basis. Revenue from licensing SDKs to third-party developers on a project basis was the primary business model for computer vision-based SDK provider AR Toolworks, and annual licensing was relatively successful for the two largest companies (in terms of AR-driven revenues pre-2010), Total Immersion and metaio. These were also the largest revenue-generating models for over a dozen other less well-known companies until mid-2011. That's approximately when the "simple" annual or per-project licensing model was washed away, primarily by Qualcomm.

Although it is first an enabling technology provider (blue segment), Qualcomm released its computer vision-based SDK, Vuforia, with a royalty- and cost-free license in the last days of 2010 and more widely in early 2011. To compound the issue, Aurasma (an activity of Hewlett Packard since the Q3 2011 HP acquisition of Autonomy) came out in April 2011 with its no-cost SDK. Qualcomm and Aurasma aren't the first ones to do this. No one ever talks about it any more, but Nokia Point & Find (officially launched in April 2009 after a long closed beta) was the pre-smartphone era (Symbian) version. It contained and exposed via APIs all the various (visual) search capabilities within Nokia and was released as a service platform/SDK. This didn't catch on for a variety of reasons.

So, where are we? Still unclear on why there are so many AR SDKs, or companies that say they offer them.

AR SDKs are easily and frequently confused with Visual Search SDKs. Visual Search SDKs permit a developer to use algorithms that match what's in the camera's view with images on which the algorithm was "trained," a machine learning term for processing an image or a frame of video and extracting/storing natural features in a unique arrangement (a pattern) which, when detected again in the same or a similar arrangement, will produce a match. A Visual Search SDK leaves what happens after the match up to the developer. A match could bring up a Web page, like a match in a QR code scanner does. Or it could produce an AR-enriched experience.
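
For readers who want to see what that training and matching can look like in practice, here is a minimal sketch using OpenCV's ORB features and a brute-force matcher; this is one common, simple approach chosen for illustration, not how any particular commercial Visual Search engine works (those use their own, far more robust pipelines and large-scale indexes):

```python
import cv2

# "Train": extract and store natural features from a reference image.
orb = cv2.ORB_create()
reference = cv2.imread("reference_poster.jpg", cv2.IMREAD_GRAYSCALE)
ref_keypoints, ref_descriptors = orb.detectAndCompute(reference, None)

def matches_reference(camera_frame_gray, min_good_matches=25):
    """Return True if the camera frame appears to contain the trained image."""
    keypoints, descriptors = orb.detectAndCompute(camera_frame_gray, None)
    if descriptors is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(ref_descriptors, descriptors)
    good = [m for m in matches if m.distance < 50]   # crude distance threshold
    return len(good) >= min_good_matches

# What happens after a match is up to the application: open a Web page, or
# anchor an AR-enriched experience to the recognized image.
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
print("match!" if matches_reference(frame) else "no match")
```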

Therefore, Visual Search can be used by and is frequently part of an AR SDK. Many small companies are providing "just" the Visual Search SDKs: kooaba, Mobile Acuity, String Labs, Olaworks, milpix, eVision among others. And apparently there's still room for innovation here. Catchoom, a Telefonica I&D spin-off that's going to launch at ARE2012, is providing the Visual Search for junaio and Layar's Vision experiences.

Another newcomer that sounds like it is aiming for the same space that Catchoom has in its cross hairs (provides "visual search for brands") is Serge Media Corporation, a company founded (according to its press release) by three tech industry veterans and funded by a Luxembourg-based consortium. The company introduced the SergeSDK. Here's where the use of language is fuzzy and the confusion is clear. The SergeSDK Web page says that Aurasma is a partner. Well, maybe HP is where they are getting the deep pockets for the $1M prize for the best application developed using their SDK! If Aurasma is the provider of the visual search engine, then the SergeSDK is actually only a search "carousel" that appears at the top of the application. Sounds like a case where Aurasma is going to get more developers using its engine.

Hard to say how well this will work in the long run, or over just the next year. There are few pockets deeper than those of Google and Apple when it comes to Visual Search (and AR). These companies have repeatedly demonstrated that they have been incubating the technologies and big plans are in store for us.

All right. Let's summarize. By comparison with other segments, the packaging segment of the AR ecosystem is a high-risk zone. It will either completely disappear or explode. That's why there are so many players and everyone wants to get in on the action!

Stay tuned as, in the next 6 months, this segment undergoes the most rapid and unpredictable changes when Google and Apple make their entries.

Categories
Social and Societal

Time Under Pressure

In my most recent post I wrote a bit about what happens when I leave my office. At events I meet a lot of new people, and when out on the road I encounter objects that aren't familiar to me. It can be enlightening but it can also be dangerous and costly if time is your most precious resource (and time is the most limiting resource for populating this blog).

Here's an example of what we all try to avoid. Eric Picker came to Lausanne to give a 20-minute talk about the use of sensors and telecommunications to monitor water quality and quantity during our IoT-4-Cities workshop. His trip to Lausanne was not as smooth as it could have been, but he arrived with a few hours to spare. The talk was very rewarding, and he met some new people and took the opportunity to get in a few hikes in Switzerland.

It was on the return trip that his flight was cancelled due to strikes by the French air traffic controllers trade union and, well, the French train system failed him as well (he missed every connection). It took two days for him to travel back to Cannes. Without incident, Geneva is one-hour away from Nice (by air).

Last week in San Francisco I was nine time zones out of synch with home base and (on the record) was there only to attend the New Digital Economics Brainstorm and chair the AR Innovator's Showcase on the same evening (March 27). I knew that in a hot bed of activity like the Bay Area, I couldn't miss the opportunity to connect with others. In the end, there were people who I couldn't catch, but almost every precious minute was accounted for. Among the meetings, I had a great philosophical session with Gene Becker, another with Erik Wilde, visited the quiet offices of Quest Visual, and had lunch with the founder of Vizor (a project in stealth mode). I also caught up with spirit sister Kaliya Hamlin, and we learned about converting communities of interest into consortia with Global Inventures. I had quality sessions with representatives of Total Immersion, metaio, PRvantage, NVIDIA and The Eye Cam.

As a consultant, my value is a mixture of my knowledge about subjects and the time I have available to dispense it, to use it or to increase it. I do everything I can to manage my time. I've been to many portals and have read books on the topic of time management. Like everyone, I suppose, I try to avoid wasting time, and I use some software tools to save time. It's a topic of much interest to me but here's the ironic twist I've been reading and hearing more about recently: the more you stress about anything, including the time you have, the less of it you (may) have! For example, here, here, here and here. I hate to leave you with this negative thought but it's what's on my mind!