Categories
Business Strategy Research & Development

Who is Leading Us Indoors?

Financial analysts' blogs are not on my list of top reads this summer, so I was surprised to find myself reading this fresh post on SeekingAlpha. It is a thorough research study of the potential revenue to be generated from Nokia's patent portfolio. After describing how much Nokia has invested in R&D over the past 10+ years, the analyst arrives at the headline "Location Based Mapping Patents Are Hidden Jewel of Nokia Patent Portfolio" and jumps directly to the topic of indoor positioning.

Indoor positioning has been an increasingly important topic in my research. I'm not alone in anticipating that a large value stream will emerge from positioning users more accurately indoors (as well as outdoors), so the potential for innovation in this space is going to be huge. Well, that is, if there are not already patents protecting such innovations and their future use.

I found that this post on Forbes sheds a lot of light on the thoughts I had when I saw the Nokia and Groupon deal that created Groupon Now! Of course, the Forbes post is more in-depth and valuable. Here are a few points that this blogger extracted from the Grizzly Analytics report published in December 2011:

Of the five leading companies (Google, Apple, Microsoft, Nokia and RIM), Krulwich sees Microsoft and Nokia as the most likely to challenge Google in indoor positioning. He expects Microsoft and Nokia to launch a service sometime in 2012, perhaps tagged to Microsoft’s “Tango” Windows Phone update. Both companies have significant experience in indoor positioning. Microsoft has researched how to determine location using special radio beacons as well as by analyzing Wi-Fi signal strength. It has also experimented with what Krulwich calls movement tracking. That involves tracking a device as it moves away from a known location, such as a door to a building (which can be pinpointed via GPS because it is outdoors).
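Determining position from Wi-Fi signal strength, as described above, can be illustrated with a toy weighted-centroid sketch. The access-point positions, transmit power and path-loss exponent below are assumptions for illustration, not any vendor's (or any patent's) actual method:

```python
import math

# Hypothetical access points: (x, y) position in meters and a measured
# received signal strength (RSSI) in dBm. These values are invented.
ACCESS_POINTS = [
    ((0.0, 0.0), -48.0),
    ((20.0, 0.0), -62.0),
    ((0.0, 15.0), -58.0),
]

TX_POWER_DBM = -40.0   # assumed RSSI at 1 m from an access point
PATH_LOSS_EXP = 3.0    # assumed indoor path-loss exponent

def rssi_to_distance(rssi):
    """Invert the log-distance path-loss model to estimate range in meters."""
    return 10 ** ((TX_POWER_DBM - rssi) / (10 * PATH_LOSS_EXP))

def estimate_position(readings):
    """Weighted centroid: nearer (stronger) access points get more weight."""
    weights = [1.0 / rssi_to_distance(rssi) for _, rssi in readings]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(readings, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(readings, weights)) / total
    return x, y

x, y = estimate_position(ACCESS_POINTS)
print(f"Estimated position: ({x:.1f}, {y:.1f}) m")
```

The estimate lands nearest the strongest access point, which is all a weighted centroid can promise; real systems refine this with fingerprinting, beacons or the movement tracking Krulwich describes.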

Beyond its research, Microsoft holds granted patents in indoor positioning. Krulwich counted at least five Microsoft patents related to determining phone location using wireless access points, radio beacons, device movements and other radio signals.

Nokia’s indoor positioning work is equally sophisticated with patents going back to at least 2006. In September 2006, Nokia filed a patent on “Direction of Arrival” detection. That strategy leverages ultra-wideband (UWB) radio technology to estimate location. In fall 2007, Nokia also filed three patents related to determining location via Wi-Fi signal strength.

Although Krulwich's prediction that Microsoft and Nokia would launch an indoor positioning service in 2012 has not yet been disproven, it's clear that Google has continued to make more noise around indoor positioning than any of the other potential leaders. If Microsoft and Nokia are going to battle for the indoor future with the likes of Apple, Google/Motorola, Qualcomm and Research In Motion, among others who have also written or read the writing on the walls, they will need to acquire or partner with those who have a much higher rate of success in the mobile market.

Where does all this interest in the indoor mobile positioning space lead us? To finding and working with small innovative companies that have the potential to either implement well on the patents of others, or to generate new intellectual property for indoor positioning and, in either case, be acquired by one of the five major companies leading users of mobile services indoors.

Who are you and how can I help?

Categories
Internet of Things Research & Development Social and Societal

Even Minnesotans know about RFID

I don't have anything for or against Minnesota, but why would this little-known state come up twice in a few days? This merits a little examination.

Earlier this week a friend of mine who lived in Minneapolis in the early 90s was telling me that upon her recent visit there she was amazed at the vibrant community living there. Is that why in the 2008 U.S. presidential election, 78.2% of eligible Minnesotans voted – the highest percentage of any U.S. state – versus the national average of 61.7%? I guess this could be a relevant factoid in a US presidential election year.

Then, I discovered that the University of Minnesota’s Institute on the Environment and Seoul National University have recently released a study on the use of three things that are squarely on my radar: smartphones, social networks, and "things" (in this case packages using RFID). And, if that wasn't enough to catch your attention, there's also a "green" component to this study. According to this article on the University of Minnesota's Institute on the Environment's web site:

The study used spatial and agent-based models to investigate the potential environmental benefits of enlisting social networks and smartphones to help deliver packages. While sensitive to how often trusted and willing friends can be found in close proximity to both the package and the recipient within a day, results indicate that very small degrees of network engagement can lead to very large efficiency gains.

Compared to a typical home delivery route, greenhouse gas emissions reductions from a socially networked pickup system were projected to range from 45 percent to 98 percent, depending on the social connectedness of the recipients and the willingness of individuals in their social networks to participate. System-wide benefits could be significantly lower under assumptions of less than 100% market adoption, however. In fact, the study points out that many of the gains might be nullified in the short term as fewer home truck deliveries make existing delivery systems less efficient. But, “with only 1-2% of the network leveraged for delivery, average delivery distances are improved over conventional delivery alone – even under conditions of very small market penetration,” the study concluded.
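As a rough illustration of the study's headline claim, here is a back-of-envelope model of my own (not the researchers' spatial or agent-based simulation; the per-package distances and engagement rate are assumed) showing how handing even 1-2% of packages to well-positioned friends trims total delivery distance:

```python
import random

random.seed(42)

# Toy model (not the study's actual simulation): a delivery truck incurs a
# fixed marginal distance per package. If a recipient's friend happens to be
# passing near both the package and the recipient, that package is handed
# off instead, at a small detour cost to the friend. All figures assumed.
N_PACKAGES = 1000
TRUCK_KM_PER_PACKAGE = 2.0      # assumed marginal truck distance per delivery
FRIEND_DETOUR_KM = 0.3          # assumed extra distance for a friend hand-off
engagement = 0.02               # 2% of packages routed through the network

baseline_km = N_PACKAGES * TRUCK_KM_PER_PACKAGE
handed_off = sum(1 for _ in range(N_PACKAGES) if random.random() < engagement)
networked_km = (N_PACKAGES - handed_off) * TRUCK_KM_PER_PACKAGE \
               + handed_off * FRIEND_DETOUR_KM

saving = 100 * (baseline_km - networked_km) / baseline_km
print(f"{handed_off} of {N_PACKAGES} packages handed off; "
      f"distance saved: {saving:.1f}%")
```

Even this crude version shows the study's qualitative point: because a friend's detour is so much shorter than a dedicated truck leg, small engagement rates yield disproportionate savings, though nothing like the 45-98% figure, which depends on the full route-consolidation effects the real model captures.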

“What is important is that sharing be allowed in the system, not how many ultimately choose to share time or resources,” says study co-author Timothy Smith, director of IonE’s NorthStar Initiative for Sustainable Enterprise. “We find that providing the relatively few really inefficient actors in the network the opportunity to seek the help of many better positioned actors can radically improve performance.” This is particularly relevant today, Smith says, as online retailers such as Amazon begin introducing delivery pickup lockers in grocery, convenience and drug stores.

Perhaps there is, indeed, a natural link between voter participation and social networking for your local package delivery: if a citizen is more involved in the well being of the community and wants to vote, perhaps the same person will also be open to making small detours for the purpose of delivering a package and protecting the environment.

I suspect that the speakers about NFC and RFID at the upcoming IoT Zurich meetup event will be touching on this topic of citizens using their smartphones with near field communications, but probably not for the same applications. 

Categories
Innovation Internet of Things

Computer Vision on a Programmable Flying Board

Producing video content is said to be the way of the future. Every time I've attempted to develop a short video, I've found it difficult: orders of magnitude more effort than simply posting 500 words in a blog (and that is more difficult than it sounds). How are we going to overcome the barriers to video publishing? Perhaps a flying smart camera?

There will be many new tools coming out to help people who capture their lives (more or less continually) with video. For example, if using the Google Project Glass device, a person could log their lives (and their baby's life) and accumulate content quickly. But I've discovered since I started wearing the Looxcie X2 camera that it's not the capture of video that's the most difficult, it's making sense of it!

I'm bringing up these points because they converge precisely with a post I saw on TechCrunch. When the editors at TechCrunch announced in May that they were starting a series on "makers," as those who build hardware for business or pleasure are called in our circles, I perked up. In only the fourth episode, I learned about something that hits two of my key words: computer vision and programmable board.

The company featured in this video segment is Centeye, the maker of computer vision chips that have become the basis of the ArduEye, an open source project putting machine learning computer vision on Arduino.

Why is this important? Because it demonstrates that making sense of the video can be done with very little computational overhead. This diagram compares the vision chip with a CMOS camera that pushes all the pixels it captures to a CPU for storage or analysis (click on the figure to see an enlargement):
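The data-rate argument behind that comparison can be put in rough numbers. The figures below are my own assumptions for illustration, not Centeye's actual specifications:

```python
# Back-of-envelope comparison (assumed figures, not Centeye's real specs):
# a conventional camera streams every pixel to the CPU, while a vision chip
# ships only a handful of extracted features (e.g., optical-flow vectors).

def bytes_per_second(values_per_frame, bytes_per_value, fps):
    """Raw data rate a downstream CPU must absorb."""
    return values_per_frame * bytes_per_value * fps

# Conventional VGA camera: 640x480 pixels, 1 byte each, 30 frames/s.
camera_bps = bytes_per_second(640 * 480, 1, 30)

# Vision chip: say 16 optical-flow vectors of 2 bytes each, 30 frames/s.
vision_bps = bytes_per_second(16, 2, 30)

print(f"Camera: {camera_bps / 1e6:.1f} MB/s, vision chip: {vision_bps} B/s, "
      f"ratio: {camera_bps // vision_bps}:1")
```

Under these assumptions the on-chip approach moves roughly four orders of magnitude less data, which is why it fits on an Arduino-class processor and on a tiny helicopter's power budget.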

Now, this alone might not capture your interest; however, to demonstrate the advantages of the low power and low computational overhead, they put the board on a set of blades and made it into a helicopter. You need to watch this video!

This might remind you of the Parrot AR Drone, but it's better because it doesn't require an iPhone.

Perhaps, in place of or in addition to an HD camera on a pair of glasses, there could be a vision chip that helps to edit the captured content. This is already done in head-mounted cameras for the defense sector, I'm told; however, it must be produced at low cost, low weight and low power consumption for the rest of us to benefit from these breakthroughs. I hope to see these chips used more widely when there are more people doing projects with ArduEye.

Categories
3D Information Business Strategy

Business Models for Indoor Positioning

Given its shrinking share of today's smartphone-focused world (16% of 2011 smartphone sales, down from 33% in 2010, according to IDC) and its recent difficulties, Nokia is not frequently listed as a technology leader in 2012. But it is too soon to dismiss the company entirely.

Its deal with Groupon is worthy of note as an alternative to relying on device sales as a future revenue model. Nokia is not the first company to think of advertising as a business model, and advertising is my least preferred business model, but having a robust indoor and close-proximity-to-point-of-sale technology will be highly strategic and might change advertising into something less distasteful.

The "Groupon Now!" service for Nokia Lumia smartphones (currently only available in the United States) works outdoors as well as indoors. The really big potential is to use the device's precise location to target highly appropriate messages to its owner/user. When I say "highly appropriate" I mean targeting a notification based on so many factors about the user's current situation that the advertising becomes an anticipatory service.

An "anticipatory service" is basically anything that is provided to a user just prior to their needing it in daily work or personal life, in a way that provides unprecedented levels of benefit. An existing anticipatory service is a routing service on GPS devices that takes a user around a traffic jam before they arrive in the traffic itself. Another is an alert when you are approaching the expiration date of your contract with an important merchant or service provider. As simple and commonplace as anticipatory services may seem today, they are not (often) based on user location, and they rarely alert a user at the point of sale (i.e., a location).
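A minimal sketch of what such location-aware anticipatory triggers might look like, using rule names and thresholds entirely of my own invention (no vendor's actual product logic):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative anticipatory-service rules: fire a notification just before
# the user needs it, driven by position and by an approaching deadline.

@dataclass
class UserContext:
    distance_to_store_m: float   # from indoor/outdoor positioning
    contract_expires: date
    today: date

def anticipatory_alerts(ctx):
    alerts = []
    # Point-of-sale trigger: only fire when the user is already nearby.
    if ctx.distance_to_store_m < 50:
        alerts.append("Offer available at the store you are approaching")
    # Contract-expiry trigger: fire ahead of the deadline, never after it.
    if timedelta(0) <= ctx.contract_expires - ctx.today <= timedelta(days=14):
        alerts.append("Your contract expires soon: time to renew")
    return alerts

ctx = UserContext(30.0, date(2012, 7, 20), date(2012, 7, 10))
for alert in anticipatory_alerts(ctx):
    print(alert)
```

The interesting design question is not the rules themselves but the inputs: indoor positioning is what makes the first trigger possible at all, since GPS alone cannot tell a shop aisle from the parking lot.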

Nokia's CTO office had its eye on Indoor Positioning-based services many years ago. When Nokia acquired Gate5 and Navteq it significantly increased its assets in the location and positioning technology space. Here's a 2009 video of Brett Murray talking about anticipatory services driven by indoor positioning.

If there's a company that needs to adopt a new business model, it has to be Nokia. I hope that this company's indoor positioning technology portfolio will help it, either directly, through relationships with the providers of anticipatory services like Groupon, or indirectly, by licensing its patents to others who will be leveraging indoor position as one of the key triggers for notifications.

It will just need to do it quickly in order to beat Apple and Google to the punch.

Categories
Business Strategy Innovation

Life is too short to be busy

In the past two months, there has been little time for idle thoughts about Spime Wrangling. The expression "dawn-to-dusk," used to describe how hard a peasant worked in the fields in the Middle Ages, or a worker in a South Asian sweatshop, doesn't do justice to how fully engaged I've felt during the 2nd quarter of 2012. I've had to devote myself to my other (non-blogging) duties because I've been wrangling new spimes, while traveling. At least, that's what I've told myself.

I'm back. I'm still wrangling, and traveling, but I've readjusted my perspective on the "madness," the feeling of anxiety that something important might be slipping away. What order I may have been (or will be) able to impose on the world as we know it cannot be measured. Can't be done. It's like trying to quantify the size of the ocean or the impact that our sensor-izing the world (putting sensors everywhere) will have on society.

Don't try! The way spimes work (continuously producing data, automatically, effortlessly), we are guaranteed that there's not just "something" escaping our attention while our focus is elsewhere, while we are sleeping or otherwise relaxing; rather, there's more escaping us than we will ever know.

Sounds like I'm heading into another of those "accelerating pace of change" pieces but I'm actually going the other way. Tim Kreider's June 30 New York Times essay, The 'Busy' Trap summarizes beautifully the point that I, and I think many other people, feel. The essay ends with the short sentence that I've used as the title of this post. Stop reading this post. Take a minute to absorb Kreider's suggestion that a break is in order.

Despite cool temperatures and overcast skies, it's summer in Western Europe. Time for holidays. Millions of people are, whether they choose to or not, going to feel their output, perhaps even their productivity, drop sharply. Either by choice, precisely as Kreider has done, or by default, because so many others have gone for some idle time on the beach, we are entering the slow period of the year. And it is overdue! I will be using it to digest and to summarize the trends I've seen in the first 6 months of this year.

Categories
Internet of Things

Arduino at IoT Zurich

Arduino, for those who are not avid Do-It-Yourselfers, is the most popular IoT prototyping platform of all time.

It is the open source hardware and software platform on which, in the six or seven years since Massimo Banzi introduced it, thousands of projects have been developed. Here's a list of over a dozen practical Arduino projects you could build. Here's a blog about some of the strangest Arduino projects.

Even though there are dozens of books and portals about Arduino, it's still a very hot topic. On June 26, our IoT Zurich meetup group had two speakers presenting Arduino concepts and projects to over 30 people interested in making IoT happen.

Thomas Brühlmann, author of "Arduino Praxiseinstieg," started with an introduction to the Arduino platform (his slides are available here). He also showed a variety of examples and, to perhaps inspire, perhaps to embarrass us, he brought his young assistant (his son).

Following Thomas, we had Michael Kroll, another local hacker with really valuable experience using Arduino. Michael's talk about the Bluetooth Low Energy Shield he has built was very interesting, showing the practical experience he has with the platform. His slides are available here.

Both of the speakers and their content were really valuable for our meetup members, especially those who are preparing to come to the first special full-day event we are organizing. On Saturday, July 7, 2012, the DIY IoT Workshop in Zurich will take 10 people through the steps of making their first IoT project. Led by our IoT-Zurich co-organizer, Thomas Amberg, co-founder of Yaler, this workshop is going to provide a small group with the hands-on experience and guidance that many need to launch their careers as DIY IoT community members. Thomas has 10 years of experience building things and really knows how to communicate this knowledge in a systematic way. We have only one place remaining for this workshop, so someone still has a chance to enroll!

After the July 7 workshop, the holidays will be in full swing so the IoT Zurich meetup group will take a break. On September 5 we will be having another meeting on the topic of RFID and NFC for IoT. This will be a mix of theoretical information, about the concepts behind these technologies, and practical info, the use cases and case studies of implementations of RFID performed by Vilant Systems.

Arduino and its uses will be one of the topics again during our September 21, 2012 IoT 4 Energy Hackathon. This one-day event is going to bring together several hardware and software platforms and focus the minds of our local developer community on how we can use IoT to help consumers better manage their energy consumption. There will be a great group of experienced leaders providing hardware and software to meet the hackathon's three primary goals:

  •     Create applications/projects that make energy monitoring fun,
  •     Create applications/projects that help people become aware of their energy consumption and ultimately become more energy efficient, and
  •     Create applications/projects that are useful to many.

Surprising how much DIY IoT activity we can pack into a few months for those who live in a "small city" like Zurich!

Categories
Events Internet of Things Standards

Sensors, Their Observations and Uses

There was a splash when my suitcase fell into the puddle next to the taxi outside the Exeter St. David's rail station last night. Although it was sunny under blue skies on both days in London, while I was indoors attending the Open IoT Assembly, I was expecting rain in England and came prepared, or so I thought; it was the force of gravity that I had underestimated, and it caught me (and my suitcase) by surprise.

"More rain," announced the lonely receptionist at the White Hart hotel when I inquired about today's forecast. Instead, the sky could not be bluer or clearer of clouds. Is the unpredictability of the weather an omen for the day? At least I won't be traveling with wet belongings on my way to the UK Met Office for the open session of the OGC meeting and back to the rail station this afternoon.

I'm attending the meeting for only a few hours so that I can conduct in-person meetings with the chairs and conveners of the IndoorGML Standards Working Group, the Sensors 4 IoT Standards Working Group and other luminaries in the geospatial realm. I find it highly appropriate that the sensors I've relied on for weather have been so inaccurate today! I trust that my internal confidence about my meetings will serve me better today than it did last night.

Categories
Internet of Things Social and Societal

Where are the users?

Information technology is supposed to benefit people. Not all people, necessarily. But some people, some of the time. Very frequently, engineers are looking to solve a problem, but few or no end users are involved in the design of solutions. There are many explanations, including lack of knowledge of the options, political agendas and financial considerations, for why end users are rare or absent during the design of systems.

Lack of flesh-and-blood end users is not always an impediment to progress or impact. In the first meeting of the ITU Machine-2-Machine Focus Group (notice that there are no humans in the chain), which I attended earlier this week, end users' needs are considered as part of use case descriptions, and the final user is represented by actors in these use cases. This approach works well for many situations. But not for all.

Usman Haque's guest post on Wired's new (since mid-January 2012) Ideas Bank blog makes the case that citizen participation is not optional when working on intelligent city information architecture. In fact, Usman argues that citizens, not corporate giants like IBM, Cisco or General Electric, should drive and be at the "center" of the activity.

He writes, "We, citizens, create and recreate our cities with every step we take, every conversation we have, every nod to a neighbour, every space we inhabit, every structure we erect, every transaction we make. A smart city should help us increase these serendipitous connections. It should actively and consciously enable us to contribute to data-making (rather than being mere consumers of it), and encourage us to make far better use of data that's already around us.

"The 'smartness' of smart cities will not be driven by orders coming from the unseen central government computers of science fiction, dictating the population's actions from afar. Rather, smart cities will be smart because their citizens have found new ways to craft, interlink and make sense of their own data."

As I mentioned above, there are frequently excellent reasons for users or citizens to be represented by proxies. In fact, isn't that what a representative democracy does? It elects people to represent citizens in decision-making processes, just as actors in use cases represent the end users of technology systems.

That said, I agree with Usman that it's not easy to bring about a major transformation from brick-and-mortar cities to smart cities, and that people have to drive, or at least participate in, the innovation, or else the outcomes could be rejected by those they are intended to serve. Furthermore, I agree that the infrastructure for smart cities may serve citizens, but it requires investments at such enormous scales that only cities (or national governments, in some cases) can fund it, and only very large companies, like Cisco and IBM, can realistically be expected to build it.

Perhaps the absence of users in the case of city development, or at least in the planning of city change, is an issue that could be addressed by increasing the use of AR-assisted information delivery and retrieval programs.

Here's the scenario: Joe Citizen is sent a device with instructions (a tutorial) and asked to use it to review a proposed project and give his feedback. He takes the device out into the neighborhood, turns around, explores a proposed project from different angles and answers questions on the screen. He goes to the nearest city office and turns in the system complete with his ideas and feedback. If he doesn't return the system or perform the task on time, an invoice is sent for payment (similar to a fine if a citizen doesn't show up for jury duty).

For very modest investments, the citizens of a city could take an incremental step towards a smart city just by "seeing" the information that their city already has and can make available using Open Data systems. I haven't forgotten my plan to repeat the Barcelona City Walkshop this year.

Categories
Augmented Reality News

Skin in the Game

Those who read this blog are more than likely among the group that shares some level of conviction (maybe not as strongly as I do!) that Augmented Reality is going to be important. We are a small minority, but April 2012 will represent an important historical milestone. Perhaps it is as important as the June 2009 release of Layar's AR browser in terms of demonstrating that AR has traction, has graduated from the laboratory, captured imaginations and will have huge commercial impacts.

In the past three weeks many more people are finally getting to see the signs that indicate how important Augmented Reality will be in the future. The curtain that formerly prevented people from knowing how much investment was truly happening is dropping. Major companies are now following Qualcomm and putting resources into AR.

They are putting "skin in the game," and that's what it takes to convince many (including mobile pundits such as Tomi Ahonen and Ajit Jaokar) that AR has passed a threshold. In case you didn't catch it, Tomi posted on his blog, Communities Dominate Brands, on April 11, 2012 (probably the shortest post he's ever written!) that he has seen the light and can now believe that AR is the planet's 8th mass medium.

Twenty-eleven was a slow year for AR revenue growth, but those who were paying attention could see small signs of the growing capital influx. Intel had already demonstrated its interest in AR by investing in Layar in late 2010 and in Olaworks in early 2007, and expanded its investments (e.g., in Total Immersion). Texas Instruments, Imagination Technologies, ARM, ST-Ericsson and Freescale all revealed that they have established programs, on their own or in partnership with AR companies, to accelerate AR on mobile platforms.

But, with only a few exceptions, these announcements by semiconductor companies were "me too," forced by the apparent (media) successes of Qualcomm's Vuforia and Hewlett Packard's Aurasma. These last two companies have heavily contributed to the media's awareness of mobile AR, but, sadly, also contributed to the perceived image that AR is a gimmick.

We can now pinpoint the catalytic event after which AR is taken more seriously: the 9:00 AM PDT April 4, 2012 announcement by Google confirming widespread rumors that it is indeed working on see-through head-mounted displays (posted on TechCrunch here). Many jumped to the conclusion that these are AR glasses, although the applications are not, strictly speaking, limited to AR. A blog called Glasses from Google calls them "a new paradigm in computing."

While the figures have not been (and never will be) disclosed by Google, I estimate that the company has already invested well over $4M, approximately ten percent of the entire AR industry's 2011 revenues, to get to its current prototype (even one that reboots regularly). And (most likely) without reaching far beyond its internal intellectual property and a few consultants. Note that, in light of the Project Glass announcement, the Google acquisition of Motorola Mobility for a total of about $12.5 billion is very strategic. It surpasses the October 2011 $10 billion Hewlett-Packard acquisition of Autonomy by only a few billion dollars. Very big skin in the game.

While the global media has for 15 days steadily reported on and speculated about Project Glass, and the video broke social media records, this is not the only evidence of new investments being made in mobile AR in April 2012. Tiny by financial standards, but significant developments nevertheless, include the Olaworks acquisition and, today, Total Immersion's announcement that Peter Boutros, a former Walt Disney VP, will be its new president. And Google isn't the only company working on eyewear for information. For example, Oakley's April 17 announcement that it has been working on an eyewear project shouldn't have taken anyone by surprise but managed to make the headlines.

What and who is next?!

And how much skin will the next announcement be worth?

Without strong business cases, this second question will be the most difficult to answer and, for this reason, it is a topic about which I have recently written another post.

Categories
Augmented Reality News

Intel’s First Full Acquisition of Korean Firm, Olaworks

The Korea Herald is not where I normally get my news. Nor do I regularly visit The Register (whose tag line is "Biting the Hand that Feeds IT"). But today I visited both in order to learn more about Intel's $30.7M acquisition of Olaworks.

In case you are not familiar with it, Olaworks was one of the early companies to dedicate itself first to computer vision (primarily face recognition) and then to apply its intellectual property to solve Augmented Reality challenges. The founder of Olaworks, Dr. Ryu Jung-hee, has been a long-standing friend and colleague and one of the most outgoing Koreans I've met. Ryu has attended at least four out of the past five AR Standards Community meetings and miraculously shows up at other events (e.g., he accepted my invitation to come to the first Mobile Monday Beijing meeting and showcase on the topic of mobile AR, and presented about Olaworks during the first AR in China meeting, one year ago).

Not only am I pleased for Ryu and the 60 employees who work for Olaworks, I'm also impressed that an analyst concluded that one reason for the acquisition might be Olaworks' facial recognition technologies. At present, LG Electronics, Pantech and HTC use Olaworks' face recognition technology in their phones. Gartner analyst Ken Dulaney told The Reg that Intel's decision to acquire was probably informed by the growing popularity of face recognition software in the consumer space. In fact, Texas Instruments recently shared with me that they are very proud of the facial recognition performance they have on the OMAP. Face recognition could be used for a lot of different applications (not just AR) when it is embedded into the SoC, as an unnamed source suggested might be Intel's intention, since Olaworks seems to be heading for integration with another Intel acquisition, Silicon Hive.

Another analyst speculating on the acquisition, Bryan Ma of IDC, sees the move as one of many steps Intel is taking to "prove it’s better than market leader ARM in the mobile space. It has been trying to position Medfield as a better performance processor using the same power consumption as ARM,” he told The Reg. “In the spirit of this it would make sense for Intel to move for technology and apps which can harness that horsepower to differentiate it from ARM.”

I'm not familiar with the Korean investment landscape but it may be important that the Private Equity Korea article on the acquisition makes a point about Intel's acquisition of Olaworks being the first full Korean acquisition the chip giant has made. It seems that we rarely hear about Korean startups in the West and I suspect that one reason is that the most common exit strategy of a young Korean company is acquisition by one of the global handset manufacturers (LG Electronics, HTC, or Samsung), or one of the large network operators. It's perfectly logical, not only from a cultural point of view but also because the Korean mobile market is large and has a long history of having its own national telecommunications standards.

After NTT-DoCoMo's launch of its 3G service in October 2001, the second 3G network to go commercially live was SK Telecom in South Korea on the CDMA2000 1xEV-DO technology in January 2002 (10 years ago). By May 2002 the second South Korean 3G network was launched by KTF on EV-DO and thus the Koreans were the first to see competition among 3G operators.

I hope that the Olaworks exit signals the opening of Korean technology silos and an opportunity for other regions of the world to benefit from the advances the Koreans have managed to make in their controlled 3G network environment.

Categories
Business Strategy True Stories

Searching for a Viable Business Model

Customers value what they pay for and (usually) pay for what they value. If customers do not pay for value, is the business model at fault?

This basic question is at the root of a current polemic facing Facebook. Companies in other markets, for example AR, must answer the question or die. I feel that the business model is not always the root of the problem. The take-home message of this post is that just because a business model doesn't work for one company the first time it's tried doesn't mean the same or a similar model will fail for another company in a different geographic context, or at another point in time, after users have been through an educational process.

Three years ago, around 2 PM on February 11, 2009, I sat in the audience of a session on social networking at the Mobile World Congress. I had already been focusing on mobile social networking for several years (since mid-2006) and remember Facebook mobile's "birth" as a mobile Web site and an iPhone application that made it easier to upload photos and notes, to exchange messages on the site, and to look up phone numbers.

Facebook mobile was a year "old" at the time, and I had been following along since its birth in 2008. I wasn't alone: the rest of the world read about Facebook's rapid mobile growth in a Mashable blog post, a GigaOM blog post and an article on BusinessWeek.com.

At that time, Facebook operated these mobile services worldwide:

The number of users and the options for accessing the social network were quite similar in size and approach to where the mobile AR industry is today.

In the nine months from January 2009, when there were a mere 20M mobile Facebook users, to September 2009, when the network had reached 65M mobile users, Facebook implemented what it called the "Facebook Credits" platform. It is now available via 80 payment methods in over 50 countries.

Facebook was alert to the fact that it was not monetizing user traffic, and open to experimenting with business models. The PaymentsViews post of August 25, 2009 (which I've preserved below in case of a catastrophic meltdown) shows just how hard the system was to use on mobile and how creatively Facebook pushed its Credits program.

For a variety of reasons, Facebook failed in its first attempts to monetize the mobile platform. First, there was and there remains friction in the mobile payments system. We've learned since the introduction of smartphone applications that even if it is phenomenally easier on a smartphone than on a feature phone, users don't want to take time to click through multiple screens to authorize a payment. There are dozens of companies that are working to make payments easier on mobile.

Then there was a lack of creativity in the goods on offer. In Japan, precisely the same business model worked relatively well, and mobile social networks flourished where Facebook could not break in, because Japanese youth culture was more mobile-savvy and the digital goods were far more innovative, dynamic and valuable to customers. Another reason for abandoning the Facebook Credits system was that the desktop business model was so lucrative.

In February 2012, a piece in the New York Times reminded us that although more than half of Facebook's 845 million members now log in daily via a mobile device, the company is still not monetizing its mobile assets.

If a company with an estimated pre-IPO valuation of $104 Billion is unable to figure out a strong business model for mobile social media, where are AR companies going to look for their cash cow?

I predict that the answer will include something very close to Facebook's original mobile Facebook Credits concept.

The reasons I believe strongly in the future of a mobile commerce-for-information model are that the presentation of the digital asset will be more refined and the timing for the business model will be completely different. When companies offer their information assets in a contextually-sensitive, AR-assisted package for a fixed increment of time and in a limited location, users will know what they're getting and will only purchase what they need.

In addition, users will have been through several generations of commercial "education" on mobile platforms. In the months and years before digital assets (information, games, content) are sold in small increments, users will have learned that they can purchase their transportation fares, their movie tickets and even their beverages in a night club with their devices.

While it will certainly not be the only business model on which the mobile AR services and content providers will need to rely, the tight relationship between the user's willingness to pay for an experience and the value provided will bring about repeat usage and, eventually, widespread adoption.

———————————————————-

In case of meltdown of this company's site, I have taken the liberty of putting the full text of PaymentsViews post below:

Purchasing Facebook Credits with Zong Mobile Payments

by Erin McCune on August 25, 2009

At Glenbrook we believe that social eCommerce and virtual currencies are the new frontier of payments. Person-to-person transfers, charity donations, and micropayments for virtual goods (e.g. games, music, e-books, etc.) are exploding within social networks and as the 800-pound-gorilla in the social networking space, all eyes are on Facebook. Estimates vary, but $300-500 million in transactions may happen within Facebook in 2009 (note 1), although thus far precious few of those transactions are funded by a native Facebook payment mechanism.

A couple days ago I decided to send my colleague Bryan a birthday gift on Facebook and was startled to discover that Facebook now has an option to buy Facebook Credits, Facebook’s fledgling virtual currency, via mobile phone using Zong (more from Payments Views on Zong here).  Developers on Facebook have accepted mobile payments for some time now, from Zong as well as other mobile payment providers, but the Facebook Gift Shop and Facebook Credits are Facebook services, not a developer product. And up until now (note 2) Facebook has only accepted credit card payments.

Being the payment geek that I am, I opted for the mobile phone payment option and took screen prints of the process flow. And then I wanted to compare the check out process via phone to the credit card check out process, so I bought Bryan a second gift (lucky Bryan) and took more screen prints. Continue reading to see a comparison of the check out process for the two payment methods.

But first, a little background…

The (Continuing) Evolution of Facebook Payments

  • There has been a long standing Facebook Gift Shop where users can purchase virtual gifts for one another for $1 each. (Some sponsored gifts are free.) Users purchase “gifts” with a credit card: MasterCard, Visa, AmEx.
  • December 2007:  Rumored that beta test of payments system for applications was imminent. Developers were instructed to sign up to participate (and had to sign an NDA).
  • November 2008: Converted gift shop dollars to “credits.” Each $1 buys 100 credits, so gifts that used to cost $1 are now priced at 100 credits. Still pay for Facebook Credits with a credit card.
  • March 2009: Facebook claims to be “looking at” a virtual currency system.
  • April 2009: Facebook introduces a limited pilot program whereby users can give credit to one another. If one user “likes” content that one of their friends has posted, the user can give them a virtual tip, using Facebook Credits. The only thing you can do with the credits is buy Gifts or give them to your other friends.
  • May 2009: Facebook announces “Pay With Facebook” a new feature that will enable users to make purchases from Facebook application developers. Funding is via Facebook Credits, which can be purchased only via credit card.
  • June 2009: Facebook began testing payment for virtual goods within Facebook using Pay With Facebook and Facebook Credits, starting with the GroupCard, Birthday Calendar, and MouseHunt applications.
  • August 2009: Facebook announces that the Gift Store is conducting an “alpha test” of non-Facebook gifts in the Facebook Gift Shop, including some physical goods (e.g. flowers, candy).
  • August 2009: It is now possible to purchase Facebook Credits with your mobile phone, via Zong.

Purchasing Facebook Credits via Mobile Phone

(Note: click on individual images to see larger version)

When I clicked on Bryan’s Facebook profile I was reminded to wish him Happy Birthday, and optionally, buy him a “gift”

Picture1

Up until now, Facebook has only accepted payment via Credit Card for Gift Credits. But now it is possible to pay with your mobile phone. Note that the “pay with mobile” option is listed first.

Picture2

I could select whether to purchase 15 ($2.99), 25 ($6.99), or 50 ($9.99) Facebook Credits and was then prompted to enter my mobile phone number.

FB Payments Picture3

Meanwhile, I received an SMS text message from Zong providing a PIN, confirming a payment of $2.99 to Facebook, and instructions on how to stop the payment or get help:

Zong Text Msg 1

I entered the PIN number provided, waited a few moments, and then got a confirmation screen.

FB Payments Picture4

Finally, I received two confirmation SMS text messages from Zong (not from Facebook):

Zong Text Msg 2

Zong Text Msg 3

Purchasing Facebook Credits with a Credit Card

For Bryan’s second gift (a virtual beer, I am sure he would have preferred a real one!) I opted to pay with a credit card. Note the difference in price per Facebook Credit (more on that in a minute).

FB CC Payment Picture1

Next I entered my card details and was immediately presented with a confirmation screen. The process is definitely quicker (and cheaper) if you purchase via credit card.

FB CC Payment Picture2

Finally, when I purchased via credit card I received a confirmation email directly from Facebook (whereas with the mobile phone payment I received the confirmation via SMS text from Zong, rather than Facebook).

FB CC Payment Picture3

Pricing Varies by Payment Method

When I paid with my mobile phone the price per Facebook Credit was twenty cents. I only paid ten cents per Facebook Credit when I made my purchase with a credit card. Zong charges the merchant (in this case Facebook) a higher processing fee than the credit card companies do. This is not uncommon. Payments via mobile phone are typically for virtual goods (ring tones, avatar super powers, games, etc.) with relatively low cost of goods, thus merchants are less price sensitive. Once they’ve done the coding, every incremental sale above and beyond development costs is profit. Mobile payments for virtual goods cost between 20-50% of the transaction amount, with most of the fee being passed on to mobile phone carriers. Given this pricing structure, it is not surprising that Facebook charges more per Facebook Credit when you buy with your mobile phone. It is unclear how much of the net fee application developers receive and how much Facebook retains, and if the split varies depending on payment method.

Other Forms of Payment Within Facebook

Keep in mind that Facebook Credits are just one way of purchasing goods within Facebook. Today, Facebook application developers monetize their games and other applications by accepting payment directly using PayPal, Google, Amazon FPS, or SocialGold. Or developers may opt to receive direct payment via mobile phone via Zong, Boku, or another mobile payment provider. Virtual currencies that can be used across a variety of social networks and game sites include Spare Change and SocialGold. It is also possible to earn virtual currency credit by taking surveys and participating in trials offered via Super Rewards, OfferPal Media, Peanut Labs and many others. And finally, game developers in particular, often accept payment via a prepaid card sold in retail establishments, such as the Ultimate Game Card. The social and gaming web is exploding with virtual currency offerings, yet thus far no one model or payment brand dominates.

We’ll continue to monitor Facebook’s payment evolution and track the development of social eCommerce here at Payments Views. In the meantime, you might enjoy these related Payments Views posts:

Notes

  1. Facebook transaction value estimates here.
  2. Caveat: I am not quite sure when Facebook started accepting Zong – sometime after June, as that was when I last checked in. I suspect, but haven’t confirmed, that the change was made in conjunction with last week’s announcement that the Gift Store is conducting an “alpha test” of non-Facebook gifts in the Facebook Gift Shop, including some physical goods (e.g. flowers, candy). If anyone out there knows for sure, please let us know in the comments.

end of full text here.
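The two-tier pricing described in the quoted post can be made concrete with a little arithmetic. This is a sketch only: the retail prices come from the screenshots the post describes, the 20-50% mobile fee range is the post's own estimate, and the ~3% card rate is my assumption, not a confirmed figure.

```python
def per_credit(price_usd, credits):
    """Retail price per Facebook Credit."""
    return price_usd / credits

def merchant_net(price_usd, credits, fee_rate):
    """What the merchant keeps per credit after the processor's fee."""
    return per_credit(price_usd, credits) * (1 - fee_rate)

mobile_retail = per_credit(2.99, 15)       # ~$0.20/credit via Zong
card_retail   = 0.10                       # $0.10/credit via credit card

# Illustrative fee rates: mobile at the top of the quoted 20-50% range,
# cards around 3% (an assumption, not from the post).
mobile_net = merchant_net(2.99, 15, 0.50)  # just under $0.10
card_net   = card_retail * (1 - 0.03)      # about $0.097
```

In other words, even at double the retail price, a carrier fee near the top of the quoted range leaves Facebook with roughly the same net per credit as a card sale, which helps explain why the mobile price per credit is twice the card price.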

Categories
Augmented Reality Research & Development

Project Glass: The Tortoise and The Hare

Remember Aesop’s fable about the Tortoise and the Hare? As of 9 AM Central European Time on April 10, 2012, the YouTube “vision video” of Google’s Project Glass had 11,002,798 views. Released at noon Pacific Time on April 4, 2012, in five and a half days spanning the 2012 Easter holiday weekend it has probably set a benchmark for how quickly a short, exciting video depicting a cool idea can spread through modern, Internet-connected society. [update April 12, 2012: here’s an analysis of what the New Media Index found in the social media “storm” around Project Glass.]

The popularity of the video (and the Project Glass Google+ page with 187,000 followers) certainly demonstrates that beyond a few hundred thousand digerati who follow technology trends, there’s a keen interest in alternative ways of displaying digital information. Who are these 11M viewers? Does YouTube have a way to display the geo-location of where the hits originate?

Although the concepts shown in the video aren’t entirely new, the digerati are responding and engaging passionately with the concept of handsfree, wearable computing displays. I’ve seen (visited) no fewer than 50 blog posts on the subject of Project Glass. Most simply report on the appearance of the concept video and ask whether it could be possible. But some have invested a little more thought.

Blair MacIntyre was one of the first to jump in with his critical assessment, less than a day after the announcement. He fears that success (“winning the race”) to new computing experiences will be compromised by Google going too quickly when slow, methodical work would lead to a more certain outcome. Based on the research in his own lab and in those of colleagues around the world, Blair knows that the state of the art in many of the core technologies necessary to make the Project Glass vision real is too primitive to deliver the concepts shown in the video reliably in the next year. He fears that by setting the bar as high as the Project Glass video has, expectations will be inflated and failure to deliver will create a generation of skeptics. The “finish line” for all those who envisage a day when information is contextual and delivered in a more intuitive manner will move further out.

In a similar “not too fast” vein, my favorite post (so far; we are still less than a week into this) is Gene Becker’s April 6 post, 48 hours after the announcement, on his The Connected World blog. Gene shares my fascination with the possibility that head-mounted sensors like those proposed for Project Glass could lead to continuous life capture. Continuous life capture has been demonstrated for years (Gordon Bell has spent his entire career exploring it and wrote Total Recall; related technologies are actively being developed in projects such as SenseCam), but we’ve not had all the right components in the right place at the right price. Gene focuses on the potential for participatory media applications. I prefer to focus on the Anticipatory services that could be furnished to users of such devices.

Though he doesn’t say it explicitly, Gene touches on something I’ve raised before, and it is my contribution to the discussion about Project Glass with this post: think about the user inputs that control the system. More than my fingertips, more of the human body (e.g., voice, gesture) will be necessary to control a hands-free information capture, display and control system. Gene writes “Glass will need an interaction language. What are the hands-free equivalents of select, click, scroll, drag, pinch, swipe, copy/paste, show/hide and quit? How does the system differentiate between an interface command and a nod, a word, a glance meant for a friend?”

All movement away from keyboards and mice as input and user interface devices will need a new interaction language.
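To make Gene’s question concrete, here is a purely hypothetical sketch of what one tiny corner of such an interaction language might look like. None of these names or events come from Project Glass; the point is the disambiguation problem: a nod should only become “confirm” when the wearer is actually engaging the interface.

```python
from dataclasses import dataclass

# Hypothetical vocabulary: (modality, event) -> UI command.
VOCABULARY = {
    ("voice", "select"): "SELECT",
    ("voice", "scroll down"): "SCROLL_DOWN",
    ("gesture", "nod"): "CONFIRM",
    ("gesture", "glance_left"): "PREV_CARD",
}

@dataclass
class Event:
    modality: str    # "voice" or "gesture"
    name: str        # recognized event label
    attention: bool  # was the wearer engaging the display at the time?

def interpret(event):
    """Return a UI command, or None when the event was likely social
    (a nod at a friend, a word in conversation)."""
    # Gate gestures on attention: a nod only counts as a command while
    # the wearer is actively engaging the display.
    if event.modality == "gesture" and not event.attention:
        return None
    return VOCABULARY.get((event.modality, event.name))
```

Even this toy version shows why an open, shared vocabulary matters: every vendor that invents its own attention gate and event names fragments the interaction language users have to learn.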

The success of personal computing in some way leveraged a century of experience with the typewriter keyboard, to which the mouse and the graphical (2D) user interface were late but fundamental additions. The success of using sensors on the body and in the real world, and of using objects and places as interaction (and display) surfaces for data, will rely on our intelligent use of more of our own senses, on many more metaphors between the physical and digital worlds, and on highly flexible, multi-modal and open platforms.

Is it appropriate for Google to define its own handsfree information interaction language? I understand that the Kinect camera’s point of view is 180 degrees different from that of a head-mounted device, and that it is a depth camera, not the simple, small camera on the Project Glass device, but what can we reuse and learn from Kinect? Who else should participate? How many failures before we get this one right? How can a community of experts and users be involved in innovating around and contributing to this important element of our future information and communication platforms?

I’m not suggesting that 2012 is the best or the right time to codify and standardize voice and/or gesture interfaces, but rather recommending that when Project Glass ships a first product, it should include an open interface permitting developers to explore different strategies for controlling information. Google should offer open APIs for interactions as soon as possible, at least to research labs and qualified developers, in the same manner that Microsoft has with Kinect.

If Google is the hasty hare, as Blair suggests, is Microsoft the “tortoise” in the journey to provide handsfree interaction? What is Apple working on and will it behave like the tortoise?

Regardless of the order of entry of the big technology players, there will be many others who notice the attention Project Glass has received. The dialog on the myriad open issues surrounding this new information delivery paradigm is very valuable. I hope Project Glass doesn’t release too soon, but with virtually all the posts I’ve read closing by asking when the blogger can get their hands on (and nose under) a pair, the pressure to reach the first metaphorical finish line must be enormous.