Categories
Augmented Reality Research & Development

Physical World as an Interface

Although there are a growing number of excellent examples, and even reports of positive return on investment, for mobile AR, I shudder at the thought that the first application the term "Augmented Reality" will bring to most people's minds will be a game played with the wrapper of a candy bar. OK. I get it. The power of engagement.

AR experiences that fail to bring the user value (beyond a quick thrill) in return for their attention are unhealthy for our image. Too few people think, or are paid to think, sufficiently about the impact this technology will have and how best to use it.

In my opinion, one profound impact of AR will be to turn the user's immediate environment into the interface for search and for interacting with digital information. Time for a new term: turning the physical world into the interface for the digital one is an extension of skeuomorphism.

According to the Wikipedia definition, a skeuomorph is a physical ornament or design on an object copied from a form of the object when made from another material or by other techniques. It's a principle that Apple, while under the direction of Steve Jobs, was known for. The debate over the merits of Apple's extensive use of skeuomorphism became the subject of substantial media attention in October 2012, a year after Jobs' death, largely as the result of the reported firing of Scott Forstall, described as "the most vocal and high-ranking proponent of the visual design style favored by Mr. Jobs".

There are already examples of AR permitting the physical world to become the interface for the digital one. One I'm hoping will be repeated in many public spaces is the interactive lobby. If you are not already aware of this Interactive Spaces project, developed earlier this year for an Experience Center on the Mountain View Google campus, I highly recommend getting acquainted with the goals and people behind it in this blog post.

In this example, cameras in the ceiling detect each visitor's presence; moving around the space causes objects to move, sounds to be produced, and more.
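For the technically curious, here is how I picture the sensing-and-response loop at the heart of such a space. This is a minimal Python sketch with invented names, not the actual Interactive Spaces code (which, as I understand it, is a Java-based, event-driven framework):

```python
# A toy sensing/response loop for an interactive lobby: ceiling cameras
# report visitor positions, and the space reacts. All names are hypothetical.

import random

def detect_people():
    """Stub for a ceiling-camera tracker: returns (x, y) visitor positions."""
    return [(random.random(), random.random())
            for _ in range(random.randint(0, 3))]

def react(positions, hotspot=(0.5, 0.5), radius=0.1):
    """Map each detected visitor to a response from the space."""
    for x, y in positions:
        print(f"moving projected object toward ({x:.2f}, {y:.2f})")
        # Trigger a sound when a visitor steps into a hotspot.
        if (x - hotspot[0]) ** 2 + (y - hotspot[1]) ** 2 < radius ** 2:
            print("visitor entered hotspot: play sound")

for _ in range(5):   # a few frames of the loop
    react(detect_people())
```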

Expect many more examples in 2013.

Categories
Business Strategy Research & Development

Who is Leading Us Indoors?

Financial analysts' blogs are not on my list of top reads this summer, so I was surprised to find myself reading this fresh post on SeekingAlpha. It is a thorough research study of the potential revenue to be generated from Nokia's patent portfolio. After describing how much Nokia has invested in R&D over the past 10+ years, the headline "Location Based Mapping Patents Are Hidden Jewel of Nokia Patent Portfolio" appears, and the analyst jumps directly to the topic of indoor positioning.

Indoor positioning has been an increasingly important topic in my research. I'm not alone in discovering that a large value stream will come from positioning users more accurately indoors (as well as outdoors), so the potential for innovation in this space is going to be huge. Well, that is if there are not already patents protecting such innovations and their future use.

I found that this post on Forbes sheds a lot of light on the thoughts I had when I saw the Nokia and Groupon deal that created Groupon Now! Of course, the Forbes post is more in-depth and valuable. Here are a few points the blogger extracted from the Grizzly Analytics report published in December 2011:

Of the five leading companies (Google, Apple, Microsoft, Nokia and RIM), Krulwich sees Microsoft and Nokia as the most likely to challenge Google in indoor positioning. He expects Microsoft and Nokia to launch a service sometime in 2012, perhaps tagged to Microsoft’s “Tango” Windows Phone update. Both companies have significant experience in indoor positioning. Microsoft has researched how to determine location using special radio beacons as well as by analyzing Wi-Fi signal strength. It has also experimented with what Krulwich calls movement tracking. That involves tracking a device as it moves away from a known location, such as a door to a building (which can be pinpointed via GPS because it is outdoors).

Beyond its research, Microsoft holds granted patents in indoor positioning. Krulwich counted at least five Microsoft patents related to determining phone location using wireless access points, radio beacons, device movements and other radio signals.

Nokia’s indoor positioning work is equally sophisticated with patents going back to at least 2006. In September 2006, Nokia filed a patent on “Direction of Arrival” detection. That strategy leverages ultra-wideband (UWB) radio technology to estimate location. In fall 2007, Nokia also filed three patents related to determining location via Wi-Fi signal strength.
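Since Wi-Fi signal strength keeps coming up in both companies' work, here is a toy Python sketch of how signal-strength positioning works in principle. It uses the textbook log-distance path-loss model with constants I invented for illustration; it does not reflect any vendor's patented method:

```python
# Toy Wi-Fi RSSI positioning: convert signal strength to distance with the
# log-distance path-loss model, then grid-search the best-fitting position.

import math

TX_POWER = -40.0   # assumed RSSI (dBm) at 1 m from an access point
PATH_LOSS_N = 2.5  # path-loss exponent; ~2 in free space, higher indoors

def rssi_to_distance(rssi_dbm):
    """Estimate distance (m) to an access point from measured RSSI."""
    return 10 ** ((TX_POWER - rssi_dbm) / (10 * PATH_LOSS_N))

def trilaterate(aps):
    """Grid-search the position minimizing range error to known APs.

    aps: list of ((x, y), rssi) for access points at known positions.
    """
    best, best_err = None, float("inf")
    for gx in range(0, 101):          # 10 cm grid over a 10 m x 10 m floor
        for gy in range(0, 101):
            x, y = gx / 10.0, gy / 10.0
            err = sum(
                (math.hypot(x - ax, y - ay) - rssi_to_distance(rssi)) ** 2
                for (ax, ay), rssi in aps
            )
            if err < best_err:
                best, best_err = (x, y), err
    return best

print(trilaterate([((0, 0), -50), ((10, 0), -60), ((5, 10), -55)]))
```

A production system would fingerprint signal strengths throughout a building rather than trust a single propagation model, since walls and people distort indoor radio in ways no one path-loss exponent captures.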

Although Krulwich's prediction that Microsoft and Nokia would launch an indoor positioning service in 2012 has not yet been disproven, it's clear that Google has continued to make more noise around indoor positioning than any of the other potential leaders. If Microsoft and Nokia are going to battle out the indoor future with the likes of Apple, Google/Motorola, Qualcomm and Research In Motion, among others who have written or read the writing on the walls, they will need to acquire or partner with those who have a much higher rate of success in the mobile market.

Where does all this interest in indoor mobile positioning lead us? To finding and working with small, innovative companies that have the potential either to implement well on the patents of others or to generate new intellectual property for indoor positioning and, in either case, to be acquired by one of the five major companies leading users of mobile services indoors.

Who are you and how can I help?

Categories
Internet of Things Research & Development Social and Societal

Even Minnesotans know about RFID

I don't have anything for or against Minnesota, but why would this little-known state come up twice in a few days? That merits a little examination.

Earlier this week a friend of mine who lived in Minneapolis in the early 90s was telling me that on a recent visit she was amazed at the vibrant community she found there. Is that why, in the 2008 U.S. presidential election, 78.2% of eligible Minnesotans voted (the highest percentage of any U.S. state) versus the national average of 61.7%? I guess this could be a relevant factoid in a US presidential election year.

Then I discovered that the University of Minnesota's Institute on the Environment and Seoul National University have recently released a study on the use of three things that are squarely on my radar: smartphones, social networks and "things" (in this case, packages using RFID). And, if that wasn't enough to catch your attention, there's also a "green" component to this study. According to this article on the Institute's web site:

The study used spatial and agent-based models to investigate the potential environmental benefits of enlisting social networks and smartphones to help deliver packages. While sensitive to how often trusted and willing friends can be found in close proximity to both the package and the recipient within a day, results indicate that very small degrees of network engagement can lead to very large efficiency gains.

Compared to a typical home delivery route, greenhouse gas emissions reductions from a socially networked pickup system were projected to range from 45 percent to 98 percent, depending on the social connectedness of the recipients and the willingness of individuals in their social networks to participate. System-wide benefits could be significantly lower under assumptions of less than 100% market adoption, however. In fact, the study points out that many of the gains might be nullified in the short term as fewer home truck deliveries make existing delivery systems less efficient. But, “with only 1-2% of the network leveraged for delivery, average delivery distances are improved over conventional delivery alone – even under conditions of very small market penetration,” the study concluded.

“What is important is that sharing be allowed in the system, not how many ultimately choose to share time or resources,” says study co-author Timothy Smith, director of IonE’s NorthStar Initiative for Sustainable Enterprise. “We find that providing the relatively few really inefficient actors in the network the opportunity to seek the help of many better positioned actors can radically improve performance.” This is particularly relevant today, Smith says, as online retailers such as Amazon begin introducing delivery pickup lockers in grocery, convenience and drug stores.
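To make the study's core idea concrete, here is a toy Monte Carlo sketch of my own (a drastic simplification of the authors' spatial and agent-based models, with every constant assumed):

```python
# Toy comparison of truck kilometers: pure home delivery vs. a system where
# a small fraction of packages are carried by friends already passing nearby.
# All constants are invented for illustration.

import random

random.seed(42)
N_PACKAGES = 1000
TRUCK_KM_PER_DROP = 5.0   # assumed truck distance per home delivery
FRIEND_SHARE = 0.02       # 1-2% of the network willing to help

baseline = N_PACKAGES * TRUCK_KM_PER_DROP

networked = 0.0
for _ in range(N_PACKAGES):
    if random.random() < FRIEND_SHARE:
        # A friend already traveling near both package and recipient:
        # only a small detour is added, no dedicated truck leg.
        networked += random.uniform(0.1, 0.5)   # detour km, assumed
    else:
        networked += TRUCK_KM_PER_DROP

saving = 100 * (1 - networked / baseline)
print(f"truck-km saved with {FRIEND_SHARE:.0%} participation: {saving:.1f}%")
```

Even this crude version shows the shape of the result: savings scale with the fraction of packages a well-positioned friend can absorb at near-zero marginal distance.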

Perhaps there is, indeed, a natural link between voter participation and social networking for local package delivery: if a citizen is involved enough in the well-being of the community to vote, perhaps the same person will also be open to making small detours to deliver a package and protect the environment.

I suspect that the speakers about NFC and RFID at the upcoming IoT Zurich meetup event will be touching on this topic of citizens using their smartphones with near field communications, but probably not for the same applications. 

Categories
Augmented Reality Research & Development

Project Glass: The Tortoise and The Hare

Remember Aesop's fable about the Tortoise and the Hare? 11,002,798 viewers as of 9 AM Central European Time on April 10, 2012. In the five and a half days since noon Pacific Time on April 4, 2012, over the Easter holiday weekend, the YouTube "vision video" of Google's Project Glass has probably set a benchmark for how quickly a short, exciting video depicting a cool idea can spread through modern, Internet-connected society. [update April 12, 2012: here's an analysis of what the New Media Index found in the social media "storm" around Project Glass.]

The popularity of the video (and the Project Glass Google+ page with 187,000 followers) certainly demonstrates that, beyond a few hundred thousand digerati who follow technology trends, there's a keen interest in alternative ways of displaying digital information. Who are these 11M viewers? Does YouTube have a way to display the geo-locations where the hits originate?

Although the concepts shown in the video aren't entirely new, the digerati are responding and engaging passionately with the concept of hands-free, wearable computing displays. I've seen (visited) no fewer than 50 blog posts on the subject of Project Glass. Most simply report on the appearance of the concept video and ask if it could be possible. Then there are those who have invested a little more thought.

Blair MacIntyre was one of the first to jump in with a critical assessment, less than a day after the announcement. He fears that success ("winning the race") in bringing new computing experiences to market will be compromised by Google going too quickly, when slow, methodical work would lead to a more certain outcome. Based on the research in his own lab and those of colleagues around the world, Blair knows that the state of the art in many of the core technologies necessary to realize the Project Glass vision is too primitive to deliver, reliably in the next year, the concepts shown in the video. He fears that by setting the bar as high as the Project Glass video has, expectations will be inflated and failure to deliver will create a generation of skeptics. The "finish line" for all those who envisage a day when information is contextual and delivered in a more intuitive manner will move further out.

In a similar "not too fast" vein, my favorite post (so far; we are still less than a week into this) is Gene Becker's April 6 post (48 hours after the announcement) on his The Connected World blog. Gene shares my fascination with the possibility that head-mounted sensors like those proposed for Project Glass would lead to continuous life capture. Continuous life capture has been demonstrated for years (Gordon Bell has spent much of his career exploring it and wrote Total Recall about it; related technologies are actively being developed in projects such as SenseCam), but we've not had all the right components in the right place at the right price. Gene focuses on the potential for participatory media applications. I prefer to focus on the Anticipatory services that could be furnished to users of such devices.

Gene also points out something I've raised before, and it is my contribution to the discussion about Project Glass with this post: think about the user inputs needed to control the system. More than my fingertips, more of the human body (e.g., voice, gesture) will be necessary to control a hands-free information capture, display and control system. Gene writes, "Glass will need an interaction language. What are the hands-free equivalents of select, click, scroll, drag, pinch, swipe, copy/paste, show/hide and quit? How does the system differentiate between an interface command and a nod, a word, a glance meant for a friend?"

Any movement away from keyboards and mice as input and user interface devices will need a new interaction language.
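To make that concrete, here is a small Python sketch of what such a language might look like expressed as data: multimodal events mapped to commands, with an explicit "attention gate" so a nod meant for a friend isn't parsed as a command. Every event name, hotword and command here is hypothetical:

```python
# A hypothetical hands-free interaction language as a lookup table,
# plus an interpreter that only acts inside an explicit command mode.

INTERACTION_LANGUAGE = {
    ("voice", "wake up"):      "enter_command_mode",   # attention gate
    ("voice", "select"):       "select",
    ("gaze",  "dwell_800ms"):  "select",               # hands-free "click"
    ("head",  "tilt_up"):      "scroll_up",
    ("head",  "tilt_down"):    "scroll_down",
    ("voice", "dismiss"):      "quit",
}

def interpret(events):
    """Translate raw events into commands, only inside command mode."""
    command_mode = False
    for event in events:
        action = INTERACTION_LANGUAGE.get(event)
        if action == "enter_command_mode":
            command_mode = True
        elif command_mode and action:
            yield action
        # unmapped events (a nod to a friend) fall through silently

print(list(interpret([("head", "tilt_up"),        # ignored: not command mode
                      ("voice", "wake up"),
                      ("gaze", "dwell_800ms")])))  # -> ['select']
```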

The success of personal computing in some ways leveraged a century of experience with the typewriter keyboard, to which the mouse and the graphical (2D) user interface were late but fundamental additions. The success of using sensors on the body and in the real world, and of using objects and places as interaction (and display) surfaces for data, will rely on our intelligent use of more of our own senses, on many more metaphors between the physical and digital worlds, and on highly flexible, multi-modal and open platforms.

Is it appropriate for Google to define its own hands-free information interaction language? I understand that the Kinect camera's point of view is 180 degrees different from that of a head-mounted device, and that it is a depth camera, not the simple, small camera on the Project Glass device, but what can we reuse and learn from Kinect? Who else should participate? How many failures will there be before we get this one right? How can a community of experts and users be involved in innovating around and contributing to this important element of our future information and communication platforms?

I'm not suggesting that 2012 is the best or the right time to codify and put standards around voice and/or gesture interfaces, but rather recommending that when Project Glass comes out with a first product, it should include an open interface permitting developers to explore different strategies for controlling information. Google should offer open APIs for interactions, at least to research labs and qualified developers, in the same manner that Microsoft has with Kinect, as soon as possible.

If Google is the hasty hare, as Blair suggests, is Microsoft the "tortoise" in the journey to provide hands-free interaction? What is Apple working on, and will it behave like the tortoise?

Regardless of the order of entry of the big technology players, there will be many others who notice the attention Project Glass has received. The dialog on the myriad open issues surrounding this new information delivery paradigm is very valuable. I hope Project Glass doesn't release too soon, but with virtually all the posts I've read closing by asking when the blogger can get their hands on and nose under a pair, the pressure to reach the first metaphorical finish line must be enormous.

Categories
Internet of Things Research & Development

The Air We Breathe

In IoT circles, air is a popular topic. There is so much of it and, at the same time, it is so fundamental to the quality of life on our planet.

During the IoT-4-Cities event, Andrea Ridolfi, co-founder of SensorScope, presented on the use of sensors mounted on buses and trams to measure air quality in the cities of Lausanne and Zurich as part of the OpenSense project.

This is a really interesting collaboration that I hope will lead to systems for commercial deployment using an architecture similar to the one Ridolfi presented.

Since deploying these systems widely will be expensive, going to scale will probably require getting citizens involved in air quality sensing. The citizen participation component of air quality sensing was the topic of presentations by Michael Setton, VP of Marketing at Sensaris, and Jan Blom, User Experience Researcher at Nokia Research.


On March 30, the same day as the IoT-4-Cities meeting, the IoT-London meetup group held a workshop at which 10 people built their first sensors. The web site with the materials shared during the workshop would be a great starting point for newcomers.

In parallel, Ed Borden of Pachube (LogMeIn) has put the Air Quality Egg project up on Kickstarter.com, and it took off like a rocket, meeting its financial goal of $39,000 in less than 10 days. There are still three weeks before the project closes on Thursday, April 26, 2012.
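For those wondering what feeding an Egg's readings to Pachube might involve, here is a minimal Python sketch. The endpoint shape and JSON layout follow my reading of Pachube's v2 REST API documentation, so treat those details as assumptions; the feed ID and API key are placeholders:

```python
# Minimal sketch of a sensor node pushing one reading to a Pachube feed.
# Endpoint and payload shape assumed from the v2 REST docs; FEED_ID and
# API_KEY are placeholders.

import json
import requests

FEED_ID = "12345"                   # hypothetical feed
API_KEY = "YOUR_PACHUBE_API_KEY"    # placeholder

def publish_reading(no2_ppb):
    """PUT a single NO2 reading to the feed's datastream."""
    payload = {
        "version": "1.0.0",
        "datastreams": [{"id": "NO2", "current_value": str(no2_ppb)}],
    }
    resp = requests.put(
        f"https://api.pachube.com/v2/feeds/{FEED_ID}.json",
        headers={"X-PachubeApiKey": API_KEY},
        data=json.dumps(payload),
    )
    resp.raise_for_status()

publish_reading(18.4)   # e.g., an NO2 reading in parts per billion
```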

I want to get some people from Switzerland involved in building a prototype of the Air Quality Egg as a DIY project for the IoT Zurich meetup community but, unfortunately, I and another enthusiast, JP de Vooght, don't have all the necessary skills between us.

  • Are you interested in leading an AQE workshop or getting involved?
  • Do you have a venue where about 10 people can meet for a half day (with benches where the use of soldering tools is convenient)? What else is needed? A 3D printer?

Join the Air Quality Egg Project and contact JP before April 25! We can promote the activity on the IoT-Zurich meetup list and page.

Categories
Internet of Things Research & Development Social and Societal

City WalkShop

Adam Greenfield is one of the thought leaders I follow closely on urban technology topics. Adam and his network (including but going beyond the Urbanscale consulting practice) are far ahead of most people when it comes to understanding and exploring the future of technology in cities.

In this post I'm capturing information about a small event conducted in November 2010 in collaboration with Do Projects (in the context of the Drumbeat Festival) because it inspires me. I've also found documentation about two more of these held in the spring of 2011 (Bristol and London). On March 11, another will take place in Cologne, Germany, in collaboration with Bottled City.

City WalkShop experiences are "Collective, on-the-field discovery around city spots intensive in data or information, analyzing openness and sharing the process online."

I discovered the concept of WalkShops when I was exploring Marc Pous' web page. Marc just founded the Internet of Things Munich meetup group a few weeks ago and, in addition to being eager to meet other IoT group founders (disclosure: I founded IoT Zurich meetup in October 2011), I learned that he is a native of Barcelona (where the IoT-Barcelona group meets).

I got acquainted with Marc's activities and came across the Barcelona WalkShop done with Adam.

The WalkShop Barcelona is documented in several places. There's a wiki page on the UrbanLabs site that describes the why and the what, and I also visited the Posterous page. Here's the stated goal:

What we’re looking for are appearances of the networked digital in the physical, and vice versa: apertures through which the things that happen in the real world drive the “network weather”, and contexts in which that weather affects what people see, confront and are able to do.

Here's a summary of the Systems/Layers process:

Systems/Layers is a half-day “walkshop” organized by Citilab and Do projects held in two parts. The first portion of the activity is dedicated to a slow and considered walk through a reasonably dense and built-up section of the city at hand. This portion of the day will take around 90 minutes, after which we gather in a convenient “command post” to map, review and discuss the things we’ve encountered.

I'd love to participate in or organize another of these WalkShops in Barcelona in 2012, going to the same places and, as one of the outcomes, comparing how the city has evolved. Could we do it as a special IoT-Barcelona meeting or in the framework of Mobile World Capital?

I also envisage getting WalkShops going in other cities. Maybe, as spring nears and people are outside more, this could be a side project for members of other IoT meetup groups?

Categories
Internet of Things Research & Development Social and Societal

Risks and Rewards of Hyperconnected-ness

I often get asked to define a Spime. The definition is simple ("Space + Time") but the implications are deeper than most people have time to think about. That's one reason that Wranglers are needed. But the fundamental attribute of a spime is that it is hyperconnected and doing something with its connections. By documenting or publishing where it was made, by whom, where it has traveled, or how long it has been "on" (or any other attribute the object can detect), our objects are developing memory. Ironically, for humans, being hyperconnected may work differently.
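If it helps to see a spime's memory as data, here is a small Python sketch of my own; Sterling defined the concept, not a schema, so every field name here is invented:

```python
# A spime's "memory" as a data structure: provenance plus a log of
# space+time stamped events the object observes about itself.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Spime:
    serial: str
    made_in: str
    made_by: str
    history: list = field(default_factory=list)   # (timestamp, place, event)

    def log(self, place, event):
        """Append a space+time stamped observation to the object's memory."""
        self.history.append((datetime.now(timezone.utc), place, event))

kettle = Spime(serial="K-0042", made_in="Shenzhen", made_by="Acme Ltd.")
kettle.log("Zurich", "powered on")
kettle.log("Zurich", "powered off after 3 min")
print(kettle.history)
```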

In a series on the Read Write Web portal, Alicia Eler is exploring the hyperconnected life. The first piece she posted, How Hyperconnectivity Affects Young People, summarizes the results of a study of American Millennials and the consequences of living an "always on" life. The Pew study of the impact that always being connected to the Internet has on the brains of youth is both qualitative and quantitative, and well worth a scan if not more of your time. Here are a few of the highlights I found particularly relevant:

  • Relying on the Internet as our "external brain" saves room in our "wet brains" for different kinds of thinking (no surprise here). 55% of those surveyed believe that always-on youth will have a positive impact on the world as a result of finding information more quickly and thinking in less structured ways, "thinking out of the box." 42% of those surveyed feared the result would be negative.
  • Always being connected tends to build a desire for instant gratification (no surprise here) and increases the chances of making "quick, shallow choices."
  • Education reform is much needed to meet the requirements of these "new," hyperconnected and mobile students. This dovetails well with the outcomes of the Mobile Youth Congress held last week at Mobile World Congress in Barcelona. The iStudent Initiative suggests that learning should be more self-directed, with the classroom becoming the place where students report what they've learned.

Then, in a second post entitled Introducing Your Hyperconnected Online-Offline Identity, Alicia explored the subject of fragmented identity. The premise is that our identities are fractured because we can be different people in different places, in response to the different people around us (home, business, sports, entertainment/hobbies).

"The real self is saddled somewhere in the overlap between these three circles. These ideas of the self apply in both an online and offline context." This abstraction, explains ScepticGeek, may come at least partially from Carl Rogers.

[Image: the three overlapping circles of the self]

“Online, we battle with the same conflicts, plus a few other quirks. We are a Facebook identity (or two), a Twitter account, a LinkedIn oh-so-professional account and maybe even Google+ (plus search your world, no less). Each online identity is in and of itself an identity. Maintaining them is hard, often times treacherous work. We must slog through the Internet-addled identity quagmire.”

In another paradox, I think that when "things" are connected, even via a social network such as Facebook, we humans truly have the opportunity to know those objects or places better, with a richer and deeper understanding, because we believe there's more information and less subjective, more quantitative data on which to base our opinions.

I wonder whether there will also be ways for Spimes to have different personae, to project themselves in unique ways to different audiences. Perhaps it will be simpler, because inanimate objects don't have the need or desire to reconcile all their identities into a "self." But it will always remain the responsibility of the wrangler to manage those identities. Job security is what I call that!

Categories
Internet of Things Research & Development

The Big Data Bandwagon

Big Data and I go way back. How can I get on the Big Data Bandwagon?

It's not a domain into which I regularly stray, but nuclear physics was the focus of both my parents' careers, so their use of computers comes to mind whenever the topic of Big Data comes up. I'm stepping out of my comfort zone, but I am going to hypothesize that the study of physics and the Internet of Things share certain attributes, and that both are sexy because they are part of Big Data. You're looking at Big Data in the illustration here.

Both physics and the IoT begin with the assumption that there's more to the world than what the naked eye can see or our other human senses can detect. You can't see atoms, or electrons, or quarks, or any of the smaller particles. All you can use to know they are there are measurements, via sensors, of their impacts on other particles.

Sensors are also at the heart of the Internet of Things. Beyond human-detectable phenomena, sensors embedded where we can't see them detect attributes that we can't see, that don't have a smell or make a sound, or that are too small, too large, too fast or too far away for our "native" human senses. The sensors monitoring the properties of materials in physics (like the sensors in our environment monitoring air quality, temperature or the number of cars passing over a pressure sensor in the roadbed) communicate their readings with time stamps, and these readings accumulate into data sets.

You get the rest: the raw data become the building material on which analyses can be performed. It's difficult for the common man to discern patterns from the illustration above, or from millions of sensor readings from a nuclear power plant. Machine learning algorithms extract the patterns from the data for us, and we use these patterns to gain insights and make decisions.
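As a tiny illustration of what "extracting the pattern for us" can mean, here is a Python sketch that flags sensor readings straying from a rolling mean. Real plant monitoring uses far richer models; the threshold and data below are invented:

```python
# Flag sensor readings that deviate sharply from a rolling mean: a toy
# stand-in for the pattern extraction described above.

def anomalies(readings, window=5, tolerance=3.0):
    """Yield (index, value) where a value deviates from the rolling mean."""
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = sum(recent) / window
        var = sum((r - mean) ** 2 for r in recent) / window
        if abs(readings[i] - mean) > tolerance * max(var ** 0.5, 1e-9):
            yield i, readings[i]

temps = [300.1, 300.2, 300.0, 300.3, 300.1, 300.2, 312.7, 300.2, 300.1]
print(list(anomalies(temps)))   # -> flags the spike at index 6
```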

So, my point is that the concept of using computers to analyze large data sets to answer all kinds of questions (the core of Big Data) has been around the research community for decades and applies to many, if not all, fields. IBM has long been leading the charge. Here's an interesting project led by Jeff Jonas, Chief Scientist of IBM's Entity Analytics Group, that just celebrated its first anniversary. A January 2012 HorizonWatching Trend Report presentation on Big Data points to lots of resources.

What's new with Big Data in 2012 is the relative ease with which these very large data sets can be reliably collected, communicated, stored and processed, and, in some cases, visualized.

A feature article in the New York Times about Big Data's relevance in our lives frames the subject well and then explains why Big Data is trending: everyone wants to see the past and the present, and to understand the world, more clearly. With this improved "visibility" we might be able to make better decisions. The textbook example is the Oakland Athletics' comeback, on which the book and movie Moneyball are based.

With the help of coverage in books, a motion picture, major news media and tech blogs, Big Data is one of the big memes of 2012. Trends like the widespread adoption of Big Data usually lead to large financial gains.

Let's see if I can use this data to make better decisions! Maybe I should re-brand everything I do so that the relationship of my activities to Big Data is clearer to others. Big Spimes? What do you think?

Categories
Augmented Reality Events Internet of Things Research & Development

Augmented Humans

Augmented humans are at the epicenter of a scenario for the future that Ray Kurzweil has been popularizing for over 20 years. To recap the central thesis of his life's work, including the book The Singularity is Near, published in 2005: Kurzweil promotes the notion that a technological singularity is the inevitable result of our research on genetics, nanotechnology and robotics (including artificial intelligence). Whether or not one believes the trends he cites will go as far as Kurzweil predicts (a future in which some of those born human and living among us today will live far longer than any earlier members of our species and will, at the same time, benefit or suffer from "superintelligence"), research continues unabated in these domains.

Some findings of basic and applied research in areas at the core of the Singularity will be reported by those presenting papers at the third annual Augmented Human conference. This conference, whose proceedings will later be published by the ACM, focuses on augmenting human capabilities through technology for increased well-being and enjoyable human experience. The program committee solicited contributions on the following topics (this list is pasted directly from the conference call for papers, which closed earlier this week):

  • Augmented and Mixed Reality
  • Internet of Things
  • Augmented Sport
  • Sensors and Hardware
  • Wearable Computing
  • Augmented Health
  • Augmented Well-being
  • Smart artifacts & Smart Textiles
  • Augmented Tourism and Games
  • Ubiquitous Computing
  • Bionics and Biomechanics
  • Training/Rehabilitation Technology
  • Exoskeletons
  • Brain Computer Interface
  • Augmented Context-Awareness
  • Augmented Fashion
  • Augmented Art
  • Safety, Ethics and Legal Aspects
  • Security and Privacy

I'd like to hear what these folks are doing. However, I'd also (maybe even more) like to meet and get acquainted with flesh-and-blood Augmented Humans. One whom I met a few years ago at a conference is Rob Spence. Rob is a documentary filmmaker who lost an eye and decided, with the help of Steve Mann, one of the original first-person web-camera video streamers, to have a wireless video camera fitted into his prosthetic eye. Rob kept a blog about the experience for several years, but it moved to another host three years ago this month and now appears to be closed. Here's a 2010 interview with Rob published on the Singularity University's blog. According to Rob Spence's web site, visited today while researching this post, he's working on a documentary for the Canadian Film Board. So, at least for now, his story is private.

I'm currently reading Hard-Boiled Wonderland and the End of the World, a work of fiction by Haruki Murakami. The central character (unknowingly) has his brain rewired as part of an experiment, and it is programmed so that he will live the rest of his life "reading" dreams from the skulls of unicorns. It's a gracefully written story. Although stories of people whose bodies and minds have been altered to become "augmented humans" make for excellent fiction, blogs and probably documentaries, I suspect that the paths humans pursue toward this goal are filled with failed attempts. It is interesting to note the last two bullets on the list of topics covered at the AHC; there's confirmation of my concern.

At Laval Virtual, the largest industry event dedicated exclusively to Virtual Reality, Masahiko Inami, a professor in the School of Media Design at Keio University (KMD), Japan, is giving a talk entitled "Initial Step Towards Augmented Human". Here's the session description:

What are the challenges in creating interfaces that allow a user to intuitively express his/her intentions? Today's HCI systems are limited, and exploit only visual and auditory sensations. However, in daily life, we exploit a variety of input and output modalities, and modalities that involve contact with our bodies can dramatically affect our ability to experience and express ourselves in physical and virtual worlds. Using modern biological understanding of sensation, emerging electronic devices, and agile computational methods, we now have an opportunity to design a new generation of 'intimate interaction' technologies.

This talk will present several approaches that use multi/cross modal interfaces for enhancing human I/O. They include Optical Camouflage, Stop-Motion Goggle, Galvanic Vestibular Stimulation and Chewing Jockey.

Although probably less shocking and hair-raising than the talks at the third AHC, this session should also be very thought-provoking and practical for those working in the field of Virtual Reality. I'll try to make it to both of these events to get fully informed about all aspects of Augmented Humans.

Categories
2020 Research & Development

Abundance, the book

For me the word "abundance" is associated with a positive, peaceful state of mind. In essence, it's the opposite of viewing the world from a place of need, fear or greed. And it's a fantastic title for a blog, a movie, a company, or a book. Wish I had thought of it!

In the forthcoming book Abundance: The Future Is Better Than You Think, the authors, Peter Diamandis and Steven Kotler, suggest that our future will be shaped by four emerging forces:

  • the exponential technologies,
  • the DIY innovator,
  • the Technophilanthropist, and
  • the Rising Billion.

The four terms aren't defined on the book's portal, and the first chapter doesn't introduce them either, but the marketing of the book is beautifully aligned with its title, in the sense that it makes the potential reader feel the abundance that those promoting the book have to share with the rest of the world.

Those who pre-order the book are promised a bundle of benefits. Customers who place an order before February 13, 2012 on the book's portal will receive:

  • access to Singularity University’s private graduate training video library (covering AI, robotics, computing systems, neuroscience, synthetic biology, nanotechnology, energy, and innovation);
  • a $1200 gift certificate toward attendance at SU’s 7-day Executive Program at the NASA Ames Campus in 2012;
  • free online access to view Transcendent Man, a documentary film chronicling the life and controversial ideas of Ray Kurzweil; and
  • an invitation to a meeting with the book’s authors via webcast.

That's a lot for $24. I pre-ordered my copy before I wrote this post.


Categories
3D Information Augmented Reality Internet of Things Research & Development

Clear Directions Ahead

During the 2011 Geneva Auto Show (almost a year ago), BMW shared with enthusiasts its Vision ConnectedDrive prototype. "Assisted by sensors integrated into the headlights and taillights, a head-up display on the ConnectedDrive Concept can list information on the road ahead in a 3-dimensional format."

Augmented Reality for drivers was also a feature of last week's Consumer Electronics Show. For example, Pioneer Electronics revealed a display that mounts in place of, or below, the rear-view mirror of any car and projects road information for quick consultation without obscuring the driver's view of the road. The photo below is from the CNN article covering Mercedes-Benz's introduction of what it terms "the Future of Driving."

While everyone acknowledges that dates for the commercial release of these technologies have not been set, the direction of research and development in the automotive industry is clear: more sensors, more mobile connected services for the user/driver, and more in the driver's field of view. More and better sensors are already available to those who can pay a premium price. Also, in an automobile, where miniaturization and low power consumption are not as important as in a smartphone, we can anticipate that more advanced and more accurate sensors, including cameras, will appear.

Furthermore, justifying new gadgets on the basis of driver and road safety appeals to many constituents, from individual drivers to the regional and national transportation authorities who (potentially) see fewer troubles with traffic congestion. I haven't read anything about policies or regulations governing the use of AR in cars, but I wouldn't be surprised if some were introduced.

One of the enabling technologies for these applications is the pico projector. MicroVision is one of the early providers; a neighbor, also in the state of Washington, the Human Photonics Laboratory at the University of Washington, is another. Other enablers are variable-opacity screen materials (aka smart glass), which can be manufactured today. And to receive information from the cloud without interfering with the user's mobile phone service, perhaps using different protocols, we may have machine-to-machine (M2M) mobile communications. In a high-end car, the extra radio's cost, weight and power requirements are not obstacles.

Categories
Research & Development Standards

Virtual Worlds and MPEG-V

Virtual Reality is not a domain on which I focus; however, I recognize that VR is at the far end of Milgram's continuum from Augmented Reality, so there are interesting developments in VR which can be borrowed for wider application. For example, Virtual Reality has a long history of using three-dimensionality, from which AR practitioners and designers have much to learn.

I'm particularly attentive to standards which could be shared between VR and AR. The current issue (vol. 4, no. 3) of the Journal of Virtual Worlds Research is entirely dedicated to MPEG-V, the standard for virtual worlds developed in ISO/IEC JTC 1/SC 29 (ratified one year ago, in January 2011).

This journal issue is the most comprehensive resource I've found on the standard. It is written and edited by some of those leading the specification's development, including:

Jean H.A. Gelissen, Philips Research, Netherlands
Marius Preda, Institut TELECOM, France
Samuel Cruz-Lara, LORIA (UMR 7503) / University of Lorraine, France
Yesha Sivan, Metaverse Labs and the Academic College of Tel Aviv-Yaffo, Israel

I will need to digest its contents carefully. Not much more to say about it than this at the moment!