Categories
Events Internet of Things

Where EVRYTHNG Connects

Over the past 10 days I've been traveling and participating in important workshops and events in the US, so writing and posting to this blog have been infrequent. My recent face-to-face meetings involved those attending the AR-in-Texas workshops, followed by the participants of the Fifth AR Standards Community Meeting that I chaired in Austin. Then, I participated in the Open Geospatial Consortium's quarterly Technical Committee meetings. I'm currently in San Francisco to attend the New Digital Economics Brainstorm.

I haven't counted but I estimate that within a week's time, during and between these events, I've met with over 100 people individually or in small groups. During the trip just prior to this one, the five days of Mobile World Congress in Barcelona, I met and spoke with at least that many and probably closer to 200 people.

A significant slice of these (the majority, I am guessing) are people with whom I have a history, meaning simply that we may have spoken by Skype, phone or in person, or exchanged some e-mail. Our meetings in physical space, however, differ from those we conduct virtually. We all know that the Internet has formed far more links between people than physical contact could ever hope to make; still, meeting in person brings us value. How much? That's difficult to measure in time and in terms of revenue. Certainly these meetings provide me sufficient value to warrant leaving my office to attend them! I could ramble on and reflect further about this interpersonal on-line/in-person communication dichotomy, but the tangent I want to explore with you is slightly different.

When I'm traveling I also come into contact with many, many objects. Products, places, things. I wonder how many objects (new ones, old ones, ones I've seen or encountered before) I come into contact with in a day. What value do they bring me? How would I discover this?

Think of a ‘Facebook for Things’ with apps, services and analytics powered by connected objects and their digital profiles. With billions of product and other objects becoming connected, tagged and scannable, there’s a massive opportunity for a company that can provide the trusted engine for exchanging this active object information.

One of the companies responding to this opportunity is EVRYTHNG. I hope to see many new and familiar people in the room on April 3 in Zurich, when I'll be chairing the next Internet of Things face-to-face meeting, featuring the start-up EVRYTHNG. Why should you be there?

One reason is that co-founder Dominique Guinard will be talking from his company's perspective about:

– What is the Web of Things?
– Web of Things: How and Why?
– Problem Statement: Hardware and Cloud Infrastructures for Web-augmented Things
– Web-enabling Devices and Gateways
– Active Digital Identities (ADIs)
– EVRYTHNG as a storage engine
– Problem Solved: Connecting People & Products
– Vision: Every Thing Connected
– Projects and concrete examples of how and why ADIs are useful
– Using our cloud services and APIs to build your next Internet of Things / Web of Things applications
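For the web developers in the audience, here is a rough sketch of how I imagine an Active Digital Identity might be modeled: a thing becomes a web resource with a profile and live properties that accumulate over time. The class and field names below are my own invention for illustration, not EVRYTHNG's actual API:

```python
import json
import time

class ThingStore:
    """In-memory stand-in for a cloud 'storage engine' for things.
    A hypothetical sketch only; field names are illustrative."""

    def __init__(self):
        self._things = {}

    def create_thing(self, thing_id, name, tags):
        # The Active Digital Identity: a profile for a physical object.
        adi = {
            "id": thing_id,
            "name": name,
            "tags": tags,
            "createdAt": time.time(),
            "properties": {},  # live state reported by/about the object
        }
        self._things[thing_id] = adi
        return adi

    def update_property(self, thing_id, key, value):
        # Each update enriches the object's digital profile.
        self._things[thing_id]["properties"][key] = value

    def get_thing(self, thing_id):
        # Scanning the tag on the physical object would resolve to this record.
        return self._things[thing_id]

store = ThingStore()
store.create_thing("mug-42", "Coffee mug", ["kitchen", "ceramic"])
store.update_property("mug-42", "location", "Zurich")
print(json.dumps(store.get_thing("mug-42")["properties"]))
```

In a real Web of Things platform the store would of course sit behind a REST API over HTTP, so that any connected application could read and write these profiles.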

Let's connect in Zurich!

Categories
Innovation Social and Societal

In digital anima mundi

Each year the producers of TED bring beautifully articulated, thought-provoking content to the world. Those of us who are not invited, or choose not to attend the event in person, get free access to these talks in the weeks that follow the end of the live production. My first session from the 2012 TED program was by Peter Diamandis, about our fascination with negative trends and the concepts he has captured in his book Abundance.

An example of abundance in the world/on the web is the page on which Gene Becker of Lightning Laboratories shares his notes and slides from an inspirational talk he gave last week on a SXSW stage. Thank you for sharing these, Gene!


Categories
2020 Social and Societal

Anticipatory Services (1)

It's March 18, 2012. I've entered the doors of a nice hotel on the outskirts of Austin, Texas. As I approach the reception desk and greet the agent, I get my photo ID and my credit card out and lay them on the counter. After a moment of looking at a box on the counter, the agent informs me that the fitness center and the swimming pool are on the ground floor. In response to this information (which I will not use), I ask how I can connect to the Internet service in the room and what procedure to follow to have something printed out.

I enter my room and, in anticipation of my arrival, the fan and air conditioner are turning full tilt. I immediately find the thermostat on the wall and turn off everything. The temperature is to my liking. I open the curtains to let in the natural light.

If I were an author, I would write a poem or a short story about a time in the future when all the people, places and things around me are able to detect who I am, what I'm saying, to whom and every gesture I make. The environment will be organized in a way that my every need will have been considered and the options are made available.

I will be able to choose how to be reached (since there won't be these antiquated devices such as telephones and computers any more). Though the inner workings will be invisible to my naked eye, the "alert" surfaces of my surroundings and the objects I carry with me will be the interfaces by which I receive suggestions and make my choices known. When I arrive at the restaurant for an evening cocktail, I'll be served a bowl of freshly made tortilla chips accompanied by dishes of guacamole and salsa.

For a small monthly or annual fee, my preferred provider of premium anticipatory services will be tracking and logging my every move and anticipating my future. When the experiences I have exceed expectations, I will be happy to pay yet more!

I've recently learned, from an article written by Bruce Sterling in the April 2012 issue of Smithsonian Magazine on the origins of futurism, of the H.G. Wells essays of 1902 on the topic of life in 2000. The "Anticipations," as the author entitled them, are available as part of Project Gutenberg. I am really looking forward to digesting these, in particular how he thought of the urban life we lead today.

I wonder if he also thought of anticipatory services.

I regret that my visibility into the near future confirms I will be unable to digest these works in the coming days, and that at the next place I stay, an agent will similarly greet me as a guest with expectations they seek, but will fail, to meet. When I am home, my expectations are lower, and the astounding reliability with which my world delivers beyond what I need is much appreciated.

Categories
Augmented Reality

GOING OUTSIDE

Spring (and the Greenville Avenue St. Patrick’s Day Parade) has brought Dallasites out in droves today. Locals tell me that this is unusual. I’m reminded of this wonderful short article, originally published in The New Yorker’s March 28, 2011 issue.

JUST IN TIME FOR SPRING

By Ellis Weiner

Introducing GOING OUTSIDE, the astounding multipurpose activity platform that will revolutionize the way you spend your time.

GOING OUTSIDE is not a game or a program, not a device or an app, not a protocol or an operating system. Instead, it’s a comprehensive experiential mode that lets you perceive and do things firsthand, without any intervening media or technology.

GOING OUTSIDE:

1. Supports real-time experience through a seamless mind-body interface. By GOING OUTSIDE, you’ll rediscover the joy and satisfaction of actually doing something. To initiate actions, simply have your mind tell your body what to do—and then do it!

Example: Mary has one apple. You have zero apples. Mary says, “Hey, this apple is really good.” You think, How can I have an apple, too? By GOING OUTSIDE, it’s easy! Simply go to the market—physically—and buy an apple. Result? You have an apple, too.

Worried about how your body will react to GOING OUTSIDE? Don’t be—all your normal functions (respiration, circulation, digestion, etc.) continue as usual. Meanwhile, your own inboard, ear-based accelerometer enables you to assume any posture or orientation you wish (within limits imposed by Gravity™). It’s a snap to stand up, sit down, or lie down. If you want to lean against a wall, simply find a wall and lean against it.

2. Is completely hands-free. No keyboards, mice, controllers, touch pads, or joysticks. Use your hands as they were meant to be used, for doing things manually. Peeling potatoes, applauding, shooting baskets, scratching yourself—the possibilities are endless.

3. Delivers authentic 3-D, real-motion video, with no lag time or artifacts. Available colors encompass the entire spectrum to which human eyesight is sensitive. Blacks are pure. Shadows, textures, and reflections are beyond being exactly-like-what-they-are. They are what they are.

GOING OUTSIDE also supports viewing visuals in a full range of orientations. For Landscape Mode, simply look straight ahead—at a real landscape, if you so choose. To see things to the left or the right, shift your eyes in their sockets or turn your head from side to side. For Portrait Mode, merely tilt your head ninety degrees in either direction and use your eyes normally. Vision-correcting eyeglasses not included but widely available.

4. Delivers “head-free” surround sound. No headphones, earbuds, speakers, or sound-bar arrays required—and yet, amazingly, you hear everything. Sound is supported over the entire audible spectrum via instantaneous audio transmission. As soon as a noise occurs and its sound waves are propagated to your head, you hear it, with stunning realism, with your ears.

Plus, all sounds, noises, music, and human speech arrive with remarkable spatial-location accuracy. When someone behind you says, “Hey, are you on drugs, or what?,” you’ll hear the question actually coming from behind you.

5. Supports all known, and all unknown, smells. Some call it “the missing sense.” But once you start GOING OUTSIDE you’ll revel in a world of scent that no workstation, media center, 3-D movie, or smartphone can hope to match. Inhale through your nose. Smell that? That’s a smell, which you are experiencing in real time.

6. Enables complete interactivity with inanimate objects, animals, and Nature™. Enjoy the texture of real grass, listen to authentic birds, or discover a flower that has grown up out of the earth. By GOING OUTSIDE, you’ll be astounded by the number and variety of things there are in the world.

7. Provides instantaneous feedback for physical movement in all three dimensions. Motion through 3-D environments is immediate, on-demand, and entirely convincing. When you “pick up stuff from the dry cleaner’s,” you will literally be picking up stuff from the dry cleaner’s.

To hold an object, simply reach out and grasp it with your hand. To transit from location to location, merely walk, run, or otherwise travel from your point of origin toward your destination. Or take advantage of a wide variety of available supported transport devices.

8. Is fully scalable. You can interact with any number of people, from one to more than six billion, simply by GOING OUTSIDE. How? Just go to a place where there are people and speak to them. But be careful—they may speak back to you! Or remain alone and talk to yourself.

9. Affords you the opportunity to experience completely actual weather. You’ll know if it’s hot or cold in your area because you’ll feel hot or cold immediately after GOING OUTSIDE. You’ll think it’s really raining when it rains, because it is.

10. Brings a world of cultural excitement within reach. Enjoy access to museums, concerts, plays, and films. After GOING OUTSIDE, the Louvre is but a plane ride away.

11. Provides access to everything not in your home, dorm room, or cubicle. Buildings, houses, shops, restaurants, bowling alleys, snack stands, and other facilities, as well as parks, beaches, mountains, deserts, tundras, taigas, savannahs, plains, rivers, veldts, meadows, and all the other features of the geophysical world, become startlingly and convincingly real when you go to them. Take part in actual sporting events, or observe them as a “spectator.” Walk across the street, dive into a lake, or jump on a trampoline surrounded by happy children. After GOING OUTSIDE, you’re limited not by your imagination but by the rest of Reality™.

Millions of people have already tried GOING OUTSIDE. Many of your “friends” may even be GOING OUTSIDE right now! Why not join them and see what happens?

Categories
Augmented Reality Standards

Open and Interoperable AR

I’ve been involved in and observed technology standards for nearly 20 years. I’ve seen the boom that came about because of the W3C's work and the Web standards that were put in place early. The standards for HTTP and HTML made publishing content for a wide audience much more attractive to content owners and developers than formatting their content for each individual end-user application.

I’ve also seen standards introduced too early in emerging industries. For example, the ITU H.320 standards of the late 1980s were too limiting and stifled innovation in the videoconferencing industry for the decade that followed. Even though there was an effort to correct the problem in the mid-1990s with H.323, the architectures were still too limiting, and eventually much of the world went to SIP (the IETF Session Initiation Protocol). But even SIP has had only limited impact compared with Skype in the adoption of video calling. So this is an example where, although good standards are available and implemented by large companies, the mass market just wants things that work, the first time and every time. AR is a much larger opportunity, and probably closer to the Web than videoconferencing or video calling.

With AR, there’s more than just a terminal and a network entity, or two terminals talking to one another. As I wrote in my recent post about the AR Standards work, AR is starved for content, and without widespread adoption of standards, publishers are not going to bother making their content available. Beyond it being just too difficult to reach audiences on fragmented platforms, there’s no clear business model. If, however, we have easy ways to publish to massive audiences, traditional business models, such as premium content subscriptions and pay-to-watch (or pay-to-experience), become viable.

I don’t anticipate that mass market AR can happen without open AR content publishing and management as part of other enterprise platforms. The systems have to be open and to interoperate at many levels. That's why in late 2009 I began working with other advocates of open AR to bring experts in different fields together. We gained momentum in 2011 when the Open Geospatial Consortium and the Khronos Group recognized our potential to help. These two standards development organizations see AR as very central to what they provide. The use of AR drives the adoption of faster, high performance processors (which members of the Khronos Group provide) and location-based information.

Other organizations participate consistently and make valuable contributions to each of our meetings. In terms of other SDOs, in addition to OGC and Khronos, the W3C, two subcommittees of ISO/IEC, the Open Mobile Alliance, the Web3D Consortium and the Society for Information Display report regularly about what they’re doing. The commercial and research organizations that attend include, for example, Fraunhofer IGD, Layar, Wikitude, Canon, Opera Software, Sony Ericsson, ST-Ericsson and Qualcomm. We also really value the dozens of independent AR developers who come and contribute their experience. Mostly they’re from Europe, but at the meeting in Austin we expect a new crop of US-based AR developers to show up.

Each meeting is different and always very valuable. I'm very much looking forward to next week!

Categories
Augmented Reality Events

Aurasma at GDC12 and SXSW12

I was unable to attend the Game Developers Conference last week in San Francisco, but it sounds like it was a good event. I enjoyed reading Damon Hernandez's post on Artificial Intelligence. Damon and I are working together on the AR in Texas Workshops March 16 and 17.

At GDC12, Aurasma was in the ARM booth showing social AR experiences. In this video interview, David Stone gives some numbers, and his excitement about the platform nearly leaves him speechless.

The SXSW event is going on this week and Aurasma is there as well. In Austin, Aurasma broke the news of its partnership with Marvel Comics. This could have been good news for the future of AR-enhanced books. Unfortunately, the creative professionals who worked on this demonstration let us down. Watch the movie of this noisy animation showing what the character is capable of doing, and ask yourself, "How many times does a 'reader' want to watch this?"

I fear the answer is: Zero. Is there any aspect of this experience sufficiently valuable for a customer to return? I could be wrong.

What more could the character have done? Well, something related to the story of the comic book, for starters!

Categories
Augmented Reality Events Standards

Interview with Neil Trevett

In preparation for the upcoming AR Standards Community Meeting, March 19-20 in Austin, Texas, I’ve conducted a few interviews with experts. See my earlier interview with Marius Preda here. Today’s special guest is Neil Trevett.

Neil Trevett is VP of Mobile Content at NVIDIA and President of the Khronos Group, where he created and chaired the OpenGL working group, which has defined the industry standard for 3D graphics on embedded devices. Trevett also chairs the OpenCL working group at Khronos defining an open standard for heterogeneous computing.

Spime Wrangler: When did you begin working on standards and open specifications that are or will become relevant to Augmented Reality?

NT: It’s difficult to say because so many different standards are enabling ubiquitous computing and AR is used in so many different ways. We can point to graphics standards, geo-spatial standards, formatting, and other fundamental domains. [editor’s note: Here’s a page that gives an overview of existing standards used in AR.]

The lines between computer vision, 3D, graphics acceleration and use are not clearly drawn. And, depending on what type of AR you’re talking about, these may be useful, or totally irrelevant.

But, to answer your question, I’ve been pushing standards and working on the development of open APIs in this area for nearly 20 years. I first assumed a leadership role in 1997 as President of the Web3D Consortium (until 2005). In the Web3D Consortium, we worked on standards to bring real-time 3D on the Internet and many of the core enablers for 3D in AR have their roots in that work.

Spime Wrangler: You are one of the few people who has attended all previous meetings of the International AR Standards Community. Why?

NT: The AR Standards Community brings together people and domains that otherwise don’t have opportunities to meet. So, getting to know the folks who are conducting research in AR, designing AR, implementing core enabling technologies, even artists and developers was a first goal. I need to know those people in order to understand their requirements. Without requirements, we don’t have useful standards. I’ve been taking what I learn during the AR Standards community meeting and working some of that knowledge into the Khronos Group.

The second incentive for attending the meetings is to hear what the other standards development organizations are working on that is relevant to AR. Each SDO has its own focus, and we already have so much to do that we have very few opportunities to get an in-depth report on what’s going on within other SDOs, to understand the stage of development, and to see points for collaboration.

Finally, the AR Standards Community meetings permit the Khronos Group to share with the participants in the community what we’re working on and to receive direct feedback from experts in AR. Not only are the requirements important to us, but also the level of interest a particular new activity receives. If, during the community meeting I detect a lot of interest and value, I can be pretty sure that there will be customers for these open APIs down the road.

Spime Wrangler: Can you please describe the evolution you’ve seen in the substance of the meetings over the past 18 months?

NT: The evolution of this space has been rapid, by standards-development standards! This is probably because a lot of folks have thought about the potential of AR as just another way of interfacing with the world. There have also been decades of research in this area. Proprietary silos are just not going to be able to cover all the use cases and platforms on which AR could be useful.

In Seoul, it wasn’t a blank slate. We were picking up on and continuing the work begun in prior meetings of the Korean AR Standards community that had taken place earlier in 2010. And the W3C POI Working Group had just been approved as an outcome of the W3C Workshop on AR and the Web.

Over the course of 2011 we were able to bring in more of the SDOs. For example, the OGC and the Web3D Consortium started presenting their activities during the second community meeting. The OMA MobAR Enabler work item was presented, and the ISO SC24 WG9 chair, Gerry Kim, participated in the third meeting, held in conjunction with the Open Geospatial Consortium’s meeting in Taiwan.

We’ve also established and been moving forward with several community resources. I’d say the initiation of work on an AR Reference Architecture is an important milestone.

There’s a really committed group of people who form the core, but many others are joining and observing at different levels.

Spime Wrangler: What are your goals for the meeting in Austin?

NT: During the next community meeting, the Khronos Group expects to share the progress made in the newly formed StreamInput WG. We’re just beginning this work, but there are great contributions and we know that the AR community needs these APIs.

I also want to contribute to the ongoing work on the AR Reference Architecture. This will be the first meeting in which MPEG will join us and Marius Preda will be making a presentation about what they have been doing as well as initiating new work on 3D Transmission standards using past MPEG standards.

It’s going to be an exciting meeting and I’m looking forward to participating!

Categories
Internet of Things Research & Development Social and Societal

City WalkShop

Adam Greenfield is one of the thought leaders I follow closely on urban technology topics. Adam and his network (including but going beyond the Urbanscale consulting practice) are far ahead of most people when it comes to understanding and exploring the future of technology in cities.

In this post I'm capturing information about this small event, conducted in November 2010 in collaboration with Do Projects (in the context of the Drumbeat Festival), because it inspires me. I've also found documentation about two more of these, done in the spring of 2011 (Bristol and London). On March 11, there will be another one in Cologne, Germany, in collaboration with Bottled City.

City WalkShop experiences are "Collective, on-the-field discovery around city spots intensive in data or information, analyzing openness and sharing the process online."

I discovered the concept of WalkShops while exploring Marc Pous' web page. Marc founded the Internet of Things Munich meetup group a few weeks ago. I'm always eager to meet other IoT group founders (disclosure: I founded the IoT Zurich meetup in October 2011), and I learned that he is a native of Barcelona (where the IoT-Barcelona group meets).

I got acquainted with Marc's activities and came across the Barcelona WalkShop done with Adam.

The WalkShop Barcelona is documented in several places: there's a wiki page on the UrbanLabs site that describes the why and the what, and a Posterous page that I visited. Here's the stated goal:

What we’re looking for are appearances of the networked digital in the physical, and vice versa: apertures through which the things that happen in the real world drive the “network weather”, and contexts in which that weather affects what people see, confront and are able to do.

Here's a summary of the Systems/Layers process:

Systems/Layers is a half-day “walkshop” organized by Citilab and Do projects held in two parts. The first portion of the activity is dedicated to a slow and considered walk through a reasonably dense and built-up section of the city at hand. This portion of the day will take around 90 minutes, after which we gather in a convenient “command post” to map, review and discuss the things we’ve encountered.

I'd love to participate in or organize another of these WalkShops in Barcelona in 2012, going to the same places and, as one outcome of the process, comparing how the city has evolved. Could we do it as a special IoT-Barcelona meeting, or in the framework of Mobile World Capital?

I also envisage getting WalkShops going in other cities. Maybe, as spring is nearing and people are outside more, this could be a side project for members of other IoT Meetup Groups?

Categories
Internet of Things Research & Development Social and Societal

Risks and Rewards of Hyperconnected-ness

I often get asked to define a Spime. The definition is simple, “Space + Time,” but the implications are deeper than most people have time to think about. That’s one reason that Wranglers are needed. The fundamental attribute of a spime is that it is hyperconnected and doing something with its connections. By documenting or publishing where it was made, by whom, where it has traveled, or how long it has been “on” (or any other attribute the object can detect), our objects are developing memory. Ironically, for humans, being hyperconnected may work differently.
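To make the "objects developing memory" idea concrete, here is a toy sketch of a spime as nothing more than an append-only log of space-and-time-stamped events. Everything here (the class name, the fields) is invented for illustration:

```python
import time

class Spime:
    """Toy model of a spime: an object that logs its own history in space and time."""

    def __init__(self, made_in, made_by):
        # Memory begins at manufacture: where, when, and by whom.
        self.history = [(made_in, time.time(), f"manufactured by {made_by}")]

    def log(self, place, event):
        # Space + Time: every event is stamped with where and when it happened.
        self.history.append((place, time.time(), event))

    def memory(self):
        # A human-readable view of the object's accumulated history.
        return [f"{place}: {event}" for place, _, event in self.history]

tag = Spime(made_in="Shenzhen", made_by="Acme")
tag.log("Barcelona", "powered on")
tag.log("Austin", "powered on")
print(tag.memory()[0])
```

Each record pairs a place with a timestamp, which is all "Space + Time" strictly requires; the wrangling is in deciding what gets logged and who may read it.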

In a series on the Read Write Web portal, Alicia Eler is exploring the hyperconnected life. The first piece she posted, How Hyperconnectivity Effects Young People, summarizes the results of a study of American Millennials and the consequences of living an “always on” life. The Pew study of the impacts of constant Internet connection on the brains of youth is both qualitative and quantitative, and well worth a scan, if not more of your time. Here are a few of the highlights I found particularly relevant:

  • Relying on the Internet as our “external brain” saves room in our “wet brains” for different kinds of thinking (no surprise here). 55% of those surveyed believe that always-on youth will have a positive impact on the world as a result of finding information more quickly and thinking in less structured ways, “thinking out of the box.” 42% of those surveyed feared the result would be negative.
  • Always being connected tends to build a desire for instant gratification (no surprise here) and increases the chances of making “quick, shallow choices.”
  • Education reform is much needed to meet the requirements of these “new,” hyperconnected and mobile students. This dovetails well with the outcomes of the Mobile Youth Congress held last week at Mobile World Congress in Barcelona; the iStudent Initiative suggests that learning should be more self-directed, with the classroom being where students report what they’ve learned.

Then, in a second post entitled, Introducing Your Hyperconnected Online-Offline Identity, Alicia explored the subject of fragmented identity. The premise is that our identities are fractured because we can be different people in different places and in response to those around us who are different (home, business, sports, entertainment/hobbies).

“The real self is saddled somewhere in the overlap between these three circles. These ideas of the self apply in both an online and offline context.” This abstraction, explains ScepticGeek, may come at least partially from Carl Rogers.

[Image: three overlapping circles representing facets of the self]

“Online, we battle with the same conflicts, plus a few other quirks. We are a Facebook identity (or two), a Twitter account, a LinkedIn oh-so-professional account and maybe even Google+ (plus search your world, no less). Each online identity is in and of itself an identity. Maintaining them is hard, often times treacherous work. We must slog through the Internet-addled identity quagmire.”

In another paradox, I think that when “things” are connected, even via a social network such as Facebook, we humans truly have the opportunity to know the objects or places better, with a richer and deeper understanding, because we think there’s more information, and less subjective, more quantitative data, on which to base our opinions.

I wonder if there will also be ways for Spimes to have different personae, to project themselves in unique ways to different audiences. Perhaps it will be simpler because inanimate objects don’t have the need or desire to reconcile all their identities in the “self.” But it will always remain the responsibility of the wrangler to manage those identities. Job security is what I call that!

Categories
3D Information Augmented Reality Innovation

Playing with Urban Augmented Reality

AR and cities go well together. One reason is that, by comparison with rural landscapes, the urban environment is quite well documented (with 3D models, photographs, maps, etc.). A second reason is that some features of the environment, like buildings, are stationary, while others, like people and cars, are moving. Another reason the two fit naturally together is that there's a lot more information that can be associated with places and things than those of us passing through can see with our "naked" eyes. There's also a mutual desire: people moving about in urban landscapes, and those who have information about the spaces, need or want to make these connections more visible and more meaningful.

The applications for AR in cities are numerous. Sometimes the value of the AR experience is just to have fun. Imagine playing a game that involves the physical world and information encoded with (or generated in real time for use with) a building's surface. Mobile Projection Unit (MPU) Labs is an Australian start-up doing some really interesting work that demonstrates this principle. They've taken the concept of the popular mobile game "Snake" and, by combining it with a small projector, a smartphone and the real world, made something new. Here's the text from their minimalist web page:

"When 'Snake the Planet!' is projected onto buildings, each level is generated individually and based on the selected facade. Windows, door frames, pipes and signs all become boundaries and obstacles in the game. Shapes and pixels collide with these boundaries like real objects. The multi-player mode lets players intentionally block each other's path in order to destroy the opponent."
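The facade-as-level-geometry idea they describe boils down to an ordinary collision test: map the building's features onto a grid of blocked cells, then check the snake's head against that grid each tick. A minimal sketch, with the facade layout and function names entirely invented:

```python
# Toy sketch of facade-as-level-geometry: '#' cells (windows, pipes, signs)
# are obstacles; the snake dies if its head moves into one.

FACADE = [
    "..........",
    "..##..##..",
    "..........",
    "..##..##..",
    "..........",
]

def blocked(x, y):
    """True if (x, y) is outside the facade or hits an architectural feature."""
    if y < 0 or y >= len(FACADE) or x < 0 or x >= len(FACADE[0]):
        return True
    return FACADE[y][x] == "#"

def step(head, direction):
    """Advance the snake's head one cell; return (new_head, alive)."""
    x, y = head[0] + direction[0], head[1] + direction[1]
    return (x, y), not blocked(x, y)

head, alive = step((1, 1), (1, 0))  # moving right, toward a window cell
print(head, alive)
```

Building the FACADE grid for a real wall is of course the hard part; presumably it is derived from a photo or survey of the selected facade before each level is generated.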

Besides that text, there's a quick motivational "statement" by one of the designers (it does not play in the page for me, but click on the Vimeo logo to open it):


And this 2 minute video clip of the experience in action:

I'd like to take this out for a test drive. Does anyone know these guys?