Categories
Augmented Reality Events Internet of Things Research & Development

Augmented Humans

Augmented humans are at the epicenter of a vision of the future that Ray Kurzweil has been popularizing for over 20 years. To recap the central thesis of his life's work, including the book The Singularity is Near, published in 2005: Kurzweil argues that technological singularity is the inevitable result of our research on genetics, nanotechnology and robotics (including artificial intelligence). Whether or not one believes the trends he points to will go as far as Kurzweil predicts, in which case some of those born human and living among us today will live far longer than any earlier members of our species and will, at the same time, benefit or suffer from "superintelligence," research in these domains continues unabated.

Some findings of basic and applied research in areas at the core of the Singularity will be reported by those presenting papers at the third annual Augmented Human conference. This conference, whose proceedings will later be published by the ACM, focuses on augmenting human capabilities through technology for increased well-being and an enjoyable human experience. The program committee solicited contributions on the following topics (this list is pasted directly from the conference call for papers, which closed earlier this week):

  • Augmented and Mixed Reality
  • Internet of Things
  • Augmented Sport
  • Sensors and Hardware
  • Wearable Computing
  • Augmented Health
  • Augmented Well-being
  • Smart artifacts & Smart Textiles
  • Augmented Tourism and Games
  • Ubiquitous Computing
  • Bionics and Biomechanics
  • Training/Rehabilitation Technology
  • Exoskeletons
  • Brain Computer Interface
  • Augmented Context-Awareness
  • Augmented Fashion
  • Augmented Art
  • Safety, Ethics and Legal Aspects
  • Security and Privacy

I'd like to hear what these folks are doing. However, I'd also (maybe even more) like to meet and get acquainted with flesh-and-blood Augmented Humans. One whom I met a few years ago at a conference is Rob Spence. Rob is a documentary filmmaker who lost an eye and decided, with the help of Steve Mann, one of the original first-person webcam video streamers, to have a wireless video camera fitted into his prosthetic eye. Rob kept a blog about the experience for several years, but he moved it to another host three years ago this month and it appears to have since been closed. Here's a 2010 interview with Rob published on the Singularity University's blog. According to Rob Spence's web site, visited today while researching this post, he's working on a documentary for the Canadian Film Board. So, at least for now, his story is private.

I'm currently reading Hard-Boiled Wonderland and the End of the World, a work of fiction by Haruki Murakami. The central character unknowingly has his brain rewired as part of an experiment, programming him to live the rest of his life "reading" dreams from the skulls of unicorns. It's a gracefully written story. Although stories of people whose bodies and minds have been altered to become "augmented humans" make for excellent fiction, blog material and probably a documentary, I suspect that the paths humans pursue towards this goal are littered with failed attempts. It's telling that the last two bullets on the list of topics covered at the AHC concern safety, ethics, legal aspects, security and privacy; there's confirmation of my concern.

At Laval Virtual, the largest industry event dedicated exclusively to Virtual Reality, Masahiko Inami, a professor in the School of Media Design at Keio University (KMD), Japan, is giving a talk entitled "Initial Step Towards Augmented Human". Here's the session description:

What are the challenges in creating interfaces that allow a user to intuitively express his/her intentions? Today's HCI systems are limited, and exploit only visual and auditory sensations. However, in daily life, we exploit a variety of input and output modalities, and modalities that involve contact with our bodies can dramatically affect our ability to experience and express ourselves in physical and virtual worlds. Using modern biological understanding of sensation, emerging electronic devices, and agile computational methods, we now have an opportunity to design a new generation of 'intimate interaction' technologies.

This talk will present several approaches that use multi/cross modal interfaces for enhancing human I/O. They include Optical Camouflage, Stop-Motion Goggle, Galvanic Vestibular Stimulation and Chewing Jockey.

Although probably less shocking and hair-raising than the talks at the third AHC, this session should also be very thought-provoking and practical for those working in the field of Virtual Reality. I'll try to make it to both of these events to get fully informed about all aspects of Augmented Humans.

Categories
Augmented Reality Events Standards

AR Standards Community

When we place a phone call, we don't dial one prefix to reach a BlackBerry, a different prefix for calling someone who uses an iPhone and another for Android users. A call is placed and connected regardless of the device and software on the receiver's handset. When people publish content for the Web (that is, "to be viewed using a browser"), they don't need a special platform for Internet Explorer, a special content management system or format for Opera users, another for Firefox users, and another for those who prefer Safari. And, as a result of substantial effort on the part of the mobile ecosystem, users of mobile Web browsers can view the same content as on a stationary device, adapted for the constraints of the mobile platform.

With open standards, content publishers can reach the largest potential audiences and end users can choose from a wealth of content sources.

Augmented Reality content and experiences should likewise be available to anyone using a device that meets minimum specifications. Without standards for AR, all that can be added to reality will remain stuck in proprietary technology silos.

In the ideal world, where open standards triumph over closed systems, the content a publisher releases for use in an AR-enabled scenario will not need to be prepared differently for different viewing applications (software clients running on a hardware platform).

The community working towards open and interoperable AR will be meeting March 19-20 in Austin, Texas to continue the coordination activities it performs on behalf of all content publishers, AR experience developers and end users.

Even if you are unable to meet in person with the leaders of this community, you can influence the discussion by submitting a position paper according to our guidelines.

Categories
Events Internet of Things

Makers at IoT ZH

The second meeting of the Internet of Things Zurich meetup group was an enormous success! In the audience, we had an excellent mix of artists, programmers, do-it-yourselfers, students, academics, and people from businesses interested in learning about IoT.

Now what?

Growth. To say that this group is large would be an exaggeration: Switzerland is a small country and we only began in earnest a few weeks ago. But by Swiss standards, this group of passionate people, the "makers" of the local IoT industry, is respectable (61 members as of this morning). And over 50 people gathered in the ETHZ venue to learn from entrepreneurs.

Experience. Few have it and everyone wants it. The goal of this session was to hear from those with experience in the IoT about lessons learned to date.

We began with great content from Cuno Pfister, Oberon microsystems (slides), Thomas Amberg, Yaler.net (slides) and Simon Mayer, not technically an entrepreneur (he's a PhD candidate at the ETHZ Distributed Systems Group) but a genuinely good guy who shared with us what's happening on the Web of Things side (slides).

During his introduction, Cuno framed the world (loosely speaking) as divided between the "corporates," who have a set of characteristics that make them risk averse despite (or perhaps as a result of) their resources, and the tinkerers he called "makers." Makers are characterized by:

  • no legacy business models
  • focus on personal growth
  • generating new ideas
  • cost-sensitive (low financial resources) and work on their projects in their spare time
  • attracted to and frequently adopt open systems

After the talks, I took a poll of the people in the room to ascertain the composition of this community. Approximately 30% of us are already "makers" in some fashion. We didn't define the term or require people to demonstrate their status through an exam! Presumably even those who are already experimenting want to improve. Of the remainder, many (over half of the room) aspire to become "makers."

With this in mind, there's an excellent opportunity to organize more community meetings and to explore other programs that will allow people to become proficient with IoT tools quickly and with limited resources. I'll be talking to our local experts and more makers in the coming weeks to see what we can do to fulfill this desire and address these needs.

Categories
Augmented Reality Events News

hARdware Makes the Headlines

Announcements featuring Augmented Reality are numerous at CES this year. When one steps back from the noise, it appears, as it has for most of 2011, that the buzz is coming primarily from the hardware side of the ecosystem. In the limited time I have to absorb the deluge of CES news I can't begin to capture everything, but just consider:

Where are Intel, ARM, NVIDIA, Imagination Technologies and the other important chip vendors with their eye on mobile?

One can argue whether Nokia is a hardware or a software company, but it's really all three: hardware/devices, software/applications and services/navigation. Nokia's City Lens, being demonstrated at CES, is a great example of urban AR. It's not clear which cities will have it or how many POIs there are, and it appears to be available only on the Nokia Lumia 900 at the moment. It uses onboard sensors to change view modes: held flat, the map shows up on the screen; held upright, the list view appears. OK, so it's rotation-aware. I wonder if it uses any Wikitude technology.

A notable exception to this hardware-centric line-up is Aurasma's announcement of its new 3D engine. Adding 3D puts the platform practically on par with Total Immersion and metaio, at least in terms of feature sets. The technology is featured in a video spot on the LA Times Web site. This and another nice piece in The Guardian are great for raising consumer awareness of AR. The Guardian wrote about a pterodactyl flying around Big Ben, and there's a video showing a prehistoric monster invading Paris.

There's enough AR-related news and excitement in the first three days of this week to fill a month!

Categories
Augmented Reality Events Research & Development Social and Societal

Algorithmic City and Pattern Language

The Mobile City Web portal and the blog of Martijn de Waal are inspirational to me. He introduces many concepts that, although not expressed in precisely the words I would use, mirror what I've been thinking and seeing. One of the posts on The Mobile City from March 2011 is a review by guest contributor Michiel de Lange of a compendium of articles about technology in cities edited by Alessandro Aurigi and Fiorella De Cindio.

Augmented Urban Spaces (2008) would be the basis for a great conference on "smarter cities" and the Internet of Things.

Another post that I found stimulating is Martijn's report on the Cognitive Cities Salon he attended in Amsterdam. Martijn highlighted a talk given by Edwin Gardner entitled "The Algorithmic City," which I am sorry to have missed; unfortunately, I have not found the slides on-line (yet). From what Martijn writes, the subject of Algorithmic Cities is currently theoretical, but one can imagine a day when it will become commonplace.

The Algorithmic City is the result of urban planners using algorithms as part of their process(es). Quoting from the Mobile City blog post published on July 3, 2011:

"So far algorhithms have shown up in ‘parametric design’ where all kinds of parameters can be tweeked that the computer will then turn into a design for a building or even a complete city. Gardner is not so much interested in this approach. The problem is that there is no relation between the paramaters, the shapes generated and the society that is going to make use of these shapes. Social or economic data are hardly used as parameters and the result is ‘a fetishism of easthetics’, at best beautiful to look at, but completely meaningless.

Instead, Gardner takes inspiration from Christopher Alexander's book A Pattern Language."

I'm not sure there is a connection between the theoretical work of Gardner and a video of the next version of CityEngine, a software solution from Procedural (a startup based in Zurich purchased by ESRI on July 11, 2011), but somehow the two come together in my mind. Using CityEngine, design algorithms are manipulated with the simplest of gestures, such as dragging. It's not a software platform I'm likely to need in the near future, but I hope that urban planners will soon have opportunities to experiment with it, to explore the Algorithmic City concept, and that I will witness the results. Maybe someone will build an AR experience to "see" the Algorithmic City using smartphones.
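To make the idea of parametric design concrete for readers who haven't seen it: a small set of numeric parameters is mechanically turned into a complete layout, with no social or economic data involved anywhere in the process, which is exactly Gardner's complaint. Here is a toy Python sketch of that pattern; it has no relation to CityEngine's actual API, and every name in it is hypothetical, chosen for illustration only.

```python
# Toy parametric "city" generator: four parameters in, a grid of
# building lots out. Tweak any parameter and the whole plan changes,
# but nothing about the people who would live there is represented.

def generate_city(blocks_x, blocks_y, block_size, street_width):
    """Return a list of rectangular building lots as (x, y, w, h) tuples,
    computed purely from the input parameters."""
    lots = []
    pitch = block_size + street_width  # distance between block origins
    for i in range(blocks_x):
        for j in range(blocks_y):
            lots.append((i * pitch, j * pitch, block_size, block_size))
    return lots

# Regenerating the plan is just a matter of tweaking parameters:
plan = generate_city(blocks_x=3, blocks_y=2, block_size=50, street_width=10)
print(len(plan))   # 6 lots
print(plan[0])     # (0, 0, 50, 50)
```

The point of the sketch is how little it knows: the output is entirely determined by geometry, which is why Gardner dismisses the result as "a fetishism of aesthetics."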