Categories
Internet of Things, Research & Development, Social and Societal

City WalkShop

Adam Greenfield is one of the thought leaders I follow closely on urban technology topics. Adam and his network (including but going beyond the Urbanscale consulting practice) are far ahead of most people when it comes to understanding and exploring the future of technology in cities.

In this post I'm capturing information about a small event conducted in November 2010 in collaboration with Do Projects (in the context of the Drumbeat Festival) because it inspires me. I've also found documentation of two more of these, done in spring 2011 (Bristol and London). On March 11, another will take place in Cologne, Germany, in collaboration with Bottled City.

City WalkShop experiences are "Collective, on-the-field discovery around city spots intensive in data or information, analyzing openness and sharing the process online."

I discovered the concept of WalkShops when I was exploring Marc Pous' web page. Marc just founded the Internet of Things Munich meetup group a few weeks ago and, in addition to being eager to meet other IoT group founders (disclosure: I founded IoT Zurich meetup in October 2011), I learned that he is a native of Barcelona (where the IoT-Barcelona group meets).

I got acquainted with Marc's activities and came across the Barcelona WalkShop done with Adam.

The WalkShop Barcelona is documented in several places. There's a wiki page on the UrbanLabs site that describes the why and the what, and there's the Posterous page I visited. Here's the stated goal:

What we’re looking for are appearances of the networked digital in the physical, and vice versa: apertures through which the things that happen in the real world drive the “network weather”, and contexts in which that weather affects what people see, confront and are able to do.

Here's a summary of the Systems/Layers process:

Systems/Layers is a half-day “walkshop” organized by Citilab and Do projects held in two parts. The first portion of the activity is dedicated to a slow and considered walk through a reasonably dense and built-up section of the city at hand. This portion of the day will take around 90 minutes, after which we gather in a convenient “command post” to map, review and discuss the things we’ve encountered.

I'd love to participate in or organize another of these WalkShops in Barcelona in 2012, going to the same places and, as one of the outcomes of the process, comparing how the city has evolved. Could we do it as a special IoT-Barcelona meeting or in the framework of Mobile World Capital?

I also envisage getting WalkShops going in other cities. Maybe, as spring is nearing and people are outside more, this could be a side project for members of other IoT Meetup Groups?

Categories
Internet of Things, Research & Development, Social and Societal

Risks and Rewards of Hyperconnected-ness

I often get asked to define a Spime. The definition is simple, "Space + Time," but the implications are deeper than most people have time to think about. That's one reason that Wranglers are needed. But the fundamental attribute of a spime is that it is hyperconnected and doing something with its connections. By documenting or publishing where it was made, by whom, where it has traveled, or how long it has been "on" (or any other attribute the object can detect), our objects are developing memory. Ironically, for humans, being hyperconnected may work differently.
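To make the idea concrete, here is a minimal Python sketch of that kind of object memory. Everything in it (class, fields, events) is a hypothetical illustration of the concept, not an API from any product:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Spime:
    """A hypothetical hyperconnected object that remembers its own history."""
    made_by: str
    made_in: str
    history: list = field(default_factory=list)  # timestamped events

    def log(self, event: str) -> None:
        # Every attribute the object can detect about itself gets a
        # space-and-time stamp, gradually building up its "memory."
        self.history.append((datetime.now(timezone.utc), event))

kettle = Spime(made_by="Acme Appliances", made_in="Eindhoven")
kettle.log("arrived in Zurich")
kettle.log("powered on")
print(kettle.history)  # the object's accumulated memory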

In a series on the Read Write Web portal, Alicia Eler is exploring the hyperconnected life. The first piece she posted, How Hyperconnectivity Affects Young People, summarizes the results of a study on American Millennials and the consequences of living an "always on" life. The Pew study of the impacts of always being connected to the Internet on the brains of youth is both qualitative and quantitative, and well worth a scan if not more of your time. Here are a few of the highlights I found particularly relevant:

  • Relying on the Internet as our "external brain" saves room in our "wet brains" for different kinds of thinking (no surprise here). 55% of those surveyed believe that always-on youth will have positive impacts on the world as a result of finding information more quickly and thinking in less structured ways, "thinking out of the box." 42% of those surveyed feared the result would be negative.
  • Always being connected tends to build a desire for instant gratification (no surprise here) and increases the chances of making "quick, shallow choices."
  • Education reform is much needed to meet the requirements of these "new," hyperconnected and mobile students. This dovetails well with the outcomes of the Mobile Youth Congress held last week at Mobile World Congress in Barcelona. The iStudent Initiative suggests that learning should be more self-directed, with the classroom becoming the place where students report what they've learned.

Then, in a second post entitled Introducing Your Hyperconnected Online-Offline Identity, Alicia explored the subject of fragmented identity. The premise is that our identities are fractured because we can be different people in different places, in response to the different people around us (home, business, sports, entertainment/hobbies).

"The real self is saddled somewhere in the overlap between these three circles. These ideas of the self apply in both an online and offline context. This abstraction, explains ScepticGeek, may come at least partially from Carl Rogers."

[Image: diagram of three overlapping circles of the self]

“Online, we battle with the same conflicts, plus a few other quirks. We are a Facebook identity (or two), a Twitter account, a LinkedIn oh-so-professional account and maybe even Google+ (plus search your world, no less). Each online identity is in and of itself an identity. Maintaining them is hard, often times treacherous work. We must slog through the Internet-addled identity quagmire.”

In another paradox, I think that when "things" are connected, even via a social network such as Facebook, we humans truly have the opportunity to know the objects or places better, with a richer and deeper understanding, because we believe there's more information, and less subjective, more quantitative data, on which to base our opinions.

I wonder if there will also be ways for Spimes to have different personae, to project themselves in unique ways to different audiences. Perhaps it will be simpler because inanimate objects don’t have the need or desire to reconcile all their identities in the “self.” But it will always remain the responsibility of the wrangler to manage those identities. Job security is what I call that!

Categories
3D Information, Augmented Reality, Innovation

Playing with Urban Augmented Reality

AR and cities go well together. One reason is that, by comparison with rural landscapes, the urban environment is quite well documented (with 3D models, photographs, maps, etc.). A second reason is that some features of the environment, like buildings, are stationary, while others, like people and cars, are moving. Another reason the two fit naturally together is that there's a lot more information that can be associated with places and things than those of us passing through can see with our "naked" eyes. There's also a mutual desire: people, both those moving about in urban landscapes and those who have information about the spaces, need or want to make these connections more visible and more meaningful.

The applications for AR in cities are numerous. Sometimes the value of the AR experience is just to have fun. Let's imagine playing a game that involves the physical world and information encoded with (or developed in real time for use with) a building's surface. Mobile Projection Unit (MPU) Labs is an Australian startup doing some really interesting work that demonstrates this principle. They've taken the concept of the popular mobile game "Snake" and, by combining it with a small projector, a smartphone and the real world, made something new. Here's the text from their minimalist web page:

"When ‘Snake the Planet!” is projected onto buildings, each level is generated individually and based on the selected facade. Windows, door frames, pipes and signs all become boundaries and obstacles in the game. Shapes and pixels collide with these boundaries like real objects. The multi-player mode lets players intentionally block each other’s path in order to destroy the opponent."

Besides this text, there's a quick motivational "statement" by one of the designers (it does not play in the page for me, but clicking the Vimeo logo opens it), and a two-minute video clip of the experience in action.

I'd like to take this out for a test drive. Does anyone know these guys?

Categories
Augmented Reality, Events

AR@MWC12

I'm heading to Barcelona for Mobile World Congress, the annual gathering of the mobile industry. It's always an exciting event, and I meet a lot of interesting companies: some I'm already acquainted with, some industry leaders well known in mobile but new to the segments on which I focus, and others I've never heard of.

When I arrive in Barcelona on Sunday, I'm going to begin using the MWC AR Navigator, an application developed by mCRUMBS in collaboration with GSMA, the organizer of MWC, to make getting around the city and the event as easy and efficient as possible (and with the assistance of AR, of course).

On Monday Feb 27 my first priority will be the Augmented Reality Forum. This half-day conference is sponsored by Khronos Group and four Khronos member companies: Imagination Technologies, ST Ericsson, ARM and Freescale. Through the AR Forum presentations, these companies are driving mobile AR awareness and sharing how their new hardware and open APIs from Khronos will improve AR experiences.

After the presentations, I will be moderating the panel discussion with the speakers. Join us!

In the following days, my agenda includes meetings with over 50 companies developing devices, software and content for AR experiences. Many of those with whom I will meet don't have an actual booth. Finding these people among the 60,000 attendees would be impossible without appointments scheduled in advance (and the aid of the MWC AR Navigator)! If you are attending MWC and want to set aside a quick briefing with me, please contact me as soon as possible.

If you haven't booked meetings but want to see for yourself what's new for AR in 2012, I recommend that you at least drop by these booths for demonstrations:

  • Imagination Technologies – Hall 1 D45
  • ST Ericsson partner zone – Hall 7 D45
  • ARM – Hall 1 C01
  • Freescale – AV27
  • Qualcomm – Hall 8
  • Nokia/NAVTEQ – Hall 7
  • Alcatel-Lucent – Hall 6
  • Aurasma (HP) – Hall 7
  • Texas Instruments – Hall 8 A84
  • Intel – Hall 8 B192
  • VTT – Hall 2 G12
  • mCRUMBS and metaio – Hall 2.1 C60
  • HealthAlert App – Hall 2.1 E65
  • Augmented Reality Lab – Hall 2 H47
  • Blippar – Avenue AV35
  • BRGR Media – Hall 2 F49
  • Pordiva – Hall 2 E66
  • wöwbile Mobile Marketing – Hall 7 D85

These are not the only places you will see AR. If you would like me to add others to this list, please leave a comment below.

Categories
Augmented Reality, Innovation

Square Pegs and Round Holes

For several years I've attempted to bring AR (more specifically, mobile AR) to the attention of a segment of media companies, those that produce and sell print and digital content of all kinds (aka "publishers"), as a way of bringing value to both their physical and digital media assets.

My investments have included writing a white paper and a position paper. I've mused on the topic in blog posts (here and here), conducted workshops, and traveled to North America to present my recommendations at meetings where the forward-looking segment of the publishing industry gathers (e.g., Tools of Change 2010, NFAIS 2011).

I've learned a lot in the process but I do not think I've had any impact on these businesses. As far as the publishers of books, newspapers, magazines and other print and digital content (and those who manage them) are concerned, visual search is moderately interesting but mobile AR technology is a square peg. It just has not fit in the geometry of their needs (a round hole).

With their words and actions, publishers have demonstrated that they are moving as quickly as they possibly can (and it may not be fast enough) towards “all digital.” Notions of extending the life expectancy of their print media assets by combining them with interactive digital annotations are misplaced. They don’t have these thoughts. I was under the illusion that there would be a fertile place, at least worthy of exploration, between page and screen, so to speak. Forget it.

After digesting this and coming back to the topic (almost a year since having last pushed it) I've adjusted my thinking. Publishers are owners and managers of assets that are (and increasingly will be) used interactively. The primary difference between the publisher and other businesses that have information assets is that the publisher has the responsibility to monetize the assets directly, meaning by charging for the asset, not some secondary product or service. The relative size and complexity of digital archives could also be greater in a company that I would label a "publisher," but publishers come in all sizes, so this distinction is, perhaps, not valuable.

Publishers are both feeding and reliant upon the digital media asset production and distribution ecosystem. Some parts of the ecosystem are the same companies that served the publishers when their medium was print. For example, companies like MarkLogic and dozens of others (one other example here) provide digital asset management systems. When I approached a few companies in the asset management solution segment, they made it clear that if there's no demand for a feature, they're not going to build it.

Distribution companies, like Barnes & Noble and Amazon, are key to the business model in that they serve to put both the print (via shipping customers and bookstores) and digital (via eReaders) assets in the hands of readers (the human type).

Perhaps this is where differentiation and innovation with AR will make a difference. I hope to explore if and how the eReader product segment could apply AR technology to sell more, and more often.

Categories
Augmented Reality, Events, Standards

Interview with Marius Preda

On March 19 and 20, 2012 the AR Standards Community will gather in Austin, Texas. In the weeks leading up to the next (the fifth) International AR Standards Community meeting, sponsored by Khronos Group and the Open Geospatial Consortium, experts are preparing their position papers and planning contributions.

I am pleased to be able to share a recent interview with one of the participants in the upcoming meeting, Marius Preda. Marius is an Associate Professor at Institut TELECOM, France, and the founder and head of GRIN – Graphics and Interactive Media. He is currently the chairperson of the MPEG 3D Graphics group. He has been actively involved in MPEG since 1998, focusing especially on Video and 3D Graphics coding. He is the main contributor to the new animation tools dedicated to generic synthetic objects. More recently, he has been leading the MPEG-V and MPEG AR groups. He is also the editor of several MPEG standards.

Marius Preda's research interests include 3D graphics compression, virtual characters, rendering, interactive rich multimedia and multimedia standardization. He also leads several research projects at the institutional, national, European and international levels.

Spime Wrangler:  Where did you first learn about the work going on in the AR Standards community?

MP: In July 2011, during preparations for the 97th MPEG meeting, held in Turin, Italy, I had the pleasure of meeting Christine Perey. She came to the meeting of the MPEG-V AhG, a group that is creating, under the umbrella of MPEG, a series of standards dealing with sensors, actuators and, in general, the frontier between the physical and virtual worlds.

Spime Wrangler:  What sorts of activities are going on in MPEG (ISO/IEC JTC1 SC29 WG11) that are most relevant to AR and visual search? Is there a document or white paper you have written on this topic?

MP: Since 1998, when the first edition of MPEG-4 was published, the concept of mixed – natural and synthetic – content was made possible in an open and rich standard, relatively advanced for its time. MPEG-4 not only advanced the compression of audio and video, but also introduced, for the first time, compression for graphics assets. Later on, MPEG revisited the framework for 3D graphics compression and grouped into Part 16 of MPEG-4 several tools allowing compact representation of 3D assets.

Separately, in 2011 MPEG published the first edition of the MPEG-V specification, a standard defining the representation format for sensors and actuators. Using this standard, it is possible to deal with data from the simplest sensors, such as temperature, light, orientation and position, to very complex ones such as biosensors and motion cameras. Similarly for actuators: from the simple vibration effect embedded today in almost all mobile phones to complex motion chairs such as the ones used in 4D theatres, these can all be specified in standard-compliant libraries.

Finally, several years ago MPEG standardized MPEG-7, a standard for descriptors attached to media content. This work is currently being extended: with a set of compact descriptors for natural objects, we are working on Visual Search. MPEG also has ongoing work on compression of 3D video, a key technology for realistic augmentation of the captured image to be provided and rendered in real time.

Based on these specifications and the significant know-how in domains highly relevant to AR, MPEG decided in December 2011 to publish an application format for Augmented Reality, grouping together relevant standards in order to build a deterministic, solid and useful model for AR applications and services.

More information related to MPEG standards is available here.

Spime Wrangler:  Why are you going to attend the meeting in Austin? I mean, what are your motivations and what do you hope to achieve there?

MP: The objective of AR standards is laudable but, at the same time, relatively difficult to achieve. There are currently several, probably too many, standardization bodies claiming to deliver relevant standards for AR to the industry. Our duty, as standards organizations, is to provide interoperable solutions. This is not new. Organizations, including standards development bodies, always try to use mechanisms such as liaisons to cooperate rather than compete.

A recent, very successful example of this is the work on video coding jointly created by ISO/IEC MPEG and ITU-T VCEG and published individually under the names MPEG-4 AVC and H.264, respectively. It is, in fact, exactly the same document, and a product compliant with one is implicitly compliant with the other. My motivation in participating in the Austin meeting is to verify whether such a collaborative approach is possible in the field of AR as well.

Spime Wrangler: Can you please give us a sneak peek into what you are going to present and share with the community on March 19-20?

MP: I'll present two aspects of MPEG work related to AR. In the first presentation, I'll talk about MPEG-4 Part 16 and Part 25. The first proposes a set of tools for 3D graphics compression; the second, an approach for applying these tools to scene graph representations other than the one proposed by MPEG-4, e.g. COLLADA and X3D. So, as you can see, there are several AR-related activities going on in parallel.

In the second presentation, I'll talk about the MPEG Augmented Reality Application Format (ARAF) and MARBle, an MPEG browser for AR developed by the TOTEM project (currently available on Android phones). ARAF is an ongoing activity in MPEG, and early contact with other standards bodies may help us all work towards the same vision of providing a one-stop solution for AR applications and services.

Categories
Internet of Things, Standards

Can We Define IoT (continued)?

Ovidiu Vermesan is Chief Scientist at SINTEF Information and Communication Technology, Oslo, Norway, and co-editor (with Peter Friess, EC Coordinator, DG Information Society and Media) of a book of essays on the Internet of Things. It is a real masterpiece for those seeking a comprehensive look at all the different trends around IoT.

Vermesan recently contributed to a mailing list discussion on the topic of how we will define the IoT (see my post on the topic from January 2012 here and from June 2011 here). This was in response to a suggestion that IoT should be limited to applications enabled with RFID. I find the historical perspective of this debate interesting, so I obtained permission to publish the memo here (with a few edits).

"Internet of Things is much more than M2M communication and wireless sensor networks, 2G/3G/4G, RFID, etc. These are enabling technologies that will make "Internet of Things" applications possible.

So, what is the Internet of Things? Let's look at some statements made in the past 90 years:
 
1926 – Nikola Tesla, in an interview with Collier's magazine: "When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole… and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket."
 
1991 – Mark Weiser's Scientific American article on ubiquitous computing ‘The Computer for the 21st Century’, where he stated “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it”.
 
1999 – Neil Gershenfeld published his book "When Things Start to Think," stating: "in retrospect it looks like the rapid growth of the World Wide Web may have been just the trigger charge that is now setting off the real explosion, as things start to use the Net."
 
2004 – Neil Gershenfeld, Raffi Krikorian and Danny Cohen, in the article "The Internet of Things," stated: "The principles that gave rise to the Internet are now leading to a new kind of network of everyday devices, an 'Internet-0'" (unfortunately, this article is not accessible without a subscription to Scientific American).
 
2009 – Kevin Ashton in an article in RFID Journal: "I could be wrong, but I'm fairly sure the phrase "Internet of Things" started life as the title of a presentation I made at Procter & Gamble (P&G) in 1999".

Thank you to Vermesan and all those who foresaw and shared this trend, and to those who continue to seek a definition that will hold firmly into the future while also serving us today!

Categories
Augmented Reality

Augmented Vision 2

It's time, following my post on Rob Spence's Augmented Vision and the recent buzz in the blogosphere on the topic of eyewear for hands-free AR (on TechCrunch Feb 6, Wired Feb 13, and Augmented Planet Feb 15), to return to this topic.

I could examine the current state of the art of the technology for hands-free AR (the hardware, the software and the content). But there’s too much information I could not reveal, and much more I have yet to discover.

I could speculate about if, what and when Google will introduce its Goggles, as has been rumored for nearly three months. By the way, I didn't need a report to shed light on this. In April 2011, when I visited the Google campus, one of the people with whom I met (complete with his personal display) was wearable computing guru and director of the Georgia Institute of Technology Contextual Computing Group, Thad Starner. A matter of months later, he was followed to Google by Rich deVaul, whose 2003 dissertation on The Memory Glasses project certainly qualifies him on the subject of eyewear. There could, in the near future, be some cool new products rolling out for us "ordinary humans" to take photos with our sunglasses and transfer them to our smartphones. There might be tools for creating a log of our lives with these, which would be very helpful. But these are not, purely speaking, AR applications.

Instead, let me focus on who, in my opinion, is most likely to adopt the next generation of non-military see-through eyewear for use with AR capabilities. It will not be you or I, nor the early technology adopter next door.

It will be those for whom having certain, very specific pieces of additional information available in real time (with the ability to convey them to others), while also having use of both hands, is life-saving or performance-enhancing. In other words, professional applications are going to come first. In the life-saving category, those who engage in the most dangerous field in the world (i.e., military action) probably already have something close to AR.

Beyond defense, let's assume that those who respond to an unfamiliar location to rescue people endangered by fire, flooding, earthquakes and other disasters need both of their hands as well as real-time information about their surroundings. This blog post on the Tanagram web site (where the image above is from) makes a very strong case for the use of AR vision.

People who explore dark places, such as underwater crevices near a shipwreck or a mine shaft, already have cameras on their heads and suits that monitor heart rate, temperature, pressure and other ambient conditions. The next logical step is to have helpful information superimposed on the immediate surroundings. Using cameras to recognize natural features in buildings (with or without the aid of markers), and altimeters to determine the depth underground or height above ground to which the user has gone, such a system could superimpose floor plans and readings from local activity sensors; this could be very valuable for saving lives.
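To make the scenario concrete, here is a deliberately simplified Python sketch of that data flow; every name, value and file in it is invented for illustration:

# Hypothetical sketch: use an altimeter reading to select the right
# floor plan, then gather that floor's sensor readings so they can be
# superimposed on the responder's display.
FLOOR_HEIGHT_M = 3.0  # assumed storey height

floor_plans = {-1: "basement.svg", 0: "ground.svg", 1: "floor1.svg"}

# (floor, label, value) from imagined in-building activity sensors
sensor_readings = [
    (1, "temperature C", 64.0),
    (1, "smoke ppm", 310.0),
    (0, "temperature C", 21.5),
]

def overlay_for(altitude_m):
    # Pick the floor plan and annotations matching the user's height.
    floor = round(altitude_m / FLOOR_HEIGHT_M)
    plan = floor_plans.get(floor, "unknown floor")
    notes = [f"{label}: {value}" for f, label, value in sensor_readings if f == floor]
    return plan, notes

print(overlay_for(3.2))  # ('floor1.svg', ['temperature C: 64.0', 'smoke ppm: 310.0'])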

I hope never to have to rely on these myself, but I won’t be surprised if one day I find myself rescued from a dangerous place by a professional wearing head-mounted gear with Augmented Reality features.

Categories
2020, Business Strategy

When the Title Fits, Wear It

A GigaOM blog post that Gene Becker tweeted about today reminded me that many of those who were "traditionally employed" in past years are setting up their web sites, starting their blogs, joining the ranks and giving the "independent" way of life a try. According to MBO Partners, there are over 16 million freelancers, consultants and other independently employed people in America today. By 2020, the number is expected to quadruple to approximately 65 million.

Who are these people, I ask myself? The CEO of MBO Partners (quoted in the GigaOM blog) reports that seven out of ten of those interviewed in a survey believe they are "experts in their field" and have advanced skills and education. Unfortunately, this statement is not supported with hard data or a report.

In my opinion, only clients (i.e., those who hire consultants) are in a position to decide whether a consultant is an expert worthy of their fee. Are these people whom MBO Partners interviewed successful? Being a successful consultant requires a lot of focus on clients while, at the same time, staying vigilant, reading the tea leaves to detect emerging trends and explore new directions.

After 21 years working as an independent, it's a topic about which I am qualified to have opinions.

The most important qualifications of an independent in any field are to be passionate about a domain, to pay attention to detail, to be discreet about what is said (and what is omitted) and, for a consultant, to have a strong desire for the client to succeed. The last of these, prioritizing the client's success, is an essential component of a consultant's character that carries risk. Often clients can describe what they need and, together, the goal can be reached, the product shipped, the service proven valuable. But sometimes the client fails for reasons outside a consultant's control, regardless of the consultant's contribution to a company or project. Sometimes the client succeeds but doesn't recognize or feel the need to acknowledge the role of the consultant. Few of the other consultants whom I've helped get started and with whom I've worked can live with these and other risks of being independent.

Having grown accustomed to managing risk (and balancing other aspects of professional independence that I won't go into here), the title of "consultant" no longer reflects how I feel, who I am. To me, the title of "Spime Wrangler" better captures the element of uncertainty with which I am comfortable and the strong desire to pursue the unknown, to tackle or wrestle with the Spimes.

How many of those who decide to hang out their shingle between now and 2020 will share the sense of adventure that makes me leap out of bed each morning? I hope that those who do, whoever they are, will assume the title of Spime Wrangler as well.

Categories
Internet of Things, Research & Development

The Big Data Bandwagon

Big Data and I go way back. How can I get on the Big Data Bandwagon?

It's not a domain into which I regularly stray, but nuclear physics was the focus of both my parents' careers, so their use of computers comes to mind whenever the topic of Big Data comes up. I'm stepping out of my comfort zone, but I am going to hypothesize that the study of physics and the Internet of Things share certain attributes, and that both are sexy because they are part of Big Data. You're looking at Big Data in the illustration here.

Both physics and IoT begin with the assumption that there's more to the world than what the naked eye can see or any of our other human senses can detect. You can't see atoms, or electrons, or quarks, or any of the smaller particles. The only way to know they are there is to measure their impacts on other particles using sensors.

And sensors are also at the heart of the Internet of Things. In addition to human-detectable phenomena, sensors embedded where we can't see them detect attributes that we can't see, that have no smell, that make no sound, or that are otherwise too small, too large, too fast or too far away for our "native" human sensors to detect. The sensors monitoring the properties of materials in physics, like the sensors in our environment monitoring the air quality, the temperature, or the number of cars passing over a pressure sensor on the roadbed, communicate their readings with time stamps, and these readings accumulate into data sets.

You get the rest: the raw data then becomes the building material upon which analyses can be performed. It's difficult for the common man to discern patterns from the illustration above or from millions of sensor readings from a nuclear power plant. Machine learning and algorithms extract the patterns from the data for us, and we use these patterns to gain insights and make decisions.
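On a toy scale, that pipeline (timestamped readings in, patterns out) might look like the sketch below, with invented readings and a simple statistical rule standing in for the machine-learning step:

# Timestamped sensor readings come in; a rule flags the pattern a
# human couldn't spot in millions of rows. All values are invented.
from statistics import mean, stdev

readings = [
    ("2012-02-20T10:00Z", 71.2), ("2012-02-20T10:01Z", 70.9),
    ("2012-02-20T10:02Z", 71.4), ("2012-02-20T10:03Z", 84.7),  # spike
    ("2012-02-20T10:04Z", 71.1),
]

values = [v for _, v in readings]
mu, sigma = mean(values), stdev(values)

# Flag readings more than 1.5 standard deviations from the mean.
anomalies = [(t, v) for t, v in readings if abs(v - mu) > 1.5 * sigma]
print(anomalies)  # only the 10:03 spike is flagged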

So, my point is that the concept of using computers to analyze large data sets to answer all kinds of questions, the core of Big Data, has been around the research community for decades and applies to many, if not all, fields. IBM has long been leading the charge on this. Here's an interesting project, led by Jeff Jonas, Chief Scientist of IBM's Entity Analytics Group, that just celebrated its first anniversary. A January 2012 HorizonWatching Trend Report presentation on Big Data points to lots of resources.

What's new with Big Data in 2012 is the relative ease with which these very large data sets can be reliably collected, communicated, stored and processed, and, in some cases, visualized.

A New York Times feature article about Big Data's relevance in our lives frames the subject well and then explains why Big Data is trending: everyone wants to see the past and the present, and to understand the world, more clearly. With our improved "visibility" we might be able to make better decisions. The "textbook" example is the Oakland Athletics baseball team's comeback, on which the book and movie Moneyball are based.

With the help of coverage in books, motion pictures, major news media and tech blogs, Big Data is one of the big memes of 2012. Trends like the widespread adoption of Big Data usually lead to large financial gains.

Let's see if I can use this data to make better decisions! Maybe I should re-brand everything I do so that the relationships of my activities to Big Data are clearer to others. Big Spimes? What do you think?

Categories
Internet of Things, Standards

Does IoT Include Virtual Things?

Marco Carugi, one of ZTE's standards "agents," is presiding over the fourth day of the Fourth Meeting of the Global Standards Initiative on the Internet of Things.

In case it has escaped your attention, this is a short description of the activity, from the GSI-IoT Web page:

"The Global Standards Initiative on Internet of Things (IoT-GSI) promotes a unified approach in ITU-T for development of technical standards (Recommendations) enabling the Internet of Things on a global scale. ITU-T Recommendations developed under the IoT-GSI by the various ITU-T Questions – in collaboration with other standards developing organizations (SDOs) – will enable worldwide service providers to offer the wide range of services expected by this technology. IoT-GSI also aims to act as an umbrella for IoT standards development worldwide."

One of the recurring topics on the agendas of past meetings and this week's meeting, and the focus of the first two hours of today, is the definition of the Internet of Things. The definition, and my opinion about the work invested in this topic, have evolved since my post about it last June. It would not be appropriate for me to share the current ITU draft definition before it is published by the IoT-GSI group, so I will limit my remarks to my views on the process and one of the core decisions regarding the scope of IoT.

The tenacity and dedication demonstrated by the contributors are impressive. The systematic decomposition of each and every word used in the definition, and of the possible qualifiers and interpretations of these terms, has produced dozens of internal documents and figures, tracing who contributed what. This traceability is fundamental to all standards processes. What impresses me is the way these engineers have systematically attacked so many fuzzy concepts using a language that is, for nearly all those involved, not their mother tongue (English).

One delicate point on which there has been some debate (though far less than I anticipated) is whether the IoT definition will include in its scope, in addition to physical things, something called "virtual things." These virtual things are files, such as eBooks, photographs, songs and 3D objects, that can be stored, processed and accessed.

In the past I felt that virtual things should not be in scope but, after listening to the contributions, I can see an argument for the IoT to include them. Virtual things will not have attributes such as physical dimensions or communications protocols, nor will they be able to measure the physical world. And purely physical things will not have a digital dimension (kilobytes of storage). But both virtual and physical things have attributes such as the date and time of creation, an author or manufacturer, and an authority to which we will address requests. And, perhaps most important of all, the physical world might be modified as a result of communication with either virtual or physical things.
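One way to picture that overlap is as a data model. The sketch below is my own reading of the discussion, in Python, and not the ITU draft definition:

# Illustration only: virtual and physical things share identity
# attributes but differ in which dimensions can be recorded about them.
from dataclasses import dataclass

@dataclass
class Thing:
    created: str     # date and time of creation
    author: str      # author or manufacturer
    authority: str   # to whom we address requests

@dataclass
class PhysicalThing(Thing):
    dimensions_cm: tuple  # meaningless for a virtual thing
    protocol: str         # how it communicates and measures the world

@dataclass
class VirtualThing(Thing):
    size_kb: int          # meaningless for a purely physical thing

song = VirtualThing(created="2012-02-15T09:00Z", author="A. Composer",
                    authority="label.example.com", size_kb=9600)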

The inclusion of virtual things is important in my mind, but it is not the most hotly debated assertion. Other topics, such as whether there are "personalities" attributable to things, and how to describe them, are even more unclear and sensitive.

Being an observer of these discussions certainly has confirmed my interest in developing, as early as possible, an Augmented Reality glossary of terms on which SDOs will be able to base their own definitions. It has also opened my eyes to the careful editing process defined by the ITU for its work. Results of this process will be published later this year and I hope, for the sake of progress towards standards development, that the definitions will stand the test of time.

Categories
Augmented Reality, Business Strategy

Between Page, Screen, Lake and Life

In my post entitled Pop-up Poetry I wrote about the book/experience Between Page and Screen. Print, art and AR technology mix in very interesting ways, including this one, but I pointed out (with three brilliant examples) that this project is not the first case of a "magic book."

Like many works of its genre, Between Page and Screen uses FLARToolKit to project (display) images over live video coming from a camera pointed at the book's pages. Other tools used in Between Page and Screen include the Robot Legs framework, Papervision for 3D effects, BetweenAS3 for animation and JibLib Flash. Any computer with a webcam (whose user has first downloaded the application) can play the book, which will be published in April. Ah ha! I thought it was available immediately, but now learn that one can only pre-order it from SiglioPress.com.
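The book's own stack is Flash and ActionScript, but the underlying loop (find the page's marker in the camera frame, draw content anchored to it) can be illustrated with a different toolkit. The Python sketch below uses OpenCV's ArUco markers; it is my substitution for illustration, not the FLARToolKit code, and it assumes OpenCV 4.7 or later:

import cv2

# Detect a printed marker in the webcam feed and draw a stand-in
# "pop-up" where it appears, the same move a magic book makes.
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        x, y = corners[0][0][0]  # first corner of the first marker
        cv2.putText(frame, "pop-up poem here", (int(x), int(y) - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    cv2.imshow("magic book", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()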

And, as Joann Pan suggests in her post about the book on Mashable, "combining the physicality of a printed book with the technology of Adobe Flash to create a virtual love story" is different. Pan interviewed the author and the writer of the AR code. She writes, "Borsuk, whose background is in book art and writing, and Bouse, developing his own startup, were mesmerized by the technology. The married duo combined their separate love of writing and technology to create this augmented reality art project that would explore the relationship between handmade books and digital spaces."

The more I've thought about this project and read various posts about Between Page and Screen in recent days, the more confident I am that, while I might enjoy a magic book once or twice, my preferred reading experience is to hold a well-written, traditional book. I decided to come back to this topic after I read about another type of "interactive" book on TechCrunch.

The first thing that caught my eye was the title. Fallen Lake. Fallen Leaf Lake! Of course! I used to live in the Sierra Nevada mountains, where the action in this novel is set, and Fallen Leaf Lake is an exceptionally beautiful body of water. But the post by John Biggs points out that the author of Fallen Lake, Laird Harrison, is going to be posting clues and "extra features" about the characters in the book by way of a password-protected blog.

All these technology embellishments on books seem complicated. Their purpose, Biggs believes, is to differentiate the work in order to get some tech blogger to write about the book and then, maybe, sell more copies.

Finally, Biggs points out that what he really wants in a book, what we all want and what will "save publishing," is good (excellent) writing. Gimmicks like AR and blog posts might add value, but first let's make sure the content is well worth the effort.