Categories
Augmented Reality Research & Development

ElipseAR-Cloud Image Recognition

There has long been a debate among computer vision experts between those who envisage feature extraction and matching running in the network and those who implement it on the device. Many factors must be considered and trade-offs made, but in the end everything boils down to cost: what do you gain by placing each task where, when the tasks must be done in real time? For many applications, feature extraction is a bottleneck due to the lack of on-device computational power. Qualcomm's AR SDK is an example of device-based recognition.
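
As a rough illustration of the device-side cost in question, here is a minimal sketch that times local feature extraction on a single frame using OpenCV's ORB detector. This is generic feature extraction, not ElipseAR's or Qualcomm's pipeline, and the file name is a placeholder.

```python
# Rough sketch: time on-device feature extraction for one camera frame.
# Generic ORB features stand in for whatever a given AR SDK actually uses;
# "frame.jpg" is a placeholder.
import time
import cv2

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)

t0 = time.time()
keypoints, descriptors = orb.detectAndCompute(frame, None)
elapsed_ms = (time.time() - t0) * 1000.0

# For real-time AR this step (plus matching) has to fit in a ~33 ms frame
# budget on the device, or the frame must be shipped to a server instead.
print(f"{len(keypoints)} keypoints in {elapsed_ms:.1f} ms")
```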

ElipseAR, a startup based in Madrid, Spain, is heavily on the network side of the debate. They plan to release a set of tools for markerless (feature-based) Augmented Reality and Computer Vision development that will make image matching and tracking, 3D animation rendering, geolocation using the camera view, and face recognition easier to integrate into AR applications. Existing AR apps? Future AR apps?

The company's web site makes a clear distinction between image recognition, image tracking and matching. What is not clear at the moment, because their position differs depending on which page you are reading, is how much of the ElipseAR processing happens on the device and how much in the network. They may also be conflating "image" recognition with real-time video recognition.

At the moment the company says it will offer its tools for commercial use at no charge. The beta program started in early July and is expected to run until the end of 2011.

Tests must be conducted in real-world circumstances to measure the merits of the new algorithm and its architecture. It will need to be compared not only against other network-based image recognizers such as kooaba, but also against SDKs that have been available much longer, such as the Qualcomm AR SDK, Qconcept and others. It's difficult to imagine, for example, ElipseAR getting out ahead of String Labs, which released its code on June 16, 2011.

Even if the reliability of the ElipseAR algorithms and architecture proves to be up to industry benchmarks, there will continue to be latency in the cellular networks that is out of the control of the developer or user. There have been rumors that the network "effect" can be overcome, but this will never be a universally reliable solution because mobile network coverage is not, and never will be, 100%.

Categories
3D Information Business Strategy News Research & Development

London’s Imperial College

On July 27, 2011 the UK Research Council awarded a £5.9m grant to the “digital city exchange” programme of Imperial College. According to the press release, the funds are to be used to establish a new research center focusing on “smart cities” technologies.

A multidisciplinary team, involving businesses, public administrations and academia, is being put in place to use the city of London as a test bed for emerging smart cities hardware, software and processes. The article in the Financial Times very perceptively puts the focus on the following statements from David Gann, head of the innovation and entrepreneurship group at Imperial College.

"New sensors and real-world monitoring can be combined with “cloud computing” to bring greater efficiency and new services to cities. For instance, data about peak traffic periods and local sensors could be used to avoid congestion for supermarket deliveries.

“London, with all its economic and social diversity, will be a very good place to launch some of these capabilities into new cities around the world and create new jobs and growth. The act of invention is at the point of consumption.”

Another article about the grant emphasizes, as did the press release, more of an urban planning angle.

It's very exciting to see this center being established, although the size of the grant does not seem in line with the ambitions and objectives as they are described, and there should be other centers of its kind connecting to it as well.

Categories
Augmented Reality Research & Development

Reduced Reality

In the physical world, visually noisy environments are common. Some cultures enjoy, or at least live in, a stimulating visual landscape, be it on their screens or in the real world. I recall there being more visual noise in Asian urban landscapes than I am accustomed to. I prefer the work of designers who hide or disguise the clutter of everyday life. Take, for example, power and telephone lines. For a variety of reasons these are above ground in some parts of the world and below in others.

I prefer a visually "simple" world. Blocks of uniform or lightly textured surfaces: the sky, the water of Lake Geneva, even skyscrapers.

Why would the same algorithms and systems used to attach additional information to the real world not also be useful to reduce information? Power and telephone lines could "disappear" from view, as would graffiti and trash.

There was a poster at ISMAR 2010 that demonstrated the "reduction" of reality, using a mobile device to cover or camouflage a QR code in a photograph. By sampling the background in the immediate proximity of the marker and tiling the same pixel colors and textures over it, the system created a sense of continuity and the marker disappeared. Unfortunately, the specific project and references to it are difficult to find, but I hope to see more of this at the next ISMAR event in Basel.
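
As a very rough sketch of the general idea (not the poster's actual method, which I have not been able to find), OpenCV's generic inpainting can fill a masked region from its immediate surroundings, approximating the "sample nearby pixels and cover the marker" effect. The file names and marker coordinates below are placeholders.

```python
# Rough sketch of "diminishing" a region of an image by filling it from
# its surroundings. This uses OpenCV's generic inpainting, not the (unknown)
# method from the ISMAR 2010 poster; file names and coordinates are placeholders.
import cv2
import numpy as np

frame = cv2.imread("scene.jpg")            # photo containing the marker
mask = np.zeros(frame.shape[:2], np.uint8)

# Hypothetical bounding box of the QR code / marker to hide (x, y, w, h).
x, y, w, h = 220, 140, 120, 120
mask[y:y+h, x:x+w] = 255                   # white = pixels to be filled in

# Fill the masked region from surrounding pixels (Telea's method).
clean = cv2.inpaint(frame, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("scene_diminished.jpg", clean)
```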

Categories
3D Information Augmented Reality Research & Development

AR for Blacksburg

The AR-4-Basel project is a pilot for what could become a widespread trend: a municipality, or an area of any size, can make the data sets it owns and maintains for its citizens available to AR developers, who can then prepare AR experiences for visitors and inhabitants.

Ever since starting the AR-4-Basel project in May, I have been planning how to expand and apply the lessons learned to other cities. The first to follow is definitely Barcelona, Spain. The AR-4-Barcelona project is already ramping up. Then, Berlin is my next target. I’d like to explore the possibility of getting something started in Beijing as well, if there is going to be an AR in China conference in 2012.

Another “B” city which has all the earmarks of a future haven for AR experiences is Blacksburg, Virginia!

“The 3D Blacksburg Collaborative is a consortium of multi-disciplinary researchers, experts and students from various universities and governments, who are creating a data and delivery infrastructure for an interactive virtual 3D city model.”

Which “B” city would you nominate for a future AR project?

Categories
Innovation Research & Development

Innovation Research

INSEAD has published its Global Innovation Index for 2011.

"The overall GII scores provide a composite picture of the state of each country’s innovation performance. The Report stresses leaders by index, by income group and by region.

"Switzerland comes in at top place in the overall GII 2011 rankings (up from position 4th last year) on the basis of its strong position in both the Input and Output Sub- Indices (3rd and 2nd, respectively). Although the country does not top any individual pillar, it places within the top 5 in three Input pillars (Institutions, Market and Business sophistication) and both Output pillars (Scientific outputs and Creative outputs)."

Source: INSEAD (2011) Global Innovation Index

Another interesting point to examine is where China is positioned on the scales of R&D users and R&D importers. The following comes from a post by Marc Laperrouza of the LiftLab on his "Time to look east" blog; Marc pulled a chart from the report and made the comment below.

"As with many synthetic indexes, it is always worthwhile to dig further into the data. It turns out that China has a number of strengths and weaknesses. Among the former, the report lists patent applications, gross capital formation, high-tech imports and exports (a large majority are MNC-driven).

"Among the latter, one can find regulatory quality, press freedom and time to start a business. True enough, both business and market sophistication have notably increased over the years and so has scientific output. If China aims to reach the top 20 or higher it will have to work hard (and fast) on its institutions."

Categories
Augmented Reality News Research & Development

Pittsburgh Pattern Recognition

On July 22, 2011, Google acquired PittPatt, the Pittsburgh Pattern Recognition team, a privately held spin-out of CMU's Robotics Institute.

Three questions jumped out when I learned of this acquisition.

  • Why? Doesn't Google already have face recognition technology?
    Unfortunately, based on the publicly available information, it's not clear what is new or different about PittPatt's technology. Okay, so they have an SDK. There are several possible explanations for this acquisition. Maybe the facial recognition technology Google acquired with Neven Vision in August 2006, and released as part of Picasa in the third quarter of 2008 (it appeared in Picasa as early as May 2007), was insufficient. Insufficient could mean too often inaccurate, too difficult to implement on mobile, or not scalable. That doesn't seem likely.
    Maybe the difference is that the PittPatt technology works on video as well as still images. YouTube already has a face recognition algorithm, but it does not run in real time. For AR, it would be valuable if face recognition and tracking performed reliably in real time.
    Another possible explanation has to do with IP. Given the people who founded PittPatt, perhaps there are intellectual properties that Google wants for itself or to which it wants to prevent a competitor from having access.
     
  • What are the hot "nearby" properties that will get a boost in their valuation as a result of Google's purchase?
    Faces are the most important attribute we have as individuals, and the human brain is hard-wired to search for and identify faces. Simulating what our brains do with and for faces is a fundamental computer vision challenge. Since this is not trivial, and since so many applications could be powered by face recognition (and once algorithms can recognize faces, other 3D objects will not be far behind), a lot of resources are continually going into developing robust, accurate algorithms.

    Many, perhaps dozens, of commercial and academic groups continually work on facial recognition and tracking technology. Someone has certainly done the landscape analysis on this topic. One of the face recognition research groups with which I've had contact is at Idiap in Martigny, Switzerland. Led by Sebastien Marcel, this team focuses on facial recognition accurate enough to serve as the basis for granting access. KeyLemon is an Idiap spin-off using the Idiap technology for biometric authentication on personal computers. And there is (almost certainly) already a sizable group within Google dedicated to this topic.
     

  • What value-added services or features can emerge that do not conflict with Google's privacy policy and have not already been thought of or implemented by Google and others?
    This is an important question that probably has a very long, complex, multi-part answer. I suspect it has a lot to do with 3D objects. What's great about studying faces is that there are so many different ones to work with and that they are plastic (they distort easily). When the algorithms for detecting, recognizing and tracking faces in video are available on mobile devices, we can imagine that other naturally occurring, plastic objects would not be too far behind. (A minimal sketch of frame-by-frame face detection on video follows this list.)
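
To make the real-time constraint concrete, below is a minimal sketch of frame-by-frame face detection on a live video stream using OpenCV's stock Haar cascade. It performs plain detection only, not the recognition or tracking PittPatt offered, and the per-frame timing simply illustrates why doing this reliably in real time, especially on mobile hardware, is the hard part.

```python
# Minimal sketch: per-frame face *detection* on a webcam stream with OpenCV.
# This is not PittPatt's (or Google's) technology; it only illustrates the
# real-time, frame-by-frame constraint discussed above.
import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # default camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.time()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    dt_ms = (time.time() - t0) * 1000.0    # detection cost for this frame
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, f"{dt_ms:.0f} ms/frame", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```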

I hope Eric Schmidt is proven wrong about facial recognition having no place in the future of Google Goggles and similar applications, and that we soon see what is behind the curtain of the PittPatt acquisition!

Categories
Business Strategy Internet of Things Social and Societal

Shaspa-Shared Spaces

Oliver Goh of Shaspa Research said in an interview with Into Tomorrow during CES 2010 that "smart technologies" should solve the real-world problems we experience. That oversimplifies the situation a bit, I think. The types of problems we as individuals want technology to solve differ with our circumstances (age, home vs. business, country of residence, culture, etc.), and the challenges facing businesses also vary widely depending on the domain, currency fluctuations and so forth.

So how could one device detect any circumstance and be ready to respond? Good question! One which I hope to be able to ask about the Shaspa Bridge.

According to the Shaspa web site, where I found this diagram, their technology connects sensors, gathers data and supports software for decision making and the management of resources. Their applications focus on shared living and working spaces, hence the name: "Sha" for Shared and "Spa" for Spaces.

Sounds remarkably reminiscent of the applications built on the Pachube platform, which use sensors in the environment or on a smartphone to inform decision making. But the companies with which Shaspa seeks to do business are quite different and, although there is reference on the site to open and interoperable solutions based on standards, the concepts of Open Source and of building communities of users and developers are noticeably absent from their positioning.

Shaspa has some points in common with WideTag in that there is a social media component to the platform. And, similarly to WideTag over the past year, Shaspa does not appear (based on its web site "news" section) to be making much noise. The most recent posting on SlideShare is already over 24 months old. The company could be conserving resources for when there are greater opportunities for businesses serving the developers of solutions based on the Internet of Things, or busy actually doing projects which are too sensitive to make public.

Could Shaspa be one of the companies which will get a positive boost from the recent acquisition of Pachube?
 

Categories
Business Strategy Internet of Things

WideTag too?

With the dust settling around the Pachube acquisition, it's important to consider other companies that might be out there in the same category and impacted by the change in the landscape. One of these companies is WideTag. Although it is technically based in Redwood City, California, the company was founded by three Italians and I believe that the "heart" of the project was in Northern Italy.

WideTag's angle on the sensor data aggregation problem was to provide a software platform with a social media component. Aside from the emphasis on social media, WideTag and Pachube are very similar. Compare this text with the Pachube mission:

"The WideSpime framework for massive data collection applications allows for the rapid development of highly scalable, and robust vertical applications in the areas of energy, environment, industrial monitoring, and others.

The OpenSpime development libraries have been put in open-source in order to spur the growth of a healthy community sharing the spime-based vision of the forthcoming Internet Of Things. In addition to this, Roberto Ostinelli, WideTag’s CTO, released in open-source Misultin >-|-|-|<>, a high-performance http server."

The major difference between Pachube and WideTag today is that WideTag is no longer an active business, while Pachube now has a major sponsor and deep pockets from which to draw.

It was clear from the declining level of newsworthy activity and developments throughout 2010 that the company was not growing. In March 2011, a post on the site by WideTag CEO Leandro Agrò announced that the three co-founders had gone their separate ways but were thankful for the opportunity they had had to work in the exciting field of the Internet of Things. What made the difference? Was it a resource limitation?

So now, with the Pachube property valuation in mind, is there an opportunity to pour a little cash in and revive WideTag? Is there a WideTag Phase 2? Or is there a fresh, new company, like Open Sen.se, coming in to fill the void?

Categories
Internet of Things News

Pachube Acquired by LogMeIn

The news broke earlier today that Woburn, Mass.-based LogMeIn, a provider of software for remotely accessing computers and mobile devices, acquired Connected Environments, the provider of Pachube, for approximately $15M in cash. In its press release, and in the investor relations conference call that followed, LogMeIn said that it intends to leverage the acquisition to expand its Gravity platform while leaving the existing team in place. Usman Haque, the founder of Connected Environments and the individual most closely identified with the company's vision, wrote a sincere post on his blog about his hopes for the future.

Pachube (pronounced Patch Bay) has been around for nearly 4 years (the service was launched in 2008) and has had a tremendous impact on the development of concrete Internet of Things projects.  I hope that this continues and, with the resources of the parent company, expands in the future.

A few words from the LogMeIn press release:

"The Pachube Service and User Community

Pachube is an Internet of Things pioneer.  Their service offers real-time monitoring and management of any type of connected device. Pachube makes it easy for people to connect their devices and sensors to its service, to publish data, and to receive data and instructions from other devices. The Pachube service also collects and stores the published datastreams for further analysis and visualization. Using the Pachube service, individuals, developers and businesses can create applications, services and products that leverage the data created by these connected devices. In doing so, Pachube empowers people to share, collaborate and make use of the information generated by the world around them.  Currently, Pachube users send more than seven million datapoints to the service each day."
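
To make the "publish data" step concrete, here is a hypothetical sketch of a device pushing a sensor reading to a Pachube-style feed over HTTP. The endpoint URL, API-key header and JSON payload shape are illustrative assumptions, not a transcription of Pachube's documented API.

```python
# Hypothetical sketch of publishing a sensor reading to a Pachube-style
# REST feed. Endpoint URL, API-key header and JSON layout are assumptions
# made for illustration, not the documented Pachube API.
import json
import urllib.request

FEED_URL = "https://api.example-datafeed-service.com/v2/feeds/12345"  # placeholder
API_KEY = "YOUR_API_KEY"                                               # placeholder

payload = {
    "datastreams": [
        {"id": "office_temperature", "current_value": "22.5"},
    ]
}

req = urllib.request.Request(
    FEED_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"X-ApiKey": API_KEY, "Content-Type": "application/json"},
    method="PUT",                    # update the feed's current values
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)               # expect 200 on success
```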

The Pachube community is, in my mind, the company's most valuable asset, and one which cannot quickly be rebuilt. I wonder whether LogMeIn will be able to nurture and grow a community composed largely of people who are firmly devoted to open source.

What do you think?

Categories
Innovation Research & Development

3D City Models and AR

Google Street View was certainly a trail-blazing concept and it has entered the mainstream. But it was not the first such service, nor was Google the first company to conceive of collecting data about the physical world by driving a specially equipped vehicle (with one or more cameras, high-performance GPS and other sensors) through space. Decades earlier, the Jet Propulsion Laboratory worked on this concept in order to permit vehicles landing on the moon (or other celestial bodies) to record their immediate environment. Earthmine is a pioneer not only in capturing the real world (using designs developed by the JPL) but also in exploring business models based on these data sets. What do these have in common? They proved that the ambitious goal of digitally "capturing" the real world, in a form that supports later navigation through the data, was achievable.

As the technologies developed in these projects have evolved and become more powerful in every dimension, and as competitors based on other maturing technologies have emerged, systems are capturing the physical world at ever higher resolution, and the data gathered produce increasingly accurate models at lower cost.

Instead of “manually” building up a 3D model from a 2D map and/or analog data, urban environments are being scanned, measured and modeled at an amazing speed, and at lower cost than ever before. Fascinating, but to what end?

In the AR-4-Basel project, we seek to make accurate 3D models available to AR developers so that the digital representation of the real world can serve as the basis for higher-performance AR experiences. The concept is that if a developer were able to use the model when designing experiences, or when placing content, they would have a virtual reality in which to experiment. Then, in the real world, the user's camera-equipped device would automatically extract features, such as the edges of buildings, roofs and other stationary attributes of the scene, and match them with the features "seen" earlier in the digital model. The digital content would be aligned more accurately, and the process of augmenting the world with the desired content would be faster.
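
Here is a rough sketch of the matching step described above, assuming a view of the 3D city model has been rendered and saved as an image alongside a live camera frame. It uses generic ORB features and brute-force matching rather than any particular AR SDK's pipeline, and the file names are placeholders.

```python
# Sketch: match features between a live camera frame and a pre-rendered view
# of a 3D city model. ORB + brute-force matching stand in for whatever a real
# AR pipeline would use; file names are placeholders.
import cv2

frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)        # live view
model_view = cv2.imread("model_render.jpg", cv2.IMREAD_GRAYSCALE)   # rendered 3D model

orb = cv2.ORB_create(nfeatures=1000)
kp_f, des_f = orb.detectAndCompute(frame, None)
kp_m, des_m = orb.detectAndCompute(model_view, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_f, des_m), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences")
# With enough good matches, cv2.findHomography (or a full pose estimate
# against the 3D model) could align the digital content to the camera view.
```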

In order to determine whether this is more than just a concept, I need to find and enlist the assistance of 3D city model experts. Here are a few of the sites I have turned to in search of such knowledge:

This process is proving to be time consuming but it might yield some results before another solution to improve AR experience quality emerges!

Categories
2020 Internet of Things

The Singularity

Ray Kurzweil is one of the pundits I like to follow when I want to look into the distant future. I first became aware of his work in mid-2001 when I skimmed The Age of Spiritual Machines and then, for reasons that escape me, it simply rested on my bookshelf amongst many other thought-provoking works. It wasn't until I discovered the heavy volume for which he is better known today that I took in the magnitude of Kurzweil's vision: that technological advancement will be central to unlocking the enduring mysteries of brain function.

I got the hard copy and read The Singularity is Near in mid-2006, as soon as it came out. It was an excellent sequel, of sorts, to an extremely well-written work by Washington Post writer Joel Garreau, Radical Evolution. I highly recommend that anyone who wants to examine the future begin with Garreau's work because it offers a greater variety of perspectives than Kurzweil's.

In late 2008, Singularity University was born. It's great that there is a working think tank with a "real world" laboratory examining the scenarios he (and now others) propose. Ever since the establishment of Singularity U, it seems that Kurzweil's "properties" have gained a lot of momentum. But they are not without detractors.

A blog post entitled "The Singularity is Far" caught my attention because it so directly questions some of the Singularity theory's basic premises about the human brain and future technologies. And it is featured on the Kurzweil site! While a neuroscientist's view is one to which we should listen on the topic of brain-enhancing technologies, and he raises many excellent points in his essay, I find the nanobot scenario very attractive, regardless of when it may finally become possible!

It would be fantastic to have the opportunity to study at or attend some sessions of Singularity U. I'd really like to learn how they address obstacles, so that I could apply their techniques to the projects on which I work. Ideally, such an investment would be something from which I would benefit long before we have nanobots flowing in our bloodstreams!

Categories
Internet of Things Policy, Legal, Regulatory Social and Societal

Smart Cities and Big Citizens

The AR-4-Basel project is a framework by which public data about a city, more specifically the city of Basel, can be put in the hands of Augmented Reality developers working with a variety of tools and platforms, and by which the development community is encouraged to be creative. Many scenarios for AR in urban environments are aimed at consumers. The end goal is that if we knew more about our immediate environments, we might make different decisions.

The departments of the city of Basel with which I'm in communication are primarily thinking of the Internet of Things, and AR in particular, as a professional tool: enabling people to do their jobs more efficiently in the field, perhaps saving resources and reducing waste (increasing efficiency), and supporting better decisions which might impact their lives or those of others.

So, in the context of this project, I'm spending a lot of time speaking with experts and reading the opinions of those much better informed in these matters of "smart cities" than I am. Martijn de Waal is one of those who has invested heavily in this topic and clearly "gets it."

One of the posts that I found particularly enlightening is a "dialog" of sorts between Ed Borden of Pachube and Adam Greenfield of Urbanscale. Rather than read my paraphrasing, please read it.

At this point, the jury is out on whether these are really different positions and, if they are, which best characterizes the situation. It is early enough that cities (BigGov) and their managers (politicians) could "wake up" and take a more active role in their own technology use. But not all citizens want to, or should, participate in decisions that require access to all of the data, some of it sensitive. And it is definitely true that citizens can and should be involved in some of these services which primarily benefit them.

I look forward to seeing this dialog continue and to learning more from the experts in this field. Maybe as a small citizen of a small urban area in a small country, I will be able to make a difference in how others live.