Television. From a former life I vaguely remember this broadcast medium that was (and still is, for some people) provided on a screen in a defined sequence of segments called "shows," in an order defined by something called a "program." The content is professionally produced and sometimes approaches the real world. Then there is that genre of television called "reality TV," but that's something else.

Companies that prepare content for broadcast sometimes mix a video signal from a television camera with a digital data stream in such a way that the digital data overlays the signal from the camera, synchronized in real time so well that the viewer can imagine that a line is "drawn" in perspective in the real world. The clearest case of this is the first-down line in American football. A line appears on the television over the video to show the viewer where the ball stopped on its way to the goal. Those who are in the stadium cannot see the line.

A recent article about Augmented Reality (principally about the use of AR in medical use cases) published on the National Science Foundation web site described the experience of seeing the first-down line on television as an example of Augmented Reality. Unfortunately, the differences between composing a video in a studio and sending it out to millions of viewers over a broadcast medium, and composing an AR experience in real time on a user's device for viewing from precisely one pose, are too numerous to be overlooked.

Here are a number of ways the two differ:

#1 pose: the content captured by the television camera is destined to be broadcast to a mass audience. It may be broadcast globally or locally, but it is still a one-to-many signal. In a broadcast studio, the viewer's pose (their context and position with respect to reality) is in no way used to create the experience (remember "AR Experiences"). In television there are "viewers"; in AR there are "users."

Test: If the viewer looks 180 degrees from where the composed scene is rendered, no longer viewing the television at all, the scene (first down line overlay on the video signal) is still there. If an AR user looks away from the point of interest, the augmentation no longer appears.
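The test above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the function name, 2D geometry, and field-of-view value are all my own assumptions, not any real AR framework's API): an AR augmentation renders only when the point of interest falls inside the user's field of view, so turning 180 degrees away makes it disappear, while a broadcast overlay needs no such check at all.

```python
import math

def augmentation_visible(user_pos, user_yaw_deg, poi_pos, fov_deg=60.0):
    """Hypothetical pose check: render the augmentation only if the
    point of interest (POI) lies within the user's field of view."""
    dx = poi_pos[0] - user_pos[0]
    dy = poi_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between the user's gaze and the POI.
    diff = (bearing - user_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# Facing the POI: the augmentation appears.
print(augmentation_visible((0, 0), 0, (10, 0)))    # True
# Turned 180 degrees away: the augmentation is gone.
print(augmentation_visible((0, 0), 180, (10, 0)))  # False
```

The broadcast first-down line, by contrast, is composited into the video frame itself, so no equivalent of `user_pos` or `user_yaw_deg` ever enters the pipeline.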

#2 real time: see point #1. Broadcast looks like it's real time, but it's delayed with respect to the event. If you replay the same sequence of frames captured by the television camera, say 5 milliseconds, an hour, or a year later, the same overlay will be possible. An AR experience requires that all the elements be exactly the same to be reproduced. By definition, every AR experience is unique, because we are unable to travel backwards in time to repeat a moment in the past.

#3 reality: What the viewer sees on the TV screen is a digital overlay on digital media. It is composed centrally by software in the studio. Is the television camera operator's point of view the "reality"? Yes, but only for the operator of the camera.

I understand that the mobile device-based AR experiences we have today suffer from the same weakness in the definition of "reality".

My conclusion is that "broadcast AR" is a misnomer. It may be helpful for introducing the concept of a digital overlay, but it should not be confused with "real" AR. Over time, as more people have AR experiences of their own, we will have less need for poor analogies to define AR, and there may even come a day when we will be able to drop the label "AR" entirely.