There is currently no consistent method for determining whether an AR experience has been developed and published for a physical-world person, place or thing, permitting the use of Augmented Reality for deeper understanding or entertainment. The experience may exist, but the user must be “called to action” by a message. A call to action can come from a friend or professional providing a URL, or from the publisher raising user awareness with a symbol or logo.
Now let us imagine a world in which devices have, or are connected to, unlimited processing power, network resources and sensors; in which AR experience developers have been busy encoding augmentations with valuable interactions but no longer need to attach a call to action; and in which automatic systems deployed on mobile devices and in networks have rendered the manual experience authoring processes of 2015 unnecessary for the vast majority of assets and real-world objects.
Unlike in 2015, experiences are not discrete (isolated to one or a few targets) but can be the continuous integration of more than one information source. In fact, the digital assets available for viewing in synchrony with a user’s focus of interest are supplied by multiple data publishers and fused together to enrich the user’s daily life.
In a target-rich environment, the user’s system continually compares the context with databases of digital assets, much as the navigation services of 2015 do. Provided the user has not requested to filter experiences down to only a few attributes, the user context will be continually monitored. Regardless of point-of-interest density, digital information (in navigation systems, the route) will appear.
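The continual comparison of context against an asset database, with an optional user-configured attribute filter, can be sketched as follows. This is a minimal illustration, not a specified interface; the `Asset` structure, attribute names and the matching rule are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """A published digital asset tagged with the attributes that trigger it."""
    name: str
    attributes: set

def match_assets(context: set, assets: list, user_filter: set = None) -> list:
    """Return assets whose trigger attributes are all present in the
    current context. If the user has configured a filter, only assets
    carrying at least one filtered attribute are considered at all."""
    matched = []
    for asset in assets:
        if user_filter and not (asset.attributes & user_filter):
            continue  # user chose to see only a few attribute classes
        if asset.attributes <= context:
            matched.append(asset)
    return matched

# The user's agent would re-run this on every context update, much as a
# navigation system re-evaluates the route at every position fix.
catalog = [
    Asset("museum-tour", {"museum", "lobby"}),
    Asset("bus-schedule", {"bus-stop"}),
]
hits = match_assets({"museum", "lobby", "rain"}, catalog)
```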
In the future, users’ experiences of their world could be composed of digital assets presented as needed, in context and from multiple sources into a single experience. This is one of the many benefits of there being a widely-adopted discovery architecture accessible to the user’s software client.
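The fusion of assets from multiple publishers into a single presentation stream could be as simple as a relevance-ordered merge. The sketch below assumes each publisher supplies (asset, relevance) pairs; the publisher names, scoring and output format are illustrative only.

```python
def fuse_experiences(sources: dict) -> list:
    """Fuse per-publisher asset lists into one presentation stream,
    ordered by relevance (highest first), regardless of which
    publisher supplied each asset.

    `sources` maps publisher name -> list of (asset_name, relevance)."""
    merged = [
        (relevance, asset, publisher)
        for publisher, assets in sources.items()
        for asset, relevance in assets
    ]
    merged.sort(key=lambda t: -t[0])
    return [f"{publisher}:{asset}" for relevance, asset, publisher in merged]

# Two hypothetical publishers contributing to one fused experience:
stream = fuse_experiences({
    "city-data": [("transit-overlay", 0.9)],
    "retail":    [("storefront-offer", 0.4), ("menu", 0.7)],
})
```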
AR Experience Discovery Defined
AR Experience Discovery is the result of new software intelligence being built into network-based identity and data management systems, and into mobile user agents connected to sensors that continually monitor the user’s context, such that digital assets encoded with triggers matching stimuli in the user’s environment will be delivered (though not necessarily presented). Experiences prepared by publishers whose assets reside on servers connected to Discovery services, and who have a relationship with the individual user or with a class to which the user belongs, will be delivered and available for presentation if and when the trigger is recognized.
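The distinction between an asset being delivered and being presented can be made concrete with a small sketch of a user agent. All class, method and trigger names here are illustrative assumptions, not part of any defined system.

```python
class UserAgent:
    """Minimal sketch of the deliver-then-present split: assets are
    delivered (cached) as soon as a publisher trigger matches the user's
    broad context, but presented only when the trigger is actually
    recognized among live sensor stimuli."""

    def __init__(self):
        self.delivered = {}  # trigger -> asset, cached but not yet shown

    def deliver(self, trigger: str, asset: str):
        """Discovery service pushes an asset whose trigger may occur soon."""
        self.delivered[trigger] = asset

    def on_stimulus(self, stimulus: str):
        """Called by the sensor pipeline; returns an asset to present,
        or None when nothing delivered matches this stimulus."""
        return self.delivered.get(stimulus)

agent = UserAgent()
agent.deliver("landmark:clock-tower", "clock-tower-history-overlay")
agent.on_stimulus("landmark:clock-tower")  # trigger recognized: present asset
agent.on_stimulus("landmark:fountain")     # nothing delivered for this trigger
```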
Support for AR Experience Discovery requires that new steps be introduced in the AR authoring, publishing and detection processes.
At minimum, an AR Experience Discovery system will involve:
1. a web services configuration interface for end-user preferences;
2. a web services interface to automatically send a stream of secure, personalized queries from the user’s application to one or more selected discovery services;
3. a web service interface on the discovery service provider to receive queries and associated assets (e.g., tokens) from authenticated users;
4. a web service interface between the discovery service and one or more AR publishing environments of which it is aware and for which it stores a catalog.
Once the end user has completed and saved configurations, the discovery services will run in the background.
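The four interfaces enumerated above can be sketched as a single toy, in-memory discovery service. Method names, payload shapes and the trigger/category vocabulary are all illustrative assumptions, not a published specification.

```python
class InMemoryDiscoveryService:
    """Toy stand-in for the four interfaces of a discovery service:
    (1) preference configuration, (2)+(3) the personalized query
    stream from authenticated user agents, and (4) catalog sync
    with publishing environments."""

    def __init__(self):
        self.prefs = {}    # (1) per-user preference documents
        self.catalog = {}  # (4) publisher catalogs: trigger -> asset record

    def save_preferences(self, user_id, prefs):    # interface (1)
        self.prefs[user_id] = prefs

    def sync_catalog(self, publisher_entries):     # interface (4)
        """Merge a publisher's trigger -> asset entries into the catalog."""
        self.catalog.update(publisher_entries)

    def query(self, user_id, context_triggers):    # interfaces (2) + (3)
        """Receive one item of the user agent's query stream and return
        catalogued assets matching the context, honoring preferences."""
        allowed = self.prefs.get(user_id, {}).get("categories")
        hits = []
        for trigger in context_triggers:
            asset = self.catalog.get(trigger)
            if asset and (allowed is None or asset["category"] in allowed):
                hits.append(asset["name"])
        return hits

svc = InMemoryDiscoveryService()
svc.save_preferences("alice", {"categories": {"history"}})
svc.sync_catalog({
    "geo:48.8584,2.2945": {"name": "tower-tour", "category": "history"},
    "logo:acme":          {"name": "acme-ad",    "category": "ads"},
})
svc.query("alice", ["geo:48.8584,2.2945", "logo:acme"])  # ['tower-tour']
```

Once configured, only `query` runs repeatedly, in the background, against the user’s context stream; the other interfaces are invoked when preferences or catalogs change.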
Beginning in March 2014, experts began to gather several times per month to informally discuss the development and definition of the first AR Discovery use cases. A goal was formulated: we seek to develop examples of running code that demonstrate the components and interfaces of a simple, automated AR experience discovery service.
We will explore how this resembles or differs from other discovery systems, and develop code that reuses existing components, protocols and interfaces where they are possible and available.
Our long-term vision is that AR Experience Discovery will be open, ubiquitous, easy to manage and beneficial to end users.
Open AR Experience Discovery will be valuable to all segments of industry and society: it will reduce the effort necessary for developers to make AR experiences findable and the effort necessary for end users to have experiences matching their needs, honor the policies of all segments in the end-to-end system (including the end user), and foster innovation through new combinations and configurations of discovery service components.
To reduce the possibility of future misunderstandings, we are making sure that all project participants share a common definition of AR discovery. This page serves, in part, that purpose.
We have described preliminary use cases. As new team members join, we will review the use cases and discuss possible simplifications or enhancements.
We have begun to prepare interaction diagrams and will refine these to make sure we understand the required system components.
The work toward our shared goal begins by repurposing what already exists and soliciting the code we need from known sources.
We will describe the functional blocks referred to in the interaction diagram.
We will investigate both existing and new information encoding and exchange standards for use in or possible extensions required for AR discovery.
Where we find that an open interface is lacking (something is missing), we will identify the gap.
We will work with AR experience developers to obtain content and access to existing AR experiences.
Terms and Conditions of Participation
The task force is working on the AR Experience Discovery project in the open. All those who wish to work on this project agree to a simple “honor system” with respect to contributions and intellectual property. Anyone who joins will be notified of this policy and must agree to disclose, to all members of the group by way of the archived mailing list, anything that could be or is known to be proprietary before introducing it.
The task of defining requirements that may lead to new protocols, components and interfaces will be undertaken when needed, in the most appropriate environment (i.e., the most appropriate SDO’s pre-standardization tools).
For more information
If you would like more information, to meet other members, or to participate in a meeting of this project task force, please contact Christine Perey: email@example.com.