The WebVR 1.0 Spec

July 11, 2016

Well, it’s official! The WebVR spec is on the road to becoming a standard with the W3C, the World Wide Web Consortium, which mediates standards for HTML, CSS, XML, and other web technologies.

Link to the spec:

https://w3c.github.io/webvr/

Elements of WebVR 1.0

The WebVR 1.0 spec is designed for rendering VR experiences using a web browser as the source of the content.


What’s Missing

The initial version of WebVR is designed assuming the HTML5 Canvas element is used, via the WebGL interface. It does not have an interface for natively including HTML DOM elements. Future versions might support multiple layers of DOM and 3D content, stacked and rendered in the VR headset. So, for the present, integrated web UI elements (such as those which might be generated by jQuery) have to be simulated in the WebVR HTML5 Canvas display.


Breakdown of Elements

VRDisplay – an object representing an attached VR device. The browser can be queried for a list of these; one of the displays will typically be Google Cardboard, which is a generic headset shell for smartphones. Additional VR devices such as the Oculus Rift or HTC Vive can also be enumerated. The query is made through the browser’s navigator object, e.g.,

var vrDisplay = null;

navigator.getVRDisplays().then(function (displays) {
  // Use the first display in the array if one is available. If multiple
  // displays are present, you may want to present the user with a way to
  // select which display to use.
  if (displays.length > 0) {
    vrDisplay = displays[0];
  }
});

Note that the result is returned as a Promise object, and in 2016 some browsers may need a polyfill to support Promises.

The individual displays each get their own ID. In addition, the currently active (presenting) displays can be read from navigator.activeVRDisplays.

Among the attributes of a VRDisplay is a description of the current capabilities of the VR headset:

interface VRDisplayCapabilities {
  readonly attribute boolean hasPosition;
  readonly attribute boolean hasOrientation;
  readonly attribute boolean hasExternalDisplay;
  readonly attribute boolean canPresent;
  readonly attribute unsigned long maxLayers;
};
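
These capabilities can be used to decide how to treat a given display. A minimal sketch, assuming vrDisplay was obtained from getVRDisplays() as above:

var caps = vrDisplay.capabilities;

if (!caps.canPresent) {
  // This display can report poses but cannot actually show content
  // (for example, a tracking-only device).
}

if (!caps.hasPosition) {
  // Orientation-only (3DOF) hardware such as Cardboard: expect
  // VRPose.position to be null.
}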

The actual scenes are presented in a VRLayer, which gets its bits from an HTML5 canvas, onscreen or offscreen. The display can read a sub-region of a larger <canvas> element.

dictionary VRLayer {
  VRSource? source = null;
  sequence<float>? leftBounds = null;
  sequence<float>? rightBounds = null;
};

Some versions of Firefox have an older version of the API, which returns VRDevices instead of VRDisplays. This is deprecated.
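
To start presenting, one or more VRLayers are passed to the display. A minimal sketch, assuming glCanvas is a WebGL canvas and enterVRButton is a hypothetical “Enter VR” button (the spec requires requestPresent() to be triggered by a user gesture):

enterVRButton.addEventListener("click", function () {
  vrDisplay.requestPresent([{ source: glCanvas }]).then(function () {
    // Presentation has started; the render loop can now submit frames.
  }, function (err) {
    console.error("requestPresent failed:", err);
  });
});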

Eye Field of View

The field of view for each eye is defined by the following object:

interface VRFieldOfView {
  readonly attribute double upDegrees;
  readonly attribute double rightDegrees;
  readonly attribute double downDegrees;
  readonly attribute double leftDegrees;
};
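
For WebGL rendering, these four half-angles can be folded into a per-eye projection matrix. Below is a sketch of such a helper (my own, not part of the spec; it follows the same construction as gl-matrix’s mat4.perspectiveFromFieldOfView):

function projectionFromFieldOfView(fov, near, far) {
  var upTan = Math.tan(fov.upDegrees * Math.PI / 180);
  var downTan = Math.tan(fov.downDegrees * Math.PI / 180);
  var leftTan = Math.tan(fov.leftDegrees * Math.PI / 180);
  var rightTan = Math.tan(fov.rightDegrees * Math.PI / 180);
  var xScale = 2 / (leftTan + rightTan);
  var yScale = 2 / (upTan + downTan);

  // Column-major 4x4 matrix; unset entries stay zero.
  var m = new Float32Array(16);
  m[0] = xScale;
  m[5] = yScale;
  m[8] = -((leftTan - rightTan) * xScale * 0.5);
  m[9] = (upTan - downTan) * yScale * 0.5;
  m[10] = far / (near - far);
  m[11] = -1;
  m[14] = (far * near) / (near - far);
  return m;
}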

The WebVR spec also describes the parameters for rendering each eye’s stereo image:

interface VREyeParameters {
  [Constant, Cached] readonly attribute Float32Array offset;
  [Constant, Cached] readonly attribute VRFieldOfView fieldOfView;
  [Constant, Cached] readonly attribute unsigned long renderWidth;
  [Constant, Cached] readonly attribute unsigned long renderHeight;
};

Here, renderWidth and renderHeight give the recommended render target size for each eye’s viewport, in pixels. Rendering can use either two separate viewports, or a single render target split into side-by-side stereo images.
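
A common approach is a single canvas with side-by-side viewports, one per eye. A minimal sketch, assuming gl is a WebGLRenderingContext for the canvas and drawScene() is a hypothetical per-eye draw function:

var leftEye = vrDisplay.getEyeParameters("left");
var rightEye = vrDisplay.getEyeParameters("right");

// Left eye renders into the left half of the canvas.
gl.viewport(0, 0, canvas.width * 0.5, canvas.height);
drawScene(leftEye);

// Right eye renders into the right half.
gl.viewport(canvas.width * 0.5, 0, canvas.width * 0.5, canvas.height);
drawScene(rightEye);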


Sensors and Environment

Sensor data is represented in the VRPose object, which provides information on the state of a given sensor. Pose data uses the following 3D coordinate system, and assumes a seated position by default.

  • Positive X is to the user’s right.
  • Positive Y is up.
  • Positive Z is behind the user.

The pose attributes supported are:

interface VRPose {
  readonly attribute DOMHighResTimeStamp timestamp;

  readonly attribute Float32Array? position;
  readonly attribute Float32Array? linearVelocity;
  readonly attribute Float32Array? linearAcceleration;

  readonly attribute Float32Array? orientation;
  readonly attribute Float32Array? angularVelocity;
  readonly attribute Float32Array? angularAcceleration;
};

The timestamp is a monotonically increasing value that lets you determine whether the pose data has been updated by the hardware since the last reading.
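
A minimal sketch of reading the pose each frame, per the 1.0 draft (applyOrientation() and applyPosition() are hypothetical scene-update helpers):

var pose = vrDisplay.getPose();

if (pose.orientation) {
  // Orientation is a quaternion (x, y, z, w).
  applyOrientation(pose.orientation);
}

if (pose.position) {
  // Position is in meters, relative to the sitting-space origin.
  // It will be null on orientation-only hardware such as Cardboard.
  applyPosition(pose.position);
}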


The VR Environment

If room-scale VR is supported, the VRDisplay will also return information about the play area, or stage, through its stageParameters attribute.

interface VRStageParameters {
  readonly attribute Float32Array sittingToStandingTransform;

  readonly attribute float sizeX;
  readonly attribute float sizeZ;
};

WebVR assumes a default “sitting” coordinate system. VRStageParameters.sittingToStandingTransform provides a 4x4 matrix that converts those coordinates to standing space.
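
To get standing-space coordinates, multiply a seated-space position by that matrix. A minimal sketch, assuming the gl-matrix library for the vector math:

var standingPosition = vec3.create();

if (vrDisplay.stageParameters && pose.position) {
  // sittingToStandingTransform is a 16-element, column-major 4x4 matrix.
  vec3.transformMat4(standingPosition,
                     pose.position,
                     vrDisplay.stageParameters.sittingToStandingTransform);
}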


Rendering

The VR system uses the VRDisplay.requestAnimationFrame() interface to drive rendering to the headset. However, since the VR headset might refresh its screen at a different rate than the monitor the computer is plugged into, you can’t assume it will fire at the same rate as the standard window.requestAnimationFrame().
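
Putting it together, the render loop is driven by the display itself and ends each frame with submitFrame(). A minimal sketch, assuming renderEyes() is a hypothetical function that draws both eye viewports:

function onAnimationFrame() {
  // Schedule the next frame at the headset's own refresh rate.
  vrDisplay.requestAnimationFrame(onAnimationFrame);

  var pose = vrDisplay.getPose();
  renderEyes(pose);

  if (vrDisplay.isPresenting) {
    // Hand the rendered canvas contents to the headset.
    vrDisplay.submitFrame(pose);
  }
}

vrDisplay.requestAnimationFrame(onAnimationFrame);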

VR headsets typically use fisheye-type lenses, which stretch a flat image into a wraparound view across the user’s field of vision. To compensate for the pixel stretching, the rendering pipeline typically creates a complementary distorted image, often using “barrel distortion.” Since multiple source pixels may be needed to represent one stretched pixel, the HTML5 canvas will often be larger than the apparent image. The required HTML5 canvas size can be computed from the eye parameters:

var leftEye = vrDisplay.getEyeParameters("left");
var rightEye = vrDisplay.getEyeParameters("right");

// Side-by-side stereo: the canvas must be wide enough for both eyes and
// tall enough for the larger of the two recommended render heights.
canvas.width = Math.max(leftEye.renderWidth, rightEye.renderWidth) * 2;
canvas.height = Math.max(leftEye.renderHeight, rightEye.renderHeight);

Utility

Right now, the 1.0 version of WebVR is not “native” in any browser. However, there is a good polyfill-based starting point for experimentation called WebVR-Boilerplate. Here’s the GitHub site:

https://github.com/borismus/webvr-boilerplate

The final WebVR API will allow you to take any element on the page and make it fullscreen. The polyfill has to approximate this feature, and at present it does not work well for <canvas> tags embedded within a standard web page. The best environment is one where the <canvas> is attached to document.body in JavaScript and fills the entire page.

You can keep up on the status of WebVR development on the WebVR Slack channel:

http://webvr.slack.com