Supporting sensors in Windows 8


Recent advances in sensor technology are catalysts for the acceleration and evolution of user experiences on PCs. The ability to react to changes in ambient light, motion, human proximity, and location is becoming a common and essential element of the computing experience. Even something simple, like an ambient light sensor that adjusts display brightness in a room with changing light, is potentially a basic scenario for desktop PCs. Of course, we also want to make sure you have full control over the use of these peripherals, since we know that different sensors leave open opportunities for risk or abuse that some folks might not be comfortable with. This post looks at the details of supporting sensors in Windows 8 and was authored by Gavin Gear, a PM on the Device Connectivity team.
--Steven

The first thing we explored about sensors was how Windows 8 should use them at the system level, to adapt the PC to the environment while preserving battery life.

Adaptive brightness
The first system feature was automatic display brightness control, or what we call “adaptive brightness.” We first introduced this feature in Windows 7 using ambient light sensors (ALS), and it is targeted at mobile form factors like slates, convertibles, and laptops. With today’s display panels supporting brightness levels at approximately twice the intensity of what was common just a few years ago, this feature is more important than ever. By dynamically controlling screen brightness based on changing ambient light conditions, we can optimize the level of reading comfort, and save battery life when the screen is dimmed in darker environments.



A tablet PC in harsh outdoor lighting with adaptive brightness (left), and without (right)

You can see here that adaptive brightness helps you see content on the screen more clearly, since the screen automatically gets brighter when the tablet enters a bright environment. And for those of you who use your desktop PCs in a sunny room, you know this same thing can happen at different times of the day in different seasons.
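The same ambient light data that drives adaptive brightness is also available to apps. As a minimal sketch (assuming the PC has an ambient light sensor, and using the WinRT sensor API described later in this post), an app can read the current illuminance like this:

var lightSensor = Windows.Devices.Sensors.LightSensor.getDefault();

// getDefault() returns null if no ambient light sensor is present
if (lightSensor) {
    lightSensor.addEventListener("readingchanged", function (e) {
        // Illuminance is reported in lux; a dim room is on the order of tens
        // of lux, while direct sunlight can exceed 10,000 lux
        var lux = e.reading.illuminanceInLux;
    });
}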

Automatic screen rotation
Many smartphones and other mobile devices have established the expectation that when you rotate the device, the graphic display will also rotate and adapt to the new orientation (including adapting to aspect ratio changes). Data from an accelerometer allows the device to determine its basic orientation. By automatically rotating the screen, people can use their devices (primarily slates and convertibles) in a more natural and intuitive way, without needing to manually rotate the screen with software controls or hardware buttons.



Windows 8 Start screen in landscape and portrait orientations
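Apps don’t need to derive this orientation from raw accelerometer data themselves. As a minimal sketch (assuming the device has the necessary sensors, and using the WinRT sensor API described later in this post), an app can simply be notified when the device is rotated:

var simpleOrientation = Windows.Devices.Sensors.SimpleOrientationSensor.getDefault();

// getDefault() returns null if the required sensors are not present
if (simpleOrientation) {
    simpleOrientation.addEventListener("orientationchanged", function (e) {
        // e.orientation is a SimpleOrientation value such as notRotated,
        // rotated90DegreesCounterclockwise, faceup, and so on
        if (e.orientation === Windows.Devices.Sensors.SimpleOrientation.notRotated) {
            // the device is back in its default orientation
        }
    });
}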

Developer support for sensors
Beyond figuring out the basics for how a Windows 8 system might use sensors, we also needed to think about how apps might use sensors. We looked at a variety of examples of sensor-enabled apps including games, commercial applications, tools, and utilities, to help us determine which scenarios to support.

First on the list was the ability for apps to understand motion and screen rotation. This requires an accelerometer – a device that can be used to measure the force due to gravity, and the motion of the device itself. But most scenarios require more than just an understanding of motion and gravity. Orientation is also an important requirement for many applications. To enable a PC to understand orientation we needed to integrate the functionality of a compass.

Supporting a compass would at minimum require a 3D accelerometer (which measures acceleration on three axes) and a 3D magnetometer (which measures magnetic field strength on three axes). This combination of sensors is called a 6-axis motion and orientation sensing system, and can support a basic tilt-compensated compass, screen rotation, and certain casual game apps like a labyrinth-style game. However, in our testing and prototyping, we found the 6-axis motion sensing system has two key drawbacks: sporadic compass inaccuracy, and a lack of the responsiveness required by 3D interactive games.

Recently, a new type of sensor has started to emerge on phone platforms – the gyro sensor. Gyro sensors measure angular speed, typically along three axes. You can also use the data from gyro sensors to increase the responsiveness and accuracy of 3D motion-sensing systems. A gyro sensor is very sensitive, but it lacks any form of orientation reference (such as gravity or a north heading).

This diagram shows how gyro data is represented as a set of three rotations about the three primary axes of the device:



Initially, some thought that the need for such sensors was limited to very few apps, such as specialized games. But the more we examined the 3D motion and orientation sensing problem, the more we realized that applications are much more immersive and attractive if they react to the kind of motion humans can easily understand, such as shakes, twists, and rotations in multiple dimensions. With these kinds of sensors it would certainly be possible to build very immersive 3D games, but they would also enable lots of other apps to respond more naturally to a variety of motions, including mapping and navigation applications, measuring utilities, interactive (between two machines) applications, and simple apps like casual games.
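To make the gyro data concrete: the three rotation rates shown in the diagram above are exactly what a gyro reports to an app. Here is a minimal sketch (assuming a gyro is present, and using the WinRT sensor API described later in this post) that reads the angular velocity about each axis:

var gyrometer = Windows.Devices.Sensors.Gyrometer.getDefault();

// getDefault() returns null if no gyro is present
if (gyrometer) {
    gyrometer.addEventListener("readingchanged", function (e) {
        // Angular velocity is reported in degrees per second about each axis
        var rotX = e.reading.angularVelocityX;
        var rotY = e.reading.angularVelocityY;
        var rotZ = e.reading.angularVelocityZ;
    });
}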

Engineering challenges
We started our exploration into motion apps by prototyping some 3D experiences. The first challenge was to map the physical orientation of the device directly to a virtual 3D environment in the app. We decided to model a simple augmented reality experience by emulating a tablet as a window into a virtual world. The concept was fairly simple: when you move the device while looking at the screen, the virtual environment (the inside of a room) would appear to stay stationary.

Initially, we tried an experiment using the accelerometer to map up and down movement of the device to up and down movement of the 3D environment in response. When you hold the device still, the scene should remain stable. When you tilt the device, the view should tilt up or down. Right away we encountered an issue: “noise” in the data from the accelerometer sensor was causing jittery movement of the 3D environment even when the device was held stationary. We were able to see this noise clearly by capturing accelerometer data and charting it.



Without noise, the lines on the chart would be straight, with no vertical deviation. The conventional way to remove such noise is to apply a low-pass filter to the raw data stream. When we implemented this mitigation in our prototype, the resultant motion was smooth and stable (jitter-free). But the low-pass filter introduced another problem: the app lost responsiveness and felt sluggish when responding to motion. We needed a way to compensate for this jitter without reducing responsiveness.
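As an illustration of the approach, a simple low-pass filter can be implemented as an exponential moving average applied to each axis. This is only a sketch; the smoothing factor below is a hypothetical value chosen to show the trade-off, not a value we shipped:

// ALPHA controls the amount of smoothing: smaller values remove more jitter,
// but make the app feel more sluggish when responding to real motion.
var ALPHA = 0.1; // hypothetical value, for illustration only
var filteredX = 0, filteredY = 0, filteredZ = 0;

function lowPassFilter(reading) {
    filteredX += ALPHA * (reading.accelerationX - filteredX);
    filteredY += ALPHA * (reading.accelerationY - filteredY);
    filteredZ += ALPHA * (reading.accelerationZ - filteredZ);
    return { x: filteredX, y: filteredY, z: filteredZ };
}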

The next experiment was to provide the ability to “look left” and “look right” in our virtual 3D environment app. We used a 6-axis compass solution (3D accelerometer + 3D magnetometer) to support this type of movement. Although this kind of worked, the movement was not consistent due to the general instability of the 6-axis compass. It was also challenging to blend the up-and-down movement with the left-and-right movement.

From these experiments it was clear that this combination of sensors could not provide the fluid and responsive experience we wanted. The accelerometer sensor was not providing clean data, and could not be used alone to determine device orientation. The magnetometer was slow to update and was susceptible to electromagnetic interference (think of a compass needle that sticks in one position occasionally). We had yet to experiment with the gyro sensors, but because gyros could only determine rotational speed, it wasn’t clear how they could help.

Creating “sensor fusion”
But further experimentation demonstrated that using all three sensors together could solve the problem. It turns out that an accelerometer, a magnetometer, and a gyro can complement each other’s weaknesses, effectively filling in gaps in data and data responsiveness. Using a combination of these sensors, it is possible to create a better, more responsive, and more fluid experience than the sensors can provide individually. Combining the input of multiple sensors to produce better overall results is a process we call sensor fusion.

Essentially, sensor fusion is a case where the whole is greater than the sum of the parts. A typical sensor fusion system uses a 3D accelerometer, a 3D magnetometer, and a 3D gyro to create a combined “9-axis sensor fusion” system. To understand how this system works, let’s take a look at the inputs and outputs.


9-axis sensor fusion system

This diagram shows two types of outputs: pass-through outputs in which the sensor data is passed directly to an application, and sensor fusion outputs in which the sensor data is synthesized into more powerful data types.

Some applications can use pass-through sensor data directly. This data can be used at “face value” for a variety of scenarios. One such scenario is an app that implements a pedometer to count your steps as you walk. The graph below shows the output of the accelerometer for a person walking with a tablet PC. This graph clearly shows it is possible to detect every step the person took.





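As a rough sketch of how such pass-through data might be used, a pedometer app could watch the magnitude of the acceleration vector and count a step on each threshold crossing. The threshold below is a hypothetical value for illustration only:

// STEP_THRESHOLD is a hypothetical value (in g's) chosen for illustration
var STEP_THRESHOLD = 1.2;
var stepCount = 0;
var aboveThreshold = false;

function onAccelerometerReading(reading) {
    // Magnitude of the acceleration vector; roughly 1 g when the device is at rest
    var magnitude = Math.sqrt(
        reading.accelerationX * reading.accelerationX +
        reading.accelerationY * reading.accelerationY +
        reading.accelerationZ * reading.accelerationZ);

    // Count one step per rising edge through the threshold
    if (magnitude > STEP_THRESHOLD && !aboveThreshold) {
        stepCount++;
        aboveThreshold = true;
    } else if (magnitude < STEP_THRESHOLD) {
        aboveThreshold = false;
    }
}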
But, as our experiments revealed, many applications can’t effectively use the raw sensor data. Some of these applications include:

Compass apps
Enhanced navigation and augmented reality apps
Casual games
3D gaming apps

Here’s a screenshot from a 3D game sample:



3D first-person shooter game (shown at //Build/)

These applications need to use sensor fusion data in order to support the features they implement. The “magic” of sensor fusion is to mathematically combine the data from all three sensors to produce more sophisticated outputs, including a tilt-compensated compass, an inclinometer (exposing yaw, pitch, and roll), and more advanced representations of device orientation. With this kind of data, more sophisticated apps can produce fast, fluid, and responsive reactions to natural motions.
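For example, instead of deriving orientation from raw readings, an app can consume these fused outputs directly. A minimal sketch (assuming the sensors are present, and using the WinRT Inclinometer and Compass classes from the sensor API described later in this post) might look like this:

var inclinometer = Windows.Devices.Sensors.Inclinometer.getDefault();
var compass = Windows.Devices.Sensors.Compass.getDefault();

if (inclinometer) {
    inclinometer.addEventListener("readingchanged", function (e) {
        // Fused device orientation, in degrees
        var pitch = e.reading.pitchDegrees;
        var roll = e.reading.rollDegrees;
        var yaw = e.reading.yawDegrees;
    });
}

if (compass) {
    compass.addEventListener("readingchanged", function (e) {
        // Tilt-compensated heading relative to magnetic north, in degrees
        var heading = e.reading.headingMagneticNorth;
    });
}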

By integrating a sensor fusion solution, Windows 8 provides a complete solution for the full range of applications. Sensor fusion in Windows solves the problems of jittery movement and jerky transitions, reduces data integrity issues, and provides data that allows a seamless representation of full device motion in 3D space (without any awkward transitions).

Working with hardware partners
While designing a sensor fusion solution for Windows, we also needed to help hardware designers to take advantage of this solution by partnering with them early. Designing a sensor fusion system is relatively easy if you’re designing a single device. But Windows runs on many kinds of PCs in many form factors, using hardware components from many different manufacturers. We needed to provide a solution that enabled the entire ecosystem of Windows hardware partners to participate.

The first step was to provide a baseline of performance for sensor packages that would work with Windows’ sensor fusion solution. Using Windows certification guidelines, we provided specifications for sensor performance. To help hardware companies verify that their solutions were compatible with Windows, we built a number of tests, which we provide with the Windows Certification kit.

Reducing the cost of developing and supporting drivers was another challenge. To make things simpler for sensor hardware manufacturers and PC makers, we wrote a single Microsoft-supplied driver that works with all Windows-compatible sensor packages connected over USB and even lower-power buses like I2C. This sensor class driver enables hardware companies to innovate with sensor hardware while ensuring that their hardware can be supported easily by drivers that ship with the Windows operating system.

To help speed adoption of the class driver, Microsoft worked with industry partners to introduce the specification into public standards. In July 2011 the standard for sensors was introduced in the HID (Human Interface Device) specification of the USB-IF (HID spec version 1.12, introduced with review request #39). This standardization enables any sensor company to build a sensor package that is compatible with Windows 8 by following the public standard USB-IF specifications for compliant device firmware. This reduces the time and cost required to integrate sensor hardware with Windows 8 PCs. Other benefits include a lower support cost and more consistent hardware capabilities for Windows 8 PCs that are equipped with sensors.

But beyond standardizing the class driver, we also wanted to optimize the performance of the sensor fusion solution, and minimize its impact on battery life. Each active sensor on a system draws power, and sending data up the stack consumes both memory and CPU time. We helped minimize the power and performance impact for sensor fusion systems running on Windows 8 in two major ways:

1. We architected the sensor fusion interfaces in Windows 8 to enable much of the processing of sensor fusion data to happen at the hardware level. This hardware-level sensor fusion capability means that computationally expensive algorithms don’t have to run on the main CPU, saving power and CPU cycles.

2. We implemented powerful filtering mechanisms that we tied directly to the needs of sensor apps running at any given point in time. This pay-for-play data and event model means that sensor data is only sent up the stack at the rate that apps need that data, and no faster. This results in greatly reduced CPU utilization for sensor data throughput.
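In practice, an app opts into this model by asking only for the report rate it actually needs. Here is a minimal sketch using the accelerometer's reportInterval property from the WinRT sensor API described below; the 16 ms figure is just an example for a fast game loop, not a recommendation:

var accelerometer = Windows.Devices.Sensors.Accelerometer.getDefault();

if (accelerometer) {
    // Ask for updates roughly every 16 ms (an example value for a game loop),
    // but never faster than the hardware supports
    var desiredInterval = 16;
    accelerometer.reportInterval =
        Math.max(desiredInterval, accelerometer.minimumReportInterval);

    // Setting reportInterval back to 0 returns the sensor to its default rate
}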

Sensors and Metro style apps
To pull all of this together, our final challenge was to make the power and promise of sensor fusion available to those writing Metro style apps. To enable this, we designed a sensor API as part of the new WinRT. Through these APIs, developers can access the power of sensor fusion from any Metro style app. These APIs are clean and simple, and at the same time give developers access to the data needed to support everything from casual games to virtual reality applications. Of course, these capabilities are also available as Win32 APIs for game developers and other uses in desktop applications.

The following JavaScript code snippet shows how easy it is to get access to an accelerometer and subscribe to events using the Windows Runtime:

// getDefault() returns null if the PC has no accelerometer
var accelerometer = Windows.Devices.Sensors.Accelerometer.getDefault();

if (accelerometer) {
    accelerometer.addEventListener("readingchanged", onAccReadingChanged);
}

function onAccReadingChanged(e) {
    // Acceleration is reported in g's along each axis of the device
    var accelX = e.reading.accelerationX;
    var accelY = e.reading.accelerationY;
    var accelZ = e.reading.accelerationZ;
}
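For the 3D scenarios described earlier, the fused device orientation is available in the same way. A minimal sketch using the OrientationSensor class, which exposes the fused orientation as both a quaternion and a rotation matrix, might look like this:

var orientationSensor = Windows.Devices.Sensors.OrientationSensor.getDefault();

if (orientationSensor) {
    orientationSensor.addEventListener("readingchanged", function (e) {
        // The fused orientation is exposed as a quaternion...
        var q = e.reading.quaternion;      // q.w, q.x, q.y, q.z

        // ...and as a 3x3 rotation matrix (m11 through m33) that can be
        // fed into a 3D view transform
        var m = e.reading.rotationMatrix;
    });
}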
For more information about support for sensors in the Windows Runtime, please see this //build/ session on using location & sensors in your app.

You may be wondering at this point how you can try out sensor fusion on Windows 8, or even write some apps that use these new capabilities. Developers who attended the //build/ conference in 2011 received the Samsung Windows 8 Developer Preview slate PC, which included a full package of sensors. There were only about 4,000 of those given out, so of course, not everyone had the opportunity to get one. The good news is that the same 9-axis sensor fusion system that was built into the Windows Developer Preview device is now available online for purchase from ST Microelectronics. The “ST Microelectronics eMotion Development Board for Windows 8” (model # STEVAL-MKI119V1) attaches via USB, and works with the HID sensor class driver that’s included in Windows 8. If you’ve downloaded the Developer Preview version of Windows 8 and are itching to try out the sensor experience, you should consider getting one of these devices.
