Decisions With Visualizations

Every day we make numerous decisions with the aid of visualizations, such as selecting a driving route, deciding whether to evacuate before a hurricane strikes, or identifying the best way to allocate business resources. My work aims to understand how we use visualizations to make complex decisions about the future, including how the properties of a graphical display influence our judgments and how visualization scientists can enhance displays by capitalizing on human cognition and reasoning. My applied work in this domain has contributed predominantly to research on how we make decisions with displays of uncertainty.

Effects of Ensemble and Summary Displays on Interpretations of Geospatial Uncertainty Data

Ensemble and summary displays are two widely used methods for representing visual-spatial uncertainty, and there is disagreement about which is the more effective technique for communicating uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, summary displays, which scientists create by plotting statistical parameters of the ensemble members, are more common in public-facing communication. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear which components of the displays lead to diverging judgments. This study compares the salience of visual features (visual elements that attract bottom-up attention) as one possible source of the diverging judgments made with ensemble and summary displays, in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participants’ judgments. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misread as conveying size information. Further, salient features of ensemble displays evoke judgments indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context, but that decisions about visualization methods should be informed by the viewer’s task.
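
To make the contrast concrete, the sketch below (illustrative only, using synthetic tracks rather than the study's stimuli) renders the same forecast ensemble two ways: an ensemble display that plots every member track, and a summary display that plots the mean track with a spread band, analogous to a cone of uncertainty.

```python
# A minimal sketch (synthetic data, not the study's stimuli) contrasting an
# ensemble display with a summary display of hurricane-track forecasts.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_members, n_steps = 30, 24

# Synthetic forecast tracks: each member drifts east with random meridional noise.
x = np.tile(np.linspace(0.0, 10.0, n_steps), (n_members, 1))
y = np.cumsum(rng.normal(0.0, 0.3, size=(n_members, n_steps)), axis=1)

fig, (ax_ens, ax_sum) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Ensemble display: plot every member on the same coordinate plane.
for i in range(n_members):
    ax_ens.plot(x[i], y[i], color="steelblue", alpha=0.4, linewidth=1)
ax_ens.set_title("Ensemble display (individual members)")

# Summary display: mean track plus a band spanning +/- 2 SD at each step.
mean_y, sd_y = y.mean(axis=0), y.std(axis=0)
ax_sum.fill_between(x[0], mean_y - 2 * sd_y, mean_y + 2 * sd_y,
                    color="lightgray", label="±2 SD band")
ax_sum.plot(x[0], mean_y, color="black", linewidth=2, label="mean track")
ax_sum.set_title("Summary display (mean + spread)")
ax_sum.legend()

plt.tight_layout()
plt.show()
```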

Uncertainty Visualization by Representative Sampling from Prediction Ensembles

Data ensembles are often used to infer statistics that drive a summary display of an uncertain prediction. In a spatial context, these summary displays have the drawback that when uncertainty is encoded via spatial spread, the display glyph grows in area with prediction uncertainty. This growth is easily confounded with an increase in the size, strength, or other attribute of the phenomenon being presented. We argue that by directly displaying a carefully chosen subset of a prediction ensemble, so that uncertainty is conveyed implicitly, such misinterpretations can be avoided. Since such a display does not require uncertainty annotation, an information channel remains available for encoding additional information about the prediction. We demonstrate these points in the context of hurricane prediction visualizations, showing how we avoid occlusion of selected ensemble elements while preserving the spatial statistics of the original ensemble, and how an explicit encoding of uncertainty can also be constructed from such a selection. We conclude with the results of a cognitive experiment demonstrating that the approach can be used to construct storm prediction displays that significantly reduce the confounding of uncertainty with storm size, and thus improve viewers’ ability to estimate potential for storm damage.
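
The core idea of drawing a representative subset can be illustrated with a much simpler selection rule than the paper's method. The sketch below, a hypothetical random-search approach over synthetic tracks, picks a handful of ensemble members whose per-step mean and spread stay close to those of the full ensemble, so the subset can be drawn directly while still conveying the original spatial statistics.

```python
# A minimal illustrative sketch, not the paper's algorithm: select a small
# subset of ensemble tracks that approximates the full ensemble's statistics.
import numpy as np

def subset_error(tracks, idx):
    """Mismatch between subset statistics and full-ensemble statistics."""
    full_mean, full_sd = tracks.mean(axis=0), tracks.std(axis=0)
    sub_mean, sub_sd = tracks[idx].mean(axis=0), tracks[idx].std(axis=0)
    return np.abs(sub_mean - full_mean).sum() + np.abs(sub_sd - full_sd).sum()

def pick_representative_subset(tracks, k=8, n_trials=2000, seed=0):
    """Random-search selection of k members approximating ensemble statistics."""
    rng = np.random.default_rng(seed)
    best_idx, best_err = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(tracks), size=k, replace=False)
        err = subset_error(tracks, idx)
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx

# Example with synthetic tracks (members x timesteps of meridional position).
rng = np.random.default_rng(1)
tracks = np.cumsum(rng.normal(0.0, 0.3, size=(100, 24)), axis=1)
chosen = pick_representative_subset(tracks, k=8)
print("selected members:", sorted(chosen.tolist()))
```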

Evaluating the impact of binning 2D scalar fields

The expressiveness principle for visualization design asserts that a visualization should encode all of the available data, and only the available data, implying that continuous data should be visualized with a continuous encoding channel. And yet, in many domains binning continuous data is not only pervasive but accepted as standard practice. Prior work offers no clear guidance on when encoding continuous data continuously is preferable to binning, or on how this choice affects data interpretation and decision making. In this paper, we present a study aimed at better understanding the conditions in which the expressiveness principle can or should be violated when visualizing continuous data. We provided participants with visualizations employing either continuous or binned greyscale encodings of geospatial elevation data and compared participants’ ability to complete a wide variety of tasks. For several tasks, the results indicate significant differences in decision making, confidence in responses, and task completion time between continuous and binned encodings. In general, participants with continuous encodings completed many of the tasks faster but never achieved higher accuracy than those with binned encodings, whereas binned encodings yielded superior accuracy on some tasks. These findings suggest that strict adherence to the expressiveness principle is not always advisable. We discuss the implications and limitations of our results and outline avenues for future work needed to further improve guidelines for using continuous versus binned encodings of continuous data.
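
The two encoding conditions can be sketched directly. The example below, which uses an assumed synthetic terrain rather than the study's elevation stimuli, maps the same 2D scalar field onto a continuous greyscale ramp and onto a small number of discrete greyscale bins.

```python
# A minimal sketch (assumed synthetic terrain, not the study's stimuli)
# comparing continuous and binned greyscale encodings of a 2D scalar field.
import numpy as np
import matplotlib.pyplot as plt

# Synthetic "elevation" field: a smooth bump plus a gentle slope.
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
elevation = np.exp(-(xx**2 + yy**2) / 2) + 0.1 * xx

fig, (ax_cont, ax_bin) = plt.subplots(1, 2, figsize=(9, 4))

# Continuous encoding: map values directly onto the greyscale ramp.
ax_cont.imshow(elevation, cmap="gray")
ax_cont.set_title("Continuous greyscale encoding")

# Binned encoding: quantize into 5 equal-width bins before display.
n_bins = 5
edges = np.linspace(elevation.min(), elevation.max(), n_bins + 1)
binned = np.digitize(elevation, edges[1:-1])  # integer bin index per cell
ax_bin.imshow(binned, cmap="gray")
ax_bin.set_title(f"Binned encoding ({n_bins} levels)")

plt.tight_layout()
plt.show()
```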

Non-expert interpretations of hurricane forecast uncertainty visualizations

Uncertainty represented in visualizations is often ignored or misunderstood by the non-expert user. The National Hurricane Center displays hurricane forecasts using a track forecast cone, depicting the expected track of the storm and the uncertainty in the forecast. Our goal was to test whether different graphical displays of a hurricane forecast containing uncertainty would influence a decision about storm characteristics. Participants viewed one of five different visualization types. Three varied the currently used forecast cone, one presented a track with no uncertainty, and one presented an ensemble of multiple possible hurricane tracks. Results show that individuals make different decisions using uncertainty visualizations with different visual properties, demonstrating that basic visual properties must be considered in visualization design and communication.
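
For readers unfamiliar with the display format, the sketch below illustrates the general idea behind a track forecast cone. It is not the National Hurricane Center's construction procedure, and the track and radii are assumed values: circles whose radius grows with forecast lead time are swept along the predicted track and their union is shaded.

```python
# A minimal sketch of the general idea behind a track forecast cone (not the
# National Hurricane Center's exact procedure; track and radii are assumed).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

# Assumed forecast track positions and per-step cone radii.
track_x = np.linspace(0.0, 8.0, 9)
track_y = 0.15 * track_x**1.5
radii = np.linspace(0.2, 1.6, len(track_x))   # uncertainty grows with lead time

fig, ax = plt.subplots(figsize=(6, 4))
for cx, cy, r in zip(track_x, track_y, radii):
    ax.add_patch(Circle((cx, cy), r, color="lightgray", zorder=1))
ax.plot(track_x, track_y, "k-o", zorder=2, label="forecast track")
ax.set_xlim(-2, 10)
ax.set_ylim(-2, 6)
ax.set_aspect("equal")
ax.set_title("Track forecast cone as a union of growing circles")
ax.legend()
plt.show()
```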

The influence of different graphical displays on nonexpert decision making under uncertainty

Understanding how people interpret and use visually presented uncertainty data is an important yet seldom studied aspect of data visualization applications. Current approaches in visualization often display uncertainty as an additional data attribute without a well-defined context. Our goal was to test whether different graphical displays (glyphs) would influence a decision about which of 2 weather forecasts was a more accurate predictor of an uncertain temperature forecast value. We used a statistical inference task based on fictional univariate normal distributions, each characterized by a mean and standard deviation. Participants viewed 1 of 5 different glyph types representing 2 weather forecast distributions. Three of these used variations in spatial encoding to communicate the distributions and the other 2 used nonspatial encoding (brightness or color). Four distribution pairs were created with different relative standard deviations (uncertainty of the forecasts). We found that there was a difference in how decisions were made with spatial versus nonspatial glyphs, but no difference among the spatial glyphs themselves. Furthermore, the effect of different glyph types changed as a function of the variability of the distributions. The results are discussed in the context of how visualizations might improve decision making under uncertainty.
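
The spatial versus nonspatial contrast can be illustrated with a single forecast distribution. The sketch below uses assumed glyph designs rather than the study's exact stimuli: one glyph encodes the normal distribution spatially, as a density profile whose width varies with probability, and the other encodes it nonspatially, as a fixed-width strip whose brightness varies with probability.

```python
# A minimal sketch (assumed glyph designs, not the study's stimuli): a normal
# temperature-forecast distribution rendered with a spatial encoding (density
# width) and a nonspatial encoding (brightness).
import numpy as np
import matplotlib.pyplot as plt

mean, sd = 20.0, 3.0                      # forecast mean and uncertainty (°C)
temps = np.linspace(mean - 4 * sd, mean + 4 * sd, 300)
density = np.exp(-0.5 * ((temps - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

fig, (ax_spatial, ax_bright) = plt.subplots(1, 2, figsize=(8, 4), sharey=True)

# Spatial glyph: horizontal extent encodes probability density at each value.
ax_spatial.fill_betweenx(temps, -density, density, color="steelblue")
ax_spatial.set_title("Spatial encoding (density width)")
ax_spatial.set_ylabel("Temperature (°C)")
ax_spatial.set_xticks([])

# Nonspatial glyph: a fixed-width strip whose brightness encodes density.
strip = np.tile(density[:, None], (1, 20))
ax_bright.imshow(strip, cmap="gray_r", aspect="auto", origin="lower",
                 extent=[0, 1, temps[0], temps[-1]])
ax_bright.set_title("Nonspatial encoding (brightness)")
ax_bright.set_xticks([])

plt.tight_layout()
plt.show()
```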