Deciding what climate change data you need

Many types of climate change projection data are available, and choosing the right data for the right purpose is an important part of undertaking impact assessments.

For simple impact assessments, it may be sufficient to provide qualitative information about the direction of future climate change. For example: warmer with more extremely high temperatures; drier with more droughts; heavier daily rainfall; higher sea level; fewer but stronger cyclones; and greater ocean acidification. This could facilitate a high-level scoping process, such as a workshop discussion, that identifies potential impacts and prioritises elements that may need further quantification.

Other impact assessments need quantitative information. In many cases, projected changes (relative to some reference period) are required for different years, emissions scenarios, climate variables and regions. In other cases, application-ready datasets are needed. Various methods are available for producing application-ready datasets. The choice of method must be matched to the intended application, taking into account constraints of time, resources, human capacity and supporting infrastructure. A summary of some common methods for creating application-ready datasets follows, with advantages and disadvantages presented in the table below.

Sensitivity analysis

This entails running a climate impact model with observed climate data to establish a baseline level of impact, then re-running the model with the same input data, modified by selected changes in climate (e.g. a warming of 1, 2 or 3 °C). This simple methodology has been effectively used to demonstrate climate change impacts in the agricultural sector.
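As a minimal sketch, the perturbation loop can look like the following. The observed temperature series and the "hot days" impact model are purely hypothetical stand-ins for real station data and a real impact model.

```python
# Sensitivity analysis sketch: run a toy impact model on observed data,
# then re-run it with the same data perturbed by fixed warmings.
# The impact model (count of days above 35 degrees C) is illustrative only.

def days_above(temps, threshold=35.0):
    """Toy impact model: number of days exceeding a heat threshold."""
    return sum(1 for t in temps if t > threshold)

# Hypothetical observed daily maximum temperatures (degrees C).
observed = [33.0, 34.5, 36.2, 31.8, 35.5, 34.9, 37.1]

baseline = days_above(observed)
for warming in (1.0, 2.0, 3.0):
    perturbed = [t + warming for t in observed]
    print(f"+{warming:.0f} C: {days_above(perturbed)} hot days "
          f"(baseline {baseline})")
```

The same pattern applies with any impact model in place of `days_above`; the point is that only the input climate is changed between runs.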

Delta change or perturbation method

The projected changes in mean climate, as simulated by a climate model, are applied to observed climate data. This may be in the form of an additive or multiplicative factor depending on the variable. Some modifications to this method may incorporate projected changes in daily climate variability using quantile scaling.
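A sketch of the basic delta change step follows; the change factors (+1.8 °C for temperature, a 0.9 ratio for rainfall) are hypothetical values standing in for changes taken from an actual climate model.

```python
# Delta change sketch: apply model-projected changes to observed data.
# An additive factor is typical for temperature; a multiplicative
# (ratio) factor is typical for rainfall, so zero days stay zero.

def delta_additive(obs, change):
    return [x + change for x in obs]

def delta_multiplicative(obs, ratio):
    return [x * ratio for x in obs]

obs_temp = [14.2, 15.1, 13.8]   # observed daily temperature (degrees C)
obs_rain = [0.0, 12.4, 3.1]     # observed daily rainfall (mm)

future_temp = delta_additive(obs_temp, 1.8)        # hypothetical +1.8 C
future_rain = delta_multiplicative(obs_rain, 0.9)  # hypothetical -10% rain
```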

Climate analogues

Historical climate data are used as an analogue for the future. The analogue may correspond to a different climate in time or space, such as a past climate anomaly that may occur more often in future in a particular region, or a location that currently has a climate similar to that expected in another region in the future. See the Climate Analogues tool.
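A spatial analogue search can be sketched as a nearest-neighbour lookup in climate space. The site names, climate values, and the rainfall scaling used to weight the two variables are all hypothetical choices for illustration.

```python
# Spatial climate analogue sketch: find the present-day site whose
# climate most resembles a target site's projected future climate.
# Each site is (mean temperature in degrees C, annual rainfall in mm);
# all values are hypothetical.

sites = {
    "SiteA": (18.0, 650.0),
    "SiteB": (21.5, 480.0),
    "SiteC": (24.0, 1200.0),
}

def nearest_analogue(target, sites, rain_scale=100.0):
    """Nearest site in climate space; rainfall is scaled so both
    variables carry comparable weight (an arbitrary choice here)."""
    def dist(climate):
        dt = climate[0] - target[0]
        dr = (climate[1] - target[1]) / rain_scale
        return (dt * dt + dr * dr) ** 0.5
    return min(sites, key=lambda name: dist(sites[name]))

# Hypothetical projected future climate at the site of interest:
# warmer and drier than today.
future_climate = (21.0, 500.0)
print(nearest_analogue(future_climate, sites))  # prints "SiteB"
```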

Weather generator

Weather generators use a statistical model to simulate time series of weather data with statistical properties similar to those of observed weather data. These properties can be modified for future climates using information from climate models.
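The idea can be sketched with a very simple daily rainfall generator: a first-order Markov chain for wet/dry occurrence plus an exponential distribution for wet-day amounts. The transition probabilities and mean intensity below are hypothetical parameters that would normally be fitted to observations, then perturbed using climate model projections.

```python
import random

# Minimal weather-generator sketch: first-order Markov chain for
# wet/dry occurrence, exponential amounts on wet days.

def generate_rainfall(n_days, p_wet_after_dry, p_wet_after_wet,
                      mean_intensity, rng):
    """Simulate n_days of daily rainfall (mm). Parameters would be
    fitted to observed data in a real application."""
    series, wet = [], False
    for _ in range(n_days):
        p = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p
        series.append(rng.expovariate(1.0 / mean_intensity) if wet else 0.0)
    return series

present = generate_rainfall(365, 0.2, 0.6, 8.0, random.Random(42))
# A hypothetical drier future: lower wet-day probabilities,
# unchanged wet-day intensity.
future = generate_rainfall(365, 0.15, 0.55, 8.0, random.Random(42))
```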

Statistical downscaling

This method exploits a statistical relationship between the local-scale variable of interest and larger-scale atmospheric fields. The relationship is established through regression methods (e.g. multiple regression, principal component analysis, canonical correlation analysis, neural networks), weather typing, or weather generators calibrated to the larger-scale atmospheric fields (see Technical Report Section 6.3). To implement this technique, high-quality observed data are required over a number of decades to calibrate the statistical model. The statistical model can then be applied to projected changes in large-scale atmospheric fields from a global climate model (GCM) to infer projected changes at the local scale.
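The regression variant can be sketched in a few lines: fit a linear relationship between a large-scale predictor and a local station variable over a calibration period, then apply it to a GCM-projected predictor value. All numbers below are hypothetical.

```python
# Statistical downscaling sketch: linear regression between a
# large-scale predictor (e.g. regional-mean temperature) and a
# local station temperature. Data are hypothetical.

def fit_linear(x, y):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Calibration period: large-scale predictor vs. local observations.
predictor_obs = [14.0, 15.0, 16.0, 17.0]
station_obs = [12.1, 13.0, 14.2, 15.1]

slope, intercept = fit_linear(predictor_obs, station_obs)

# Apply to a GCM-projected large-scale value to infer the local value.
gcm_future_predictor = 18.5
local_future = slope * gcm_future_predictor + intercept
```

A real application would use multiple predictors and test the fitted relationship on data withheld from calibration; this sketch only shows the transfer-function idea.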

Dynamical downscaling

This involves the use of a finer-resolution atmospheric climate model driven by output from a global climate model. The atmospheric model may have a global or a regional domain; in the latter case it is called a regional climate model (RCM). Dynamical downscaling provides better representation of topography and its effects on local climate, such as rainfall in mountainous regions, and can better simulate extreme weather features such as tropical cyclones. However, the method is computationally intensive and the results depend strongly on the choice of both the global climate model and the fine-resolution atmospheric model. Furthermore, results from dynamical downscaling generally need to be bias-corrected (using statistical techniques) before they are suitable for use in climate change impact assessments. This method has been used in a study of impacts on agriculture reported through the Climate Futures for Tasmania initiative.
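One common bias-correction technique is empirical quantile mapping: each model value is replaced by the observed value at the same quantile of the present-climate distributions. The five-value series below are hypothetical; real applications use long daily series and treat distribution tails and dry-day frequency with care.

```python
import bisect

# Bias-correction sketch (empirical quantile mapping): map a model
# value to the observation at the same rank in sorted present-climate
# samples. Data are hypothetical; the model has a 2-degree cool bias.

def quantile_map(value, model_sorted, obs_sorted):
    """Replace a model value by the observation at the same rank."""
    rank = bisect.bisect_left(model_sorted, value)
    rank = min(rank, len(obs_sorted) - 1)
    return obs_sorted[rank]

model_present = sorted([9.0, 11.0, 13.0, 15.0, 17.0])  # RCM output
obs_present = sorted([11.0, 13.0, 15.0, 17.0, 19.0])   # observations

corrected = [quantile_map(v, model_present, obs_present)
             for v in (9.0, 13.0, 17.0)]
print(corrected)  # prints [11.0, 15.0, 19.0]
```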

Table: Summary of typical characteristics associated with methods for application of climate projections (advantages and disadvantages are listed for each method).

Sensitivity analysis
Advantages: Requires no future climate change information; shows the most important variables and system thresholds; allows comparisons between studies.
Disadvantages: Impact model uncertainty is seldom reported, or is unknown.

Climate analogues
Advantages: Represents a range of change if benchmarked to other scenarios.
Disadvantages: Captures changes in the mean only; suitable for some applications but not others; may not be appropriate near complex topography and coastlines.

Weather generators
Advantages: Provide daily or sub-daily weather variables; preserve relationships between weather variables; already in widespread use for simulating present climate.
Disadvantages: Need high-quality observational data for calibration and verification; assume a constant relationship between large-scale circulation patterns and local weather; scenarios are sensitive to the choice of predictors and the quality of GCM output; scenarios are typically time-slice rather than transient; difficulty reproducing inter-annual variability (e.g. due to ENSO) and tropical weather phenomena such as monsoons and tropical cyclones.

Delta change method
Advantages: Simple to implement and suitable for many applications.
Disadvantages: Limited applicability where changes in variance are important; may not capture projected climate behaviour around complex topography.

Delta change method (including future changes to variability, e.g. quantile scaling)
Advantages: Suitable for many applications; includes changes in variability that may be important for adequately simulating extreme events.
Disadvantages: Requires statistical expertise to implement; may not capture projected climate behaviour around complex topography.

Statistical downscaling
Advantages: Good for many applications, though restricted by the availability of relevant variables; provides site-specific time series and other statistics, e.g. extreme event frequencies; performs well near complex topography.
Disadvantages: Requires high-quality observational data over a number of decades for calibration and verification; assumes a constant relationship between large-scale circulation patterns and local weather; scenarios are sensitive to the choice of forcing factors and host GCM or RCM; the choice of host GCM or RCM is constrained by archived outputs; the large variety of methods, each with pros and cons, makes selection difficult; users need to be aware of any shortcomings of the particular method chosen.

Dynamical downscaling with bias correction
Advantages: Provides regional climate scenarios at 10-60 km resolution; reflects underlying land-surface controls and feedbacks; preserves relationships between weather variables; often gives better representation of coastal and mountain effects, and of extreme weather events; simulations with bias-corrected sea surface temperatures should make the present-climate simulation more realistic than that of the host GCM.
Disadvantages: Requires high-quality observational data for model verification; dynamically downscaled projections should not be assumed to be more reliable than projections based on the host model; high computational demand; projections are sensitive to the choice of host GCM and RCM.

Dynamical downscaling change factor method applied to high-resolution observations
Advantages and disadvantages: As for dynamical downscaling with bias correction; the technique is suitable for many agricultural and biodiversity applications.