WCRP-WWRP-THORPEX Model Evaluation and Development Survey - Ideas for Progress
Q4: Are there any resources or opportunities within the modelling/process study/observational/theoretical communities that would be particularly useful and should be exploited?
Intercomparison of simulations of the Arctic 20th Century climate.
OMIP to study the role of mixing parameterizations in tropical systematic errors - creation of a Cold Tongue Working Group.
CPT and model intercomparison on tropical biases.
MIP of non-hydrostatic models.
Model intercomparison on 1D, 2D and 3D case studies associated with field campaigns.
Comparison of past climate change simulated by climate models with minimal constraints (Lucarini and Russell, JGR, 2002).
A more thorough intercomparison of the double ITCZ problem in current coupled and atmospheric models.
An international effort to compare carbon cycle model output under standard conditions to key observational datasets, which hopefully will also force a re-examination of model formulation in some cases.
A benchmarking effort for climate model simulations of surface hydrology and land-atmosphere feedbacks.
A model intercomparison of leading geoengineering proposals such as stratospheric particle injection or marine cloud brightening.
A WCRP WG on geoengineering to acknowledge the need to better study this area and provide international oversight and quality control to geoengineering-related modeling.
A global high-resolution land surface analysis (including soil moisture, seasonally changing land cover).
Modularized and open source model code for external collaboration, particularly with the university community.
Create global real-time datasets (including land surface) specifically for regional NWP (via TIGGE-LAM, under THORPEX).
There is a lack of methodology to identify the causes (critical and controlling processes) of model deficiencies. The nonlinearity of the system operating on climate timescales hides connections between cause and effect for predictions and simulations, including many aspects of the current climate. Intercomparisons are useful for bringing deficiencies to attention, but do not give insight into the causes of, or solutions for, model deficiencies.
Need to develop a systematic approach to determining which model biases are the most important to fix, replacing the current ad hoc approach that tends to respond to the latest observations rather than to long-term model performance goals.
Diagnostic work that helps to understand model problems should be prioritized by WCRP-WWRP-THORPEX. Set up a joint Working Group on Forecasting System Diagnostics that combines model developers, those doing diagnostic studies (physical and dynamical processes and their interactions) and the data assimilation community (characterization of model error will be a priority for this community).
An SPCZ Working Group.
Funding projects that develop, test and apply new diagnostic techniques.
Use of NWP to understand the origin of model error. Development of techniques that identify the origin of errors, e.g. the initial tendency or analysis increment technique (e.g. Rodwell and Jung, 2008).
Improve aspects of deep convection parameterization that relate the occurrence of deep convection to SSTs.
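The analysis-increment diagnostic can be sketched in a few lines: averaged over many assimilation cycles, the increments the analysis applies to the model background reveal the direction of systematic model error. A minimal illustration with synthetic numbers and assumed variable names (real systems operate on full 3D fields):

```python
import numpy as np

def mean_analysis_increment(analyses, backgrounds):
    """Mean of (analysis - background) over many assimilation cycles.

    A persistent nonzero mean increment shows that the assimilation
    system keeps correcting the model in the same direction, i.e. a
    systematic model error (cf. the initial tendency / analysis
    increment diagnostics of Rodwell and Jung, 2008).
    """
    increments = np.asarray(analyses, float) - np.asarray(backgrounds, float)
    return increments.mean(axis=0)

# Synthetic demo (hypothetical numbers): a model with a ~0.5 K cold bias
# is nudged warm by the analysis at every cycle.
rng = np.random.default_rng(0)
truth = 280.0
backgrounds = truth - 0.5 + 0.1 * rng.standard_normal(200)
analyses = truth + 0.1 * rng.standard_normal(200)
bias_estimate = mean_analysis_increment(analyses, backgrounds)  # ~ +0.5 K
```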
Improve and sustain marine observations to produce improved global ocean reanalyses.
Maintain an international climate observing system with sensor redundancy that is able to detect trends in radiative fluxes, clouds etc over the next 30yrs and maintain long observational records from the 20th century. This is critical to climate model testing, even more than new process observations.
SPARC DynVar working on understanding the impact of stratospheric changes on surface climate but this is largely unfunded.
Combination of observational process studies and high resolution models eg CFMIP for cloud studies.
Integration of observational process studies and model development. For example, in 2011 there are several field campaigns planned in the Tropical Indian Ocean, with emphases on atmospheric convection, air-sea interaction and cloud-aerosol interaction (CINDY2011/DYNAMO, AMIE, 7SEAS, etc). This is an opportunity for the modelling community to team up with the process studies to optimize the benefit from the in situ data that will be collected.
Increase efforts to gather, improve and extend available data records over long time periods.
Render model software freely available for the academic community.
Develop new models with increased modularity, inter-operability and efficiency that are usable by the academic community.
Basic Arctic observations.
Develop internationally-agreed metrics to compare NWP models (not only Z500).
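Any internationally agreed metric set needs unambiguous definitions. A minimal sketch of two standard scores, RMSE and the centred anomaly correlation coefficient (illustrative only; an agreed suite would also specify regions, levels, climatologies and significance tests):

```python
import numpy as np

def rmse(forecast, verification):
    """Root-mean-square error between forecast and verifying analysis."""
    f = np.asarray(forecast, float)
    v = np.asarray(verification, float)
    return float(np.sqrt(np.mean((f - v) ** 2)))

def anomaly_correlation(forecast, verification, climatology):
    """Centred anomaly correlation coefficient (ACC): the correlation of
    forecast and verifying anomalies relative to a climatology, with the
    domain-mean anomaly removed."""
    fa = np.asarray(forecast, float) - np.asarray(climatology, float)
    va = np.asarray(verification, float) - np.asarray(climatology, float)
    fa -= fa.mean()
    va -= va.mean()
    return float(np.sum(fa * va) / np.sqrt(np.sum(fa ** 2) * np.sum(va ** 2)))
```

A perfect forecast gives RMSE 0 and ACC 1; a forecast with reversed anomalies gives ACC -1.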
How to test and evaluate parameterizations.
Evaluate parameterizations against real observations, not just against standard model output variables (e.g. 2 m temperature, 10 m wind).
Exploit the advent of adequate (radar and lidar) cloud observations from space.
Intensive use of observations such as radar and SMOS satellite data, more use of current data (e.g. 3D radar), and improvement of others (e.g. snow cover measurements).
International exchange of weather radar data for global and regional data assimilation (for WGNE to lead?).
Exploitation of rain radar data and non-hydrostatic modelling.
High-resolution precipitation data (hourly in time, fine-scale in space), covering long periods of time.
Assess the reliability of satellite precipitation estimates in monsoon regions.
Better all-weather observations and high time-resolution cloud and rain observations.
Use data assimilation systems to compare models.
Need a focused and well-supported effort to link climate modelling with the ice sheet dynamics process study community.
International process studies focused on ocean-ice sheet interaction and the stability of ice sheets.
Encourage an activity like the AR4 diagnostic exercise (http://www-pcmdi.llnl.gov/projects/amip/DIAGSUBS/diagsp.php) that focuses on feedbacks between different communities.
Render key observational data (eg events on a convective scale) available by means of joint efforts, at least for a couple of case studies, to address the problem of error development in a systematic way.
Improve international data sharing (observations and model).
Process studies with a focus on monsoon systems.
For ensemble forecasting, develop parameterizations built from probabilistic concepts and the notion of random realizations (e.g. for convection, boundary layer schemes, radiation, gravity wave drag, etc.).
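One widely used probabilistic construction multiplies the deterministic parameterized tendency by (1 + r), where r is a temporally correlated random pattern, in the spirit of ECMWF's stochastically perturbed parametrization tendencies (SPPT). A minimal single-column sketch (field size and parameter values are illustrative assumptions):

```python
import numpy as np

def sppt_step(tendency, pattern, phi, sigma, rng):
    """One step of a stochastically perturbed parameterized tendency.

    The pattern evolves as an AR(1) process with lag-1 autocorrelation
    phi and stationary standard deviation sigma; the deterministic
    tendency is multiplied by (1 + pattern), so each ensemble member
    integrates a different random realization of the parameterized
    forcing while the ensemble mean stays close to the deterministic one.
    """
    pattern = phi * pattern + np.sqrt(1.0 - phi ** 2) * sigma * rng.standard_normal(pattern.shape)
    return (1.0 + pattern) * tendency, pattern

# Demo with assumed parameters: a constant tendency of 1 on a 16-point
# column, perturbed over 5000 steps.
rng = np.random.default_rng(1)
pattern = np.zeros(16)
samples = []
for _ in range(5000):
    perturbed, pattern = sppt_step(np.ones(16), pattern, phi=0.9, sigma=0.3, rng=rng)
    samples.append(pattern.copy())
samples = np.asarray(samples)
```

The pattern's long-run standard deviation converges to sigma, and the mean perturbed tendency to the deterministic value.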
GLACE2 is looking at the (limits of) predictability associated with soil moisture, but this should be complemented by an analysis of how the initial states are obtained and how models propagate information from the land surface to the atmosphere. This should be combined with a LANDFLUX-type set-up for creating initial states/evaluating data, and with model developers, looking at analysis and attribution of model differences.
As GLACE2 does for soil moisture, analogous experiments are needed for snow processes in relation to (seasonal-to-decadal) surface temperature and radiation, land use in relation to decadal forcing, and local energy balance terms in relation to local variability in surface temperature and direct boundary layer processes.
Improved fidelity of 'simulator' codes for calculating synthetic remotely sensed data from atmospheric simulations to allow more rigorous testing of all types of models given the realities of the data available.
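The forward-operator principle behind such simulators can be illustrated with a deliberately simple example: mapping model rain rate to synthetic radar reflectivity via the Marshall-Palmer Z-R power law, then comparing in observation space. Real simulator packages (e.g. COSP) are far more sophisticated; this sketch only shows the structure:

```python
import numpy as np

def synthetic_reflectivity_dbz(rain_rate_mm_h, a=200.0, b=1.6):
    """Toy radar forward operator using the Marshall-Palmer Z-R relation.

    Z = a * R**b, with Z in mm^6 m^-3 and R in mm/h, reported in dBZ.
    Maps a model-predicted quantity (rain rate) into the observed
    quantity (reflectivity) so model and radar can be compared in
    observation space.
    """
    R = np.maximum(np.asarray(rain_rate_mm_h, float), 1e-6)  # avoid log(0)
    Z = a * R ** b
    return 10.0 * np.log10(Z)
```

For example, 1 mm/h maps to 10*log10(200) dBZ, about 23 dBZ, and reflectivity increases monotonically with rain rate.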
Improving communication between model developers and the outside research community - much experimentation within modelling centres (including negative results and routine tests) is not published. Finding a way to informally and quickly disseminate descriptions of findings on particular processes (e.g. convection) that are not worth writing up as peer-reviewed papers could lead to external collaborations based on the posted results.
A Climate Process Team that brings together theorists of convectively-coupled dynamical phenomena, GCM convective scheme developers and convection modelers to share insights on scale interactions. This would be different to GEWEX cloud process teams that focus on testing and improving numerical cloud models. This CPT would try to identify more standardized approaches to the scale-interaction problem that could coordinate and focus research. These conceptual advances could be exploited for other scale-interaction problems (ocean eddies, land-surface atmosphere interactions).
The use of a convincing observational proxy for studying model-predicted cloud feedbacks, either globally or in some key regime such as the subtropics, the storm tracks or tropical convection.
Challenge other disciplines, especially mathematics, to consider and investigate the general problem of linking cause and effect in complex systems, particularly the climate system.
Development of highly-flow dependent analysis procedures to improve tropopause-level simulation.
Integrated long-term experiments eg HYMEX, and long term funding strategies.
More use of cryospheric (generally satellite-derived) products by the climate modelling and NWP communities. Sea ice thickness estimates from a variety of sensors are becoming available on relatively large spatial and temporal scales, though with considerable uncertainties. Satellite-based polar cloud products are also maturing.
More effort at improving the Arctic System Reanalysis project. The major reanalysis projects (e.g. NCEP/NCAR, ECMWF, JMA) should collaborate to identify which aspects of polar climate can be improved upon.
Data rescue and better/more complete analysis of existing, though difficult to access observations (eg parts of Africa, South America), where access to long records would help with characterising multi-decadal variability.
The scientific community should be fully prepared for the next volcanic eruption to exploit this opportunity to learn about the climate system.
Process teams to quantify and understand the processes of air-sea interaction, ocean subduction and the meridional ocean overturning, particularly in the southern hemisphere.
CMIP is quite useful for climate data analysts to investigate model deficiencies and to understand the models' ability to simulate natural and anthropogenic variability, but the analysis results are obtained once the model versions are frozen, so they are less efficient for making progress in model development. There could be an 'international climate modelling initiative' that enables communication between modelling groups to share efforts in model development. This may already occur within the US and Europe, but an international framework is not in place. This group would focus on various physical processes in models, rather than model climatology and variability.
A key element of decadal prediction is to use initial subsurface and surface oceanic conditions to forecast changes in the global distribution of upper and mid-ocean heat content. This requires multi-national commitment to ongoing measurements at and beneath the ocean surface for many decades to come, especially through the Argo Project (www.argo.net) for subsurface floats and through the Voluntary Observing Fleet, as specified in the Implementation Plan for GCOS (Parker et al, JGR, 2007).
Exploitation of ACRE Project (www.met-acre.org) and the Climate of the 20th Century Reanalyses (www.esrl.noaa.gov/psd/data/gridded/data.20thC_Rean.html).
Avoid 'initiative fatigue' by making best use of existing projects and WCRP and GCOS WGs.
The systematic use of process-resolving models (cloud-resolving, ocean-eddy-resolving, and large-eddy-resolving models) to improve GCM parameterizations. This needs dedicated high-speed computing and coordinated activities like the GEWEX Cloud System Study (GCSS).
Increased computing and staff resources.
Access to fastest supercomputers for high resolution climate modelling.
Long term observations of basic quantities with a global spatial coverage.
An assessment of the predictability of deep convection.
For RCMs, no comprehensive investigation has addressed domain issues such as what domain should be used (in terms of size and location) and how sensitive RCM results are to a change of domain. There is a lack of empirical rules concerning the adequacy of the RCM domain. The effect of domain size on the RCM solution for applications that require a climate simulation or projection over a given location should be addressed in a realistic context for that application, not within an idealized framework. How robust are the fine scales resolved in a high-resolution RCM, and to what point does high resolution imply high precision/accuracy? The application of an RCM should be preceded by a series of tests to first determine a configuration that ensures robustness of the model simulation statistics against changes in domain size and location, under realistic conditions for the given application. This procedure should be systematic, followed for any RCM undertaking, and its outcomes should be part of the published results as a standard validation procedure.
The validation of RCM solutions should be focused on the 'potential added value' rather than just direct comparison to observed or analysed data, since most atmospheric variables have variance spectra dominated by the planetary scales that are used to drive the RCM. In the validation process, it is important to separate the 'large scales' from the 'small scales', i.e. those scales only resolved by the RCM. Potential added value is expected for the small scales that cannot be resolved by the driving data, but it may also occur at intermediate or large scales if RCMs are able to correct some deficiencies of the large scales when driven with imperfect data from coarse-resolution GCMs. Validation of the large scales should focus on verifying the degree of control of the driving (nesting) technique, looking at the consistency of the simulation when driven with good data such as reanalyses and with imperfect data from coarse-grid GCMs. For the small scales, verification should focus on the degree to which these are generated (spun up) within the high-resolution RCM, whether due to surface forcings (topography, land-surface heterogeneities, etc.), hydrodynamic instabilities, or nonlinear cascades from large to small scales. Isolating the small scales from the total fields can highlight the main 'potential added value' of RCMs. PDFs of simulated variables may be useful to highlight the 'potential added value' of RCM simulations of extremes.
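The large-scale/small-scale separation described above can be illustrated with a sharp spectral low-pass filter on a doubly periodic field (real RCM studies typically use DCT-based filters suited to limited-area, non-periodic domains; the grid size and cutoff wavenumber here are illustrative choices):

```python
import numpy as np

def scale_separate(field, cutoff_wavenumber):
    """Split a doubly periodic 2D field into large-scale and small-scale
    parts with a sharp spectral low-pass filter at the given total
    wavenumber."""
    spec = np.fft.fft2(field)
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny) * ny
    kx = np.fft.fftfreq(nx) * nx
    k = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)  # total wavenumber
    large = np.fft.ifft2(spec * (k <= cutoff_wavenumber)).real
    return large, field - large

# Demo: a wavenumber-2 'driving' signal plus wavenumber-12 'RCM-generated'
# detail on a 64x64 periodic grid; a cutoff at wavenumber 6 separates them.
n = 64
x = np.arange(n) / n
large_true = np.repeat(np.cos(2 * np.pi * 2 * x)[None, :], n, axis=0)
small_true = np.repeat(np.cos(2 * np.pi * 12 * x)[None, :], n, axis=0)
large_part, small_part = scale_separate(large_true + small_true, cutoff_wavenumber=6)
```

Statistics (variance, PDFs of extremes) computed on the small-scale part in isolation then expose the RCM's potential added value.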
A WG of RCM is needed to give the RCM community a forum for issues that are specific for nested models used for climate application. A panel or task force within an existing WG with a different main interest has proved to be insufficient.
There are seeds sown for an international activity on the benchmarking of terrestrial models that will look across the carbon, water, energy and nutrient cycles. More funding is required to scale up the Free-Air Carbon Dioxide Enrichment (FACE) experiment effort to an international activity with strategic experiments located in different biomes and with more serious involvement of modellers in the design and interpretation of the experiments. The activity should be planned from the onset as a collaboration among the relevant communities and not exclusively led by experimental field ecologists (who are also reaching out to terrestrial modellers), as is currently demanded by the US DOE.
The acceptance by the terrestrial modelling community of the potential for optimization (implicitly, through natural selection) as a theoretical principle governing how plant resources (C and N) are allocated under different experimental conditions. This fills a major theory gap that is otherwise treated simply by 'plugging in' empirical numbers - an unreliable basis for prognostic modelling.