Frequently Asked Questions
Page Last Modified: 30 October, 2013
Where does the Downsizer pull its COOP climate data from?
The NOAA Web climate service.
My Downsizer client doesn't work any more.
If you have not pulled it lately, you should get the new version of the Downsizer from our web site; the old ones will not work with the updated servers.
Where can I go to get the software?
Click the Software link above.
What is the Modular Modeling System (MMS) and what happened to it?
MMS was originally developed as a Unix application, and its graphical user interface relied heavily on the Motif and X Window System libraries. Since MMS was ported to Windows several years ago, we have been unable to find a stable way to deal with these libraries, so the decision was made to reimplement the GUI in Java so that it runs on any platform; setup and installation are much easier now. At the same time, the MMS Model Builder was deprecated in favor of allowing users to select their process modules in the control file. The PRMS2010 model itself remains in C and Fortran.
Does MoWS provide training?
We usually teach a PRMS workshop at the National Training Center in Denver once a year. Although there are no prerequisites, the objective of the workshop is to assist users in setting up a PRMS watershed model to provide simulations for an existing funded project. Most attendees in the 2009 workshop were setting up projects to evaluate climate change scenarios.
What software tools does MoWS provide?
Precipitation-Runoff Modeling System (PRMS): (1) Simulates land-surface hydrologic processes, including evapotranspiration, runoff, infiltration, and interflow, by balancing energy and mass budgets of the plant canopy, snowpack, and soil zone on the basis of distributed climate information (temperature, precipitation, and solar radiation); (2) simulates hydrologic water budgets, at the watershed scale, with temporal scale ranging from minutes to centuries; (3) integrates with models used for natural resource management or other scientific disciplines; and (4) has a modular design that allows for selection of alternative hydrologic process algorithms among existing or easily added modules.
Let Us Calibrate (Luca): A multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
Object User Interface (OUI): A computer application that has been developed to provide the general framework needed to couple disparate environmental resources models and manage the necessary temporal and spatial data. Users may write model and data specific interfaces using the Java abstract classes in the OUI library. These interfaces are dynamically loaded by OUI at run time. Through the use of these interface classes and the XML control file, OUI's data tree and map based graphical user interface are highly configurable for most all applications. The OUI user's manual provides installation instructions, a detailed discussion of system concepts, a working example with complete data sets, and specifications for interface development and application using the OUI graphical user interface.
Thornthwaite Monthly Water Balance (TWB): Uses an accounting procedure to analyze the allocation of water among various components of the hydrologic system. Inputs to the model are monthly temperature and precipitation. Outputs include monthly potential and actual evapotranspiration, soil moisture storage, snow storage, surplus, and runoff.
The GIS Weasel: Aids in the preparation of spatial information for input to lumped and distributed parameter hydrologic or other environmental models. The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to a user's model and to generate parameters from those maps. The operation of the GIS Weasel does not require the user to be a GIS expert, only that the user have an understanding of the spatial information requirements of the environmental simulation model being used. The GIS Weasel software system uses a GIS-based graphical user interface (GUI), the C programming language, and external scripting languages. The software will run on any computing platform where ArcInfo Workstation (version 8.0.2 or later) and the GRID extension are accessible. The user controls the processing of the GIS Weasel by interacting with menus, maps, and tables.
Downsizer: A GUI that retrieves daily climate and gage data. Climate data come from the NWS COOP network and the NRCS SNOTEL network. The COOP data are updated regularly; we have to update the SNOTEL data manually, so SNOTEL is usually only current through the last water year. Gage data come from the USGS National Water Information System (NWIS).
Will MoWS help me set up my watershed model?
We will be glad to answer questions and depending on our work load, we will help, but we will not do your work for you.
What is an HRU?
An HRU is a Hydrologic Response Unit, the smallest homogeneous area in PRMS. In general, you want your HRUs to represent spatially unique areas, such as a headwater basin; in mountainous terrain, you would further separate a north-facing slope from a south-facing slope because of the different amounts of insolation received. In the end it comes down to what questions are being asked. Each HRU is assumed to be homogeneous for each attribute, such as soil type or vegetation density. MoWS users commonly use the GIS Weasel to derive spatial parameters (elevation, land cover, soils, etc.) for each HRU.
How big (or small) should my HRUs be?
Not knowing anything about your specific application, HRUs should generally be between 0.1 and 500 square kilometers in size. If you make them outside of this range, you're probably up to no good.
How many HRUs do I need?
Not knowing anything about your specific application, 100 is a good number for a PRMS model of a 1000 square kilometer watershed. Of course, there are very good reasons why you may want more or less.
What shape should my HRUs be?
HRUs can be any shape that makes sense for your application. MoWS generally uses irregularly shaped polygons, which correspond to overland flow or solar radiation planes associated with watershed stream segments. Some PRMS modelers prefer regular squares or rectangles which may correspond to gridded data sets or the spatial discretization scheme of other models.
I've downloaded and unzipped the PRMS distribution. Now what?
The unzipped directory contains the programs and data to run simulations for an example watershed. To make your own model, copy the example project directory and rename the copy to something like 'MyBasin' so that you now have a directory named prms\projects\MyBasin. You will then need to modify the file names and content to fit your project.
- Put the file made with the Weasel (called the "parameter file") and the file made by the Downsizer (called the "data file") into the folder prms\projects\MyBasin\input.
- Go into the folder prms\projects\MyBasin\control and copy and rename the control file to something like MyBasin.control. This copy is a backup: if the file gets messed up while you are editing, you can go back.
- Load MyBasin.control into your favorite editor. Search for "param_file". The next line should be the name of the demo parameter file that came with the distribution. Change this name to the name of your parameter file. The distribution version of PRMS uses relative paths. You can use full paths, but for now, use relative paths.
- Do the same thing for your data file by searching for "data_file" and specifying the path to your data file.
- Edit the .bat files in the folder prms\projects\MyBasin to point to the control and parameter files as appropriate.
- Run your model by double clicking on MyBasin.bat.
How do I know that the model actually ran?
Remember, just because PRMS starts doesn't mean that it ran.
Three things to look at if you're not sure:
- Check the Basin Summary Report. It should have lines in it for the time period that the model ran, so if you don't see these lines, the model probably did not really run.
- Look at the DOS command window where the model was started and see if there are any ominous-sounding messages. PRMS is semi-verbose, so there will always be some normal chatter from the model on standard output, but messages about not being able to find files, bad modules, etc. could indicate that your model did not really run.
- If the above two items look good, double click on MyBasin_gui.bat in the PRMS2010_beta\projects\MyBasin folder. If you have Java version 6 (1.6.x_x_) installed and everything is good, you will see the GUI. Select "Run->Single Run" from the top-level menu bar, then click the "Start" button at the bottom of the "MMS Run Control - Single Run" window. You will see graphs appear if the model actually runs.
What are the inputs to PRMS and where do they come from?
Daily-mode model inputs are daily precipitation, maximum and minimum air temperature, and solar radiation. The energy inputs of air temperature and solar radiation are used in the computation of evaporation, transpiration, sublimation, and snowmelt. These point data are extrapolated to each HRU using a set of adjustment coefficients developed from regional climate data. The coefficients typically include the effects of HRU elevation, slope, aspect, and distance to one or more measurement sites. Measured maximum and minimum daily air temperature data are adjusted using monthly or daily lapse rates and the elevation difference between a climate station and each HRU.
Precipitation amount on each HRU is computed by multiplying point measurements by a monthly correction factor. The correction factor attempts to account for a number of sources of measurement variability and error, including the effects of elevation, spatial variation, topography, gage location, deficiencies in gage catch due to the effects of wind, and other factors. One distribution method enables the user to identify the precipitation gage most representative of an HRU and to specify the monthly correction factor used to compute the HRU precipitation amount. A second method is similar in that the gage most representative of an HRU is selected; however, a second gage is also selected for use in computing the monthly correction factor as a function of the ratio of the mean monthly precipitation at each station and their difference in elevation.
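The two adjustments described above reduce to simple arithmetic. The sketch below is illustrative only: the function and parameter names are made up for this example and are not the actual PRMS parameter names.

```python
# Illustrative sketch of PRMS-style climate distribution to an HRU.
# All names here are hypothetical, not PRMS parameter names.

def hru_temperature(station_temp_f, station_elev_ft, hru_elev_ft,
                    lapse_rate_f_per_kft):
    """Adjust a station temperature to an HRU using a lapse rate
    expressed in degrees per 1000 elevation units."""
    elev_diff_kft = (hru_elev_ft - station_elev_ft) / 1000.0
    return station_temp_f - lapse_rate_f_per_kft * elev_diff_kft

def hru_precip(station_precip_in, monthly_correction):
    """Scale a station precipitation measurement by the monthly
    correction factor chosen for the HRU."""
    return station_precip_in * monthly_correction

# Example: station at 5000 ft, HRU at 7000 ft, 3.5 degF per 1000 ft
tmax_hru = hru_temperature(60.0, 5000.0, 7000.0, 3.5)  # 60 - 3.5*2 = 53.0
ppt_hru = hru_precip(0.40, 1.15)                       # 0.40 * 1.15 = 0.46
```

Note that the elevation difference and the lapse rate must use the same elevation units, which is exactly the units pitfall discussed in the troubleshooting section below.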
What is the difference between dimensions, parameters, and variables in PRMS?
Dimensions define the number of spatial features or other constants, such as the number of HRUs, number of months in a year, and the number of temperature stations for which time-series data are specified in the PRMS Data File(s). Dimensions are specified in the PRMS Parameter File.
Parameters are user-specified data that do not change during a simulation, such as the area, slope, and aspect of a hydrologic response unit (HRU). Parameters may have a single value or they may include multiple values (one- or two-dimensional arrays). Parameters are specified in the PRMS Parameter File.
Variables are states and fluxes which vary from one time step to another, and are either specified in the PRMS Data File(s) (such as the daily maximum air temperature at a temperature station) or calculated (such as the soil-infiltration rate for a HRU). Variables, like parameters, may have a single value or they may include multiple values (one- or two-dimensional arrays). The input variables are specified in the PRMS Data File. Selected output variables are written to the Statvar File.
What is the difference between PRMS Storm and Daily modes?
Watershed response can be simulated at both a daily and a storm time scale. In the daily mode, hydrologic components are simulated as daily average or total values. Streamflow is computed as a mean daily flow. In the storm mode, selected hydrologic components are simulated at time intervals that can range from less than one to 60 minutes. The time step must be constant within a storm but could be different for each storm. Continuity of mass is maintained as the model moves from daily mode to storm mode and back to daily mode. Storm hydrographs and sediment yields for selected rainstorms can be simulated in storm mode. Sediment modeling capabilities are provided only in the storm mode.
For storm-mode computations, a watershed is conceptualized as a series of interconnected flow plane and channel segments. An HRU is considered the equivalent of a single flow plane. The shape of the flow plane is assumed to be rectangular, with the length of one side of rectangle equal to the length of the channel segment that receives runoff from the flow plane. The flow-plane width is computed by dividing the HRU area by the channel segment length. All flow planes are assumed to connect to a channel segment. Cascading flow planes are not currently supported, but a module to support this capability is being developed.
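The flow-plane geometry above reduces to a single division; a minimal sketch (area and channel length must use consistent units, which is the modeler's responsibility):

```python
# Rectangular flow-plane width for storm mode: HRU area divided by
# the length of the channel segment that receives its runoff.

def flow_plane_width(hru_area, channel_length):
    """Width of the rectangular flow plane equivalent to an HRU."""
    if channel_length <= 0:
        raise ValueError("channel segment length must be positive")
    return hru_area / channel_length

# Example: a 2,000,000 m^2 HRU draining to a 1,000 m channel segment
width_m = flow_plane_width(2_000_000.0, 1_000.0)  # 2000.0 m
```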
Why doesn't the PRMS Basin Summary Report balance?
1. The PRMS Basin Summary Report was originally written as an aid for calibration and did not necessarily include all of the components of the water cycle required to compute an exact balance. This means that although the report did not show a balance, when all of the necessary individual variables were output to statvar files and summed, PRMS would balance.
Over the past several years, we have worked very hard on improving this report. Modern versions of this report do balance. Older versions of PRMS may produce reports which do not balance. If this is important to you, update your model.
2. If the PRMS Basin Summary Report does not balance and you have updated your model within the last year, it is certainly possible that you have found a bug. If you have a question about this, contact MoWS help.
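The "output the individual variables and sum them yourself" check described above can be sketched as follows. The component names are illustrative placeholders, not the actual PRMS output variable names, and a real check would include every storage and flux your module selection produces.

```python
# Minimal water-balance residual check on summed basin components.
# Names are placeholders; map them to your own statvar output.

def balance_residual(precip, actual_et, streamflow, storage_change):
    """Residual of a simple basin budget:
    precipitation = ET + streamflow + change in storage."""
    return precip - (actual_et + streamflow + storage_change)

# Example annual totals in inches over the basin
residual = balance_residual(precip=30.0, actual_et=18.0,
                            streamflow=10.5, storage_change=1.5)
closes = abs(residual) < 0.01  # True when the budget closes
```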
What is the difference between PRMS and HSPF (or Sac SMA, VIC, WMS, MIKE SHE, etc.)?
Read Computer Models of Watershed Hydrology by Vijay P. Singh. While we are more than happy to answer questions about PRMS, we cannot comment on the suitability of these other models for any particular application.
My PRMS (or MODFLOW) model seems to work just fine. Do I really need GSFLOW?
The short answer is "maybe no." GSFLOW can be a lot more work to set up and calibrate than either PRMS or MODFLOW individually, especially if you don't have the surface- or ground-water modeling expertise that is required. We encourage all watershed and ground-water modelers to investigate the capabilities of GSFLOW, but it may not be required for all situations.
The hydrograph predicted by PRMS doesn't match the measured gage discharge at all. What next?
I. Get daily printout
II. Water balance (annual or monthly volumes)
- Check whether computed values such as daily solar radiation, precipitation, temperature, evapotranspiration, and other input data are within normal expected ranges. If not, double check that your input data contain observations of temperature and precipitation in the correct order. Also verify that the elevations of your temperature stations and HRUs are in the same units, either feet or meters; lapse rates are in temperature units per 1000 elevation units.
- If temperatures and precipitation rates plotted for individual HRUs look OK, look for patterns (i.e., recessions always low).
Case 1 - model generally underpredicts volume.
Case 2 - model generally overpredicts volume.
- ET too high? Compare potential ET with regional values. Adjust epan_coef, hamon_coef, jh_coef, and temperature adjustment parameters - tmin_lapse, tmax_lapse, tmin_adjust, and tmax_adjust.
- rain_adj/snow_adj too low?
- Look for seasonal or monthly patterns and check seasonal and monthly parameters.
- soil_moist_max too high?
- Continually increasing storage in groundwater or subsurface reservoirs? Adjust gwflow_coef, ssrcoef_lin, ssrcoef_sq. Possibly soil2gw_max, ssr2gw_rate.
- Snowpack not melting? Check snowpack parameters, temperature data and adjustment parameters, solar radiation, tmaxf_allsnow, tmaxf_allrain, rad_trncf.
- hru_imperv and imperv_stor_max.
Case 3 - Model underpredicts in high precipitation years.
- ET too low? (the reverse of Case 1)
- rain_adj/snow_adj too high?
- Check for seasonal and monthly pattern.
- soil_moist_max too low?
- Impervious area or retention storage too low.
Case 4 - Model underpredicts in low precipitation years.
- ssrcoef_sq too low? ssr2gw_rate.
- Potential ET too high.
- Check seasonal and monthly.
Case 5 - Model overpredicts in high precipitation years.
- soil_moist_max too high?
- hamon_coef/jh_coef too high?
- Check seasonal and monthly.
Case 6 - Model overpredicts in low precipitation years.
- ssrcoef_sq too high?
- Surface runoff too high?
- Impervious area too high?
- Potential ET too low?
- Check seasonal and monthly.
III. Daily Runoff values (timing)
- Soil type?
- ET too low?
- Check seasonal and monthly.
1. Look at components of flow and determine which is causing problem.
Case 1 - groundwater - Recompute gwflow_coef. Adjust ssr2gw_rate and/or soil2gw_max.
Case 2 - subsurface - Adjust ssrcoef_lin and ssrcoef_sq. Look at and adjust if needed: ssr2gw_rate and soil_moist_max.
Case 3 - surface - Adjust smidx_exp, carea_max, and carea_min/smidx_coef.
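"Recompute gwflow_coef" above usually means re-deriving a linear-reservoir coefficient from an observed baseflow recession. A sketch, assuming the standard linear-reservoir form Q(t) = Q0 * exp(-k * t) between two baseflow-only days; whether this value maps directly onto gwflow_coef depends on your module choices and time units, so treat it as a starting estimate:

```python
import math

def recession_coefficient(q_start, q_end, days):
    """Daily linear-reservoir coefficient k estimated from two
    discharges separated by `days` days of uninterrupted recession:
    k = ln(q_start / q_end) / days."""
    if q_start <= 0 or q_end <= 0 or q_end >= q_start:
        raise ValueError("need a positive, declining recession")
    return math.log(q_start / q_end) / days

# Example: baseflow falls from 100 to 50 cfs over 20 recession days
k = recession_coefficient(100.0, 50.0, 20.0)  # ~0.0347 per day
```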
IV. Sensitivity and optimization procedures
1. Run a sensitivity analysis to determine the most sensitive parameters, then optimize using automatic or manual calibration.
The parameters affecting the volume and timing of runoff differ whether the basin is dominated by snowmelt or by rainfall.
For basins where most of the precipitation falls as snow, these are common sensitive parameters:
soil_moist_max (volume & timing)
tmax_lapse (volume & timing)
tmin_lapse (volume & timing)
hamon_coef or jh_coef (volume)
For basins where the majority of the precipitation falls as rain, these are common sensitive parameters:
soil_moist_max (volume & timing)
hamon_coef or jh_coef (volume)
Will a completed PRMS model be able to integrate into GSFLOW without much modification?
If you have a PRMS model running and calibrated, to set up GSFLOW you will also need the items below. The process of setting up one of these models is described in detail in the "Example Problem" section of the GSFLOW documentation; I would suggest that anybody who is thinking about doing this read those 20 pages.
- A MODFLOW 2005 (version 1.8) model of the same area as the PRMS model.
- The MODFLOW model should use the Unsaturated Zone Flow (UZF) package.
- The MODFLOW model should use the Stream Flow Routing (SFR) package instead of the River package.
- The MODFLOW model should at least be well calibrated for steady state.
- GSFLOW requires additional parameters which describe how the PRMS HRUs map to the MODFLOW cells and how the cells map to the HRUs.
- Use the individually calibrated PRMS and MODFLOW input files as the starting input files for GSFLOW.
- Calibrate the PRMS side of GSFLOW by adjusting the parameters which control gravity drainage to reflect the fact that PRMS considers much more flow as "ground-water flux" than MODFLOW does. PRMS ET parameters must be adjusted down because MODFLOW is discharging (additional) water to the soil zone, so more soil water is available to satisfy PET than with straight PRMS.
- The MODFLOW side of GSFLOW must now undergo a transient calibration with recharge and lateral contributions to streamflow coming from PRMS. These fluxes are (usually) much more variable in both time and space than what the usual MODFLOW model sees.
Our cooperator has a climate station network for flood warning. Can I use this data to drive PRMS?
Maybe. There are several factors that determine the applicability of measured climate time-series data as PRMS input, including period of observations, quality of observations, and proximity.
What kind of forecasting has PRMS been used for?
Disclaimer: Members of the MoWS group are not hydrologic forecasters. We do not make or use forecasts in any decision making context. Also, we do not develop, support, or run any atmospheric weather or climate simulation models.
There are three basic types of forecasts that PRMS has been used for:
- Short term (less than 5-day outlook), where the model is driven with antecedent conditions and weather forecasts. The purpose of these models is usually to determine the timing and peak amount of the snowmelt runoff at the beginning of the snowmelt season. Of course, the forecasters at the NWS make flood forecasts, which are similar in that they use weather forecasts and real-time observations to drive the models; but, to my knowledge, PRMS has not been used for this type of real-time flood forecasting.
- Medium (or seasonal) term (2-week to 6-month outlook), where the model is driven with antecedent conditions and some kind of expected short-term climate scenarios. These climate scenarios usually are synthetically generated, output from an atmospheric model, or historic climate (ESP). I like ESP best because the other methods are a lot of work and it's not clear that you get any more information out of them. If you believe that the climate during this year's runoff season is probably going to be like the previous 10 or 20 years, why not initialize your model (antecedent conditions) with the current snowpack and soil moisture state and just run those seasonal climate traces through? This type of water supply forecasting has been, to date, the most successful application of PRMS with the ESP methodology.
- Long term (10- to 100-year outlook), where the model is driven with atmospheric model output. Antecedent conditions don't matter because we are looking for trends. We have a funded project this year to investigate the response of PRMS and GSFLOW to the IPCC climate change scenarios. These will all be (at least) 100-year runs. For each climate change scenario, each simulation of each climate model will be a trace in the ensemble used to evaluate that climate change scenario.
As far as the usefulness of ESP goes, we have had the most success using it for seasonal (type 2) forecasts. Weather forecasts work better for short-term forecasts, and ESP can't capture the long-term trends that the climate models (hopefully) can.
Where can I get solar radiation data for calibration?
From the GDP (http://cida.usgs.gov/climate/gdp/) using the NREL solar radiation data set (http://mows.cr.usgs.gov:8080/thredds/dodsC/sr). These data were originally downloaded from the NREL web site (http://www.nrel.gov/gis/data_solar.html) in units of kilowatt-hours per square meter per day and were converted to Langleys per day, the units PRMS uses. I recommend that everyone use these values for calibrating the solar radiation parameters (i.e., dd_slope and dd_intcp).
Langleys are a bit archaic. Here are some useful unit conversions:
1 Langley per day = 1 calorie per square centimeter per day
1 Langley per day = 0.01163 kilowatt-hour per square meter per day
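The conversions above come from 1 Langley = 1 cal/cm² and 1 cal = 4.184 J, so 1 Langley = 41,840 J/m² ≈ 0.01162 kWh/m². As code:

```python
# Convert NREL-style kWh/m^2/day to the Langleys/day PRMS expects.
# 1 Langley = 41840 J/m^2; 1 kWh = 3.6e6 J.

KWH_PER_M2_PER_LANGLEY = 41840.0 / 3.6e6  # ~0.011622

def kwh_m2_day_to_langleys_day(kwh_m2_day):
    """kWh/m^2/day -> Langleys/day."""
    return kwh_m2_day / KWH_PER_M2_PER_LANGLEY

# Example: a typical midsummer value of 5.5 kWh/m^2/day
ly_per_day = kwh_m2_day_to_langleys_day(5.5)  # ~473 Langleys/day
```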
NREL has several different values of solar radiation available, but the "global horizontal" data has been posted. "Global horizontal" means on a surface tangent to the surface of the Earth -- in other words a horizontal plate, wherever on the Earth you are measuring (or using) it.
I asked NREL how they want to be cited. They said:
Use the data as you need - the data is out there for people to use however we can't guarantee the results (as noted in our disclaimer). If you use our maps as we published, the citation should be:
"This map was created by the National Renewable Energy Laboratory for the U.S. Department of Energy." If you reprocess the data and create your own map, you could use "Map created by XXX with base data from the National Renewable Energy Laboratory."
Where can I get potential evapotranspiration data for calibration?
From the GDP (http://cida.usgs.gov/climate/gdp/) using a digitized version of the NOAA evaporation atlas (http://mows.cr.usgs.gov:8080/thredds/dodsC/pe). These data are in units of millimeters per month. PRMS uses mean monthly values in inches per day as calibration targets.
1 mm per month = 0.001293 inches per day
Cite this data set as:
Farnsworth, R.K., Thompson, E.S., and Peck, E.L., 1982, Evaporation atlas for the contiguous 48 United States: NOAA Technical Report NWS 33, U.S. Dept. of Commerce, Washington DC
What is Shuffled Complex Evolution (SCE)?
SCE is a global optimization strategy used to calibrate model parameters. As with any optimization, the user must be prudent in selecting appropriate parameters and objective functions.
Luca gives me strange error messages that I don't understand.
The first thing you should do is to exit Luca and start over again. This sometimes helps.
Luca fails to run SCE. What's wrong?
When this happens, usually there is something wrong with your initial parameter file, data file or the setting of your MMS model.
Can I run Luca multiple times simultaneously?
Yes, but make sure those sessions running simultaneously do not have the same name, which must be set in Instruction 1 in Luca.
What version of Java do I need to run OUI?
OUI requires at least the current major release of the Java JRE, available from: http://java.sun.com/
Do you have any instructions on how to recompile OUI after making minor modifications to some of the tree node classes?
Here are some quick instructions to get you going. I use the NetBeans IDE to develop and build OUI, so these instructions assume the use of that system.
There are two ways to do this. The first is to compile up your new tree nodes into your own jar file and add this jar to the classpath when oui runs. Then you can reference your new tree nodes in the project.xml file just like any of the other tree nodes that come with oui.
The other way is to work on the oui source code directly. This can be a bit more involved. The first thing you need is all of the oui source code; it is in a directory called oui/src/.
Then, start up netbeans and make a new project as a "java application". Name it "oui".
Once you have the new project created, look for it in the "Projects" window in the upper left corner of netbeans. Right click on the "oui" project and choose "Project Properties" from the popup menu. Add the oui/src directory to the "Source Package Folder" list.
Then click on "Libraries" in the "Categories" tree. Add all of the jar files (except for oui.jar) in the oui/lib directory.
Click the OK button to close the "Project Properties" window.
At this point, you should be able to build the project by right clicking on "oui" in the "Projects" window and choosing "Build" or "Clean and Build" from the menu.
If the thing builds correctly, you can run oui directly from the IDE by setting your project.xml file and working directory in the "Project Properties" window (under "Run"), or you can copy the "oui.jar" file from the oui netbeans project directory. Look for it in a subdirectory called "dist/". This oui.jar file can replace the standard version of oui.jar in the distribution; it will contain any changes you have made to the tree nodes.
How does OUI run ESP?
The Ensemble Streamflow Prediction (ESP) methodology was developed by Jay Day (then) of the National Weather Service. It is described in:
Day, G.N., 1985, Extended streamflow forecasting using NWSRFS: American Society of Civil Engineers, Journal of Water Resources Planning and Management, v. 111 no. 2, p. 157-170.
OUI implements what he describes. For clarification, here are the basic steps for implementing ESP methodology:
- Determine the "forecast period." This is the period that defines the set of ensemble output traces that ESP will generate.
- Determine the "initialization period." This is the time period that determines the antecedent conditions (i.e., soil moisture or snowpack) for the forecast. By default, OUI sets this to end on the day before the forecast period starts; the start time is then set to two years (730 daily time steps) before this. This length is a variable specified in the project.xml file, so you can change it if you want. I wouldn't make it any shorter, and it doesn't really need to be any longer.
- Based on the forecast period, the ESP methodology determines an ensemble set of "historic periods." It goes back through the input climate data file and picks out periods which cover the forecast period from previous years.
- The input climate data from the initialization period is pasted together with each historic period and written into separate input climate data files. Then PRMS simulates each of these forecast scenarios, saving the output of each simulation to a different file.
- These output files are analyzed together as the "ESP output forecast ensemble."
Here's an example:
I have a PRMS input climate file which covers the time period 10/1/1998 through 3/31/2008. I would like to make an ESP forecast for the period 4/1/2008 through 7/31/2008. So, I specify as input to ESP:
Forecast period: 4/1/2008 through 7/31/2008
Initialization period: 4/1/2006 through 3/31/2008
From this, ESP determines that I have 9 historic periods:
4/1/1999 through 7/31/1999
4/1/2000 through 7/31/2000
4/1/2001 through 7/31/2001
4/1/2002 through 7/31/2002
4/1/2003 through 7/31/2003
4/1/2004 through 7/31/2004
4/1/2005 through 7/31/2005
4/1/2006 through 7/31/2006
4/1/2007 through 7/31/2007
The climate data for the initialization period is combined with the data for each of the historic periods to make the 9 input files which are used to generate the simulated traces of the ESP ensemble.
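The historic-period selection in the example above can be sketched as follows. This is a simplified reconstruction, not OUI's actual code; it ignores leap-day handling and simply takes the forecast window from every earlier year fully covered by the climate record.

```python
# Sketch of ESP historic-period selection (simplified; not OUI code).
from datetime import date

def historic_periods(record_start, record_end, fc_start, fc_end):
    """Return (start, end) date pairs for each prior year whose copy
    of the forecast window lies entirely inside the climate record."""
    periods = []
    for year in range(record_start.year, fc_start.year):
        start = fc_start.replace(year=year)
        end = fc_end.replace(year=year)
        if start >= record_start and end <= record_end:
            periods.append((start, end))
    return periods

# The example above: record 10/1/1998-3/31/2008, forecast 4/1-7/31/2008
periods = historic_periods(date(1998, 10, 1), date(2008, 3, 31),
                           date(2008, 4, 1), date(2008, 7, 31))
# 9 periods: 4/1/1999-7/31/1999 through 4/1/2007-7/31/2007
```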
If you run ESP in OUI, and then look in the PRMS input and output directories, you will see all of these files.
How can I extract flows corresponding to specific exceedence levels from the ESP tool? When I view the report, it just ranks the flow. Is there any capability to use a probability distribution to pull out the 90, 50, and 10 percent exceedence flows?
The ESP tool looks at the forecast traces for a defined period and rates them according to either total volume or peak value during the period. This is what was programmed for the BOR in Yakima. These numbers show up on the trace list when either "rank by volume" or "rank by peak" is selected.
If I understand the question, what is wanted is daily time-series plots where each curve corresponds to an exceedence level; the exceedence values for each day come from a reranking of the traces for that day. If this is what is wanted, then no, the ESP tool does not do this.
If you have an analysis program that does the ESP analysis you like, you can always pull the forecast traces from the output files directly. They are in output/esp.
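If you do pull the traces yourself, the per-day reranking described above is straightforward. A sketch, using a simple nearest-rank quantile (exceedence probability p corresponds to the (1 - p) quantile); a production analysis might use a different plotting-position formula:

```python
# Per-day exceedence curves from a set of ESP traces.

def daily_exceedance(traces, probs=(0.90, 0.50, 0.10)):
    """traces: list of equal-length daily flow lists, one per trace.
    Returns {prob: [flow_day0, flow_day1, ...]} where each day's
    values are reranked independently."""
    n_days = len(traces[0])
    out = {p: [] for p in probs}
    for day in range(n_days):
        vals = sorted(t[day] for t in traces)
        for p in probs:
            # nearest-rank value at the (1 - p) quantile
            idx = min(int((1.0 - p) * len(vals)), len(vals) - 1)
            out[p].append(vals[idx])
    return out

# Example: three 2-day traces
curves = daily_exceedance([[10, 20], [30, 40], [50, 60]])
```

With only three traces, the 90-percent exceedence curve is the lowest trace's values each day and the 10-percent curve is the highest; with a real ensemble the curves interleave day by day.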