Trinity Testing

An automated testing service for building model calibration using EnergyPlus.

Available Tests

IDF-to-XML
XML-to-IDF
Download Test
Download Result
Submit Test
Submit Model

How do I use it?

File Conversion

The Trinity test framework uses XML files to represent building models. The XML format is more flexible than the standard EnergyPlus IDF format, especially for marking the components of a model that should be tuned/tested. For that reason, the system provides conversion in both directions between XML and IDF.

When converting from IDF to XML, a zipped version of the XML file is returned (corresponding to this schema). When converting from XML to IDF, a zipped file containing the IDF and a CSV is returned. The IDF is the E+ model, and the CSV contains any tuning/testing information that was marked in the original XML. (Once again, the XML is more flexible and informative than the IDF, so two files are required to communicate the information in a single XML file.)
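As a concrete illustration, the sketch below unpacks the archive returned by an XML-to-IDF conversion and separates the EnergyPlus model from the tuning CSV. The member names inside the archive are not specified here, so the sketch selects files by extension; "converted.zip" is a hypothetical local file name.

    import zipfile

    # Unpack the archive returned by XML-to-IDF conversion (file name is hypothetical).
    with zipfile.ZipFile("converted.zip") as archive:
        members = archive.namelist()
        idf_name = next(n for n in members if n.lower().endswith(".idf"))
        csv_name = next(n for n in members if n.lower().endswith(".csv"))
        archive.extract(idf_name)  # the EnergyPlus model
        archive.extract(csv_name)  # the tuning/testing information marked in the XML
    print("Model:", idf_name, "| Tuning info:", csv_name)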

Attributes may be added to any XML element that is to be tuned/tested. The Trinity test framework recognizes the following attributes on a given element (and collects them into the resulting CSV file when performing XML-to-IDF conversion):

tuneType
The data type to be tuned (usually "float" or "integer")
tuneMin
The minimum bound for the tuned variable
tuneMax
The maximum bound for the tuned variable
tuneDistribution
The distribution for the tuned variable (usually "uniform" or "gaussian")
tuneGroup
The group name to which this variable belongs (used to define meta-variables that should be tuned as one)
tuneConstraint
The complex constraint that applies to this variable (must use the group names)

The following is an example of those attributes:

tuneType="float" tuneMin="0" tuneMax="0.5" tuneDistribution="uniform" tuneGroup="A" tuneConstraint="A+B&lt;1"

Notice how the constraint uses &lt; instead of < because < is a markup character in XML.
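If the XML is built programmatically, a standard XML library handles that escaping automatically. Below is a minimal Python sketch; the "Material" element, its name, and the numeric values are hypothetical and only illustrate how the tune attributes attach to an element.

    import xml.etree.ElementTree as ET

    # Hypothetical element; actual tag and attribute names come from the Trinity schema.
    material = ET.Element("Material", attrib={"name": "WallInsulation"})
    material.set("tuneType", "float")
    material.set("tuneMin", "0")
    material.set("tuneMax", "0.5")
    material.set("tuneDistribution", "uniform")
    material.set("tuneGroup", "A")
    material.set("tuneConstraint", "A+B<1")  # serialized as A+B&lt;1

    # ElementTree escapes the markup character automatically on output.
    print(ET.tostring(material, encoding="unicode"))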

Downloading Tests and Results

Downloading Tests

By providing a test ID and the user's API key, the full test suite can be downloaded as a zipped file that contains the following files (see the sketch after this list):

baseModel.xml
The base model that should be tuned to get as close as possible to the true model
output.csv
The true model's EnergyPlus output
weather.epw
The weather file
schedule.csv (Optional)
The schedule file used with the model
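A minimal retrieval sketch follows. The service URL, endpoint path, and parameter names are assumptions for illustration; the actual values come from the Trinity service itself.

    import io
    import zipfile
    import requests

    # Hypothetical endpoint and parameter names.
    response = requests.get(
        "https://example.org/trinity/downloadTest",
        params={"testId": "42", "apiKey": "YOUR_API_KEY"},
    )
    response.raise_for_status()

    # Per the list above: baseModel.xml, output.csv, weather.epw, and optionally schedule.csv.
    with zipfile.ZipFile(io.BytesIO(response.content)) as bundle:
        print(bundle.namelist())
        bundle.extractall("test_42")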

Downloading Results

By providing a submission ID and the user's API key, the results of a test can be downloaded as a zipped file that contains the submitted XML model and a CSV file reporting performance measures for both the model output and the tuned/tested values that were marked with the attributes described above.
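Fetching a result is analogous; the endpoint and parameter names in this sketch are likewise assumptions.

    import io
    import zipfile
    import requests

    # Hypothetical endpoint; a submission ID and API key identify the result.
    response = requests.get(
        "https://example.org/trinity/downloadResult",
        params={"submissionId": "7", "apiKey": "YOUR_API_KEY"},
    )
    response.raise_for_status()
    with zipfile.ZipFile(io.BytesIO(response.content)) as result:
        result.extractall("result_7")  # submitted XML model plus performance CSV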

Creating Tests

To create a test, a user must upload the following files, along with an API key that has test-creation privileges (see the sketch after this list):

True Model
The target model or "answer key" that submitters attempt to recreate; this is kept private by the system
Base Model
The model that is given to submitters when they download a test; they should tune it to match the true model
Weather
The weather file associated with the base and true models
Schedule (Optional)
The schedule file associated with the base and true models
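A hypothetical upload might look like the sketch below; the endpoint, form-field names, and local file names are placeholders, and the API key must have test-creation privileges as noted above.

    import requests

    # Hypothetical endpoint and form-field names.
    with open("true_model.xml", "rb") as true_model, \
         open("base_model.xml", "rb") as base_model, \
         open("weather.epw", "rb") as weather, \
         open("schedule.csv", "rb") as schedule:
        response = requests.post(
            "https://example.org/trinity/submitTest",
            data={"apiKey": "YOUR_TEST_CREATION_KEY"},
            files={
                "trueModel": true_model,  # kept private by the system
                "baseModel": base_model,  # handed to submitters
                "weather": weather,
                "schedule": schedule,     # optional
            },
        )
    response.raise_for_status()
    print(response.text)  # the service's reply (format not specified here)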

Evaluating Models

To submit a model for evaluation, a user must upload the tuned model, along with the test ID against which it should be evaluated and an API key.
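Submitting a tuned model for evaluation could then look like the sketch below; as before, the endpoint and field names are assumptions.

    import requests

    # Hypothetical endpoint and field names; the test ID selects the test to score against.
    with open("tuned_model.xml", "rb") as model:
        response = requests.post(
            "https://example.org/trinity/submitModel",
            data={"testId": "42", "apiKey": "YOUR_API_KEY"},
            files={"model": model},
        )
    response.raise_for_status()
    print(response.text)  # the service's reply (format not specified here)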

What is its purpose?

The Trinity test framework is designed to deal with issues inherent in most auto-calibration results:

  1. Most calibrations in the literature are carried out on specific, unique buildings of interest to the researchers. This complicates any attempt by other investigators to duplicate the work.
  2. Researchers often report the results of their calibrations in different ways, using different metrics, and almost all results cover only the model output. If a real building is used, the exact components of the building are likely unknown, which is precisely why automatic calibration is needed in the first place. This leads to a proliferation in the literature of unique, largely irreplicable, and less informative results from automatic calibration approaches.

The solution to these problems is to test calibration approaches using modified benchmark models. For instance, a given Department of Energy commercial reference building has a fully specified EnergyPlus model, which produces noise-free output when run through EnergyPlus. Using such a model as a base, a controlled test case can be created in which certain variables of the base are manually modified within specified bounds (e.g., within 30% of the base value). This modified version then serves as the test case and can be run through EnergyPlus to produce similarly noise-free output. Anyone interested in testing a calibration approach can then retrieve the base model, the names and ranges of the modified variables, and the test case's EnergyPlus output. Ideally, the calibration procedure would discover the (hidden) variable values of the test case and produce very similar EnergyPlus output from the calibrated model. The calibration system's effectiveness can then be measured exactly by its error in the input domain (test versus calibrated variable values) and the output domain (test versus calibrated model EnergyPlus output).
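As a purely illustrative sketch of that workflow (the variable names, values, perturbation bound, and error measure below are examples, not Trinity's actual scoring), one might generate a hidden test case and score a calibration in the input domain like this:

    import random

    # Example base-model variables (hypothetical names and values).
    base = {"wall_conductivity": 0.05, "infiltration_rate": 0.6}

    # Create a hidden test case by perturbing each value within +/-30% of the base.
    random.seed(1)
    test_case = {k: v * random.uniform(0.7, 1.3) for k, v in base.items()}

    # Input-domain error: how far the calibrated values land from the hidden test values.
    calibrated = {"wall_conductivity": 0.052, "infiltration_rate": 0.55}
    input_error = sum(abs(calibrated[k] - test_case[k]) for k in base) / len(base)

    # Output-domain error would compare EnergyPlus output time series in the same spirit.
    print(f"mean absolute input-domain error: {input_error:.4f}")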

Where do I go for help?

Send an email to Joshua New (newjr@ornl.gov).