Evaluation UI Webservices

The OCW evaluation UI is a demonstration web application that is built upon the OCW toolkit. The web services for the application are written in Python on top of the Bottle Web Framework.

Configuration and Dependencies

The Evaluation UI is built on top of the OCW toolkit and as such requires it to function properly. Please check the toolkit’s documentation for relevant installation instructions. You will also need to ensure that you have Bottle installed. You can install it with:

pip install bottle

The backend serves the static files for the evaluation frontend as well. If you plan to use the frontend, you need to ensure that the app directory is present in the main web service directory. The easiest way to do this is to create a symbolic link where the run_webservices module is located. Assuming you have the entire ocw-ui directory, you can do this with the following commands.

cd ocw-ui/backend
ln -s ../frontend/app app

Finally, to start the backend just run the following command.

python run_webservices.py
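
Once the server is running, you can check that the services respond. The snippet below is a minimal sketch using the third-party requests package (pip install requests); the host and port are assumptions and should be adjusted to match your run_webservices configuration.

import requests

# Assumed base URL; change the host/port to match your setup.
BASE_URL = 'http://localhost:8082'

# /path_leader/ is a lightweight endpoint that should answer once the
# backend is up.
response = requests.get(BASE_URL + '/path_leader/')
print(response.json())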

Web Service Explanation

The backend endpoints are broken up into a number of modules for ease of maintenance and understanding. The run_webservices module is the primary application module. It ties the various submodules together into a single application and also defines a number of helpful endpoints for returning static files such as the index page, CSS files, and JavaScript files.

Local File Metadata Extractors

The local_file_metadata_extractors module contains all the endpoints that are used to strip information out of various objects for display in the UI. At the moment, the main functionality is stripping out metadata from NetCDF files when a user wishes to load a local file into the evaluation.

GET /list_latlon/(file_path: path)

Retrieve lat/lon information from the given file.

Parameters:
  • file_path (String) – Path to the NetCDF file from which lat/lon information should be extracted
Returns:

Dictionary containing lat/lon information if successful, otherwise failure information is returned.

Example successful JSON return

{
    'success': true,
    'lat_name': The guessed latitude variable name,
    'lon_name': The guessed longitude variable name,
    'lat_min': The minimum latitude value,
    'lat_max': The maximum latitude value,
    'lon_min': The minimum longitude value,
    'lon_max': The maximum longitude value
}

Example failure JSON return

{
    'success': false,
    'variables': List of all variables present in the NetCDF file
}
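
As a sketch of how a client might call this endpoint, the example below uses the third-party requests package; the base URL and the NetCDF file path are assumptions chosen for illustration.

import requests

BASE_URL = 'http://localhost:8082'            # assumed host/port
FILE_PATH = '/usr/local/ocw/data/example.nc'  # hypothetical NetCDF file on the server

response = requests.get(BASE_URL + '/list_latlon' + FILE_PATH)
info = response.json()

if info['success']:
    print('Latitude variable:', info['lat_name'])
    print('Longitude variable:', info['lon_name'])
else:
    # The guess failed; the response lists every variable in the file instead.
    print('Available variables:', info['variables'])
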
GET /list_time/(file_path: path)

Retrieve time information from the provided file.

Parameters:
  • file_path (String) – Path to the NetCDF file from which time information should be extracted
Returns:

Dictionary containing time information if successful, otherwise failure information is returned.

Example successful JSON return

{
    "success": true,
    "time_name": The guessed time variable name,
    "start_time": "1988-06-10 00:00:00",
    "end_time": "2008-01-27 00:00:00"
}

Example failure JSON return

{
    "success": false,
    "variables": List of all variable names in the file
}
GET /list_vars/(file_path: path)

Retrieve variable names from file.

Parameters:
  • file_path (String) – Path to the NetCDF file from which variable information should be extracted
Returns:

Dictionary containing variable information if successful, otherwise failure information is returned.

Example successful JSON return

{
    "success": true,
    "variables": List of variable names in the file
}

Example failure JSON return

{
    "success": false
}
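
The three extractors are typically used together to assemble the 'dataset_info' block that POST /run_evaluation/ (documented below) expects for a local dataset. The sketch below again uses the requests package with an assumed base URL and a hypothetical file path.

import requests

BASE_URL = 'http://localhost:8082'            # assumed host/port
FILE_PATH = '/usr/local/ocw/data/example.nc'  # hypothetical NetCDF file on the server

latlon = requests.get(BASE_URL + '/list_latlon' + FILE_PATH).json()
times = requests.get(BASE_URL + '/list_time' + FILE_PATH).json()
variables = requests.get(BASE_URL + '/list_vars' + FILE_PATH).json()

if latlon['success'] and times['success'] and variables['success']:
    # Shape the metadata into the 'dataset_info' dict used for a local
    # (data_source_id == 1) dataset in POST /run_evaluation/.
    dataset_info = {
        'id': FILE_PATH,
        'var_name': variables['variables'][0],  # pick the variable to evaluate
        'lat_name': latlon['lat_name'],
        'lon_name': latlon['lon_name'],
        'time_name': times['time_name'],
        'name': 'Example local dataset',
    }
    print(dataset_info)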

Directory Helpers

The directory_helpers module contains a number of endpoints for working with directories on the server. The frontend uses these endpoints to retrieve directory listings (restricted to a prefix path for security) and to fetch result directory information.

GET /list/(dir_path: path)

Return the listing of a supplied path.

Parameters:
  • dir_path (String) – The directory path to list.
Returns:

Dictionary containing the directory listing if possible.

Example successful JSON return

{
    'listing': [
        '/bar/',
        '/baz.txt',
        '/test.txt'
    ]
}

Example failure JSON return

{'listing': []}
GET /list/

Return the listing of a supplied path.

The response format is identical to GET /list/(dir_path: path), documented above.
GET /results/

Retrieve results directory information.

The backend’s results directory is determined by WORK_DIR. All the directories there are formatted and returned as results. If WORK_DIR does not exist, an empty listing will be returned (shown as a ‘failure’ below).

Successful JSON Response

{
    'listing': [
        '/bar',
        '/foo'
    ]
}

Failure JSON Response

{
    'listing': []
}
GET /results/(dir_path: path)

Retrieve specific result files.

Parameters:
  • dir_path (String) – The relative results path to list.
Returns:

Dictionary of the requested result’s directory listing.

Successful JSON Response

{
    'listing': [
        'file1',
        'file2'
    ]
}

Failure JSON Response

{
    'listing': []
}
GET /path_leader/

Return the path leader used for clean path creation.

Example JSON Response

{'leader': '/usr/local/ocw'}
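
A small sketch of how a client might combine /path_leader/ and /list/ to browse the server, again using the requests package with an assumed base URL:

import requests

BASE_URL = 'http://localhost:8082'  # assumed host/port

# Ask the backend which prefix it allows listings under ...
leader = requests.get(BASE_URL + '/path_leader/').json()['leader']

# ... then list that directory. Listings are restricted to this prefix
# for security.
listing = requests.get(BASE_URL + '/list' + leader).json()['listing']
for entry in listing:
    print(entry)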

RCMED Helpers

The rcmed_helpers module contains endpoints for loading datasets from the Regional Climate Model Evaluation Database at NASA’s Jet Propulsion Laboratory.

GET /datasets/

Return a list of dataset information from JPL’s RCMED.

Example Return JSON Format

[
    {
        "dataset_id": "17",
        "shortname": "The dataset's short name",
        "longname": "The dataset's, full name",
        "source": "Where the dataset originated"
    },
    ...
]
GET /parameters/

Return dataset specific parameter information from JPL’s RCMED.

Example Call Format

/parameters/?dataset=<dataset's short name>

Example Return JSON Format

[
    {
        "parameter_id": "80",
        "shortname": "The dataset's short name",
        "datasetshortname": "The dataset's short name again",
        "longname": "The dataset's long name",
        "units": "Units for the dataset's measurements"
    }
]
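
For example, a client can discover the available RCMED datasets and then look up the parameters of one of them. The sketch below uses the requests package with an assumed base URL; the dataset chosen is purely illustrative.

import requests

BASE_URL = 'http://localhost:8082'  # assumed host/port

# Fetch the available RCMED datasets ...
datasets = requests.get(BASE_URL + '/datasets/').json()

# ... then ask for the parameters of the first one by its short name.
shortname = datasets[0]['shortname']
params = requests.get(BASE_URL + '/parameters/',
                      params={'dataset': shortname}).json()

for param in params:
    print(param['parameter_id'], param['longname'], param['units'])
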
GET /parameters/bounds

Return temporal and spatial bounds metadata for all of JPL’s RCMED parameters.

Example Call Format

/parameters/bounds/

Example Return JSON Format

{
  "38": {
    "start_date": "1901-01-15",
    "end_date": "2009-12-15",
    "lat_max": 89.75,
    "lat_min": -89.75,
    "lon_max": 179.75,
    "lon_min": -179.75
  },
  "39": {
    "start_date": "1901-01-15",
    "end_date": "2009-12-15",
    "lat_max": 89.75,
    "lat_min": -89.75,
    "lon_max": 179.75,
    "lon_min": -179.75
  }
}
GET /parameters/bounds/

Return temporal and spatial bounds metadata for all of JPL’s RCMED parameters.

The call and response formats are identical to GET /parameters/bounds, documented above.

Processing Endpoints

The processing module contains all the endpoints related to the running of evaluations.

GET /metrics/

Retrieve available metric names.

Example Return JSON Format

{
        'metrics': [
                'MetricName1',
                'MetricName2',
                ...
        ]
}
POST /run_evaluation/

Run an OCW Evaluation.

run_evaluation expects the Evaluation parameters to be POSTed in the following format.

{
        reference_dataset: {
                // Id that tells us how we need to load this dataset.
                'data_source_id': 1 == local, 2 == rcmed,

                // Dict of data_source specific identifying information.
                //
                // if data_source_id == 1 == local:
                // {
                //     'id': The path to the local file on the server for loading.
                //     'var_name': The variable data to pull from the file.
                //     'lat_name': The latitude variable name.
                //     'lon_name': The longitude variable name.
                //     'time_name': The time variable name
                //     'name': Optional dataset name
                // }
                //
                // if data_source_id == 2 == rcmed:
                // {
                //     'dataset_id': The dataset id to grab from RCMED.
                //     'parameter_id': The variable id value used by RCMED.
                //     'name': Optional dataset name
                // }
                'dataset_info': {..}
        },

        // The list of target datasets to use in the Evaluation. The data
        // format for the dataset objects should be the same as the
        // reference_dataset above.
        'target_datasets': [{...}, {...}, ...],

        // All the datasets are re-binned to the reference dataset
        // before being added to an experiment. This step (in degrees)
        // is used when re-binning both the reference and target datasets.
        'spatial_rebin_lat_step': The lat degree step. Integer > 0,

        // Same as above, but for lon
        'spatial_rebin_lon_step': The lon degree step. Integer > 0,

        // The temporal resolution to use when doing a temporal re-bin
        // This is a timedelta of days to use so daily == 1, monthly is
        // (1, 31], annual/yearly is (31, 366], and full is anything > 366.
        'temporal_resolution': Integer in range(1, 999),

        // A list of the metric class names to use in the evaluation. The
        // names must match the class name exactly.
        'metrics': ['Bias', 'TemporalStdDev', ...],

        // The bounding values used in the Evaluation. Note that lat values
        // should range from -90 to 90 and lon values from -180 to 180.
        'start_time': start time value in the format '%Y-%m-%d %H:%M:%S',
        'end_time': end time value in the format '%Y-%m-%d %H:%M:%S',
        'lat_min': The minimum latitude value,
        'lat_max': The maximum latitude value,
        'lon_min': The minimum longitude value,
        'lon_max': The maximum longitude value,

        // NOTE: At the moment, subregion support is fairly minimal. This
        // will be addressed in the future. Ideally, the user should be able
        // to load a file that they have locally. That would change the
        // format that this data is passed.
        'subregion_information': Path to a subregion file on the server.
}
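
As a rough illustration of the request shape, the sketch below builds a minimal payload with one local reference dataset and one RCMED target dataset and POSTs it with the requests package. The base URL, file path, ids, and bounds are placeholders; whether the backend reads the payload as a JSON body or as form fields may depend on the backend version, so treat this as a sketch rather than a definitive client.

import requests

BASE_URL = 'http://localhost:8082'  # assumed host/port

# The metric names must match the class names returned by GET /metrics/.
available_metrics = requests.get(BASE_URL + '/metrics/').json()['metrics']

payload = {
    'reference_dataset': {
        'data_source_id': 1,  # local file
        'dataset_info': {
            'id': '/usr/local/ocw/data/example.nc',  # placeholder server path
            'var_name': 'tas',
            'lat_name': 'lat',
            'lon_name': 'lon',
            'time_name': 'time',
            'name': 'Example reference',
        },
    },
    'target_datasets': [{
        'data_source_id': 2,  # RCMED
        'dataset_info': {
            'dataset_id': '17',    # placeholder RCMED dataset id
            'parameter_id': '80',  # placeholder RCMED parameter id
            'name': 'Example target',
        },
    }],
    'spatial_rebin_lat_step': 1,
    'spatial_rebin_lon_step': 1,
    'temporal_resolution': 31,           # monthly, per the ranges above
    'metrics': ['Bias'],                 # must appear in available_metrics
    'start_time': '1989-01-01 00:00:00',
    'end_time': '1993-12-31 00:00:00',
    'lat_min': -45.0,
    'lat_max': 45.0,
    'lon_min': -90.0,
    'lon_max': 90.0,
    'subregion_information': None,
}

response = requests.post(BASE_URL + '/run_evaluation/', json=payload)
print(response.status_code)
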
OPTIONS /run_evaluation/

Run an OCW Evaluation.

run_evaluation expects the Evaluation parameters to be POSTed in the same format documented for POST /run_evaluation/ above.