System Design
This section provides an overview of the model components.
The model is implemented using three frameworks:
- Assignments and aggregate demand model components implemented using Python and Emme
- The CT-RAMP family of activity-based models in Java
- Reporting framework in SQL
The main data interchange between these frameworks consists of skim and demand matrices stored in OMX files. The model is operated and controlled from Emme using the Master run tool.
The SANDAG travel demand model imports network details from ArcInfo Interchange Files (.E00) for the nodes and links, and from text (.csv) and data (.dbf) files for the transit routes, turn restrictions and other network attributes. A base Emme scenario is created from the merged traffic and transit data, with attributes for all time periods. The main zone system corresponds to the TAZs used for the traffic assignment, while the TAPs used for the transit assignment are maintained in a separate transit assignment database.
The traffic and transit networks include many attributes by time-of-day. The transit network includes parameters that vary by mode, route, or stop, such as fares, transfer times and perceptions, wait time perceptions, as well as route-to-route-specific transfer wait times.
The bike and walk travel times and logsums are calculated from their route choice models, implemented as Java procedures, or may be copied from pre-computed files to save runtime. Warm start demand for traffic (auto and truck) is imported and the initial traffic and transit assignments are run in Emme. The traffic demand is segmented into SOV non-transponder, SOV transponder, HOV2 and HOV3+ classes, each split by value-of-time category (low, medium and high), plus three truck classes (light, medium and heavy trucks). In total, there are 15 vehicle classes in each of the five time periods.
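The class segmentation above can be enumerated in a few lines of Python (the class and period labels here are illustrative; the actual Emme matrix identifiers may differ):

```python
# Enumerate the 15 traffic assignment vehicle classes and their
# combinations with the five time periods. Label strings are
# illustrative placeholders, not the actual Emme identifiers.
from itertools import product

auto_classes = ["SOV_NT", "SOV_TR", "HOV2", "HOV3"]  # SOV non-transponder/transponder, HOV2, HOV3+
vot_bins = ["L", "M", "H"]                           # low / medium / high value-of-time
truck_classes = ["TRK_L", "TRK_M", "TRK_H"]          # light / medium / heavy trucks
periods = ["EA", "AM", "MD", "PM", "EV"]

vehicle_classes = [f"{c}_{v}" for c, v in product(auto_classes, vot_bins)] + truck_classes
assignment_slices = [f"{cls}__{p}" for cls, p in product(vehicle_classes, periods)]

print(len(vehicle_classes))    # 15 vehicle classes (4 x 3 auto + 3 truck)
print(len(assignment_slices))  # 75 class-period combinations (15 x 5)
```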
The transit vehicles are input as background traffic flow in the traffic assignment, and the congested link travel times from the traffic assignment are used in the transit assignment for bus routes in mixed traffic. The transit assignments use two configurations of parameters: one for local bus-only demand and one for all modes (that is, including premium modes). The final assigned transit flows include 15 slices of demand, by access mode (walk, PNR, KNR) and five time periods (EA, AM, MD, PM and EV). Once the assignments finish, the traffic and transit skims are exported to OMX files.
Once the skims are complete, the CT-RAMP simulation models are started; these import the skims from the OMX files. For operational purposes, the models are separated into general travel by San Diego residents and a set of special market models, which includes: trips to and from the San Diego airport; Cross-Border Xpress (CBX) airport trips; cross-border trips by Mexican residents; trips by visitors staying within San Diego; and special event trips. Upon completion, the zone-to-zone demands (TAZ for traffic, TAP for transit) are exported to OMX for each of the disaggregate demand models.
The disaggregate commercial vehicle, aggregate truck, external-internal and external-external models are then run. Note that only the first of these runs every iteration; the other three run only in the initial iteration (a methodology carried over from the previous implementation to optimize runtime). These models operate directly on the Emme database, using the saved skims from the assignments and saving the demand matrices within the database. After the demand models are complete, the demands from the disaggregate models are imported from OMX and summed by mode and time-of-day, along with the external-internal and external-external auto demands, for the next iteration of assignments and skims.
The demand model components are run for three iterations, with the traffic and transit assignments run four times. After the model run is completed, the assignment (network) results, skims and demands are exported to OMX and CSV files for later import into a database for reporting services.
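The iteration scheme can be sketched as follows (a simplified illustration only; the actual orchestration is done by the Master run tool in Emme, and the function names here are placeholders):

```python
# Simplified sketch of the iteration scheme described above: an initial
# assignment/skim pass, then three demand-model iterations, each followed
# by another assignment, for four assignments in total. Function names
# are placeholders, not actual SANDAG ABM tool names.

def run_ctramp_models():             # resident + special market models
    pass

def run_commercial_vehicle_model():  # runs every iteration
    pass

def run_truck_ei_ee_models():        # aggregate truck, EI and EE models
    pass

def run_master(n_iterations=3):
    assignments = 1                  # warm start -> initial assignment and skims
    for iteration in range(1, n_iterations + 1):
        run_ctramp_models()
        run_commercial_vehicle_model()
        if iteration == 1:           # these three run in the first iteration only
            run_truck_ei_ee_models()
        assignments += 1             # re-assign and re-skim with updated demand
    return assignments

print(run_master())  # 4 assignments for 3 demand iterations
```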
There are two different zone systems used in the Emme models: one for traffic and one for transit. The traffic assignment demand and skims are based on 4,996 Traffic Analysis Zones (TAZ) and the transit assignment demand and skims are based on a maximum of 2,500 Transit Access Points (TAP). The number and location of TAPs can change in different networks. The zone systems and corresponding network and matrices are maintained in two distinct Emme databases.
The main Emme database (located under the Database directory) contains the combined traffic and transit networks, TAZ system, traffic demand and skim matrices as well as the traffic result scenarios. The second transit Emme database (located under the Database_transit directory) contains the TAP zone system with the transit demand, skim matrices and transit result scenarios. Additional information on the file system setup can be found in Setup and Configuration.
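The split between the two databases can be summarized as a simple lookup (directory names, zone systems and zone counts are taken from the text above):

```python
# Summary of the two Emme databases and their zone systems, as described
# in the surrounding text. This is documentation-as-data, not an API.
databases = {
    "Database": {                    # main traffic database
        "zones": "TAZ",
        "max_zones": 4996,
        "contents": ["combined traffic and transit networks",
                     "traffic demand and skim matrices",
                     "traffic result scenarios"],
    },
    "Database_transit": {            # transit database
        "zones": "TAP",
        "max_zones": 2500,           # maximum; TAP count varies by network
        "contents": ["transit demand", "transit skim matrices",
                     "transit result scenarios"],
    },
}
print(databases["Database"]["max_zones"])       # 4996
print(databases["Database_transit"]["zones"])   # TAP
```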
The first base scenario (Scenario 100) contains the imported data from the E00 (and associated) files, with all inputs for all time periods. These files are generated from the TCOVED network manager. The base scenario is copied to five additional scenarios, one to hold the results of each of the five time-period assignments. The base database may also contain an extra scenario (Scenario 1: Empty), which is part of the template Emme project and can be deleted after the model run.
This section describes how to access model outputs with SQL as well as the model's integration with greenhouse gas emission analysis.
All relevant ABM output is loaded into a Microsoft SQL Server Enterprise 2014 database. Reporting is currently handled in the database via programmability objects and ad-hoc queries. To access the output database, the user should have Microsoft SQL Server Management Studio 2014 installed. A data warehouse and reporting suite leveraging Microsoft SQL Server Analysis and Reporting Services is still in development.
ABM outputs are loaded into a SQL Server database. As improvements are made to the ABM, the database evolves too. The current database schema can be found here.
SANDAG staff developed a procedure to integrate the ABM with EMFAC2014/2017 for greenhouse gas emission analysis. First, the user runs a Python-SQL based procedure to generate EMFAC2014/2017 input files from ABM outputs. Once the inputs are generated, the user runs the EMFAC2014/2017 software to create greenhouse gas emission measures.
The Python-SQL procedure relies on two SQL functions in the ABM database, `emfac.fn_emfac_<2014/2017>_vmt` and `emfac.fn_emfac_<2014/2017>_vmt_speed`. The first function creates VMT by EMFAC2014/2017 vehicle and technology group. The second creates the percent of VMT by 5 mph speed bins between 0 and 70 mph. These functions rely entirely on tables preloaded in the database, including the default EMFAC2014/2017 inventory tables, the mapping table between EMFAC2014/2017 vehicle types and SANDAG model vehicle types, and model assignment and network output from ABM runs.
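The speed-bin calculation performed by the second function can be illustrated in plain Python (a sketch of the computation only, not the actual SQL function; the link data below is hypothetical):

```python
# Illustrative computation of percent of VMT by 5 mph speed bins from
# 0 to 70 mph, mirroring what emfac.fn_emfac_<2014/2017>_vmt_speed
# produces. This is a plain-Python sketch, not the SQL implementation.
import numpy as np

def vmt_speed_shares(speeds_mph, vmt):
    """Return the share of VMT in each 5 mph bin [0-5), [5-10), ... [65-70]."""
    bins = np.arange(0, 75, 5)  # edges 0, 5, ..., 70 -> 14 bins
    idx = np.clip(np.digitize(speeds_mph, bins) - 1, 0, len(bins) - 2)
    totals = np.bincount(idx, weights=vmt, minlength=len(bins) - 1)
    return totals / totals.sum()

# Hypothetical link-level data: congested speeds and VMT per link.
shares = vmt_speed_shares(np.array([12.0, 33.0, 58.0, 64.0]),
                          np.array([100.0, 300.0, 400.0, 200.0]))
print(shares.sum())  # shares across the 14 bins sum to 1
```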
To run the Python-SQL based procedure, these software/libraries should be installed on the user's computer:
- Install `pymssql`. For example, in `C:\Anaconda\Lib\site-packages`.
- Install the python excel (`xlwt`) package. For example, in `C:\Anaconda\Lib\site-packages\xlwt`.
- Install `pyodbc`. For example, in `C:\Anaconda\Lib\site-packages\sqlalchemy`.
To run the EMFAC2014/2017 input builder:
- Load ABM outputs into the database for a given model run.
- Open a DOS window, navigate to the `\python\emfac` folder, and execute `emfac_xlsx.py` with this usage: `Python emfac_xls.py <EMFAC version: 2014 | 2017> <Scenario ID> <Season: Annual | Summer | Winter> <SB 375: On | Off> <Output Path>`
- The EMFAC2014/2017 input files are written to the `\output` folder as `EMFAC<2014/2017>-SANDAG-[YEAR]-[SEASON]-[YEAR]-<sb375>.xlsx`.