SAVER-T-FGR-10
Approved for Public Release
Incident Management Software
for Emergency Response
Focus Group Report
January 2021
The Incident Management Software for Emergency Response Focus Group Report was prepared by
the National Urban Security Technology Laboratory for the SAVER Program of the U.S. Department of
Homeland Security, Science and Technology Directorate.
The views and opinions of authors expressed herein do not necessarily reflect those of the U.S.
government.
Reference herein to any specific commercial products, processes or services by trade name,
trademark, manufacturer or otherwise does not necessarily constitute or imply its endorsement,
recommendation or favoring by the U.S. government.
The information and statements contained herein shall not be used for the purposes of advertising,
nor to imply the endorsement or recommendation of the U.S. government.
With respect to documentation contained herein, neither the U.S. government nor any of its employees
make any warranty, express or implied, including but not limited to the warranties of merchantability
and fitness for a particular purpose. Further, neither the U.S. government nor any of its employees
assume any legal liability or responsibility for the accuracy, completeness or usefulness of any
information, apparatus, product or process disclosed; nor do they represent that its use would not
infringe privately owned rights.
The cover photo and images included herein were provided by the National Urban Security Technology
Laboratory, unless otherwise noted.
FOREWORD
The U.S. Department of Homeland Security (DHS) established the System Assessment and Validation
for Emergency Responders (SAVER) Program to assist emergency responders in making procurement
decisions. Located within the Science and Technology Directorate (S&T) of DHS, the SAVER Program
conducts objective assessments and validations on commercially available equipment and systems
and develops knowledge products that provide relevant equipment information to the emergency
responder community. The SAVER Program mission includes:
• Conducting impartial, practitioner-relevant, operationally oriented assessments and validations of emergency response equipment
• Providing information, in the form of knowledge products, that enables decision-makers and responders to better select, procure, use and maintain emergency response equipment.
SAVER Program knowledge products provide information on equipment that falls under the
categories listed in the DHS Authorized Equipment List (AEL), focusing primarily on two main
questions for the responder community: “What equipment is available?” and “How does it perform?”
These knowledge products are shared nationally with the responder community, providing a cost-
saving asset to DHS by ensuring federal, state and local responders are prepared to make
operational and procurement decisions.
The SAVER Program is managed by the National Urban Security Technology Laboratory (NUSTL).
NUSTL works with stakeholders to identify and prioritize project topics that address emergency
responder needs, develops SAVER knowledge products and coordinates with other organizations to
leverage appropriate subject matter expertise.
NUSTL provides expertise and analysis on a wide range of key subject areas, including chemical,
biological, radiological, nuclear and explosive weapons detection; emergency response and recovery;
and related equipment, instrumentation and technologies. Under its SAVER Program, NUSTL, in
conjunction with Pacific Northwest National Laboratory (PNNL), will conduct a comparative assessment of
Incident Management Software to provide emergency responders with reference information on
commercially available technologies. Incident Management Software for Emergency Response falls
under the AEL reference numbers 04AP-05-CDSS, titled Systems and Tools, ICS; 04AP-05-SVIS, titled
Software, Operational Space Visualization; and 04AP-03-GISS, titled System, Geospatial Information.
As part of this project, recommendations for the assessment were gathered from a focus group and
are documented in this report.
For more information on NUSTL’s SAVER Program and Incident Management Software for Emergency
Response or to view additional reports on other technologies, visit:
www.dhs.gov/science-and-technology/SAVER.
POINTS OF CONTACT
National Urban Security Technology Laboratory (NUSTL)
U.S. Department of Homeland Security
Science and Technology Directorate
201 Varick Street
New York, NY 10014
E-mail: NUSTL@hq.dhs.gov
Website: www.dhs.gov/science-and-technology/SAVER
Authors:
Cecilia Murtagh, Mechanical Engineer, NUSTL
Eliot Calhoun, Program Analyst, NUSTL
Brenda Velasco-Lopez, Test Engineer, NUSTL
Andre Coleman, Senior Research Scientist, PNNL
Richard Ozanich, Ph.D., Technical Advisor, PNNL
Jerry Tagestad, Geospatial Data Scientist, PNNL
EXECUTIVE SUMMARY
Incident management software (IMS) consists of a suite of mobile-ready tools that aggregate pre-
planned or no-notice critical incident information in a real-time collaborative environment such that
situational status, response priorities, and resource deployment are brought into a common
operating picture. IMS brings together diverse types of data (e.g., map views, property information,
sensor data, resource tracking, computer-aided dispatch) in a multilayered format, providing first
responders and emergency managers access to the information they need to manage small- and
large-scale no-notice incidents (e.g., house fires, earthquakes) and pre-planned events (e.g., parades,
protests). Emergency management, fire service, law enforcement, and other emergency response
agencies that have a role in the management of incidents and events use IMS to conduct pre-
planning, multiagency coordination, resource allocation, asset tracking, and information collection
and analysis to aid decision-making and after-action audits and reports.
The National Urban Security Technology Laboratory’s (NUSTL) System Assessment and Validation
for Emergency Responders (SAVER) Program, in cooperation with the Pacific Northwest National
Laboratory (PNNL), will conduct a comparative assessment of IMS for emergency response to provide
emergency responders with information to assist them in making operational and procurement
decisions.
As part of the assessment planning process, NUSTL convened a virtual focus group from September
16 to 23, 2020. The virtual focus group was conducted in three parts over the course of one week: an
introductory video conference, individual participant follow-up interviews, and a final group
discussion by video conference. Seven emergency responders from various jurisdictions who have
experience using IMS for emergency response participated. The focus group generated
recommendations on evaluation criteria, developed product selection specifications, and discussed
possible scenarios for assessing IMS.
The focus group identified 31 evaluation criteria. “Capability” and “usability” were the most
important of the five overarching SAVER categories. Eight of the criteria were identified by the
focus group as being of utmost importance for IMS used in emergency response:
• Ability to handle standard geographic information system (GIS) files
• Personnel tasking and accountability tracking
• Information sharing across personnel and agencies
• Interoperability with other software and sensors
• Intuitive user interface
• Reliability of software
• Technical support availability
• Scalability of users and data traffic
The focus group participants also recommended scenarios and products to be considered for
inclusion in the assessment. These recommendations will be used to create the Incident Management
Software for Emergency Response Assessment Plan.
TABLE OF CONTENTS
1.0 Introduction
1.1 Participant Information
2.0 Focus Group Methodology
3.0 Evaluation Criteria Recommendations
3.1 Capability
3.2 Usability
3.3 Deployability
3.4 Maintainability
3.5 Affordability
4.0 Evaluation Criteria Assessment Recommendations
5.0 Assessment Scenario Recommendations
5.1 Pre-planned Event - Protest
5.2 Fast-moving No-notice Incident - Scenario TBD
5.3 Small No-notice Incident - House Fire
6.0 Product Selection Recommendations
7.0 Summary
8.0 Future Actions
9.0 Acknowledgements
LIST OF FIGURES
Figure 2-1 Focus Group Process
LIST OF TABLES
Table 1-1 Focus Group Participant Demographics
Table 2-1 Evaluation Criteria Weighting Scale
Table 3-1 Evaluation Criteria
Table 4-1 Evaluation Criteria Assessment Recommendations
1.0 INTRODUCTION
Emergency management, fire service, law enforcement, and other emergency response agencies use
incident management software (IMS) to conduct multiagency coordination, make resource allocation
decisions, and collect and analyze information. By aggregating real-time and historic incident
information in an intuitive, map-based environment, IMS assists first responders with the planning,
management, and reporting of small- and large-scale events and incidents.
Mobile, map-based commercial IMS products incorporate real-time geospatial views of an operating
area and have capabilities for pre-event planning as well as incident response and management.
Operating on handheld devices, tablets, or mobile PCs, these software solutions enable first
responders to execute various tasks including location sharing for fleet and asset tracking, assigning
roles and creating checklists, and after-action reporting.
1.1 Participant Information
From September 16 to 23, 2020, the National Urban Security Technology Laboratory’s (NUSTL) System
Assessment and Validation for Emergency Responders (SAVER) Program conducted a virtual focus
group on IMS in order to gather recommendations on evaluation criteria, product selection
specifications, products and possible scenarios for the assessment of incident management
software for emergency response. Conducted in three parts over the course of one week, the
virtual focus group consisted of an introductory video conference, individual participant
follow-up interviews, and a final group discussion video conference.
Seven emergency responders from various jurisdictions and with at least two years of experience
using IMS were invited to participate in the focus group. Demographic information is listed below
in Table 1-1.
Table 1-1 Focus Group Participant Demographics

Participant Discipline | Years of Experience (Discipline) | Years of Experience (IMS) | State
Emergency Communications | 17 | 10 | New Jersey
Emergency Management | 4 | 3 | Colorado
Emergency Management/Fire Service | 1/20 | 6 | Virginia
Fire Service | 20 | 20 | Minnesota
Fire Service/Emergency Medical Services | 31 | 10 | Maryland
Fire Service/Emergency Medical Services | 40 | 25 | Maryland
Information Technology (Fire Service) | 25 | 15 | Washington
2.0 FOCUS GROUP METHODOLOGY
Held via video conference, the first session of the virtual focus group opened with an overview of
NUSTL, the SAVER Program, IMS, and the goals of the focus group. In this first session, a facilitator
asked participants about their experiences using IMS. Focus group participants discussed the types
of scenarios in which they use IMS, including pre-planned large, slow-moving events (e.g., a parade
or marathon) as well as no-notice, small and faster-moving incidents (e.g., house fire, commercial
building fire, or hazardous material spill). Participants pointed out that the scale of the event also
impacted the types of personnel who respond and their respective IMS requirements.
Participants discussed the need for a single IMS that would either encompass all required functions
or allow for seamless integration with other software and tools their organizations also use during
emergencies. Participants noted that during a single incident their organizations currently use
multiple solutions (including pencil and paper) for functions such as tracking resource requests,
sharing information between the emergency operations center (EOC) and responders, personnel
tracking, and asset tracking.
Finally, participants highlighted the need for scalability. Emergencies, especially those involving an
EOC response, may require ad-hoc inclusion of personnel with different backgrounds from multiple
units and agencies. Participants voiced the importance of having an IMS that can quickly
accommodate the addition of new users with relatively little set-up or training. One focus group
participant recalled an instance of switching to free software that is widely used by the public in non-
emergency settings (e.g., Discord and Google Forms) in order to take advantage of its easy
deployability and its familiarity to new users.
During the introductory video conference, the project lead outlined the four sets of recommendations
that would be requested from the focus group participants in order to plan the assessment:
1. Evaluation criteria recommendations: Product features that are important to consider when
making operational or procurement decisions
2. Assessment scenario recommendations: Operational settings and activities that reflect
responders’ experiences and would provide evaluators with appropriate conditions to assess
the products
3. Product specification recommendations: Features, attributes, or characteristics a product
should possess to be considered for assessment
4. Product recommendations: Specific brands or models that are relevant to the emergency
responder community and should be candidates for inclusion in the assessment
Figure 2-1 highlights the process followed to gather these recommendations.
Figure 2-1 Focus Group Process: identify technology uses and evaluation criteria → define and group evaluation criteria by SAVER category → assign weights to the evaluation criteria → prioritize and assign percentages to the SAVER categories → recommend assessment scenarios → recommend product selection criteria and products to assess.
In this virtual focus group, technology usage was discussed during the introductory video conference.
Over the course of the following three days, project team members contacted focus group
participants for individual interviews to gather, define, and group evaluation criteria by SAVER
category. The SAVER Program uses five criteria categories:
Affordability criteria relate to the total cost of ownership over the life of the product. This
includes purchase price, training costs, warranty costs, recurring costs, and maintenance costs.
Capability criteria relate to product features or functions needed to perform responder-relevant
tasks.
Deployability criteria relate to preparing to use the product, including transport, set up, training,
and operational or deployment restrictions.
Maintainability criteria relate to the routine maintenance, storage, calibration, and minor
repairs performed by responders, as well as included warranty terms, duration, and coverage.
Usability criteria relate to ergonomics and the relative ease of use when performing
responder-relevant tasks.
The focus group participants also recommended whether the criteria should be assessed
operationally through hands-on experience or by reviewing manufacturer-provided specifications.
All individual feedback on evaluation criteria and categories was then consolidated and presented to
the focus group during the second group videoconference. Focus group members reviewed,
modified, and agreed upon the list of evaluation criteria and each criterion’s associated SAVER
category. Next, the focus group participants collectively assigned each criterion a weight on a
1-to-5 scale reflecting its level of importance, where “1” is of minor importance and “5” is of
utmost importance. Table 2-1 highlights the
evaluation criteria weighting scale.
Table 2-1 Evaluation Criteria Weighting Scale

Weight | Definition
5 | This evaluation criterion is of utmost importance: “I would never consider purchasing a product that does not meet my expectations of this criterion or does not have this feature.”
4 | This evaluation criterion is very important: “I would be hesitant to purchase a product that does not meet my expectations of this criterion or does not have this feature.”
3 | This evaluation criterion is important: “Meeting my expectations of this criterion or having this feature would strongly influence my decision to purchase this product.”
2 | This evaluation criterion is somewhat important: “Meeting my expectations of this criterion or having this feature would slightly influence my decision to purchase this product.”
1 | This evaluation criterion is of minor importance: “Other things being equal, meeting my expectations of this criterion or having this feature may influence my decision to purchase this product.”
Next, the focus group ranked the SAVER categories in order of importance for the IMS assessment.
Based on those rankings, a percentage was assigned to each category to represent its level of
importance.
After ranking the SAVER categories, focus group participants identified product selection criteria and
products that should be considered for the assessment and suggested operational scenarios for the
assessment.
3.0 EVALUATION CRITERIA RECOMMENDATIONS
The focus group identified 31 evaluation criteria and concluded that “capability” was the most
important SAVER category relevant to IMS used for emergency response, followed by “usability,”
“deployability,” “maintainability,” and “affordability.” Table 3-1 presents the category
weights, the evaluation criteria sorted into the SAVER categories, and evaluation criteria weights.
Table 3-1 Evaluation Criteria

Capability (overall weight: 40%)
• GIS Files Handling (weight: 5)
• Personnel Tasking and Accountability (weight: 5)
• Information Sharing (weight: 5)
• Interoperability (weight: 4)
• Asset Tracking (weight: 4)
• Location Tracking (weight: 4)
• Record Keeping (weight: 4)
• Incident Report Integration (weight: 3)
• Messaging Feature (weight: 3)
• Sensitive Information Handling (weight: 3)
• Pre-planning Tools (weight: 3)

Usability (overall weight: 25%)
• Intuitiveness of User Interface (weight: 5)
• Reliability (weight: 5)
• Customizability (weight: 4)
• Training Resource Accessibility (weight: 4)
• Interface Readability (weight: 3)

Deployability (overall weight: 15%)
• Scalability (weight: 5)
• Offline Usability (weight: 4)
• User-level Access Control (weight: 4)
• Deployment Options (weight: 4)
• Client Cross-Platform Compatibility (weight: 4)
• Mobile Platforms Availability (weight: 3)

Maintainability (overall weight: 15%)
• Technical Support Availability (weight: 5)
• Forensics Logging (weight: 3)
• Autosave Feature (weight: 3)
• Data Synchronization (weight: 3)
• Map Updating (weight: 3)
• Software Updates (weight: 3)

Affordability (overall weight: 5%)
• Cost to Scale (weight: 4)
• Ongoing Costs (weight: 4)
• Initial Cost (weight: 3)
3.1 Capability
The eleven capability criteria identified and defined by the focus group, listed in order of
importance, are:
GIS Files Handling refers to a software’s ability to read standard geographic information system
(GIS) file formats such as KML, GeoJSON, GeoPackage, ESRI shapefile, and geodatabase files.
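For illustration only, the sketch below shows what reading several standard GIS formats into a single map-ready structure might look like. It uses the open-source geopandas library as a stand-in for a product’s import routine; the file names are hypothetical and this is not drawn from any assessed product.

    # Illustrative only: geopandas stands in for an IMS product's GIS import
    # routine. File names are hypothetical.
    import geopandas as gpd

    def load_layer(path: str) -> gpd.GeoDataFrame:
        """Read a GIS file (shapefile, GeoJSON, GeoPackage, ...) into one structure."""
        gdf = gpd.read_file(path)       # format is inferred from the file itself
        return gdf.to_crs(epsg=4326)    # normalize to WGS84 so layers overlay on one map

    for sample in ["hydrants.shp", "flood_zones.geojson", "preplan.gpkg"]:
        try:
            layer = load_layer(sample)
            print(f"{sample}: {len(layer)} features loaded")
        except Exception as err:        # e.g., KML support depends on installed drivers
            print(f"{sample}: could not load ({err})")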
Personnel Tasking and Accountability refers to the ability to manage staff tasks and assignments
and provide alerts for staff rotations or reassignments.
Information Sharing is the ability to share incident-specific information with field personnel or
other agencies and, in certain situations, to collaborate across multiple counties (e.g., mutual
aid).
Interoperability refers to the software’s ability to integrate with a variety of other software products
or sensors with no manual data import required. Examples of software with which the IMS might
integrate include computer-aided dispatch (CAD) systems, body-worn sensors, and data services
(e.g., weather, traffic, location-based services, Global Positioning System (GPS) tracking).
Asset Tracking is the ability to view the status (i.e., “available” or “in use”) of physical assets.
Location Tracking is the ability to view and update a map-based location for dispatched and
available resources. This includes automatic vehicle location tracking that broadcasts GPS
locations of vehicles in real-time.
Record Keeping refers to the software’s ability to create an audit trail (i.e., to capture and recall
time-stamped records of the actions taken for real-time information and post-event assessment).
The software should also have the ability to generate reports using custom or integrated Incident
Command System forms.
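As a hedged illustration of the audit-trail concept (not any assessed product’s implementation), record keeping can be reduced to appending time-stamped, never-edited entries to a log; the function name and file path below are hypothetical.

    # Minimal sketch of an append-only, time-stamped audit trail.
    # Names and file path are hypothetical, not from any assessed product.
    import json
    from datetime import datetime, timezone

    def record_action(user: str, action: str, detail: dict,
                      path: str = "incident_audit.jsonl") -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "detail": detail,
        }
        with open(path, "a") as log:   # append-only: existing entries are never modified
            log.write(json.dumps(entry) + "\n")

    record_action("dispatcher_3", "unit_status", {"unit": "E21", "status": "on scene"})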
Incident Report Integration refers to the software’s ability to integrate and view all local incident
reports in one dashboard to avoid duplicate incident notifications.
Messaging Feature refers to the ability for real-time message exchange with other personnel who
are in the field or an EOC.
Sensitive Information Handling refers to the software being appropriately secure and having
protocols to handle information that is categorized as Classified, For Official Use Only, Law
Enforcement Sensitive, Protected Critical Infrastructure Information, or other similar designations.
Pre-planning Tools refers to the capability to preload site-specific features into the software. Site
features may include building maps, environmental features like ponds and streams, and
locations of hydrants, building standpipes, and building emergency exits.
3.2 Usability
The five usability criteria identified and defined by the focus group, listed in order of
importance, are:
Intuitiveness of User Interface refers to the relative ease or difficulty of using the software
interface; in particular, whether use of its standard features is obvious and requires minimal
training, searching, or steps to execute a function.
Reliability refers to the software’s stability and functionality in the operating environments
required by emergency responder missions.
Customizability is the modifiability of software features to a particular user’s needs and the ease
of making modifications within the software. Customizability includes having filterable items,
editable user roles, modifiable rendering order of map data, adjustability of icons and other visual
elements, and changeable map type and size.
Training Resource Accessibility is the availability of various formats for training such as quick start
guides, video tutorials, and technical manuals and their ease of use.
Interface Readability refers to the clarity and legibility of the user interface including font size,
screen colors, and notification visibility.
3.3 Deployability
The six deployability criteria identified and defined by the focus group, listed in order of
importance, are:
Scalability is the ability to quickly add users on an ad-hoc basis, as well as to handle sudden or
planned increases in the number of concurrent users or the amount of data being exchanged.
Offline Usability refers to the software’s capability to operate when disconnected from a network.
It includes the ability to access vital reference data that may reside on a local device when offline
along with the ability to automatically update and synchronize any data logged in offline mode
once the data connection is restored.
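A minimal sketch of this store-and-forward behavior follows, assuming a placeholder upload() call in place of whatever network interface a given product actually exposes.

    # Store-and-forward sketch: entries queue locally while offline and are
    # replayed in order on reconnect. upload() is a placeholder, not a real API.
    from collections import deque

    pending: deque = deque()

    def upload(entry: dict) -> None:
        print("sent:", entry)          # stand-in for the product's upload call

    def log_entry(entry: dict, online: bool) -> None:
        if online:
            upload(entry)
        else:
            pending.append(entry)      # hold locally until connectivity returns

    def on_reconnect() -> None:
        while pending:                 # flush queued entries in original order
            upload(pending.popleft())

    log_entry({"note": "roof vented"}, online=False)
    on_reconnect()                     # connectivity restored: queued entry is sent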
User-level Access Control refers to the capability for and ease of assigning different levels of
access for different users depending on their role.
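For illustration, this kind of role-based control often reduces to a role-to-permission lookup; the role and permission names below are hypothetical, not drawn from any assessed product.

    # Hypothetical role-based access check; real products define their own
    # roles and permission sets.
    ROLE_PERMISSIONS = {
        "incident_commander": {"view", "edit", "assign", "share"},
        "field_responder":    {"view", "edit"},
        "mutual_aid_guest":   {"view"},
    }

    def can(role: str, permission: str) -> bool:
        return permission in ROLE_PERMISSIONS.get(role, set())

    assert can("incident_commander", "assign")
    assert not can("mutual_aid_guest", "edit")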
Deployment Options refers to the software delivery channels available, such as on-premises,
Software as a Service or a hybrid solution.
Client Cross-Platform Compatibility refers to the software’s ability to work across different
computing platforms that may be used by different entities involved in a response. Interoperability
with different systems from different agencies improves coordinated communication and
response.
Mobile Platforms Availability is the ability to operate on mobile hardware (e.g., laptops, tablets).
3.4 Maintainability
The six maintainability criteria identified and defined by the focus group, listed in order of
importance, are:
Technical Support Availability refers to the availability of expedited, 24/7 technical support
offered by the software vendor at times when the software is in use (e.g., while responding to an
emergency). One participant noted that the presence of a software user community is helpful and
that agencies must select service level agreements that match their mission needs.
Forensics Logging refers to the software’s audit trail and error logging capabilities that can be
analyzed in case of software or hardware failure.
Autosave Feature refers to the software’s ability to save data automatically, so that data remains
accurate and recoverable in the event of a software or device failure.
Data Synchronization refers to how data is synchronized from a source (i.e., whether
synchronization is automated or requires a restart or action by the user).
Map Updating is the ease with which new map views and data can be added.
Software Updates refers to the frequency of and method by which updates are made to the
software; for example, manual or automatic, mass or individual. One participant noted that
updates should be transparent to the user.
3.5 Affordability
The three affordability criteria identified and defined by the focus group, listed in order of
importance, are:
Cost to Scale refers to the cost of adding users, including costs of additional required equipment.
Ongoing Costs refers to costs associated with software maintenance fees, data storage, training,
and professional services required to maintain use of the software.
Initial Cost refers to the initial price to buy or license the software and factors in the availability of
General Services Administration pricing or state contracts and grants.
4.0 EVALUATION CRITERIA ASSESSMENT RECOMMENDATIONS
The focus group and SAVER team made recommendations on whether each evaluation criterion
should be assessed operationally or according to manufacturer-provided specifications. In an
operational assessment, evaluators assess criteria based on the hands-on experience using the
product. In a specification assessment, evaluators assess criteria based on product information
provided by the manufacturer. In some cases, criteria may be assessed both operationally and
according to manufacturer-provided specifications.
Also, some evaluation criteria were categorized as “information only.” These criteria will not be
scored by evaluators during the assessment, but will be included as relevant specifications (e.g.,
price, warranty information) in the assessment report. Table 4-1 presents the focus group’s
assessment recommendations for the evaluation criteria.
Table 4-1 Evaluation Criteria Assessment Recommendations

Category | Criteria (each recommended for operational assessment, specification assessment, and/or inclusion as information only)
Capability | GIS Files Handling; Personnel Tasking and Accountability; Information Sharing; Interoperability; Asset Tracking; Location Tracking; Record Keeping; Incident Report Integration; Messaging Feature; Sensitive Information Handling; Pre-planning Tools
Usability | Intuitiveness of User Interface; Reliability; Customizability; Training Resource Accessibility; Interface Readability
Deployability | Scalability; Offline Usability; User-level Access Control; Deployment Options; Client Cross-Platform Compatibility; Mobile Platforms Availability
Maintainability | Technical Support Availability; Forensics Logging; Autosave Feature; Data Synchronization; Map Updating; Software Updates
Affordability | Cost to Scale; Ongoing Costs; Initial Cost
5.0 ASSESSMENT SCENARIO RECOMMENDATIONS
The focus group participants identified use-cases for incident management software as pre-planned
events such as protests, fast-moving no-notice incidents (e.g., flash floods, earthquakes, and
wildfires), and small, no-notice incidents like house fires. Based on these use-cases, the focus group
participants recommended scenarios in which products could be assessed using the evaluation
criteria recommended for operational assessment. Participants suggested using exercises from the
Federal Emergency Management Agency Emergency Management Institute All-Hazards Position
Specific Training Program curriculum as models for each scenario. Focus group participants
recommended that evaluators assess two products per day, in teams of two, and that SAVER
conduct after-action activities following each operational scenario.
5.1 Pre-planned Event - Protest
IMS will be used in a scenario simulating the planning and response to a protest. The software will
be used to generate action and resource plans, identify and deploy resources based on
assessment of threats and vulnerabilities, track and manage equipment and personnel, and
share information with other entities.
Evaluation criteria assessed during this scenario will include information sharing, interoperability,
asset tracking, location tracking, record keeping, messaging features, incident report integration,
pre-planning tools, reliability, training resources, deployment options, and cross-platform
compatibility. After-action capabilities such as report generation will also be evaluated.
5.2 Fast-moving No-notice Incident - Scenario TBD
IMS will be used in a scenario simulating a response to an incident that evolves quickly, is
resource-intensive, and takes place over multiple operational periods. The software will be used to
quickly deploy and track numerous resources, track assets over multiple operational periods,
incorporate tactical dispatch, communicate with field personnel both on and offline, and establish
mutual aid agreements with other agencies and jurisdictions.
Evaluation criteria assessed during this scenario will include personnel tasking and accountability,
information sharing, asset tracking, location tracking, record keeping, messaging feature, incident
report integration, pre-planning tools, reliability, scalability, offline usability, deployment options,
and cross-platform compatibility.
5.3 Small No-notice Incident - House Fire
Incident management software will be used in a simulated response to a house fire. The software
will be used to quickly assess the types of resources needed, deploy and track necessary
resources, communicate with field personnel, and share and handle sensitive information with
various agencies and utility companies.
Evaluation criteria assessed during this scenario will include personnel tasking and accountability,
information sharing, asset tracking, location tracking, record keeping, incident report integration,
sensitive information handling, reliability, and deployment options.
6.0 PRODUCT SELECTION RECOMMENDATIONS
During product selection discussions, focus group participants stated they would be interested in
software that covered multiple aspects of incident response and management, rather than products
meant for narrow applications. Participants requested products that can be used on laptops, tablets,
and cellphones, as well as across operating platforms.
The focus group participants recommended selecting from the following manufacturers and their
products for inclusion in the assessment:
• ESI Acquisition Inc: WebEOC
• Grey Wall Software: Veoci Emergency Management
• CORVENA: COR
• Dynamis Inc: COBRA
• Intterra Group: Intterra
• ESRI: ArcGIS Suite (Survey123, Workforce, and Collector)
• Adashi Systems: C&C Incident Command Software
• Noggin IT Inc: Noggin 2.0 Integrated Safety and Security Platform
• Hangar 14 Solutions: StreetWise CADLink
• Incident Response Technologies: Rhodium
• Tablet Command Inc: Tablet Command
Vendors responding to a request for information posted on SAM.gov in July 2020 will also have
their products considered for assessment.
7.0 SUMMARY
The focus group, consisting of seven emergency responders each with at least two years of experience
using IMS, identified 31 evaluation criteria for incident management software. “Capability” and “usability” were
deemed the most important SAVER categories. These eight focus group-generated criteria were
identified as being of utmost importance to first responders:
• Ability to handle standard GIS files
• Tracking personnel tasking and accountability
• Information sharing capability across personnel and other agencies
• Interoperability with other software and sensors
• Intuitive interface for users
• Software reliability
• Technical support availability
• Scalability of users and data traffic
The focus group participants recommended several scenarios and products to be considered for
inclusion in the assessment. These recommendations will be used to plan the IMS for emergency
response assessment.
8.0 FUTURE ACTIONS
The focus group’s recommendations will be used to guide the development of the Incident
Management Software for Emergency Response Assessment Plan, as well as the selection of
products to evaluate in the assessment.
Once the assessment is complete, the results will be published to the SAVER document library,
www.dhs.gov/science-and-technology/saver-documents-library.
9.0 ACKNOWLEDGEMENTS
NUSTL thanks the IMS focus group participants for their valuable time and expertise. Their insights
and recommendations will guide the planning and execution of the assessment as well as future
SAVER projects. Appreciation is also extended to the participants’ agencies for allowing them to
participate in the SAVER Program.