

Voice Translation Technologies for Criminal Justice Applications Market Survey


This document has been published in the Federal Register.



AGENCY: National Institute of Justice.


ACTION: Notice of request for information.


SUMMARY: The National Institute of Justice (NIJ) is soliciting information on speech-to-speech voice translation technologies marketed for use by the criminal justice community. For law enforcement and corrections personnel, first responders, and others who work with the public, overcoming language barriers when working with individuals with limited English proficiency is vital to doing their jobs effectively. Voice translation technology can provide a practical solution. The National Criminal Justice Technology Research, Test, and Evaluation Center (NIJ RT&E Center) is developing a “Market Survey of Voice Translation Technologies for Criminal Justice Applications” to address this issue. This market survey will be published by NIJ to assist agencies in their assessment of relevant information prior to making purchasing decisions.


DATES: Responses to this request will be accepted through 11:59 p.m. Eastern Daylight Time on November 21, 2016.


ADDRESSES: Responses to this request may be submitted electronically in the body of, or as an attachment to, an email with the recommended subject line “VTT Federal Register Response.” Questions and responses may also be sent by mail (please allow additional time for processing) to: National Criminal Justice Technology Research, Test and Evaluation Center, ATTN: VTT Federal Register Response, Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Mail Stop 17N444, Laurel, MD 20723-6099.

FOR FURTHER INFORMATION CONTACT:


For more information on this request, please contact Steven Taylor (NIJ RT&E Center) at (443) 778-9348 or by email. For more information on the NIJ RT&E Center, visit funding/awards/Pages/award-detail.aspx?award=2013-MU-CX-K111 and view the description, or contact Steven Schuetz by telephone at 202-514-7663 or by email. Please note that these are not toll-free telephone numbers.

SUPPLEMENTARY INFORMATION:


Information Sought: The NIJ RT&E Center seeks input to its “Market Survey of Voice Translation Technologies for Criminal Justice Applications.” Vendors who respond to this request for information are invited to provide general comments on the Survey for the NIJ RT&E Center to consider, including which categories of information are appropriate for comparison. They are also invited to submit promotional material (e.g., a slick sheet) and a print-quality photograph of the product being described. The NIJ RT&E Center intends to include, at a minimum, the following categories of information for each vendor and its product:

Vendor and Product Information

1. Vendor name

a. Vendor address

b. Vendor point of contact (e.g., name and contact number/email)

2. Number of years in business

a. Number of years marketing voice translation technologies

3. Product name and model number

a. General description of the components (e.g., microphone type, screen, speaker, carrying case, adapters/chargers, phone/app)

b. Number of channels (e.g., one for interviewer, one for interviewee)

c. Battery and type (e.g., commercial, rechargeable, lithium ion)

d. Operating system

e. Memory/processor requirements

4. Speech engine used for translation

5. Initial product cost

6. Cost for subsequent software upgrades

7. Warranty (in months)

Concept of Operation

1. Device type (e.g., stand-alone, app, or connection to a human translator)

2. Primary audience(s) that uses the device (e.g., law enforcement; corrections; courts; military; business; traveler)

3. Location where translation occurs (e.g., onboard or client/server configuration)

4. Input type (e.g., pre-programmed words and phrases or dynamic)

5. Output type (e.g., pre-programmed voice, dynamic voice, text)

6. Eligibility for use in court (if not already used for that application)

7. Languages

a. Languages the device or app can accept as input (number)

b. Target languages into which the device can translate (number)

Quantitative Measures (Physical Device)

1. Dimensions of device (length x width x height, in inches)

2. Weight of device (in ounces)

3. Battery

a. Power requirement (volts)

b. Run time from full charge to full discharge (in hours)

c. Charge time from full discharge to full charge (in hours)

d. Average life expectancy from first use to replacement for battery (in months)

4. Average life expectancy of the system from first use to replacement (in months)

5. Ruggedness (environmental conditions)

a. Rain tolerance or immersion (water depth, in feet)

b. Operating temperature range (maximum and minimum, in degrees F)

c. Operating humidity range (maximum and minimum, in % humidity)

d. Shock (drop height in inches)

e. Types of/results from other environmental testing

6. Delay between the end of source speech to beginning of target speech (time, in seconds)

7. Vocabulary size (number of words)

8. Volume

a. Loudness of output (range, maximum and minimum, in decibels)

b. Loudness of input required (range, maximum and minimum, in decibels)

c. Maximum background noise (in decibels)

9. Accuracy of translation (% word recognition rate and degree of uncertainty for each language pair)

10. Maximum number of users per device (number)

11. Size of the corpus (e.g., 100 words, 100,000 words) used to train the tool (number of words)

12. Input speed of speaking to the tool (range, maximum and minimum words per minute)

13. Output speed with which the device or app “speaks” (words per minute)

14. Limit to the length of the sentence/utterance to be translated (number of words)

15. Screen size (length x width x height, in inches)

Qualitative Measures

1. Source and target language pairs the device is capable of translating (e.g., English-Spanish, English-Chinese, etc.)

2. Measures taken to ensure that the bi-directional speech output into the target language contains correct words

3. Methods taken to ensure and measure that bi-directional speech output conveys the intended meaning into the target language (e.g., correct translation of speaker's intent and emotion)

4. Bi-directional ease of use (e.g., trained and untrained)

5. Means by which the technology has been evaluated (e.g., laboratory, operational)

6. Utilization of separate training and testing data sets during vendor evaluation of the product

7. Tool's capability to recognize proper names (e.g., people, places)

8. Ability to use device in hands-free manner

9. Ability to record and store translations (e.g., on the device, app, or server)

a. Length of conversation that can be recorded (in minutes)

b. Length of time stored on device, app, or server (in days)

c. Costs for storage or archiving (in dollars)

d. Ability to maintain chain-of-custody

10. Means of securing data in transit from device or app to server

Operations, Maintenance and Support

1. Language selection method (e.g., automatic, user input)

2. Activation method (e.g., voice activated, push to talk)

3. Method of indicating breaks between speakers

4. Conversation location recorded or geolocated

5. Conversation time/duration recorded (e.g., time-stamped)

6. Frequency of retraining of speech engine

7. Frequency of software updates

8. Training types provided to user (e.g., initial, recurring, yearly, etc.)

9. Support types provided to user (e.g., on-demand, 24/7, manuals, etc.)

Speech Engine Implementation

1. Describe the means by which translation is accomplished (e.g., natural language processing, text to speech conversion, grammar-based, statistics-based)

2. Describe method used to train the translation engine, if applicable

a. For one to one or one to many language (e.g., English to Spanish vs. English to Spanish and German and French)

b. For languages with different structures (e.g., English, Japanese, and Arabic)

c. For a domain or discipline (e.g., law enforcement, travel)

d. For dialects, accents, or different pronunciations

e. For cultural norms regarding relationship and status (e.g., sex, adult-child, age)

f. For colloquialisms, slang, jargon, codes, or terms of art

g. For poor grammar

h. For uncertainty (e.g., um, ah, starts and stops, and other natural sounds such as coughing, sneezing, throat clearing, lip smacking, lisping, slurring, stuttering, snorting)

i. For voice types (e.g., adult female, adult male, child female, child male)

3. Describe security mechanisms employed on the device or app (e.g., strong passwords, password expirations, restricted privileges)

App-Specific Measures

1. Devices on which the app can be deployed (e.g., iPhone 6, Samsung Galaxy, iPad 3, etc.)

a. Hardware sensors required

2. Platforms on which the app can be deployed (e.g., iOS, Android, Blackberry OS, Windows)

3. Ability to perform in an online/offline manner

4. Minimum and optimum network connectivity or performance needed

a. Operational impacts of a connection-deficient setting

5. User-friendliness

6. Rating in the online store where the app was acquired

7. Interaction technique (e.g., motion, voice activation)

8. Device orientation for optimum app utilization (e.g., vertical, horizontal)

9. Means by which the app conserves battery life

10. Means by which update notifications are delivered

11. Security designed into the app from inception

12. Means by which personal and organizational data are separated

13. Phone features required for the app to function properly

14. Memory required

15. Number of simultaneous users (number)

Publication of product information in the resulting market survey does not constitute endorsement of any product or vendor by the National Institute of Justice, Office of Justice Programs, Department of Justice, or the Federal Government.


Nancy Rodriguez,

Director, National Institute of Justice.


[FR Doc. 2016-25401 Filed 10-19-16; 8:45 am]