Service Introduction

1. Overview of Services

Translation by Speech uses new-generation neural machine translation (NMT) technology, integrates high-accuracy speech recognition and translation capabilities, and provides an online integration solution.

2. Apply for Services

The Translation API uses a fully self-service application flow. Sign up on LiveData's official website (https://www.ilivedata.com/), then create an application in the console. A pid (project ID) and a service key will be issued to you.

You can also activate other services on the Management Console's Overview page.

3. Access Method

Service endpoint

Production Server

https://speech.ilivedata.com/api/v1/speech/translate

Request headers

| Header | Value | Description |
| --- | --- | --- |
| Content-Type | application/json | |
| Accept | application/json | |
| X-AppId | Application ID or Project ID | |
| X-TimeStamp | Request time in UTC format | For example: 2010-01-31T23:59:59Z (for more information, go to http://www.w3.org/TR/xmlschema-2/#dateTime). |
| Authorization | Signature token | See the Authentication section below. |

Request body

| Param | SubParam | Required/Optional | Description |
| --- | --- | --- | --- |
| speechLanguageCode | | Required | Source speech language (see supported languages). |
| textLanguageCode | | Required | Target translation language (see supported languages). |
| audio | | Required | Audio data in Base64 format. |
| config | codec | Optional | AMR_WB or OPUS. If not specified, AMR_WB is used. |
| config | sampleRateHertz | Optional | Only 16000 is supported for now. |
| userId | | Optional | Unique user ID; must not be longer than 32 characters. |
| alternativeLangCodes | | Optional | Array of candidate languages, up to 4 (see supported languages). |
| textToSpeech | | Optional | Speech-to-speech translation, true/false. If not specified, false is used. |
| textToSpeechConfig | outputFormat | Optional | TTS output format: amr-wb, opus, pcm, or mp3. If not specified, pcm is used. |
| textToSpeechConfig | voiceGender | Optional | TTS voice gender: 0 = female, 1 = male. If not specified, female is used. |

Sample Request

{
  "speechLanguageCode": "zh-CN",
  "textLanguageCode": "en",
  "config": {
                  "codec": "OPUS",
                  "sampleRateHertz": 16000
  },
  "audio":"T2dnUwACAAAAAAAAAAAd8pVTAAAAAGsIvpMBE..."
}
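
The audio field carries the encoded audio bytes as a Base64 string. As an illustration (not part of an official SDK), here is a minimal Python sketch of building a body like the one above, assuming a hypothetical input.opus file that is already OPUS-encoded at 16 kHz:

import base64
import json

# Hypothetical input file; the audio must already be OPUS- or AMR_WB-encoded.
with open("input.opus", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("ascii")

body = json.dumps({
    "speechLanguageCode": "zh-CN",
    "textLanguageCode": "en",
    "config": {"codec": "OPUS", "sampleRateHertz": 16000},
    "audio": audio_b64,
})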

Authentication

Requests to the Speech Translation API must be signed - that is, they must include information the service can use to authenticate the requester. Requests are signed using the appId and secretKey issued to your application. To sign a request, you use values from the request together with your secret key to create a signed hash - this is the signature. You then add the signature to the request using the HTTP Authorization header.

How to Generate a Signature for a Request to the Speech Translation API

1. Create the canonicalized query string that you need later in this procedure:

   a. Compute the SHA256 digest of the request body (normally a JSON string).

   b. Convert the digest bytes to a hex string.

2. Create the string to sign according to the following pseudo-grammar (the “\n” represents an ASCII newline character).

StringToSign = HTTPMethod + "\n" + 
               HostHeaderInLowercase + "\n" + 
               HTTPRequestURI + "\n" + 
               CanonicalizedQueryString <from the preceding step> + "\n" +
               "X-AppId:" + SAME_APPID_IN_HEADER + "\n" + 
               "X-TimeStamp:" + SAME_TIMESTAMP_IN_HEADER

The HTTPRequestURI component is the HTTP absolute path component of the URI up to, but not including, the query string. If the HTTPRequestURI is empty, use a forward slash ( / ).

3. Calculate an RFC 2104-compliant HMAC with the string you just created, your secret key as the key, and SHA256 as the hash algorithm.

For more information, see http://www.ietf.org/rfc/rfc2104.txt.

4. Convert the resulting value to Base64.

5. Use the resulting value as the value of the Authorization HTTP header.

Important

The final signature you send in the request must be URL encoded as specified in RFC 3986 (for more information, see http://www.ietf.org/rfc/rfc3986.txt). If your toolkit URL encodes your final request, then it handles the required URL encoding of the signature. If your toolkit doesn’t URL encode the final request, then make sure to URL encode the signature before you include it in the request. Most importantly, make sure the signature is URL encoded only once. A common mistake is to URL encode it manually during signature formation, and then again when the toolkit URL encodes the entire request.

Some toolkits implement RFC 1738, which has different rules than RFC 3986 (for more information, go to http://www.ietf.org/rfc/rfc1738.txt).
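
Putting steps 1 through 5 together, the following is a minimal Python sketch of the signing procedure using only the standard library. The pid and secret key values are placeholders; substitute the credentials issued on the console. POST is assumed as the HTTP method for this endpoint:

import base64
import hashlib
import hmac
import json
import time
import urllib.parse

APP_ID = "your-pid"             # placeholder: the pid issued on the console
SECRET_KEY = "your-secret-key"  # placeholder: the service key issued on the console
HOST = "speech.ilivedata.com"
PATH = "/api/v1/speech/translate"

def sign(body: str, timestamp: str) -> str:
    # Step 1: canonicalized query string = hex-encoded SHA256 digest of the request body.
    canonical = hashlib.sha256(body.encode("utf-8")).hexdigest()
    # Step 2: assemble the string to sign, fields separated by "\n".
    string_to_sign = "\n".join([
        "POST",
        HOST,                       # host header in lowercase
        PATH,                       # absolute path, no query string
        canonical,
        "X-AppId:" + APP_ID,
        "X-TimeStamp:" + timestamp,
    ])
    # Step 3: RFC 2104-compliant HMAC with the secret key and SHA256.
    digest = hmac.new(SECRET_KEY.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    # Steps 4-5, plus the Important note: Base64, then URL encode exactly once (RFC 3986).
    return urllib.parse.quote(base64.b64encode(digest).decode("ascii"), safe="")

timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
body = json.dumps({"speechLanguageCode": "zh-CN",
                   "textLanguageCode": "en",
                   "audio": "..."})
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-AppId": APP_ID,
    "X-TimeStamp": timestamp,
    "Authorization": sign(body, timestamp),
}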

Response body

| Param | SubParam | Description |
| --- | --- | --- |
| errorCode | | 0 if successful |
| errorMessage | | Error message if errorCode is not 0 |
| translation | source | Source language code |
| translation | target | Target language code |
| translation | sourceText | Source text recognized from the input audio |
| translation | targetText | Target text translated from the source text |
| translation | targetAudio | TTS audio URL |

Sample Response

{
    "errorCode": 0,
    "translation": {
        "source": "zh-CN",
        "target": "en",
        "sourceText": "你好!",
        "targetText": "Hello!",
        "targetAudio": "https://xxx"

    }
}
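
For completeness, a short Python sketch of consuming such a response; response_text stands in for the raw JSON body returned by the server:

import json

# The sample response above, as returned by the server.
response_text = '{"errorCode": 0, "translation": {"source": "zh-CN", "target": "en", "sourceText": "你好!", "targetText": "Hello!", "targetAudio": "https://xxx"}}'

resp = json.loads(response_text)
if resp["errorCode"] == 0:
    t = resp["translation"]
    print(t["sourceText"], "->", t["targetText"])
    # targetAudio is a TTS URL, returned when textToSpeech was requested.
    print("TTS audio:", t.get("targetAudio"))
else:
    print("Error", resp["errorCode"], resp.get("errorMessage"))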

Error Code

| HTTP Status | Code | Message |
| --- | --- | --- |
| 200 | 0 | no message |
| 429 | 1104 | Out of Rate Limit |
| 429 | 1105 | Out of Quotas |
| 405 | 1004 | Method Not Allowed |
| 411 | 1007 | Not Content Length |
| 400 | 1002 | API Not Found |
| 400 | 1003 | Bad Request |
| 400 | 2000 | Missing Parameter |
| 400 | 2001 | Invalid Parameter |
| 400 | 2002 | Invalid Request |
| 400 | 2102 | Input Too Long |
| 400 | 2109 | Speech Recognition Failed |
| 400 | 2110 | File is invalid |
| 400 | 2111 | Failed to download file |
| 400 | 2112 | TaskId is invalid |
| 401 | 1102 | Unauthorized Client |
| 401 | 1106 | Missing Access Token |
| 401 | 1107 | Invalid Token |
| 401 | 1108 | Expired Token |
| 401 | 1110 | Invalid Client |
| 401 | 2003 | Invalid Scope |
| 401 | 2004 | Unsupported Response Type |
| 401 | 2100 | Translation Failed |
| 401 | 2101 | No Match |
| 401 | 2103 | Detection Failed |
| 401 | 2104 | Language Not Supported |
| 401 | 2105 | Normalization Failed |
| 401 | 2106 | Inappropriate Word Used |
| 401 | 2107 | Invoke Service Failed |
| 401 | 2108 | Service Unavailable |