Security Requirements for Alexa Smart Screen SDK
Smart Screen SDK Builds on the AVS Device SDK
The Smart Screen SDK builds on the AVS Device SDK. Commercially distributed devices with the Smart Screen SDK must meet all of the security requirements for the AVS Device SDK and the requirements described below. The Smart Screen SDK introduces interprocess communication (IPC) between a new display process "client" and the SDK process "server." In addition to the communication between the client and server, the client also communicates with the user visually, and the server communicates with the AVS APIs on Amazon servers and manages Alexa state. Both the client and server are "Amazon software." You must protect Amazon software in accordance with the requirement below.
Security Requirement: Device SHALL protect local Amazon software from unauthorized access, e.g. on-device Man-in-the-Middle attack or display hijacking.
How you achieve this protection depends upon how you implement the client and the IPC.
Protecting Data Between Display Client and Local Server
It's important for you to protect the data going back and forth between the client and server because it could be personal data belonging to your customer. For example, there are Alexa skills that allow a user to talk to their bank, and so the display on your device could include personal information related to a user's bank account. Your device might also stream video from home security cameras. The end user of your device must be able to trust that the information they see on the device is from Alexa. Alexa must trust that the input from your device display is coming directly from the user.
HTML Rendering Client with WebSocket IPC (Sample App)
Security Requirement: Device SHALL use mutually-authenticated TLS (mTLS) or similar to protect local WebSocket interprocess communication (IPC) if the client can be restricted to the Amazon client, or use out-of-band per-session secret exchanges otherwise.
Uncontrolled Software and Session Tokens
If unknown software can run in the browser, however, then the client certificate is available to any browser software and no longer authenticates the Amazon Alexa GUI client. Third-party browser software might get installed locally and thereby present an HTTP Origin of "localhost." In these cases, you must use another web-based security technique: a session token.
The Sample App doesn't define how the HTML Rendering Client process starts. In your own device, you can modify the WebSocket server to generate an authentication token. You then pass the authentication token to the method you use to start the HTML Rendering Client. The client can present this token back to the WebSocket server, usually as an additional HTTP header. Your WebSocket server must verify both the token and the origin (localhost) to ensure that the client is the authentic Alexa display client. The token should be single-use, impossible to counterfeit, and valid for only a limited time, just like an internet web-session token. You don't want an attacker to analyze a single device and then counterfeit tokens in a fake Alexa display client for any other device. Amazon recommends you defend your device by making secrets unique per device and updating tokens per session.
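The token scheme described above can be sketched as follows. This is a minimal illustration, not part of the SDK's API: the class and method names are hypothetical, and a production implementation would hook the check into your WebSocket server's HTTP upgrade handling. The accepted Origin values below are also assumptions for a local HTTPS-served client.

```python
import hmac
import secrets
import time

class SessionTokenVerifier:
    """Single-use, short-lived session tokens for the local WebSocket IPC.

    Hypothetical sketch: the real integration point is your WebSocket
    server's handshake handler, which reads the token from an extra
    HTTP header and the Origin from the standard Origin header.
    """

    # Origins assumed for a locally served client; adjust for your device.
    ALLOWED_ORIGINS = ("https://localhost", "wss://localhost")

    def __init__(self, ttl_seconds: float = 30.0):
        self._ttl = ttl_seconds
        self._issued = {}  # token -> issue time (monotonic clock)

    def issue_token(self) -> str:
        # Cryptographically random and unique per session; pass this to
        # whatever mechanism starts the HTML Rendering Client.
        token = secrets.token_urlsafe(32)
        self._issued[token] = time.monotonic()
        return token

    def verify(self, presented_token: str, origin: str) -> bool:
        # Reject anything that isn't the local client before touching tokens.
        if origin not in self.ALLOWED_ORIGINS:
            return False
        for token, issued_at in list(self._issued.items()):
            # Constant-time comparison avoids leaking token bytes via timing.
            if hmac.compare_digest(token, presented_token):
                del self._issued[token]  # single-use: consume immediately
                return (time.monotonic() - issued_at) <= self._ttl
        return False
```

In use, the server would call `issue_token()` just before launching the client, and `verify()` exactly once during the WebSocket handshake; any second presentation of the same token fails.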
You should generate the certificates for the local WebSocket server and client on the device so that each is unique per device. The following table shows the properties these certificates should have.
| Property | Description |
|---|---|
| Self-signed | You must have the private key for each certificate on the device. You don't want an attacker to get a "real" (pre-trusted by standard browsers) private key off your device. |
| Old and long-lived | Unless you plan to regenerate the certificate periodically, you should make the certificate long-lived (for example, 200 years), longer than your device will function. Make sure that the start date for the certificate is before the power-on date for your device so that even if the clock has never been set on your device, the certificate will be valid. |
| Nonsigning | Make sure that any private keys on your device aren't able to sign other certificates. Your public and private keys are both on the device, so an attacker could get the private key. (The public key for the WebSocket server is in your browser trust store, so that it authenticates the server without complaint, and the private key is in your WebSocket server.) You don't want an attacker to use your self-signed and trusted certificate to sign something else and forge other sites' certificates (for example, you don't want your browser to trust a fake certificate for some other site). |
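A sketch of generating a certificate with these properties, using Python to drive the `openssl` command-line tool (1.1.1 or later for `-addext`). The function name, paths, and parameter choices are illustrative, not from the SDK. Note one limitation: `openssl req` sets the certificate's start date to the current time, so satisfying the "start date before power-on" property requires additional tooling and is not shown here.

```python
import subprocess

def generate_device_cert(cert_path: str, key_path: str, days: int = 73000) -> None:
    """Generate a per-device, self-signed, non-signing server certificate.

    Illustrative sketch via the openssl CLI. 73000 days is roughly 200
    years, so the certificate outlasts the device ("long-lived").
    """
    subprocess.run(
        [
            "openssl", "req", "-x509",          # self-signed certificate
            "-newkey", "ec",
            "-pkeyopt", "ec_paramgen_curve:prime256v1",
            "-nodes",                           # unencrypted key; protect it with file permissions
            "-keyout", key_path,
            "-out", cert_path,
            "-days", str(days),                 # long-lived
            "-subj", "/CN=localhost",           # server cert for the local WebSocket endpoint
            # Nonsigning: the certificate must not be usable as a CA,
            # so it can't be abused to forge other sites' certificates.
            "-addext", "basicConstraints=critical,CA:FALSE",
        ],
        check=True,
    )
```

Because the key pair is generated on the device, every device ends up with a unique certificate, which limits the damage if any single device is compromised.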
If you use a session token to authenticate the Alexa display client to the WebSocket server, you should still use server TLS to encrypt the traffic, even though it's local. You don't want an attacker to be able to observe the data, and many Linux systems allow even nonroot users to `tcpdump` localhost data. This case would also support the following requirement.
Security Requirement: Device SHALL encrypt confidential data in storage and transmission
Beyond the Sample App: OpenGL and Unix Domain Sockets
Your system might use an alternative template rendering client, perhaps based on OpenGL. In this case, you probably wouldn't use a WebSocket client (if you do, see the previous section). Because you're using a custom rendering client, you can probably answer "yes" to the question, "Do I control all the software running in this renderer?" If you're also using Unix domain sockets (or even named pipes) for your IPC, you can control access to them with standard Unix user permissions. If you're using a different operating system, you must use the permission scheme of that operating system to ensure that other processes can't communicate with, or eavesdrop on, the channel between your display client and SDK process server.
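The Unix-permission approach above can be sketched as follows: the server creates its IPC socket with file permissions that only its own user account can use, so other local users can't connect or eavesdrop. This is an illustrative pattern, not SDK code; the socket path and function name are hypothetical.

```python
import os
import socket

def create_ipc_socket(path: str) -> socket.socket:
    """Create a Unix domain socket restricted to the owning user account.

    Least-privilege sketch: the SDK "server" process runs under a
    dedicated user, and only processes running as that user can connect
    (connecting requires write permission on the socket file).
    """
    # Remove any stale socket file left over from a previous run.
    try:
        os.unlink(path)
    except FileNotFoundError:
        pass

    # Restrict permissions via umask so the socket is created 0600 from
    # the start; there is no window in which other users can connect.
    old_umask = os.umask(0o177)
    try:
        server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        server.bind(path)
    finally:
        os.umask(old_umask)
    server.listen(1)
    return server
```

Placing the socket inside a directory that is itself owner-only (mode 0700) adds a second layer of protection on systems that don't honor socket-file permissions consistently.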
Security Requirement: Device SHALL run software with the least privilege required for operation. For Linux, this means running software under a user account with only the file system access that is necessary for operation (e.g. not as root, nor as a member of a group with overly permissive read/write/run permissions). For Android, this means applications must avoid being granted system permissions…
Security Requirement: Device SHALL apply access controls that restrict read/write/run permission to specific applications or user accounts on files, sockets, named pipes, or other IPC mechanisms involved in the Alexa Voice Service implementation.
If a utility like `strace` is available for your system (even if you don't include it in your firmware), you should encrypt your IPC communication, because tools like `strace` can allow attackers to eavesdrop on the IPC communications. OpenSSL/TLS works with Unix domain sockets and could provide an effective encryption solution. OpenSSL/TLS isn't a requirement as long as you have strong, least-privilege user permissions in effect throughout your system, but encrypted IPC is an enhancement you can make for security.
Additional security requirements
As a registered developer, you can access additional security requirements in the AVS developer console. For more details on these requirements, see AVS Security Requirements.
Last updated: Sep 28, 2022