Amazon Developer Blogs

Showing posts tagged with How to

January 11, 2017

Abdullah Hamed


So, you have this shiny game you made in Unity3D. You have tested your monetization funnel. You have created all of your in-app purchase (IAP) items. All you need now is to integrate it with the Amazon Appstore In-App Purchasing API. There are two ways to integrate the Amazon IAP API into your Unity game: you can use the built-in cross-platform Purchasing API from Unity, or you can use our own Unity plugin. In this blog, we will look at a basic comparison of the two methods and the advantages and disadvantages of each. We will also walk through setting up IAP items and implementing the Unity Purchasing API in your game.

[Read More]

January 11, 2017

David Isbitski

The Alexa Skills Kit provides the ability to display visual information, both text and images, via skill cards. These cards are a useful way to provide your users with additional information from your Alexa skill that may be too verbose or too difficult to include in the voice user interface. Skill cards can be displayed in many form factors across different types of devices, including the Alexa app on iOS and Android devices, the Alexa app in a web browser, Fire Tablet, and the big screen while interacting with skills on Fire TV.
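For instance, with the Java library for the Alexa Skills Kit, returning a card alongside the spoken response takes only a few lines. The sketch below is illustrative only (the card title and text are placeholders, not code from this post):

import com.amazon.speech.speechlet.SpeechletResponse;
import com.amazon.speech.ui.PlainTextOutputSpeech;
import com.amazon.speech.ui.SimpleCard;

// Build a spoken response that also renders a card in the Alexa app
PlainTextOutputSpeech speech = new PlainTextOutputSpeech();
speech.setText("Here are today's headlines.");

SimpleCard card = new SimpleCard();
card.setTitle("My Skill");                      // card title shown in the Alexa app
card.setContent("Here are today's headlines."); // body text of the card

// newTellResponse ends the session and delivers the speech plus the card
return SpeechletResponse.newTellResponse(speech, card);

[Read More]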

December 28, 2016

Mario Viviani


In parts 1-5 of this series we followed the user journey on Fire TV from browsing and content discovery to reading the details of specific content and performing an action. Now we end our journey on the best part: how to play the video!

The PlaybackOverlayActivity

In a Leanback-enabled project, video playback takes place within the PlaybackOverlayActivity.

The UI of the PlaybackOverlayActivity is simple. We have a full-screen video player that is responsible for playing the content. On top of the video player sits the PlaybackOverlayFragment, which is responsible for displaying the media controls and managing the underlying content playback.
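As a rough sketch of that structure (the layout name, view ID, and Intent extra below are assumptions, not the template's exact code), the Activity can be as simple as a full-screen VideoView that starts playing as soon as it is created:

import android.app.Activity;
import android.os.Bundle;
import android.widget.VideoView;

public class PlaybackOverlayActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Layout stacks a full-screen VideoView under the PlaybackOverlayFragment
        setContentView(R.layout.playback_controls);

        VideoView videoView = (VideoView) findViewById(R.id.video_view);
        videoView.setVideoPath(getIntent().getStringExtra("video_url"));
        videoView.start();
    }
}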

[Read More]

December 20, 2016

Jeff Blankenburg

We all have our favorite places. It may be your childhood hometown, an exotic place you visited, or even your college town. Regardless of why a city is your favorite, we all have our favorite spots to visit and want to tell others about, and that’s exactly what this new skill template helps you do.

This new template uses AWS Lambda, the Alexa Skills Kit (ASK), and the Alexa SDK for Node.js, in addition to the New York Times Search API for news. We provide the business logic, error handling, and help functions for your skill; you just need to provide the data and credentials.

For this example, we will create a skill for the city of Seattle, Washington. The user of this skill will be able to ask things like:

  • “Alexa, ask Seattle Guide what there is to do.”
  • “Alexa, ask Seattle Guide about the Space Needle.”
  • “Alexa, ask Seattle Guide for the news.”

You will be able to use your own city in the sample provided, so that users can learn to love your location as much as you do. This might also be a good opportunity to combine the knowledge from this template with our Calendar Reader sample, so that you can provide information about the events in your town, as well as the best places to visit.

After completing this tutorial, you’ll know how to do the following:

  • Create a city guide skill - This tutorial will walk Alexa skills developers through all the required steps involved in creating a skill that shares information about a city, and can search for news about that location.
  • Understand the basics of VUI design - Creating this skill will help you understand the basics of creating a working Voice User Interface (VUI) while using a cut/paste approach to development. You will learn by doing, and end up with a published Alexa skill. This tutorial includes instructions on how to customize the skill and submit for certification. For guidance on designing a voice experience with Alexa you can also watch this video.
  • Use JavaScript/Node.js and the Alexa Skills Kit to create a skill - You will use the template as a guide but the customization is up to you. For more background information on using the Alexa Skills Kit please watch this video.
  • Manage state in an Alexa skill - Depending on the user’s choices, we can handle intents differently.
  • Get your skill published - Once you have completed your skill, this tutorial will guide you through testing your skill and sending your skill through the certification process so it can be enabled by any Alexa user. You may even be eligible for some Alexa swag!
  • Interact with the New York Times Search API.

Get started and build your first—or next—Alexa skill today.

Special Offer: Free Hoodies

All published skills will receive an Alexa dev hoodie. Quantities are limited. See Terms and Conditions.

[Read More]

December 16, 2016

Andy Haldeman

System X-Ray is useful for displaying system metrics on Fire TV, but did you know you can display information of your own choosing? Your app can send information to System X-Ray, which will display it while your app is in the foreground. This feature can be used in several ways: to display static information, to flag when a metric crosses a threshold boundary, or to signal when an event occurs. Let’s walk through some examples.

Examples

Static Information

If you test your app on multiple Fire TVs, you may have wished you could tell at a glance which Fire TV model you are testing on. If you connect your Fire TVs to different WiFi networks, it would also be helpful to see which network a given Fire TV is currently connected to. System X-Ray can help with both: collect this information as your app starts up, and send it to System X-Ray.

private void updateMetrics(Context context, String buildModel, String ssid) {
    // Initialize Intent
    Intent intent = new Intent("com.amazon.ssm.METRICS_UPDATE");
    intent.putExtra("com.amazon.ssm.PACKAGENAME", context.getPackageName());

    // Add metrics
    intent.putExtra("Metrics1", buildModel);
    intent.putExtra("Metrics2", ssid);

    // Send
    context.sendBroadcast(intent);
}
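A hypothetical call site for this method, run once at startup with a valid Context (the "unknown" fallback is our own, not from this post), might look like:

// Requires android.net.wifi.WifiManager / WifiInfo and android.os.Build
WifiManager wifiManager = (WifiManager) context.getApplicationContext()
        .getSystemService(Context.WIFI_SERVICE);
WifiInfo wifiInfo = wifiManager.getConnectionInfo();
String ssid = (wifiInfo != null) ? wifiInfo.getSSID() : "unknown";

updateMetrics(context, Build.MODEL, ssid);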
[Read More]

December 13, 2016

Marion Desmazieres

Coding-Dojo.png

Today, we’re excited to announce a new, free video series on Alexa development by Coding Dojo, a pioneer in the coding bootcamp space that offers in-person and online classes. These Coding Dojo YouTube videos will help aspiring and established Python coders learn about building skills for Alexa, the voice service that powers Amazon Echo.

Here is what you can expect to learn in Coding Dojo's Alexa Skill Training series:

  • The videos will introduce Alexa-enabled devices like Echo and talk about the Alexa Skills Kit, a collection of self-service APIs, tools, documentation and code samples that make it fast and easy for you to add skills to Alexa.
  • The video instructor will take you through the process of creating an Alexa skill built in Python using an AWS Lambda function as the backend to handle the skill's requests. You will learn the steps to create a Coding Dojo skill that can tell you about the coding bootcamp and their instructors.
  • The videos will cover how to configure a skill in the Amazon developer portal, and will discuss setting up the interaction model, intent schema, and sample utterances, and testing the skill.
  • With a code walkthrough you will take a closer look at the code that’s allowing your Alexa skill and Lambda function to interact.
  • Finally, the video training will walk you through creating your own backend using Flask-Ask, a Python framework and Flask extension created by John Wheeler, an Alexa Champion. You will also learn how ngrok can allow you to test your skill locally. The series will end with an overview of AWS Elastic Beanstalk and its advantages.

“At Coding Dojo we want to give people hands-on experience building apps and programs for popular technologies in order to help them further their careers,” said Richard Wang, CEO at Coding Dojo. “The new videos will give both novice and existing developers invaluable project experience for their resumes and portfolios. With a number of our graduates already working at Amazon, we're hopeful that these types of real world projects will help more of our students get the opportunity to work on exciting new technology like Alexa.”

Watch the Alexa video series for free on YouTube today.

Learn more about Alexa with Coding Dojo

In addition to the videos, Coding Dojo announced a new in-person and online class, as well as an Alexa hackathon that will train Python developers to create skills. The Alexa skill building class is available as a module in the Python stack at Coding Dojo’s 14-week onsite and 20-week online coding bootcamp. Finally, Coding Dojo will host an Alexa skills hackathon led by Amazon Alexa employees on February 20, 2017 in San Jose. Anyone interested in participating should contact Coding Dojo's San Jose campus.

Check out the full announcement by Coding Dojo here.

December 09, 2016

Mario Viviani

Providing the Details of the App Content through the DetailsFragment

In Part 1 of this series we analyzed the TV Interaction Model, based on three steps: Browsing for Content, Reading Description and Details, and Playing the Content. The first action, Browsing for Content, as we have seen in Part 3 of this series, is achieved through the BrowseFragment.

Now let’s see how we can provide information about a specific piece of content, following the second step of the user journey, Reading Description and Details. To do this we’ll use one of the main components of a Leanback-enabled project: the DetailsFragment.

The DetailsFragment

The DetailsFragment is displayed when the user selects a specific piece of content on the BrowseFragment. It contains information like Title, Subtitle, Description, and is accompanied by a preview of the content. It also contains Actions that we can prompt our user to perform.

One of the most important classes used in the DetailsFragment is DetailsOverviewRow. This class defines which content is displayed in the fragment (as seen in the previous episode, DetailsOverviewRow takes advantage of a Presenter called DetailsDescriptionPresenter) and, most importantly, is responsible for defining the Actions that we can prompt our user to perform.

private void setupDetailsOverviewRow() {

    final DetailsOverviewRow row
            = new DetailsOverviewRow(mSelectedMovie);
    ...
    row.setImageDrawable(getResources().getDrawable(R.drawable.default_background));
    row.addAction(new Action(ACTION_WATCH_TRAILER,
            "Watch Trailer", "FREE"));

    mAdapter.add(row);
}

In the snippet above we demonstrate how easy it is to add a specific Action to the DetailsFragment. Just by calling addAction() we can add a new Action for the user to perform. In this case we added the unique ID ACTION_WATCH_TRAILER for the Action, and two Strings, “Watch Trailer” and “FREE”, to define the text fields of the button.

Once we have added this line, the Action will be displayed on the DetailsFragment.

By using Actions we can easily add IAP items like “Rent the Content”, “Buy”, or “Subscribe”. It is just a matter of attaching a listener to the Actions to perform the corresponding tasks.
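For example, a listener can be attached through the DetailsOverviewRowPresenter. This is a minimal sketch, assuming a presenter instance named detailsPresenter created during fragment setup (as in the stock Leanback template):

detailsPresenter.setOnActionClickedListener(new OnActionClickedListener() {
    @Override
    public void onActionClicked(Action action) {
        if (action.getId() == ACTION_WATCH_TRAILER) {
            // Hand off to the playback screen (covered in the next episode)
            startActivity(new Intent(getActivity(), PlaybackOverlayActivity.class));
        }
    }
});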

When we deploy a Leanback-enabled project, the only Action defined by default is “Watch Trailer”, which starts playback of the content’s trailer.

Stay tuned for Part 6: How to Play Video Content using the PlaybackOverlayFragment

In the next and final episode of this series we’ll show how to play the content, how to leverage the remote control, and how to show the on-screen controls using the PlaybackOverlayFragment.

Stay tuned!

Mario Viviani (@mariuxtheone)

 

December 07, 2016

David Isbitski

Earlier in the year, we introduced built-ins with 15 different intents (such as Stop, Cancel, Help, Yes, No) and 10 slot types (such as Date, Number, City, etc.) that made it easier for developers to create voice interactions.  Today, the US preview of our new Alexa Skills Kit (ASK) built-in library is available to developers. This expands the library to hundreds more slots and intents covering new domains including books, video and local businesses. We chose these based on feedback from our developer community, as well as our own learnings with Alexa over the past year.

When you’re building a skill, it’s challenging to think of all the different ways your customers might ask the same question or express the same idea – all of which your skill would ideally need to understand. The new built-in intents and slots reduce your heavy lifting by providing a pre-built model. For example, just including the intent signature AMAZON.SearchAction&lt;object@LocalBusiness|phoneNumber&gt; makes your skill understand a customer’s request for phone numbers for local businesses.

Customer usage and your feedback are important for us to improve the accuracy of the library, which will increase over the course of the preview. To provide feedback during this developer preview or to submit your questions, visit our Alexa Skills Kit developer forums, create a question, and use the “built-in library” topic. We appreciate your help!

Getting Started

The built-in intent library gives you access to built-in intents that fall into categories, such as the weather forecast example I will walk through below (check out the full list of categories here). You can use these intents to add functionality to your skill without providing any sample utterances. Using one of these new built-in intents in your skill is similar to using a standard built-in intent like AMAZON.HelpIntent:

  1. Add the intent name to your intent schema.
  2. Implement a handler for the intent in your code.

The differences are:

  • Intents in the library are named according to a structure using actions, entities, and properties. Understanding this naming convention can help you understand the purpose and use of each intent.
  • Intents in the library also have slots for providing additional information from the user’s utterance. The slots are provided automatically, so you do not define them in the intent schema. In contrast, the standard built-in intents like AMAZON.HelpIntent cannot use slots.

Our weather example would have an intent schema like this:

{
  "intents": [
    {
      "intent": "AMAZON.SearchAction&lt;object@WeatherForecast&gt;"
    }
  ]
}

Although no slots are defined in the above schema, an utterance like “what’s the weather today in Seattle” would send your skill a request with slots containing today’s date and the city “Seattle.”

These intents are designed around a set of actions, entities, and properties. The name of each intent combines these elements into an intent signature. In the above example the action is SearchAction, its property is object, and the entity is WeatherForecast.
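If your skill happens to be written with the Java library for the Alexa Skills Kit, a quick way to inspect which slots such an intent actually delivers is to iterate over them in your handler. This is a minimal sketch of our own, not code from this post; the spoken text is a placeholder:

import com.amazon.speech.slu.Intent;
import com.amazon.speech.slu.Slot;
import com.amazon.speech.speechlet.IntentRequest;
import com.amazon.speech.speechlet.Session;
import com.amazon.speech.speechlet.SpeechletException;
import com.amazon.speech.speechlet.SpeechletResponse;
import com.amazon.speech.ui.PlainTextOutputSpeech;

@Override
public SpeechletResponse onIntent(IntentRequest request, Session session)
        throws SpeechletException {
    Intent intent = request.getIntent();

    // The built-in intent's slots arrive without being declared in your
    // schema; log them to see the property-path names the library uses
    for (Slot slot : intent.getSlots().values()) {
        System.out.println(slot.getName() + " = " + slot.getValue());
    }

    PlainTextOutputSpeech speech = new PlainTextOutputSpeech();
    speech.setText("Checking the weather.");
    return SpeechletResponse.newTellResponse(speech);
}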

[Read More]

December 02, 2016

Marion Desmazieres

The name Harrison Kinsley may not ring a bell, but if you’re into Python programming you’ve probably heard the name “Sentdex”. With over 125,000 subscribers to his YouTube channel and about 800 free tutorials on his associated website, Harrison has become a go-to reference for Python programming learning materials.

Today, we’re excited to share a new Alexa skills tutorial for Python programmers, available for free on PythonProgramming.net with companion video screencasts to follow along. This three-part tutorial series provides the instructions and code snippets to build an Alexa skill in Python that goes to the World News subreddit, a popular feed on the news aggregator Reddit, and reads the latest headlines. To follow along, you will need an Alexa-enabled device, ngrok or an HTTPS-enabled server, and an Amazon Developer account.

In this tutorial, you can expect to learn:

Get started with the Alexa tutorial series here. For more Python tutorials, head to Harrison’s website.

Happy coding!

Marion

Learn more

Check out these Alexa developer resources:

 

December 02, 2016

Mario Viviani

Editing the user interface of a Leanback-enabled TV app through Presenters

In the previous episode of this series we discussed how to create the main interface of our Leanback-enabled project through the BrowseFragment. Now let’s take a closer look into the Presenter class. The Presenter class allows us to define the look and feel of our Leanback-enabled app without editing the underlying data structure. 

The Presenter class

The Leanback template we created follows a custom version of the common Model-View-Controller (MVC) development pattern, in which the Presenter class acts as the View. The Presenters are passed to the ArrayObjectAdapter as arguments and define how the content of the Adapter should be displayed, as sketched in the snippet after the list below.

The Leanback approach provides a variety of predefined Presenters:

  • CardPresenter defines singular content
  • ListRowPresenter defines how various content in a row should be displayed and arranged
  • DetailsDescriptionPresenter defines the UI of the DetailsFragment
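Concretely, the wiring looks roughly like this inside the BrowseFragment setup (a sketch; the Movie item and “Movies” header are placeholder values, not this post's exact code):

ArrayObjectAdapter rowsAdapter = new ArrayObjectAdapter(new ListRowPresenter());
ArrayObjectAdapter listRowAdapter = new ArrayObjectAdapter(new CardPresenter());
listRowAdapter.add(movie);  // a single content item, rendered by the CardPresenter
rowsAdapter.add(new ListRow(new HeaderItem(0, "Movies"), listRowAdapter));
setAdapter(rowsAdapter);    // hand the rows to the BrowseFragment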

The Presenter implementations are all quite similar: they follow the ViewHolder pattern and are mostly composed of custom Views, with methods to set the fields of those views. Let’s take a closer look at customizing the CardPresenter as an example:
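While the full walkthrough is behind the link below, a minimal CardPresenter might look like the following sketch (Movie and its getters are assumed to be the template's data model):

public class CardPresenter extends Presenter {

    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent) {
        // ImageCardView is Leanback's standard card widget
        ImageCardView cardView = new ImageCardView(parent.getContext());
        cardView.setFocusable(true);
        return new ViewHolder(cardView);
    }

    @Override
    public void onBindViewHolder(ViewHolder viewHolder, Object item) {
        Movie movie = (Movie) item;
        ImageCardView cardView = (ImageCardView) viewHolder.view;
        cardView.setTitleText(movie.getTitle());    // main title of the card
        cardView.setContentText(movie.getStudio()); // subtitle line
    }

    @Override
    public void onUnbindViewHolder(ViewHolder viewHolder) {
        // Release any heavyweight resources (e.g. images) held by the view
    }
}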

[Read More]
