Using Sound In Your Applications

Sound effects are a common part of many applications, and MHP and OCAP give developers two ways of including sound in their applications. As we will see, the two approaches have different strengths and weaknesses, and developers may choose to use either or both of them in an application. In this tutorial, we will look at the two approaches and at when we should choose one over the other.

The first way to play a sound is using the Java Media Framework API. We won't look at this in any detail in this tutorial because it is covered in much more depth in the JMF tutorial, but we will look at the basics. The second way is through the org.havi.ui.HSound class. This provides a simpler API for audio playback, but it does not offer as much control over the media as JMF. In many cases applications will not need this extra control, and for applications that simply want to play a sound on certain events (e.g. when the user enters an invalid value in a form, or presses a button that has no effect), HSound is probably all they need.

Playing sounds with HSound

As we can see from the interface to the HSound class below, it offers a fairly simple way to play an audio clip without a lot of unnecessary overhead. The methods on this class are limited to loading the source clip, playing and stopping it, and ensuring that any audio data is free when it is no longer needed. For many applications, this is enough: they do not need any more control and a complex API would only get in the way.

public class HSound {

  public HSound();

  public void load(java.lang.String location);

  public void load(java.net.URL contents);

  public void set(byte[] data);

  public void play();

  public void stop();

  public void loop();

  public void dispose();
}

Using this API is as simple as it appears. An application simply loads the clip it wants to play and then calls the play() method; this method is asynchronous, and so it will return before the clip has finished playing, avoiding the need for the application to spawn another thread simply to play a sound. The code below shows how an application can play a sound:

// Create an HSound object
HSound player;
player = new HSound();

// Load an audio clip into the HSound object
try {
  player.load("mySound.mp2");
}
catch (Exception e) {
  // the clip could not be loaded
  return;
}

// Now play the clip
player.play();

The loop() method allows a sound to be played repeatedly, instead of simply playing and then stopping when the end of the clip is reached. This is useful for playing a piece of background music, for instance, where the clip can simply be left playing with no intervention needed by the application or the user. Simply playing the clip repeatedly using play() may result in a gap at the end of the clip before it starts playing again. Some people have reported that the loop() method does not work correctly on some implementations, however, and so developers should take care not to rely on this.

One thing that applications must take care with, however, is to free the audio data when it is no longer needed. Calling the dispose() method will free the audio data held in memory and allow other applications to re-use the space. Simply stopping the clip is not enough, because references to the audio data may still remain within the middleware. Disposing of the HSound object correctly will avoid memory leaks and increase reliability.

Clips can be loaded from a variety of sources. While a file is the most obvious source, the other versions of the load() method provide more flexibility. Loading data from a URL can mean any kind of URL, so data can be loaded from a file using a file:// URL or from a remote source using HTTP or some other protocol. The third version of the load() method passes a buffer containing the sample data directly to the HSound object, and while this creates slightly more work for the application developer it does give complete freedom as to where the sample data comes from. It could be read from a file, hard-coded directly into the application, or even acquired from an IP multicast stream or a stream of MPEG private sections.
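Since the URL version of load() can take a file:// URL, an application that only has a plain filename needs to turn it into a URL first. The helper below sketches one way to do this with the standard java.net classes (ClipLocator and toFileUrl are names we have invented for illustration; they are not part of the MHP or HAVi APIs, and the cut-down Java platform in some receivers may not provide File.toURI(), in which case the URL string would need to be assembled by hand):

```java
import java.io.File;
import java.net.URL;

class ClipLocator {
  // Convert a plain filename into a file:// URL string
  // that can be passed to the URL version of load().
  // Passing an explicit URL avoids any ambiguity between
  // implementations that expect a filename and those
  // that expect a URL.
  static String toFileUrl(String filename) throws Exception {
    File file = new File(filename);
    URL url = file.toURI().toURL();
    return url.toString();
  }
}
```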

One problem that has been noticed is that not all platforms behave the same way when loading data from files. Some platforms take a string containing a file:// URL as a parameter to the first version of the load() method, while others take a filename. To avoid problems, many developers prefer to load the audio clip into a memory buffer and then tell the HSound object to use that buffer for its sample data, or use the version of the load() method that takes a URL. The example below shows one common way of doing this:

org.havi.ui.HSound player;
java.io.File file;
byte[] audioData;
java.io.FileInputStream stream;

// Create the File object that we will get the data
// from
file = new File("mySound.mp2");

// Create a memory buffer to hold the audio clip
audioData = new byte[(int) file.length()];

// Load the audio clip into the memory buffer
// using the standard Java file operations.  We
// could also use the DSM-CC API and load this file
// asynchronously if we wanted to.
try {
  stream = new FileInputStream(file);
  stream.read(audioData);
  stream.close();
}
catch (Exception e) {
  // we could not read the clip
  return;
}

// Create the HSound object that we will use to
// play this audio clip
player = new HSound();

// Tell the HSound object to use the sample data
// that we have just loaded
player.set(audioData);

// Now play the clip
player.play();
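One caveat when reading a clip into a byte array this way: a single InputStream.read() call is not guaranteed to fill the buffer, so a careful application should loop until all of the bytes have arrived. The helper below does this using only the standard Java I/O classes (readFully is a name we have invented for illustration; it is not part of MHP, OCAP, or HAVi):

```java
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

class AudioBuffer {
  // Read the whole of a file into a byte array, which is
  // the form that HSound.set() expects.  Unlike a single
  // read() call, this loops until the buffer is full.
  static byte[] readFully(File file) throws IOException {
    byte[] data = new byte[(int) file.length()];
    FileInputStream stream = new FileInputStream(file);
    try {
      int offset = 0;
      // read() may return fewer bytes than we asked for,
      // so keep reading until the buffer is full
      while (offset < data.length) {
        int count = stream.read(data, offset, data.length - offset);
        if (count < 0) {
          throw new EOFException("unexpected end of file");
        }
        offset += count;
      }
    } finally {
      stream.close();
    }
    return data;
  }
}
```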

Playing sounds with JMF

Sometimes, applications need more control than just choosing whether a clip plays once or repeatedly. The Java Media Framework offers applications significantly more control over the playback of an audio clip, at the expense of slightly more complexity. We will not look at JMF in too much detail here, because most of the information is already covered in the JMF tutorial elsewhere on this site.

As with HSound, simply playing back a clip is fairly straightforward:

javax.media.MediaLocator loc;
javax.media.Player player;
java.io.File file;

// Create a MediaLocator that represents our clip.
// This should be a file URL, so first we create
// an object representing the file and then we
// get the URL from that File object
file = new File("mySound.mp2");
loc = new MediaLocator(file.toURL());

try {
  // Create the JMF Player for the audio file
  player = Manager.createPlayer(loc);

  // Play it
  player.start();
}
catch (Exception e) {
  // the player could not be created or started
}

The big advantage that we get with using JMF is flexibility. While the basic functionality of a JMF Player object is very similar to that of HSound, the additional controls that may be available for a JMF player set it apart. Applications can call the getControl() method on a player to get a Control object that can implement a wide variety of functionality. For audio clips, one of the more useful of these lets the application decide at what point a clip should start playing, so that part of the clip can be skipped if necessary. The application can also set the time at which a clip ends, so together these give an application the ability to play only a small part of a much longer clip. Applications can also control how fast a clip is played, pause and resume it, and generally get much more fine-grained control over audio playback through JMF than they can through HSound.
The only thing that HSound can do that JMF can not is loop a sound (it's possible to emulate this with JMF, but HSound tends to do it better).

The other area where JMF is weaker is when we look at loading the audio data. HSound supports loading the data from a file in the object carousel or from a URL, or even from an array of bytes. JMF only supports loading data from a file using a file:// URL.

Sound formats

MHP and OCAP both support only one format for sound clips: MPEG-1 layer 1 or layer 2 audio, with the restrictions imposed in ETSI standard TR 101 154. This is the same standard that is used for audio in DVB services. MPEG-1 layer 3 (MP3) files and other audio formats such as WAV are not supported, and applications should not try to use them. While MPEG may not seem the most obvious choice at first, it is the format that is supported by every receiver in the market today. Other formats may not be supported by the receiver hardware that is currently available, making them impractical to use on some platforms, and so MPEG-1 is a safe choice for all receivers.

Some emulators do not support MPEG audio, however, and so in these cases application authors will need to use a different format such as the Sun .au format for their audio files. These will not work on a real STB, however, and so developers should take care when moving their application from an emulator to a real environment.

Limitations on using sound

One thing that designers and developers need to be aware of when using sound is the size of the files. Since most sound clips will be downloaded from the object carousel, large sound files will take a long time to download and may affect the loading times for other files - as the overall size of the carousel increases, the average latency of a file will also increase. Large files also use more memory, which may be a problem on receivers with less RAM. In general, this means that long sound clips (e.g. background music for your application) are probably not a good idea.
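To see why this matters, a rough lower bound on the delivery time is simply the clip size divided by the carousel bitrate. The sketch below uses an assumed 1 Mbit/s carousel purely for illustration, and it ignores receiver caching and where the file happens to sit in the carousel cycle:

```java
class CarouselEstimate {
  // Rough lower bound on the time needed to deliver a
  // clip from an object carousel.  This is only the
  // transmission time; real-world latency also depends
  // on when the file next appears in the carousel cycle.
  static double secondsToDeliver(long clipBytes, long carouselBitsPerSecond) {
    return (clipBytes * 8.0) / carouselBitsPerSecond;
  }

  public static void main(String[] args) {
    // A 500 KB clip on a 1 Mbit/s carousel needs at
    // least four seconds just to be transmitted
    System.out.println(secondsToDeliver(500 * 1024, 1000000));
    // prints 4.096
  }
}
```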

Length of clips is not the only reason for thinking this way. MHP and OCAP receivers may not be able to mix sound clips, meaning that a receiver may only be able to play one sound at a time. Any other audio that is playing will be cut off until the new sound has finished, so if your application has background music, any other sound effects may mute the background music until they are finished. This applies to audio from the service associated with your application as well, so a sound effect will mute the audio from the parent service. By keeping audio clips short, designers and developers can use audio without conflicting with other elements of the service.

Of course, there are times when you don't want the audio from the parent service to be heard at all - in a full-screen application, it may make no sense to keep the underlying audio playing. To stop the audio from the parent service, we have to use JMF and the JavaTV service selection API:

// Get a reference to the JavaTV ServiceContextFactory
ServiceContextFactory factory;
factory = ServiceContextFactory.getInstance();

// From this, we can get a reference to the parent
// service context of our Xlet.  To do this, we need a
// reference to our Xlet context.  It's times like this
// that show why your application should always keep a
// reference to its Xlet context
ServiceContext myContext;
myContext = factory.getServiceContext(myXletContext);

// ServiceContentHandler objects are responsible for
// presenting the different parts of the service.  This
// includes the media components
ServiceContentHandler[] handlers;
handlers = myContext.getServiceContentHandlers();

for (int i = 0; i < handlers.length; i++) {
  if (handlers[i] instanceof ServiceMediaHandler) {
    // This is a Player for part of the service, since
    // ServiceMediaHandler objects are instances of JMF
    // Player objects.
    // All we have to do is stop the player to stop the
    // underlying audio.  This is a crude approach, and
    // a better solution would be to get a reference to
    // the appropriate control for this player and
    // remove the audio component from the set of
    // components that are being presented by this
    // player.
    Player p = (Player) handlers[i];
    p.stop();
  }
}

Using sound effectively

One thing to remember when using sound in an application is to use it well. Digital TV applications may be used in environments where there are other distractions and where audio cues may not be heard, and so sound design may play an important part in the usability of your application. Use of sound should fit with the overall style of the application, and should take account of the environment in which the application is normally used. A useful introduction to sound design is available from Boxes And Arrows: "Why Is That Thing Beeping? A Sound Design Primer".