Building an iPod: Part 3 — Integrating Apple MusicKit JS
In the first part of our “Building an iPod” series we took a small glimpse at the click wheel and user interface. In the second chapter we explored Spotify’s API and played some music. In this part we’ll integrate Apple Music on top of the abstraction layer Spotify is built on, which is called IEngine.
The IEngine interface includes all of the API calls that are required to read and play the logged-in user’s library. In the previous episode we implemented every method declared in the IEngine interface for the SpotifyEngine, and we built the PlayService on top of that engine interface. Our next task is to implement the AppleMusicEngine that handles the streaming logic for Apple Music.
MusicKit JS vs Spotify
In the previous part we discussed Spotify’s music playing capabilities. In a nutshell, with Spotify we can initialize a device within the browser window and then tell it what to do through a REST API. No realtime functionality, nothing fancy, since it was designed as a remote-control interface. Working with Spotify’s Remote API is generally hard when the goal is a realtime user interface.
MusicKit JS is completely different. MusicKit lets us play the user’s iCloud Music Library inside our app once the user is authenticated.
We can get the configured MusicKit JS instance by calling MusicKit.getInstance(). It provides realtime information about the current playback state; this extremely useful feature is completely missing on the Spotify side. Another useful feature of MusicKit JS is its callbacks.
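As a sketch of how those callbacks could be wired up: the helper function below is hypothetical, but the event names are the ones MusicKit JS v1 dispatches on its instance.

```typescript
// Hypothetical helper that wires realtime playback callbacks onto the
// MusicKit instance. The minimal interface captures just the part of the
// instance we use here.
interface MusicKitLike {
  addEventListener(name: string, callback: (event: any) => void): void;
}

function registerPlaybackCallbacks(musicKit: MusicKitLike): void {
  // Fires whenever playback starts, pauses, buffers or ends.
  musicKit.addEventListener("playbackStateDidChange", ({ state }) => {
    console.log("playback state changed to", state);
  });
  // Fires repeatedly while a track is playing.
  musicKit.addEventListener("playbackTimeDidChange", ({ currentPlaybackTime }) => {
    console.log("playback position (s):", currentPlaybackTime);
  });
}
```

In the app we would call registerPlaybackCallbacks(MusicKit.getInstance()) once MusicKit is configured, and push the updates straight into the UI.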
With Spotify we had to synchronize our playback state every time the user interacted with the playback, by requesting it through the REST API. This is not a bulletproof solution for our use case, and it consumes a lot of API calls, which eventually leads to throttling under a large user base.
Spotify’s solution, on the other hand, has much more detailed documentation. It is so much easier to dig up information about response objects, behavior and such while browsing Spotify’s Platform Documentation. Spotify’s web interface even lets us fire API calls, which is cool. Apple’s MusicKit documentation feels incomplete and it’s missing a lot of details; I really feel like it needs more attention. The TypeScript type definitions for MusicKit JS helped me a lot.
Authorizing users
We need an active membership in the Apple Developer Program. To make requests to the Apple Music API, create a MusicKit identifier and a private key. Next we have to create a developer token. Here is some sample code that’ll do it for you:
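As a minimal sketch, using only Node’s built-in crypto: the developer token is a JWT signed with the ES256 algorithm. TEAM_ID and KEY_ID are placeholders for your Apple Developer Team ID and the MusicKit key ID; in a real setup you would load the .p8 private key downloaded from the developer portal, while here a throwaway P-256 key is generated inline so the sketch runs as-is.

```typescript
import { generateKeyPairSync, sign } from "crypto";

// JWTs use base64url encoding: '+' -> '-', '/' -> '_', padding stripped.
const b64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Stand-in for the AuthKey_*.p8 file from the MusicKit identifier step.
const { privateKey } = generateKeyPairSync("ec", { namedCurve: "prime256v1" });

const header = b64url(Buffer.from(JSON.stringify({ alg: "ES256", kid: "KEY_ID" })));
const now = Math.floor(Date.now() / 1000);
const claims = b64url(Buffer.from(JSON.stringify({
  iss: "TEAM_ID",             // 10-character Apple Developer Team ID
  iat: now,
  exp: now + 180 * 24 * 3600, // Apple caps the token lifetime at six months
})));

const signature = sign("sha256", Buffer.from(`${header}.${claims}`), {
  key: privateKey,
  dsaEncoding: "ieee-p1363",  // JWT needs the raw r||s signature, not DER
});

const developerToken = `${header}.${claims}.${b64url(signature)}`;
console.log(developerToken);
```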
We’ll include the source of MusicKit in angular.json, next to the script that’ll initialize MusicKit with our developer token. Download musickit.js and save it to a file named MusicKit.js.
"scripts": [
//..
// https://js-cdn.music.apple.com/musickit/v1/musickit.js
"MusicKit.js",
"MusicKitInit.js" // this will initialize MusicKit
]
Here is the implementation for MusicKitInit.js that will initialize MusicKit after the window’s load event. Make sure to include the developer token you created in the previous step as the value of the developerToken key.
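Since the snippet itself isn’t embedded here, the following is a sketch of what MusicKitInit.js could look like; the app name, build number and the developerToken value are placeholders.

```javascript
// MusicKitInit.js (sketch). MusicKit JS dispatches "musickitloaded" on the
// document once the library has loaded; we configure the instance there and
// then fire our own "musicKitConfigured" event for the Angular app to pick up.
window.addEventListener("load", () => {
  document.addEventListener("musickitloaded", () => {
    MusicKit.configure({
      developerToken: "YOUR_DEVELOPER_TOKEN", // the token from the previous step
      app: { name: "webPod", build: "1.0.0" },
    });
    window.dispatchEvent(new Event("musicKitConfigured"));
  });
});
```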
This code registers an event listener for MusicKit’s musickitloaded event. Upon musickitloaded we configure the MusicKit instance and then fire our own musicKitConfigured event. When musicKitConfigured fires, we first check whether the user is authenticated.
this.musicKit = MusicKit.getInstance();
if (this.musicKit.isAuthorized) {
  // user is authorized
} else {
  // user is not authorized
}
If the user is authenticated, we slide to the menu screen; otherwise we bind a musicKit.authorize() call to the login button. This will take the user to Apple’s authorization popup window.
appleMusicLoginButtonClicked(): void {
this.musicKit.authorize();
}
When the user is finally authenticated, we can read her iCloud library. Every method under musicKit.api.library returns a promise. Implementing the IEngine interface with MusicKit was effortless, even with the inadequate documentation.
We need more abstraction
In the previous part we built the PlayService, which accepted user inputs, acted based on the player’s internal state, and forwarded requests to the IEngine instances. PlayService was written to work only with Spotify, and it has use cases that don’t map onto MusicKit. For example, when the user wanted to play an album, the PlayService checked whether a Spotify Connect device was set. If no Connect device was selected, we forwarded the user to the device-select screen. Use cases like this, and many more, are either unnecessary or entirely different in the MusicKit scenario.
We’ll need to cover the distinct behavior of Spotify and MusicKit with an interface. We’ll copy every method declaration from PlayService and put them into an interface. This interface will be IPlayBehaviour.
PlayService will have a factory that instantiates a behavior instance based on the user’s streaming-platform selection. Every method inside the PlayService will then forward execution to the IPlayBehaviour implementation. All Spotify-related logic will be outsourced from the PlayService to the SpotifyPlayBehaviour class. And that’s it, no more worries about Spotify-related logic.
SpotifyPlayBehaviour and AppleMusicPlayBehaviour will forward platform specific calls to SpotifyEngine and AppleMusicEngine respectively. This way our high level logic (PlayService) will be platform independent.
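As a sketch of how this could hang together: the method set on IPlayBehaviour and the bodies of the behaviors are illustrative, not the project’s actual declarations.

```typescript
// Sketch of the IPlayBehaviour abstraction. In the real app the two
// behaviors forward to SpotifyEngine and AppleMusicEngine respectively.
type Platform = "spotify" | "apple-music";

interface IPlayBehaviour {
  playAlbum(albumId: string): Promise<void>;
  seek(positionMs: number): Promise<void>;
}

class SpotifyPlayBehaviour implements IPlayBehaviour {
  async playAlbum(albumId: string): Promise<void> {
    // ...checks for an active Spotify Connect device, then calls SpotifyEngine
  }
  async seek(positionMs: number): Promise<void> {
    // ...forwards to SpotifyEngine and re-synchronizes the playback state
  }
}

class AppleMusicPlayBehaviour implements IPlayBehaviour {
  async playAlbum(albumId: string): Promise<void> {
    // ...forwards to AppleMusicEngine
  }
  async seek(positionMs: number): Promise<void> {
    // ...forwards to AppleMusicEngine
  }
}

// PlayService holds a factory like this and delegates every call
// to the behavior instance it produced.
function createPlayBehaviour(platform: Platform): IPlayBehaviour {
  return platform === "spotify"
    ? new SpotifyPlayBehaviour()
    : new AppleMusicPlayBehaviour();
}
```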
After implementing AppleMusicEngine we can listen to both Spotify and Apple Music on our iPod.
Implementing AppleMusicEngine is so much easier with the musicKit singleton MusicKit JS provides, compared to Spotify’s REST API. We can get the musicKit singleton by invoking getInstance() on the MusicKit global variable.
this.musicKit = MusicKit.getInstance();
The musicKit.api includes every resource we need to access while the musicKit.player includes realtime information about the playback status. A partial implementation of the AppleMusicEngine is showcased here:
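Since the snippet isn’t embedded here, the fragment below sketches what such a partial implementation could look like. The musicKit.api.library and setQueue calls follow MusicKit JS v1; the IEngine method names are assumptions.

```typescript
declare const MusicKit: any; // the global provided by musickit.js

class AppleMusicEngine /* implements IEngine */ {
  private musicKit = MusicKit.getInstance();

  // Every method under musicKit.api.library returns a promise.
  getAlbums(): Promise<any[]> {
    return this.musicKit.api.library.albums(null, { limit: 100 });
  }

  getPlaylists(): Promise<any[]> {
    return this.musicKit.api.library.playlists(null, { limit: 100 });
  }

  playAlbum(albumId: string): Promise<void> {
    // Queue up the library album, then start playback on the player.
    return this.musicKit
      .setQueue({ album: albumId })
      .then(() => this.musicKit.player.play());
  }
}
```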
The AppleMusicPlayBehaviour mostly interacts with the musicKit.player. A partial implementation for the seek action can be found here:
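A sketch of that seek handling: seekToTime is the MusicKit JS v1 player method (it expects seconds), while the class shape around it is an assumption.

```typescript
declare const MusicKit: any; // the global provided by musickit.js

class AppleMusicPlayBehaviour {
  private musicKit = MusicKit.getInstance();

  seek(positionMs: number): Promise<void> {
    // seekToTime expects seconds. MusicKit updates its own playback state
    // afterwards, so no extra synchronization call is needed (unlike Spotify).
    return this.musicKit.player.seekToTime(positionMs / 1000);
  }
}
```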
The seek feature takes us to the part of the track we point to. This feature took me five minutes to implement, whereas the Spotify implementation took hours. We read our playback state directly from the MusicKit instance, and after we request a seek, the playback state is updated in MusicKit automatically. In contrast, when we work with SpotifyEngine, a seek has to be followed by an updatePlayList API call.
You can try webPod here.