
Creating a Music Playlist with Visualizer Using HTML5 Web Audio API, Canvas and Node.js

posted Dec 4, 2016, 11:41 PM by Benedictus Jason Reinhart   [ updated Dec 5, 2016, 6:56 PM by Surya Wang ]

The Web Audio API provides a powerful and versatile system for controlling audio on the Web, allowing developers to choose audio sources, add effects to audio, create audio visualizations, apply spatial effects (such as panning) and much more. (Source)

The concept of the Web Audio API is simple: everything audio-related is described as a node. The source of the audio is a node. An effect on the audio (reverb, gain, a filter, etc.) is a node. The output of the audio (speakers, headphones, line out, etc.) is a node. All these nodes are connected into a processing chain. If we want to apply a filter to our music, we first need the audio source node. Then we connect the source node to the filter node, and the filter node to the destination node (the speakers or headphones).
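As a sketch in browser JavaScript, the filter chain above looks like this (the BiquadFilterNode is just one example of an effect node, and an `<audio>` element is assumed to exist on the page):

```javascript
var audioContext = new AudioContext();                     // the "container" for all nodes
var audio = document.querySelector('audio');               // assumes an <audio> element on the page
var source = audioContext.createMediaElementSource(audio); // the music node
var filter = audioContext.createBiquadFilter();            // an example effect node (a filter)
source.connect(filter);                                    // music -> filter
filter.connect(audioContext.destination);                  // filter -> speakers/headphones
```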

Other than filters and reverb, there is also an AnalyserNode. AnalyserNode can be used to get the information of frequency and time-domain analysis in real-time. Using the frequency information given by the AnalyserNode, we can provide a visualizer drawn in HTML5 canvas.
In terms of code, we need to:
  • Create an AudioContext object as the context or container of all our nodes
  • Create an audio source node
  • Create an AnalyserNode object to analyse the frequency in the audio currently played
  • Connect source to AnalyserNode so it can be analysed
  • Connect AnalyserNode to AudioContext.destination
Firstly, an AnalyserNode does not change any frequency in our audio; it still outputs the same audio it takes in, so we can safely assume that the input of the destination is the same as the output of the source. Secondly, an AudioContext has a destination property, which is usually the user's speakers or headphones.

Before we start, make sure you have Node.js installed with Express. You can download Node.js from nodejs.org and install Express with npm (npm install express).

Now, we start by setting up the server application first. The directory should look like this:
  |-- visualizer
    |-- songs
      |-- song.mp3
    |-- index.html
    |-- app.js

We'll write our server code in app.js. The server's access-control headers have to allow our origin, because media files cannot be streamed from a file:/// URI, and not every server on the internet allows cross-origin requests.

var express = require('express');
var fs = require('fs');
var app = express();

app.use(function (req, res, next) {
    res.header("Access-Control-Allow-Origin", "*");
    res.header("Access-Control-Allow-Headers", "X-Requested-With");
    next();
});

// Serve the mp3 files so the <audio> element can stream them
app.use(express.static('./songs'));

app.get('/', function (req, res) {
    var filenames = fs.readdirSync('./songs/');
    var html = fs.readFileSync('./index.html', {encoding: 'utf-8'});
    var options = '';
    for (var i = 0; i < filenames.length; i++) {
        if (filenames[i].endsWith('.mp3')) {
            options += '<option value="' + filenames[i] + '">' + filenames[i] + '</option>';
        }
    }
    var render = html.replace('#select#', '<select>' + options + '</select>');
    res.send(render);
});

app.listen(3000, function () {
    console.log('Visualizer server listening on port 3000');
});

To list all the mp3 files in our directory, we use Node's fs (filesystem) module. We read the directory, and every mp3 file becomes an item in a combo box. Later, in index.html, we will write a #select# placeholder that the server replaces with the combo box filled with the mp3 file names. Since the server sends index.html as the response, let's write index.html now.
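To see what the server does with the placeholder, here is the same substitution in isolation (the html and filenames values are stand-ins for the real index.html file and songs directory):

```javascript
// Stand-ins for fs.readFileSync('./index.html') and fs.readdirSync('./songs/')
var html = '<body>#select#</body>';
var filenames = ['song.mp3', 'notes.txt'];

var options = '';
for (var i = 0; i < filenames.length; i++) {
    if (filenames[i].endsWith('.mp3')) { // only mp3 files become options
        options += '<option value="' + filenames[i] + '">' + filenames[i] + '</option>';
    }
}
var render = html.replace('#select#', '<select>' + options + '</select>');
console.log(render);
// → <body><select><option value="song.mp3">song.mp3</option></select></body>
```

Note that notes.txt is filtered out: only files ending in .mp3 become combo-box items.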

<!DOCTYPE html>
<html>
<head>
    <title>Audio Visualizer</title>
    <style>
        html, body {
            margin: 0;
            width: 100%;
            height: 100%;
            font-family: 'consolas';
            background: #111;
            color: white;
        }
        #control, canvas {
            position: fixed;
            top: 0;
        }
        select {
            position: relative;
            top: -10px;
            padding: 3px;
        }
        #stats {
            position: fixed;
            bottom: 0;
            left: 0;
        }
    </style>
</head>
<body>
    <canvas></canvas>
    <div id="control">
        <audio controls loop autoplay>
            <source src="song.mp3">
        </audio>
        #select#
        <div id="settings">
            <div>Min. Frequency <input type="range" min="20" value="20" step="50" max="20000" name="minFrequency"></div>
            <div>Max. Frequency <input type="range" min="20" value="20000" step="50" max="20000" name="maxFrequency"></div>
            <div>Smoothing <input type="range" min="0" max="0.9" step="0.05" value="0.8"></div>
        </div>
    </div>
    <div id="stats"></div>
    <script>
        var control = document.querySelector('#control');

        var settings = document.querySelectorAll('#settings input');
        settings[0].addEventListener('change', changeMinMaxFrequency);
        settings[1].addEventListener('change', changeMinMaxFrequency);
        settings[2].addEventListener('change', changeSmoothness);

        function changeMinMaxFrequency() {
            minFrequency = settings[0].value;
            maxFrequency = settings[1].value;
            minIndex = Math.ceil(minFrequency / frequencyRange) | 0;
            maxIndex = Math.floor(maxFrequency / frequencyRange) | 0;

            // initialize balls
            balls.length = 0;
            for (var i = minIndex; i < maxIndex; i++) {
                balls[i] = new Ball(canvas.width / 2, canvas.height / 2, 360 / (maxIndex - minIndex) * i);
            }
        }

        function changeSmoothness() {
            analyser.smoothingTimeConstant = settings[2].value;
        }

        function randomBetween(min, max) {
            return (Math.random() * (max - min) + min) | 0;
        }

        class Ball {
            constructor(x, y, angle) {
                this.x = x;
                this.y = y;
                this.angle = angle;
                this.color = 'rgba(' + randomBetween(60, 200) + ', ' + randomBetween(60, 200) + ', ' + randomBetween(60, 200) + ', 1)';
            }
        }
        Ball.radius = 3.5;
        Ball.trailSize = 2;

        var canvas = document.querySelector('canvas'),
            ctx = canvas.getContext('2d'),
            audio = document.querySelector('audio'),
            audioContext = new AudioContext(), // "Container" for all audio nodes
            source = audioContext.createMediaElementSource(audio), // The music, we get it from the <audio> element
            analyser = audioContext.createAnalyser(); // To get frequency information of source
        canvas.width = window.innerWidth;
        canvas.height = window.innerHeight;

        source.connect(analyser); // music -> analyser
        analyser.connect(audioContext.destination); // analyser -> speakers/headphones

        analyser.fftSize = 1024; // FFT buffer size of the signal processing, must be a power of 2
        analyser.smoothingTimeConstant = 0.8;

        var bufferLength = analyser.frequencyBinCount, // length of the frequency array, always half of fftSize
            sampleRate = audioContext.sampleRate, // usually 48000hz or 44100hz; the maximum frequency in the music is half of this sample rate
            frequencyRange = sampleRate / bufferLength, // each element of the array represents a range of frequencies
            minFrequency = 20, // the minimum frequency a human can hear is 20hz, so that is the minimum our visualizer draws
            maxFrequency = 20000, // the maximum frequency a human can hear varies with age, but is normally 20,000hz. More information: http://hypertextbook.com/facts/2003/ChrisDAmbrose.shtml
            minIndex, // the array index of the minimum frequency
            maxIndex, // the array index of the maximum frequency
            dataArray = new Uint8Array(bufferLength),
            balls = [];

        changeMinMaxFrequency(); // init balls, min and max frequency

        var lastTime = Date.now(),
            stats = document.querySelector('#stats'),
            updateStatsInterval = 200; // milliseconds
        stats.updateInterval = updateStatsInterval;

        function draw() {
            requestAnimationFrame(draw);

            var now = Date.now();
            var deltaTime = now - lastTime;
            lastTime = now;

            stats.updateInterval -= deltaTime;
            if (stats.updateInterval < 0) {
                stats.innerHTML = 'FPS: ' + ((1000 / deltaTime) | 0);
                stats.updateInterval = updateStatsInterval;
            }

            analyser.getByteFrequencyData(dataArray); // fill dataArray with the current frequency volumes

            ctx.clearRect(0, 0, canvas.width, canvas.height);

            for (var i = minIndex; i < maxIndex; i++) {
                var distance = dataArray[i];
                distance *= distance / 200;
                if (distance < 5) continue;
                var angle = balls[i].angle * (Math.PI / 180);
                balls[i].x = Math.cos(angle) * distance;
                balls[i].y = Math.sin(angle) * distance;
                var gradient = ctx.createLinearGradient(canvas.width / 2, canvas.height / 2, balls[i].x + canvas.width / 2, balls[i].y + canvas.height / 2);
                gradient.addColorStop(0, "rgb(150,150,150)");
                gradient.addColorStop(1, balls[i].color);
                ctx.strokeStyle = gradient;
                ctx.fillStyle = balls[i].color;

                ctx.beginPath();
                ctx.arc(balls[i].x + canvas.width / 2, balls[i].y + canvas.height / 2, Ball.radius * distance / 100, 0, 2 * Math.PI, false);
                ctx.fill();

                ctx.beginPath();
                ctx.moveTo(canvas.width / 2, canvas.height / 2);
                ctx.lineTo(balls[i].x + canvas.width / 2, balls[i].y + canvas.height / 2);
                ctx.lineWidth = Ball.trailSize * distance / 300;
                ctx.stroke();
            }
        }
        draw();

        var select = document.querySelector('select');
        select.addEventListener('change', function () {
            control.removeChild(audio); // remove the old <audio> element
            audio = document.createElement('audio');
            audio.setAttribute('autoplay', 'autoplay');
            audio.setAttribute('loop', 'loop');
            audio.setAttribute('controls', 'controls');
            var src = document.createElement('source');
            src.setAttribute('src', this.value);
            audio.appendChild(src);
            control.insertBefore(audio, control.firstChild);

            source = audioContext.createMediaElementSource(audio);
            source.connect(analyser);
        });
    </script>
</body>
</html>

First, we need to create the context, source, and analyser.
var audio = document.querySelector('audio'),
    audioContext = new AudioContext(), // "Container" for all audio nodes
    source = audioContext.createMediaElementSource(audio), // The music, we get it from <audio> element
    analyser = audioContext.createAnalyser(); // To get frequency information of source

Then, simply connect those nodes together:
source.connect(analyser); // music -> analyser
analyser.connect(audioContext.destination); // analyser -> speakers/headphones

FYI, if we don't want any filters or analysis from another node, we can simply connect the source straight to the destination:
source.connect(audioContext.destination);
That way, the audio source output goes directly to the user's speakers/headphones without any alteration or analysis.

The analyser object has several properties. The fftSize property determines the FFT algorithm's buffer size, and it must be a power of 2.
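A quick way to check the power-of-2 requirement (a small helper for illustration, not part of the tutorial code):

```javascript
// A power of 2 has exactly one bit set, so n & (n - 1) clears it to 0
function isPowerOfTwo(n) {
    return n > 0 && (n & (n - 1)) === 0;
}
console.log(isPowerOfTwo(1024)); // true  -> a valid fftSize
console.log(isPowerOfTwo(1000)); // false -> the AnalyserNode would reject it
```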
var bufferLength = analyser.frequencyBinCount; // this will always be half of the fftSize

We use the frequencyBinCount property to get the size of the array, which is half of the analyser's fftSize. Knowing the array size, we can declare the unsigned integer array like this:
var dataArray = new Uint8Array(bufferLength);

dataArray is used to render the visualizer; it stores the volume of each frequency range, and the visualizer displays those volumes. For example, a kick drum sits around 20-200hz, so dataArray[0] to dataArray[2] (with a frequencyRange value of around 80-90) should have high values, depending on how hard the kick drum hits.
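The index math can be checked with plain numbers (the 44100hz sample rate here is a hypothetical value; the fftSize of 1024 is the one used in this tutorial):

```javascript
var sampleRate = 44100;                         // audioContext.sampleRate (assumed value)
var bufferLength = 1024 / 2;                    // analyser.frequencyBinCount for fftSize 1024
var frequencyRange = sampleRate / bufferLength; // ≈ 86.1hz per array element
var kick = 150;                                 // a frequency inside the kick drum's 20-200hz band
var index = Math.floor(kick / frequencyRange);
console.log(index); // → 1, so a kick drum raises the values near the start of dataArray
```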

The array is filled every time the canvas redraws, using this method:
analyser.getByteFrequencyData(dataArray); // copies the current frequency data into dataArray

With the frequency information available every frame, we can transform it into a visualization. Assuming the frequency range is 80hz, a loop can draw a line for each array element, representing 0-80hz, 81-160hz, 161-240hz, and so on until it reaches the maximum frequency of 20,000hz. Notice that we skip the 0-80hz element: our minimum frequency is 20hz, and Math.ceil(20 / 80) rounds minIndex up to 1, so the first index of the array is skipped. Remember, each element in the array represents the volume of a frequency range, so dataArray[20] represents the volume of 1601-1680hz.

To handle an audio source change, for example when the user wants to change the music, we cannot just disconnect the nodes and change the audio source. That is because the <audio> element is already bound to the context, and as far as I know there is no way (if you know how, please tell me in the comment section!) to detach the audio element from its AudioContext. A simple solution I found is to delete the old <audio> element and create a new <audio> element with a different source.

var select = document.querySelector('select');
select.addEventListener('change', function () {
    control.removeChild(audio); // remove the old <audio> element
    audio = document.createElement('audio');
    audio.setAttribute('autoplay', 'autoplay');
    audio.setAttribute('loop', 'loop');
    audio.setAttribute('controls', 'controls');

    var src = document.createElement('source');
    src.setAttribute('src', this.value);
    audio.appendChild(src);
    control.insertBefore(audio, control.firstChild);

    source = audioContext.createMediaElementSource(audio);
    source.connect(analyser);
});

Your final result should look like this:
