Lo-Fi Player explorations
Earlier today, I was introduced to Lo-Fi Player, a browser-based virtual room, powered by machine learning models from magenta.js, in which you can generate instrumental lo-fi hip-hop tracks.
For a user, the idea is fairly simple. A room loads with multiple items in it: a guitar, a keyboard, a bass guitar, a TV, a radio, the view outside, etc. You tinker with each of these to change either the way the room looks or the music that’s being played. You can make the melody ‘denser’ or ‘chiller’ (or other adjectives), add bass, subtract drums, whatever you wish (and whatever has been modelled).
Making the melody ‘denser’ or ‘chiller’ is achieved through recurrent neural networks. I don’t believe I’m qualified to go into any detail beyond ‘some machine learning algorithm is used’, but this is a good place to start gaining some intelligence on how it’s done. I plan on gaining that intelligence. Updates will follow.
Until then, you can chill in the first room I’ve created on Lo-Fi Player.
The Boat approaches a story we’ve heard before in a way that both refrains from fetishising human suffering and narrates it in a meaningful and unique way. A large part of this comes down to its craft: the writing, the music, the illustrations. But a large part of it also comes down to its choice of medium – interactive multimedia storytelling.
With this Sonic Pi experiment, the idea was to (a) see if I could compose an entire track on Sonic Pi, and (b) see how easy or difficult such composition was. I set out to compose a minimal house track, now tentatively called Are Microwaves Friends? I used 808 and 909 samples to build the percussive spine of the track; for the bass and melodies, I used the preloaded synths.
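For a sense of what that percussive spine looks like in code, here’s a minimal sketch of the four-on-the-floor skeleton, not the actual track: Sonic Pi’s built-in :bd_808 kick and :drum_cymbal_closed hat stand in here for the exact 808 and 909 samples I used, and :tb303 is one of the preloaded synths.

use_bpm 124

live_loop :kick do
  sample :bd_808, amp: 2        # kick on every beat
  sleep 1
end

live_loop :hats do
  sleep 0.5                     # closed hats on the offbeat
  sample :drum_cymbal_closed, amp: 0.6
  sleep 0.5
end

live_loop :bass do
  use_synth :tb303              # preloaded acid-bass synth
  play :e1, release: 0.25, cutoff: rrand(70, 100)
  sleep 0.5
end

Because each live_loop runs independently and can be re-evaluated while the music plays, layers like these can be added, muted, and tweaked without ever stopping the track.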
The goal: to see how easy it was to arrange No Surprises by Radiohead on Sonic Pi. Every exploration has to start somewhere, and my exploration of Sonic Pi has started with what I would classify as a successful attempt at coding a cover of No Surprises. Why No Surprises? Because I like it. And Sonic Pi’s default keys reminded me of the keys on the intro of the song.
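To give a flavour of the approach (and only the approach: the notes below are a placeholder phrase, not my transcription of the actual riff), a looped arrangement in Sonic Pi can be as simple as a ring of notes that a live_loop cycles through:

use_bpm 76

live_loop :intro do
  use_synth :pretty_bell        # a bell-like stand-in for the intro's timbre
  notes = (ring :c5, :a4, :c5, :f4, :a4, :c5, :a4, :f4)  # placeholder phrase
  play notes.tick, release: 0.4
  sleep 0.5
end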
Ever since I've known about live coding music, I've wanted to do it. And ever since I’ve wanted to do it, I’ve wanted to do it on Sonic Pi. But the fact is, ‘ever since’ has been three laptops, eight years, zero code. Until now, that is. The prospects, I must say, are really exciting.
Each generation sees itself at a media crossroads. I was part of the first generation of Indians that had free access to a liberalised television media landscape. The next will be the first generation that is not only at a media crossroads but also at the cusp of seeing boundaries between media dissolve.
My first attempt at co-writing a coherent story on AI Dungeon’s Dragon model, which uses OpenAI’s new API to leverage GPT-3, a model trained on approximately one trillion words.
The GPT-2-powered AI Dungeon is a text-based story game where every bit of text you input, say ‘you walk towards the forest’ or ‘you search for food’, gets the game to move the plot along. It’s fascinating.
Home is a ghost of our own creation: the cave – its lowly ancestor – recreated from some lost memory. Our predators are now nebulous, our prey has come to be served on china. Is it any surprise that our idea of home has become just as nebulous? Is it any wonder that home is no longer just somewhere to lay down our weapons, lick our wounds, share a quiet dinner with family?