**Case Study: Generating Music Using Recurrent Neural Networks**
The case study focuses on single-instrument music, but it provides the tools and fundamentals needed to build on this work and create personalized music systems, even in multi-instrument settings. The course aims to give students everything they need to learn how to generate music using recurrent neural networks.
**Example: Irish Folk Music Generation**
The model was trained on a large dataset of Irish folk tunes and then generated new compositions. The resulting music is algorithmically created rather than human-composed, yet it sounds like Irish folk music because of the dataset it learned from. The example showcases how a character-level RNN can generate single-instrument melodies.
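For context, ABC notation represents a whole tune as a short block of plain text, which is exactly what a character-level model consumes one character at a time. The tune below is a hand-written illustration of the format, not actual model output: the header lines give the tune's index (X), title (T), meter (M), default note length (L), and key (K), and the final line is the melody itself.

```
X:1
T:Example Jig
M:6/8
L:1/8
K:G
|:GAB dBd|edB dBA|GAB dBG|AGA G3:|
```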
**Multi-Instrument Music Generation**
The AIVA system, discussed in a TED talk, generates multi-instrument music after training on almost 30,000 compositions by famous composers such as Beethoven and Mozart. The machine learning model learns patterns from these compositions and generates new music, which is then performed by human musicians even though it was composed by an algorithm.
**Building a Personalized Music System**
The case study is a starting point for anyone interested in how music can be generated automatically using deep learning, particularly recurrent models. By building on the work presented here, students can create their own recurrent neural network models to generate music in both ABC notation and MIDI notation; a minimal sketch of such a model follows below. The project combines algorithms with data produced by musicians, uses a machine learning model to generate new music, and takes feedback from listeners to regenerate and improve the output.
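As a concrete starting point, here is a minimal character-level RNN sketch in Python with Keras. It is not the course's exact implementation; the filename, sequence length, and hyperparameters are all assumptions chosen for illustration.

```python
# Minimal character-level RNN over ABC-notation text (illustrative sketch).
import numpy as np
from tensorflow import keras

corpus = open("irish_tunes.abc").read()              # hypothetical dataset file
chars = sorted(set(corpus))                          # character vocabulary
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 64
encoded = np.array([char_to_idx[c] for c in corpus])
# Each training pair is a window of characters plus the same window shifted by one.
starts = range(0, len(encoded) - seq_len - 1, seq_len)
X = np.stack([encoded[i:i + seq_len] for i in starts])
y = np.stack([encoded[i + 1:i + seq_len + 1] for i in starts])

model = keras.Sequential([
    keras.layers.Embedding(len(chars), 64),                  # char index -> vector
    keras.layers.LSTM(256, return_sequences=True),           # learns sequential patterns
    keras.layers.Dense(len(chars), activation="softmax"),    # next-character distribution
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=64, epochs=20)
```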
**Potential of Personalized Music Systems**
The idea behind personalized music systems is that algorithms can learn patterns from large datasets produced by musicians and then generate new music tailored to an individual's preferences. The technology has potential as a future startup idea, where listener feedback is fed back into the system to generate music that suits each person's taste. The course aims to give students the tools and knowledge to explore this concept further.
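As a toy illustration of that feedback idea (and only that; a real personalized-music product would be far more involved), one could let the sampling temperature stand in for a listener preference. The sketch below reuses `model`, `char_to_idx`, `seq_len`, and numpy from the training sketch above; the rating prompt and temperature values are invented for illustration.

```python
# Toy personalization loop: sample tunes at a few temperatures, collect listener
# ratings, then keep generating at the best-rated setting.
def sample_tune(seed="X:1\n", n_chars=400, temperature=0.8):
    """Generate ABC text one character at a time from the trained model."""
    idx_to_char = {i: c for c, i in char_to_idx.items()}
    text = seed
    for _ in range(n_chars):
        window = [char_to_idx[c] for c in text[-seq_len:]]
        probs = model.predict(np.array([window]), verbose=0)[0, -1]
        logits = np.log(probs + 1e-9) / temperature      # low temperature = safer output
        probs = np.exp(logits) / np.sum(np.exp(logits))
        text += idx_to_char[np.random.choice(len(probs), p=probs)]
    return text

ratings = {}
for temp in (0.5, 0.8, 1.1):                             # conservative .. adventurous
    print(sample_tune(temperature=temp))
    ratings[temp] = float(input("Rate that tune from 1 to 5: "))
best = max(ratings, key=ratings.get)
print(sample_tune(temperature=best))                     # regenerate at preferred setting
```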
**Technical Details**
The system combines algorithms with data produced by musicians to create new compositions, using recurrent neural networks to generate music in both ABC notation and MIDI notation. The case study focuses on single-instrument settings, but the project can be expanded to multi-instrument settings.
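On the MIDI side, the final step is writing generated notes to an actual .mid file that any player can render. Below is a minimal sketch using the pretty_midi library (one reasonable choice, not necessarily the course's); the pitch list is a hand-picked stand-in for model output.

```python
# Write a generated note sequence to a playable MIDI file using pretty_midi.
import pretty_midi

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)        # General MIDI program 0 = piano
start = 0.0
for pitch in [62, 64, 66, 67, 69, 67, 66, 62]:   # illustrative pitches, not model output
    piano.notes.append(pretty_midi.Note(velocity=90, pitch=pitch,
                                        start=start, end=start + 0.5))
    start += 0.5                                  # half a second per note
pm.instruments.append(piano)
pm.write("generated.mid")                         # open in any MIDI player
```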
**Getting Started with Personalized Music Systems**
To get started with building personalized music systems, students need to learn how to work with recurrent neural networks, including the basics of deep learning and recurrent models. The course aims to provide all the tools and knowledge students need to build their own projects with these technologies.
"WEBVTTKind: captionsLanguage: enso as part of applied a course comm applied AI course comm the core machine learning in the AI course we have actually uploaded a very very very interesting case to be recently which is how to automatically generate music right using record neural networks in deep learning it's a very very interesting case study which especially for somebody like me who enjoys a lot of music but who doesn't have fundamentals of music to be able to compose music myself right so the objective of this case study is very simple imagine if I'm giving if I'm given the data set of lots of music right imagine if somebody gives me whole of Beethoven's music right between being on one the most fundamentally brilliant composites of music can I create new music using the music data set of some of the greatest composers that's the simple problem so I want a model look at this way right imagine if I have a huge data set of music right imagine if I can build a model that will understand the patterns in this data set right this is basically music by lots of phenomenal composers now once once the model M which is which is a recurrent neural network model in deep learning once this model learns the patterns of music in this data set now this model should be able to now generate new music for me generate new music and this new music should not be a copy/paste of the old music it should understand the patterns it should understand the patterns in this music and now generate original music inspired from this data so that's that's that's the type of system that we want to build so in this case study again remember I am NOT an expert in music I do not know anything about the classical music or the modern music right only anything about it I don't know about music notation nothing but I enjoy music like like millions and millions of others I don't know anybody who doesn't enjoy music right so in this case study we learn the basics of how to represent music itself so we learn things like ABC notation right which is a very very simplified string-based or character based notation will also learn about MIDI format which is a very important and very widely used formatting music creation then we learn a special type of recurrent neural network called character recurrent neural networks which were popularized somewhere in late 2015 which are very well suited for tasks like this right again we focus a lot of our case study on a single instrument music music generation but we also understand how it can be extended to multi instrument settings in a nutshell the big theme here is I personally believe that over the next 20 years or so maybe 10 to 20 years today what happens today the current status quo is there is a musician or an artist who creates music and we as audience consume that music right of course we have some favorite musicians like I love air resonance compositions I also love some classical compositions right so instead of this model what if we have a system where musicians where musicians who are experts in music compose music with the help of an algorithm so it is musicians plus algorithms that will create phenomenally brilliant brilliant music which then is consumed by audience and now audience can give feedback audience can give feedback to this algorithm right and this music will now be fine-tuned by the algorithm you as per the audience tests for example I don't like music which a lot of thumpy music I don't like lot of a lot of high bass music right I prefer melodious music personally so 
what if the algorithm understands my taste in to create it generates new music based on based on music that I like imagine a world where you have you have custom music created for your tastes with collaboration with musicians and machine learning and deep learning algorithms that's called a space of personalized music I think we will reach the Paige this stage of music creation and consumption or the next decade or so so with all this you might wonder now let's listen to some music that we can create using what we learn in this case study right so I I mean this is music created by others but I'll show it to you and I'll help you listen to it I'll provide links to all of these resources and you'll understand this case study is like a primer on how to generate music using record neural networks and you can generate all of these stuff using using the case study right but again the case study focuses mostly on single instrument as I was saying a bi-level doesn't focus too much on multi instrument but at the end of this case study you will have all the tools to be able to expand on this work to be able to build a personalized music system even with multi instrument systems so the course again gives you all the tools you need all the basics you need to be able to build on top of her case studies that's always been our core theme here right so now let's listen to some music here right so again this is music generated using some Irish folk music so the model the model itself was was given lot of Irish music tunes right Irish folk music tunes the model learnt from these students and now it generated a new music composition here so the music that you are going to listen to is algorithmically generated not human generated but it will sound like Irish folk music because the data set on which it will string was Irish folk music right so let's listen to this please note that this is a single instrument play this is not a multi instrument so this is just this is just one one one melody that I'm playing here but you have a sequence of melodies that are created by this author here right whose username is C and sailor who who could who used character on ins to compose Irish folk music using a large Irish folk music dataset right I provide you reference link to this in the description section of this video and you can listen to other right similarly there is an other piece of music by the way this Irish folk music generation uses ABC notation and recurrent neural networks the other the other example that I am going to show this again a very interesting blog post that will explain in detail in our you know in our case study in the course this uses MIDI format so this uses so it's a slight difference in the mod in the format itself and it slight difference in the model also alright but it generates slightly different music as you can hear here again this music is completely auto-generated this is piano music and you have multiple scores if you want to listen to them this is also a piano music just a just today itself I was listening to this very interesting TED talk the TED talk is called how AI composes a personalized song track of your life very very interesting TED talk I'll provide you a link to this TED talk in this TED talk the designers of the system the system is called IVA right the system is called IVA and aibert composes multi-instrument music very very interesting music so what-what I ever did was what what I ever did was it took music composed it took almost 30,000 music compositions by phenomenal 
composers like Beethoven Mozart etcetera right I was a as a machine learning model learnt these patterns and it generated it generated new music it generated new music so the music that you're going to listen to now is actually generated by IVA but played by humans by a professional by a professional group of musicians right this is this music that you're going to listen to is composed by an algorithm not by a human right so let's listen to this so as you would recognize this is actually a multi this is actually a multi instrument setting this is basically a multi instrument music being generated by a machine learning algorithm again it all depends it all is based on the data sets that you give it to them and also it mostly uses some form of recurrent neural networks to generate this very very interesting piece of work so this case study for any one of you who is interested in how music can be automatically generated using deep learning and especially recommittal it's work models in deep learning this case study is a good starting point because you learn the basics you will actually build a couple of recurrent neural network models to generate music in both ABC notation and the MIDI notation and you can always expand upon the work that we have in the case study as we strongly encourage all of our students to do to multi-instrument setting and you could be part of the future here you could build a startup that works on that works on building personalized music as I was referring to earlier right if somebody is interested in building a start-up like this I mean I'm all in for it we will try to help you as much as we can so the idea here is that you take you take algorithms and lot of data generated by musicians let let up let a machine learning model generate new music it audience listen to it take their feedback and regenerate music I think this is this is the fuse feature it is one of the futures I'm not sure this will be set always panel but I am sure there is lot of scope and potential in creating personalized music systems like that so let's get goingso as part of applied a course comm applied AI course comm the core machine learning in the AI course we have actually uploaded a very very very interesting case to be recently which is how to automatically generate music right using record neural networks in deep learning it's a very very interesting case study which especially for somebody like me who enjoys a lot of music but who doesn't have fundamentals of music to be able to compose music myself right so the objective of this case study is very simple imagine if I'm giving if I'm given the data set of lots of music right imagine if somebody gives me whole of Beethoven's music right between being on one the most fundamentally brilliant composites of music can I create new music using the music data set of some of the greatest composers that's the simple problem so I want a model look at this way right imagine if I have a huge data set of music right imagine if I can build a model that will understand the patterns in this data set right this is basically music by lots of phenomenal composers now once once the model M which is which is a recurrent neural network model in deep learning once this model learns the patterns of music in this data set now this model should be able to now generate new music for me generate new music and this new music should not be a copy/paste of the old music it should understand the patterns it should understand the patterns in this music and now generate 