Arpangel wrote:The trouble with generative music is that it's always the same texture and structure; it doesn't change, so it's not interesting.
It needs highlights, it needs genre changes, it needs to morph randomly between styles; until that happens it's just a sequencer set to random, mapped to a few preset sounds.
I've got the Eno one on my iPad and I never, ever use it.
I only partially agree.
The whole problem is figuring out what automatic means.
If automatic means totally random then you're right, but it's never like that in recent work on artificial intelligence, and it doesn't match my own experience either. Whatever the algorithms are, they're never truly random, otherwise nothing would make sense.
The problem is to understand how much freedom we have to modify the process and direct it where we want.
blinddrew wrote:I think I would challenge your final point: something that is too complex is unlikely to find its way into elevator music, because rather than being bland and background it could be challenging and distracting.
So if music-engine can programme his stuff such that it is musically complex whilst still being accessible, I think he probably is doing something right.
For a given definition of 'right'.
And also this is the ignorant opinion of probably the least musically trained person* on the forum, so you can safely ignore it if I'm wrong.
* for the avoidance of doubt, i mean me.
When I talk about complex harmonies, this does not mean that the music is difficult to listen to.
The most difficult thing is to make sure that the interweaving of melody and harmony is not too strange.
For example, this last piece I put on YouTube has a very complex harmony even though there are only 4 chords.
Between the F minor and E minor chords there is a scale that can be used over both. This is the C harmonic major scale, little used in Western music (unlike the harmonic minor scale, which is used a great deal).
What I try to do is look for these combinations that make a complex harmony easy to listen to.
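To make this concrete, here is a minimal Python sketch (just an illustration written for this post, not code from my engine) that checks that both triads really do fit inside the C harmonic major scale:

```
# C harmonic major: C D E F G Ab B, written as pitch classes (C = 0).
C_HARMONIC_MAJOR = {0, 2, 4, 5, 7, 8, 11}

def minor_triad(root_pc):
    """Pitch classes of a minor triad built on root_pc."""
    return {root_pc % 12, (root_pc + 3) % 12, (root_pc + 7) % 12}

f_minor = minor_triad(5)   # F Ab C -> {5, 8, 0}
e_minor = minor_triad(4)   # E G B  -> {4, 7, 11}

print(f_minor <= C_HARMONIC_MAJOR)   # True: F minor is inside the scale
print(e_minor <= C_HARMONIC_MAJOR)   # True: E minor is inside the scale
```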
Funkyflash5 wrote:After giving a couple of the pieces a listen, it seems to me that the sections with more rapid notes were more convincing than the more legato parts. That made me realize that the rapid parts sounded enough like a solo section that I wasn't looking for the same degree of self-referential theme that the main body of a conventional song tends to have. I wonder if in the programming it would be possible to get it to generate in phrases that it could morph to fit the chords, while introducing an element of cohesiveness. For example, if it generated a pair of 4-bar phrases, it could then shift their keys as needed, but also perhaps use the rhythm from one with some of the intervals of the other, or splice them together, or stretch them to 8 bars, or shrink them to 2, or some combination of these and more, and then mix and match the results to form the piece. I think it could result in something more humanlike without adding any more human decision-making to the process. The human brain craves patterns, so allow the program to hide some for the ear to find.
Very interesting.
In fact the fast notes are easier to generate, and the effect is very interesting: the ear can't catch them all, and the overall impression is good.
When it comes to producing a melody, things get complicated. What I do is fix the rhythm in advance and only indicate whether the notes go up or down. It's very interesting to see how the same rhythmic interweaving can also work well over very different harmonic structures.
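Schematically, the idea looks something like this (a simplified sketch with made-up names, not the actual engine): the rhythm is fixed in advance and the only random decision is the direction of each step within the scale.

```
import random

SCALE = [0, 2, 4, 5, 7, 8, 11]   # C harmonic major, as semitone offsets

def melody_from_contour(rhythm, start=7):
    """rhythm: list of durations in beats, fixed in advance."""
    index, notes = start, []
    for duration in rhythm:
        step = random.choice((+1, -1))                 # only up/down is random
        index = max(0, min(2 * len(SCALE) - 1, index + step))  # clamp to 2 octaves
        octave, degree = divmod(index, len(SCALE))
        pitch = 60 + 12 * octave + SCALE[degree]       # MIDI note number, C4 = 60
        notes.append((pitch, duration))
    return notes

print(melody_from_contour([0.5, 0.5, 1.0, 0.5, 0.5, 1.0]))
```

The same rhythm list can then be reused over a different scale, which is where the reusable rhythmic interweaving comes from.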
The other technique I use is Markov chains. You take a melody, identify all the transitions between one scale degree and another, and include the rhythmic aspect in the calculation.
However, this has the disadvantage of imitating the analysed music too closely. A good compromise is to mix the two techniques.
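As a sketch of the Markov side (again simplified, not the real code), the states of the chain combine the scale degree with the duration, so the rhythmic aspect enters the transition table too:

```
import random
from collections import defaultdict

def train(melody):
    """melody: list of (degree, duration) pairs taken from an analysed theme."""
    table = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        table[current].append(following)      # duplicates give frequency weighting
    return table

def generate(table, start, length):
    state, out = start, [start]
    for _ in range(length - 1):
        options = table.get(state)
        if not options:                       # dead end: jump to a random state
            state = random.choice(list(table))
        else:
            state = random.choice(options)
        out.append(state)
    return out

theme = [(1, 0.5), (2, 0.5), (3, 1.0), (2, 0.5), (1, 0.5), (5, 1.0), (3, 0.5)]
print(generate(train(theme), start=(1, 0.5), length=12))
```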
Interesting topic. I've been following the evolution of AI and Machine Learning from a different angle (automated taxonomies, cataloguing and tagging of media assets and image recognition). In the last couple of years it's been hailed as the next big thing in terms of cutting out the manual legwork in repetitive tasks, although the hype has been unceremoniously deflated by the reality of this particular landscape, along with some monumental and rather embarrassing blunders. The incredibly nuanced way that humans recognise, think, feel, react and make associations is apparently extremely difficult to distill into accurate and repeatable algorithms. With music composition you also have the added criteria of being both original and whatever that magical ingredient is that makes us enjoy music on a personal and subjective level.
After a quick Google for AI music, it looks like an outfit called AIVA is taking a more mix 'n' match hybrid approach, with a substantial dollop of mimicry - their engine has numerous preset algorithms to cater for different genres, and you can provide influences in the way of pre-existing pieces (e.g. classical works, standards) or by uploading your own MIDI file. With AI-composed pieces played by real musicians, what's immediately apparent - which is what you'd expect - is how familiar they sound:
Undoubtedly one of the biggest disadvantages of computer music is the lack of interpretation. Even the sounds are unrealistic, and you have to abstract away a lot to appreciate this music. Then there is the rhythmic aspect, which is too static.
The starting and ending point is always a MIDI file.
Even when we listen to beautiful music in this format, we get the feeling that it's not good.
Here, for example, I tried to imitate the clarinet, but unfortunately the result is not realistic:
That's why I'm looking for real musicians who can interpret my music.
In its current form it sounds like the ‘composer’ has ADHD. Maybe if it were slowed down a bit it might improve, but it still sounds like computer-generated music, going off in all directions, never seeming to settle down into an obvious melody.
Speaking as a clarinet player, that sounds so obviously like a clarinet sample, used by someone who hasn't tried to vary the sound in the manner in which a live player would. The vibrato on the longer notes is horrible too, so obviously from an LFO.
I don't know what sample library was used here, but there are libraries that can produce much more lifelike results; even these require effort on behalf of the user, though. You can't just bang in the notes, you have to understand how they might be played.
I totally agree.
And that's why I posted an example showing how I couldn't imitate real sounds.
After all, imitating real instruments is not my aim: I work mainly on the search for harmonic structures and for the scales that can be used over them, and then on algorithms that can express melodies that make sense at the level of perception.
I would love to have real musicians play these pieces, even though when composing something I don't take the technique of the instrument into account, and this is a problem.
For example, I think that this piece cannot be played by a real clarinet, because there is not enough time to breathe between one note sequence and the next.
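A rough way to check this automatically (just an illustration; the half-second threshold is my assumption, not a measured value) is to scan the note list for rests long enough for a breath:

```
BREATH_SECONDS = 0.5   # assumed minimum rest a clarinettist needs to breathe

def breathing_points(notes):
    """notes: list of (onset_seconds, duration_seconds), sorted by onset."""
    points = []
    for (on_a, dur_a), (on_b, _) in zip(notes, notes[1:]):
        if on_b - (on_a + dur_a) >= BREATH_SECONDS:
            points.append(on_a + dur_a)       # a usable gap starts here
    return points

phrase = [(0.0, 0.4), (0.45, 0.4), (0.9, 0.4), (2.0, 1.0)]   # made-up timings
print(breathing_points(phrase))   # [1.3]: only one chance to breathe
```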
music-engine wrote:For example, I think that this piece cannot be played by a real clarinet, because there is not enough time to breathe between one note sequence and the next.
I wondered about that but I think you’re just OK in that regard.
I still can’t latch on to an obvious melody though.
The piece in question is a transposition of a very particular jazz standard whose theme is in fact highly articulated. But even the solo is not very human, and perhaps impossible to really play.
There are other pieces where I have tried to develop a melodic theme. For example this one:
I've listened to the automated music compositions. You have certainly achieved something, but what is lacking is the human influence. There are rules in music theory, especially in harmony, but experienced composers break them all the time.
I don't think you can capture the human touch in mathematics. Also, your automated music sounds fragmented, like a series of small parts joined together. The modulations are correct in themselves, but rather limited. For instance, modulation is also possible by using the time factor. You don't need a full cadence, but you do need good hearing. You need to be able to hear when the new key sounds dominant. Mathematics won't help you there.
But, again, quite an achievement. You should be proud of yourself.