Recently, Stability AI unveiled Stable Audio 2.0, an upgraded version of its AI music generation platform.
Stable Audio 2.0 lets users generate a three-minute audio track from a text prompt, roughly the length of a full song, complete with an intro, a developed middle section, and an outro.
On the plus side, the maximum duration has jumped to three minutes, up from just 90 seconds in the previous version. You can imagine, for example, generating a fake Christmas song in the style of Rob Thomas/Santana. Another advantage is that the tool is free and publicly available through the company’s website.
The system operates mainly through text prompts, but there is also an option to upload an audio file, which it will analyze and use to produce something similar. Uploaded audio must be copyright-free, to avoid replicating existing content.
This can be useful, for instance, for working drum loops into a percussion track or extending a 20-second clip into something longer.
However, music created by artificial intelligence is still more of a conversation piece and a glimpse of a possible future, which is exciting for enthusiasts and grim for musicians.
The songs may be appealing at first listen, but oddities emerge on closer attention, and things get somewhat confusing.
For instance, the system likes to add vocals of its own, but they are not in any recognizable human language; they resemble the garbled pseudo-text that appears in AI-generated images.
The singing sometimes sounds like it is coming from real people, and at other times like singers beamed in from outer space. It sits in a strange corner of the uncanny valley, and The Verge described the songs as “soulless and bizarre,” comparing them to whale sounds.