Letter From the Founders
With the advent of MP3 software and hardware, we foresaw the problem that would arise from the digitization of albums into individual tracks: the deconstruction of artists’ repertoires would destroy a critical part of the album experience.

In response, we invented and patented a solution to bring the drama back together, known today as a playmaker. The solution is an asset-, media- and subject-independent sequence automaton capable of continuously generating harmonically varied sequences of elements based on asset profiles and user profiles. The embedded playlist and sequencing logic, combined with a unique music feature representation scheme, displayed emergent properties with rich potential, designed in anticipation of providers of high-quality music meta-data services.
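To make the idea concrete, here is a minimal, purely illustrative sketch of a profile-driven sequence generator in the spirit described above: it continuously picks tracks whose feature profiles stay close to a user profile while nudging each pick so the sequence keeps varying. All names, fields and weights are hypothetical assumptions for illustration only, not the patented playmaker method.

```python
# Illustrative sketch only: a continuous, profile-driven track sequencer.
# All structures and parameters are assumed, not the Moodagent/Syntonetic method.
from dataclasses import dataclass
from typing import Iterator
import math
import random

@dataclass
class TrackProfile:
    track_id: str
    features: tuple[float, ...]  # e.g. mood/energy/tempo dimensions in [0, 1]

def distance(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_sequence(
    catalogue: list[TrackProfile],
    user_profile: tuple[float, ...],
    variation: float = 0.15,
) -> Iterator[TrackProfile]:
    """Continuously yield tracks whose profiles stay near the user profile
    while drifting the target after each pick to keep the sequence varied."""
    target = user_profile
    recent: list[str] = []
    while True:
        # Score candidates by closeness to the moving target, penalising
        # tracks played recently so the sequence keeps evolving.
        def score(t: TrackProfile) -> float:
            penalty = 1.0 if t.track_id in recent else 0.0
            return distance(t.features, target) + penalty

        pick = min(catalogue, key=score)
        yield pick
        recent = (recent + [pick.track_id])[-10:]
        # Drift the target slightly so consecutive picks vary rather than repeat.
        target = tuple(
            min(1.0, max(0.0, f + random.uniform(-variation, variation)))
            for f in pick.features
        )
```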

That plan was an ideal which we believed would bring forth such providers. It didn’t.

From 2002 through 2004, we worked with best-of-breed meta-data service providers, collaborating to deliver a unique solution for the world’s leading digital music player manufacturer. It was evident that our ambitious music feature representation and playlist method could deliver superior results, even on lo-fi music meta-data.

Dissatisfied with the quality and limitations of the meta-data providers, we decided in 2004 to tackle the project on our own.

Drawing on an extended network of friends, former colleagues and acquaintances, we began the design and development of a profiling environment that was universal, both musically and geographically, based on our ideal music feature representation plan.

In the course of this, we developed and implemented a series of unique algorithms for music recognition, digital signal processing and analysis, machine self-learning, parallel DSP-ML post-processing and a music expert system.

We are now the self-proclaimed ‘Sole Masters of High-Definition Music Profiling’, and have taken advantage of the latest in virtualization for global, automated music profile syncing and real-time profiling.

It was because of the extreme ambition of the music feature scheme that our aim went beyond what others could deliver. Hence the need to keep training and re-training, through millions upon millions of machine learning iterations, and to keep breaking the rules and stretching the algorithms to produce the right results: the right musical results.

During this process, we also discovered that a single, straight and narrow line of profile processing would never cut it: there are simply too many musical facets that elude the purely algorithmic eye. Multiple perspectives are required, taking into account the effect of what one might call “parallax hearing”.

Today, calibrating the Moodagent High-Definition Music Profiling Environment is common practice for Syntonetic’s musicologists, and our most critical challenge is developing user experience models and applications that exploit the potential of Moodagent’s high-definition music profiles.

Syntonetic can do things nobody else can because our solution rests on the intelligence built into the data model and on the intelligence and trained ears of the musicologists who train and calibrate the system.

The Devil is in the Detail – and the Detail is in Musical Data Models and Parallax Ears!

Best Regards,
Peter & Mikael

 
 

Screenshots & videos

Android: share
Android: main menu
Android: long press functions
Android: main interface
iPhone: main interface
iPhone: swipe track functions
iPhone: refresh playlist