Blockchain as a catalyst towards singularity

Posted on Tuesday, January 30th, 2018 by Abderrahim Ait Ben Moh
Blockchain, AI and singularity

A couple of weeks ago a close friend of mine drew my attention to the ICO of the SingularityNET coin AGI (which stands for Artificial General Intelligence). SingularityNET presents itself as a decentralized marketplace for algorithms and data. SNET aims to create a plug-and-play platform for AI developers and strives toward singularity by enabling AIs to communicate and make requests to one another.

Backed by an impressive AI team led by Dr. Ben Goertzel and a large base of supporters (15K on Telegram, 3.5K on Facebook, 10K on Twitter), the initiative gained a lot of traction, which resulted in a very successful ICO. I didn’t expect anything else with blockchain, cryptocurrency and AI all at the pinnacle of their hype. At the time of writing, AGI trades at $0.80 after launching at $0.10 and reaching a high of $1.85 in its first trading week. SNET’s goal for 2018 is to become the world’s leading AI platform, something the China-based initiative DeepBrainChain is aiming for as well.

So that’s the financial side of it all; let’s talk about the underlying ideas. Is singularity a realistic goal, or is it far-fetched? Despite scepticism amongst a substantial number of scientists, many futurists believe that singularity is within our grasp. For those who need an informative and entertaining update on the subject, I recommend ‘Homo Deus’ (Yuval Noah Harari, 2015). Ray Kurzweil (Google) predicts that we will reach the tipping point in 2029, when machines will equal human intelligence: that’s just 11 years from now! At this rate of development, we should achieve singularity by 2045 (video).


When it comes to singularity, our Zoom Media ASR scientists, who work on customized speech-to-text models, are less outspoken. Breakthroughs in recent years in computing power, data collection and deep learning techniques have resulted in encouraging progress in narrow AI, such as speech recognition and computer vision, but there are still many obstacles to overcome. In speech recognition, one of the commonly accepted metrics for summarizing the accuracy of these AI models is the WER (Word Error Rate). To give an indication of the current state of the art, which is more or less on par with human performance:

- March 2017: IBM claims a 5.5% word error rate
- May 2017: Google claims a 4.9% word error rate
- August 2017: Microsoft claims a 5.1% word error rate

These scores don’t fully represent reality. In speech recognition, cross-talk, noise and idiosyncratic speech remain huge challenges. Moreover, linguistics isn’t a fixed science: languages evolve and meanings change over time. We overcome these challenges artificially by building customized deep learning models and specific lexicons for specific verticals, and by applying human checks and feedback loops, but this only works in manageable, fixed settings and doesn’t pass a Turing Test at the micro level. When your grandmother mumbles, you might understand what she says based on previous experience, body language, tone of voice, lip-reading and context. Your Alexa, however, will black out.
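For readers curious how a WER figure like 5.1% is actually derived: it is conventionally computed as the word-level edit distance (substitutions, insertions and deletions) between the reference transcript and the system’s hypothesis, divided by the number of reference words. Below is a minimal illustrative sketch in Python; it is not any vendor’s scoring pipeline, and real evaluations also normalize text (casing, punctuation) before scoring.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance between the
    reference and hypothesis, divided by the reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6 (about 16.7%)
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Note that by this definition WER can exceed 100% when the hypothesis contains many spurious insertions, which is one reason a single headline percentage compresses a lot of nuance.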

In the coming decade, we think machines will achieve human-like understanding in speech recognition across very different settings. Human intervention remains crucial in most applications that require specific follow-ups and the continuous ingestion of ‘new knowledge’. My friend made a 1,860% profit on his investment last week. For him, the quest to achieve singularity couldn’t have gotten off to a better start.

What are your thoughts on singularity? Share your ideas with me! Mail to or send me a message on LinkedIn.