In the past decade, we have seen remarkable breakthroughs in machine and deep learning, and their applications to a wide range of tasks, from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in Euclidean space (i.e. they follow the rules of Euclidean geometry, e.g. the shortest distance between two points is a straight line). However, there are a number of applications where data are generated from non-Euclidean domains and are best represented as graphs with complex relationships (e.g. client accounts in a bank, the interactions between these accounts, and the inflow and outflow of assets from these accounts). Consequently, many studies on extending deep learning techniques to graph data have emerged. There are indications that these emerging techniques perform significantly better on non-Euclidean data than traditional methods (e.g. rule-based or statistical approaches). Given their relevance to areas such as the prevention of money laundering and financial crime, in this post I have provided links to a couple of introductory talks on this topic. Enjoy!
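To make the bank example concrete, here is a minimal sketch (account names and amounts are hypothetical, and plain Python dictionaries stand in for a proper graph library) of how accounts and transfers form a graph rather than points in Euclidean space:

```python
# Illustrative sketch: bank accounts as nodes, transfers as directed edges,
# stored in a simple adjacency list. All names and figures are made up.
from collections import defaultdict

# Each tuple is (source account, destination account, amount transferred).
transfers = [
    ("acct_A", "acct_B", 500.0),
    ("acct_B", "acct_C", 450.0),
    ("acct_C", "acct_A", 400.0),  # funds cycling back to the origin
]

graph = defaultdict(list)
for src, dst, amount in transfers:
    graph[src].append((dst, amount))

# Unlike Euclidean data, no node has a fixed coordinate; all the
# information lives in the relationships between nodes.
print(dict(graph))
```

A graph neural network would operate directly on a structure like this, passing information along the edges, whereas a traditional model would first have to flatten it into a fixed-length feature vector and lose the relational structure in the process.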
For 30 years, the dynamics of Moore’s Law (an observation that the number of transistors in an integrated circuit would double every two years) held true as microprocessor performance grew at 50 percent per year. But the limits of semiconductor physics mean that CPU performance now grows by only 10 percent per year. However, the demand for computing resources to train artificial intelligence (AI) models has shot up enormously over the past six years (more than 300,000 times, according to OpenAI) and is showing no signs of slowing down. Put simply, AI’s compute hunger is outpacing Moore’s Law. So how is the AI industry dealing with this challenge? To address this question, this article explores how the rise of GPU computing and custom-designed AI chips is overcoming the end of Moore’s Law and enabling computationally intense algorithms and AI.
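A quick back-of-the-envelope calculation, using only the figures quoted above, shows how wide the gap has become. Converting OpenAI's estimate of a 300,000x increase over six years into an annual rate:

```python
# Back-of-the-envelope comparison of annual growth rates,
# using the figures quoted in the paragraph above.
cpu_growth_old = 1.50   # ~50% per year under Moore's Law
cpu_growth_now = 1.10   # ~10% per year today

ai_total_growth = 300_000   # OpenAI's estimate over six years
years = 6

# Equivalent compound annual growth factor: 300,000^(1/6)
ai_annual_growth = ai_total_growth ** (1 / years)

print(f"AI compute demand grows ~{ai_annual_growth:.1f}x per year")
print(f"CPU performance grows  ~{cpu_growth_now:.2f}x per year")
```

The result works out to roughly an 8x increase in compute demand every year, against a 1.1x improvement in CPU performance, which is why the industry has turned to GPUs and custom silicon rather than waiting for general-purpose processors to catch up.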
Deep learning, which is an immensely rich and hugely successful sub-field of machine learning, is evolving at such a rapid pace that unless you are in academia, it is often hard to keep track of the latest developments. I was, therefore, thrilled when I came across a video recording of a lecture (titled “Deep Learning State of the Art (2019) – MIT”) that was recently given by Lex Fridman, a research scientist at the Massachusetts Institute of Technology (MIT). In this 45-minute lecture, Lex goes through a number of recent developments in deep learning that are defining the state of the art across algorithms, applications, and tools. I have provided the link to this video recording in this article, which I hope you will find useful.
In recent years, Artificial Intelligence (AI) has been a subject of intense media speculation. Discussions related to machine learning, deep learning, and AI come up regularly in countless news articles, internet blogs and TV shows. We are being promised a future of intelligent robots, self-driving cars, transformative healthcare technology, and virtual assistants – a future sometimes painted in a grim light, where human jobs would be scarce and most economic activity would be handled by smart robots and better-than-human AI agents. Speculation or not, what is at stake here is our future, and it is important that we are able to recognise the signal in the noise – to tell world-changing developments apart from mere over-hyped media guesswork. To that end, this article is an attempt to provide a non-technical explanation of deep learning, an immensely rich and hugely successful sub-field of machine learning.