Building the chip for our AI future

For 30 years, the dynamics of Moore’s Law (the observation that the number of transistors in an integrated circuit doubles roughly every two years) held true as microprocessor performance grew at 50 percent per year. But the limits of semiconductor physics mean that CPU performance now grows by only 10 percent per year. Meanwhile, the demand for computing resources to train artificial intelligence (AI) models has grown enormously over the past six years (more than 300,000-fold, according to OpenAI) and shows no signs of slowing down. Put simply, AI’s compute hunger is outpacing Moore’s Law. So how is the AI industry dealing with this challenge? To answer that question, this article explores how the rise of GPU computing and custom-designed AI chips is compensating for the end of Moore’s Law and enabling computationally intensive algorithms and AI.
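To make that gap concrete, here is a back-of-the-envelope calculation using only the figures quoted above; the six-year window and the 300,000x number come from the text, while the implied doubling time is simply derived from them.

```python
import math

MOORE_DOUBLING_MONTHS = 24     # Moore's Law: doubling every ~2 years
OBSERVED_GROWTH = 300_000      # OpenAI's estimate for AI training compute
PERIOD_MONTHS = 6 * 12         # the six-year window cited above

# How often AI training compute must double to grow 300,000x in 6 years.
doublings = math.log2(OBSERVED_GROWTH)                 # ~18.2 doublings
implied_doubling_months = PERIOD_MONTHS / doublings    # ~4 months

# What Moore's Law alone would deliver over the same window.
moore_growth = 2 ** (PERIOD_MONTHS / MOORE_DOUBLING_MONTHS)  # 8x

print(f"Implied AI compute doubling time: ~{implied_doubling_months:.1f} months")
print(f"Moore's Law over 6 years: ~{moore_growth:.0f}x vs {OBSERVED_GROWTH:,}x observed")
```

An eightfold gain set against a 300,000-fold demand is exactly the gap the rest of this article is about.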


GPU-acceleration platform for large-scale data analytics and machine learning
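As a concrete illustration of what such a platform looks like in practice, here is a minimal sketch using cuDF, the GPU dataframe library from NVIDIA's RAPIDS suite; the CSV file and column names are hypothetical placeholders, and a CUDA-capable GPU is assumed.

```python
# Minimal sketch: GPU-accelerated analytics with cuDF (NVIDIA RAPIDS).
# cuDF mirrors the pandas API, so familiar dataframe code runs on the GPU.
import cudf

# "transactions.csv" and its columns are hypothetical placeholders.
df = cudf.read_csv("transactions.csv")        # data lands in GPU memory
top_customers = (
    df.groupby("customer_id")["amount"]
      .sum()
      .sort_values(ascending=False)
      .head(10)
)
print(top_customers)
```

Because the whole pipeline stays in GPU memory, there is no per-step copy back to the host, which is where much of the speed-up over CPU dataframes comes from.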



GPU Cores, Parallel Computing and Deep Learning
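To show why thousands of small GPU cores matter for deep learning, the sketch below runs the same matrix multiply (the core operation in neural-network layers) on the CPU with NumPy and on the GPU with CuPy; CuPy is an assumption here, chosen because it mirrors NumPy's API, and a CUDA-capable GPU is required.

```python
# CPU vs. GPU matrix multiply: the GPU spreads the multiply-accumulate
# work of the operation across thousands of CUDA cores in parallel.
import numpy as np
import cupy as cp

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

c_cpu = a @ b                                 # runs on a handful of CPU cores

a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)   # copy inputs to GPU memory
c_gpu = a_gpu @ b_gpu                         # one kernel, massively parallel
cp.cuda.Stream.null.synchronize()             # GPU calls are async; wait here

# Same result to float32 tolerance, computed two very different ways.
print(np.allclose(c_cpu, cp.asnumpy(c_gpu), atol=1e-2))
```

Matrix multiplies like this dominate the training cost of deep networks, which is why the parallel throughput of GPU cores, rather than the serial speed of a CPU, now sets the pace.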



Google’s Cloud Tensor Processing Units (TPUs)
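For a sense of what targeting a Cloud TPU looks like from user code, here is a hedged sketch using JAX, one of several frameworks with TPU support (TensorFlow and PyTorch/XLA are others). The shapes and the bfloat16 choice are illustrative assumptions; on a machine without TPUs, JAX falls back to CPU or GPU and the same code still runs.

```python
# Sketch: a JIT-compiled matrix multiply on a Cloud TPU via JAX and XLA.
import jax
import jax.numpy as jnp

print(jax.devices())        # on a TPU VM this lists the attached TPU cores

@jax.jit                    # XLA compiles this for the TPU's matrix units
def matmul(a, b):
    return a @ b

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
# TPUs are optimized for bfloat16, hence the cast; sizes are illustrative.
a = jax.random.normal(k1, (2048, 2048)).astype(jnp.bfloat16)
b = jax.random.normal(k2, (2048, 2048)).astype(jnp.bfloat16)

c = matmul(a, b)
print(c.shape, c.dtype)
```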

