Is Machine Learning Demand Starting to Outpace Moore’s Law?
Muhammad Ali had been world heavyweight champion for just one year when Gordon Moore, then head of research and development at Fairchild Semiconductor and later a co-founder of Intel, made a postulation that came to be known as Moore’s Law: the number of transistors on an integrated circuit (IC) would keep doubling at a steady cadence, roughly every two years in the Law’s best-known formulation.
In 1965, this was heady stuff: the observation, made in an article in Electronics Magazine, implied that we could expect our computers to become faster and more capable at a lower cost over time. As the years and decades have passed, Moore’s theory has been tested as the rate of technological change has accelerated. For the most part, it’s held true. Yet recent advancements in the burgeoning field of Machine Learning (ML) present an intriguing challenge, if not an outright threat, to the viability of the Law.
The Sustainability of Moore’s Law
Moore’s Law has been instrumental in driving technological advancements for well over half a century, becoming something akin to an inviolable law. Yet as we approach the atomic limits of circuit miniaturization, the observation’s sustainability is coming under fierce scrutiny.
The physical limitations of shrinking circuits are becoming apparent. In 2015, Intel itself acknowledged a slowdown in Moore’s Law, noting that the two-year cadence had slowed to roughly 2.5 years. This 25% increase was a clear indication we were approaching the physical and practical limits of modern semiconductor technology. What’s more, Intel CEO Pat Gelsinger last year revised his earlier comments, admitting 2.5 years had now become 3 – while promising to do everything he could to keep pace with Moore’s Law.
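To see what those slipping cadences cost, it helps to compound them. The sketch below is illustrative only: it assumes clean exponential growth from a normalized baseline, which no real fab roadmap follows exactly, but it shows how a cadence stretching from two years to three roughly cuts a decade's projected gains by two thirds.

```python
# Illustrative sketch: how a longer doubling cadence erodes projected gains.
# Assumes idealized exponential growth from a normalized baseline of 1.0.

def growth_factor(years: float, doubling_period: float) -> float:
    """Transistor-count multiplier after `years` at a given doubling cadence."""
    return 2 ** (years / doubling_period)

DECADE = 10
for cadence in (2.0, 2.5, 3.0):
    print(f"{cadence}-year cadence over {DECADE} years: "
          f"{growth_factor(DECADE, cadence):.1f}x")
# A 2-year cadence yields 32x over a decade; 2.5 years yields 16x;
# 3 years yields only about 10x.
```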
None of this should come as a surprise; Moore himself acknowledged his Law’s limits, observing that “no exponential like this goes on forever.” Indeed, it would have been difficult for the engineer to foresee, back in the mid-sixties, the growing demands that Machine Learning and allied fields like Artificial Intelligence (AI), cloud computing, and the Internet of Things (IoT) would one day exert on the silicon chip industry.
Machine Learning demands are today outpacing gains in processing power: roughly a 10x surge in demand against a 3x increase in compute over the last 18 months. This disparity presents significant obstacles to future advancements in Machine Learning, potentially leading to bottlenecks in innovation and application.
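Because both curves compound, even a modest per-period mismatch widens quickly. The sketch below simply compounds the article's own figures (10x demand growth and 3x compute growth per 18-month period); the inputs are the article's estimates, not measured constants.

```python
# Sketch: compounding the article's estimates of ML demand growth (10x per
# 18 months) versus processing-power growth (3x per 18 months).

def compounded(per_period_factor: float, periods: float) -> float:
    """Total growth after `periods` compounding intervals."""
    return per_period_factor ** periods

periods = 36 / 18  # two 18-month periods, i.e. three years
demand = compounded(10, periods)
compute = compounded(3, periods)
print(f"Over 3 years: demand {demand:.0f}x, compute {compute:.0f}x, "
      f"gap {demand / compute:.1f}x")
# Over 3 years: demand 100x, compute 9x, gap 11.1x
```

Under these assumptions the shortfall is not static: it grows by a further ~3.3x with every 18-month period that passes.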
The widening chasm between the computational demands of deep learning and the growth of processing power mightn’t have been easy to predict, yet it’s something modern technologists must grapple with. The disparity suggests not only potential limitations in the scalability of ML applications, particularly as models become more complex and data-intensive, but also a general slowing in the rate of technological change.
Some might contend that this is no bad thing: that it will enable all of us to reconnect with nature and become more present in the moment. But human advancement does not really work this way: once the genie is out of the bottle, it cannot be put back. Having set out down this road, we can only keep moving forward, or face regression.
A Novel Solution to the Moore’s Law Problem
Against this troubling backdrop, novel solutions are coming to market. One of them, io.net, seeks to bridge the growing divide by leveraging existing GPU compute resources to enhance global processing efficiency. If compute can’t keep pace with ML demands, in other words, let’s put the compute to better work.
io.net’s recently unveiled decentralized physical infrastructure network (DePIN) is a key pillar of its value proposition. By utilizing the untapped potential of GPUs and CPUs spread around the world, the DePIN offers a sustainable and scalable solution to meet the burgeoning demands of ML. The model rewards those who contribute, or rent out, their GPU and CPU power to the network. AI startups and ML engineers, meanwhile, get on-demand access to the GPU compute they need without paying the exorbitant costs typically associated with such resources.
With the provision of permissionless, on-demand GPU/CPU access from a global network of users, io.net democratizes access to aggregated processing power while ensuring efficient utilization of existing resources. This capability translates into significant savings on compute costs, rapid deployment of cloud clusters, and fair pricing.
As the gap between the demands of Machine Learning and the growth of processing power widens, innovative solutions like io.net will be essential.
By maximizing the efficiency and accessibility of existing GPU and CPU resources, io.net is not just offering a workaround to the limitations posed by the slowing pace of Moore’s Law; it’s setting the scene for continued innovation and advancement in Machine Learning and AI alike.
Other factors are just as important, of course, capital investment being one of them. If chip computing power is to keep doubling, it stands to reason that R&D budgets must increase in tandem. Government grants, industry mergers, and joint ventures also have a part to play.
Sadly, Gordon Moore passed away in 2023, yet his eponymous Law, and the struggle to keep pace with it, is even more relevant today than it was in the sixties.