The (Coming) Golden Age, Part 1 – The Limits of Dennard Scaling and Amdahl’s Law

There has been a lot of hand-wringing in the tech industry since CES 2019, when Jensen Huang, the CEO and co-founder of Nvidia, announced that “Moore’s Law isn’t possible anymore.”  Of course, having declared the king dead, he touted AI much as Princess Leia pleaded to Obi-Wan: “You’re my only hope.” For those of you who have been living in another world, Moore’s Law isn’t really a law, but a phenomenon observed by Gordon Moore in 1965, when he predicted that the number of transistors on a chip, and with it processor performance, would double roughly every two years.  The reason people called it a “law” is that Gordon Moore was right for roughly 50 years after his pronouncement, and considering the fact that it was an observed phenomenon that had already been under way for a decade, “Moore’s Law” effectively held for roughly 60 years, or three generations of human beings.

I personally cannot remember a time when “Moore’s Law” was not in operation, but I also do not remember a time when hardware-engineer Cassandras were not telling us that there were limits to the improvements. In particular, computer engineers warned that the limits of the underlying physics would kick in and that Dennard Scaling, the phenomenon that made the improvements possible, would cease.  After years of warnings, the decline finally started to be observed around 2010, right after the Financial Crisis.  That said, laymen did not really notice, because we got a bump for another half decade or so from parallelizing problems (splitting them into bundles to be processed at the same time).  It did not take long, however, for Amdahl’s Law to kick in there: we can only parallelize so much of a problem before the serial, or sequential, part becomes the bottleneck.  As a result, by 2019 everyone (including Nvidia’s CEO) had noticed the problem and was starting to hand-wring and talk about solutions that were unrealistic, at least in the short run, but that might give us the same “bump” as parallelization, like quantum computing.  This was all a shock to me, but to explain why I should probably give some context.
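Amdahl’s Law can be stated concretely: if a fraction p of a workload can be parallelized across n processors, the overall speedup is 1 / ((1 − p) + p/n), which is capped at 1/(1 − p) no matter how many processors you add. A minimal sketch (the function name and the sample numbers are my own, for illustration):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup when a fraction p of the work runs on n processors.

    The serial fraction (1 - p) is untouched by adding processors,
    so it eventually dominates the runtime.
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even a workload that is 95% parallelizable tops out at a 20x speedup:
for n in (8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 8 -> 5.9, 64 -> 15.4, 1024 -> 19.6
```

Going from 64 to 1,024 processors, a 16-fold increase in hardware, improves the speedup only from about 15.4x to 19.6x: the serial 5% has become the bottleneck.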

Pushing the Limits, You Tend to Find the Limits First

Ever since I left grad school, I have been working in the most sophisticated part of finance, starting in research at the “Bell Labs of Finance,” where I was given the responsibility of rebuilding the antiquated investment infrastructure Barr Rosenberg had built himself; from there, I began working solely with hedge funds.  That is my day job (the ULISSES Project and my work with the University are what I do on the side).   If one were to make an analogy between finance and the car industry, I have been working in Formula One, that is, hedge funds, ever since.  In finance, as in Silicon Valley, cutting-edge technology is often developed and deployed not at the big companies but on the front lines, with the start-ups and hedge funds.   This is why I started to see the cracks in the system all the way back in 2008, when the breakdown of Dennard Scaling was kicking in.  In response to this and other issues, I started the ULISSES Project and began looking for solutions emerging in other areas of science.

The first issues I addressed were those emerging from Goldman Sachs’ cutting-edge object-oriented environment, known as SLANG and SecDB.  I came up with what I thought was a solution in an obscure language we had played around with at Columbia when I was in grad school, and from 2010 to 2013 I tried to apply that solution on AWS.  It worked well enough, but I found that the inefficiencies I was encountering would soon lead to a situation where all of my trading profits were spent on inefficient cloud hosting.   As a result, I left the fund I was running in New York, returned to the mountains of Utah, and began examining better software solutions with my friend and partner Hal Tolley, who, in addition to being the main architect of some amazing hardware technologies, was also the former lead engineer at the top high-frequency trading technology shop in the world, not coincidentally based out of Silicon Valley.

The Search for New Hardware Solutions

By 2014 we had a software solution under way, and after a year of pretty much constant work, we found that we had reached the limits of all the existing cloud platforms, from AWS to Azure.  We looked for other hardware solutions, and it was in this quest that we traveled to the heart of Silicon Valley to discuss the matter with the top hardware supplier in the United States: Dell.  Hal and I had known people from Dell for years, having worked with them as clients, suppliers, or simply friends in tech, so it wasn’t too difficult to secure a meeting.  In that meeting, we met with people representing Dell’s cloud computing efforts and their venture capital arm.

My objective in those meetings was simple: I asked Dell to spearhead a new initiative in integrated-architecture computing with one of their clients, addressing the same issues Microsoft had attempted in their Extreme Computing Group.  For those of you not familiar with the Extreme Computing Group, it was a project spearheaded by Dan Reed, the current University of Utah Provost, meant to provide extremely powerful computing to researchers in scientific areas needing massive computing, storage, and analysis, such as climate science, astronomy, and genetic analysis.   Initially begun by the legendary Jim Gray as part of his eScience initiative, it was meant to address the challenges of the next stage of cloud computing.

In the meeting, I clearly articulated to Dell’s people that we were coming to the limits of our generic hardware infrastructures, and that the only way the problems could be adequately addressed was through an integrated hardware and software architecture built with the demands of particular domains in mind, preferably utilizing open-source software components.   I pointed out that the problems of finance were every bit as demanding as the projects Microsoft had addressed in their Extreme Computing Group, and that potential software solutions could be found in past object-oriented software initiatives, most particularly in solutions inspired by Philippe Kahn’s Borland Turbo Pascal, NeXT Computer’s joint venture with Lotus Development, and Goldman Sachs’ SLANG and SecDB.

The Germ of an Idea

As I laid out the requirements of high-level finance and investment management, the guys from Dell Ventures were entranced. They told me that I was the only person who had ever walked into their office and understood both the technology and the finance, and not long afterward they asked me to start a company providing high-performance hosting to the world of finance.  Dismayed, I told them that I was not there to “pitch” them; instead, I wanted them to get AWS, Azure, or Google to start addressing the problem in the right manner.

As I walked out of the Dell Ventures office, I was followed outside by a young man who told me that he was not only there for technological due diligence, but that he also ran Dell’s open-source initiative {code}.   He said he was surprised that I was turning down the offer of funding, but he believed we were on to something, and if we were willing, he wanted to provide us hardware so we could build a prototype integrated system that could demonstrate both the limitations of current systems and the possibilities I had articulated for an integrated architecture.

That discussion as I walked out was the beginning of our relationship with Dell, which I will relate in the next post.
