The (Coming) Golden Age, Part 2 – An Integrated Architecture for Finance

Once back in Utah, the promised servers from Dell arrived literally within days, and that created other problems. As it turned out, Dell had sent me the exact same servers it was providing to Google for its cloud hosting (I would later find out there was a reason for that), and the University of Utah's Center for High-Performance Computing could not accommodate such sophisticated servers. This resulted in us having to find a new home for the ULISSES Project at the University of Utah's renowned Scientific Computing and Imaging Institute, a group considered the top academic computing center in the state. Once up and running, within a month we had demonstrated to Dell that current systems simply were not built to handle the demands of modern finance. We did this with the aid of our good friends and long-time supporters at MSCI.

Building the Joshua Mark I Servers

Our point definitively proven, Dell's evangelist for cloud computing joined the ULISSES Project board of directors, and we began to design the next integrated architecture, one that could handle the demands of finance. It was at this point that a relationship was formed between our research company Rand Labs and Dell, with Dell providing the base hardware components and Hal and me (mostly Hal) hacking together the new architecture. This relied on a lot of calls and favors to places such as Taiwan and South Korea, and, more interestingly, a lot of purchases on eBay.

As important as our contacts in Asia were, eBay was not to be underestimated: Hal would buy an entire system for a single part, then "eBay" what remained (almost always for a slight profit). When the project was done we had what we've referred to as the Joshua Mark I servers (my middle name is Joshua, as was the name of our friend from Dell, so it seemed a logical choice, and HAL just doesn't have the same warm vibe). Here is a picture of my partner and friend Hal assembling the first Joshua Mark I.

Hal in the Sandbox (our office) assembling the first Mark I server.

Both Hal and I grew up building our own computers, but this was something a bit more than buying some pieces from Computer Shopper and assembling them. When we were done, our Dell representative Joshua came to examine the completed boxes and then left for AWS to try to implement what we'd done, while we turned back to further developing our object-oriented software library and tools to take advantage of the new servers we'd built. This was the beginning of 2018.

The Demo Project – Looking for at Least a 10X Jump

We already had a large code library, but we needed a project to sink our teeth into and prove what the systems could do. For the next several months we interviewed groups globally, trying to find an appropriate partner with a project that could demonstrate the power of our integrated architecture. For a long time we considered deploying the system at a single hedge fund as that demonstration, but after long negotiations we determined that such a case would not be an appropriate showcase. Foremost among the problems was the desire of these groups to secure an exclusive license to the technology and to get us working for them under strict trading contracts.

After deciding against a single hedge fund, we met with a series of groups covering areas as varied as research, trading, and operations. For those of you who don't know, investment operations are basically the simplest area in investment management, trading systems are more complex, and the granddaddy of investment complexity is research systems. In short, there were tons of working operations systems, a few good trading systems, and really no functioning research systems that could accommodate complex models. In this sense, a functioning research system had basically become the "holy grail" of investment systems.

A New Modeling Framework for Value

Not shying from a challenge, we built a prototype research system based upon a modeling framework I had started developing years before as a faculty member at UC Berkeley, and then we contacted groups employing human analysts to see if we could reach an agreement about testing and deployment of our systems. By mid-2018 we had begun applying the system to arguably the most complex and varied research process in the world.

This was particularly meaningful for me because my dissertation advisor had left Columbia University to build such a system at Morgan Stanley. Budgeted at over a billion dollars to be spent over a decade, the project was halted during the Financial Crisis after four years and $300 million. I've always said that unless a project can deliver at least an order of magnitude (10X) of improvement in cost, it isn't worth engineers pursuing. Our goal was to bring two orders of magnitude of improvement to the process, that is, to be 100 times more cost-effective than the system my dissertation advisor attempted to build.

100X Improvements

A year later, we had the beta of the system, over 100 times more cost-effective, up and running on the Joshua Mark I's, covering thousands of companies and hundreds of unique data sources. It was then time to port the system to AWS, where our former board member had gone to develop the kind of systems Hal had demonstrated how to build. When we called Joshua, we were immediately informed that AWS could not replicate our results and was shutting down the project. In short, it was the same conclusion that Microsoft had come to with Azure's Extreme Computing Group. When we asked further questions, we were told that AWS simply could not do what Hal and I had done in our little "garage" (our office is actually called "The Sandbox") in a cost-effective manner.

Our former board member closed the call by telling us he was leaving AWS for Facebook, and that if we could repeat what we'd done, we should do the hosting ourselves, because that was the only way we'd get the performance and cost improvements we were looking for. Dismayed, we surveyed all the competitors in the space and confirmed that none could meet our demands; only then did we set out to build the prototype next-generation servers.

Leaving Dell and the Joshua Mark II

Up to this time, the servers we'd developed with Dell had been used only for academic and development purposes, but the new servers would be different: they would not be "Dev Servers" working on a prototype project, but "Prod Servers" handling sensitive data. Accordingly, we upped our game, engaging our Asian suppliers and engineering a cutting-edge system without the use of eBay spare parts. Those are the Mark II servers currently deployed by our company. The Mark II's were smaller, faster, more powerful, and, frankly, prettier.

Another famous Mark I and Mark II, for comparison purposes.

After we presented the relative cost and performance differences between the cloud-hosting alternatives and what we could do, our prototype partner's CEO made the decision to become the first client of a new hosting company that we formed to address the domain-specific needs of finance. We came up with the name, Flex Axion, in a few minutes, designed a logo, and were up and running.

Freed to Work with IBM (and Former IBMers)

As it turns out, funding the Mark II's ourselves came with additional benefits in that it totally freed us, and I expect to reap those dividends in the coming years. You see, when Dell began providing us with servers, it verbally extracted a commitment from me that we would not work with any current or former IBM groups, or use any of their products, as long as Dell was providing us with hardware. Dell did this because, as part of my initial "pitch" to them, I explained that some years before I had worked extensively with IBM's CPLEX in a joint, simultaneous project with the group implementing CPLEX at Bloomberg.

At the time, one of my good friends was leading the project for Bloomberg (he's now at Two Sigma doing the same), and I was leading a parallel effort for one of the world's top emerging-markets hedge funds, so we figured we could pool resources. It worked like a charm, and my team pushed the boundaries way beyond what Bloomberg's team had even imagined doing. I had explained this and given benchmarks to Dell, and while they were impressed, they said I couldn't use IBM's CPLEX on their servers to prove my point. As a result, I used other open-source optimization tools to make the same point (that's why it took a month and not a week).
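To give a flavor of what "making the same point with open-source tools" can look like, here is a minimal sketch in Python using SciPy's open-source HiGHS solver in place of CPLEX. The toy problem, a long-only portfolio allocation LP, and all of its numbers are illustrative assumptions on my part, not the actual models or benchmarks from the project.

import numpy as np
from scipy.optimize import linprog

# Illustrative only: a toy long-only allocation LP, not the real benchmark.
rng = np.random.default_rng(42)
n_assets = 5000
expected_return = rng.normal(0.05, 0.02, n_assets)  # hypothetical annual returns

c = -expected_return              # linprog minimizes, so negate to maximize return
A_eq = np.ones((1, n_assets))     # weights must sum to 1 (fully invested)
b_eq = np.array([1.0])
bounds = [(0.0, 0.01)] * n_assets # long-only, with a 1% cap per position

result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(result.message)
print(f"expected portfolio return: {-result.fun:.2%}")

The same model can be handed to a commercial solver such as CPLEX through its own APIs, which is what makes side-by-side cost and performance comparisons of this kind straightforward, if slower to assemble with open-source tooling.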

But as it turns out, we were not the only ones working on integrated, domain-specific architectures; we were just the only ones in finance so occupied. Upon surveying the environment, it became clear that all the big boys in Silicon Valley had come to the same conclusion, something I will address in the next blog entry. It also soon became apparent why we had been given the exact same servers Dell supplied to Google.
