How Silicon Valley and Wall Street are Gambling with the Global Economy

Machine Intelligence is a hot area for founders and investors right now. In the six months to the end of 2014, nine AI startups raised $102 million, mostly in Series A rounds, while a search of the US Patent Office database shows major technology companies filing AI-related systems spanning visual recognition to data management. Before we drink the Kool-Aid about AI and wonder how to get a piece of the action, it’s worth asking whether what’s being built will change the world for the better or for the worse, and what part each of us plays in shaping that future.

An example of “for the better” would be creating machines capable of understanding the meanings in our natural language. It could involve inventing “robots that care”, with the moral and emotional frameworks not to kill us, as Qualcomm proposed at the Tribeca ‘Imagination’ festival on bleeding-edge technologies in April. It could also mean making sensors that help kids cross roads safely, giving blind people tools to see and experience the world in better ways, or building something that stops spam email forever.

An example of “for the worse” would be building AI that repeats the mistakes of the recent past, such as the global financial crisis, which caused $100+ trillion of value loss. It’s not obvious, but we risk losing even more this time if we don’t connect the dots between data reliability and appropriate supervision of the machines.

This issue deserves attention because Robin Vasan, a VC at the Mayfield Fund, recently wrote ‘Rise of the Quants --- Again’, sharing his view of the parallels between the quant approaches on Wall Street and in Silicon Valley:

The goal is still the same – to use large data sets, massive compute power and increasingly sophisticated algorithms to make sense of massive amounts of structured and unstructured data to make decisions and predictions that formerly required humans.

What the article didn’t address, though, were the potential risks of those quant approaches, and what happened the last time the machines were let loose on large data sets without appropriate human controls and frameworks.

Like Robin, I have hands-on experience of banking and technology. I worked at a hedge fund with Machine Learning models and at a data startup that exited. After that, I was a board observer on over 20 investments as part of UBS’ Strategic Investments (TMT) team and created e-Intelligence within the bank. I’m also a product engineer who happens to be a maths graduate, so I know the power and value of “Quant”.

I remember an article with a title similar to Robin’s, except it was called ‘Rise of the Machines’, and it highlighted this passage:

Somehow the genius quants — the best and brightest geeks Wall Street firms could buy — fed $1 trillion in subprime mortgage debt into their supercomputers, added some derivatives, massaged the arrangements with computer algorithms and — poof! — created $62 trillion in imaginary wealth.

As the current financial crisis spreads (like a computer virus) on the earth’s nervous system (the Internet), it’s worth asking if we have somehow managed to colossally outsmart ourselves using computers. The Wall Street titans loved swaps and derivatives because they were totally unregulated by humans. That left nobody but the machines in charge.

That happened in 2008. In the Silicon Valley of 2015, the machines are again in charge and, this time, they’re left alone to do unsupervised learning. In unsupervised learning, the AI is given no labels or instructions about what to look for in the data. It’s left to ingest large volumes of big data and then apply probability and statistics to churn through correlations, derive relationships and map data points onto knowledge graphs and clusters in vector spaces.

These are the same mathematical and pattern recognition techniques that were used in the Wall Street algorithms that contributed to the global financial crisis of 2008.
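To make the term concrete, here is a minimal sketch of the kind of unsupervised clustering described above: a bare-bones k-means in NumPy. The data, parameters and seed are synthetic and purely illustrative, not any particular firm’s algorithm.

```python
import numpy as np

# Unsupervised learning gets no labels or instructions -- it only groups
# points by statistical similarity. Synthetic 2-D data, for illustration.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),   # group A
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),   # group B
])

def kmeans(points, k, iters=20):
    # Start from k randomly chosen data points as centroids.
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        centroids = np.array([points[labels == i].mean(axis=0)
                              for i in range(k)])
    return labels, centroids

labels, centroids = kmeans(data, k=2)
# The algorithm recovers the two groups without ever being told they exist.
```

The point of the sketch is the absence of supervision: nothing in the code says what the clusters mean, only that the maths found them.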

Investors and data scientists argue that the wider availability of APIs, data science tools and far larger volumes of data puts us in a different situation than pre-2008. However, more data doesn’t mean better insights or decision-making, as Emanuel Derman, Goldman Sachs’ former Head of the Quantitative Strategies Group and now Professor of Financial Engineering at Columbia University, points out in his book ‘Models. Behaving. Badly’. Notably, the ability of machines to compute correlations faster doesn’t mean they can surface or understand the causal links between data points any more intelligently.

Causation matters because it would tell us why something happened: why a customer loves a product, why someone connects with someone else, even why the last global financial crisis really happened.
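A toy illustration of that gap between correlation and causation, using made-up numbers: two series that share nothing but a time trend correlate almost perfectly, and the apparent “relationship” fades once the shared trend is removed.

```python
import numpy as np

# Two hypothetical annual series that share only a time trend. Neither
# causes the other, yet their correlation is near-perfect.
rng = np.random.default_rng(1)
years = np.arange(2000, 2020)
series_a = 100 + 5 * (years - 2000) + rng.normal(0, 2, len(years))
series_b = 10 + 8 * (years - 2000) + rng.normal(0, 3, len(years))

r = np.corrcoef(series_a, series_b)[0, 1]
print(f"correlation of levels:  {r:.2f}")   # close to 1.0

# Difference each series to strip out the shared trend; the correlation
# of year-to-year changes is typically far weaker, exposing the
# coincidence.
r_diff = np.corrcoef(np.diff(series_a), np.diff(series_b))[0, 1]
print(f"correlation of changes: {r_diff:.2f}")
```

A machine churning correlations at scale would flag the first number as a strong relationship; only a causal question (what drives both series?) reveals the confounder, here simply time.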

So that’s the real market opportunity for founders as we build Machine Intelligence: not repeating the framework mistakes of the recent past, but having the imagination to invent something better. Imagine if we could teach the machines to make sense of data instead of just doing more and more maths, faster.

To get there, we need to equip the machines with different types of frameworks and supervision from anything tried so far. We want to arrive at machines that reflect and understand us: our language, our values, our choices and our behaviors. What possibilities would we open up for human and machine intelligence then?


It’s time to start with fresh eyes and busy hands, and invent a more intelligent future. Perhaps by looking to the past, and to Da Vinci, for inspiration.