Computing and data infrastructure in ’22
At the beginning of last year, I shared a series of investment themes that were top of mind, in an effort to lay out my thinking and seek feedback. I was pleasantly surprised by the response, which motivated me to share again this year. As the pace of innovation in computing and data infrastructure accelerates, I am lucky to be able to converse with some of the brilliant minds whose work is driving the industry forward. This post again aims to synthesize the most impactful ideas from those conversations over the past year, and their influence on my go-forward thinking as an investor. I look forward to your thoughts, and hope you enjoy reading it.
The disintermediation of the cloud providers
Last year, it became clear that 3rd party serverless infrastructure solutions would begin to challenge the dominance of the cloud providers. This appears to have played out beyond expectations, as evidenced by the success of companies like Netlify,* Snowflake, PlanetScale,* and Vercel. What has been even more surprising is the cloud providers’ response, or lack thereof, to this growing threat. I believe we are witnessing a shift in their collective mindset that will accelerate the success of the 3rd party serverless ecosystem. It’s no secret that design and UX are a glaring weakness of the cloud providers. As this weakness continues to be exploited, the cloud providers seem to be evolving their strategy from one of coopetition to full-on enablement of 3rd party players.

As the serverless ecosystem continues to blossom, the perceived opportunity cost of competing with it goes up for the cloud providers. Over time, I expect to see decelerating R&D investment in higher-level services, and more emphasis on advancing the capabilities of their core primitives like storage, networking, and compute. So long as the providers remain compelling platforms to build on, they will continue to capture meaningful value. Counterintuitively, the economics of this position could be equal, or even superior, to those of competing directly. In this scenario, the cloud providers shed R&D and GTM spending on higher-level solutions, yet continue to capture their share of overall revenue with minimal effort.
The implications of this shift are enormous. The relationship between developers and the cloud providers will eventually be disintermediated by serverless infrastructure players. We will begin to think of the cloud providers as “utility” rather than “solution” providers – much like the entities that bring us electricity and internet connectivity, they will provide the access underpinning the higher-level solutions we rely on to solve fundamental problems in our working and personal lives. A healthy partnership between the serverless ecosystem and the cloud providers is in the best interest of the developer, and likely to accelerate consumption in a manner that grows the pie for all. This leaves me bullish on serverless infrastructure solutions that differentiate on design and end-user ergonomics. It’s clear that developers will be interfacing with them more, and with the cloud providers less, over the coming years.
Operational analytics
The meteoric rise of cloud data warehousing continued over the past year. Snowflake revealed its platform ambitions, and a growing ecosystem of “warehouse-native” infrastructure startups has emerged. I expect the next big opportunity to be operational analytics. Cloud data warehouses (CDWs) were designed to support business intelligence use cases, which amount to large queries that scan entire tables and aggregate the results. This is ideal for historical analysis, but less so for the “what is happening now?” class of queries that increasingly drives real-time decision-making. It is this class of queries that operational analytics refers to. Examples include in-app personalization, churn prediction, inventory forecasting, and fraud detection. Relative to BI, operational analytics queries join many disparate sources of data, require real-time ingestion and query performance, and must support high query concurrency.
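To make the contrast concrete, here is a sketch of the shape an operational query tends to take, written in Python against a ClickHouse node (one of the systems discussed below); the schema and thresholds are hypothetical.

```python
from clickhouse_driver import Client

client = Client(host="localhost")  # a ClickHouse node ingesting events in near real time

# Unlike a BI query that scans months of history once a day, this joins a live
# event stream with a dimension table, looks only at the last few minutes, and
# may be issued continuously by the application itself.
# (Table and column names are illustrative.)
recent_risk = client.execute("""
    SELECT u.user_id, count() AS txn_count, sum(p.amount) AS total_amount
    FROM payments AS p
    INNER JOIN users AS u ON p.user_id = u.user_id
    WHERE p.event_time > now() - INTERVAL 5 MINUTE
      AND u.account_age_days < 7
    GROUP BY u.user_id
    HAVING txn_count > 10
""")
```

A query like this is run by software rather than an analyst, often at thousands of requests per second, which is exactly the workload CDWs were not built for.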
It’s true that real-time analytics has historically been written off as a luxury. However, the ROI I see companies generating relative to the complexity of implementation is improving rapidly. This is in large part due to the maturation of systems like ClickHouse, Druid, Flink, Materialize,* and Pinot. Each takes its own approach to operational analytics, with distinct trade-offs. I expect this group of technologies to build momentum this year as demand for operational analytics use cases increases. It seems inevitable that the winners in operational analytics will begin to challenge the dominance of CDWs as more batch queries are operationalized in real time. Snowflake and Google BigQuery are clear-eyed about this risk, and have built large and talented teams to mitigate it.
I’m excited about startups that fill the gaps becoming apparent in this new iteration of the data stack. One obvious gap today is streaming data ingestion. While Kafka is now ubiquitous, it remains low-level and difficult to build with for developers who lack expertise in distributed systems. I expect to see innovation in use-case-specific streaming ETL that abstracts this complexity. Finally, if I had to predict two terms we will be hearing more of this year, they would be “change data capture” and “materialized views.”
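Those two terms fit together naturally: a change data capture feed streams every insert and update out of an operational database, and a materialized view maintained over that feed keeps an answer incrementally up to date, so reading “what is happening now” becomes a lookup rather than a repeated batch scan. The minimal sketch below uses psycopg2 against Materialize, which speaks the Postgres wire protocol; the connection settings assume a default local deployment, the orders and customers sources are hypothetical, and the DDL that wires them up to an upstream CDC feed is omitted.

```python
import psycopg2

# Materialize speaks the Postgres wire protocol, so a standard driver works.
# Connection settings below assume a default local deployment.
conn = psycopg2.connect(host="localhost", port=6875, user="materialize", dbname="materialize")
conn.autocommit = True
cur = conn.cursor()

# Define the question once; the engine keeps the answer incrementally up to date
# as new order and customer changes arrive from the upstream CDC feed
# (assumed to already be configured; names are illustrative).
cur.execute("""
    CREATE MATERIALIZED VIEW revenue_by_region AS
    SELECT c.region, sum(o.amount) AS revenue
    FROM orders o
    JOIN customers c ON o.customer_id = c.id
    GROUP BY c.region
""")

# Reading the view is a cheap lookup over already-maintained state, not a fresh scan.
cur.execute("SELECT region, revenue FROM revenue_by_region ORDER BY revenue DESC")
for region, revenue in cur.fetchall():
    print(region, revenue)
```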
The golden era of AI: large models
To quote Andrej Karpathy, we are witnessing “an ongoing consolidation in AI” that is both incredible and exciting. The last decade brought about an AI renaissance enabled by advancements in neural network architectures. In ’17, a paper titled “Attention Is All You Need” set us on a different path. It introduced the Transformer, a new neural network architecture that moves away from the convolutional and recurrent techniques that had previously defined the state of the art, and toward attention mechanisms. In the past few years alone, the Transformer has given us novel AI systems for processing and understanding text and language, like OpenAI’s GPT-3 and BERT.
We are now seeing implementations of the Transformer outperform convolutional neural networks on image- and video-based tasks. This suggests the entire field could eventually converge on the Transformer architecture. If true, I expect this to dramatically accelerate the rate of overall progress in AI. Unlike the previous iteration, where models were highly customized to the data modality and task they were built to perform, any advancement in the Transformer, however subtle, will immediately apply to all who leverage it. That makes this powerful technology more accessible to domain experts across every industry who possess the necessary context to apply it to real-world problems in impactful ways. Combined with program synthesis, it gives us the ability to develop AI systems that “understand” and “act.” This brings us one step closer to AGI, and amounts to what could be the most exciting period of AI innovation to date. I believe we will see previously inaccessible use cases for AI unlocked by the Transformer this year.
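For readers less familiar with the architecture, its core is the scaled dot-product attention operation: every token builds its new representation as a weighted mix of every other token’s. The NumPy sketch below is minimal on purpose; it omits the learned projections, multiple heads, and masking that a full Transformer layer adds.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core of the Transformer: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity between queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # numerically stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mix of the values

# Toy example: a "sequence" of 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q, K, V derived from the same tokens
print(out.shape)  # (4, 8)
```

The same few lines apply unchanged whether the “tokens” are words, image patches, or audio frames, which is precisely why advances in the architecture transfer so readily across modalities.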
Supply chain security
The most notable infosec incidents of the past year do not resemble those of years past. SolarWinds, Codecov, and Log4j were all rooted in the software supply chain: sophisticated actors inserting malicious code into trusted software, or exploiting vulnerabilities buried deep in its dependencies, and ultimately using that foothold to infiltrate the environments of the end users of that software. I liken this to a thief stealing a “master key” for a model of car, and using it to enter any owner’s car unnoticed. The discourse in response to these incidents correctly revolves around the need to better secure and regulate our software supply chain.

The necessary end state to prevent such incidents appears analogous to where we have landed after years of innovation and policy-making to promote zero-trust security, in which we successfully migrated from a perimeter-based approach to one centered on identities and assets. Today, software supply chain security remains in a place as nascent as network security was 5 years ago. We rely on 3rd party artifact managers to protect external code, and on source control management systems like GitHub and GitLab for internal code. This leaves us securing software at the repository level rather than the artifact level, and a compromise of either of those systems leads to immediate vulnerabilities. This is why we must move toward an artifact-based approach to security, one that places zero trust in 3rd party artifact managers and internal source control management systems.
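To make “artifact level” concrete, the sketch below shows the basic mechanic of signing at the source and verifying at consumption, using Python’s cryptography library with a locally generated Ed25519 key. It is deliberately simplified; the projects discussed next replace the local key with developer and build-system identities, provenance attestations, and transparency logs.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- build system: hash the artifact and sign the digest at the source ---
artifact = b"contents of my-app-1.0.0.tar.gz"  # stand-in for a real build artifact
digest = hashlib.sha256(artifact).digest()

signing_key = Ed25519PrivateKey.generate()  # in practice: an ephemeral or KMS-held key
signature = signing_key.sign(digest)
verify_key = signing_key.public_key()       # published alongside the artifact's provenance

# --- consumer: recompute the digest and verify before trusting the artifact ---
def is_untampered(artifact_bytes, signature, verify_key):
    try:
        verify_key.verify(signature, hashlib.sha256(artifact_bytes).digest())
        return True
    except InvalidSignature:
        return False

print(is_untampered(artifact, signature, verify_key))                 # True
print(is_untampered(artifact + b"backdoor", signature, verify_key))   # False: tampering detected
```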
Thankfully, this work is well underway. Efforts like the Sigstore and Syft projects push trust attestation for software artifacts out to developers and build systems directly, in much the same way that Let’s Encrypt enabled us to rely on TLS to secure the web. Each software artifact can be digitally signed at the source, and that signature can be verified by any consumer of the artifact to ensure it has not been tampered with. I believe we will see a significant uptick in interest and investment in software supply chain security this year. Existing application security products will need to be re-architected to conform to a world of zero trust, and I am excited about the ecosystem of startups pursuing this opportunity.
Decentralized software
Ok, Web-3. I said it. Enough ink has been spilled debating its long-term role in society in the past month alone. This largely subjective conversation continues to feel like a distraction from the reality I see – an unprecedented migration of talent toward work on decentralized systems, and a correspondingly steep pace of innovation. Bitcoin and Ethereum’s combined market cap sits above $2T. DeFi is now a $250B+ space. NFT trading volume surpassed $10B per quarter last year. 161M unique Ethereum addresses have been created to date. Whether we ultimately decentralize some things, most things, or everything does not matter. What matters is that we have a new tool in our toolbox to solve real problems with software, and we have yet to scratch the surface of its potential.
Given my investment focus, I approach web-3 through the lens of the architectural distinctions between it and the software that powers our world today. The most important of these lies in the back-end architecture of a typical web-3 app. Instead of connecting to a web server that reads from and writes to a database, web-3 apps are built upon smart contracts: code that defines a web-3 app’s logical guarantees. Smart contract code is deployed to a global, decentralized state machine such as the Ethereum or Solana blockchain, where participants in the network can interact with it directly. In spite of these architectural differences, web-3 developers are encountering many of the same challenges that had to be addressed in the early days of both web-1 and web-2. The resulting opportunity for startups to build tooling and infrastructure for web-3 development is both exciting and obvious. It is also where I have been spending my time as an investor in the space this year.
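As a sketch of what this looks like from the client’s side, the snippet below reads state directly from a contract on Ethereum using the web3.py library; the RPC endpoint, contract address, and ABI here are placeholders for illustration.

```python
from web3 import Web3

# Connect to an Ethereum JSON-RPC endpoint (URL is a placeholder).
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-rpc.com"))

# A minimal ABI describing one read-only function on an ERC-20-style contract.
ERC20_BALANCE_ABI = [{
    "name": "balanceOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"   # placeholder contract address
WALLET_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder account address

token = w3.eth.contract(address=TOKEN_ADDRESS, abi=ERC20_BALANCE_ABI)

# There is no application server or database in the path: the client queries the
# chain's replicated state machine directly, and the contract's code defines the rules.
print("latest block:", w3.eth.block_number)
print("balance:", token.functions.balanceOf(WALLET_ADDRESS).call())
```

Writes follow the same pattern, except the client signs a transaction and pays for its execution on-chain rather than calling an API behind someone’s server.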
Web-3 software needs to be instrumented with observability for debugging and performance monitoring. Web-3 developers will demand tools that improve their productivity. Blockchains will require reliable DNS resolution, as well as global performance and transaction analytics. Web-3 engineering teams will seek out well-designed APIs and abstractions over workflows like deployment, state inspection, smart contract development, and integrating with web-2 infrastructure. A vibrant ecosystem of startups has emerged over the past two years to attack these problems, and I expect to see more new and exciting ideas this year.
If you are thinking about or building products that touch on any of these ideas, I would love to hear from you.
– Bucky
*Indicates KP portfolio company