Building an Alternative to GPS
Wednesday 13th October // GPS, satellites
GPS is fragile. It is so successful that it effectively keeps time and location for almost all critical infrastructure (in the US and friendly nations; Russia, China, and the EU have their own systems, although not totally independent ones). If it went down, the power would go out, financial markets would stop working, and transport systems would grind to a halt. GPS is fragile because the satellites are so far away: signals bounced off these distant satellites and received on the ground are weak, and therefore vulnerable to interference and manipulation.
Low Earth orbit satellites like Starlink are 20 times closer than GPS satellites, meaning the signal is more powerful, secure, and reliable. But Starlink signals are considered IP, so a team led by Zak Kassas at the University of California, Irvine, came up with a workaround. They developed an algorithm to calculate a ground receiver's position, velocity, and time by tracking the phase of the underlying carrier wave emitted by a satellite. Using this approach they were able to track position to within just 7.7 metres. They improved the system by incorporating a backup tracking system called Simultaneous Tracking and Navigation (STAN), similar to the inertial navigation systems (INS) typically paired with GPS, that estimates position by anticipating trajectory. This system actually beat a GPS-INS system at estimating a moving car's position.
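The team's actual algorithm tracks the carrier phase itself, but the positioning step underneath — solving for a receiver position from range-like measurements to satellites at known positions — can be sketched with a toy Gauss-Newton solver. Everything here (the satellite coordinates, the `solve_position` helper, the absence of clock-bias and noise terms) is illustrative, not from the paper:

```python
import numpy as np

def solve_position(sat_positions, ranges, x0=None, iters=10):
    """Estimate a receiver position from ranges to satellites at known
    positions via Gauss-Newton least squares. Toy model: no receiver
    clock bias and no noise, both of which a real system must handle."""
    x = np.zeros(3) if x0 is None else np.array(x0, dtype=float)
    for _ in range(iters):
        diffs = x - sat_positions                  # (N, 3)
        predicted = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = ranges - predicted
        J = diffs / predicted[:, None]             # d(range)/d(position)
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x += dx
    return x

# Four LEO-ish satellites (metres) and the ranges they would measure
# from a receiver at `truth`; the solver recovers the position.
sats = np.array([[7.0e6, 0, 0], [0, 7.0e6, 0],
                 [0, 0, 7.0e6], [4.0e6, 4.0e6, 4.0e6]])
truth = np.array([1.0e6, 2.0e6, 0.5e6])
ranges = np.linalg.norm(truth - sats, axis=1)
estimate = solve_position(sats, ranges)
```

The real trick in the paper is extracting those range-like observables from a proprietary signal's carrier wave; once you have them, the geometry is the easy part.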
As more and more low Earth orbit satellites go up, this approach and (likely) many others will reduce our dangerous reliance on GPS. Which is great for pretty much everyone, except maybe Elon? You would have thought a faster, more secure positioning system would be up there as a major selling point for Starlink, right? Don't be surprised if Elon lawyers up on this one. Link
Why don’t VCs do research?
Monday 11th October // research, venture capital
Why is research (so) rare in VC?
Research is a luxury. Ain’t nobody got time for that. We’ve got LP cash to deploy as soon as possible into the absolute best companies we can find. Every fund is time-poor, and so the idea of a team that sits around stroking their beards thinking about possible futures seems… like a waste of time? (In the time it takes me to think about the future of databases, Tiger has led two deals.) Also, the investment team is super smart and they can do practical research when diligence requires it, right? “Hey look, this quantum photonics deal came in, go away and look into the market, would you?” And I think there’s probably a view that it just doesn’t work. I could make the argument: long-term research is pointless when you operate under conditions of high uncertainty, and when, in the end, talent is one of the best predictors of success. It’s great to know that privacy-enhancing technologies are ready for investment and the market is going to be huge, but the fund lives or dies on its ability to pick the winners, right? Well, I dunno.
What is research for?
Research means many things to many people. In fact, it touches on all fund activities, so it’s best to discuss it in the context of the objective: diligence, technical diligence, marketing, deals, ecosystem, and thesis. There’s also research as it relates to data and how a fund manages data, but that feels more like engineering than research, although sometimes the work is described as research, or data and research. Francesco Corea at Balderton is a leading light in that domain.
- Diligence — The most common activity for a research team is to do internal diligence. Either reactively when a deal comes in or proactively to better understand a space or technology for future investments. All researchers are pulled into diligence at some point and it’s the most obvious way to add value to the fund.
- Technical Diligence — This is common in crypto, e.g. Paradigm, Thomas Walton-Pocock at Fabric, InflectionVC, et al. But it is less common in ‘traditional’ venture capital, as GPs typically have experience in the sectors they invest in, e.g. hardware or healthcare, and typically have a budget for external experts or venture partners for due diligence as required. This technical research, especially in crypto, extends to operational work like staking tokens or governance.
- Marketing — Although only a part of what they do, this is the Atomico State of Europe, Mary Meeker State of the Internet, Different Funds State of DeepTech Venture, and Air Street’s State of AI. Good quality research lends itself to easy marketing, although there is a balance between doing research because you want to grow reach versus doing it because you want to grow rich. Research for marketing can easily fall into content marketing. If you have a KPI for number of readers or downloads then you might be sailing too close to the wind…
- Deals — This was the MMC approach when led by David Kelner, with the State of AI series. It’s also often the objective behind startup landscape maps. This sort of research is lower down the funnel than research for marketing, and I assume anyone doing it is measured against inbound.
- Ecosystem — Let’s take Sam Arbesman at Lux. They don’t have a team of researchers or a research function; rather Sam, who is a complexity scientist, helps grow the Lux ecosystem in ways he thinks will be important. In theory, every member of the fund is growing the network, which, used effectively, can compound in value. Somebody like Sam has the freedom to follow his curiosity and to speak to interesting people without worrying about whether there is an investment opportunity. That freedom is a great way to develop novel insights.
- Thesis — Typically a fund is built around an investing thesis. “Hey look, the energy transition is going to generate outsized returns over the next 10 years, I know loads of startups, give me money and I will make good investments.” There isn’t a research function because the thesis is pre-agreed by the founding partners to raise the fund. Subsequent funds raised generally invest in different rounds, markets, regions, technologies, etc. The work I did at Outlier Ventures was an exception because we were not a GP/LP fund and so had the freedom to iterate on the investment strategy without being tied to what we promised LPs three years ago. Similarly, at Lunar Ventures the thesis is about the lack of technical VCs, and it can adapt as technology progresses.
The reality is that anyone doing research at a venture fund today might be doing bits and pieces of each of these. Atomico is pretty much doing all six, bundling research and data into investment products for the business. For others it might be a bit more ad hoc, with unclear objectives.
We strongly believe you have to separate the short-term, tactical needs of the investment team from the long-term, strategic needs of research. The main reason is that if you try to do both, the incentive will always be to focus on the short term (e.g. we need to do five customer calls before the deal closes next week). Also, you rarely find the same person who is world-class at both. The best investors are out there hustling and writing pithy tweets to attract startups (I assume that’s how it works). But a researcher should basically be reading books, talking to experts, and following curiosity that in the short term is likely to have no practical value. If you find a researcher adding practical value, you should worry. (jk)
So what are we doing at Lunar Ventures?
I’m leading our research program uncovering underpriced theses on the future. We’ve already validated the model a bit with our series on privacy-enhancing technologies and collaborative computing (™). We think this allows us to create ‘mini-thesis’ areas in which we understand the direction of a market and can assign probabilities to future scenarios. So with privacy tech, we looked at the market and said, “Hey, people aren’t really getting the implications of being able to share information by math rather than by law. We think, because of X, Y, Z, it’s more likely than not that we will see private and secure compute, so let’s invest in that stuff.” We are looking at post-big-data at the moment and asking: will data continue to make machine learning algorithms better? Or is there an underpriced scenario in which it’s the algorithm, and not the data, that matters? Other stuff bubbling up through our Roam:
- Hardware pluralism: what does a future world look like with analog and digital chips?
- What happens with simulated environments for reinforcement learning agents combined with virtual worlds?
- The intersection of tribes and tools: how does a lens of collective identity rather than the s-curve or surge cycle change the analysis of the future of crypto?
Well, research = insights = … profit. But more than that… legacy? Something that will live on after our deaths and make our lives just that little bit more meaningful? Probably not. But look, I just think it’s weird that the rest of finance has research departments staffed with hundreds of people while VCs are running around doing it themselves. It has always seemed a bit off to me.
Intel unveils second-generation neuromorphic computing chip
Tuesday 5th October // Neuromorphic computing, semiconductors
Right, so Intel just launched version two of its neuromorphic computing chip, Loihi, and a new framework for building apps, Lava. TL;DR: this is nice progress in bringing neuromorphic chips to market, and it's inevitable we'll see these chips in the wild soon as a complement to CPUs, GPUs, and FPGAs. They will be used for extremely low-power, high-efficiency applications at the edge like learning, sensing, and optimisation. It's surprising to me that no country has planted a flag in neuromorphic computing (e.g. "we are going to be the global leader in neuromorphic chips by 2030"), simply because the 'traditional' semiconductor industry is so geopolitically important and consolidated in Asia (TSMC and Taiwan, basically).
For some more context (for those of you with jobs and kids): neuromorphic computing is a different way to put silicon together to make a processor. Instead of your bog-standard von Neumann architecture, where data moves back and forth between the processor and memory at a regular cadence known as "clocked time", neuromorphic designs send data in bursts or "spikes" to wherever it is needed. Everyone says it is "brain-like" or "brain-inspired" in design, but for some reason that really grinds my gears; I wish we had a better metaphor. Regardless, here we are with a design that excels at applications whose calculations can be split and processed in parallel. This includes a whole class of constraint satisfaction problems, identification of shortest paths in graphs, approximate image searches, and real-world optimization problems. Link
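To make "spikes" concrete, here's a minimal leaky integrate-and-fire neuron in Python. This is a textbook model, not Loihi's actual neuron circuit, and the leak and threshold values are arbitrary; the point is that output is an event emitted only when the accumulated potential crosses a threshold, rather than a value produced on every clock tick:

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate leaky input current and
    emit a spike (1) only when the potential crosses the threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # integrate with leak
        if v >= threshold:
            spikes.append(1)        # fire...
            v = 0.0                 # ...and reset
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

A steady input produces sparse, periodic events; no input means no activity at all, which is where the low-power story comes from.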
Burn While Reading
Friday, 1st October, 2021 // Reading
Why have interesting articles tucked away in Instapaper when I can share them here:
1/ Major Quantum Computing Strategy Suffers Serious Setbacks. Link.
"So-called topological quantum computing would avoid many of the problems that stand in the way of full-scale quantum computers. But high-profile missteps have led some experts to question whether the field is fooling itself."
2/ Intel unveils second-generation neuromorphic computing chip. Link. Thoughts coming; held up because I'm fed up with the term "brain-inspired" and don't want to use it. But then I realised it's hard to summarise the difference between neuromorphic chips and "von Neumann" chips. I'll get there on Monday.
3/ Deep Learning's Diminishing Returns. Link.
4/ On the Internet, We're Always Famous. Link.
"Being known by strangers, and, even more dangerously, seeking their approval, is an existential trap. And right now, the condition of contemporary life is to shepherd entire generations into this spiritual quicksand."
5/ Kgbase. Link. A no-code knowledge graph tool. Haven't had the chance to play around with it yet, but looks cool.
How DeepMind Is Reinventing the Robot
Friday, 1st October, 2021 // Robotics
AI progress today is limited to applications with relatively constrained and predictable environments. But the real world is messy and unpredictable, and we want robots, don't we? So we need solutions. Can we just collect loads of data like we did with natural language or computer vision? Well, no, not really: the domain space of the "real world" is too large to collect enough training data (although of course people like Google are basically trying). Can't we just simulate training data with "sim-to-real" approaches? That's what OpenAI did when it trained a robot hand to solve a Rubik's Cube, but [[OpenAI disbands its robotics research team]] because, well, it doesn't work that well across domains. Although Facebook recently published some work on a stumble-proof robot that used a sim-to-real method: [[Facebook: Stumble-proof robot adapts to challenging terrain in real time]]. The other approach is to create better algorithms.
Raia Hadsell, Head of Robotics at DeepMind, is chasing that one up. Her team has come up with a technique called "progress and compress" addressing one of the main problems with the structure of neural networks, namely catastrophic forgetting: when an AI learns a new task, it has an unfortunate tendency to forget all the old ones. Specifically, the technique combines progressive neural networks, knowledge distillation, and elastic weight consolidation to train a neural network on separate skills, freeze the important weights, and consolidate the learning in an aggregated knowledge base that averages out all the learning.
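The elastic weight consolidation piece is the easiest to make concrete. After training on task A, you record the weights and a per-weight importance estimate (the diagonal of the Fisher information); the loss on task B then adds a quadratic penalty anchoring important weights to their task-A values. The penalty below is the published EWC formula, but the weight and Fisher values are toy numbers:

```python
import numpy as np

def ewc_loss(task_b_loss, weights, weights_a, fisher, lam=1.0):
    """Task-B loss plus the EWC penalty: weights with high Fisher
    importance are pulled back toward their task-A values."""
    penalty = 0.5 * lam * np.sum(fisher * (weights - weights_a) ** 2)
    return task_b_loss + penalty

weights_a = np.array([1.0, -2.0])   # weights after task A
fisher = np.array([10.0, 0.1])      # first weight matters 100x more
# Moving the important weight by 1.0 costs far more than moving the other.
cost_important = ewc_loss(0.0, np.array([2.0, -2.0]), weights_a, fisher)
cost_unimportant = ewc_loss(0.0, np.array([1.0, -1.0]), weights_a, fisher)
```

That asymmetry is the whole trick: the network stays plastic where task A didn't care, and rigid where it did.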
These approaches show great promise, although researchers point to other limitations beyond catastrophic forgetting, such as the need for distributed processing and giving a robot proprioception, a sense of its own physicality. As with anything in AI ([[DeepMind: Generally capable agents emerge from open-ended play]], [[CASP14: what Google DeepMind’s AlphaFold 2 really achieved, and what it means for protein folding, biology and bioinformatics]]), we should look to DeepMind for pushing the state of the art, and robotics is no different. This approach to robotics fits the "better algorithms" over "more data" thesis that we're investing against and developing at Lunar. Link
Fighting Big Tech or Web3
Monday, 27 September // Policy
Cory Doctorow writes in the Communications of the ACM: Competitive Compatibility: Let's Fix the Internet, Not the Tech Giants.
"If we are worried that shadowy influence brokers are using Facebook to launch sneaky persuasion campaigns, we can either force Facebook to make it more difficult for anyone to access your data without Facebook's explicit approval (this assumes that you trust Facebook to be the guardian of your best interests)—or we can bar Facebook from using technical and legal counter measures to shut out new companies, co-ops, and projects that offer to let you talk to your Facebook friends without using Facebook's tools, so you can configure your access to minimize Facebook's surveillance and maximize your own freedom. That would mean reforming the Computer Fraud and Abuse Act to clarify that it cannot be used to make Terms of Service violations into civil or criminal offenses; reforming the Digital Millennium Copyright Act to clarify that defeating a technical protection measure is not an offense if doing so does not result in a copyright infringement; comprehensively narrowing software patents to allow for interoperable reimplementations; amending copyright to dispel any doubt as to whether reimplementing an API is a copyright infringement; and limiting the anticompetitive use of other statutes including those relating to trade secrecy, nondisclosure, and noncompete."
I've always shouted "interoperability" when thinking about how to limit monopolies on the Internet. I think in the Convergence paper, even back in 2017, I made the argument that services built atop blockchains like Ethereum would end up winning versus Big Tech services, specifically because Ethereum is "permissionless" in the sense that Ethereum explicitly can't say who can and can't build on the network. I do wonder if all this talk of taking down Big Tech is fighting yesterday's war.
Chris Dixon is making a slightly different point with his tweetstorm on why Web3 is better than Web2. But the sentiment is the same. Web3 will outcompete Web2 because tokens give users property rights and in doing so align network participants to work together toward a common goal — the growth of the network and the appreciation of the token. He argues this fixes the core problem of centralized networks, where the value is accumulated by one company, and the company ends up fighting its own users and partners.
I don't fully agree with this narrative but if @cdixon (and the crypto community at large) are right, then all the energies going into fighting Big Tech are being wasted because the market is doing what the market does best: creative destruction.
It's interesting (ironic?) that crypto might be the most effective way to fight Big Tech monopolies.
Browsers are cubicles, it's time for open plan
Thursday, 23rd September, 2021 // Browsers
This one is for my colleague Luis (Or) Shemtov. We have thought a lot about the future of the browser here (I am an investor in Brave, and we recently invested in Stack). This week I came across Tyler Angert's note on what a browser of the 2020s should be. Tyler proposes a list of features he thinks would be useful: graph visualization and mind mapping, interactive history and version control, predictive search paths, super command-F ("Superf"), collaboration, automatic scraping and clustering, built-in word processing, backlinks, and an infinitely zoomable interface. The Superf feature is the most interesting to me personally, especially if delivered with the speed and accuracy of Superhuman for the browser (read: Superhuman & the Productivity Meta-Layer). The fact that no modern web browser allows for cross-tab search seems barbaric. Everyone seems to be talking about freeing data from data silos, but who will free data from tab silos? I do often think about agenda setting and the Overton window as they relate to UX. Everybody is so used to interacting with the browser that developers and users have failed to reimagine its design around the jobs-to-be-done. It's like our digital workspace is a cubicle and we are waiting for an open plan design. Actually, that's quite good, that will be the heading. Link
Speeding up biology innovation with agent-based simulation
Thursday, 23rd September, 2021 // Simulation
Next up is a story about agent-based simulations (ABS). This week, a group of scientists unveiled an open-source simulation engine for biomedical research called BioDynaMo. ABS are an inexpensive and efficient way to quickly test hypotheses about the physiology of cellular tissues, organs, or entire organisms. BioDynaMo can simulate complex medical cases in neuroscience, oncology, and epidemiology by using HPC clusters and hardware acceleration. The engine is 945 times faster than state-of-the-art baselines, making it feasible to simulate use cases with one billion cells on a single server! Which sounds like a lot to me, although I have no way to put that into context. Simulation is likely to become part of every scientist's toolkit as it gets cheaper, faster, and more accessible. What's interesting is that this was created because existing ABS platforms are apparently too generic for the needs of computational biology, which hints at the need for vertical-specific simulation engines. I’m thinking ABS for manufacturing, and ABS for the metaverse, obviously. Link
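BioDynaMo itself is a C++ engine, but the core loop of any agent-based simulation is simple: each agent carries local state and an update rule, and the engine steps every agent forward in discrete time, letting population-level behaviour emerge. A toy cell-growth sketch (the growth rate and division threshold are invented numbers, and this has nothing to do with BioDynaMo's actual API):

```python
class Cell:
    """A minimal agent: grows each step, divides above a threshold."""
    def __init__(self, volume=1.0):
        self.volume = volume

    def step(self):
        self.volume *= 1.1                  # grow 10% per step
        if self.volume >= 2.0:              # divide into two daughters
            self.volume /= 2
            return [self, Cell(self.volume)]
        return [self]

def simulate(cells, n_steps):
    for _ in range(n_steps):
        cells = [child for cell in cells for child in cell.step()]
    return cells

population = simulate([Cell()], 20)
```

The engineering challenge BioDynaMo solves is making exactly this loop run for a billion agents with spatial interactions, which is where the HPC and hardware acceleration come in.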
Toward next-generation brain-computer interface systems
Thursday, 23rd September, 2021 // Brain computer interfaces
Controlling machines with thought alone is what people imagine when they think of the far future. The problem with brain-computer interfaces today is that they use one or two sensors to sample up to a few hundred neurons. A study published in Nature Electronics used a coordinated network of 48 independent, wireless, microscale neural sensors, each about the size of a grain of salt, to record and stimulate brain activity. To build this system the researchers made advances in shrinking the electronics and developed a body-external communications hub, attached to the scalp, that supplied power and networking. The researchers had to bring together expertise in electromagnetics, radio frequency communication, circuit design, fabrication, and neuroscience in a truly interdisciplinary group. When thinking about future BCI startups, it's important to look at the breadth of expertise, as you don’t often see this variety of experience within a team. The next challenge is scaling beyond 48 neurograins: the current configuration could support up to 770, and the group envisions a system of thousands. This system sits in the 'more data' approach to solving the problem, versus alternatives leading with 'better algorithms' (H/T Elad Verbin for the framing). Usable systems will inevitably be a bit of both, combining distributed neural implants with better algorithms. Link
Let's just abstract everything away?
Thursday, 23rd September, 2021 // DevOps
This funding story was interesting as it feeds into our current exploration of the future of databases. Supabase, an open-source alternative to Google's Firebase, raised a $30M Series A. The company claims a developer can create a backend in less than two minutes. Essentially, it's a backend-as-a-service solution bundling a database (Postgres), authentication, instant APIs, realtime subscriptions, and storage, with serverless functions on the roadmap. This follows a popular trend towards abstracting away infrastructure that we've spoken about in the past: Xata does it for the database, for example, as does Silk; Supabase is extending it to the entire backend. Jean Yang writes here (H/T Al Esmail) about the dark side of abstraction and the difficulty of crossing an abstraction barrier, or finding and fixing bugs, when you abstract things away. She argues that the trend towards abstraction, or loss of control, has limits, and that tools are needed to embrace complexity and give devs better ways to observe and understand their systems. This abstraction-vs-complexity dichotomy, or what Jean calls the Software Heterogeneity Problem, feels like a useful way to think about the dev tools market, including databases, Luis (Or) Shemtov. Link
Are AI accelerators here to stay or a bridging technology?
Thursday, 23rd September, 2021 // Semiconductors
AI accelerators are hot right now, with new startups like Sima.ai, AIStorm, Hailo, Quadric, and Flex Logix, as well as incumbents like Intel, ARM, and Baidu, all recently launching edge computing chips. Deep Vision, an SF-based company, raised $35M led by Tiger Global (obviously). Deep Vision's chips are different because they are designed to keep data movement to an absolute minimum, reducing latency and improving performance. This is a common claim, as everyone fights to trade off performance and efficiency. The company estimates that 1.9 billion edge devices will ship with deep learning accelerators in 2025, which seems sensible, depending on how you forecast the penetration of alternative chip designs like analog (e.g. Mythic), photonic (e.g. Lightmatter), and neuromorphic (BrainChip, IBM, etc). It doesn't really matter if there are 1bn, 5bn, or 10bn devices; AI accelerators are no doubt going to be part of this market. The more interesting question is whether AI accelerators are just a bridging solution until analog chips reach scale, or whether accelerators will be part of every chipset even when analog chips are everywhere. I’m minded to say bridging, but would love to be persuaded. Link
The future of databases is one database
Thursday 9th September, 2021 // Databases
SingleStore, previously MemSQL, closed an $80M Series F at a $940 million valuation. It's a distributed, relational SQL database management system that can also store JSON, graph, and time series data. It's primarily used for data-intensive applications supporting HTAP, OLTP, and OLAP workloads, but it's also used to consolidate multiple repositories into a single database. It's pitched as the all-in-one database for analytics and AI, and it's a good example of the "consolidate all databases into one cloud database" vision of the future of databases. The fact it supports time series, JSON, and graph data is evidence that there is a scenario in which companies don't use multiple databases for different data types, but instead have one database to rule them all. This read is slightly undermined by the fact that the company is eight years old and, after $318.1M over nine rounds of funding, is only worth $940 million. The company was probably too early to the market, before the pain of big data had been felt, but the competition from Amazon, Microsoft, Snowflake, PostgreSQL, MySQL, Redis, etc. also just makes growth really hard. Link
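SingleStore's engine is its own thing, but the multi-model idea — relational rows, documents, and time series living in one database and queried together — can be illustrated with SQLite's built-in JSON functions. The schema and data here are invented for illustration and have nothing to do with SingleStore's actual API:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE events (ts REAL, payload TEXT)")  # time series of JSON docs

db.execute("INSERT INTO users VALUES (1, 'ada')")
db.execute("INSERT INTO events VALUES (1633000000.0, ?)",
           (json.dumps({"user_id": 1, "action": "login"}),))

# One query spanning both models: join relational rows to JSON fields.
row = db.execute("""
    SELECT u.name, json_extract(e.payload, '$.action')
    FROM events AS e
    JOIN users AS u ON u.id = json_extract(e.payload, '$.user_id')
""").fetchone()
```

The pitch of the one-database vision is exactly this: no ETL between a document store, a time series database, and a warehouse, just one query planner over all of it.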
If no one uses Apple Pay, what chance does everyone else have?
Thursday 9th September, 2021 // Payments
In a "people don't like change" shocker, only 6% of US consumers with Apple Pay activated on their iPhone actually use it to pay in store. This number is so low it's almost unbelievable, despite the fact that the share of US merchants accepting Apple Pay grew from 19% in 2015 to 70% today. The main reasons for the failure: the difficulty of persuading people to use Apple Pay instead of plastic cards, and the rise of contactless making card payments easier. Apple Pay isn't a 10x improvement over plastic; it's maybe marginally better, because it doesn't do anything that plastic can't. Apple Pay and all the other (bigco) Pays are examples of applying new tech (phones + mobile wallets) to existing processes without using the full capabilities of the new tech. It's not clear to me why any of the Pays aren't offering discounts and native BNPL services at checkout, or doing more interesting things around money management. There's a lesson here: behaviour change is very hard, and adoption in consumer fintech is a really, really hard distribution problem. Link
Tiny Lasers Could Finally Bring Us Really Smart AR Glasses
Thursday 9th September, 2021 // Augmented Reality
(I wrote this a few days before the Facebook/Ray-Ban launch.) Want to feel old? Google Glass launched back in 2013 and was obviously too early. People are claiming AR glasses are just around the corner, with Apple expected to launch something in 2023 (18 months away). STMicroelectronics, a semiconductor and MEMS technologies provider, recently launched the IEEE LaSAR Alliance (Laser Scanning for Augmented Reality) to push the adoption of laser-beam scanning (LBS) solutions (which, obviously, STMicroelectronics is good at making). LBS is useful for smart glasses because it uses a compact projector, produces a bright and rich image, and consumes relatively little power. LBS is an important enabler for reducing the complexity and cost of bringing smart glasses to market to compete with Apple, Luxottica/Facebook, and Microsoft (HoloLens). People wearing computers on their faces all day is as much a fashion problem as a technical one. It strikes me that the biggest questions around smart glasses will be privacy and retail sales as much as tech. Link
Next-Gen Chips Will Be Powered From Below
Thursday 9th September, 2021 // Semiconductors
As part of trying to understand the future of semiconductors, this article is all about power-saving techniques. The problem: as transistors continue to be made tinier, the interconnects that supply them with current must be packed ever closer and be made ever finer, which increases resistance and saps power. Solution: exploit the "empty" silicon that lies below the transistors using a manufacturing concept called buried power rails or BPR. The technique builds power connections below the transistors instead of above them, with the aim of creating fatter, less resistant rails and freeing space for signal-carrying interconnects above the transistor layer. But this doesn't work for a bunch of reasons so you also have to move the entire power-delivery network from the front side of the chip to the back side. This solution is called "back-side power delivery," or more generally "back-side metallization." I know man, I didn't really know that was a problem before reading this article, but there you have it. How can we improve efficiency of chips? With Back-side PDNs and BPRs. So now you know. Link
Greedy AI Agents Learn to Cooperate
Thursday 9th September, 2021 // Reinforcement Learning
The head of the Intel AI Lab, Somdeb Majumdar, outlines the Lab's recent work on collaborative reinforcement learning. He describes two solutions, Collaborative Evolutionary Reinforcement Learning (CERL) and Multiagent Evolutionary Reinforcement Learning (MERL), which essentially combine RL and genetic algorithms in novel ways. For the technical details, read the source. What's important in my view: 1) this is another proof point that state-of-the-art AI problems are being tackled by combining algorithms, supporting Pedro Domingos' argument that there is no master algorithm; 2) today's AI excels at perception tasks such as object and speech recognition, but it's ill-suited to taking actions. For robots, self-driving cars, and other such autonomous systems, RL training will enable them to learn how to act in environments with changing and unexpected conditions. For practical real-world robotic and autonomous systems, RL is clearly crucial. Link
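The CERL/MERL specifics are in the article, but the general "evolution plus learning" pattern is easy to sketch: keep a population of policy parameter vectors, score each one by its rollout reward, and breed mutated copies of the best. In this sketch a quadratic function stands in for an environment rollout, and every number (population size, mutation scale, generation count) is illustrative:

```python
import numpy as np

def evolve(reward_fn, dim, pop_size=50, n_elite=10, sigma=0.1, generations=100):
    """Toy evolutionary search: select the top performers and mutate them."""
    rng = np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([reward_fn(p) for p in pop])
        elites = pop[np.argsort(scores)[-n_elite:]]         # highest-reward agents
        parents = elites[rng.integers(n_elite, size=pop_size)]
        pop = parents + sigma * rng.normal(size=(pop_size, dim))
    scores = np.array([reward_fn(p) for p in pop])
    return pop[np.argmax(scores)]

# Stand-in for an RL rollout: reward peaks when the "policy" hits the target.
target = np.array([0.5, -1.5, 2.0])
best = evolve(lambda p: -np.sum((p - target) ** 2), dim=3)
```

The appeal of combining this with gradient-based RL, as CERL does, is that evolution only needs total episode reward, so it copes with sparse feedback where policy gradients struggle.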
Ads, privacy and confusion
August 2021 // Privacy
Ben Evans takes a look at the confusion around advertising and privacy. He uses the Apple CSAM backlash as a jumping-off point to argue that there is no consensus around what sort of privacy we want and what we are trying to achieve. The first approach is to allow advertisers to show relevant ads, and get some measure of ad effectiveness, while keeping the data private. This is the Facebook MPC and Google FLoC play. The challenge with this approach is that it's hard to get industry consensus around any particular initiative (e.g. W3C FLoC), and that it has difficult competition implications. Broader than the technical approach, there is a lack of agreement as to what counts as private. There are three approaches: on- vs off-device, first- vs third-party, and consent. Apple took the approach that on-device = private and off-device/cloud = not private. The other approach is first-party vs third-party data: it's okay for a company to track you across its own site(s), but not okay for you to be tracked across the web. The end of cookies suggests this model is broadly accepted, but the implication is that big sites will serve better ads, which is the opposite of what regulators actually want. Finally, the consent model with pop-ups has been accepted but is now basically meaningless. Apple believes there is implicit consent with opt-in device tracking because it's private. (One thing Ben doesn't cover is data unions, where data brokers get explicit consent to collect data and sell it, usually by giving the user a cut of the sale.) I see new forms of consent and data unions as a potential answer to the confusion. Users opt into data collection because they get paid to surf the web without pop-ups on every page. This is explicit consent for third-party data collection without the need for regulation. A browser plug-in is easy on the web, but expect the next battlefield to be data collection on devices and sites controlled by Apple, Google, and Facebook.
I see no reason why PET infrastructure and data unions can't both be part of the future advertising industry. If this scenario does play out, the value chain will likely consolidate further around the big players, with the data broker segment competing on consent. Link
Superhuman & the Productivity Meta-Layer
August 2021 // Software
Relevant for Luis Shemtov and our thesis on productivity, the future of work, and the investment into Stack. This essay is a response to Kevin Kwok's The Arc of Collaboration, which argued that productivity and collaboration have been handled as two separate workflows, and that modern software like Notion, Figma, and Airtable is succeeding because it brings them together. Julian makes the point that with the convergence of productivity and collaboration into independent SaaS tools, we now have a discovery problem because of silos. The solution? Aggregation, of course. He argues we need a new aggregation layer, or as he calls it, a productivity meta-layer. What might this look like? Maybe Discord for gaming. Slack certainly wants to do this. But a productivity meta-layer needs three things: being notified about (relevant) new developments; taking actions on these developments (if necessary); and building a (personalised) history of company records. The best candidate would be email, and Superhuman, the email client, could potentially fulfil this role. Its super-fast NLP search and action engine is a strong foundation to build from, and it's not a leap to see Superhuman begin to integrate third-party services so that, with a shortcut, you can add an email to Trello, save a contact to HubSpot, or whatever the action is. There's no reason G Suite or MS Office couldn't do something similar, though, in the same way Teams crushed Slack (145m vs 12m users). There is something adjacent to company records around the knowledge base, aka Roam. Roam is not where you talk or take action, but where you store and access stuff. I'm not sure of the right way to think about the interaction between these. At Lunar, Trello/email is where we do stuff and record stuff, and Telegram is where we talk about stuff. Another interesting thing to think about is speed as a USP. Before you use Superhuman it's hard to see why it is different, but there is something delightful about using it which isn't obvious from specs or features.
I suppose this is design/UX as a competitive advantage, which is hard to see as a long-term moat, but 🤷‍♂️. Link