First Brain Computing and the Mouse Cursor for the Physical World
We are quickly approaching an era where smartphones, watches, and computers are extensions of us as human beings. They predict what we want to buy, when we'll be hungry, what music we want to listen to, and at what time. They even help cars navigate to where we stand, the driver reduced to a vessel of the car, following turn-by-turn directions on a smartphone. All of this marks the dawn of a shift in computing: smart connected devices are becoming a sort of 'first brain'.[1]
As we reach the fringes of this new era, where computers will outnumber us by orders of magnitude, it's fascinating to look at the platform developments that need to occur in mobile to make 'first brain' computing a reality. Here are some thoughts on this 'post-mobile' world, and on how Apple and Google are positioning themselves for it:
- On the heels of their annual developer conferences, it's obvious that Apple and Google are aggressively building predictive computing into iOS and Android. Google has 'Now' and Apple just launched its 'Proactive' assistant. Each company leans on a different leg of the stool: Google's strengths lie at the intersection of cloud and smart data science, while Apple's approach is more client-centric, leveraging tight integration on-device and across devices. Both companies are recording tons of data to improve predictive capabilities and to help us better navigate the online and physical worlds.
- Predicting what we will do is what lets devices become our 'first brain': figuratively speaking, people will consult the device before they think. For smart devices to get there, there must be more intelligence and better sharing between device, cloud, and network. There must also be a consistent experience for the user, something like a browser. For platforms to stick, they need to deliver that consistency everywhere, just as Netscape did on every computer in the 1990s: because of the browser, people anywhere in the world could navigate the web, search, retrieve information, buy something, and interact. Apps serve this purpose on mobile today, but it's likely that a new model emerges for physical-world interactions.
- One of the interesting but less sexy layers in making smart devices more contextually aware is the 'physical layer'. The physical layer is the base of the networking stack, responsible for transmitting raw bits over a medium, which in mobile's case means radio waves through the air. Innovation here is gated by semiconductor advancements and breakthroughs in radio frequency (RF) technology. This is the stuff you hear about less often in tech, e.g. a breakthrough at MIT that enables 10x better spectral efficiency. Advancement happens slowly at this layer: new networks take years to build out, CapEx gates innovation, and cool new tech can take decades to be commercialized. But without physical-layer advancements, mobile computing would of course cease to improve.
- Interestingly, the physical layer doesn't benefit from Moore's Law the way compute does. GPS is the clearest example: it will never achieve high fidelity indoors, no matter how much faster chips get, because by the time satellite signals have passed through a roof and walls they are too weak for a small receive antenna to recover (a back-of-the-envelope link budget after this list shows why). I call these limitations the 'physical layer problem'. Think about it this way: even with transistor counts still doubling roughly every two years, a totally new physical-layer solution is needed to solve indoor location, a place where we spend 80-90% of our time.
- So… we need a 'new' physical layer: things like the sensors feeding Apple's M8 motion coprocessor, which continuously sample data about the ambient environment, and beacons that let your phone know it is near them (see the ranging sketch after this list). It's all very rough-grained today. Is your Apple Watch truly context-aware? No. But it's a Gen 1 device and it will get better. Whatever combination of approaches wins, this 'new' physical layer will be the third leg of the stool, complementing device and cloud. All three legs are needed for computers to mimic us, to become sort-of first brain devices. We're a ways away… the reality is our 'smart' devices have foolishly little understanding of what we're doing today.
- Apple and Google are both intrigued by indoor location, with public announcements but very little progress in the past two years; these are very hard problems to solve. It's not actually important to consumers whether their phone uses BLE beacons or GPS or sensors to understand its physical environment. What's important is that it just works. We will reach a time when the physics fades into the background and the user experience normalizes, just as it did with GPS for outdoor location, or with Netscape for browsing lists of links: when a door you've just approached can unlock itself, or your wife can be told the puppy has already been walked because the sensor on the dog's collar knows she was outside.
- The new physical layer that is developing is actually a fused approach: very sophisticated sensors on the device, historical records from the cloud, predictive data science, and smart connected devices transmitting wireless signals (a toy sketch of the fusion idea follows this list). It's crazy cool futuristic stuff. We are basically building computers with brains that will live all around us.
- If we can give developers accurate X and Y positioning indoors, everything will dramatically change… this is the holy grail, and it will enable developers to build absolutely amazing apps and services. It's worth restating, because this is the part that needs to be simple and consistent not only for consumers but also for developers. Developers must be given an easy way to retrieve an X and Y position without having to calibrate or learn a physical space, just as GPS hands the phone a position so Uber never has to think about satellites when navigating a car to you (a hypothetical sketch of such an API follows this list).
- This framework of abstracting complexity away from developers is the most important part of making the new physical layer work. Why? Because without developers there are no apps. Devs want to focus on their app or service, not the underlying tech: just give them an X and Y position and abstract the rest away. This is best explained with an analogy from platforms past:
- In the 1980s, the Macintosh and its mouse brought the graphical user interface into the mainstream. Anyone who lived through it remembers it as magic: the canvas was a 512 × 342 screen of pixels, and when you moved this new device called a mouse, a cursor moved. The mouse and cursor enabled developers to build all kinds of amazing apps. Devs didn't worry about the physics or UX behind why the cursor moved; they simply knew that if they drew regions, menus, and buttons, a user would move the cursor and click (a toy version of that contract appears after this list). This quickly led to killer apps like MacWrite and MacPaint, and soon to desktop publishing, which fundamentally changed business, and the world.
- Today, you are the mouse cursor, and the physical world is the new canvas. You are walking around with a supercomputer in your pocket and on your wrist. Unfortunately, the mouse-cursor metaphor is not yet fully realized: developers can't build apps with high precision indoors, GPS breaks down, and your Watch has no idea you've approached your treadmill or your dog. But all of this is changing fast. Given this fine-grained context, developers will invent services that once again change industries overnight… and the physical-world canvas is orders of magnitude bigger than the small screen we navigated with the mouse and browsed with Netscape.
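A few sketches to ground the ideas above. First, the GPS point: here is a back-of-the-envelope link budget in Swift. The satellite power, building attenuation, and receiver figures are rough, round-number assumptions (real values vary), but they show why faster chips alone can't rescue indoor GPS:

```swift
import Foundation

// Free-space path loss in dB: FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
func freeSpacePathLoss(distanceMeters d: Double, frequencyHz f: Double) -> Double {
    let c = 299_792_458.0  // speed of light in m/s
    return 20 * log10(d) + 20 * log10(f) + 20 * log10(4 * Double.pi / c)
}

// GPS L1: ~1575.42 MHz, broadcast from satellites ~20,200 km up.
let pathLoss = freeSpacePathLoss(distanceMeters: 20_200_000, frequencyHz: 1_575_420_000)

let satelliteEIRP = 57.0                 // dBm; roughly 500 W effective radiated power
let outdoors = satelliteEIRP - pathLoss  // ≈ -125 dBm reaching the ground
let indoors = outdoors - 25.0            // minus a rough ~25 dB for roof and walls

print(String(format: "path loss: %.0f dB, outdoors: %.0f dBm, indoors: %.0f dBm",
             pathLoss, outdoors, indoors))
// ≈ -150 dBm indoors: at or below the limit of what even high-sensitivity
// receivers can acquire. No amount of transistor scaling changes this arithmetic.
```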
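Next, the beacon leg, roughly as it looks to an iOS developer today: a minimal sketch using CoreLocation's iBeacon ranging. The UUID is just a placeholder for whatever a particular beacon fleet broadcasts:

```swift
import CoreLocation

final class BeaconRanger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Placeholder UUID; a real deployment uses its own fleet identifier.
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "my-venue")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            // proximity is a coarse bucket (immediate/near/far);
            // accuracy is only a rough distance estimate in meters.
            print("beacon \(beacon.major)/\(beacon.minor)",
                  "proximity: \(beacon.proximity.rawValue)",
                  "≈\(beacon.accuracy) m")
        }
    }
}
```

Note how coarse the output is: buckets and rough meters, not an X and Y. That is the gap between today's beacons and the positioning developers actually need.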
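The fusion idea, reduced to a toy: dead-reckon between fixes using relative motion from the device's sensors, then nudge the estimate toward each noisy absolute fix from beacons or Wi-Fi. This is my simplified sketch of the general technique (a complementary filter), not any shipping system:

```swift
struct Point { var x: Double; var y: Double }

struct FusedPositioner {
    private(set) var estimate: Point
    let fixTrust: Double  // 0...1: how strongly an absolute fix corrects us

    init(start: Point, fixTrust: Double = 0.2) {
        self.estimate = start
        self.fixTrust = fixTrust
    }

    // Relative motion, e.g. a step detected by the motion coprocessor.
    mutating func applyStep(dx: Double, dy: Double) {
        estimate.x += dx
        estimate.y += dy
    }

    // Noisy absolute fix, e.g. from beacon trilateration.
    mutating func applyFix(_ fix: Point) {
        estimate.x += fixTrust * (fix.x - estimate.x)
        estimate.y += fixTrust * (fix.y - estimate.y)
    }
}

var positioner = FusedPositioner(start: Point(x: 0, y: 0))
positioner.applyStep(dx: 0.7, dy: 0.0)       // walking east
positioner.applyFix(Point(x: 1.1, y: 0.2))   // a beacon fix corrects the drift
```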
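And the developer contract itself. This one is purely hypothetical: every name and type below is invented. The point is the surface area: one call, one position, no calibration, the indoor analog of what CLLocationManager already provides for latitude and longitude:

```swift
import Foundation

// Hypothetical API sketch: what an "indoor GPS" could look like to a developer.
struct IndoorPosition {
    let x: Double         // meters east of the venue origin
    let y: Double         // meters north of the venue origin
    let floor: Int
    let accuracy: Double  // estimated error radius in meters
}

protocol IndoorLocationService {
    func startUpdating(handler: @escaping (IndoorPosition) -> Void)
    func stopUpdating()
}

// A developer's entire involvement with the physics (handlers invented here):
// service.startUpdating { position in
//     if nearTreadmill(position) { startWorkoutUI() }
// }
```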
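Finally, the mouse-and-cursor contract from the analogy, reduced to a toy. This is an illustrative pseudo-toolkit, not the historical Macintosh Toolbox API: developers describe regions and handlers, and how the hardware produces an (x, y) is somebody else's problem:

```swift
struct Rect {
    var x, y, width, height: Double
    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= x && px < x + width && py >= y && py < y + height
    }
}

struct Button {
    let frame: Rect
    let onClick: () -> Void
}

// The toolkit's half of the contract: route a click to whichever region claims it.
func handleMouseDown(x: Double, y: Double, buttons: [Button]) {
    for button in buttons where button.frame.contains(x, y) {
        button.onClick()
        return
    }
}

// The developer's half: think in regions and handlers, never in mouse hardware.
let save = Button(frame: Rect(x: 10, y: 10, width: 80, height: 20)) {
    print("Saved!")
}
handleMouseDown(x: 42, y: 18, buttons: [save])  // prints "Saved!"
```

Swap "mouse hardware" for "beacons, sensors, and RF", and "(x, y) on a screen" for "(x, y) in a building", and that is the abstraction the new physical layer has to deliver.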
All of this has implications far beyond mobile. We are starting to approach ubiquitous computing, where cars drive themselves, drones create their own paths, and robots predict what we want. The world will feel very different in 5-10 years as much of this takes hold. We are entering a provocative new reality, one where computers live in the world with people, appearing everywhere and anywhere.
And as we peer into this new world, Apple and Google are using vast resources to create a digital copy of the physical world, then storing it in the cloud. What’s less clear is how each will use that info, and how exactly it will impact the world or our lives…that part is likely to play out somewhat differently than any of us can imagine.
[1] I loved this term when I first heard it from Jakub Krzych, Estimote co-founder and CEO.