All right. So I'm going to talk a little bit more about the software side of things, right? And it's kind of interesting the way this played out, in that I think my talk is actually going to bring a lot of the sort of discussion we've had so far together into something interesting. And so we talk about these major trends in computing, things like big data and deep learning and mobile and so on and so forth. But sort of under the covers, there's a set of things that are starting to change that we don't seem to talk about.
One of these, and it's sort of been brought up earlier, is that over the past 15 or so years, the 1970s computer disappeared. We have a new notion of the machine. The machine is this. We keep seeing this picture over and over again. It's a gigantic computer with thousands of cores, with terabytes of memory, with petabytes of disk. But it is also this, right? And even now this, our thermostats are now part of this giant machine that we're working with.
And to an ever-lessening degree, it is our laptops. And in a world where that's the case, all of a sudden the game starts to change, right? Performance starts to matter again, right? We used to rely on Moore's law. It'll be fine. We'll just wait a couple, you know, six months, and it'll be fast enough and we'll do well. But the truth is, we're trying to squeeze, you know, every ounce of power out of these little guys in our pockets, right?
And on top of that, we care about battery life. So we just need general efficiency in computation. And then on top of performance, now everything is highly, highly networked. And that means our systems are now full of latency, full of disorder. And we have to deal with failure at any one point in time. And the truth is, our models for computation never really included those eventualities, right?
We sort of assumed that, hey, that 1970s computer with just a single CPU would be fine for us. And when I start talking about this stuff with other people, they're like, ah, performance. I know what I'll do. I'm going to pull out C, and I'm going to write some code. And I know that's fast, because it's always been fast and it will be fine.
But honestly, in industry, we are struggling to write even simple programs against this new kind of machine just because our models don't fit it, right? And so that's caused something very important to start happening. And we sort of touched on this before, which is that industry and research are starting to come back together again, right? You go back to the 70s, there kind of was no distinction between the two. They were basically the same.
But over the years, big corporations started sort of pushing the agenda, right? And then as a result of some failures in the 80s, like fifth generation computing, we kind of lost faith in sort of computer research. And so what's interesting about that is we kind of did this like depth first traversal of computation, right? We sort of ran with whatever the industry had at the time.
And so the result, of course, is that every mainstream language kind of is just some form of object oriented programming. And every database eventually starts to look like SQL. And we deal with concurrency through this horrid mess of locks and sadness. And we kind of got to the bottom of this tree. And we're looking around and starting to realize, none of these are real great options.
And I would argue the reason that this happened is that pure engineering makes this very pragmatic trade off, which is that you tame complexity by adding a layer of abstraction. And this is like stacking teacups on top of teacups, right? You can do this for a little while. But eventually, those teacups, that stack, starts to lean. And it threatens to fall over. And I would argue that it's thanks to big data that we hit this big wall.
We're like, great, we have Hadoop, but it's not getting us as far as we want. We have to rethink just doing this pure engineering stuff. Maybe we need to go back to first principles, right? And of course, this is where industry and research start to come back together again. And so the most interesting advances happening right now are coming from people like Ion Stoica, right, out of Berkeley, whose lab built Spark.
We're looking back towards research to find a path forward. And the reason I think that that is so important is because what we're really trying to get back to is simplicity, right? Instead of adding more teacups on top of this thing, wipe all the teacups away and find the foundational principles that we need in order to embrace this new machine, right? Gain power through simplicity, not more complexity.
And it is the folks in this room, right, who are doing exactly this. Who are looking for sort of the foundation upon which we could actually start to build this new way of thinking about software and computation in general. And so I thought it'd be interesting to go through a couple of examples, and distributed computing is all the rage these days, so I'll start there, right? And I think one of the more interesting things happening there is what are called commutative replicated data types, or CRDTs. So if you have a big distributed system with lots of information constantly changing, and you want every node in that system to have the same information or something like it, you have this problem with coordination, right? You need to coordinate all of these changes so that everyone has them. And the traditional way of thinking about this is, oh, great, we've done transactions as a coordination mechanism. We'll do distributed transactions. And so we created a bunch of really complicated mechanisms, a bunch of really complicated coordination protocols like Paxos to try and handle this.
But there's, of course, a couple of problems with that. One, like I said, they're really complicated. They're very hard to get right. But two, and even more importantly, they're really hard to reason about, right? And the reason CRDTs are really interesting, and the exact mechanism isn't important, is that they manage to just sidestep the problem entirely. They say, no, no, no. Here's a data type that can absorb updates regardless of order. OK, that's infinitely simpler than what we were doing before, because it sidesteps the issue of coordination entirely. And I like this example because it's a great example of the pure academic approach to this problem: find the mathematical principles that prove this is a wonderfully simple idea, and then figure out how to put it into practice, right?
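To make that concrete, here's a minimal sketch of one of the simplest CRDTs, a grow-only counter. Every node increments only its own slot, and merging takes the per-node maximum, so merges commute and every replica converges no matter what order updates arrive in. This is an illustrative toy, not any particular library's API.

```python
class GCounter:
    """Grow-only counter CRDT: one slot per node, merge by pairwise max."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count contributed by that node

    def increment(self, n=1):
        # A node only ever touches its own slot.
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Pairwise max is commutative, associative, and idempotent,
        # which is exactly why no coordination is needed.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())
```

Two replicas can increment independently and merge in either order; both end up with the same total, with no locks and no Paxos in sight.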
Another great example of this is also wonderfully simple. You're starting to see a lot more systems built on what are basically append-only event logs, immutable event logs. Because again, we have this problem in a distributed system. We learned very quickly that shared, mutable state is a disaster in that world. So we have to do something else. And there's nothing simpler than just keeping a log of every single thing that ever came into the system, right? And then there are interesting engineering challenges in taking that log and turning it into the state of the system at any one point in time. And we're seeing really neat stuff, like great research coming out of, for example, Saarland University, a project called OctopusDB, where they reimagine the database based on this idea. And other projects like Apache Kafka and Apache Samza are based purely on, let's make really efficient distributed system software based on just keeping a list of everything that's ever happened.
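The essence of that idea fits in a few lines: the current state is just a fold over every event that ever happened. The event shapes below are invented for illustration; real systems add snapshots and compaction as performance optimizations, but the model is this simple.

```python
from functools import reduce

log = []  # the append-only event log; entries are never modified

def append(event):
    log.append(event)

def apply_event(state, event):
    """Pure function: (old state, event) -> new state."""
    kind, key, value = event
    new = dict(state)
    if kind == "put":
        new[key] = value
    elif kind == "delete":
        new.pop(key, None)
    return new

def current_state():
    # Replaying the log from the beginning reconstructs the state;
    # replaying a prefix gives you the state at any earlier point in time.
    return reduce(apply_event, log, {})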
Now, I started with distributed systems, but this applies to far more than just distributed systems, right? In a completely different vein, you have folks like Facebook building these incredibly complicated user interfaces, meant to be used by lots and lots of people. And they're running up against this wall where they can't make them fast enough, they can't make them work on our phones. And on top of that, they just can't reason about them anymore, right? There are so many things on the screen that they're having a really hard time. And so they went back to a really old idea, what's called immediate mode UI. So traditionally, when you build some user interface, you have a button, right? And that button has a bunch of state underneath it. It has a background and it has a hover state and so on. And you mutate that state in order to change the UI.
Immediate mode is a much simpler idea. You just redraw the UI every single frame, right? And that turns your interface code into a pure function of the state of the application. And all of a sudden, when you're in a world like that, it's stupidly easy to understand exactly what you're going to get. As long as you know what's on the left side, you know what you're going to get out on the right side. It's much easier to reason about, and interestingly enough, it's a lot easier to make fast.
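A toy version of that "pure function of state" idea, with a text renderer standing in for a real drawing layer (all names here are invented for illustration):

```python
def render(state):
    """The whole UI for one frame, derived purely from application state.

    No retained widget objects, no mutating a button's hover or
    background state: same input state, same output, every time.
    """
    lines = [f"Count: {state['count']}"]
    label = "Stop" if state["running"] else "Start"
    lines.append(f"[ {label} ]")
    return "\n".join(lines)

def on_click(state):
    # Events don't mutate the UI; they produce a new state,
    # and the next frame is simply render(new_state).
    return {**state, "running": not state["running"]}
```

Testing a UI like this is trivial: call `render` with a state and look at the result. There's no hidden widget state to set up or tear down, which is exactly the reasoning win the Facebook folks were after.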
And so we have immediate mode UI abstractions over the DOM in HTML. Another fun example, right? Constraint programming is making a comeback from the old days. SAT solvers and SMT solvers are getting ridiculously fast, like unbelievably fast. And again, what could possibly be simpler than just writing out a set of constraints and having a system solve it for you, right? And it's neat because this is getting applied in some interesting ways.
So for example, Apple's Auto Layout in iOS is based on a linear inequality constraint solver called Cassowary, right? We're starting to see people realize, hey, we learned pretty early on that query optimizers, as an example, can write much better queries than human beings can. Why not apply that to other kinds of searches? And so we're seeing constraint programming make a comeback.
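The programming model is the appealing part: you state what must hold, not how to find it. Here's a deliberately naive finite-domain solver to show the shape of it; real SAT/SMT solvers and Cassowary are vastly smarter about the search, but the interface is the same idea. The layout example and all names are invented for illustration.

```python
from itertools import product

def solve(domains, constraints):
    """Brute-force constraint solver.

    domains: {var_name: iterable of candidate values}
    constraints: callables taking {var_name: value} and returning bool
    Returns the first assignment satisfying every constraint, or None.
    """
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        candidate = dict(zip(names, values))
        if all(c(candidate) for c in constraints):
            return candidate
    return None

# Example: split a 100px row between two boxes, where the second
# box must be at least twice as wide as the first.
solution = solve(
    {"a": range(101), "b": range(101)},
    [lambda v: v["a"] + v["b"] == 100,
     lambda v: v["b"] >= 2 * v["a"]],
)
```

Notice the layout code never says *how* to compute the widths; swapping in a better solver changes performance, not the program.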
And the last one, I'm particularly interested in this one, which is there's a group of people, you know, quietly saying, let's get rid of SQL and let's bring back the relational database part. And they're starting to look at this because they're wondering whether or not Stonebraker was really right, right? Whether or not you can create a general purpose database that is as fast as the specialized ones, or could at least compete with them in some interesting way. And we're seeing this in the re-emergence of Datalog, a language that has been dormant for decades; it came about around the same time Prolog did, right? But it's based on this idea of having a relational database without the SQL. And we're starting to see this get into industry.
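The essence of Datalog is facts plus recursive rules evaluated to a fixed point. Here's the classic example, ancestor as the transitive closure of parent, as a naive bottom-up evaluation sketch in Python (the facts and names are made up for illustration):

```python
# Facts: parent(alice, bob), parent(bob, carol), parent(carol, dave).
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

def ancestors(parent_facts):
    """Naive bottom-up evaluation of:
         ancestor(X, Y) :- parent(X, Y).
         ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    Repeatedly derive new facts until nothing new appears (a fixed point).
    """
    ancestor = set(parent_facts)
    while True:
        derived = {(x, z)
                   for (x, y) in parent_facts
                   for (y2, z) in ancestor
                   if y == y2}
        if derived <= ancestor:  # no new facts: we've converged
            return ancestor
        ancestor |= derived
```

Recursive queries like this, which are painful in plain SQL, fall out of the model for free, which is a big part of why the language is resurfacing.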
People like Rich Hickey built an entire database called Datomic on top of Datalog, right? And I could keep going on and on. There are tons more examples here. But the interesting thing is that all of these things relate to those trends I was talking about, these currents that I think are going to be really important as we move forward. The first, like I said, is that industry and research are coming back together again, right? And maybe that means that we can go back and look at all of those branches of the depth first search that we skipped, right? The branches of that tree that we just neglected.
And that's being driven by the fact that we have this new machine and we don't know what to do with it yet. You know, we can't program like we did in the 1970s; C is not going to save us. You can't just stick the actor model on top of it and hope it's going to work. We need fundamentally different ways of thinking about computation. And the way we're going to find those is by making a trip back to simplicity, right? Trying to find the fundamental truths that we can use to build systems this way.
And there's one more that I haven't really talked about yet, but sort of stems from all of these, which is that if we actually do manage to come up with a simpler version of programming, it offers us the opportunity to maybe finally have a shot at democratizing computation, right? You know, Peter was talking about, hopefully at some point everyone has access to these really powerful machines and machine learning and predictive capabilities. Let's take a first step and get everyone just the ability to do really simple computation, which is not possible right now because, hey, programming is hard, right? But if we do manage to simplify the system and actually get it to the point where it works the way we need it to, maybe we do have a shot at that.
And the implications for the people in this room are pretty tremendous, because that means the computer turns into a tool again. And then we start thinking about things a little bit differently. And so there you go. These are the things that I believe are shaping the way things are moving forward. And like I said, some of these efforts are pretty quiet, but things like CRDTs are actually starting to make their way into industry, right? So the latest release of Riak, Riak 2.0, actually has a set of CRDT data types inside of it.