This morning, right after my three-year-old son and I lost interest in rebuilding his train set when his 15-month-old sister wrecked it for the third time, my son went for the iPad mini.
It wasn't working either. More specifically, he wanted to watch a Netflix movie, which he couldn't because the iPad was in airplane mode. So he did what he needed to do to fix it: he handed it to dad.
Just then, my daughter toddled up and handed me a ball, which was working just fine.
So there I sat, legs crossed on the floor, Netflix in one hand, a ball in the other—two toys with two very different conditions for failure.
The ball has two states of failure: 1) It could get lost or 2) it could run out of air.
The iPad Netflix app has many more than two states of failure. It is beyond me to list them all, but here's a sample:
- The iPad could get lost
- The iPad's battery could die
- Netflix's servers could fail
- DNS servers could fail
- Our wireless router could fail
- The iPad could freeze up
- The iPad could be in airplane mode
- The Netflix session could time out, requiring a new sign-in—which, from my son's perspective, is a failure since he can't sign in without me
Let's assume the probability of a lost ball is the same as the probability of a lost iPad (very similar incidence rates in my household). Further, let's assume the probability of the ball losing air is zero over short (play-session-long) time periods.
Right away, we see that no matter how well designed the Netflix app might be, its probability of failure is much greater than the ball's.
Simply put, as a system gets more complicated, there are more things that can go wrong. For the general public, this intuition is enough, but in terms of Dr. Drang's Venn diagram woodshedding of Marco over misinterpreted probability theory, a system's probability of failure is the probability that any one of many failure events occurs: this could go wrong OR that could go wrong OR, if part A fails, parts B, C, and D will also fail immediately.
With an iPad, the more individual probabilities you add on top of the base probability of being lost, the larger the total probability of failure becomes. No matter what.
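To make that arithmetic concrete, here's a minimal sketch with made-up, per-play-session probabilities (every number is hypothetical), assuming for simplicity that the failure modes are independent:

```python
from math import prod

def failure_probability(mode_probs):
    """P(at least one mode fails) = 1 - P(no mode fails), assuming independence."""
    return 1 - prod(1 - p for p in mode_probs)

# Hypothetical per-session probabilities for each failure mode.
ball = [0.01]                      # lost
ipad = [0.01,                      # lost (same rate as the ball, per the assumption above)
        0.02, 0.01, 0.005, 0.01,   # dead battery, Netflix servers, DNS, router
        0.02, 0.03, 0.01]          # frozen, airplane mode, signed-out session

print(f"Ball: {failure_probability(ball):.3f}")
print(f"iPad: {failure_probability(ipad):.3f}")
```

However you shuffle the individual numbers, stacking more modes on top of the shared "lost" probability can only push the iPad's total upward.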
Pr[Adults are dumb] = ?
As we mature as both individuals and societies, we seem to have this urge for ever more complicated tools and toys. We also tend to place increasingly complicated expectations on what these things should do.
A modern computer is made of systems within systems. A MacBook is not hardware, not software, but both of them at the same time. And a MacBook is not really the MacBook of our expectations if it can't connect to the Internet—an entirely larger and more complicated system.
With Yosemite and iOS 8, we have even more interdependence through features like Handoff. Now, a MacBook, iPhone, and iPad are no longer three things but a system of things—an ecosystem with an even higher chance of failure by virtue of sitting atop an ever-rising house of cards.
I think it's worth pondering the time we spend fixing our tools and toys versus the time we spend solving problems and actually getting to play.
I'm not convinced that having complex tools is a necessary condition for achieving remarkable results. The Apollo spacecraft's computer was far less complicated than an iPad's, after all. Far less. Yet we haven't landed a man on the moon since 1972.
If complexity and connectivity are necessary conditions for the perceived success (the complement of failure) of any given technology, it stands to reason that the risk of technological failure will increase over time, not fall.
I don't see this as a dystopian inevitability though. I think people are at their best and truest selves in the moments following failures. And to a simplistic extent, the human experience is one of either solving problems or creating problems to solve.
If we've learned anything in the last hundred years or so, it's that technology doesn't promise to simplify our lives. It promises to keep our lives extremely interesting.