+/-20 Years of Computing
In less than 140 characters:
Great new things by decade: 90s: make/save data; 00s: find data; 10s: visualize data, extract greater meaning; 20s: democratize data
The 90s were an incredible time. This was the decade when most computing focused on generating data and saving it. The conventional wisdom of the day seemed to be that, through some magical process, massive amounts of data could be used to solve anything. This was the era of the chess-playing supercomputer Deep Blue. This was the “anything is possible” childhood of the Internet.
Then came Google. Google made the 2000s the decade of search. By then, what it had started in 1998 had reached a seriously huge critical mass. Early in the decade, though, many people and companies struggled to understand the Internet. This was a scary time for me, as I watched high-profile collapses like Pets.com learn some hard lessons in business fundamentals (e.g. 1,000,000 views * $0/view = $0).
It was during this time that businesses built on solid foundations of revenue and purpose really grew. Entire generations learned that typing a few keywords into a box could lead you to damn near anything you wanted to know. Organizations and aspiring individuals learned that by pushing information to the Internet in a public way, they could capitalize on this traffic. This was a cool decade.
That brings us to today.
We’re starting to feel a little overloaded by the massive amounts of information available to us. Finding a dataset, tracking it over a period of time, and comparing it to another dataset is still a fairly challenging task today. This is where I expect to see some big “Wows” in the 2010s: visualization of data.
I’ve seen some absolutely amazing things coming from TED lately (go watch those now) and am excited about what fiscally strong companies and universities can create. Enabling non-PhDs to extract meaning and value from massive amounts of data has been on the radar for the last 20 years; I think we’re finally at a point where it can happen on a grand scale.
Computing power is no longer a limitation.
Connectivity is no longer a limitation.
We will see some very impressive and innovative ways to make sense and meaning of data very soon.
I think the success of data visualization will lead to passionate movements to democratize data. By around 2020, it will no longer be acceptable to conceal, hide, or privatize data. There will be a very successful movement to make government and university data available through extremely accessible means: APIs or methods that probably don’t exist today. Individuals will adopt standards and contribute, for free, to the pool of data. This long-tail effect will be interesting if not incredible.
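If open-data APIs of the kind predicted here materialize, consuming them might look something like this minimal sketch. The dataset name, field names, and figures are all invented for illustration; the JSON string stands in for a hypothetical API response.

```python
import json

# Hypothetical response body from an imagined open-data API.
# Every name and number below is made up for illustration only.
response_body = """
{
  "dataset": "population_estimates",
  "unit": "persons",
  "records": [
    {"region": "A", "year": 2029, "value": 1200000},
    {"region": "B", "year": 2029, "value": 950000}
  ]
}
"""

# Parse the response and aggregate across regions.
data = json.loads(response_body)
total = sum(r["value"] for r in data["records"])
print(f"{data['dataset']}: {total} {data['unit']}")
```

The appeal of this future is exactly how boring the code is: if the data is public and machine-readable, combining sources becomes a few lines of parsing, not a research project.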
Organizations will jump on board and contribute to this stream by dropping the unsuccessful paywalls they constructed in the 2010s. Vague patents, distorted and abused in the 2010s to monetize data, will be invalidated or expire, and the floodgates will open.
In 2028, people will start discussing the merits of a conventional census, reinvigorating arguments made in the lead-up to the 2020 census. Doing away with the census, an idea that will seem ridiculous in 2018, will have a lot of support. We’ll do one anyway (at great expense), but it’ll be the last time. Around then (2030), near-real-time data of greater quality than today’s census numbers will be available to all of us.
Thanks to the Internet, I’ll end up back on some future incarnation of this page to see how completely and utterly wrong I was about everything (I can’t wait).
Michael Haren said on 2010-03-08
I just saw this: http://www.google.com/publicdata/directory. Very cool