In the early days of home computing, computer architecture designers did not have a crystal ball to reveal the most common tasks a computer user would engage in over the next thirty or more years. We still don't have that crystal ball, but we can speculate about what could have been, had we known.
Some innovative architectures, such as the Commodore Amiga (the initial A1000/"Lorraine" model, developed from 1982 to 1985), went as far as putting graphics and sound in dedicated custom chips operating in parallel, offloading that work from the CPU. Yet today we know that graphics and sound are not the only priority in common tasks; networking and mass storage are sometimes the bottlenecks to be tackled. We can of course forgive the lack of foresight from 1980s designers: back then, the Internet was not available to regular consumers, and cheap mass storage was still a mirage.
Let's stop before I continue on a Captain Hindsight rant fest. Before proposing any holistically revised architecture, I'm going to use this post to identify the patterns of general computing tasks that we, as humans, execute frequently, from my "default" perspective.
The form factor of the general computing node is not that important for this exercise: anything goes, from rack-mounted servers to smartphones, with tablets, laptops, desktops, and expandable gaming consoles in the mix. We're trying to figure out what is common across them in current-day usage. My current categorization of general computing tasks is as follows:
- Gaming/Simulations;
- Media Content Consumption and Production;
- Communications (computing node inter-connectivity on a planetary packet network);
- Office Productivity/Digital Assistance;
- Archival and Data Mining.
The dividing model I chose is just that, a model. For instance, I could have given Scenario Prediction (Artificial Intelligence) a category of its own, but that would merely be a different perspective on simulation and data mining.
Will other major general computing patterns emerge in the future? I am not sure. What I do realize is that new form factors, interfaces, and sensors (and combinations of them) find new contexts in which to engage in different computing scenarios, as has happened recently with smartphones and tablets. Still, the major patterns in the categorization above have not changed much since the late 90s; I could take this model back roughly fifteen years and it would still make sense.
This post is a build-up to a holistic (transistor-based) computer architecture I've been theoretically playing around with. Stay tuned.