When I was a kid growing up in the 80s, I remember that our first computer was an Atari 400. The console had a built-in keyboard and a top-loading slot where cartridges could be inserted. The back of the unit had a serial port for connecting peripherals. We had the Atari 410 tape drive, which used audio cassettes to record programs and save files, but you could also buy software on cassette.
As crude as this hardware was in comparison to today’s technology, I still think it was rather brilliant to store software on audio cassettes…but I digress. Despite the 410’s aforementioned brilliance, it had one major deficiency…it was slow. I remember that my Mom and Dad had bought me a Mickey Mouse game on cassette, and nobody could ever figure out how to get it to load. Eventually the cassette and the 410 tape drive were dismissed as defective.
One day, in an attempt to curb my growing boredom, I decided to connect the 410 tape drive and give the Mickey Mouse cassette another go. Having exhausted all of my cartridge games, the only game I hadn’t played was the Mickey Mouse game. I put the cassette in, rewound it, and pressed play. The tape played through, then stopped at a certain position. This was the same result everybody else had gotten when they tried to get the game to load. After 15 minutes of looking at the cursor on the screen, nothing had happened. By this time I had drifted away, daydreaming; I was probably thinking about how great it would be if I could actually play this game. After about 45 minutes to an hour, the Mickey Mouse game had loaded, and the iconic mouse’s face was displayed in 8-bit glory on my screen. As it turns out, there was nothing wrong with the 410 or the cassette holding the software; the machine was simply so slow that this was how long the program took to load. Everybody had been so used to the nearly instantaneous load times of the game cartridges that nobody thought the tape would take longer to load. Granted, I found this out by accident, but I learned a valuable lesson that day…have patience. All good things come to those who wait.
I relay the above childhood story only because I think it directly relates to the next portion of this article. Progress bars, download windows, CD import screens, and loading screens are ubiquitous in modern-day computing. Irrespective of what computer, gaming console, tablet, netbook, or smartphone you have…progress bars will be familiar to you. Whether you ignore the progress bar or eagerly watch it in anticipation of your download or process finishing, the progress bar has an impact on your computing experience and on your life. After recently downloading a 1.49 GB file, I started to reminisce about how waiting for computers has affected my life. I remember when I had a Pentium 100 MHz PC in 1996 with a 28.8 kbps modem that would download at about 2.8 kilobytes per second. Downloading even the smallest of files from the internet was a time-consuming and unwelcome chore, but most times I would simply zone out and think while the file transfer was being received. Downloading gave me time to organize my thoughts and think deeply about various things–mostly school work at that time. Fast forward to today, where I’m able to download at 768 kilobytes per second, and I only really have to wait when I’m downloading files in excess of 500 megabytes. Waiting for downloading/loading is now almost a thing of the past because there really isn’t much waiting to be done–with the exception of some PS3 games that load files to the PS3 hard drive. Computers and broadband internet have gotten so swift that you only get the opportunity to think when downloading really large files.
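As an aside, the gulf between those two connection speeds is easy to quantify. Here is a quick back-of-the-envelope calculation in Python, using the 1.49 GB file and the two transfer rates mentioned above (it assumes sustained throughput and ignores protocol overhead):

```python
# Rough download-time calculation for the speeds discussed above.
# Assumes a constant transfer rate with no protocol overhead.

def download_time_seconds(size_bytes, rate_bytes_per_sec):
    """Return the time in seconds to move size_bytes at a constant rate."""
    return size_bytes / rate_bytes_per_sec

GB = 1024 ** 3
KB = 1024

# A 1.49 GB file at the 1996-era 2.8 KB/s versus today's 768 KB/s.
old = download_time_seconds(1.49 * GB, 2.8 * KB)   # roughly 6.5 days
new = download_time_seconds(1.49 * GB, 768 * KB)   # roughly 34 minutes

print(f"28.8 kbps modem: {old / 86400:.1f} days")
print(f"768 KB/s line:   {new / 60:.0f} minutes")
```

In other words, the same file that downloads in about half an hour today would have monopolized the phone line for the better part of a week in 1996.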
I might seem out of date to today’s generation, which expects everything computer/internet related to happen instantaneously, but I think something is lost with everything being so expeditious–not that I want to go back to 28.8 kbps modems. This relates somewhat to my having read Nicholas Carr’s “The Shallows.” I have written a review of “The Shallows” in a previous post, so I won’t go into it in detail here, but Carr’s main thesis is that the rapid pace of information technology is ruining our brains’ ability to think deeply and thoughtfully about something for a prolonged period of time. While I agree with Carr on many points, I would argue that progress bars and having to wait for a process to finish could be our last bastion of hope in salvaging what is left of deep cognitive thought.
The progress bar reminds us that in order to get something we have to be patient; something that I think is totally lost on many in our postmodern times. I find this quite important because it proves that technology merely aids us in achieving efficacious results. The real work is still done by a person who has the ability to patiently think through a problem and develop a creative solution. Humans have cognitive abilities that extend far beyond the finite realm of computing, and we should not deprecate those abilities. Who knows…while staring at a progress bar somebody might have an ah-ha moment and discover a new game-changing idea or innovation that could once again transform the world.
I realize that some might consider this to be too idealistic. I’m sure some readers are thinking “a game changing idea coming from some guy zoning out looking at a progress bar…yeah right!” While I’m sure I can’t change your mind, I’ll just add that I have had some really significant ideas develop while zoning out to a progress bar. In fact, many parts of this website were conceived while downloading HD episodes of the “Engadget Show” whilst zoning out to the progress bar.
Whether you agree, disagree, or think that this article is pure rubbish…I’d love to hear your thoughts on the subject.
With the release of the iPad 2 in the US (March 11, 2011), and its impending global release (March 25, 2011), I’ve decided to look at the legacy of Apple hand-held hardware.
I think the best way for me to examine said legacy of Apple hand-held hardware is to share my personal experiences with the Apple devices that I have had the privilege to own. To add context to this history, I think it’s pertinent to mention that I’ve been an Apple/Mac user since the mid-nineties (circa 1995/96). I’ll spare everybody the history of the exact Apple computers/devices that I have owned–and still own–over the years. I will mention that my history with Apple computers began with machines that used the Motorola 68000 line of processors; in 1999 the logical progression of upgrading led me to the PowerPC line of Macintosh computers.
Around this same era (circa 1998/99) I got the Apple Newton MessagePad 120. I bought the MessagePad 120, originally released in 1996, used off eBay for $180.00; the device was in mint condition. At the time the US Robotics Palm Pilot was all the rage. I had a Windows PC as well, but the Palm Pilot wasn’t a suitable hand-held candidate for me, as I was using my Mac more than my Windows machine. The MessagePad 120 had a 20 MHz ARM processor that was considered blazingly fast for a hand-held device at that time. It has a black-and-white, calculator-quality LCD display, a stylus for user input, and a decent selection of applications. Overall, the device is utterly antiquated in comparison to today’s technology. Notwithstanding the MessagePad’s primitiveness, in terms of mobile hand-held computing Apple was well ahead of every other company. Even if we consider the industrial design of the MessagePad 120…it still has a pleasing aesthetic appeal despite being 15 years old.
The next hand-held Apple device I purchased came 4 years after I acquired the MessagePad 120. In the summer of 2003 I broke down and bought a 3rd generation 10 GB iPod. I must admit that when the 1st generation iPod was released in 2001, I was less than impressed. At that time I was more interested in CD players that could play MP3 CDs, and I had a Sony MiniDisc player that adequately served my portable music needs. I bought the 3rd generation iPod for 3 reasons:
1. My Minidisc player broke.
2. I had just got a G4 iMac, and the iPod was a nice accessory.
3. I was looking for a new gadget to buy. : )
In truth, if my MiniDisc player hadn’t failed I probably wouldn’t have bought the iPod. The MiniDisc player could only hold 74 minutes of music (about 15-20 songs), and you had to record all of the tracks to the disc in real time. Despite its shortcomings the MiniDisc player worked well; however, the iPod’s advantages were clear. I could sync my entire iTunes library to the one device, and subsequently carry all of my music around with me at all times.
An additional 6 years would pass before I purchased my next piece of Apple hand-held hardware. In Spring 2009 I got the 2nd generation iPod touch. This was most definitely an exponential leap forward in terms of technology. Besides being an MP3 player, the iPod touch could browse the web over Wi-Fi, run high-calibre applications that you would previously only think of running on a desktop/laptop computer, and play video; it also incorporated an innovative touch-screen interface.
Looking back on this legacy of Apple devices, I see the iPod touch as the amalgamation of all of the best features of the MessagePad 120 and the 3rd generation iPod. However, to classify the iPod touch as a mere amalgam of the two aforementioned devices would be a severe understatement, as the iPod touch adds layers of functionality far beyond anything the MessagePad 120 or 3rd generation iPod could come close to accomplishing. The reason why I see the MessagePad 120 and the 3rd generation iPod as the forerunners to the iPod touch should be axiomatic. It’s clear that the genesis of the iPod touch, iPhone, and iPad/iPad 2 devices was the Newton MessagePad line of products. While the Newton products were not commercially successful, they did lay the foundation for portable hand-held computers that could rival or potentially replace a desktop/laptop computer. This is interesting when you consider that it has taken 15 years for the industry to reach the point where a hand-held device–iPod touch, iPad, iPad 2, Xoom, Galaxy Tab–can rival the speed, power, and features of a desktop or laptop computer.
Regardless, Apple had the idea and proof of concept for mobile computing with the Newton MessagePad products. Is it any wonder that Apple is currently leading the industry in hand-held/tablet computers? With 15 years of research, development, industrial design, and design aesthetics under its belt, it doesn’t seem that remarkable that the iPod, iPhone, and iPad are commercial success stories.
I just finished reading “The Shallows” by Nicholas Carr. I think it is an important text given the rapid rate of technological advancement in our postmodern world. I will try not to give away too much of the book’s detail, because I think everybody should read it for themselves; instead, I’ll cover the points that I found most interesting. The main idea suffused throughout the text is that our use of the internet/technology is distracting us from being able to think deeply in a sustained state of concentration. This kind of deep thinking is what has helped mankind reach our current level of technological sophistication. As the book’s narrative progresses, Carr gives us examples from past philosophers and cultural theorists (Friedrich Nietzsche and Marshall McLuhan) of how technology itself–not the technology’s content–affects how we think and develop cognitively. Carr also provides empirical examples of cutting-edge neuroscience research that further support his claim that technology is distracting us and limiting our intellectual abilities.
We tend to take our digital lives for granted; we use computers and the internet on a quotidian basis all the while thinking that these high-tech devices are enhancing our lives, intelligence, and ability to multitask effectively. As Carr points out…this couldn’t be further from the actual truth. Carr leads us to the conclusion that true understanding can only be achieved by thinking about one concept at a time for a sustained period of time. Books allow the human brain to be able to reach this level of true understanding because a book forces you to concentrate on one concept at a time in a methodical and linear manner.
The web provides an antithetical experience to the act of reading a paper book. It encourages continuous digressions from one subject to another, from one medium to another–text to images to video to links. As a result, our brains are unable to process and transfer all of these disparate snippets of information from our short-term (working) memory into our long-term memory. By using the content-rich multimedia that the web has to offer, we are actually not absorbing information effectively, due to cognitive overload. Carr paints a grim picture of the contemporary human intellect, and we are not left reassured that this will improve in the days to come.
While Carr focuses mostly on the negative neurological aspects of internet usage on the brain, he does briefly cover the benefits. Internet usage has made us more adept at being able to rapidly find specific information in a heuristic fashion. This ability to quickly find information has also promoted our related skills of skimming and parsing documents for the most relevant bits of information. However, this information swiftness comes at the cost of not being able to remember and contemplate the information that was acquired with such immediacy. The result is an overall superficial understanding.
As I was reading “The Shallows” I found myself wondering what the intellectual landscape will look like in 5-10 years. Our rapacious appetite for new technology, faster internet connections, and innovative hardware/software is only increasing. What will become of the human mind in the future? Does this technological boom mark the end of the western intellectual/philosopher tradition, or will a new internet-philosophy emerge as we continue further in the digital abyss? Could we possibly see an intellectual elite rise to power in emerging/third world nations as they become more connected to the internet?
None of the above questions are easily answered, and any answer that could be formulated would be speculation. The question that I think is the most salient is the latter, regarding an intellectual elite gaining impetus in emerging and third world nations. This is one aspect that Carr omitted in his mainly Western-centric treatise. Does technology have the same effect on the brain of a person from a non-Western culture? Would said non-Westerner react the same way to the scientific tests described in “The Shallows” as their Western counterparts? We can infer that they would react in a similar way given that the human brain’s physiology is the same, but humans are not the sum of their brain’s physiology. The culture in which we are born and raised has a dramatic effect on how we absorb, parse, and process information. I think it would be interesting to see how people from other cultures would react to these tests.
Having lived in a Western culture my whole life, I have grown up and developed alongside computer technology as it was being released. With that being said…I feel that computer/internet technology is a part of my culture and my mind, and in some ways it defines who I am. I think it would be interesting to find out how people in developing nations view computers, the internet, and technology vis-à-vis their traditional cultures. In 5-10 years, could the youth of these cultures start to see computer/internet technology as a part of their culture and how they define themselves? I hope that this would be the case. I also hope that Carr’s depiction of the new technologically savvy, yet “shallow,” brain doesn’t destroy the rich history of human intellectual thought that has flourished over the past epochs.
I know that there are many other videos about Watson on YouTube that give more technical insight into how it actually functions, but I found the above video particularly interesting because it relates to human intelligence. As humans we tend to pick up on nonverbal cues like gesture, facial expression, and body language in order to add context to the words being said to us. This allows us to interpret those words and derive further meaning, and subsequently understanding, from what is being said. Nonverbal cues tend to be best understood when we are directly engaged in a conversation with somebody. In this situation the nonverbal cues are of equal importance to what is being conveyed to us verbally as the conversation progresses, irrespective of whether we are conscious of them or not.
After thinking about this for some time, I started to contemplate how we derive meaning from a story that we read in a book. A book has a narrative structure, however, there are no nonverbal cues from which we can interpret meaning. In the case of books…how do we derive the same kind of meaning that we get from engaging in a conversation with somebody? It seems that from our life experiences we are able to derive meaning from Hamlet, The Scarlet Letter, or Crime and Punishment because we have built up a memorised library of nonverbal cues from which we draw in order to interpret meaning from a book’s narrative.
Watson has no way to analyze the kinds of visual cues that humans use to add context to the questions being asked. Instead, Watson parses each individual word in each Jeopardy question and cross-references this information against its own (self-contained) databases in order to understand the context of the question. Once an understanding of the question is achieved, it then looks for answers that best fit that context. It uses statistical analysis to determine a list of possible answers, and then ranks those answers from most correct to least correct. Watson is also able to learn from, and eliminate answers based on, the other contestants’ responses (right or wrong). All of this takes the power of some 3000 3.0 GHz processors, terabytes of memory, and a tremendous amount of electricity. Regardless, Watson is a milestone in AI research, and it does produce mostly correct answers at lightning-fast speed; it outperformed Ken Jennings. So what does this mean for human intelligence?
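The scoring-and-ranking step described above can be sketched in a few lines of Python. This is only a toy illustration of the general idea (combine several evidence scores per candidate answer, then rank by confidence); it is not IBM’s actual DeepQA pipeline, and the candidate answers and scoring functions below are invented:

```python
# Toy candidate-answer ranking: each scorer returns a confidence in [0, 1]
# for a candidate; confidences are averaged and candidates sorted descending.
# Purely illustrative -- not IBM's DeepQA implementation.

def rank_candidates(candidates, scorers):
    """Score each candidate with every scorer and sort by mean confidence."""
    ranked = []
    for answer in candidates:
        scores = [scorer(answer) for scorer in scorers]
        ranked.append((answer, sum(scores) / len(scores)))
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked

# Hypothetical evidence sources (all numbers are made up):
scorers = [
    lambda a: 0.9 if a == "Toronto" else 0.6,  # keyword overlap with the clue
    lambda a: 0.1 if a == "Toronto" else 0.8,  # expected answer-type match
    lambda a: 0.5,                             # prior popularity of the answer
]

for answer, confidence in rank_candidates(["Chicago", "Toronto"], scorers):
    print(f"{answer}: {confidence:.2f}")
```

The point of averaging several weak evidence scores is that no single source has to be right; a candidate that scores well on keywords but fails the answer-type check (like “Toronto” here) gets pushed down the list.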
In the video above, Dr. Ray Mooney states at 1:07: “What the brain is doing is actually a lot like what computer hardware is doing.” This has been a principal paradigm in cognitive science research since the field’s inception. It is very easy to see the connection between the brain and a computer. Both take inputs of some kind, process the data, and then provide an output of the processed data. The electrons traveling through the circuits of a computer are also analogous to the firing of neurons in the human brain. Although the similarities are clear, I think we need to be careful when applying the brain/computer correlation, because there are fundamental differences between a human and a computer. Firstly, computers are deterministic, meaning that a computer will give you an answer based solely on the data that is given to it. If it’s given erroneous data, the output will also be erroneous. If Watson were given poor facts, it would give an incorrect answer every time. Humans, on the other hand, do not suffer from this limitation. We are able to take bad information and “spin” it into a cogent, plausible argument (lawyers being the classic example). Secondly, computers must work within the bounds of formal logic. Watson, at its foundational level, must work on a YES or NO (1 or 0) logical structure…there is no middle ground. While I’m sure many would argue that most situations in human life distill down to YES or NO as well, there is still room for an expansive middle ground where YES and NO are not always taken as the ultimate truths. The perfect example of this is in the legal field, where facts and logic are important but human rhetoric also plays a large part in determining guilt or innocence. I would hate to live in a world where a “Watson” type computer was determining guilt in a court of law.
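The garbage-in, garbage-out point can be made concrete with a trivial sketch (the lookup-based “answerer” and its fact tables are invented for the example):

```python
# Determinism in miniature: the output depends only on the inputs, so the
# same stored facts always yield the same answer -- and a wrong fact yields
# the same wrong answer every single time.

def answer(question, facts):
    """Return the stored answer for a question, or 'unknown' if absent."""
    return facts.get(question, "unknown")

good_facts = {"capital of France": "Paris"}
bad_facts = {"capital of France": "Lyon"}  # erroneous input data

for _ in range(3):
    print(answer("capital of France", good_facts))  # always "Paris"
    print(answer("capital of France", bad_facts))   # always "Lyon" (GIGO)
```

A human handed the bad fact might notice it clashes with other things they know and question it; the deterministic lookup never will.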
Over the last week I have been thinking about Watson a lot, and I’ve also been thinking about what it means to be human. I know that Watson wasn’t designed to be a true artificial intelligence vis-à-vis a human, but notwithstanding, its ability to answer questions is remarkable. While I am a fan of Watson and I think the technology is truly a feat of human intelligence and creativity, I’m skeptical as to how intelligent Watson actually is. Is it even possible, with enough processing power, memory, servers, etc., to devise a machine that, at its core, is purely logical and deterministic, yet emulates the human mind/condition? If such a machine were built and it could pass the Turing test, would the machine be intelligent like we are? All of these questions are very hard to answer. My personal opinion is that as humans we too live by rules that are programmed into us by our upbringing, social class, education, and values. In this way we are similar to computers; however, we have the ability to–on a whim–break our own rules and change the trajectory of our lives. As we live, we are constantly writing and re-writing the code of our lives. The computer remains fixed in a set of predetermined rules that it cannot break unless a programmer gives it the option to break them (in which case the programmer is breaking the rules that he or she programmed into the machine). While Watson is autonomous on some levels (decision making, machine learning), it cannot be defined as truly self-governing. Human autonomy, and our ability to break rules, is what truly separates us from intelligent machines. I’m sure that many would argue the point of whether or not humans are actually autonomous; the philosophical debate could go on ad infinitum.
I’m going to end this article by paraphrasing something that I read by the French philosopher Jean-Paul Sartre. Sartre said in one of his essays that he had never felt so free as when he was in a Nazi prisoner of war camp in the 1940s. He came to the realization that there is always a choice, no matter what predicament you might find yourself in. He reasoned that if the situation in the prisoner of war camp became unbearable, he always had the option of taking his own life. This is analogous to my previous example of changing the rules/your own rules on a whim. Until a computer can work through a similar kind of problem and realise its own autonomy, I don’t think we can call it truly intelligent.