In the
future, I will try to stick to articles outside of the ones assigned for class,
but the “Is Google Making Us Stupid” article was very interesting to me. It
discussed a few points that I enjoy thinking about regarding the
exponentially growing information age. The main idea of the article was that
using computers changes the way we think and that this expanding use may bring
about important societal and behavioral changes within our species as a whole.
On top of that, I think that this synthesis between our own consciousness and
the vast wealth of knowledge known as the Internet can bring about biological
and evolutionary changes for humanity as well.
The author
of the article, Nicholas Carr, opens and closes his piece with a connection to
the artificial intelligence system from 2001:
A Space Odyssey, HAL 9000. Carr writes that, through the fast-paced
absorption that comes with learning on the web, he is unable to stay focused on
lengthy works of writing and, in general, to learn the way he used to. He feels his
brain changing, much like the slow dismantling of the AI’s mind in Stanley Kubrick’s film. Now
that so many people have smartphones and social media accounts, practically all
of the world’s knowledge is at our fingertips. Those who are wary of this
advanced technology say that such easy access to infinite knowledge will
make us forget the importance of remembering anything at all. In opposition to
the naysayers, and with the growing technologies of Google Glass, cybernetic
implants, and nanocomputers, the next logical step for ease of access to the
Internet would be to sync it directly with our consciousness and thought
processes.
Brain-computer interfaces (BCIs) are a relatively common technology nowadays, mostly used
for neuroprosthetic devices meant to restore lost or damaged hearing, sight, or
movement. While the medical applications of BCIs for repairing
sensory-motor functions are astounding, their uses for altering cognitive
functions can be just as extraordinary. Imagine being able to Google anything
with just a thought. There are many different ways this could be
implemented. One way is to replace the common desktop computer
outputs, such as the monitor and speakers, with direct feeds into your
own biological inputs, i.e., your eyes and ears. Signals
from a chip could interact with your occipital lobes and superimpose a computer screen
onto your field of view, while whatever you would normally hear in that moment could be
overlaid with music or whatever other sounds accompany the desktop in your mind.
Another way to implement BCIs is to connect our thoughts to the Internet more directly,
so that we can browse different web pages as if we were browsing our own thoughts.
This version is a bit more complicated to implement, but it would provide the user with a
more streamlined connection to the Internet’s vast stores of data.
Both of
these BCI implementations have their pros and cons. Obviously, the
psychological effects of having the entire World Wide Web attached to your mind
could be catastrophic to any mere human. But if our species were to somehow
connect our conscious minds to the Internet, we would be one step closer to the
singularity, an event predicted by some computer scientists and mathematicians
in which artificial intelligence either merges with our collective human
intelligence and radically alters the evolution of our species, or completely
exceeds human capacity and control and eradicates our civilization. Either way,
the increasing expansion of our technological ability and dependence is a very
concerning subject that could have a wide array of consequences for our
society.
I also felt very strongly about this article. I believe that our brains are in fact changing: with every new technological advance, more and more people are becoming addicted and seem to lose sight of the important things in life. An example of this would be social media. In my opinion, people are too focused now on being what the world wants them to be instead of just being themselves. It's all about how many likes you can gain on an Instagram selfie and how many retweets your tweets can get. Someone, somewhere in the world, every single night, is watching a sunset and forgetting to actually watch it and soak up its beauty. Instead, people are too hung up on "capturing the moment" for their followers to look at. While I am someone who believes that pictures say a thousand words and truly do capture unforgettable moments, I think we're too used to having the world at our fingertips. Taking the picture is fine, but spending another 25 minutes after that picking the perfect filter on Instagram and thinking of the perfect Taylor Swift quote to put in the caption is the problem. I can't tell you the last time I was in a social situation where everybody wasn't constantly checking and using their phones. To me, it is actually upsetting to witness so many of my peers addicted to the constant approval of others...
Even before social media, I'm sure many of us can think back to the days of playing outside and having parents call us in from the front door. Cell phones had just started coming around in the earlier years of my life, and by the time I hit 13 everyone had their flip phones. We may not realize it all the time, but we truly are the generation of advancement. We have lived through an unbelievable amount of change in our society, and it's easy to see (in my opinion) the difference between the way we grew up and the way children are growing up now with all of the new technologies. You see so many young kids now with iPads and cell phones, things that didn't even exist for us at that age. Of course, all of this leads back to the debate you've set up here: will these new technologies eventually end up wiping out our civilization? Or will machines, merged with our own human intelligence, really end up taking over the world? As for putting a chip into our brains, obviously I'm 100% against it... but I can only speak for myself. Only time will tell the future.
An interesting note here is that concern over neural uploading and cognitive centralization seems to be advancing far faster than the actual technology needed to produce it. While it is the focus of many new research groups and the recipient of millions of new grant dollars, remarkably little is actually known about the human brain, and even less is known or understood about consciousness. Today most computational neuroscience researchers are forced to focus on the purely physical aspects of the brain, attempting to replicate neural firing patterns and the unique plasticity (ability to rearrange) that they show. While many models exist to do this (the Hodgkin–Huxley model [1] at their forefront for the time being; a minimal sketch of it appears after the references below), they fail to replicate the speed or interconnectivity of an actual brain: a simplistic neural network of a thousand or so neurons, with perhaps ten times as many connections, can take several minutes to compute one second of actual simulation. With these numbers in mind, it is interesting to compare this to the average human brain, which consists of approximately 86 billion neurons with 10^14 to 10^15 connections between them [2].

When looking at BCI implementations, it is important to realize that they all treat the human brain as a black box, interpreting its output to a limited extent without being able to understand what causes that output. Additionally, there is limited, if any, understanding of the brain's input mechanisms in response to particular stimuli. Given how limited our understanding of the human brain is, the ability to upload our consciousness, the most abstract piece of this enigma, is (while not impossible) far from realization.

A more approachable path, and the current aim of many forward-thinking companies, is virtual reality and augmentation. Many companies are now vying for the market in headsets whose uses range from Google Glass [3], which offers true augmentation but limited display capabilities, to the Oculus Rift [4], which provides a fully immersive display but no possibility for augmentation. None of these devices, however, relies on any understanding of the human brain or consciousness. Instead, they 'augment' our already connected peripherals. Think of it as adding a duplex mechanism to a pre-existing print spooler: the printer itself does not even recognize that the device is attached, nor does it need to; the spooler feeds the printed paper into the duplex device, which feeds it back to the printer upside-down. In the same way, while wearing an augmentation device the user's senses act just as they normally would, only with additional or modified input. Noticing this, the question becomes: how is this any different from the tool of a Neanderthal? When hands weren't good enough for hunting, humans created the pointed stick. Is it not in keeping that humans see fit to create another tool, one that pushes the constant connectivity of life (email, SMS, chat, etc.) to an extension of the mind, just as the pointed stick became an extension of the hand?
With all of this taken into consideration, it appears far more likely, at least for the foreseeable future, that our individualism will persist regardless of the method of interaction.
[1] http://en.wikipedia.org/wiki/Hodgkin–Huxley_model
[2] http://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
[3] http://en.wikipedia.org/wiki/Google_Glass
[4] https://www.oculus.com
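To give a rough sense of what simulating even one neuron involves, here is a minimal sketch of the Hodgkin–Huxley model in Python. It uses the textbook squid-axon parameters; the forward-Euler loop, constant input current, and step size are my own simplifying assumptions for illustration, not how a production simulator would do it.

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (squid giant axon; mV, ms, uA/cm^2, mS/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening/closing rates for the m, h, n gating variables
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of a single neuron driven by a constant current.

    I_ext: injected current (uA/cm^2), T: total time (ms), dt: step (ms).
    Returns the membrane-voltage trace.
    """
    steps = int(T / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32          # approximate resting state
    trace = np.empty(steps)
    for i in range(steps):
        I_Na = g_Na * m**3 * h * (V - E_Na)      # sodium current
        I_K  = g_K  * n**4     * (V - E_K)       # potassium current
        I_L  = g_L             * (V - E_L)       # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace[i] = V
    return trace

if __name__ == "__main__":
    voltages = simulate()
    print(f"Peak membrane potential: {voltages.max():.1f} mV")  # spikes reach roughly +40 mV
```

Even this toy version needs four coupled equations per neuron and time steps of a hundredth of a millisecond; a realistic simulator would also have to couple billions of such neurons through synaptic currents, which is where the cost described above explodes.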
Stan, I thought you brought up some good points relating to this article. I also found the "Is Google Making Us Stupid" article interesting and felt that the author was spot-on with some of his observations, some of which definitely apply to me. As I was reading the part about not being able to apply oneself to a reading like before, it occurred to me that even while trying to just read this article, I also had two email tabs, an Amazon tab, and a Facebook tab open. I kept a mental note of how often I multi-tasked (or tried to multi-task) from that point until I finished the article, and I was surprised at how often I felt an overwhelming need to get something else done, so much so that I would stop reading, go do the task (even if it was just sending an email or adding something to my calendar), and then return to the reading. I noticed that the farther into the article I read, and the more little tasks I completed, the less I was able to concentrate, and I would re-read lines or paragraphs, which was not happening in the beginning. It's amazing what we feel the need to do when we're using technology, and how most of the time we're not even aware of it. Even right now, I have two windows open with at least 4 tabs in each window; it's interesting to think about why our brains feel the need to multi-task so much, especially when technology is involved.
I thought your ending about BCIs tied in well with this article. A lot of times, new technology like this is frowned upon, as the old saying that 'people fear what they don't understand' holds true. I would imagine that explaining what a BCI is to someone who is not particularly tech-savvy would automatically lead them to think of the cons: what would happen if something went wrong, or if this type of technology turned evil or something. However, I thought it was good that you also focused on some of the pros. Like with almost anything, there is good and bad, but with technology, I feel it is important to remember that one of its biggest selling points is that it allows us to do things that could not previously be done. A great example of this is what you said about BCI use in neuroprosthetic devices. Outlets are being created with this technology and similar technologies to truly make a difference in a person's life. While the article brings up excellent points and leans toward a more pessimistic view of how technology is changing the way we think, I believe it is important to keep in mind all of the good that it has the ability to do as well.