This past summer, I worked on a research project here at Stevens, from 9 to 4 every weekday. With no homework or other obligations, I ended up with a lot of free time. What did I do with it? For the most part, I read. I tried out a couple of genres, but always found myself drawn back to 19th-century horror stories.
There's something special about monster stories in that time period. The 19th century seemed to have the self-imposed isolation of today's society, but without the technological advances. There always seemed to be a sense of "community" in the 17th and 18th centuries, and it isn't really until the 19th century that one starts imagining people wanting to be left alone. Even though I'm sure there were, it's hard to imagine bums living on the street before then. Everyone seemed to have their own nice little cottage in the 1700s.
This is what I find so appealing about the horror stories of that time period - it was no longer uncommon for people to wander the streets alone at night, seemingly asking for a werewolf or a vampire to attack them. Soon, the newspapers would be printing, "Mysterious Attacker Drains Victim's Blood," and you've got the makings of a solid horror/detective story right there.
"But Dan," you may be asking, "this is a computers and society blog!"
That it is, and that's my problem with today's horror stories. Computers. Technology. In the 1800s, if you were attacked and being chased, you were lucky if a bum heard your screams through his drunken haze. Nowadays, everyone has a cell phone, ready to call for help at a moment's notice. You can't imagine how much I long for a story where the victim helplessly cries out in their final moments and no one even knows they're gone, at least for a while. With today's technology, there'd be Amber Alerts and missing-person searches as soon as someone didn't answer their texts for a couple of hours.
But that isn't my only problem with today's horror stories - there are so few new ideas. Everything's some rehash of a vampire, or a werewolf, or, most often, a zombie. I just finished reading Aileen Erin's Becoming Alpha, and while it's a great book, it essentially embodies today's horror stories: everyone's trying to be postmodern, and almost satirical about it.
Back when the first stories were being written, nobody had any idea what a werewolf or a vampire was. When a character started turning into a werewolf, all they knew was that they were losing control and becoming increasingly furry. In Becoming Alpha, when the main character is turned into a werewolf, she turns to her fellow werewolves and asks how much of their abilities are shared with those of movie werewolves.
Granted, I realize there's not much more space to be covered, but I really feel like we need a new kind of monster. Everything's been done so much that even the writers seem like they're getting tired of it. We need a new Jekyll and Hyde, or Frankenstein, or Dracula, or even something like Dorian Gray. The closest we've come to a new kind of monster is a sentient computer turning evil, and even that's getting old, with the Terminator and HAL 9000 effectively dominating that territory.
Who knows, maybe a monster that emits electromagnetic waves that disable your cell phone so you can't call for help? Or is that maybe a little too scary?
Monday, September 29, 2014
3.62 * 10^2159 articles
As a socialist who babbles constantly about the exponential returns (both monetary and political) of increased capital, I find myself in a state of horrified fascination with the idea of a company literally trying to own all possible creative capital, as Quentis Corporation is currently trying to do. Quentis is attempting to use distributed computing to generate and copyright all possible combinations of text, graphics, and music. I'm going to proceed with a thought experiment based on a world where this is actually feasible, which it is (for various reasons) absolutely not. The technology doesn't exist, copyright law doesn't work that way, and despite their claims of progress, Quentis can only reasonably be a scam or a parody.
Despite all the reasons Quentis can't succeed at what they claim to be their goal, the fact that they or their investors would even try speaks volumes about the environment we live in. Clearly, businesses and wealthy investors merely see human capital as an unfortunate expense standing between them and maximized profit margins. There's a terrifying possibility, with the rise of automation and the exponential growth of computing power, that the economy will swing drastically in favor of physical (or digital) capital over the next few centuries. While I don't believe we'll ever reach 100% automation, the idea that someone's trying to replace even artists and musicians with machines is proof that corporations likely would employ 100% automation if they could.
Now, the unfortunate reality is that more ridiculous ideas have been entertained and upheld in court. Despite the fact that the US's and many other nations' intellectual property laws require some minimum level of human creativity to justify a copyright, the people involved in Quentis are not the type to mess around. Quentis undoubtedly has lots of high-profile investors with big bankrolls and powerful lawyers, and people like that can get away with a lot. I do believe Quentis is pushing the boundary a bit too far for our current political climate to handle, but it draws attention to the attitude of big business people, who believe their money puts them above everyone else. While I am a fervent supporter of intellectual property laws, this is a sign that we may need to modify our approach to IP.
Given all this, I am glad I can confidently say that this isn't the end of art or artists, and that Quentis won't come close to accomplishing what it is trying to do. Fifteen years ago, the popular cartoon Futurama joked about Quentis' premise, claiming that in the year 3001, "the only names not yet trademarked are 'Popplers' and 'Zittzers'." This is, of course, absurd, since there is an astronomically large number of conceivable names, even if you restrict the idea to only pronounceable ones. However, Quentis claims it can effectively do just that, by using n-grams and other algorithms to create all possible meaningful texts under 400 words, and it wants to try similar methods with images and music. Anything over that length will have to take excerpts, and Quentis will presumably try to profit off of that, too. Even this basic approach won't work; just generating the English text alone would require tens of googols' worth of permutations. Creating "every possible image" would be a beast unto itself. The most achievable goal is to copyright all musical riffs (since there are really only 13 recognized pitches, and US courts are willing to protect relatively short segments).
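To see just how lopsided the text problem is compared to the riff problem, here's a quick back-of-envelope sketch in Python. The vocabulary size and riff length below are my own illustrative assumptions, not figures from Quentis:

```python
import math

# Back-of-envelope combinatorics for "all texts under 400 words"
# versus "all short musical riffs". The constants are assumptions
# chosen for illustration, not anything Quentis has published.

VOCAB_SIZE = 250_000   # assumed English vocabulary, including inflections
TEXT_LENGTH = 400      # words per text, per the 400-word cap

# Naive count of 400-word sequences: VOCAB_SIZE ** TEXT_LENGTH.
# The number is too large to print meaningfully, so report its
# digit count instead (floor(log10(x)) + 1).
digits = math.floor(TEXT_LENGTH * math.log10(VOCAB_SIZE)) + 1
print(f"Naive 400-word texts: a number with {digits} digits")

# Musical riffs: with 13 pitches and, say, 12 notes per riff,
# the count is tiny by comparison -- plausibly enumerable.
PITCHES = 13
RIFF_LENGTH = 12
riffs = PITCHES ** RIFF_LENGTH
print(f"12-note riffs over 13 pitches: {riffs:,}")
```

Even before filtering for "meaningful" texts, the word-sequence count dwarfs anything a distributed computing project could enumerate, while the riff count is small enough to brute-force — which is exactly why the music angle is the only remotely plausible one.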
Regardless of how this works out in the courts, there will always be workarounds. Writers may make up new words, or vary their sentence structure in ways we've never thought of before. Or, since copyrights eventually expire and works enter the public domain, this may just have the inadvertent effect of ending intellectual property laws altogether. There's actually something poetic about that: intellectual property trolls becoming so pervasive that they destroy the very system that allows them to thrive.
Government's Grasp on Social Media
Social media has always been a powerful tool. With the power to spread information and news in a matter of seconds, it can be a dangerous platform for governments and society. That is why some governments have opted to limit and censor social media in their countries. By blocking access to the biggest social media sites, such as Facebook, YouTube, and Twitter, they have decided to limit the freedom of their citizens. What we take for granted in America is restricted elsewhere, and it demonstrates the power that other governments hold over their citizens. But to understand their decisions, we have to look at the actions that these governments are trying to limit.
Ruling with an iron fist, North Korea has been at the forefront of limiting citizens' rights. Even though North Korea has access to the Internet, many citizens are not able to access it or are too scared to use it. Even graduate students and professors in North Korea's universities are frightened to use the Internet because of the backlash that comes from the government. North Korea's dictatorship is a modern interpretation of the society ruled by the totalitarian government in 1984, and it is really scary to know that this exists. Yet not all countries that limit access to social media are like this. Many countries block social media because of political or controversial content that appears on these sites. For example, Turkey and Iran have limited access to social media platforms before and after their presidential elections. Because of the heightened political tensions during these elections, they decided to cut off access to these sites. With the spread of videos depicting violence and political corruption, it is easy to see why these governments blocked access. Yet politicians and governments should place more emphasis on stopping these actions from happening in the first place. If they prevented these crimes from happening, they wouldn't have to limit access to social media sites. Limiting people's ability to communicate will ultimately lead to conflict between citizens and the government, and that will lead to more violence between the two.
This leads into the case of China's limits on social media. China has blocked access to these sites since 2009, following violent riots, and now it has extended those limits to Hong Kong, curtailing its communications. Since Sunday, Hong Kong has been struggling to protest against the Chinese government's takeover of the city. Many students have taken to the streets of Hong Kong to demonstrate against China's decision to limit their elections. These protests eventually led to violence between police and demonstrators, and as the protests escalated, China began to block social media, starting with Instagram. This reveals how scared China is of the protest taking place in Hong Kong. The Chinese government should have taken more precautions to prevent this protest from happening, because it may escalate into something bigger and more violent. Hopefully this doesn't become the next Tiananmen Square protest, which ended in the deaths of thousands of people, including many students. Governments like these should end their absolute rule and allow people more freedom to express their opinions. Without these basic rights that people deserve, many will protest, peacefully or violently, against these governments, and it will end in chaos and violence in many places.
http://www.mediabistro.com/alltwitter/countries-social-media-banned_b59035
http://www.motherjones.com/politics/2014/03/turkey-facebook-youtube-twitter-blocked
Probably shouldn't take notes on a computer
I very strongly considered writing about cellphones and the problems they create in social relationships. The only thing preventing me from writing about that is how it would most likely come across as very technophobic, especially once I started arguing that the fundamental purpose of a cellphone is terrible: the act of giving someone your phone number tends to impress upon them that they have some measure of control over your time whenever they message or call you. So instead, I am going to write a completely different rant about technology and why you should not use it to take notes.
Quick test before we move on. Grab your laptop, if you are not on it already, and a pen and paper. Got it? Great. Open up some sort of text editor on your laptop (I recommend Notepad since it is very bare-bones), put your hands on the home row, and hammer on the 'j' key as many times as you want. Done? Without counting the characters on screen, write down how many times you think you typed the letter j. Now count up the j's on screen. How close were you to the actual number? Next, take the pen and paper and write out the letter j until you feel like stopping. Write down how many times you think you wrote j, and then compare that to the actual number. If you are like me, your estimate for the handwritten j's is a lot closer to the actual count than your estimate for the typed ones.
Another experiment, which takes a bit more time, is to take notes by hand on one chapter of a book you are reading and then take notes on another chapter on the computer. Wander off for an hour or so and come back. Which chapter do you remember best? What were the important parts of each chapter? Did the way you took notes differ between the two mediums?
For a lot of people, the written notes are probably the easiest to remember and provide a better view of the important points of the chapter. There are a couple of reasons for this.
One, writing usually takes a lot more time than typing does, especially with computers being so prevalent in our daily lives. I am a bad typist: on a good day I probably hit only 40 words per minute, and I more than likely hover around 20 words per minute on average. Even with my poor typing speed, I could still probably type out this blog post faster than I could write it down. Writing and typing are not particularly hard, but writing out a paper seems a lot more difficult than typing it, simply because it takes longer. Things that take a lot of time seem difficult not because they are, but because the time spent presents the illusion of difficulty. Video games use this trick all the time; modern Zelda games are a good reference. A boss has a big glowing weak point, but you cannot hit it whenever you want. You have to wait for it to be vulnerable before you can attack, and even though the boss only needs to be damaged a few times, it takes a long time to defeat. If you decide to replay the game, you will probably remember pretty clearly how to beat the boss, because you spent so much time on it originally.
Two, the information is most likely more concise. When writing out notes, you condense the information down to what seem to be the most relevant or important details, in a more chunked form. Most people employ shortcuts when they write notes so that they can keep up with the professor, or just to speed up the note-taking process: "with" becomes "w/", "and" becomes "&", and so on. Not only are you thinking more about which details get recorded, but also about how you present them. When typing notes while reading or sitting in a lecture, most people end up transcribing word for word, and taking more notes in general. There is a lot less analysis of the information itself, because there is a lot more information being absorbed. Fewer pathways form in the brain around the material, so it is more likely to stay in short-term memory instead of moving into long-term memory.
Another reason it is easier to remember notes you write rather than type is the medium itself. When writing notes, there is a lot less potential for distraction: ostensibly, the only things that can distract you are happening directly around you, and those can be minimized. On the computer, there are many more things to pull you away from what you are working on. Here is a short list of things I have done while writing this blog post: checked my email, Facebook, Twitter, Reddit, Hacker News, Facebook again, got a message, etc. The list goes on for a while, and this is just while trying to write the post itself.
Taking notes on the computer might seem really convenient, and it can be, but for the most part there is a lot of friction around it that makes it bad for note-taking. Sure, you can take notes on your computer exactly the same way you would write them down, but you would probably still not remember them as well as if you had written them. In fact, there is a study that looks into this [1][2]. Now, I am not saying never to take notes on your laptop, but be more mindful about when you use your laptop to take notes in class. I personally learn and remember information a lot better and more clearly when I write my notes out on paper rather than typing them up; but I am not everyone, and neither are you. So test it out, figure out which works better, and think about why. If you experience a big loss in retention from one medium to the other, consider switching mediums; it could improve how much you remember in class by quite a bit.
[1] If you happen to have a subscription, you can find the paper here http://pss.sagepub.com/content/25/6/1159
[2] If not, here is an article about it. http://www.vox.com/2014/6/4/5776804/note-taking-by-hand-versus-laptop
Robotics in Biomedicine
Biomedical robotics seeks to solve the current shortcomings of the medical field. Society constantly strives for new methods of aiding the physically and mentally impaired, revolutionizing the healthcare system and its capabilities. By reflecting on the achievements and shortcomings of past and present technologies, predictions about future issues can be made, lowering the margin of error in both testing and implementing new technologies. Many experts anticipate that the integration of robotics into medicine will revolutionize healthcare. Surgical robots that help expedite surgery and three-dimensional organ printing are just a few of many achievements illustrating the potential of biomedical robotics. Further research is needed to fully integrate such creations into practical procedures, but professionals anticipate that their influence will lead biomedicine toward positive growth.
Professionals will be working with mechanical assistance, but there is a possibility that future professionals will become too heavily reliant on technology. There is a possibility that the younger generation will grow to lack the basic skills their predecessors had, or even worse, lose their original focus. There is a possibility that humanity will gain so much control over naturally occurring phenomena that the artificial becomes indistinguishable from the natural. The development of soft tissue robotics, the integration of tissue engineering and mechanical engineering, has the potential to create fully functioning limbs for victims of amputation or congenital handicaps. The development is exciting and could give many patients more natural-looking alternatives to standard prosthetics, but it may not be entirely necessary. Soft tissue robotics could become more of a cosmetic procedure than a necessity. For many patients, driven by their own negative body image, the aesthetics this technology could provide would outweigh the functionality of a more efficient robotic limb.
While soft tissue robotics may not be entirely morally sound, that is not to say it is not revolutionary. When thinking about robots, the general consensus is that these structures are machines made of rigid metals or synthetic plastics. Harvard University labs recently released a toolkit that lets consumers build and operate soft robots, mechanisms with a flexible base material. The kit does not contain the basic parts necessary for building these structures, but it does go into detail about how to build them. With this technology so easily accessible, it is clear that robotics in biomedicine is much further along than many could have anticipated.
Sunday, September 28, 2014
Should Robots Be Used In War?
Artificial intelligence has taken off since the start of the 2000s. With faster processors and computing power growing at alarmingly fast rates, the concept of robots being used more often is more achievable. Of course, like most technology, robots and artificial intelligence are inherently political. Who will decide the multitude of moral questions these robots will have to face? A human will have to make those decisions for all robots, since robots cannot weigh decisions like humans can; our ability to feel is what drives our decisions. For example, one of the issues the UN plans to address this year is the use of robots in the military. If a robot is programmed to kill, what is to stop it from taking out an entire country so that the robot's manufacturer can win the war?
According to Isaac Asimov's laws, first introduced in 1942, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." How can we expect to use robots for military purposes yet still expect those robots to follow the three laws of robotics? Eventually, war would be between the robots of one country and the robots of another. Seeing how futile a war of robots killing robots would be, we would eventually resort to other means of fighting, such as biological and nuclear weapons. You are not winning a war if the only casualties are robotic ones.
As of now, every piece of technology used, whether in the military or not, is monitored by a human. According to David Akerson, a lawyer and member of the International Committee for Robot Arms Control, "Right now everything we have is remotely controlled and there's always a human in the loop…We're heading the direction to give the decision to kill an algorithm" (Garling). How can we trust a piece of technology to rely on an algorithm to kill a human being? People say that because robots don't have feelings of empathy, rage, revenge, etc., they would be more reliable at killing. I disagree. I think every human life is different, and no algorithm can decide whether or not someone should die. Would an algorithm be able to read a person's emotions and know whether to kill them or question them for information? There are too many things to consider for any one algorithm to teach a hunk of metal whether or not to kill.
I think we should research robotics and artificial intelligence, but I do not think robots should ever be made for military use. To live a safe life, robots should be made to follow Asimov's three laws of robotics (attached below). These three laws may seem simple in theory, but actually programming a robot to understand and abide by them has proven to be one of the most complex problems in the field of artificial intelligence.
Source:
http://www.sfgate.com/technology/article/As-artificial-intelligence-grows-so-do-ethical-5194466.php
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings,
except where such orders would conflict with the First Law.
3) A robot must protect its own
existence as long as such protection does not conflict with the First or Second
Law.
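The complexity is easy to see even in a toy version. Below is a minimal sketch, in Python, of the laws' strict priority ordering; all the names are purely illustrative, and the genuinely hard part, deciding whether an action actually harms a human, is hidden behind a placeholder predicate. That hidden part is exactly the unsolved problem.

```python
# Toy sketch of Asimov's three laws as a strict priority check.
# All names here are illustrative. The genuinely hard AI problem,
# recognizing whether a real-world action harms a human, is
# assumed away behind simple dictionary flags.

def harms_human(action):
    # Placeholder: real systems have no reliable way to compute this.
    return action.get("harms_human", False)

def endangers_self(action):
    return action.get("endangers_self", False)

def permitted(action, ordered_by_human=False):
    # First Law: never harm a human, regardless of orders.
    if harms_human(action):
        return False
    # Second Law: obey human orders (already known not to violate Law 1).
    if ordered_by_human:
        return True
    # Third Law: protect own existence, unless Laws 1-2 said otherwise.
    return not endangers_self(action)

# An order to harm is refused; an order that risks the robot itself is obeyed.
print(permitted({"harms_human": True}, ordered_by_human=True))    # False
print(permitted({"endangers_self": True}, ordered_by_human=True)) # True
```

Even this sketch only encodes the ordering of the laws; everything the laws actually require (perceiving harm, interpreting orders, foreseeing the consequences of inaction) is left to the placeholder functions, which is why programming the laws for real has proven so difficult.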
Impact of Technology on Television
Technology has brought many advances into our lives and society. Take, for instance, the television industry. Technology has helped revolutionize television: the way people watch and perceive it now, compared to a few decades ago, has changed completely.
In the past, television was very limited. Only a few networks, shows, and movies were available, and there were restrictions on where one could watch TV at all. Look at it now! The television industry has captivated everyone with its unlimited options. There are so many more networks, shows, sports, and movies, bombarding the viewer with choices from all sorts of genres.
Television has indeed evolved! Technological advances such as faster transistors, chips, and displays have definitely helped shape the television industry. People can now experience TV in great HD quality, which makes for an excellent viewing experience. The Internet, in my opinion, has been the key factor in revolutionizing the industry. Before, television was only a means to so-called “entertainment”: people would watch TV to relax or relieve stress from everything (work, chores, etc.). Not anymore! Television is no longer restricted by location. It is available anywhere and everywhere thanks to the Internet. People do not need to be sitting at home in their living room to enjoy TV; they can enjoy it anywhere they like on computers, tablets, smartphones, and more.
Television is a part of life for everyone. Everyone is hooked on shows nowadays. Some people get so absorbed that they watch episode after episode in one sitting; a special term was even coined for this: binge watching! So many people spend their time lost in these shows. There are so many varieties, and such cleverly written stories, that it is impossible to resist them. Social media becomes the key factor in spreading the news and making a show go viral, so that everyone gets dragged into it. It creates peer pressure for teens and adults alike, and it has become a social norm, a conversation starter, and much more. For some, these shows become an addiction: all they can think about is watching them, they can’t separate themselves from them, and they waste all their valuable time. In addition, streaming services such as Netflix have taken this even further, allowing access to shows and movies anywhere, anytime. How can one not get absorbed in them?
Shows aren’t the only thing television provides. Sports are another huge draw, even more popular than shows in America. People spend time following their teams (watching games, or keeping up with the stats online), talking, arguing, and betting about them, and much more. Sports are like a religion in America; every other person is watching them.
So many of us (myself included) are so absorbed in television (shows, sports, etc.) that we waste a great deal of our time. While writing this, I realized how much of my valuable time I have wasted watching sports, shows, and movies. I understand that some sort of “entertainment” is required to relieve our stress. But how much of it do we really need to relax, and how much fun do we need from television? Why can’t we control our excessive use? Think about how much more productive we would become if we could only control the “how much” part of television.
Adblock as a Political Tool, Adblock as Theft
I installed Adblock on Firefox back in 2007 because I visited a lot of sites that would maul you with popups and slide-out ads. Invasive ads are incredibly frustrating and detract from the experience of a website, so it made sense for me to do so. However, as the number of people reading online instead of through physical media increased, the annoying ads were no longer needed; regular ads, like the ones you'd find in a newspaper, became the norm. Still, my ad blocker stayed on. I whitelisted sites I supported so they got my ad revenue, and kept reading the content of sites I did not. Others go a step further and use sites like archive.today to deny a site even the pageview. Are these moves strictly a show of disdain for a site, or are they actually stealing from site owners' profits?
According to Ken Fisher, founder of Ars Technica, ad blockers take away 40% of their audience for ads and bite severely into their profits. [1] He states that when ads are blocked that much, sites are forced to run more questionable ads, whether in topic or in invasiveness. It's self-perpetuating: people block ads because of their content or delivery, so the remaining ads get more questionable because they pay more, and more people block ads. Fisher says that instead of visiting sites while blocking their ads, you should either enable ads on the site, pay for a service the site provides, or simply not go. To him, "it's a lot more annoying and frustrating to have to cut staff and cut benefits because a huge portion of readers block ads."
However, that's not exactly true. While Adblock definitely takes away ad views, if it were as damaging as claimed it would have driven many sites out of business by now. It's not as if Adblock is a new technology; Adblock Plus is almost 8 years old at this point. Additionally, ABP now whitelists certain ads that are determined to be inoffensive. [2] Some take the view that, although Adblock is a problem, sites should not be focusing on standard advertisements alone to pursue revenue. Since Adblock is so prominent, "it’s impact might be surprisingly large, but the fact it is having an impact should be far from surprising." [3] The common way around the issue of Adblock is to find monetization options other than advertising. For example, the New York Times moved to a "metered" model where visitors are given 10 articles for free before being asked to pay for a subscription. The result not only blew away their financial expectations but actually increased the number of physical papers sold. [4]
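As a rough illustration of how such a metered model can work, here is a minimal sketch in Python. The per-visitor counter, the monthly limit of 10, and all the names are assumptions made for illustration; this is not the New York Times' actual implementation, which would involve cookies, accounts, and server-side tracking.

```python
# Minimal sketch of a metered paywall, loosely modeled on the
# "10 free articles, then subscribe" approach described above.
# The in-memory storage and the limit are illustrative assumptions.

FREE_ARTICLES_PER_MONTH = 10

class Meter:
    def __init__(self, limit=FREE_ARTICLES_PER_MONTH):
        self.limit = limit
        self.reads = {}  # visitor_id -> articles read this month

    def can_read(self, visitor_id, is_subscriber=False):
        # Subscribers are never metered; free visitors get `limit` reads.
        if is_subscriber:
            return True
        return self.reads.get(visitor_id, 0) < self.limit

    def record_read(self, visitor_id):
        self.reads[visitor_id] = self.reads.get(visitor_id, 0) + 1

meter = Meter()
for _ in range(10):
    meter.record_read("alice")
print(meter.can_read("alice"))                      # False: show subscribe prompt
print(meter.can_read("alice", is_subscriber=True))  # True
```

The appeal of this model is visible even in the sketch: casual visitors still see content (and ads), while the heaviest readers, the ones most likely to value the site, are the ones asked to pay.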
After observing the viewpoints on both sides, I decided to uninstall ABP in my Chrome. I don't believe ad blocking is theft, as ads are more supplemental income than anything else. Sites that rely on ads alone are just looking for an audience for "free" content; either they have other ways to monetize that audience, or they're going down eventually. Although I still believe in not supporting certain sites, I agree that it is better not to visit them at all. It would be like disagreeing with Chick-fil-A's stance on gay marriage but scrounging for coupons to get their chicken for free: I'd be taking far too many steps to avoid helping the business, when it would be better to discourage it by going to a competitor. I stop going, they stop getting my page clicks and ad views, and another site gets them instead. The goal should not be ending one site but supporting the good ones. So I'll turn off ABP and visit those sites instead.
Binge Watching on Netflix
Netflix is an online platform that streams movies and TV shows for a set price per month. The ease of access to missed episodes and favorite movies has made Netflix appealing to people of all age groups. However, this vast database of movies and TV shows has created a new pastime for the people of America: binge watching. This phenomenon (and it is definitely a phenomenon) occurs when a user watches approximately two to six episodes of the same show back to back; I am a guilty participant. The process seems to create a sort of trance in which people escape from their daily lives and duties into the world of TV shows. Sure, catching up on one or two missed episodes shouldn't be cause for alarm, but many teenage students are taking binge watching to another level. This addiction to online shows can cause loss of attention, slowed circulation and metabolism, sleep deprivation, and many other health concerns.
According to a study by epidemiologist Steven Blair, a professor of public health at the University of South Carolina, sitting for an extended period of time can cause increased levels "of cholesterol, blood sugar [and] triglycerides," which can lead to serious health issues like diabetes. This new trend of binge watching has become prevalent due to its ease of access. If the availability of sensational shows were limited to primetime TV or cable networks and not the Internet, such issues would not arise.
People have taken serious damage to their social lives, family lives, and, in the case of students, their educational lives. Michael Hsu writes in the Wall Street Journal that while binge watching, "At 3 a.m., bleary-eyed and faced with the choice of watching another episode or going to bed so I could be ready for work and family the next day, I've often found myself opting for 'just one more' hit." The instant gratification binge watching provides takes away from our patience in the real world. Because we are so used to moving forward with the plot of a TV show, we start to subconsciously expect the same from daily life.
Binge-watching has also been perpetuated by popular websites listing the "best TV shows to binge watch." These sites exploit people's weakness and lure them into watching programs for extended periods. According to an article on Newsday, there are even guidelines on which show to binge-watch. The increased hype around the activity has made it a "cool" thing for students to do, causing their academic performance to decline. If there were limits on the number of episodes one could watch per day, maybe this problem wouldn't arise. Until then, binge away.
Works Cited:
1. http://www.npr.org/2011/04/25/135575490/sitting-all-day-worse-for-you-than-you-might-think
2. http://www.huffingtonpost.com/2014/09/04/binge-watching-tv-harmful-to-your-health_n_5732082.html
3. http://online.wsj.com/articles/how-to-overcome-a-binge-watching-addiction-1411748602
4. http://www.newsday.com/entertainment/tv/best-tv-shows-to-binge-watch-1.5631924
How Technology has Affected the Spread of Social Issues
Freshman year, I took a class in which the professor criticized our generation for not standing up for what we believe in. He would say, "My generation and my parents' generation organized sit-ins and parades, and all you people do is like something on Facebook," or something to that extent. That really bothered me. Just because he couldn't physically see people standing up for social justice doesn't mean those people didn't exist. The integration of technology into our daily lives has shifted the war front from the streets to the web.
Over the summer, there was a worldwide campaign to raise awareness of amyotrophic lateral sclerosis (ALS), or Lou Gehrig's Disease, through the ALS Ice Bucket Challenge. I was shocked at how many people were bothered simply by the fact that videos filled up their Facebook and Instagram newsfeeds. So many others just missed the point of the videos. The movement was meant to awaken people to the struggles of the disease and what people who have it must do to perform daily activities. Articles questioned the effectiveness of the challenge and called those who did it attention-seeking. Yet since July 29th, the ALS Association has received 115 million dollars in donations. That's a 3,500 percent increase over the 2.8 million dollars that the Association raised during the same period last year. What was the difference? Utilization of technology and social media. Even singers, athletes, actors, and businesspeople got involved, along with 3 million other everyday people. You may say the Ice Bucket Challenge was stupid, but the numbers don't lie. This isn't the first time technology and social media have accelerated fundraising: in 2004, it took Livestrong only one year to raise 50 million dollars with its yellow bracelet cancer-awareness campaign.
Ferguson, Missouri was, and in some circles still is, a hot topic of social injustice and controversy. With so many news and media outlets, it is difficult to discern the whole truth; however, the truth is inevitably out there already. Regrettably, I neglect to keep up with the news and current events, but my interest in the Ferguson issue was piqued and maintained by social media. Through Facebook posts and tweets from people on the ground in Ferguson, I was able to piece together what I think really happened, which I admit could be entirely wrong. Without all of these outlets, my only source of information would be the news stations, which I think we can all agree have some degree of bias and slant. Constantly updated, real-time information from Ferguson is something we didn't use to have. Technology and social media have given social justice fighters an opening to speak their minds and have thousands of people hear them. They have spread awareness faster than ever before. They give people online access to others with similar or dissimilar opinions, so they can meet up and organize something physical, or try to convince others to see things their way.
I agree that our generation does not organize as many of the revolutionary events you read about in history books as previous generations did. We still participate in similar events, such as Pride Marches and Relays for Life, but the real influence occurs online, where people can spread awareness and demand change for the issues that affect our society.
Virtual Reality Usage for Real Life Purposes
Virtual reality, also known as immersive multimedia, can simulate a 3D environment for the user to explore and interact with. Virtual reality technology has come a long way since Morton Heilig's invention of the Sensorama in 1957. It had a very slow start and during some periods seemed like an unattainable idea. Most of the ideas were just concepts and plans for the future, and the concepts that were brought into practice were not really popular because they were pricey and poor in quality. Most virtual reality projects failed to capture the world's interest, until now. Nowadays, huge companies are pouring millions into the virtual reality industry. The latest and most advanced virtual reality technology today is the Oculus Rift, "which will change the way you think about gaming forever." While some people consider virtual reality to be very dangerous to real life as we know it, I believe there are uses for it that could prove valuable in our everyday lives.
The Oculus Rift is a virtual reality headset designed for video games. It has a very wide field of view and a high-resolution display for the maximum sense of reality. It is not the first virtual reality headset ever made, but it is definitely the best, and it brings 3D gaming to the next level.
One of the dangers many people fear is that virtual reality might get too close to "reality," and people will start forgetting about real life. Of course, it is okay to forget about real-life problems once in a while and immerse yourself in virtual reality, but completely giving up on real life is unhealthy and dangerous. That is a valid reason to oppose the advancement of virtual reality, but it could also be easily addressed with any kind of self-control software that locks the system after a certain amount of usage.
So far, virtual reality has been used for entertainment purposes only, but what if it were taken beyond that line? Virtual reality technology such as the Oculus Rift could prove useful in almost any field: in medicine, for example, improving surgical procedure through practice without risking other people's lives; or in piloting, for practicing flying without any risk or use of fuel. Such technology would improve many aspects of our daily lives.
Virtual reality is one of the most advanced technological innovations in today's society. So far it has only been introduced to the video game industry, but imagine the benefits it would bring if used in other fields. From performing surgery to flying a plane with the maximum sense of reality, it would change our lives forever. Of course it comes with risks, but in my opinion it is worth it. The Oculus Rift is a massive push toward the virtual reality future.
[http://bits.blogs.nytimes.com/2014/09/21/oculus-brings-the-virtual-closer-to-reality/]
Does Technology Help or Harm Communication?
Sometimes in today's world, communicating can feel overwhelming or even ineffective. It was only twenty years ago that we communicated through snail mail and telephone; twenty years ago you probably would not have had a voice mailbox, and even ten years ago some of my friends' families did not have one. Within the past fifteen years, technologies such as email, instant messaging, and mobile text messages have been developed. These technologies were adopted into the mainstream very quickly and have become ingrained in our lifestyle. They allow for very fast communication: what took ten minutes, ten hours, or a few days to communicate now takes mere seconds. As a result, businesses operate more quickly and efficiently, and overall our lives are more fluid, by which I mean there is less waiting for people and information. Since most people check their phone and email constantly throughout the day, it is very easy to make dinner plans or communicate emergencies without hassle. However, because there is no filter on our channels of communication, we are also subjected to much useless information.
As mentioned earlier, it is clear to see how technology has positively influenced communication and, therefore, society. Businesses operate more effectively and simple means to contact loved ones make certain aspects of personal relationships easy. For example, because of Facebook I have been able to keep in contact with my closest friends from high school and even four years later our friendships are just as strong, if not stronger. If we did not have a centralized platform for group communication, I am unsure of whether our friendships would have been maintained. Regardless of that, it is undoubtedly true that maintaining a meaningful relationship would require much effort from all parties. From this perspective, technology is central to my closest friendships and invaluable to me.
That being said, it is also common for channels of communication to become polluted with miscommunicated messages or purely useless information. First, let's take a look at email. Email allows us to send electronic letters that can be received in a moment's time. While this is great when communicating with friends and colleagues, it also means we are all subjected to spam and advertisements we are not interested in. At virtually no cost, email bots can send thousands upon thousands of messages to personal email addresses, and regular people can do the same. Combine these messages and your inbox becomes a cluster of information you do not need or want, which is frustrating when you are expected to check email constantly. Mobile text messages are less subject to advertisements, but communication there can be a challenge. Choice of wording suddenly becomes vital because there are no non-verbal cues to give context to messages. We all know sarcasm is potentially the worst culprit for miscommunication in text messages. It is very easy for the receiver to misinterpret messages, which again leads to frustration when communicating.
How do we address these issues?
Some would say that simply reducing technology usage solves the issue. That is true, but it comes at the sacrifice of communicating quickly, which today is important to most people. Certain platforms such as Gmail have helped reduce message pollution by implementing default labels for message categories (Primary, Promotions, and Social), separating meaningful information from the rest. This is a step in the right direction, and many platforms are working toward effective solutions, although for the time being it seems that the filtering of messages is largely left to the user.
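To illustrate the idea of default category labels, here is a minimal keyword-based sketch in Python. Gmail's real classifier is far more sophisticated than this; the hint lists, the matching rules, and the exact category names below are illustrative assumptions only.

```python
# Toy sketch of inbox category labeling in the spirit of Gmail's tabs.
# The keyword lists and rules are illustrative assumptions, not
# Gmail's actual (much more sophisticated) classification logic.

PROMOTION_HINTS = {"sale", "discount", "offer", "unsubscribe"}
SOCIAL_HINTS = {"friend request", "mentioned you", "new follower"}

def categorize(subject, body):
    # Normalize the message text, then apply simple keyword rules.
    text = (subject + " " + body).lower()
    if any(hint in text for hint in PROMOTION_HINTS):
        return "Promotions"
    if any(hint in text for hint in SOCIAL_HINTS):
        return "Social"
    return "Primary"

print(categorize("50% off sale ends tonight", "Use this discount code"))  # Promotions
print(categorize("Lunch tomorrow?", "Are we still on for noon?"))         # Primary
```

Even a crude filter like this shows why default labels help: the bulk of advertising mail matches a handful of patterns, so separating it out leaves the Primary tab mostly meaningful.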
The Death Of The Music Industry
Before the Internet became mainstream, music was purchased at record stores and concerts. Many people did pirate music, but doing so required a tape recorder or a disc copier. Today, one can download nearly any album in a matter of minutes or less. Business people in the music industry call the industry dead. They say there is no money to be made: music streaming doesn't make money for the artists, only the provider, and while selling CDs and vinyl online is fine, those are now considered collectors' items and have plummeted in popularity. For example, in 2000 almost 800 million albums were sold, while in 2009 that number was down to 300 million. According to Billboard, digital album sales decreased for the first time in 2013, to 117.6 million units from the previous year's total of 117.7 million.
How can one expect to be a musician able to sustain themselves on their work? As mentioned earlier, business people will likely tell you it's near impossible, and they're correct, in a way: one cannot sustain oneself by simply publishing work, doing some advertising, and throwing in a couple of concerts. Again, business people will tell you this is a very bad thing. However, those who argue this point neglect to mention the shift of power from the music corporations to the artists.
Before, it was a huge deal to become signed; it was your ticket into the industry. Sure, you signed a binding contract, but that's what you needed to do to go platinum. Now, becoming signed is an option to be weighed by the musician, and signing to a big label is needless unless you are actually being recruited to be a pop star. Thanks to the Internet, services such as Bandcamp make music purchases quick and simple while allowing musicians to retain power over their own work. Bands may set the cost of their album to whatever they desire, even free or donation-based, and Bandcamp takes a very small sum of each purchase. What is stopping people from downloading without donating? What is stopping people from pirating? Nothing. Without getting into a discussion of piracy, I believe giving the customer a hassle-free option to purchase a digital product greatly reduces the amount of piracy that happens. Many people do want to give back to those who provide content, but nobody wants to jump through hoops. I am actually far more inclined to purchase an album for $5 on Bandcamp than for $5 on iTunes or Amazon, because I know $4.75 is going to the band rather than $0.25.
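The arithmetic behind that preference is simple. The sketch below assumes a flat 5% platform cut, inferred from the $5.00-to-$4.75 example above; Bandcamp's actual fee structure may differ and can change over time, and the 95% figure for the other split is likewise just what the $0.25 comparison implies.

```python
# Artist payout under a flat percentage platform cut.
# The 5% default is inferred from the $5.00 -> $4.75 example in the
# text; it is an illustrative assumption, not Bandcamp's documented fee.

def artist_payout(price, platform_cut=0.05):
    # Artist keeps everything except the platform's percentage cut.
    return round(price * (1 - platform_cut), 2)

print(artist_payout(5.00))                     # 4.75 goes to the band
print(artist_payout(5.00, platform_cut=0.95))  # 0.25 under the split implied above
```

The point is less the exact percentages than the contrast: on a low-cut platform, nearly the whole purchase price reaches the artist, which changes the economics of selling music directly.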
Even with Bandcamp, most music producers will not be able to make a living from sales alone. So how else will they make money? The most obvious way is playing concerts. This is one of the few reasons anyone should sign to a label, in my opinion, as the label has managers who will organize tours for and with you. Sumerian Records is known for having its big-name bands tour with its new bands to promote them, which I think is great for upcomers. Even with album purchases and shows, it's likely you will still be eating ramen and PB&J daily, and after several months of being a starving artist, that gets old. The good news is, you don't need a job at ShopRite to sustain yourself while not touring. If you wish to continue with music as a career, it's time to start writing music scores, repairing instruments, mixing and mastering music, and teaching. In recent years, a strong number of musicians have hosted clinics for guitar, drums, production, vocals, and more. Additionally, many offer lessons while on tour, before the show begins. In fact, I am considering purchasing lessons from guitarist Aaron Marshall of the band Intervals when I see them live in October. All I have to do is go online, purchase my lesson pass, and wait for the day. This has been made possible by BandHappy, which lets students and teachers connect online to arrange online or on-tour music lessons. Without the Internet, I imagine waiting in line at a venue to purchase my ticket months in advance, hoping the guy in front of me isn't buying the 5:00pm to 6:00pm guitar lesson slot.
Gone are the days of living as a touring band. Few musicians today live as Guns N' Roses did in the '80s, and even fewer will in the foreseeable future. Despite this, it is the best time ever to be part of a music scene. The potential exposure one can get on the Internet is incredible, and even bedroom producers can get appropriate recognition. It is easier than ever to be a successful musician. Screw the big-name labels and forget what the suits tell you. Nobody goes platinum anymore because nobody needs to go platinum. The power is in the musicians' hands now, not the corporations'.
Esports is not dead
On July 21st in Seattle, Washington, two teams of five players took their seats on the main stage of The International 2014; the winning team would take home over 5 million dollars. However, money wasn't the only thing on the players' minds: teams Newbee and Vici Gaming were fighting over who would be known as the best DotA 2 team for the next year.
This is esports.
While The International is considered the largest gaming tournament of the year, boasting a crowd-funded 10 million dollar prize pool, 20 million total viewers, and over 2 million simultaneous viewers during the grand finals (not counting ESPN, MTG Europe, or China TV), it is far from the only opportunity players have to exhibit their skills and win prize money. Throughout the year, hundreds of tournaments are held across games such as DotA 2, League of Legends, StarCraft 2, Counter-Strike, and SMITE. In addition to tournaments, many players use online video streaming services such as Twitch, YouTube, and HitBox to broadcast their practice. For some, income from gaming is enough to sustain a living without taking a traditional job. A few years ago, this was impossible for all but perhaps a few dozen players.
Esports (or electronic sports) is becoming hugely popular on a global scale. According to a report from SuperData, esports viewership more than doubled in 2013, to over 71 million viewers worldwide. Looking at the data and the consistent buzz in the news, it's undeniable that esports is here to stay for the foreseeable future. But what will it take for esports to grow further? What will bring it into the mainstream?
One great challenge facing esports, and what I think is the greatest challenge, is the idea that playing video games for a living isn't respectable. What is the root of this idea? It could be any number of issues: video game addiction, stereotypes of how a gamer looks and acts, or the notion that video games are a waste of time in general, among others. What even makes a profession respectable? Is it based on salary? Supporting yourself and your family? Contribution to society? Is a job as a garbage collector more or less respectable than one as a secretary? I am not sure of the answer, but I will say that I don't believe a janitor is less respectable than a professional athlete.
Watching an esport can be likened to watching other competition-based entertainment: football, mixed martial arts, even chess. What is different about esports? Games are played on a computer, and most viewers watch online rather than through a television service. The mechanics of competitive games certainly differ from those of traditional sports, but they are still played in a set arena, with rules and bounds that are easy to understand yet still offer depth. From this perspective, I can equate any esport to any physical sport, as well as to mental competitions like chess. Some are offended by the idea that a video game is a sport. Traditionally, sports are an exhibition of physical ability through competition, and the professionals are the best of the best. Esports instead challenge the players' mental abilities. Using the word "sport" to describe competitive gaming seems likely to alienate many people; I believe these are the same people who don't consider chess a sport. It is certainly fine not to consider any esport a sport, but that does not invalidate their existence.
Still, I wonder what is preventing society from seeing competitive gaming as an acceptable profession. Other arguments I see express the thought, "I wouldn't pay someone to play video games all day." This argument waters esports down quite a bit. Why, then, would you pay an athlete to play soccer all day? Is it because we consider them hobbies? Physical sports can be fun hobbies; so can video games, so can watching films, and so can playing music. Yet we pay athletes, progamers, film critics, and musical performers to keep doing their hobby for a living (musical performance is certainly an outlier in this comparison, as there is typically production of music involved, not only performance). So how do people view video games differently? I think the answer, or at least a large part of it, is that people don't know what competitive gaming involves.
Becoming a great player takes thousands of hours. Progamers play for 7 to 12 hours a day to maintain and develop their skills, just as professional athletes do. Some people would say that their kid already does that, so why isn't he a progamer? Well, he might become one, but is he willing to change his view of games from pure entertainment to the more analytical and technical understanding required of any profession? Playing a pickup game of basketball can be great, but nobody makes it to the top without first the physical ability and then the mental ability to analyze the game. I would even argue that physical ability has immense diminishing returns compared to analytical skill and strategy.
When people water down what competitive gaming is, they are viewing it the way one would view paying people to play tug of war all day. Tug of war is a simple game with one goal: pull the rope with more force than the opposition. It has some strategy, such as putting the biggest person at the end of the rope and looping them in, but nothing terribly interesting as far as I am aware. At first glance, one might be unsure what one is looking at when first seeing an esport. After a few minutes of observation, you will see characters fighting one another, and it might take a few more minutes to work out who is on which team, unless it's a one-on-one match. I imagine someone viewing a traditional sport without any prior knowledge would have a similar experience. However, underneath what is immediately noticeable lies a great deal of detail, nuance, and strategy. There are mechanics that may take hours or more to pick up on without some sort of guide. Here lies the disconnect between what people think they know and what actually is. The same happens with traditional sports when a person is disinterested: "This is just a game of get the ball in the goal," or "fight the opponent until they give up." These statements are true, but they severely underestimate the complexity of what is actually happening on the field.
In the end, I'm not sure it matters whether esports overcomes these challenges. The esports scene is currently growing at an extreme rate and is almost at the point where thousands of people can sustain themselves solely off progaming. Either way, esports is here to stay, and it's not going to mind anything the naysayers have to say.
The Centralized Internet
Try to picture the vastness of the Internet – and not just the World Wide Web or other publicly accessible resources. Think about how much of your day relies on your being able to access it, and how much data is being exchanged every second. The mental picture seems larger than life, doesn't it? There can't possibly be only a handful of companies that control intercontinental IP transit in its entirety! Yet while there is some dispute over what qualifies an entity as a Tier-1 internet service provider, it is usually agreed that such providers do not pay for data transit; they include AT&T, CenturyLink, Cogent, GTT, Deutsche Telekom, Level 3, NTT Communications, Sprint, Verizon, and XO Communications. (http://www.technologyuk.net/the_internet/internet/internet_service_provider.shtml)
That's it. If you are sending or receiving any data internationally, or over geographically long distances, chances are that your data is passing through at least one or two Tier-1 networks. To appeal to what appears to be the talk of the times: it really wouldn't be exceedingly difficult to monitor data ingress and egress to and from a particular region or country.
Next, imagine that the confidentiality of your data (while in transit over the Internet) depended on a mere few dozen corporations – the name "VeriSign" probably rings a bell. VeriSign is one of the larger, more popular certification authorities in the world. VeriSign, like the other certification authorities, runs its service on the premise that it is universally trusted – all operating systems and browsers implicitly accept any resource as "genuine" if it has been verified by VeriSign. To put that into perspective, websites and services like PayPal, Facebook, Amazon, Twitter, and countless others depend on a third-party organization (like VeriSign, DigiCert, or Comodo) to prove to their users that they are "genuine", and to prevent anyone else from easily assuming their identity under false pretenses. In essence, the very foundation of confidentiality, data integrity, and "trust" is deeply flawed: in theory, if a certification authority wished to, or was compelled to, it could very easily spoof any website or service, or decrypt data sent to or from one of its customers. Suppose all certification authorities were saints and would never accept a court order or bribe intended to persuade them to compromise a client. A problem still remains: the security of the authorities themselves. It isn't beyond imagination that such an authority could be hacked, is it? In fact, it's happened in the past.
(https://www.schneier.com/blog/archives/2012/02/verisign_hacked.html)
(https://technet.microsoft.com/en-us/library/security/2607712.aspx)
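The scale of that implicit trust is easy to see from any machine. As a minimal sketch, Python's standard `ssl` module builds client connections the same way browsers do, by loading the operating system's bundled root certificates; counting them shows how many authorities your computer trusts unconditionally (the exact number depends entirely on your OS's certificate store):

```python
import ssl

# Build a client context the way TLS libraries typically do: with no
# explicit CA file given, it loads the OS's bundled root CA certificates.
ctx = ssl.create_default_context()

# cert_store_stats() reports how many certificates the store holds;
# "x509_ca" counts the CA certificates trusted implicitly by this machine.
stats = ctx.cert_store_stats()
print(f"Implicitly trusted CA certificates: {stats['x509_ca']}")
```

Any one of the authorities counted here can vouch for any hostname, which is precisely the single point of failure described above.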
I'm probably expected to offer a solution of some sort, but that's not why I chose this topic for this week's post. My main motivation in writing this was simply to portray how much we depend on a select handful of organizations for the Internet and the Web to function correctly. There are, of course, some "solutions" (meshnets, keyless SSL, etc.) that have been proposed for both of these issues, but they will likely be very difficult to implement on a widespread scale.
Sexism in Gaming
Sexism is a huge issue within the gaming world, from girls being harassed in online video games to women working in the industry being treated unfairly. It's nothing new that a woman might get a lower salary than a man for the same job, and might have a harder time advancing to higher positions, but there is another level of unfair treatment toward women within video gaming culture.
Recently, a controversy known as GamerGate has brought a lot of attention to the unfair treatment of women in this culture. A female indie game developer named Zoe Quinn allegedly had scandalous affairs with video game journalists as a way to get them to give her game better reviews. These allegations were made by an ex-boyfriend of Quinn's, which could call into question how true the claims were. However, whether or not she had illicit affairs, or whether or not she did it to earn her game ratings it didn't deserve, isn't the real issue that this scandal brought to light, at least not the one I'd like to talk about. The problem lies in the gaming community's reaction to the alleged scandal.
Quinn became the target of a flood of hatred. Hate mail isn't news when someone does something the public dislikes, but Quinn was the target of violent threats. Threats of rape and even death were sent her way. Her personal information was spread around so that more people could join in, and her and her family's privacy was completely violated. Sadly, this sort of abuse can happen to anyone, especially when the internet is involved, but I think this case exemplifies the sexism in gaming culture.
A developer having strangely close relationships with journalists and publishers is nothing new. The relationship between a company's marketers and outside publishers can be a tense one, because the publishers can make marketers' jobs difficult, especially when they don't like the product. When it comes to video games, though, it's not at all unusual for a company to do what it can to 'butter up' journalists to score an extra point on their game's review. That could mean treating a journalist to all sorts of nice things when they come to review the game, or it could simply mean giving only certain journalists the inside look needed to write about the game before it's released to the public. If a company thinks journalist A is going to give a better review than journalist B, the company is going to invite A, not B, to come check out the game and write about it. It's logical from the company's standpoint, if a bit unethical.
So why was it suddenly such a big deal if Quinn attempted to get herself unfair reviews? Perhaps it was the involvement of an illicit affair tossed into the mix. If she did have an affair, however, there was no proof that it was in order to get better reviews for her game. We could argue about the ethics of her affair, but in the end, that's her personal life, so why was it dragged into her professional life? The amount of hatred she received as a game developer was unprecedented. The leaders of companies don't get threats of rape or death when they do something unethical to earn themselves undeserved reviews. So why did Quinn? I think the issue lies with the sexist tendencies of part of the male gaming population. Not to pin the blame on any one male gamer, but it's undeniable that women in the gaming world get harassed more than men, and often sexually. I've seen it happen myself, as an online gamer.
Sexism is an issue everywhere, but it seems so extreme in the gaming community by comparison. It's something that will hopefully change as the gaming community becomes more mainstream over the years and the percentage of female gamers continues to balance with the male. What can gamers do to help? Treat everyone like they actually have feelings, male or female, regardless of any scandalous things they've done, and stand up for those being harassed.
Virtual Reality
Virtual reality is evidently the next revolutionary step for computer interaction and entertainment. Video games have gradually presented more immersive experiences since their inception. It is probably ideal for a game developer to have players feel comfortably situated in the game's world; that way, they are better able to understand scenarios in the game with less guidance and to respond to the game's stimuli quickly. As the technology used to support video games became more sophisticated, so too did the levels of detail and realism that developers could create in their game worlds. It seems natural that video games are edging ever closer to what could be considered a virtual reality experience.
Primitive forms of virtual reality have existed for quite a long time. Nintendo had an early vision for virtual reality with its infamous Virtual Boy console, a large headset that players could peer into to play their games and block out the rest of the world, rather than staring at a distant screen. More recent video game consoles, including Microsoft's Xbox, Sony's PlayStation, and Nintendo's Wii, have employed motion-control features to give players a more tangible, "realistic-feeling" approach to interacting with games as an alternative to sitting down and pressing buttons on a controller. Although these features were innovative, they were still far from what might be considered true virtual reality. A device that may just be capable of creating the most advanced virtual reality experience today is the Oculus Rift.
The Oculus Rift is currently a prototype virtual reality headset designed to track a player's head position and orientation, deliver 3D content to both direct and peripheral vision, and be lightweight and comfortable to wear. Oculus is one of the modern companies trying to create a true consumer solution to virtual reality. It has developed several prototypes of the Rift as strides toward a fully featured consumer version. These prototypes are intended to be used by developers as "dev kits" to get a head start in producing Rift-compatible games and technology, although they are available to anyone who is curious about the future of virtual reality.
However, Oculus was recently acquired by Facebook. This ignited a lot of skepticism among consumers, as Facebook is known more for its massive social network and advertising platform than for any virtual reality innovation. Since the announcement of the deal, followers of Oculus have lost hope and joked about things such as ads appearing directly in your face, breaking your immersion in your favorite game, or a new Facebook-branded Oculus Rift asking you to share your in-game perspective with friends on your account. Realistically, Facebook could simply be looking to expand its market, and the acquisition of Oculus could be mutually beneficial: Facebook claims ownership of Oculus and receives the credit for the Rift's continued innovation, now fueled by a far larger budget.
Other companies have already developed prototype virtual reality headsets of their own. Sony has its Project Morpheus and Samsung has its Gear VR. Both aim to provide a truly immersive experience for consumers, and now that Facebook owns Oculus, they may appear to be more viable options despite the iconic appeal that the Oculus Rift may once have had.
Saturday, September 27, 2014
Has the Technological World Been Blind to the Deaf?
I found this article sort of by accident. I have always had an interest in language, especially its more nontraditional forms, e.g. sign language, but until I am deaf or blind, I don't think I will ever truly understand what it is like to need a different form of communication. Over the past few years, I have really developed a passion for web development, and I try to consider how to optimize websites for the blind community when I build them. I grew up with a neighbor who slowly became blind as she aged, though in her early nineties she decided she wanted to learn how to use a computer and send emails. I remember the frustration she encountered trying to use programs like JAWS to read entire webpages to her.
What never crossed my mind, however, is how the internet and the world of computers are used by the deaf community. Because a deaf person can see just fine, it never occurred to me that they might not be able to use the internet in at least close to the same way as I can. It turns out, though, that deaf people tend to have a very difficult time learning to read and write. Before I really delved into the contents of the article above, I did a little research into how true this statement was. I found that, according to VOA News, "on average, deaf high school seniors are likely to read at the level of a nine-year-old." This was beyond shocking to me. I had never even considered how difficult it could be for deaf people to learn a verbal and written language they can't even use in the "normal" way.
The article goes on to state that deaf people "can face the same problems as other bilingual people." I thought about this for a minute. Yes, I think I have always considered American Sign Language a separate language altogether from English, because, after all, it is not as though a deaf person relates the sign for an object to the word for that object. They relate the sign for the object to the physical object, in the same way I relate the word for the object to the physical object. The article continues that signers' "brains have to choose between two languages all the time." This, however, was news to me. I had never really considered the two languages to be that separate.
After all of the research I did, I returned to the original article. A team of researchers in Germany is trying to develop an avatar that will sign to deaf people on the internet. Because reading can be difficult for many people in the deaf community, why not present information on the internet, and even around a city, in a way that is easier for them to relate to? The Saarbrücken team proposes in its paper, Assessing the Deaf User Perspective on Sign Language Avatars, the use of avatars instead of video recordings because recordings "imply considerable production cost, their content cannot be modified after production, and they cannot be anonymized with the face being a meaningful component of sign language."
As a lover of 3D modeling and animation, this all makes complete sense to me and I am very excited about the project. I hope that it can not only be used as a means for the deaf community to feel more included in the technological world, but for the hearing community to learn more about the deaf community.