Sunday, August 31, 2014

GamerGate and the Internet Echo Chamber

I've never been very active on Twitter, but a new trend has appeared in the gaming Twitter accounts I follow. A hashtag, #GamerGate, has gained popularity. In short, a group of gamers has looked into the journalistic integrity of gaming journalism sites. They claim that these sites have had conflicts of interest, misrepresented stories, and engaged in other tabloid-esque behavior. The journalism side has ignored them, called gamers misogynists (as the story originated with a sex scandal but grew past that), and is now declaring the term "gamer" dead.

The reason I bring up this #GamerGate hashtag is that it's a great example of how quickly the Internet can amplify things to a ridiculous level. When we talk about gamers, we talk about a large, diverse group of people: 59% of Americans play games, the average age is 31, and the gender split is nearly even [1]. The people who have raised these complaints about journalism come from multiple forum-like sites, multiple industries, and multiple ideologies. They have assembled evidence, want to inspire debate, and want to be heard. On the journalism side, the players are journalists themselves, game developers, or people active on social media. Many have aligned themselves with the "social justice" movement in order to bring minority groups to the forefront and promote equality and peace. There's nothing wrong with that; it's a good movement. The problem comes when these same people become incredibly aggressive and even violent toward gamers. I believe this hate is a result of the echo chamber.

In media, the echo chamber begins with an idea proposed by someone within it.[2] The other people in the chamber agree with it, repeat it to each other, and spread it throughout the chamber. The result is a community where everyone agrees and opposing viewpoints are automatically shut down. People come to believe everyone should share their views since, to them, everyone who matters already does. One social media example is in politics and the fragmentation of conservative and liberal media.[3] According to New York Magazine, most Republican congresspeople follow conservative organizations while most Democrats follow Democratic organizations.[4] This is one of many reasons it is difficult for the two sides to talk: "If everyone I know agrees with me, you must be some sort of idiot to not agree." Of course, that doesn't play out so literally. American politics has such high stakes that calling someone an idiot would become national news. The world of games journalism and the social justice movement, however, can say these things because the stakes are lower.

Over the past few years, journalists have built relationships with social justice members and progressively focused developers. An echo chamber formed around one idea: inclusion of all. This is good! However, the echo chamber tends to make ideas extreme. As Harvard law professor Cass Sunstein says about the echo chamber in a university setting, "the mere discussion of, or deliberation over, a certain matter or opinion in a group may shift the position of the entire group in a more radical direction. The point of view of each group member may even shift to a more extreme version of the viewpoint they entertained before deliberating." [5]
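Sunstein's group polarization effect is simple enough to caricature in code. Below is a toy simulation of my own devising (not Sunstein's model; every parameter is invented for illustration): each round, members drift toward the group consensus, and then the whole group nudges a little further in whichever direction it already leans.

    import random

    def deliberate(opinions, rounds=10, conformity=0.5, amplification=0.05):
        # Each round: members drift toward the group mean (echoing the room),
        # then everyone shifts slightly further in the direction the mean leans.
        for _ in range(rounds):
            mean = sum(opinions) / len(opinions)
            lean = 1 if mean > 0.5 else -1
            opinions = [min(1.0, max(0.0,
                            o + conformity * (mean - o) + amplification * lean))
                        for o in opinions]
        return opinions

    random.seed(0)
    group = [random.uniform(0.4, 0.7) for _ in range(20)]  # mildly sympathetic crowd
    print(f"before deliberation: mean opinion {sum(group)/len(group):.2f}")
    after = deliberate(group)
    print(f"after deliberation:  mean opinion {sum(after)/len(after):.2f}")

Run it and a group that started out only mildly sympathetic ends up pinned near the extreme, which is exactly the drift the quote above describes.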

So what happened? The inquiry into journalism is accused of being misogynistic because it began with a female developer being romantically involved with a journalist. This idea is put into the social media echo chamber. It bounces around, becoming more extreme each time someone adds their viewpoint. A week or two later, ten articles are published decrying the death of the gamer and calling for a new kind of gamer to be brought up.[6] The ISIS comparison is made, people receive death threats, and some leave the Internet out of fear and anger.[7][8] This level of harassment is the result of simply asking for more journalistic transparency. Even if one were to think there were "trolls" who wanted journalists to get this angry, the trolls aren't people with any standing; they're just Internet denizens. The journalists, who do hold positions of influence, are the ones acting like trolls, and they end up worse than trolls: trolls usually say things just to provoke people, while these journalists actually believe in their threats.[7]

From my observation, this particular echo chamber has become a cult of personality around the idea of social justice. It doesn't matter whether there should be discussion; you have to fight, be right, and be the most social justice-y of them all, or the group ostracizes you for not keeping up. Thus, nobody listens to reason. If this is true, it's a shame, because there's nothing intrinsically wrong with wanting more inclusive games. The issue is when that idea is championed by people who try to promote inclusivity through ostracism.

This is just a small microcosm of the Internet, and in terms of scale it's tough to see why it would matter. It matters for two reasons, though. One, it's happening right now and can be observed. Two, it shows how echo chambers can form without anyone trying. I doubt there's some conspiracy between these groups to attack gamers, but it happened anyway. That's frightening, but if we study this issue and recognize that we may be living in echo chambers of our own, we can improve discussion in the future.

[6] https://archive.today/wO1sh (one example of an article; links to other articles)
[7] https://twitter.com/devincf/status/503650957800919041, archived at http://imgur.com/a/j8P3l (user is a writer for a journalism publication)
[8] https://pbs.twimg.com/media/BwYlEEUIEAA5lE5.jpg (no Twitter link, tweet was deleted). There are many of these tweets and articles, so I'm limiting it to the two most extreme and prominent.

Net Neutrality and the Internet Slow-Lane

Net neutrality has been in the news a lot recently in light of a court decision that struck a major blow to the FCC's ability to regulate ISPs. Providers like Time Warner Cable are already exploiting new rules that allow ISPs to charge for internet data both upstream, from content providers such as YouTube, Facebook, and Netflix, and downstream, from internet users like me and the reader. Furthermore, these companies are eagerly planning to charge all customers extra for faster internet service, which will undoubtedly mean the bare-minimum internet offerings become much slower than they are today, with little chance of any savings for customers. Understandably, the denizens of the internet are upset.

A lot has been said on both sides, and the majority of people who know about the issue seem to favor net neutrality. Many have argued that, since all downloaded content has to be uploaded first, telecom companies are essentially planning to charge double for the same data. Others worry that this will stifle innovation on the internet, since large companies will be able to pay for faster service while the next scrappy start-up may not be able to compete. Some are calling for the internet to be reclassified as a common carrier service, which would allow the FCC to bring back net neutrality; others argue that this would lead to worse and more numerous consequences. They're all missing the point, though.

In 1996, Congress passed a law that made available $200B in funding for internet infrastructure based on a coaxial/fiber hybrid model. Telecommunications companies made various proposals at the state and local levels and agreed to expand service in exchange for local monopolies and federal funding. Notably, these companies agreed to provide open networks (so that multiple ISPs could offer service over the same infrastructure), to keep prices down, and to provide nationwide, symmetrical service upwards of 45 Mbps, all to be delivered by the year 2000. Not only was this deadline not met, but it is now 18 years since the passing of the law, and the average US download speed (not to mention upload speed) is still just 29.5 Mbps. On top of that, cable and fiber networks are closed to competition and far from symmetrical, and rates have risen much faster than we were promised. The word I would use for what happened is theft.
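To put those numbers in perspective, here is a quick back-of-the-envelope calculation of transfer times (the 3 Mbps upload rate is my own assumption for a typical 2014 connection, not a figure from the law or any survey):

    # Minutes to move a 1 GB file at various line rates.
    GB_IN_BITS = 8 * 10**9

    def minutes_per_gigabyte(mbps):
        return GB_IN_BITS / (mbps * 10**6) / 60

    rates = {
        "promised symmetric rate (45 Mbps)": 45,
        "average US download (29.5 Mbps)":   29.5,
        "assumed typical upload (3 Mbps)":   3,
    }
    for label, mbps in rates.items():
        print(f"{label}: {minutes_per_gigabyte(mbps):5.1f} min/GB")

Moving a gigabyte at the promised symmetric rate would take about three minutes in either direction; on the assumed 2014 upload link it takes about 45 minutes.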

It's neither here nor there that the internet is an "information service" that is not subject to the same level of FCC regulation as it would be if we labeled it a "common carrier" instead. The fact of the matter is that telecom companies stole hundreds of billions of dollars from America while giving us weak service and making us pay extra for the privilege of being lied to. Splitting hairs over the FCC's categorization of the internet is a disgustingly successful shift in the terms of debate by the very people who stacked the deck in the first place. Anything less than outright socialization of internet infrastructure is pitifully kowtowing to criminals.

Humans Need Not Apply

The notion that modern computers are intelligent and flexible enough to take over many existing human jobs was touched upon in our first class meeting as an example of the topics applicable to "Computers and Society". A couple of weeks ago, YouTube user CGP Grey uploaded an enlightening video with extended commentary on this particular topic, titled "Humans Need Not Apply" (https://www.youtube.com/watch?v=7Pq-S557XQU). The video explains the roles that robots and computers presently inhabit in our society, ranging from the mundane to the complex and the enormous to the delicate. As technology continues to become more sophisticated, the variety of roles it fills will only expand. Naturally, this also means that the number of roles left to be filled by humans will become smaller and smaller.
There are many people who believe that technology will never be able to accurately emulate human creativity, since creativity often involves complex emotion and individual expression. Thus, roles that require that kind of creativity could never be properly filled by technology, right? Actually, as CGP points out, there are modern computers that can already produce creative content to some degree. These computers use algorithms that mimic a human's decision-making in writing or art: they make comparisons between various prose, colors, objects, etc., and, based on rules established in the system, choose what fits best in a given context. IBM's Watson, for instance, is capable of swallowing massive amounts of data and, using advanced algorithms, can perform complex tasks like antidote synthesis as well as more creative tasks like inventing recipes for quite delicious food. In fact, IBM has its own food truck where Watson's culinary creations are served to the public: http://www.ibm.com/smarterplanet/us/en/cognitivecooking/truck.html.

Of course, systems like Watson will always try to figure out the "best" way to combine things creatively or usefully (as defined by the data and rules they rely on), while one could argue that a considerable part of human creativity is its imperfection, which varies between individuals. A romantic would say that these imperfections or 'quirks' could not truly be simulated, as they are linked to an individual's personality, or something called a 'soul'. However, I'm not convinced that any human behavior is inexplicable; as extremely complex as human thoughts and feelings can be, it makes sense that they are caused by the plethora of minuscule chemical reactions that happen in our brains. Because of this, I will not be surprised when a computer can one day have a personality too, by simulating a human brain with semi-random proportions of 'chemicals'. It's definitely eerie to imagine a society that employs such technology and what might then follow.
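To make that concrete, here is a toy sketch of the kind of rule-driven "best combination" search described above. The flavor data and the two scoring rules are entirely my own inventions for illustration and have nothing to do with Watson's actual system:

    # Score candidate ingredient pairings against hand-written rules
    # and keep the highest-scoring one.
    from itertools import combinations

    FLAVORS = {
        "strawberry": {"sweet", "fruity"},
        "basil":      {"herbal", "fresh"},
        "chocolate":  {"sweet", "bitter"},
        "chili":      {"spicy"},
    }

    def novelty(pair):
        # Rule 1: reward unusual pairings (many non-shared flavor notes).
        a, b = pair
        return len(FLAVORS[a] ^ FLAVORS[b])

    def harmony(pair):
        # Rule 2: reward pairings that share at least one note.
        a, b = pair
        return 2 if FLAVORS[a] & FLAVORS[b] else 0

    pairs = list(combinations(FLAVORS, 2))
    best = max(pairs, key=lambda p: novelty(p) + harmony(p))
    print("suggested pairing:", best)

Everything the program "invents" is fixed by its rules and data, which is exactly the limitation noted above.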

Despite the possibilities, I personally am not certain that modern and future technology will affect my career in a way that should greatly concern me. I'm looking forward to a career in web programming and have had enjoyable experiences with website design and development in previous internships. On many occasions I'd have a lot of back-and-forth with a client in order to come up with creative designs or solutions to their concerns; I don't believe that sort of interpersonal communication could be readily provided by a computer in the "near" future, but of course I may be mistaken. It is difficult to predict exactly what extremes of work technology may be able to handle one day, and as CGP says, it is even more difficult to know how to prepare for a world in which almost all roles are filled by technology.

Effects of Technology in Our Lives

The past century has produced so many modern advances that affect our daily lives today. They range from the assembly line, television, computers, the internet, and mobile phones to biometrics and autonomous robots. In the beginning, the circuits, transistors, chips, and other hardware components found in all of today's electronics were developed. They started out very large and limited in capacity and storage, and shrank into the tiny chips embedded on a motherboard or circuit board today. Moore's Law predicted that roughly every 18 to 24 months the number of transistors that fit on a chip would double, so that size and cost would fall while capacity and speed increased. This law laid the foundation of today's modern computing.
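For a sense of how relentless that doubling is, here is a quick sketch starting from the Intel 4004's roughly 2,300 transistors in 1971 (figures approximate, with a two-year doubling period assumed):

    # Moore's Law, naively: transistor counts doubling every two years.
    transistors, year = 2300, 1971   # Intel 4004, approximately
    while year <= 2013:
        print(f"{year}: ~{transistors:,} transistors per chip")
        transistors *= 2
        year += 2

By 2013 the projection is in the billions of transistors per chip, which is roughly where real processors of that era landed.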
As time went by, more and more technologies emerged, giving rise to today's modern age. Computers, the internet, mobile phones, and other electronic gadgets were developed. The intent behind all of these technologies was to make our lives better, more efficient, and more productive while reducing human labor. All of them started out fine, but as they grew they displaced the old ways of life and formed a new digital world: a world where everything is digitally connected, and where computers, smartphones, tablets, and other gadgets consume most of our time. Whether we learn, shop, play, watch, listen, or talk, they are the virtual medium for it all.
These technologies have given us so much. They have certainly reduced human labor by automating tedious tasks. They have created new jobs while rendering old skills, and hence old jobs, obsolete. They have provided a new medium of learning, to be sure, but a broken, incomplete, interruptible kind of learning. Many collaborative tasks have been made possible by these technologies, yet they have also caused interruptions in learning and a lack of concentration while reading. As new technologies develop, especially in the field of artificial intelligence, the time isn't far off when most manual labor will be performed by robots, killing more jobs. The way to balance society, in my opinion, is to ensure there is sufficient need, and a balance, for all sorts of work. That is the only way a society can function well.
The problems aren't over yet. What about our privacy? With these technologies being part of our everyday lives, is there any privacy left? I don't think so. People are becoming extremely open about the things they post on social networking sites, without any consideration of the consequences. Who knows what is happening to the data provided to Google, Facebook, Twitter, and other such sites? People put their routine activities and other personal things on these sites for everyone to see. Why? What is the need to share such private information?
In conclusion, I understand that the past century has brought us many great advances in technology, and we should be proud of how far we have come. I am definitely in favor of technology and its uses, as long as it doesn't take away our rights and our means of living.

The Problems of the Internet and Why It Should Be Faster


With the new upgrades to the Stevens network, it seems the University is taking a step in the right direction toward improving productivity on campus. However, as someone who lives off campus and thus spends most of my time away from it, the improved network seemed fruitless to me. Nevertheless, it piqued my interest in the topic of internet speeds and the struggle to achieve them. While I wait for the newest computer game to download, I wonder why the internet is flawed like this. Why do I have to wait hours for something to download when fifteen minutes could save me fifteen percent on my car insurance (Geico)? The Internet is the center of our lives, and we do everything on it. We have to be connected to social media and entertained by the latest episode of our favorite show. Yet we are still complaining about the buffering and lag we suffer while on the Web. This led to my ultimate question: why is the service we depend on every day slow and unreliable?
Of course, the problem lies with the companies that provide "internet" for us. The main ISPs (Internet Service Providers) in the United States are Comcast, Time Warner Cable, and Verizon. These companies compete with each other to provide the fastest internet speed for the lowest price. However, on February 13, Comcast proposed an acquisition of fellow ISP Time Warner Cable, and that led to many dilemmas. With the two largest ISPs merging, people feared there would be less competition, leaving ISPs with no incentive to provide better service. This became a reality when Comcast began to charge Netflix because its traffic took up so much bandwidth during peak hours. The move caused a lot of outrage, because Netflix is essentially paying Comcast to provide the same service it always has. It also runs against net neutrality principles, under which all data on the internet should be treated equally. Even though Netflix takes up a bigger share of the bandwidth, I don't think it should be throttled and charged for the same service it has always provided. These fights between ISPs and companies like Netflix only limit the Internet's potential for growth in the United States. The main problem the United States faces is that we need to provide faster internet speeds for consumers so that they don't have to worry about anything else. If something as basic as the Internet can be improved instead of being a product that is fought over, the disputes companies have with ISPs will fade and the productivity of consumers in the United States will increase.

With faster internet speeds, people will be able to achieve more than ever. In Hong Kong and South Korea, internet speeds are among the highest in the world, double or triple the average speed in the United States. Those countries and cities lead the world in innovation because they are able to connect to each other faster. This ties into the reading for this week, "The Distraction Addiction," where the author describes the amount of time we waste waiting for our computers to load and connect to the Internet. Even though these inefficiencies are small, they add up, and we waste a lot of time waiting for things to boot and load. That is why the United States should invest more in improving internet connections for consumers instead of fighting over the amount of bandwidth a company uses. With the Information Age growing and evolving more rapidly than ever, it is crucial that we keep up so that we don't lag behind other nations.

The Internet - Uniting the World, Only to Divide it Again

Anyone who browses Internet message boards for more than information would need to be a special kind of crazy, since these boards can be some of the most vicious parts of not only the Internet, but society as a whole. Whatever type of dispute you may be looking for, whether it be Marvel/DC, Star Wars/Star Trek, or maybe even science/religion, there's bound to be a multitude of insults, slander, and immaturity coming from both sides of the argument.

What does this prove, you may ask? Only the most ironic aspect of the Internet: it was designed to bring the world together, yet at the same time it provided a way to divide its communities further. Previously, the only places these arguments could fester were among acquaintances (in person) or at conventions such as Comic-Con. With the creation of the Internet, and the later development of message boards, these conflicts could not only be continued from the comfort of each party's living room, but also carried on under the sense of anonymity that inspires false confidence in one's points. Suddenly, there's no threat of physical danger, so the insults become more severe.

Where does this leave the disputes, and those who partake in them? They go on forever, since you can literally post and respond from anywhere in the world, anytime you want. What at one point might have remained a small disagreement among people who could otherwise be friends evolves into a mutual hatred, thanks to an inconceivable amount of contempt thrown from behind the veils of namelessness.

Alternatively, one can argue that these boards provide better, more well-constructed arguments. Since your opponent isn't standing in front of you, waiting for your answer in person, you can wait a couple of minutes, look up some supporting evidence, consider how your opponent may respond, and put together a cohesive argument. Granted, these can lead to arguments lasting for days, or even weeks, but theoretically, they should be more intelligent than, "I'm right!" "No, I'm right!" "Nuh-uh, I'm right!" ad infinitum.

However, as my brief yet painful research has shown me, the opposite is true more often than not. I spent several long, unyielding minutes reading a message board concerning superhero movies and the idea of a shared cinematic universe (you can take a wild guess which I'm talking about). The fruits of my labor consist of many people repeating "X movies suck! Y movies will be so much better!" Occasionally, someone showed some level of intelligence, but they were rapidly drowned out by a flood of neanderthals complaining about big words, and I can only assume they gave up and moved on with their lives.

What have we learned? At the end of the day, only one point is proven. No, it's not that Marvel is better than DC, or Star Trek is better than Star Wars, or even that humans are influenced more by nurture than nature. The point proven is the fundamental paradox of the Internet: that however well-intentioned it may have been, the World Wide Web has done just as much to unite the world's population as it has to drive it further apart. Tim Berners-Lee would be rolling in his grave if he were dead.

Social Media Beyond Instant Gratification

Social media platforms are meant to deliver instant results, whether it's likes on the Instagram picture of your lunch or a Facebook status update about your new relationship. These platforms feed into our selfish desire for instant gratification. But short-term satisfaction isn't the only reward social media provides. Over the last few years, social media has evolved into a platform for raising and spreading awareness about global issues.

The use of social media as a platform for true communication has become a tool for the public to get their voices heard. Media such as Twitter and Facebook have been in use from the events of the Arab Spring to the recent events in Ferguson, Missouri. The raw encounters of people in these tense situations have spread virally online, bringing the action to the front doors of people worldwide.

Social media has worked to highlight discrepancies in traditional news outlets while delivering information to the general public. Through platforms like Twitter and Facebook, events are seen unfiltered. In a recent interview on Yahoo! News, Twitter co-founder Jack Dorsey discussed the role of social media in Ferguson, saying, "you can just simply get out your phone wherever you are, and you can add to the conversation, you can read the conversations and you can contribute, and that was something that we always wanted to empower more. And fortunately the world wants the same thing" (Shapiro).

With the help of social media, the average person has also been able to get their opinions out. Traditional news outlets usually rely on professionals to analyze events and deliver insight into sensitive situations around the world. Armed with social media, however, students like me have been sharing their opinions and providing different facets of a single story. Trends like #iftheygunnedmedown became a popular tool for students to express their feelings about the negative portrayal of Michael Brown in the news after his shooting in Ferguson, Missouri. Other hashtags are also in place to create a following for current events and to build a population that is much more aware.

The events in Ferguson have caused an immense response on social media, and those very responses have fueled efforts to demilitarize the police and hold them accountable for their actions. The public in Ferguson has been using social media platforms to document events unfolding in their community and is pushing for a rule requiring officers to wear cameras on duty. This would help diminish police violence against civilians and create an accessible way for the public to see what their law enforcement officials are doing. These ideas, brought forth through social media, can work toward creating a future where there is trust between the general public and the people who are supposed to protect them.

Although the "traditional" use of social media has been to reconnect with people from our past, the role of these outlets has evolved beyond something so mundane. Nowadays, the public uses these platforms to express opinions on current events and bring to light what they witness. This more intimate version of current events offers a new way of observing and understanding information in a constantly changing world.

Source:
Shapiro, Stephen. "Twitter Co-founder Jack Dorsey on Social Media's Role in Ferguson." Yahoo! News. Yahoo!. Web. 28 Aug. 2014.

Thoughts on NSA surveillance



Edward Snowden, a former American technical analyst who worked for the Central Intelligence Agency, told us about the PRISM program and the XKeyscore system: "One that gathers U.S. phone records and another that is designed to track the use of U.S.-based Internet servers by foreigners with possible links to terrorism" (Kimberly Dozier). PRISM was launched by the National Security Agency in 2007. It is essentially a data mining project, with data gathered from nine big internet companies, and it can monitor email, messages, videos, pictures, documents, video chats, and social networks. XKeyscore is another surveillance program, used to monitor people's online activities. Snowden leaked classified information from the National Security Agency because he wanted to let people know that the government is monitoring them.

Snowden believes that the National Security Agency should not monitor people, because doing so violates their privacy rights. The National Security Agency can draw a complete picture of a person, but this is done without regard to that person's privacy. People are entitled to their own spaces and secrets, even online. Normally, people feel uncomfortable when they are being watched by others; they demand personal space and privacy. But now the NSA is monitoring citizens on a massive scale, so who can feel comfortable? In addition, when the government monitors people's cellphones and email, business conversations and transactions are also swept up, so business secrets are compromised. For example, suppose a person creates a strategy to trade stocks and discusses it with a coworker. If the conversation is intercepted and the strategy ends up in the wrong hands, that person's work is in vain.

On the other hand, the American government says the program is an efficient way to identify potential terrorists and engage them. Using the surveillance system, law enforcement can locate terrorists, listen to their conversations, and catch them, thus protecting people from terrorist attacks. According to the Associated Press, an NSA director said in 2013 that the surveillance programs had foiled some 50 terrorist plots worldwide.

Some people think Snowden betrayed his country by disclosing the secrets of important anti-terrorism programs. However, it was just for him to stand up and speak, because it is true that the programs violate people's lawful rights and privileges. He performed a brave act, speaking the truth even though he knew he would be in danger, while other people in the government who knew about the programs never said a word.

In conclusion, I share Snowden's view. People deserve privacy and freedom; they are natural human rights. In this sense, a massive eavesdropping project is not a product of justice. True, a project like PRISM has proved effective in the battle against terrorism. But as far as I am concerned, there should not be a system that surveils citizens; it goes against the original purpose of protecting people.

The Social Implications of the Smartwatch

Since the beginning of the smartphone era in 2007, our daily interactions with those around us have fundamentally shifted. The smartphone became a tool to keep us constantly entertained and connected to the people we care about. As the technology improved, its societal impact became more widespread and prevalent. Everywhere you go, you see people watching videos, browsing websites, and talking to their friends, all with heads bent down to their screens. Many people even have a tendency to check their phones and respond to messages while holding face-to-face conversations. A new technology, the smartwatch, could improve the way we communicate by allowing us to focus on each other instead of on the phones in our pockets.
The smartwatch, an electronic wristband that connects to your phone to display notifications, has been growing in popularity over the last one to two years. The device category came into the spotlight in early 2013 after a successful Kickstarter campaign by Pebble. Now, in mid-2014, big-name manufacturers like Samsung and LG have released smartwatches running a highly optimized version of Google's Android OS. In addition, companies like Apple and Motorola are expected to announce their own implementations in the coming weeks, pushing the device category into the mainstream.
Adoption of the new technology has been expectedly slow. The implications of receiving notifications on your wrist seem pretty negative at first: the vibration is much more noticeable, so you're more likely to be distracted by it; you'll be seen as rude for checking your wrist mid-conversation; and you'll look like a geek for staring at a one-inch screen on your wrist. However, having worn a Pebble for the past four months, I've noticed that these problems aren't present at all. Firstly, custom vibration patterns mean you know right off the bat what type of notification you've received, allowing you to judge whether it's important enough to check. Secondly, the form factor means that reading your newest email or text message takes a matter of seconds, making it unobtrusive and discreet. Thirdly, new devices designed to resemble high-quality watches mean that you won't look as geeky as you think.
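That first point is easy to sketch in code. The snippet below is purely illustrative pseudo-firmware of my own; the notification sources and patterns are invented and have nothing to do with Pebble's real API:

    # Map each notification source to a distinct vibration pattern
    # (milliseconds on, off, on, ...) so the wearer can judge
    # importance by feel, without looking at the screen.
    PATTERNS = {
        "text":     [100, 100, 100],   # buzz-pause-buzz: probably a person
        "email":    [400],             # one long buzz: can usually wait
        "facebook": [50],              # a faint blip: safe to ignore
    }

    def pattern_for(source):
        return PATTERNS.get(source, [50])   # unknown apps get the blip

    print(pattern_for("text"))     # [100, 100, 100]
    print(pattern_for("twitter"))  # [50]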
The social benefits of the smartwatch are extremely significant. In the four months that I've owned my Pebble, I've noticed that I absentmindedly check my phone less than I did before I had it. During conversations and interactions with others, instead of regularly checking my phone for the newest text, email, or Facebook notification, I wait for the buzz from my watch and judge at that moment whether it's worth the interruption. Additionally, when I do have to respond to a message, I spend less time checking each of my inboxes because I'm instantly aware of any notifications. Without the watch, my phone is almost always out on the table in front of me as a constant reminder that there are other people I could be communicating with.

In becoming more connected to my phone than ever before, I’ve become less attached to it.  I spend more conversations focused on the people I’m talking to instead of on the computer in my pocket, and I’ve noticed these changes in other smartwatch owners.  As the technology becomes more popular, my interactions are becoming more involved and personal.  The changes I faced socially when I got my first smartphone are being reversed, and I’m grateful for that.  If this new device category sweeps the market in the same way the smartphone did, then we can all expect these changes to make our daily lives much more involved and interactive.

No more 40-hour work week?

When you think about getting a job after school, you probably think about the typical 9-to-5, 40-hour work week. For some, the week runs well over 40 hours. Throughout the day, you get your usual morning break, lunch break, and sometimes an afternoon break too, lasting anywhere from 15 minutes to 45 minutes for lunch. But do these long days with small breaks actually help in terms of productivity and efficiency? I don't think so. If you are in the middle of doing something, it is tough to just step away, go get something to eat or relax, and then come back to it. It takes time to get back into what you were doing and figure out where you left off. It doesn't seem like much, but it adds up, especially if it happens two or three times a day, five days a week. Nowadays, people are easily distracted at work by their smartphones; I know people who go to work and play games on their phones all day. On the job, this can seriously affect production and the quality of work, and it might also be the reason people need to work more than 40 hours in a week: to pick up other people's slack.

Countries such as France and Germany work fewer hours per week, and studies show that France has one of the best output-to-hours-worked ratios in the world. The French work fewer hours, but the quality of their work is just amazing; "quality not quantity" seems to work out pretty well for them. Germany, too, has a very stable economy and a very low unemployment rate, which could be related to the shorter work week.

It is a tough situation for some jobs. I do heating and air conditioning work when I'm not in school, and that work is very physical (which I love). Whether it is the middle of summer or the middle of winter, it is tough for us to work straight through to the end of the day without a break. We are constantly climbing ladders, carrying equipment, and doing all this work in the heat, the freezing cold, or sometimes even the rain. Some guys don't mind working the eight hours a day; I know I don't really mind working 8 to 4. In terms of productivity and efficiency, we don't lose much from taking a small break between jobs. We set ourselves up to do each job as fast as we can.

Then again, if somebody wanted me to work fewer hours a week without hurting my pay too badly, I guess I wouldn't really mind getting out a little early. And if the pay doesn't change at all and you're offered less work for the same money, who wouldn't take that offer?

The Toothbrush Test

Who do you think technology firms in Silicon Valley contact when they want to acquire a startup or some other established company? If you answered Wall Street bankers, then you clearly aren't up to speed with the way Silicon Valley handles mergers and acquisitions in the 21st century. It isn't new for technology companies to behave like investment banks, but this is the first time they have replaced Wall Street firms entirely. And the fact that technology firms can successfully make deals on their own is baffling only to the investment banks, no one else!

The term "toothbrush test" was coined by Google co-founder and chief executive Larry Page: is this something you will use once or twice a day, and does it make your life better? If so, it may be worth acquiring, whatever the bankers' spreadsheets say. For the past few years there has been a lot of wheeling and dealing in the technology industry in terms of buyouts and acquisitions. Recently Apple closed a deal worth $3 billion to buy the headphone company Beats Electronics. After the deal was finalized, Apple CEO Tim Cook announced that there was no involvement from Wall Street banks, either in financing or in advising, which shocked very few people. Likewise, when Google bought Waze for $1.1 billion and when Facebook bought WhatsApp for a record $19 billion, the M&A heads of the big banks had no knowledge of the deals. It isn't that technology firms don't value or look into the financials of the firms they buy; they just look at them from a much different angle than the bankers do.

Companies like Google, Apple, and Facebook have been beefing up their internal M&A or corporate development departments, which consist largely of former Wall Street bankers. These are bankers who either got tired of working long hours as analysts or wanted to join the recent trend of working in a t-shirt and jeans. Their job is essentially the same as it was in the financial services industry, but they are trained to look at a potential target differently. Instead of just looking at numbers and financials, this new breed of tech-banker looks at the potential and talent a company can bring with it. The negotiation side of acquiring a firm is left to the C-level executives, unlike in other industries where the M&A people are the first to reach out with the idea of a buyout.

The reasoning behind not using investment banks is the belief among many tech executives that banks acting as deal advisors lack the human touch, and that their thinking doesn't align with the needs of today's tech companies. When bankers look for acquisition targets, they usually look for ways to increase profit margins and cost savings in the short term, and they often consider only stable companies with successful business models. Tech companies operate on a different horizon: instead of profit margins, they look for potential and promise in early-stage companies. Private equity firms and hedge funds usually seek out underperforming companies and use their expertise to turn around their acquisitions' business models; the majority of tech companies follow that ideology, making big bets on the future instead of the present. For advice, and only rarely for financing, tech companies prefer to go to venture capitalists and entrepreneurs. This gap between Wall Street and Silicon Valley keeps widening, with tech companies dictating their own terms when it comes to mergers and acquisitions.

Does technology affect attention span and is that a bad thing?

Netflix, my email, Blogger, Google, The Guardian, and Facebook. 

These are all the tabs that are currently open on my computer. Not to mention the application that is currently running on my phone. 

In today's society it takes only a few seconds to reach a completely different area of the internet. I'll admit, I don't have the greatest attention span. I am as tempted as anyone to pull out my phone during class, or to check social media a thousand times while I'm working on an assignment, because maybe someone is trying to get in contact with me! This is a growing concern, as some professors demand that cell phones be turned off in class, and their students respond with a collective eye roll.

The National Center for Biotechnology Information, the U.S. National Library of Medicine, and the Associated Press have reported that the average attention span decreased by four seconds between 2000 and 2013, alongside the increase in external stimulation, specifically technology such as smartphones and laptops. The average attention span of the people surveyed is now less than a goldfish's. The CDC found that the percentage of children diagnosed with ADHD increased from 7.8 percent to 11 percent between 2003 and 2011. It would be misguided to assume that the increase in technology caused the rise in ADHD. However, it stands to reason that the increased stimulation makes it increasingly difficult for kids with ADHD to focus, leading to an increase in the severity of the disorder. During the second lecture of my Materials class, Professor Eitel quoted a statistic that students' focus time in a lecture setting is, at maximum, 15 minutes (I couldn't find this study, but it is also quoted in Time and in a presentation at Columbia). This is why he intends to make the class more interactive.

It can be argued that a shorter average attention span is due to positive aspects of technology. Students are not required to have extensive attention spans because it takes less time to perform tasks. Thanks to search engines, research takes considerably less time; it takes under a second for Google to dredge up tens of millions of results from a single search. People can respond almost instantaneously to a query posed by a peer through text or email. Some televisions can even split the screen between two or more channels so people can easily switch between TV shows or, in the case of my father, sports stations. For someone like my father, who does not have time to focus on a two-hour TV program and whose job it is to understand what is going on in the world of sports, most of which happens simultaneously, multitasking technology is necessary. A short attention span might not necessarily be detrimental to people in today's society.


Eight seconds of attention span is a very small window of concentrated focus, and if you read through this whole blog post without checking your phone or clicking on another tab, I applaud you. Technological advancements create a powerful temptation to check whether you have any new Facebook notifications or favorited Tweets, leading to procrastination and even forgetfulness. I cannot count the number of times I have forgotten to send a text message or email because I opened another app right after receiving the message. In situations like that, a short attention span becomes detrimental to important tasks. However, a short attention span also grants people the ability to multitask, constantly shifting focus from one thing to another, and it could stem from the fact that technology speeds up the process of completing assignments. A shorter attention span might be the mind's way of adapting to the constant influx, and the ease of access, of that information.

Sources: http://www.theguardian.com/teacher-network/teacher-blog/2013/mar/11/technology-internet-pupil-attention-teaching
http://www.cdc.gov/ncbddd/adhd/data.html

Internet Addiction is Not an Illness

Internet usage has increased drastically over the past two decades, and it's not stopping even now, when almost everyone has it at home and uses it daily. Recently I have noticed that some people call excessive internet usage an "illness" that requires treatment. Internet addiction is not in any way an illness; it's a habit.

You must have heard parents complain about how their children are mentally unstable because of their addiction to the internet. Most of them blame the internet, referring to it as a plague that consumes a child's brain, but very few see their own fault in it. Internet addiction can be easily contained if this is done early on. Parents who allow their children to be on the computer for hours every day at an early age should not be surprised when those children grow up to be internet addicts.

You might have also heard about so-called "treatments" for internet-addicted children. During these treatments children are treated as if they had a mental condition, which can result in them believing that they actually have one. These treatments are not free, either; they can be quite expensive and take time. And they basically do what parents could have done themselves if they viewed internet addiction as a habit rather than an illness: limit internet time and encourage children to do more physical activities outside instead of letting them stay inside and do whatever they want. Who else but parents can teach their children about the downsides of internet addiction?

For example, there is an article about a four-year-old who had a serious case of game/internet addiction. Of course her parents blame it all on the internet and technology, but in fact they allowed her to use an iPad from the time she was a baby, and by the age of three she couldn't live without the gadget. Now her parents pay 16,000 euros a month for a 'digital detox programme'. It is very sad to hear that this girl is undergoing psychiatric treatment because of her parents' mistakes. The article also cited a poll which showed that "more than half of parents let their babies use a smartphone or tablet, with one in seven allowing it for four or more hours a day."

I can certainly say that, like most teenagers in our time, I'm addicted to the internet. It is very hard to pass a day without going online, checking mail, or looking up articles. But I feel it is my responsibility to control my internet usage, so I try to limit myself by doing other things, such as meeting up with my friends or just reading.


In conclusion, I would say that internet addiction is not an illness. It's a habit, a very bad habit, one that can damage your physical and social well-being. And if you have it, no matter how old you are, it is your own choice to get rid of it.

Source: http://www.dailymail.co.uk/news/article-2312429/Four-year-old-girl-Britains-youngest-iPad-ADDICT-Shocking-rise-children-hooked-using-smartphones-tablets.html