Saturday, September 21, 2013

Google is Not a Monopoly

A monopoly is classified as an enterprise that is the only seller of a certain product, and thus has no economic competition. During the early twentieth century, John D. Rockefeller's Standard Oil Company was labeled a monopoly under the Sherman Antitrust Act, since it owned most of the country's (and the world's) oil production and marketing business, resulting in it being split into several different companies, including today's ExxonMobil. As I have analyzed it, Google does not fall under this category, since almost all of their services have a handful of competitors. Even though Google offers a large variety of services, they tend to improve on a service that is already out there, and they consider almost all of their services to be in beta, with Gmail as the exception, which was officially launched in 2004. Even their futuristic Google Glass, which is currently in beta, has a competitor known as Jet, created by Recon Instruments.
So there is no way that Google is a monopoly when they build a service that many other corporations already offer. Google is more of an experimenter than anything else, since what they do is spread themselves into different fields and try to improve them. Their products range from social networking with Google Plus, to internet and television service with Google Fiber, cell phones with the Google Nexus, and laptops with their Chromebook, and they even run a laboratory in California known as Google X Lab. Google actually makes the lives of their consumers simpler by syncing their data across all their devices through Gmail. This data can also be downloaded through the Google Account Activity report so one can manage and organize one's information. On top of everything, Google supports open source and posts the code for their Android and Chromium operating systems online, as well as giving hints on how to modify and improve them for different needs.
Currently the only service Google has that may not have a competitor is their augmented reality game known as Ingress. Currently Ingress is only available for Android devices, and an activation code is needed to play, even though the app can be downloaded through the Google Play Store. But even this does not make them a monopoly, because just like most of their other services, Ingress is in beta, free of charge, ad free, and synced with Gmail. In reality, the only difference between Ingress and other video games is that it requires players to go outside and play instead of remaining stationary in a room. So it may be that other mobile games, and maybe even console games, can be counted as competitors, even though those games one way or another try to make a profit off of their consumers, whereas Google does not.

 From personal experience, I actually enjoy using the services provided by Google. With Google Drive I am able to start an assignment on my phone when I am not able to use my laptop, and then later access the file on my laptop without having to send myself an email attachment or tether my phone. Also, with Gmail syncing my bookmarks, history, and forms from Google Chrome, I am able to access all my internet data from my cell phone, laptop, and desktop with no problem. As I have said before, I see that Google actually makes everyone's life easier by syncing the data from all their services onto one account, which can be accessed from any device running almost any operating system. Google even has a sense of humor, with their easter eggs and annual pranks on some of their more widely used services such as Gmail and YouTube, which adds another reason why I support them.

Friday, September 20, 2013

Monitoring People via Technology Started Before Prism

Using technology to monitor people has been going on far longer than the NSA's PRISM. Whether it's tracking location, screening emails, turning on webcams, or some other form of monitoring, surveillance through technology traces back to before PRISM was created. Over the last decade or so I have heard plenty of stories about how schools have provided laptops to students, and months later the school is being sued for illegally monitoring the students. Some schools were just constantly tracking where the laptop was, backing it with the reasoning that if a laptop was stolen they could use GPS to track it down. I find this appropriate, but there is no need to constantly track and record the information.
                Some schools were even turning the webcams on without alerting the students. They were able to see what was on the student's screen and everything going on in the room the laptop was in. I have an absolutely huge problem with this, and I know so does just about everyone else. Under no circumstance does a school need to turn on a webcam to see and hear what is going on in a room. To me this is even worse than the NSA's PRISM.
                PRISM became a huge debate over the summer and is still heavily argued today. It did not surprise me at all to hear that the NSA was recording information about users on the internet. What did and still does surprise me is hearing, year after year, of another school or town being sued for illegally using technology to monitor their students. I do not see any reason a school needs to monitor its students. If they want to catch students doing drugs, cheating, or engaging in illegal activities, there are much better, legal, and moral ways to achieve this. To an extent I can understand why the NSA is tracking and recording information. They have a much bigger and tougher problem: tracking and preventing terrorism.

I'm not sure if all, or even any, of the schools getting sued are victims of individuals misusing power, or of an entire institution misusing its powers. I feel that in most cases it is probably an individual or a small group of individuals acting on their own, taking the power they have to spy on students. At least, I hope this is the case. If institutions as a whole are taking steps to track and monitor their students, I see a huge problem slowly developing in this country. I feel like one day this will just become so normal that it won't even be argued.

It scares me to think that one day it will be so normal that those with more power and authority will just be tracking my every move. As technology advances, the monitoring will increase. It will become easier to track people's locations and what they are doing. Issues like PRISM could have been resolved well before the program was even created. All it would have taken was for people who were not affected by schools monitoring students to speak out. If there had been enough of a push to label monitoring people as immoral and intolerable, PRISM would have panned out completely differently. Maybe it would be the same, except no one would have spoken out to release the information, but I think it is more likely that PRISM would have been discussed and designed differently.

Sports?

               After the large discussion sparked in class last week, I decided I would create my blog post this week with a focus on eSports and the effect video games have had on competition in various regions. Competition has always existed as an alternate purpose for video games besides entertainment: from the early days of high scores on arcade gaming machines, to the heyday of Counterstrike, to the current crop of competitive games such as Starcraft 2 and League of Legends.
               Competition in the realm of video games started at the average arcade, where high scores were the true test of skill (besides perhaps the 1v1 fighting games like Streetfighter and Mortal Kombat), and were a precursor to the competition of the future. The era of the first computer games and early consoles came next, which is where the important starting point of eSports came about. The first huge game to start competitive eSports was also one of the first serious and somewhat controversial games, Doom. As a first-person shooter about demons, monsters, and violence, there were a lot of criticisms launched toward the Id Software game. Multiplayer over dial-up was a feature of the game, and team deathmatch (involving two teams where the first to a set number of kills wins) and capture the flag were prominent game types.
               The two major genres for competitive gaming became the team-based FPS (such as Doom, Quake, Unreal Tournament, and Counterstrike) and the real-time strategy genre (such as Starcraft and Age of Empires II). The team-based FPS games still to this day follow a model similar to the earliest days, with the majority of FPS competitions based on team deathmatch, with some capture the flag depending on the tournament or game. The current crop of games for FPS players tends to be Call of Duty and Halo, which offer the choice between a realistic game and a sci-fi game depending on the player's taste. Starcraft became a huge force in the eSports world and would only be dethroned today by the ever-expanding League of Legends style of game.
               Starcraft was a huge force in the eSports world all the way up until the release of the second part of its sequel. Starcraft competitions took place in many places across the world, with a huge focus in Korea. Due to the rampant ease of piracy, South Korea had a large number of Starcraft players, and it soon became both accepted and common to play Starcraft. As more and more tournaments were created, teams started forming (Starcraft is a 1v1 dueling real-time strategy game) so that players could receive money from sponsors to attend tournaments and even receive a consistent salary and a personal coach. Unlike in the rest of the world, Starcraft became the "big thing"; players would train 16 hours a day, working on mechanics and strategy, and compete in tournaments sponsored by the Korean Esports Association (KESPA), which had a channel dedicated to 24/7 tournament streaming. It became acceptable to eat dinner and then watch Starcraft on television with family, akin to sports gatherings here in America. The South Korean players largely outplayed the rest of the world, having an average of 300-400 actions per minute (actions per minute are the number of keystrokes and mouse actions) and a large advantage in strategy due to their training regimen. At the major international tournaments for Starcraft, the players were usually defined as either Koreans or foreigners (i.e. anyone not from South Korea), and at almost every tournament the majority of the top 8 players hailed from South Korea.
               Starcraft is still a dominant force, with a large portion of South Korea split between Starcraft and Starcraft 2. There is a lot more money in Starcraft 2 for the "average" pro players, but until recently the absolute masters of the original Starcraft still made more money playing the old version of the game due to their obscene skill level. Starcraft 2 has had a much closer gap between the skill levels of Koreans and foreigners, but there is still a difference between the two, which can be attributed to the difference in training and the use of training houses in South Korea (an idea that has been gaining traction in Europe and North America lately). The game that has dethroned Starcraft in the past year or so, though, has been League of Legends; with more prize support, more players (over 5 million, with 1.3 billion hours logged), and more viewers, this MOBA (multiplayer online battle arena) has become the largest game in all of eSports. Currently, the world championship for League is occurring, and so far the teams from many different places (America, Europe, China, Korea, and even the Philippines) are more evenly matched than first thought. Indeed, this tournament led the United States to recognize League of Legends as a sport so that international teams could get their visas to attend the tournament in Los Angeles for the weeks it takes place.

               Overall, the history of eSports is fascinating to long-time gamers like myself, and the influence of eSports is growing daily. I would not be surprised to start seeing a game of Starcraft or League of Legends next to a football game at a sports bar, or for private parties on weekends where everyone crowds around to watch their favorite players duke it out in a game of League as opposed to a game of hockey. There are already a few “barcades” in America where drinking and watching eSports occurs, and I feel like this will only become more normal in the future.

Facebook and my 900 'Friends'

Facebook, the juggernaut of social networking sites, is connecting people in ways even Mark Zuckerberg could never have imagined.  With more than 600 million active users (roughly 8.4% of the population of the world), an average user "friend" total of 130, and a total of 700 billion minutes per month spent on the site, Facebook has certainly made its mark on the computing and communications world.  And why shouldn't it have? Over the years, and from its humble beginnings, Facebook has evolved from a simple 'get to know you,' Harvard-exclusive networking website to a worldwide social networking albatross, capable of connecting long-lost friends, archiving messages, storing countless albums of photos, sharing every type of media, expressing opinions, news, and ideas, and now even recognizing faces.  Because of its massive reach, and seemingly infinite repertoire of functions, I would be wasting everyone's time by explaining the site's capabilities any further.  Instead, I want to focus on what I believe are the site's many drawbacks.


Throughout my experience with Facebook (c. 2008 - March 2013), a gradual amassment of 'red flags' eventually led me to distrust many aspects of the site, despite its incredible convenience.  A primary concern, and one of increasing national concern following the recent NSA document leaks, was the lack of privacy that the website provided.  The social profiles, which include the user's name, birthday, 'friends,' 'liked' pages, pictures, statuses, etc., expose a great amount of information in a place where access to that information is simple to achieve.  Advertising companies feed off of this surface info, which is why users will often notice in-site ads tailored to their personality, interests, and age.  For me, this raised questions about which personal information is shared, and which personal information is forked over to third-party companies.  If the user-customized advertising wasn't alarming enough, the Facebook privacy policy once stated, "We may use the information about you that we collect from other sources, including but not limited to newspapers and internet sources such as blogs, instant messaging services, and other users of Facebook, to supplement your profile," meaning that an individual doesn't even have to be on the site for Facebook to log personal information.


Another qualm with the social networking behemoth is its use of the term 'friends.'  In the minds of many active internet citizens, Facebook has redefined what a 'friend' really is.  Before I traversed what seemed to be King Minos's Labyrinth in order to escape the vice grip of Facebook (a topic I will cover in the following paragraphs), I had amassed a total of 900 'friends,' many of whom attended my high school, many who had not, and many whom I had never even met face to face in my life.  The fact that 900 people, a majority of whom I had never shared a handshake with, had access to my thoughts, opinions, pictures, interactions, and social timeline was a little unsettling as I grew out of my childhood naivety.  Although I never feared that these individuals would collect my information and use it in a way that harmed my well-being, the fact that they could helped convince me to give up my 900 virtual friends.  Facebook makes it possible to achieve the feeling of knowing someone without actually 'knowing' them.  The added 'timeline' feature made searching an individual's past pictures, comments, and statuses as easy as a click of a mouse.  Lives became virtual archives, indelibly organized on the site's massive memory storage.  What's more concerning than having 900 virtual 'friends' is the inability to remove these friends with ease.  Trust me, I tried. Two laborious hours later, I had whittled the list down to 800.  In an ideal world for Facebook, everyone on the planet would be friends with everyone, or at least that's how it appears.  The bigger the social network, the larger the margin for profit.

The biggest red flag over my many years of Facebooking would have to be the difficulty of removing content from the site.  Facebook, through an arduous and guilt-ridden process, allows its users to deactivate their accounts but not actually remove account content from its servers.  The site asked if I was sure I wanted to deactivate my account, and even went so far as to say <paste friend name here> would miss me if I left.  Any slip-up or accidental sign-in to the website would immediately reactivate the account.  And this was only to deactivate the account.  To 'permanently' remove my profile I had to fill out a form with the reasons for my departure, which was then reviewed by Facebook.  If a user wants to remove all personal information from the site's servers, they must individually delete each piece of information, a process that could take some individuals more than a lifetime.  A New York Times article discussed how emails and private user data remain on Facebook's servers indefinitely.

[Image: Mark Zuckerberg 1984 Berlin graffiti]

These qualities, over time, convinced me to let go of Facebook's convenient capabilities and 'delete' my account, and so far the results have been rather satisfying.  I no longer use Facebook as an excuse to take a break from schoolwork, or as a major means of staying in touch with friends.  The transition was difficult at first (addiction?), but over time I have not regretted leaving the site, especially after learning that Facebook now uses face recognition software to tag pictures.  It only makes me wonder what could be next.

-Andrew M.


Music and Technology



            Music affects almost everyone’s lives nowadays. From creating the perfect workout or study playlist, to attending a concert by your favorite band, music inspires and motivates us, and can offer great stress relief. With music having such a universal impact, it is worth taking a look at how technology changes the quality of that music and the genres that we listen to.
            While still attempting to remain contemporary, we could start this discussion as far back as the proliferation of electric guitars and basses and how they changed the musical landscape, but I will focus on the more recent deluge of synthesizers, keyboards, and turntables that brought about what we might call “electronic” music. This revolution in sound reached its stride in the 80’s, with synthesizers becoming nearly as widespread as guitars themselves.  The carefree and danceable sounds created with these instruments lent themselves to, well, dance music. This music’s popularity reinforced the idea amongst its listeners that everything was going just great, and there was no need to worry. This idea works well for the purpose it was intended for, to energize the dance floor, but it could be argued that it is not as effective a philosophy when it seeps its way into mainstream culture as the music did. Some bands did adopt the new technology and use it to deliver a meaningful message, but most did not see mainstream success. To be fair, the most successful electronic (and I would argue, but not here, the greatest) band of all time, Depeche Mode, are amongst those that adapted the music in such a way, but their success was not typical. So in general, the 80’s proliferation of electronic music brought us a turn away from the self-searching lyrics of rock and towards a more lackadaisical approach to lyricism.
            New technology took a break from having much of an influence on new genres in the 90’s, but sprouted up again in the 2000’s in a big way. Software for audio creation allowed for fully-featured music to be created with a minimum of hardware, thus making it easier than ever for anyone to start making music. This DIY approach led to an onslaught of DJs, creating their own electronic music and remixing others’. Emphasis was placed on experiencing the music in a live setting, for the purposes of dance and the communal experience. Music in the mainstream became less about the artist, and more of a social lubricant, an excuse to go out, dance, and relieve some stress. Lyrics hardly mattered, but if present, would certainly not present any ideas that might challenge the listener or give reason to think.
            Technology, then, has had a powerful effect on how society thinks about music, and music has a powerful influence on how people think about life. New technology led to music becoming more of a tool for stress relief and entertainment, and less about finding meaning and inspiration in lyrics. Past this point, we could get into arguments over whether this change is for the better or the worse, or simply neutral, but that is a matter of opinion and I don't wish to get into it in this blog any more than my word choices already have. All observations in this blog come from examining mainstream music culture, as it is by definition what most people are listening to and thus being influenced by. In any time period there is, of course, an almost endless array of styles of music for a variety of purposes, but I have attempted to examine only the music that shapes the majority of culture in any given time period, and how advances in technology have been used to change it.

Would Steve Jobs be Proud Now?


Steve Jobs was the entire brain and power of Apple Inc., and it is sad to know that the company will never be the same without him. If he were still alive today, where do you think the company would stand? I have looked up to Steve Jobs: his method of running a company, and his way of directly telling people that he thinks their ideas are complete trash. Having thoroughly and enjoyably read Walter Isaacson's biography of Steve, I have become a follower, and I am incredibly proud of everything he did for the company and this technological generation. I honestly believe that if there had been no 'Steve Jobs' in the world, our state of technology would be extremely far behind. Think about it... Just look at the phone in your hands; it doesn't matter if you have an iPhone, Android, HTC, Microsoft, or any other kind, they all have at least a couple of things in common, because all of these phones were made by trying to beat the standard set by the last phone. It is all about competition and 'stealing' each other's ideas. When the iPhone first came out in 2007, Apple set the bar so high that all of the other companies had to climb mountains to reach the stage that Steve was at. It is incredible to think about how such brilliance can come from a company like this.

Think about every single year from 2007 up until when Steve Jobs was no longer well and healthy: every time a new iPhone came out, a new operating system, or a new MacBook, there was always something that struck the public so hard that we were all awed by the new features available to us in the palms of our hands. I remember watching the videos with Jony Ive, and the way he would describe the brilliance yet simplicity of each new feature of an Apple product with pure love and passion.

Now that Steve Jobs is unfortunately no longer with us, I have this feeling that the company has not done anything completely remarkable since. I am one of those people who think that Steve Jobs was the heart and soul of the company, and for the past two years I have not said "WOW!" to any new Apple product. But this is just my point of view. There are many people today who are raving about the new iPhone 5s and 5c that came out a couple of days ago, or even about the new iOS 7. (Personally, I'm still loving my personalized jailbroken version of iOS 5.) Steve Jobs was a very strict leader, and if something didn't suit him, there was no way it would make it onto an Apple product. He insisted on perfection, which ended up working extremely well in the end.

This is an interesting article about Tim Cook, currently the 'boss' of Apple, and how he is trying to live up to Jobs' legacy. Unfortunately, that is extremely hard to do.

            We all know that Steve left Apple a pipeline of innovative products for several upcoming years, and I am extremely glad to hear Walter Isaacson saying, "We are not seeing them yet." I don't believe that the new 5s and 5c are what Jobs had in mind when he wanted to leave a legacy. I will still be waiting for the day that I can once again be amazed by an Apple product (which I hope will come soon).

(Oh, and also... Steve had a great sense of style throughout the years!)




Tell me What to Do

                Turning to computers for quick answers is nothing new. Routinely, we turn to calculators and Google for answers. This is because computers can process data faster than our brains can and do not run the risk of messing up complex calculations. Using computers, we can also quickly compare data from many different sources. This is exactly why cancer centers have turned to Watson for assistance. Cancer centers including the Maine Center for Cancer Medicine and the Memorial Sloan-Kettering Cancer Center in New York City have turned to IBM's Watson for assistance in diagnosing and treating cancer patients.
                Dr. Mark Kris of the Memorial Sloan-Kettering Cancer Center states that once a cancer is identified, the patient is prescribed two drugs out of 16 possible drugs to help combat it. However, this results in over 200 possible combinations of drugs that can be prescribed to the patient. Finding the right combination for a specific patient can take some time. Using Watson, the cancer centers hope to decrease the time it takes to select the two drugs as well as increase the effectiveness of the combination for that particular patient.
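As a side note, the arithmetic behind that figure is easy to sanity-check with a few lines of code. The drug names below are placeholders of my own, not real treatment data: choosing 2 of 16 drugs gives 120 unordered pairs, while the quoted "over 200" matches the 240 ordered pairs you get if it matters which drug is primary. This is just a counting sketch, not anything from the cancer centers themselves.

```python
from itertools import combinations, permutations

# Hypothetical placeholder names for the 16 candidate drugs
drugs = [f"drug_{i}" for i in range(1, 17)]

# Unordered pairs: any 2 of the 16 drugs, order irrelevant
unordered_pairs = list(combinations(drugs, 2))
print(len(unordered_pairs))  # 120

# Ordered pairs: distinguishing, say, a primary and a secondary drug
ordered_pairs = list(permutations(drugs, 2))
print(len(ordered_pairs))  # 240, consistent with "over 200"
```

Either way, the search space is large enough that checking every combination by hand for every patient is impractical, which is exactly the kind of comparison task Watson is being asked to speed up.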
                Turning to supercomputers and cloud computing for complex problems is nothing new. However, it seems that turning to these types of computers is becoming the norm more and more. Already people turn to Google's search engine for answers to questions. Want to know the answer to that complex math problem? Google it. Want to know why black holes form? Google it. Similarly, Siri from Apple and soon IBM's Ask Watson also answer our questions. No longer do we go to the library and look up these answers in books, or take out a calculator and solve the equation ourselves. Instead we let the computers do the thinking for us.
                While I hope that the drug combinations Watson provides are cross-checked to verify that no mistakes were made, how long will it take for this cross-check to no longer be done? Sites like Google and Wolfram Alpha provide a way for complex mathematical equations to be solved. However, many people do not even think to check the computer, just assuming that it did not make a mistake. The sad truth is that some people already trust everything they see on the internet. If they were to Google colored elephants and find a Photoshopped picture of a neon green elephant, they would take it as undeniable proof that green elephants exist. While luckily those people are currently few and far between, slowly but surely we are turning more of our thinking over to computers and accepting the answers they give us.

                Some people may say that we are not giving our power of thought away to computers; that they just provide answers, and it is up to us to analyze and synthesize the results. For now they are correct. However, as mechanisms like Watson become more and more advanced, they will also become capable of the synthesis and analysis of results and data. And it will only be a matter of time before we stop performing cross-checks on the results we are given.

Damage control is in full swing.

Excuse my brash ranting. It seemed...fitting.

The current administration is playing damage control, hard. As more NSA security leaks come out, people in the tech community continue to be outraged. The problem is that standard citizens are not nearly as motivated to do anything, and as such, Obama is clearly focusing efforts on placating the right people.

The first sweeping move meant to put water on the flame is his new 'task force' idea. While he gives no credit to Snowden for this attempt at transparency in the NSA, he plans to create an independent group of expert citizens who would review NSA policies in the future. I have to say, this crap is admirable at best: who says they won't create this force and just tell them to shut up and take their paycheck all the same? This won't really change anything.

Other than that, the other proposed methods of transparency can be just as easily filtered. They plan to put in place a new website outlining NSA actions and plans, and a new position simply to keep watch over privacy and constitutional concerns. They're both laughable, as they are internally controlled. If it took this long for someone directly involved to speak up, who says these things will actually get a hold of any information that would be needed to help the problem? They also plan to reform a couple pieces of the Patriot Act and FISA, which are what currently allow the NSA's actions to be considered 'legal'. If they really wanted to change things, they would need to remove the whole damn thing. The Patriot Act is full of holes and broad-sweeping statements that I GUARANTEE will come back up as a problem in only a few years, tops.

The second thing I took notice of this week is a little less obvious, but has to be influenced by the current mess the administration is dealing with. Obama has been urging the FCC to require that mobile devices be permanently unlocked. This means that a phone would not be carrier-specific and could be switched between carriers freely. This would be a wonderful thing for the tech community and the standard consumer. Being able to take any phone to any carrier would drive competition back into the stagnant and terribly overpriced mobile platforms. We currently pay so much for mobile service that companies are boning us with huge markups that are hardly reasonable.

The idea of unlocking phones as a standard has been under debate for a while, but in recent years it quieted down in favor of more pressing issues. The practice was made illegal early this year after the exemption to the Digital Millennium Copyright Act that covered it expired. The fact that a major tech-heavy issue has come back up and been announced as supported by the administration, in favor of the general public? There's no way it's coincidental; they're simply trying to redirect focus from the NSA screw-ups. Good job, guys. At least you didn't attempt to wave 9/11, abortion, and gay rights in our faces - you needed some new tricks for those tech guys who didn't care about those.

It's terrible, but the fact that cell phones are how they're trying to distract people shows that the majority of the people with issues are tech-oriented. Non-tech-savvy people just don't grasp everything happening, and most probably don't know what unlocking a phone means.

See, in truth, I can't complain about the cell phone thing. I know there's not actually going to be a lot of change with the NSA's bullshit regardless of how much I or the public want it - they'll find other ways to weasel these programs through. So, if something good comes out of them trying to win back public favor, screw it, right? It seems like that's the only way progress actually happens in this country anymore: someone needs public support, so they give us something nice to contrast with pissing us off, to lighten the blow. Politics are a mess here, and they drive me to consider leaving the country more every day. Here's hoping something happens to reform our outdated systems before I get there.


http://www.huffingtonpost.com/2013/08/09/obama-surveillance-reform_n_3733090.html
http://www.washingtonpost.com/business/technology/obama-administration-urges-fcc-to-require-carriers-to-unlock-mobile-devices/2013/09/17/17b4917e-1fd4-11e3-b7d1-7153ad47b549_story.html

NASA is Enemy to Freedom! huehuehue

The topic of the uneducated public is one that constantly weighs, or at least should weigh, on the minds of the relatively educated. Now, more than ever, in the midst of the recent NSA-related scandals, that rift has been highlighted. However, in a surprising turn of events, the uneducated have shown that what they lack in subject-matter knowledge, they make up for in destructive capability. Recently, the Brazilian hacktivist group "BMPoc" staged a protest against the NSA's ethically questionable national security programs. Their fatal flaw was the inclusion of an unnecessary letter. Thus, BMPoc ended up vandalizing the site of our own poor, little NASA with messages of discontent regarding the visible actions of "the Illuminati." Unlike BMPoc, I did not mix up any acronyms; you read that correctly. The National Aeronautics and Space Administration website was the target of this internet vandalism, and, yes, these vandals included a reference to that popular crackpot obsession, the "Illuminati." So, what does this mean for us as American citizens?
Nothing. It means absolutely nothing. If anything, it just brings to light the fact that there are so many idiots in the world who know almost nothing about current events and their political influence. These guys seem to think that not only is NASA responsible for this spying, but that the spying program is also responsible for America's involvement in the conflict in Syria. It's the ignorant masses behind such techno-vandalist movements who are responsible for spreading and perpetuating the misinformation that forms the blanket of obscurity the NSA so dearly requires for its shady dealings. It's not much of a stretch to say these vandals are actually doing the NSA a favor with their poorly planned actions. It's also not unfair to assume that the general American public will adopt something like this as their mindset if this story becomes mainstream news: "Oh, those darn Brazilians don't know anything about us Americans. Can't they see we're trying to bring peace to the Middle East? And what's this about NASA invading our freedom? That can't be right. They were just saying the other day that the NSA was invading our freedom. I don't think I believe any of this hokey-pokey spying baloney." That's when the NSA finally gets the heat off its back and goes back to perfecting PRISM by potentially uniting it with BULLRUN, thus "liberating" our information. Keep in mind the historical tendency of the United States to force democracy on foreign nations in an almost methodical fashion. First, the foreign power establishes itself as a player on the world map. Next, US intelligence is sent in to gather information and disrupt the peace. Then, the situation is declared a conflict and ground troops are deployed. The internet is just the newest frontier of forced American liberation, following right in the footsteps of multiple Middle Eastern nations, and, personally, I don't want to see the internet equivalent of ground troops.
However, the general public will completely forget about everything that's come to light recently in favor of more sensationalist media. Americans are the purest entertainment addicts, and thus the only way to really grab their attention and actually get something done is to organize an entertaining, but not too time-consuming, protest. Seeing that this is the only way to get a sizable number of Americans to exercise their right to peaceful protest, and that the success rate of such a protest is through the floor, I'm beginning to consider expatriating from our lovely country. God Bless America.

Source: http://www.techdirt.com/articles/20130918/07151324566/angered-nsas-actions-brazilian-hacker-defaces-nasa-websites.shtml

Digital Media Piracy – Reasons for plundering in the modern age.

Piracy of digital media is one of the largest, most multifaceted issues of our current internet age. The rise of early peer-to-peer file sharing, and the current prominence of BitTorrent, are defining aspects of the way many people consume their multimedia. Music sales have been in decline for years, and movies are less attended (on average) than ever. Much of the blame for these declines in revenue is placed on people illegally downloading and sharing copies of digital files. For this blog post, I'll be looking at a few of the reasons that (I believe) people are enticed to pirate, with what I perceive as solutions in a separate post.

Why do people pirate things?

It comes down to a variety of issues including (but not limited to) price, convenience, and availability.

Multimedia (video games, TV shows, movies, and music) usually costs money to experience. Most big-budget, triple-A games cost around $60 these days, and $60 is a big chunk of money for a lot of people. Many people will decide that a game is worth the money and shell out for it, but some people just aren't in a position to do so. Prior to the launch of the current-generation consoles (Xbox 360, PS3, and Wii), the price of most games was $50. Not a huge jump, but large enough to make game purchases a more serious financial decision. Movie tickets have also been getting more expensive over the years. Television shows are divided among network broadcasts, cable stations, and premium channels. Network TV channels (such as CBS, NBC, ABC, etc.) put their shows out via aerial broadcasts that anyone with compatible hardware can tune into for free. Many families pay for standard cable access, which gives them access to shows on other networks that do not broadcast over the air. The most costly option, though, is the premium channels (HBO, Showtime). Much like expensive video games, the premium channels are a serious financial consideration for many people. Music, on average the least expensive of the considered formats, hasn't risen all that much in price in recent years. Piracy allows people to experience multimedia that they otherwise couldn't afford.

Getting media to consume is another aspect where piracy can win out quite easily. Television and movies commonly run for lengths of time that have to be planned into a person's daily routine. TV shows airing in their broadcast slot don't always line up with when somebody can watch them. There may even be two shows that somebody wants to see airing at the same time. Movies on television fall into a similar problem, and movies in a theater might not be showing at a time when it's convenient to go see them. Piracy answers these problems very well. A user can download a file to their computer and watch it whenever they want. They can pause it and come back later, and they can watch it as many times as they want for as long as they keep the file (and if they lose it, they can just download it again). For music, movies, and TV shows, piracy also offers the choice of format. Just a click away is the desired content in an array of formats and qualities, often far superior to what is commercially offered.

Lastly, sometimes something just isn't available for you to buy. Movies come out in theaters for a while, disappear for a few months, and then become available to purchase legally. TV shows air, might come back in reruns later, and eventually come out for purchase. How can you see them in these interim periods, though? That's where the pirates have the answer. Piracy does vary vastly in quality for movies and TV shows. People are able to capture TV shows and maintain the quality they were broadcast in, but movie piracy (prior to a retail version becoming available) usually amounts to a person sitting in a theater and recording the movie screen.
Other issues arise with availability in different regions of the world. Not all TV shows are available in all countries. Many movies never make it to some places, either in theaters or for purchase later. Similar issues arise for music. Piracy allows people to acquire media that they cannot legally purchase because either their nation won't allow it or the company won't offer it to them.

Video games suffer from even more issues. They have the same problem of not being available for purchase in some places, but also have to deal with the occasional odd restriction. In 2012, Borderlands 2 (the sequel to 2009's Borderlands) was released in two different versions: one for most of the world, and a separate version for Russia and nearby countries. The two versions could not be played cooperatively with each other (cooperative play being a selling point of the franchise), and the Russian version contained only Russian audio. Some games have also been censored in some countries. Valve's Left 4 Dead 2 had most of its gore either toned down drastically or eliminated completely. Volition's Saints Row IV was refused classification in Australia due to some of its content and had to be resubmitted twice (with various changes). Pirated versions of these games, with none of these issues, could easily be acquired.
On another hand entirely is the concept of Digital Rights Management (DRM). DRM is quite often the bane of PC gaming. Companies employ DRM to keep games from being distributed illegally, but the DRM software often impedes the game from being played at all. For example, Ubisoft's Assassin's Creed II had a very strict DRM scheme that would halt the game if it were disconnected from the internet. Pirated versions of games have their DRM stripped out, so the games no longer care whether they can connect to the internet.
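The perverse incentive here is easy to see in a minimal sketch. This is not Ubisoft's actual implementation, just a hypothetical model of how an always-online check gates play on connectivity rather than on ownership:

```python
def can_play(drm_enabled: bool, online: bool) -> bool:
    """Model of an always-online DRM gate.

    drm_enabled: True for a legitimate copy with the check intact,
                 False for a cracked copy with the check stripped out.
    online:      whether the machine currently has internet access.
    """
    if drm_enabled:
        # Legitimate copy: no connection, no game, even though nothing
        # about the gameplay itself needs the network.
        return online
    # Cracked copy: the check is gone, so it always plays.
    return True

# A paying customer on a flaky connection is locked out...
assert can_play(drm_enabled=True, online=False) is False
# ...while the pirated, DRM-stripped copy plays regardless.
assert can_play(drm_enabled=False, online=False) is True
```

In other words, the legitimate product ends up strictly less reliable than the pirated one, which is exactly backwards as a sales pitch.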

Media companies have some serious hurdles ahead of them if they want to compete with piracy. They can compete, and they can win, but that's for another post.

iOS7, Windows 8, and Skeuomorphism


 
This past Wednesday, I did not waste any time in downloading iOS7, Apple's new mobile operating system, to my iPad and iPhone. Part of the reason is that I like to always have my devices running the latest software; I just like to keep everything current. The other part was everything that I had been reading in the news about iOS7. The expanded capabilities of Siri, the new Control Center, and the ability to open Safari and see what web pages I had open on my other devices were all tempting, and I was excited when I was finally able to try out these new features for myself. The biggest change of all, of course, was the radically redesigned user interface, which marked a strong shift toward "flat design" and a strong shift away from "skeuomorphic design."
In an article for TIME Magazine this past June, Lev Grossman defines a skeuomorph as "an element in an object's design that's no longer functionally necessary but has been retained anyway for ornamental purposes." Grossman cites the design of leather thongs as an example of skeuomorphism. In ancient times, thongs were used for holding together tools such as stone axes, but thongs were eventually rendered obsolete. However, it remained quite common to see objects with twisted-leather thong patterns etched into them just for show.
Although, as Grossman points out, we do not see thongs much anymore, skeuomorphism is still quite prevalent, particularly in software design. Digital objects do not have the same constraints as objects in real life, but they are often made to look like real objects anyway, particularly to help users understand what purpose the digital object is meant to serve. On the Windows operating system, the recycle bin icon looks just like a recycle bin, and on Mac OS, the trash icon looks just like a trash bin. There is no particular reason these icons need to look like bins; they could look like ordinary folders and they would still work the same way. There is a legend that Steve Jobs's private jet inspired the leather-stitching pattern in Mac OS X's iCal calendar application, but who really cares about a design that is glossy and fancy? All that really matters is that the application works and does everything it is advertised to do. Software designers might think that making applications look like real-world objects helps the consumer, but in reality it often does not. Employing skeuomorphic design can even create false expectations; Grossman points out that a digital book may look like a book, but you cannot feel the pages, turn multiple pages at a time, fold corners down, or scribble in the margins. Users do not care much for flashy design, but they do care about honesty, and sometimes a flat design is all you need to be honest.
One of the most striking examples today of an anti-skeuomorphic OS is Windows 8, which dispenses with the traditional desktop and instead is designed around brightly colored tiles (although you can still access a traditional Windows desktop by clicking on a particular tile). The heavy criticism that Windows 8 has generated highlights one of the dangers of eliminating skeuomorphism: you should not eliminate it so thoroughly that user-friendliness is compromised. With Apple's iOS7, skeuomorphism has been significantly reduced, but users familiar with iOS will still know how to navigate the operating system and how to launch into and exit from apps. Apple has not completely removed skeuomorphism in iOS7, and that might just be alright. As Grossman points out, "It's possible for software to be too flat. Skeuomorphism isn't inherently bad when used responsibly. There's nothing wrong with being user-friendly."
                Since becoming an Apple mobile device user, I have experienced iOS4, iOS5, iOS6, and now iOS7. I enjoyed the flashy design of the native apps in the first three, but after installing iOS7 and reading Grossman’s article I realize that I did not need such flashy designs. All I really need is a design that is user-friendly and honest. Even though iOS7 has largely done away with skeuomorphism, it is still honest and user-friendly, and that is just fine with me.
 
 
Note: The article by Lev Grossman that I referred to in this blog post can be found here: http://content.time.com/time/subscriber/article/0,33009,2144110,00.html. A TIME subscription may be needed in order to view the article in full.

California School District Monitoring Students' Social Media Accounts

A school district in Glendale, California has hired a firm to monitor social media posts on websites including Facebook and Twitter for one year.  The goal of this program is to identify potentially suicidal students, cyber bullying, drug use, or truancy.   The school would then be able to seek out the students in question and hopefully resolve their issues.  Although all students fall under the scope of this program, the firm can only look at social media posts that are public – private posts or pages are off-limits.
Although this might seem like a good program, the consequences of student surveillance of this kind are severe.  When I first saw this article, I was immediately reminded of a news story from a few years ago involving a school in the Philadelphia area that took webcam photos of students with their school-issued laptops, without their knowledge or permission.  Blake Robbins, a student at this school, was sitting in his room eating "Mike and Ike" candies in front of his laptop; a picture of this was taken, without his knowledge, via the webcam on his laptop.  He was later reprimanded by the vice-principal, who saw the picture and mistook the candies for drugs.  Robbins ended up suing the school, and it was later brought to light that the school had been collecting hundreds upon hundreds of webcam photos of students without their knowledge.  The school ended up paying $610,000 to settle two lawsuits related to the incident.  This school had no concept of boundaries whatsoever, and the results were horrifying: spying on students in their own homes, Big Brother style.
Although the California school district is not spying on students' webcams, it is definitely pushing boundaries.  Just as in the case of Robbins' school, there could be false positives where the school interprets an innocent picture or post out of context.  What if this California school reprimanded a student for posting a picture of eating Mike and Ikes, because they mistook the candies for pills?  Perhaps that student would get suspended, or otherwise punished, on that basis - we have no idea, and that is exactly the problem.  The school is asserting itself as an authority in this matter, making itself the judge of whatever students post.  If the school can reprimand students for things that happen outside of school, especially things that can easily be taken out of context like a social media post, that is a clear violation of boundaries.
The school should worry about affairs that happen inside of the school - words and pictures that are exchanged on school property are fair game for the school's jurisdiction.  However, things like social media that students use outside of school should not concern the school at all.  Parents should be responsible for making sure their child is using social media responsibly, not the school.
I was disturbed when I watched the video that went along with the CNN article - some of the people in that video seemed to think that the school should indeed be monitoring its students, and one person even went so far as to suggest that parents should consider making their children's social media accounts public so that they would be fair game for the school to monitor.  This logic is completely backwards.  There is no way a parent should want their child's social media accounts to be public for the world to see.  Even if the parent thought it was a good idea for the school to be monitoring their child, making the account public would open it up for any stranger to see and interpret what the child is doing.  I certainly would not want strangers knowing what my child is up to, especially since some could be watching with malicious intentions, using the account to determine what times of day that child is away from safety.  A public account also means that everything the child posts can be archived by anyone and remain on the internet forever, which is extremely ill-advised: there are plenty of things children say in middle and high school that seem like a good idea at the time, but that they may regret for a long time to come.

I think, in the end, parents should accept responsibility for their children's well-being outside of school, NOT the school itself.  Having the school take on this responsibility opens up too many negative possibilities, including students being reprimanded for posts taken out of context.  In the Blake Robbins case, it was clear that the school may have had good intentions, but it was so reckless with regard to privacy that it was downright sickening.  Ultimately, parents need to accept responsibility for their children outside of school, and the school needs to limit its authority to what goes on inside of school, or else we will continue to have cases like Blake Robbins', where schools immensely overstep their boundaries, all with good intentions, but with extremely negative consequences.

How Far Has AI Come?

Artificial intelligence continues to grow. An interesting article from The Guardian discussed artificial intelligence and how it is still growing today. Most of us have heard of Alan Turing, I'm sure; computer science majors certainly should have. The Guardian's article, entitled "To Turing and beyond: the future of artificial intelligence", mentions how the consumer-brand relationship can grow thanks to these advances. It starts with a quote:
“In 1950, computer science pioneer Alan Turing famously predicted that “… in about 50 years’ time, it will be possible to programme computers…to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification (between computer and human) after five minutes of questioning.”
It then claims that Turing was spot on. We can look at a few consumer brands with artificially intelligent devices and see how attractive they can be. One example would be the iPhone's Siri; it is possible to have a conversation with Siri, if you so choose. Another example of interacting with computerized voices is when someone calls a store or restaurant and reaches not a human but a machine. The machine will attempt to understand what you are saying and relay it in a way that those at the store can act on. Oftentimes, however, the machine does not really understand what you are saying to it.
It is great to have these machines, but their impersonal voices can be very off-putting as well. They often sound the same; there is no distinguishing the voice at your local pizza place from the one at the airport you are trying to reach. The personal touch between humans is fading away behind these machines. Some do not mind that at all, but it can be something that those who are more "old-school" cannot get behind and support.
There is also the fact that these machines are not nearly as intelligent as humans. They can frustrate a person with an impatient personality; I have seen it with my own eyes. People get visibly annoyed because the machine simply does not understand them the way a human would. But not all is lost with these intelligent machines.
Humans have a desire for speed. They want things done quickly, and there is a huge positive in being able to do things on your own. Consider the supermarkets of today: most of them have self-checkout machines. All shoppers have to do is scan their items, let the machine tell them how much to pay, bag their groceries, and be on their way. This has become popular because it is simply convenient. Sometimes the cashiers do not move as fast as the consumers would on their own, so the consumers check themselves out. There are occasional errors that require someone to come fix the machine, but that does not happen often.
In reality, artificial intelligence has come far. However, it has not come as far as Alan Turing predicted, I would say. Artificial intelligence is getting there, for sure, but I think most interrogators can still tell an intelligent human from an intelligent machine with a success rate far greater than 70%. Can artificial intelligence become more life-changing in the future? Can it replace human interaction to a far greater extent? I think it will eventually succeed at both, but I do not think it will be anytime soon.
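Turing's benchmark from the quote above is easy to state as a simple check. This is just a sketch of his criterion as I read it (the function name and numbers here are mine, not Turing's): a machine "passes" if the average interrogator identifies it correctly no more than 70% of the time after five minutes of questioning.

```python
def passes_turing_criterion(correct_ids: int, trials: int,
                            threshold: float = 0.70) -> bool:
    """Turing's 1950 benchmark: the machine wins the imitation game
    if interrogators correctly identify it at most 70% of the time."""
    return correct_ids / trials <= threshold

# If interrogators guess right only 65 times out of 100, the machine passes...
assert passes_turing_criterion(65, 100) is True
# ...but at 90 out of 100, closer to my estimate above, it fails.
assert passes_turing_criterion(90, 100) is False
```

Note that even random guessing by interrogators gives a 50% identification rate, so the 70% bar is really asking the machine to be only modestly confusable with a human, and we still are not there.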

http://www.theguardian.com/media-network/media-network-blog/2013/sep/13/turing-artificial-intelligence-brands-consumers

Apple's Giant Bug

Apple, the massive technology giant, once again "shocked" the world with its new iPhone 5S and 5C.  Announced in mid-September, the company showed off its shiny "new" pieces of technology, starting with the iPhone 5C.  Targeted at emerging markets, the 5C is essentially the original iPhone 5 wrapped in a plastic back.  The least expensive version starts at about $99 and comes in five different colors.  Apple continued the event with the unveiling of the iPhone 5S, which is a 5 on steroids.  They quickly began touting it as the first 64-bit phone, that is, the first phone ever to use a computing architecture of this type.  They continued the press conference by displaying the phone's new camera and new fingerprint scanner, and finally explained that it would all run on their new operating system, iOS 7.  While many people were dazzled by the new phones and OS, an equal number were indifferent.

Earlier this week, Apple released the final version of its remodeled operating system.  As I said earlier, it was met with mixed reviews.  Some of the new features, such as the quick settings and the swipe-anywhere-to-search ability, felt like a game of "catch-up" with Google and its Android operating system.  Regardless, the update was refreshing and much needed.  The new, flat look of all the icons and displays was a radical step into a new era of minimalist design.  Yet no matter how beautiful iOS gets, there is always an underpinning of the same thing: a paneled design with not much customization.  Worse, Apple left a gaping hole in the OS, one that is proving catastrophic for security.

As Forbes reported on September 19th, Jose Rodriguez discovered an exploit in the way the iPhone locks and operates.  Using the new quick settings in the "control center", you can get right past someone's lock screen, even without their fingerprint on the new 5S.  The steps below, quoted from Forbes, demonstrate the exploit:

As the video shows, anyone can exploit the bug by swiping up on the lockscreen to access the phone’s “control center,” and then opening the alarm clock. Holding the phone’s sleep button brings up the option to power it off with a swipe. Instead, the intruder can tap “cancel” and double click the home button to enter the phone’s multitasking screen. That offers access to its camera and stored photos, along with the ability to share those photos from the user’s accounts, essentially allowing anyone who grabs the phone to hijack the user’s email, Twitter, Facebook or Flickr account.


Although it seems like a stroke of luck to have discovered this, or just brute-force testing that Apple overlooked, Apple is aware of the issue and should be releasing a patch soon.  It is simply a shocking development that Apple, a huge company that should check all security measures, overlooked such a simple thing.  This exploit is public knowledge now, and while most people couldn't care less about your information or your pictures, there is always that one person who could be out to get you.  Be wary, iPhone users running iOS 7: don't leave your phone out in the open.

Thursday, September 19, 2013

Flawed Crypto in Smart Cards



Just when we thought the NSA/NIST crypto revelations could not get any worse, there are now reports of smart cards in Taiwan having bad random number generators. This makes it relatively easy for hackers to guess what encryption keys are being generated and to tap into what was previously thought to be secure communications. These reports are coming out despite the fact that the smart cards held two international certifications. With that in mind, you may be wondering what the NSA and NIST have to do with all of this. Apparently, NIST and its counterparts worldwide manage the certifications in question. All required tests supposedly passed, yet a smart card maker now apparently holds worthless certifications. Ars Technica is reporting that as many as 10,000 people could be affected, and people are starting to question whether or not this is further evidence of the NSA having weakened encryption standards.
For people who are unaware of what a smart card is, here is a basic rundown. A smart card is a pocket-sized piece of electronics with integrated circuits, most commonly used for authentication. In this case, it is a card used to identify citizens and handle tasks like filing taxes and registering cars. Rather than having to remember a password or risk losing a random file on your computer, the smart card stores generated encryption keys for you. Since the keys are stored on the card, it is harder for attackers to get ahold of them, and the card offers additional protections on top of that.
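To see why a bad random number generator ruins everything downstream, here is a minimal sketch. This is not the actual Taiwanese card firmware, just a hypothetical model of the failure mode: if the key material comes from a PRNG seeded with something low-entropy (a timestamp, say), an attacker who can narrow the seed to a small window simply tries every candidate.

```python
import hashlib
import random

def weak_keygen(seed: int) -> bytes:
    """Hypothetical flawed card: derive a 'secret' key from a PRNG
    seeded with low-entropy input such as a timestamp."""
    rng = random.Random(seed)  # deterministic given the seed
    material = rng.getrandbits(128).to_bytes(16, "big")
    return hashlib.sha256(material).digest()

# The victim's card derives its key from a seed the attacker can
# narrow down to a small window (here, 1,000 candidate values).
victim_key = weak_keygen(seed=424242)

# Trying every seed in that window recovers the key almost instantly.
recovered_seed = next(
    s for s in range(424000, 425000) if weak_keygen(s) == victim_key
)
assert recovered_seed == 424242
```

Hashing the output does nothing to help: the key is only as unpredictable as the seed, which is the whole point of requiring certified, high-entropy generators in the first place.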
Based on what was discovered, a Royal Holloway scientist told Ars Technica that there was no way the smart cards could have passed certification without this random number problem being found out. Either the tests were not run for some reason, or NIST purposefully allowed bad encryption to be used so that it could later be taken advantage of. The NSA revelations have caused many to question NIST-approved random number generator algorithms, and some news sources have hinted that our government is saying we should avoid certain algorithms altogether. The close ties between the two organizations only add more concern as time goes on and more evidence is found.
Although the NSA claims it is protecting America, these malicious acts that we keep hearing about seem to be more damaging than anything else. It is no secret that other countries spy on us, so why is the NSA making their job easier? Why is my personal information at risk solely to appease the NSA's paranoia, with no noticeable benefit for myself? I have used an encryption hard token to remotely connect to my work computer in the past. Is my work now in danger of being stolen by competitors? The only non-bad news to come out of these revelations is that these specific smart cards only seem to be in use overseas. That does not mean that other technologies I may unknowingly use are safe, though. I don't think anyone but the NSA knows just how many different technologies have been compromised.
Regardless of whether or not the NSA is involved in this specific case, there are some important lessons to learn. This further highlights the need for international review of our standards and for more scrutiny when governments suggest changes to existing cryptography. We also need cryptography implementations to be open; security through obscurity does not work and is not all that trustworthy. Hopefully news like this will get people to think twice about the technology they use and to be more careful about where they store their data.