Tuesday, October 28, 2014

Revolutionizing Air Traffic Control





Rarely does the word efficiency come to mind when talking about airports. But now there is a cutting-edge technology that could revolutionize how airports are run: remote air traffic control.
                This technology is still very limited; it's in use at only a single regional airport in the backwoods of Sweden. But the world has taken notice. Paul Jones, an operations manager at NATS, which provides air navigation services at England's Heathrow Airport (the largest airport in the country), believes it is the "next big thing for our industry." With this development, small airports like Örnsköldsvik (currently the only place where the technology exists) can save hundreds of thousands of dollars a year. The idea is that controllers from multiple airports will eventually work out of one central location, doing their jobs from a much more convenient place. This is more cost effective, takes some of the burden off the humans themselves, and is supposedly even safer.
                How it works is that a series of cameras at the airport is connected by fiber optic cable to the new control room where all the controllers are located. The feed gives the operator a 3-D view of the airport, as if they were sitting there themselves! And this exciting stuff is just the tip of the iceberg: apparently there will eventually be an augmented reality headset the controllers can don, letting them direct planes like a 3-D video game. The controllers themselves embrace this technology. Mikael Henriksson, who has been a controller for 40 years, says, "Controllers are already spending most of their time looking at a screen instead of out a window," so they might as well do it around other people instead of by themselves!
                Many of you are probably thinking, "this could easily get hijacked!" Well, you're wrong. The data transmitted between the camera tower and the remote control center is scrambled using dedicated hardware and encryption software created by Saab, the Swedish defense and aerospace company. If you're still worried about your safety…it's too late. Many large airports already have backup systems that are controlled from a separate location, so it's pretty definite that this will eventually become universal.
                I don't necessarily get why this is such a big deal, but apparently it is groundbreaking for the airport industry, so that's a good thing! Right?

http://www.nytimes.com/2014/10/28/business/international/directing-planes-by-remote-control.html?ref=technology&_r=0

First Amendment vs. Publicity Rights

In today's culture, well-known figures are depicted in video games and played by actors on shows like SNL. The First Amendment technically protects the ability to portray real-life people in fictional works, but the United States also gives individuals a legal "publicity right" to control how they are represented in the media, including video games. These conflicting rights pose an issue in some circumstances. Where does freedom of speech end and the enforcement of publicity rights begin? It is impossible for the line to be objective – it must be drawn on a case-by-case basis, similar to how Google has to decide which information requested for removal actually should or shouldn't be removed. Deciding between the legal doctrines of the First Amendment and the publicity right depends on the damage inflicted on the person involved.

In July, actress Lindsay Lohan began a legal battle with Rockstar Games, the makers of Grand Theft Auto V, because one of the game's characters bears a resemblance to her. Looking at two pictures side by side, there is some resemblance, but the developers stated that the persona and image were based on a look-alike model. In this case, Rockstar can seek a defense against publicity rights by claiming "transformative use," which means that the likeness was used to create a wholly new and original work. The issue with Lohan's case is that the game never mentions her by name or alludes to her in any direct way. However, if the judge is convinced that the character is unmistakably a depiction of the actress, the situation changes. Since the character is associated with content that turns a profit for Rockstar, freedom of speech is less of a safety net, because it extends to artistic works rather than commercial products.

If Lohan wins this case, will it set a precedent for future works of satire? Lohan's reputation is already well established. Her likeness in a popular video game will not damage or remedy it any further. The only case she can make is that she deserves a royalty for the merchandise featuring her doppelganger, if the judge is convinced Rockstar used Lohan's image intentionally.

Another case in which someone is suing a video game company focuses on Call of Duty: Black Ops II, by Activision. Set in the Cold War era, it is not surprising that important historical figures are involved and depicted. One of these characters is the former dictator of Panama, Manuel Noriega, currently in prison for crimes committed while in power. There is no attempt to disguise the fact that he is part of the game's story. This case is complicated because he is not a United States citizen. U.S. publicity rights have been extended to people in foreign nations in the past, but it is unlikely a judge will do so for a former dictator. Even if Noriega is granted U.S. publicity rights, Activision's First Amendment rights will probably overrule them. Activision is not gaining anything from using Noriega's character – it is there solely to maintain the accuracy of the time period. Noriega's reputation, like Lohan's, has already been damaged. The reasons for his imprisonment will vastly overshadow any black marks Call of Duty has put on his reputation, if any.


These cases are just two examples of the multitude of lawsuits filed against video game companies for using the images or likenesses of people who simply did not want the attention. Both Lohan and Noriega have lost nothing by being depicted in the games. The First Amendment should prevail in these two cases, unless the person involved has suffered damage to a positive public image from being featured in the video game.

How your privacy has been hijacked

                In recent months, privacy has been pushed into the mainstream media more than ever before.  Technology companies have been implementing new policies and working to ensure that all user data is protected from prying eyes.  However, not all companies have been proactive in this respect.  It was recently discovered that Verizon Wireless has been intercepting and manipulating user traffic for the purpose of serving targeted advertisements to its mobile customers.  Not only does this present a huge privacy breach to any users who are victims of it, but it also represents the company's complete lack of care for customer privacy.
                Privacy advocates noticed last week that Verizon has been inserting a string of roughly 50 characters into its customers' web requests, used to uniquely identify their traffic.  The company claims that this ID is used exclusively for targeted advertising and is not sold to other companies or organizations.  However, no information has been given to prove this, and revelations about government spying and partnerships between American telecommunications companies and government security agencies have lowered credibility for data providers across the board.  Similarly, Verizon hasn't taken any steps to protect that identifier from other companies, meaning that the websites customers visit can see it just as well, which means that, even if it's not sold by Verizon, it can easily be collected and sold by any website you visit.
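To make the mechanics concrete, here is a minimal sketch of how any site a customer visits could harvest such an injected identifier on the server side. The header name X-UIDH is the one researchers reported for Verizon's identifier; everything else here (the tiny WSGI app, the log file) is purely illustrative.

```python
# Minimal illustrative sketch: how any web server could harvest an identifier
# header injected by a carrier into unencrypted HTTP requests.
# "X-UIDH" is the header name reported for Verizon's identifier; the app and
# the log file below are hypothetical.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # WSGI exposes the request header "X-UIDH" as "HTTP_X_UIDH".
    uidh = environ.get("HTTP_X_UIDH")
    if uidh:
        # A tracking site could simply persist the identifier alongside
        # whatever else it knows about the visitor.
        with open("perma_cookie_log.txt", "a") as log:
            log.write(f"{environ.get('REMOTE_ADDR')}\t{uidh}\n")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```

Because the header is added by the carrier after the request leaves the phone, nothing the user does in the browser (clearing cookies, private mode) removes it.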
                Many people have argued that Verizon Wireless has no reason to use the unique IDs at all.  Firstly, as an ISP, the company already makes millions of dollars in subscription costs and various fees.  Not only do they sell their product, but they also sell customer data, because the profit they already make isn't enough.  Secondly, such insertions defeat any attempts the user makes to maintain private connections.  The identifiers act as "permanent cookies," meaning that any "do not track" settings the user enables can be completely nullified.  Thirdly, intercepting and modifying HTTP requests is a huge breach of trust between the service provider and the customer.  It is inherently expected that any requests made are sent as-is, with no modifications or extra information attached.  This is a founding principle of the internet, and it should be expected of all providers.
                The biggest problem with this breach of privacy is that there is virtually no way to disable it.  Verizon offers an "opt-out" for the service, but it only lasts for a short period of time and then reactivates itself.  Short of paying for a third-party VPN, there is virtually no way to avoid the token other than dropping Verizon as a provider.  This behavior is unethical, and the company should not be exploiting its customers like this.  Verizon is the biggest wireless internet provider in the country, and the fact that it can get away with such blatant privacy violations sets a dangerous precedent for other ISPs.  The company has a responsibility to its customers to keep their data secure, and the fact that it is acting in complete opposition to that means that we, as consumers, no longer have any expectation of privacy.

http://www.wired.com/2014/10/verizons-perma-cookie/

Monday, October 27, 2014

I disagree a lot with David Golumbia.

I really disagreed with Golumbia's points in the reading for this week. In the parts of the article that discuss topics I have some previous knowledge about, it seems like he did not research his points well. For example, he said the term "script kiddie" refers to young hackers, but it actually refers to people who don't know how to program themselves but run pre-written "scripts" and call it hacking. That was one solid example of something he said that wasn't well researched, but there were other iffy statements. We'll probably get into the actual reading during class, but I decided to read more blog posts by Golumbia to see if maybe I had just misunderstood his point.

I went to his blog, www.uncomputing.org, and read two articles. The first, "Opt-Out Citizenship: End-to-End Encryption and Constitutional Governance" (http://www.uncomputing.org/?p=272), was his critique of universal encryption. His overall thesis was that people do not have an absolute right to privacy, as giving some of it up is one of the sacrifices made to have a functioning justice system. He was against universal encryption because it would do things like enable child pornography or other illegal services online. I heavily disagree with this for several reasons. My most practical argument is that you can't just add a backdoor that only the government can use. It's like leaving your keys outside so the police can search your house in case you're doing drugs or something; somebody who isn't the police can use those keys to break in just as easily.
Also, cybercrimes are crimes that have to do with data. Communication could always be done in relative secrecy; before the internet, it's not like there were microphones everywhere recording everything. Criminals will find ways to communicate secretly regardless. The only thing banning encryption would do is cause innocent people to get hacked more.

Before I move on to the other article, I want to say that it feels like Golumbia is writing for people who already agree with him. He alienates someone like me by referring to "cypherpunks" hacking everything and by talking about what "we" already know without actually backing up his statements.

Anyway, the second article (http://www.uncomputing.org/?p=1575) calls for "cyber disarmament." His argument is that since encryption and advances in other branches of cybersecurity are inherently both offensive and defensive, there should be some sort of collective effort for everybody to commit to not hacking each other – a kind of nuclear-style disarmament of technologies. This is so dumb. It actually makes me question whether Golumbia knows what he's talking about.

Technology is not nuclear weapons. Anybody can hack anything with the right knowledge. They don't need a government budget. Nobody can stop people from hacking things. Advances in cybersecurity usually aren't pre-emptive. They happen because something gets broken.

Also, I think Golumbia is under the impression that cryptography is just waiting to be broken by someone with a huge budget, but there are formal proofs about why cryptography is hard to break. Golumbia talks about the encryption "arms race":

“…no matter how carefully and thoroughly you develop your own encryption scheme, the very act of doing that does not merely suggest but ensures—particularly if your new technology gets adopted—that your opponents will use every means available to defeat it, including the (often, very paradoxically if viewed from the right angle, “open source”) information you’ve provided about how your technology works.”


It doesn't follow that trying to encrypt things is bad just because it will create an "arms race." Golumbia's alternative is disarmament. That's crazy.
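To put some numbers on why brute force isn't a realistic threat to a well-designed cipher, here's a back-of-the-envelope calculation. The attacker's guess rate below is an arbitrary assumption chosen just for illustration:

```python
# Back-of-the-envelope estimate of brute-forcing a 128-bit key.
# The guess rate (one trillion guesses per second) is an arbitrary
# illustrative assumption, far beyond any single machine today.
key_bits = 128
keyspace = 2 ** key_bits               # number of possible keys
guesses_per_second = 10 ** 12          # assumed attacker speed
seconds_per_year = 60 * 60 * 24 * 365

years_to_try_them_all = keyspace / guesses_per_second / seconds_per_year
print(f"{years_to_try_them_all:.2e} years to exhaust a 128-bit keyspace")
# Roughly 1e19 years, versus about 1.4e10 years since the Big Bang.
```

Budgets buy faster guessing, not a way around numbers like that; practical attacks go after implementations and endpoints instead.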

Am I too paranoid?

I was going to write a post about an article from Phrack, an old-school hacking e-zine. It wasn't about hacking; it was about how to make yourself happier by "hacking the idea of happiness," using various studies to arrive at some interesting, unintuitive conclusions. It had some cool points, and I had things to say about it, but I didn't post it. I didn't because I was scared that if someone traced me back to reading Phrack, it could cost me jobs in the future. If I ever decide to work in government, or do anything in the cybersecurity industry, it could be a bad mark on my permanent record. In my mind, writing in college about reading an article associated with a hacker e-zine is, in practice, almost the same as having a felony on my resume.

I'm not a hacker. I haven't done anything. I just read stuff online that seems interesting, but now I'm scared to associate myself with even visiting a website. These are websites that aren't illegal, and I'm posting from an anonymized blog. I should be pretty secure, and even then it shouldn't matter whether I'm secure, because I'm not doing anything bad. Yet I was scared to write an anonymous blog entry about something that is barely even arguably controversial.

This got me to thinking, am I too paranoid?

That's what this post is about. Realistically, somebody who googles the name "REDACTED" would not find this blog. However, a more in-depth background screening might. Then the question comes up: if this blog DID show up in a background check, would it matter?

Before she worked at Stevens, one of my professors worked on a program to find "at-risk" employees. It was intended to find "outliers" who might be the Edward Snowden of their respective companies. However, finding outliers is hard, so it ended up flagging an employee who visited the WikiLeaks website once as a high-risk individual.

If the background check uses something like that, then this blog would be bad to find.
The bright side is that maybe I don't want to work for a company that has a system like that in place (not because the idea doesn't make sense, but because it isn't accurate). When I was 14, I did a few dumb things online so that I couldn't "become some corrupt politician" in the future. That was kind of dumb, but the bright side is that this blog entry could end up serving the same purpose.

Horror Stories of the Postmodern Age, Part II

If anyone actually pays attention to the author names on these posts, you might recall a piece I wrote last month on why technology is ruining horror stories. If I were to sum up that post in one sentence, it would probably be, "go read the post." However, there was something I said in that post that I didn't think much of at the time, but another something recently reminded me of it, and I realized I had more to say about it.

The first 'something' was how sentient AIs are seemingly the next big horror monster. At the time, I dismissed the notion as already conquered by the likes of Skynet and HAL, but there was one famous fictional AI I neglected to mention, and though you may not know him now, I guarantee you will know him come next year.

The reason for that is the aforementioned second 'something,' and oh, what a beautiful something it is. I'm talking, of course, about the Avengers: Age of Ultron trailer that everyone needs to go and watch right now. The trailer doesn't reveal much about the story, but it speaks volumes about the tone of the movie - namely, that it doesn't look like a kids' movie. As much as I love the first Avengers film, I won't deny that aspects of it are obviously meant more for children and families. The sequel, however, looks to be taking on a much deeper, more serious tone. Set to a morbid and creepifying distortion of Pinocchio's "I've Got No Strings," the shots of a mangled, puppet-like Ultron shuffling across the floor, or of Captain America's broken shield, seem to imply that this movie will be, above all else, scary.

"But Dan, why does this seem scary? I thought Terminators and HAL had cornered the market on sentient AI horror?"

You see, the catch is something I never would have thought of had it not been for this trailer - taking characters that audiences know and love, and throwing them into horror stories. Many horror stories today have the disadvantage of needing to convince people why they should care about these characters, even when the audience already knows that they're in a horror story. How did Marvel and Joss Whedon get around this? They made all of their other Iron Man and Captain America films so far seem joyous and carefree, so that the audience was lulled into a sense of security about these characters. "Nothing bad could possibly happen, right?"

Wrong. The Avengers: Age of Ultron trailer is exactly the remedy the horror genre needed - taking beloved characters, and pitting them against horrific creatures, rather than making characters tailored for the story they're in. Imagine putting Frodo and Sam into Insidious, or Shrek and Donkey into The Conjuring. It might confuse audiences at first, but you can bet they'll be worried for the characters' lives.

Did I expect something like a trailer for a superhero movie to inspire me to revisit an old idea on this blog? Of course not. Am I going to be seeing Avengers: Age of Ultron opening night? Absolutely, and not just because I'm a fan of superhero movies, or a fan of horror movies, but because I'm a fan of both, and I have a good feeling that it might just be one of my favorite horror stories of the postmodern age.

Microsoft changes

    Today Microsoft announced that any Office 365 subscriber will get unlimited cloud storage space in addition to access to the Office products. While this plan has not fully rolled out yet, users can already get 1 TB of OneDrive storage by signing up for an Office 365 subscription. This is a major move, as most cloud storage providers, such as Google Drive and Amazon, tend to charge similar prices for limited space. Other providers that used to offer unlimited space have reverted to offering limited storage. It will be interesting to see whether Microsoft sticks by its decision to provide unlimited file storage.

https://blog.onedrive.com/office-365-onedrive-unlimited-storage/

    This follows a broader trend of Microsoft adapting its products: the upcoming Windows 10 is bringing a lot of changes. It will introduce new features such as multiple workspaces (virtual desktops) as well as a package manager called OneGet, which is based on an existing Windows package manager called Chocolatey.

http://www.howtogeek.com/200334/windows-10-includes-a-linux-style-package-manager-named-oneget/
http://betanews.com/2014/10/03/how-to-use-virtual-desktops-in-windows-10/

    Between these and other products such as the Surface Pro 3, Microsoft has managed to post very successful quarterly earnings. Let's see whether these trends continue in the near future.

http://www.fool.com/investing/dividends-income/2014/10/24/microsoft-corporations-first-quarter-earnings-the.aspx

Sunday, October 26, 2014

Wine Connoisseur's New Best Friend

As a wine drinker, a person begins to learn the difference between a good wine and a bad wine.  Having wine with dinner each night for years exposes a person to a whole palette of tastes.  As a new wine taster grows, along with his or her understanding of the different tastes, so does his or her curiosity. I started to expand my understanding of what makes wines so different.  I was curious as to why two bottles of the same wine, such as a cabernet sauvignon, could taste so different if they were supposed to be so similar.  Once I became capable of tasting the differences in wines, I researched the basics of wine and why the tastes differed.  One of the first things I learned was that the aroma of the wine plays a more influential part in its taste than most people think.  Wine that is unaged does not have as desirable a smell as an aged wine.  Another factor to consider is tannins.  Tannins, not to be confused with the level of dryness (which most people conflate because tannin does dry your mouth), are phenolic compounds that add bitterness to the wine.  They provide tastes similar to that of an herbal tea, add balance to the wine, and provide the chemistry that helps wine last longer.
                Recently on Kickstarter, a new product known as the Sonic Decanter made its way to market.  This product is an invention unlike any wine ager before it.  In an aging wine, reactions between different molecules take place that enhance aroma and flavor.  Most wine aging technologies, such as aerators, have the same objective as the Sonic Decanter but achieve it differently and less efficiently.  An aerator exposes the wine to oxygen, forcing it to oxidize faster. Although oxidation is not usually a problem if you finish the bottle the same day it's opened, most people don't finish a bottle of wine every day.  The Sonic Decanter addresses the problems that come with aerators by using sound waves to age the wine without having to open the bottle.  The sound waves eliminate the need for oxygen when aging the wine.  The makers claim that with the Sonic Decanter, a cheap $10 or $15 bottle of wine will easily taste like a $30 or $40 bottle.  The Sonic Decanter also comes with an app that can start the device while you are away and on your way home.  The device is said to extract more flavor than any wine aerator or decanter before it because of the way it ages the wine, and the sonic waves supposedly make it significantly more efficient than any other device of its kind.

                Although the product is only just becoming public, I believe the Sonic Decanter will be an invention no wine lover will want to live without.  To taste the best of each wine you buy, and to preserve it with a longer shelf life than an aerator allows, will be a luxury.

The Real Reason for Data Caps

Pull out your phone and check how much data you have left, if any, for the rest of this month. You, like many others, probably pay great sums of money every year to one of the big carriers in order to fuel your Facebook, Snapchat, texting, or whatever other social media addiction, only to be restricted each month in the amount of data you can use. People also fear the overly zealous overage fees, so they quickly jump to paying for plans that are more expensive. Are these data caps actually required, or are they just vehicles for larger profits for the carriers?
                A recent review of carriers' policies on data usage and caps concluded that such strict caps on customers are unnecessary. Recently, the big four carriers, Sprint, AT&T, Verizon, and T-Mobile, have been playing a round-robin game of increasing the data caps on their shared plans from around 10 GB per month up to 30 and even 100 GB a month in order to stay competitive with each other. During this period, the so-called network constraints seemed to vanish. Carriers did not magically double in capacity overnight, nor did they need to. It might seem encouraging that carriers are competing in their customers' best interests. However, the same carriers have been saying that data caps were implemented as a way to manage their networks. That simply cannot be the case if they can raise data caps so easily and so quickly.
                Carriers are using data caps to maximize profits in an easy and manageable way. The caps allow the carriers to tack on extra charges for every megabyte of data used past the limit. This either forces customers to watch their usage carefully or forces them into buying more expensive plans in order to gain access to more data per month. On the other side, content providers can actually pay carriers to let users access their services without it counting against monthly limits. This two-sided business model helps carriers make immense amounts of money easily, and it is why carriers are pushing grandfathered users still on unlimited data plans to switch to capped plans.
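As a toy illustration of how the math works out in the carrier's favor, here is a quick sketch. The base price, cap, and overage rate are made-up numbers chosen for the example, not any carrier's actual pricing:

```python
# Toy illustration of capped-plan economics. The base price, cap, and
# overage rate are made-up numbers, not any carrier's real pricing.
def monthly_bill(gb_used, base_price=60.0, cap_gb=10, overage_per_gb=15.0):
    """Return one month's bill on a hypothetical capped plan."""
    overage_gb = max(0, gb_used - cap_gb)
    return base_price + overage_gb * overage_per_gb

for usage in (8, 12, 20):
    print(f"{usage} GB used -> ${monthly_bill(usage):.2f}")
# 8 GB  -> $60.00
# 12 GB -> $90.00
# 20 GB -> $210.00
```

Going even modestly over the cap quickly costs more than upgrading to a pricier plan with a higher cap, which is exactly the squeeze described above.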
                Management tools that carriers could use to load-balance their networks are much more difficult to implement than data caps. Tools that detect congestion at particular cell sites and throttle the heaviest users there are far more complex than simply placing a limit on the amount of data each user can consume. Carriers operating such large networks should have tools like these in place instead of just pushing customers into restricted plans. The Federal Communications Commission (FCC) has been investigating the carriers and the market to determine whether customers are being exploited. However, the FCC has been more talk than action when it comes to protecting customers and the market overall. It will probably be a long time before carriers change their methods, but for the near future, we are simply subject to their rules.

                

Google's USB Security Key




Is it great news or what that Google is looking into USB security keys for its 2-step verification process? Google recently announced that it wants to move to Security Key technology for 2-step verification. Google is the first to adopt the Fast Identity Online (FIDO) Alliance's Universal 2nd Factor (U2F) standard for second-factor authentication. The FIDO Alliance is a group of nearly 120 companies, including Microsoft and Google but not Apple, that supports better online security through open technologies. The Alliance's support for open technologies, and the prospect of users securely logging in to all supported services with one key, is something worth considering.
The way 2-step authentication works now is that the user logs in to the service with a username and password, and the service then sends the user an SMS or e-mail with a code, which the user enters to be allowed in. This is a great security mechanism for the security-minded; however, there are times when the second step can turn into a pain. For example, if the user receives the code by SMS (data rates apply, of course), the user is relying on cell-phone service and good signal reception. If reception is bad, the code will not arrive, and in times of urgency that could be costly. To avoid this pain, users can instead use an authenticator app for the service, which generates the codes locally so no message has to be received at all. The security of relying on such an app for authentication can be debated as well.
That is why the new approach of using a USB security key for 2-step verification can be considered advantageous. Users will have to buy the USB key for about $20, and it will act as the second factor for verification. The key has a built-in chip that supports public key cryptography, currently only through Google's Chrome browser, which handles the verification process. What this means is that users must use Chrome in order to complete 2-step verification with the key; other browsers may add support in the future. This might make some paranoid types uncomfortable, which is why Google recommends that they stick with the old method rather than switch to the new one. Those who aren't concerned about using Chrome and Google's cryptography can certainly take advantage of this new means of 2-step authentication, which will provide more protection against attacks like phishing, keylogging, and man-in-the-middle.
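The core idea behind a key like this is a simple challenge-response: the key holds a private key that never leaves its chip, the service sends a fresh random challenge, the key signs it, and the server checks the signature against the public key it stored at registration. Here is a minimal sketch of that idea in Python; it is not the actual FIDO U2F protocol, and it uses the third-party cryptography package with an ordinary in-memory key purely for illustration.

```python
# Minimal sketch of the challenge-response idea behind U2F security keys,
# NOT the actual FIDO protocol. In a real key the private key lives inside
# tamper-resistant hardware and never leaves it.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the "key" generates a key pair and gives the server only
# the public half.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the security key signs it with its private key...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Second factor verified: the genuine key signed this login.")
except InvalidSignature:
    print("Verification failed.")
```

Because the challenge is random every time and the private key never leaves the device, a phished password alone is useless to an attacker, which is exactly the protection described above.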

http://arstechnica.com/security/2014/10/google-offers-usb-security-key-to-make-bad-passwords-moot/

Tinder Meets Ebola

Recently I saw Professor Vinsel tweet an article, and it got me thinking about the phenomenon of appification. The article, called "Tinder Meets Ebola: Creepy App Gives You Real-Time Distance From Nearest Sufferer," discusses how developers are exploiting the mass hysteria surrounding Ebola, using people's panic as a way of generating income. They have developed a web-based service called Ebola Tracker and an app called Ebola Near Me, which uses the app user's location to pinpoint the nearest reported case of Ebola. The app gives users a feeling of impending doom and of the inevitability of contracting the disease.
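For what it's worth, the core "feature" is trivial to build: given the user's coordinates and a list of reported case locations, it is just a nearest-neighbor search over great-circle distances. A minimal sketch of that idea follows; the coordinates in it are invented placeholders, not real case data.

```python
# Minimal sketch of a "distance to nearest reported case" lookup: a
# nearest-neighbor search using the haversine great-circle formula.
# The coordinates below are invented placeholders, not real data.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

reported_cases = [(40.71, -74.01), (32.78, -96.80)]  # placeholder coordinates
user_location = (40.74, -74.03)                      # placeholder user location

nearest_km = min(haversine_km(*user_location, *case) for case in reported_cases)
print(f"Nearest reported case: {nearest_km:.1f} km away")
```

That is the whole trick; the app's value lies in the fear it produces, not in any technical sophistication.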

My main concern regarding this app is the very fact that it exists. People are using valuable resources to make apps that only spread mass hysteria and panic among perfectly healthy people. If there were an app that explained how Ebola can be contracted or the prevention methods associated with the disease, that would be useful.

This leads to the discussion of how permeated our lives have become with applications and online technology. If there is a major issue happening globally, an application gets made about it. Major public health issues, economic crises, and national and international threats have been commoditized for the gain of a few people. These applications do nothing to solve the problem; rather, they serve as tools for fear mongering and mistrust among people. These apps pave the way for stereotypes to form about certain groups of people and certain cultures and heritages. This may seem like an overreaction, but small issues like these are how things escalate - once this information is set into people's minds the wrong way, its spread is inevitable and so is mass hysteria.

It is great that people are working to learn more about Ebola and its causes and looking for preventative measures, but this app seems like a mockery of a major issue. The developers have turned Ebola and the panic surrounding it into a commodity, and they are perpetuating the idea that it should be taken as a joke rather than as something grave and serious.

The Child Algorithm

Genetic modification has been performed on plants and animals for many decades now, but only in more recent times has the focus turned to humans. This is a very touchy subject because it greatly challenges the age-old definition of humanity. To many, genetic modification is seen as a horrible and terrifying dark art, while to others it is the great savior of humanity and planet Earth. The Human Genome Project completed its map of the human genome in 2003. With that map, people are able to have their genes tested to check whether they are carriers for certain diseases and whether they might pass them down to their children.

One method of testing whether a couple's children will have specific genetic diseases is to have the prospective parents send egg and sperm samples to a lab. With those samples, the lab fertilizes the egg and lets it mature for a few days; after this maturation period, a cell is taken from the embryo and tested for the genetic diseases in question. If the embryo carries them, it is discarded and another is tested, until one free of the diseases is produced. This is a slow and time-consuming process that requires the parents to send egg and sperm samples to the lab, which to some may be too much to ask, and which some may see as equivalent to abortion. This is where a new company, GenePeeks, hopes to make a difference in the field.

GenePeeks was created by Anne Morriss after the donor sperm she used carried the same genetic mutation she did, which caused her child to be born with MCAD deficiency. Currently, GenePeeks's service is only available to women who are looking to receive sperm donations to have a child, at a cost of 2,000 dollars. What makes GenePeeks so different from previous testing methods is that it does not involve fertilizing embryos at all, just genetic samples from both the donor and the recipient. It uses a specialized algorithm, invented by geneticist Lee Silver, which simulates around a thousand "babies" and tests whether the targeted diseases come back positive or negative. The current limiting factor on GenePeeks's ability is that it can only test for simpler single-gene disorders. That still covers thousands of disorders, but it does not touch more complex conditions such as diabetes and heart disease.
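To give a rough sense of what "simulating a thousand babies" for a single-gene disorder could look like, here is a toy Monte Carlo sketch of recessive inheritance. This is just the textbook Mendelian model, not GenePeeks's actual algorithm.

```python
# Toy Monte Carlo sketch of screening a donor/recipient pair for a
# recessive single-gene disorder. Textbook Mendelian inheritance only,
# not GenePeeks's proprietary algorithm.
import random

def affected_fraction(parent1, parent2, n_babies=1000):
    """parent1/parent2 are genotypes like 'Aa' (carrier) or 'AA' (non-carrier).

    Each simulated baby inherits one random allele from each parent and is
    'affected' only if it ends up with two recessive alleles ('aa')."""
    affected = 0
    for _ in range(n_babies):
        child = random.choice(parent1) + random.choice(parent2)
        if child == "aa":
            affected += 1
    return affected / n_babies

# Two carriers of the same recessive mutation: expect roughly 25% affected.
print(affected_fraction("Aa", "Aa"))
# Carrier plus non-carrier: expect 0% affected (children may still be carriers).
print(affected_fraction("Aa", "AA"))
```

The real service presumably works across thousands of disease loci with far more realistic genetics, but the basic idea of pairing virtual gametes and counting bad outcomes is the same.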

The moral issues behind this are astronomical. Will such technologies and services lead to a genetically modified race of superhumans, much as humans genetically modified chickens to increase their size and yield, or will the services stay clear of that side of the technology and only be used to test whether a child will be born with certain genetic diseases? I believe this is a field where limits should be set on what a couple, or even just a scientist, may do with an embryo. I believe that what Stephen Hsu describes in his article should not be allowed. The wholesale genetic modification of humans is a dark path that I do not think science should be allowed to go down.


Why encryption is so important

There have been countless cases of police abusing their power and illegally searching through people's electronic devices. This is all too common in our post-9/11 police state society. Various laws govern whether or not your electronic device, whether it is a phone, tablet, or smart watch, can be searched. But whether a search is legal or not, you can protect yourself from it. You do not have to incriminate yourself by handing your electronic device over to the police. Today we have encryption technology that can completely lock your information away from anyone. The amount of computing power required to access the information is so great that not even the NSA could decrypt it (for now...). The question is, why doesn't everyone use it?

Why don't you have your entire computer fully encrypted? Or your phone? Today I read an article about a woman who was pulled over and arrested for DUI. The officer asked her for the password to her iPhone and then searched through her photos and sent them to other officers. While the government doesn't seem willing to reform this pattern of abuse within our police state, we can protect ourselves from the threat by encrypting all of our devices. In addition, we can invoke our Fifth Amendment rights when asked for the passwords to our devices.

So why does all this matter? Maybe you have nothing to hide and don't care who looks at your data? Consider that there are other people who do have things to hide. Recently, the FBI announced that it is very unhappy with Apple's decision to make iMessage fully encrypted and accessible only to the people sending and receiving a message - not even Apple itself can read it. So now the FBI and various other government agencies are trying to scare the public into believing that if they don't have access, very bad things will happen. By using fear buzzwords such as "terrorists" and "child molesters," they are trying to convince the public that it is in their interest for all these government agencies to have full access to all their private conversations.
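To see why Apple itself can't read such messages, here is a minimal sketch of the public-key idea behind end-to-end encryption. Real messengers like iMessage use far more elaborate protocols; this example just uses RSA from the third-party cryptography package to show why a server that only relays ciphertext learns nothing.

```python
# Minimal sketch of the public-key idea behind end-to-end encryption.
# Real messengers use far more elaborate protocols; this only shows why a
# relay server that never holds the private key cannot read the message.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient generates a key pair; only the public half is ever shared.
recipient_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public_key = recipient_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the recipient's public key. Anyone relaying the
# ciphertext (an ISP, a chat provider, a subpoenaed server) sees only noise.
ciphertext = recipient_public_key.encrypt(b"meet me at noon", oaep)

# Only the holder of the matching private key can decrypt it.
print(recipient_private_key.decrypt(ciphertext, oaep))
```

If the provider never possesses the private key, there is nothing for it to hand over, which is exactly what is making the FBI unhappy.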

The problem with this logic is that people who want to keep their conversations private have many other tools they can use for that purpose. So what is more important: people having the freedom to hold completely private conversations, or saving at most 19 lives from terrorism (http://www.huffingtonpost.com/h-a-goodman/of-the-17891-deaths-from_b_5818082.html), lives which would be unlikely to be saved even if law enforcement agencies had access to those terrorists' conversations?

The main point I am trying to make here is that we should all start using encryption to our advantage, whether to protect ourselves from an abusive police state or simply because we value our privacy and think that nobody else should be allowed to snoop on our conversations. My belief is that your information should be for your eyes only, and for nobody else.

Source: http://arstechnica.com/tech-policy/2014/10/chp-officers-reportedly-stole-cell-phone-photos-from-women-in-custody/

Source: http://www.huffingtonpost.com/h-a-goodman/of-the-17891-deaths-from_b_5818082.html