Saturday, November 30, 2013

Graphene, The Super Material


Graphene is the thinnest substance ever created, is as stiff as diamond, and is hundreds of times stronger than steel.  It is flexible, stretchable, and conducts electricity faster at room temperature than any other known material.  It can even convert light of any wavelength into a current.  It has many potential applications, such as faster computer chips, flexible touch-screens, and efficient solar cells.  Graphene has one major flaw, however: because it lacks a bandgap, it is unsuitable for the binary on/off switching that is fundamental to electronics.  It is also difficult to make in quantity, since producing a single sheet of carbon atoms in a hexagonal honeycomb lattice requires rigorous attention to detail or tears will occur.  

In January, the European Commission in Brussels approved a graphene flagship project, a joint effort between industry and academia.  Several researchers have expressed concern about the size of the project, stating that it will inevitably be fettered by the bureaucracy of a venture of such magnitude.  The flagship programme is divided into 16 separate work packages, each focused on developing technologies in a different application area.  Producing graphene with the surefire method of mechanical exfoliation (the sticky-tape technique behind the 2010 Nobel Prize) is far too slow, and a micrometre-sized flake can cost over $1,000.  Graphene has already seen applications such as strengthening tennis rackets and printing conductive circuits, but only in an impure, weaker form.  



Graphene could potentially revolutionize the computer world: graphene transistors have been clocked at around 400 gigahertz, far faster than comparable silicon devices.  Europe has been the leader in academic research, with scientists coming from many countries, but Chinese, Japanese, South Korean, and American companies lead in development.  The European Commission wants the flagship to be as inclusive as possible to ensure that under-represented members can be a part of the action.  The consequence is that some existing research groups are barred from bidding for funds, which greatly limits what already successful groups can achieve.  Researchers prefer cohesive teams in which results can be shared among all members. When funds are split among multiple groups that each have their own agenda, progress is slowed for the sake of fairness.  

The way we think about electronics will have to change radically in order to incorporate graphene.  The cost barrier is definitely slowing development, but techniques have emerged to make production more affordable, and with time and research I am sure that costs will decline.  It will be interesting to see whether Silicon Valley opens up to the idea of being called Graphene Valley someday.  

src: http://www.nature.com/news/graphene-the-quest-for-supercarbon-1.14193

I find my attention span getting shorter

Earlier this year, we read an article for this class having to do with the effect of the internet on our brains – that it may be changing our brain structure to make our attention spans shorter.  Ever since I read this, I cannot help but notice how short my own attention span is.  I would link this article, but I’m too lazy to go through my email and find it (I’ll take this opportunity to blame my short attention span – thanks, internet).  There is rarely a time during the day when I am not constantly consuming some useless information, whether it comes from the internet or the television or some other media device.  Reddit is the example that stands out foremost in my mind – every link delivers a quick, momentary jolt of excitement, which fades away as quickly as it came, prompting me to click the next link – and so on, and so forth.  Ever since I became conscious of this effect, I cannot help but notice it everywhere – I love getting these quick jolts of excitement from seeing a picture or reading a quick news article.  It rewards having a short attention span by giving you a little bit of excitement for everything you click.  Things that require periods of focused attention, like reading a book, make me physically anxious – I feel like I’m missing out on something, like I could be doing so much more.  Of course it’s not true – for every genuinely good piece of content on Reddit, I have to sift through hundreds of links of genuinely crappy content.  Look at the front page right now – it’s all stupid pictures that will give you a quick laugh and that you will then immediately forget.  Maybe there are a few news articles about things that are going on right now, but many of them are misleading, don’t present the whole story, or are something you will forget about five minutes after reading.  Yet I and many others still go on there when we’re bored, or to procrastinate, or whatever.

            Before the internet, it was hard to get information – you had to go to a library or have access to someone who was knowledgeable.  Now, we have the opposite problem – we have access to an abundance of information, but most of it is not worth your time.  Most Facebook, Reddit, or online news posts are not worth reading at all, because they contain useless information that will not improve your life.  We just keep consuming it because we get a rush when we get to that 5% of content that is genuinely good and worth consuming.  Why aren’t we alright with consuming less?  For me personally, I struggle with what to do with my time – time when I am not doing anything feels like a waste for me.  But I think I could use some time where I just separate myself from this information stream coming from television, computers, and cell phones and just think.  When I do, I find that I am really able to relax.  After the anxiety about being disconnected goes away, I can collect my thoughts.  I have to learn to be alright with doing nothing sometimes – it is often better than consuming useless content on the internet, which I used to consider to be “doing something.”   I find that trying to read for pleasure also helps me to increase my attention span – I made it halfway through The Brothers Karamazov this semester, just reading it for pleasure and not for any class.  I intend to finish it, because aside from being a genuinely great novel, reading a long book helps me practice increasing my attention span for long periods of time, which helps me do things like study for a test, or learn a hobby.  Obviously, the internet is a great thing, but I definitely find that it encourages having a shorter attention span.  I am going to be a little more careful about that from now on.

Stevens' Residence Life

Although it's not specifically the internet that's the problem, I feel that the Residence Life office at Stevens ought to have a lesson or two in C&S; they have recently been switching all the doors from keys to ID cards. Now, this is great and all, because it stops you from making a copy of your key and giving anyone full access to your building. However, what they have failed to consider, or simply chose not to, is the denial of service this creates in the event of a power outage. When I lived in Castle Point Apartments last year, the Office of Residence Life did the same thing - they removed our key cores, so that keys could no longer be used to access the building. Then, Hurricane Sandy hit Hoboken, and because they were too brainless to put key functionality back in, a lot of people were locked out on a daily basis, and the solution was to simply leave the front door open at all times. Really? Isn't limiting access to the building the reason they removed the key cores in the first place?
One would think this would teach them a lesson - but no, this year, now that I'm living in Palmer Hall, they decided to do it again: they removed the key cores from Palmer. Literally later that week, there was a power outage in Palmer Hall for some unknown reason, and, oh so surprisingly, the ID cards didn't work. I propped the door open - Residence Life wasn't even aware of the power outage for a good few hours. When a few friends and I emailed Residence Life to ask for the key cores back, we were ignored. However, one friend decided to call - and he was told that the reason for using ID cards only is not just to limit the people who can enter a building, but also to keep track of where each student is at all times – for safety purposes. The idea is that in the event that a student goes missing, they have some idea of the last time the student was seen.
Is this not the exact same thing that created loads of hype with the NSA? For the sake of security, some higher organization has taken it upon itself to invade privacy and automate the recording of “public information”. The worst part is that the majority of students don't know and haven’t been told the reason for the removal of key cores – they just go along with it, perfectly accepting of the new oppressive Residence Life policy.   

Digitizing everything works sometimes, but the ramifications of doing so need to be considered – it’s simply not a good decision to compromise the availability of a service just for the sake of “added security”. I somehow doubt that the security system they speak of, where they can keep track of where each student is, ever actually gets put into use. I mean, I could easily avoid swiping anywhere for a week – there’s a side door to Palmer which is always left ajar, or I could tail behind another person and not use my swipe. Would they do anything if I didn't use my swipe for a week? Probably not – so why do they insist on this digital system as a means of “security”, when all it really does is hinder availability? 

The Internet and the Death of Thought

                Thoughtless internet use can lead to a dangerously shallow pattern of thinking. The internet's massive stores of information can essentially serve as a shield from any sort of deep thought. Instead of taking the time to process each individual kernel of knowledge, people often jump back and forth between different pages without internalizing anything. In a constant web of links it can become incredibly difficult to focus on any single thought, and this is dangerous. Over time the repeated fragmentation of one's attention can have a serious effect on one's attention span, and this harms much more than the value of the time one spends online.

                The objectivity of this post was quickly derailed once I began to examine how I interact with the internet. So instead of exploring how the internet affects people in general, I am going to talk about my own personal relationship with the net.

                While on the computer, I fall prey to the exact situation I described above. I entrench myself in a constant internet feedback loop, and I allow myself no time for deep thought. I go through the same three websites over and over again, absentmindedly hoping for some sort of superficial stimulation. I rarely dig into the content of these pages or synthesize any sort of new ideas. I simply skate along the surface and experience the web in a transitory way. I may grab a few headlines here and there, or chuckle at a dog picture, but this does not amount to anything of value. I don't allow myself the time to process the information in any meaningful way; I just keep clicking and clicking and clicking.

                This is a completely mindless use of my time, and I feel like it has been eroding my ability to concentrate. Even as I write this post I find myself going through the aforementioned cycle.  It is hard to focus on my thoughts, because I habitually silence them with the click of my mouse. What's worse is that with the advent of the smartphone, there is no reason for me to ever leave the feedback cycle. I am constantly connected to the internet, so I can check my top three websites whenever I want.  Why do any sort of self-reflection when I can look at funny dog pictures on Reddit? I have never explicitly asked myself this, but in wasting my time browsing the net this is essentially what I am doing. My mind often does not have enough time to breathe.   


                I feel as though this whole post has been a bit dramatic. It is not like I never have a single thought while on the web, but I believe this sort of cursory web browsing is really affecting me in a negative way. I waste far too much of my precious time staring at a screen. I think this is a problem for a lot of people today, especially those of us here at Stevens. It is all too easy to get sucked in by the internet.  I encourage you all to do a similar analysis of the time you spend online. As you can see, it has been a very revelatory experience for me. 

Snapchat and Social Interaction

Snapchat is a free app that allows anyone to take a picture or a quick video, text or draw a caption, and send it.  The picture can be displayed for a maximum of 10 seconds, and after the 10 seconds it disappears into I don't know where, but the idea is that it's completely deleted.  It has become very popular in younger generations as a way of interacting with each other and I think it is very interesting.
  There are days where I can get as many as 25 "snapchats" and days where I don't get even one.  I find there are definitely different types of snapchat personalities as well.  One of my friends ALWAYS sends a snapchat of a selfie with a caption (there is a red line under the word selfie, and I believe it was just recognized as a word in the dictionary, so I think this website should address that).  I don't see her often, so I didn't care in the beginning.  But let's get real ... I know what she looks like, and I don't need a picture of her face every 3 hours showing what she's doing at work.  Another friend of mine always sends pictures of the children she babysits -- now I just have an unrealistic expectation of how cute and well-behaved children can be.  Some people never send selfies and just send pictures of what they're currently doing, no matter how exciting or boring it may be.  Regardless, it has become a very common way of communicating with each other outside of texting.
  One of the most interesting things I find about Snapchat is the conversations I have with my friends about how they use it.  A couple of my friends have said that if someone doesn't answer one of their snapchats, they personally feel a little embarrassed or offended and will send fewer snapchats to that person.  But what if I'm doing something that I don't think is worth taking a picture of? I'm definitely that person who doesn't answer all of the snapchats I receive.  What if I'm in a situation where I can't take a picture? While I didn't ask my friends those questions, I still thought it was an interesting way of thinking. 
  Another interesting thing about Snapchat I recently came to notice is that one of my friends uses Snapchat as a way of communicating more than texting.  My preferred mode of communication is in-person interaction, then maybe texting, and last of all, Snapchat.  I'd rather have conversations with someone over texting, but she seems to have them in Snapchat.  I understand if it's a great picture; by all means, go ahead.  But at one point all she was taking pictures of was the TV in front of her, sending me 7-word captions (since that's about the limit).  I was answering because we seemed to be having a conversation, but after I replied the second time I thought, "why can't we just text this?"  It's happened quite a few times in the past week.
  Anyway, I think it's an interesting thought of how Snapchat has changed the way we interact with each other.  I believe it has made communication with pictures much more common and has added yet another dimension to the way we socialize with each other.

Ancient Aliens -- really??

Sometimes when it's late at night and I can't fall asleep at home, I'll find a TV show on the History Channel (I think) called Ancient Aliens.  If you've never heard of it or watched it, I highly suggest you watch at least 15 minutes of one of the episodes.  It's so entertaining, and there's a great meme of one of the guys who is always appearing on the show, convinced that every piece of history can be attributed to aliens.  So much so that I think he even once claimed it could be possible for Jesus to have been an alien (my mother was not too happy that was broadcast on television).
  Anyway, some of the somewhat preposterous theories they present on the show are actually pretty interesting.  Whether it's weird hieroglyphics on the Pyramids of Giza depicting things in the sky or ancient drawings/documents with a diagram of an accurate electrical circuit, it always makes me think about ancient history.  While I think about the possibility of ancient technology being lost in the past, I always hope there aren't (too many) people who actually believe that aliens were the cause of all of our history.
  In our class earlier this week, the subject of our minds was brought up.  The class was discussing the one change in evolution that made humans superior to all other animals in terms of consciousness, awareness, "being," morality, etc.  While I was sitting in my seat, all I could think was that Ancient Aliens would absolutely say aliens were the root cause.  I'll conclude this short blog with something I found on the internet a while back:

Elementary School: Here's a basic understanding of history and how the world works.

High School: Actually, that's not quite right.  Everything is actually a whole lot more complicated than that.

College: EVERYTHING YOU KNOW IS WRRROOONNNNGGGG.

History Channel: Aliens

Human Intervention

I unfortunately came late to the last class because I was feeling under the weather, so I was unable to take part in the majority of the discussion on posthumanism. I was really looking forward to hearing more from everyone regarding this issue, because I feel that despite all these technologies at our hands, at the end of the day it is still up to humans to make the right decisions.

Sure, we may be giving more and more things to technology to do for us, and sure, technology may make life easier so we can focus on supposedly more important things in life, but the human factor will still play a role in anything involving technology, since we are the ones using it. We are the ones who decide whether to use a certain protocol or not. We are the ones who can choose to be lazy-asses who don't know how to do anything and just let the technology do it for us. We are the ones who can use technology in the right way and benefit others, or in the wrong way and harm others. That's why we constantly have to question the makers of new technology.

With any technology, we need to ask ourselves if it's really necessary to make--is the problem that the technology solves really that much of a problem? Or was it developed solely to encourage laziness and to make money off of the fact that people have this desire to do less work? I know there are a bunch of people out there who think that they do too much and want to be able to do less, but maybe there's a reason why we should do the work rather than have technology do it for us. The iffy example I want to give pertains to what I might possibly tackle in my final paper: the internet has made it much easier to look up anything, so why bother with a college education when you can just search for the information you need? Why not just make searching for the information you need easier than actually learning the material, which can take up a lot of time? Most people will swat this question away because they're not concerned with actual learning--they just want what's necessary to get the job done.

That's why I feel like rather than just focus on making technology better, we should also find ways to make humans better. And by that I mean without having to use technology. Educate people so that they can make better decisions. Have them do the work and reinforce the information that they should know. Because everything pretty much comes down to the choices that we make, whether individually or as a group. And the more knowledgeable people are, the more informed their decisions will be, which will usually turn out to be a better choice to make.

That's one of the things I'm taking away from this class--that even with all these advances in technology and improvements to the human condition, we still need to strive to be better without the use of tools. That way, when we do have technology in the palms of our hands, we'll be better able to use it.

The grammar in technology

  For my final paper I chose to read the book The Shallows by Nicholas Carr.  It was about how technology from the past and present has affected our brains.  As a Biomedical Engineer, I found this extremely interesting and agreed with almost everything I read because I felt that my brain was, indeed, changing.  For this blog post I want to only focus on one part of the book -- grammar, syntax, and writing skills.
  Carr only focused on this for a very short portion of the book, but I thought the subject was profound. He was emphasizing that as digital word processors became the norm, the writing and language skills of many people basically went out the window.  While this may not be true for those in English-related majors, I feel that it is extremely true for the general population (or maybe this just seems exaggerated because I go to a tech school where writing is not a priority).
  Before there were programs like Microsoft Word, whenever someone wrote anything, they had to ensure perfection before it was printed.  Printing was an incredible innovation in itself, but once something was printed and distributed, there was no going back to edit any mistakes.  Therefore, writers had to edit and re-edit countless times to make sure their ideas were succinct and clear.  Once word processors came out, writers were blessed with the convenience of being able to easily edit their work on the computer with a simple delete key, easily reprint their work, or, once the internet became a convenience, just go online and edit their document.
  The convenience of editing one's work led to people gradually becoming less diligent about their writing.  People edited less, and in this huge domino-effect type scenario, people became less attuned to proper grammar and syntax.  While some can disagree and say they used slang on AOL Instant Messenger since they were 10 years old and basically invented words, one can argue right back and say the adults of Facebook have no idea how to write.  Yes, Facebook can be treated as a relaxed, social environment, but when everyone can see it, including younger people, these writing habits become ingrained in our minds.  The end result is that no one knows proper grammatical rules -- I don't even know them.  But I find it a shame when my mother, a teacher in an elementary school, comes home and says that her coworkers don't even differentiate between "good" and "well" properly in front of their students. 
  Even while I write passionately about this issue, in the back of my mind I'm wondering (and I'm sure other people would too), what does it matter?  As technology becomes more streamlined and grammar seems to matter less and less, is it really that bad that few people know these things?  I'm not really sure, but after thinking about this I think I'm just old fashioned.

Techsgiving

November 28th, finally, and that means a ton of turkey and other foods that we, as Americans, have associated with Thanksgiving.  As a hard-working college student, I took the liberty this Thanksgiving of sleeping in until 11 am, then waking up and starting my annual ritual of eating everything in sight.  But this year, I noticed something strange everywhere I went: everyone was on a phone or tablet for the majority of the day.

I woke up on Thursday to a multitude of text messages from people wishing me and my family a Happy Thanksgiving.  Even one of my friends studying abroad in London managed to wish me well on this special day.  After responding to the messages and reading a few e-mails and tweets, I went downstairs to find my sister and my mother both talking on their phones, catching up with people.  As the day progressed and food came and went, the food coma set in and we all ended up on a computer or some device.  We were not stuck in our own worlds, however; we were still communicating, just with a glowing box in our faces.

After a quick cleanup, I decided to visit a few friends because the best way to spend the holidays is by driving around.  Once I arrived at a friend’s house, I said hi to their family and watched as many of them broke in and out of conversations with their phones or tablets.  The TV was on in the background, streaming the various football games that were being played, as is tradition on Thanksgiving.  One family member (who is less knowledgeable in the computer field) even engaged me in a question about ISPs and routers.


Perhaps it was just me noticing much more of a prevalence of technology this year, but almost everyone, young or old, was on a personal electronic device.  Whether these devices brought these people together or not is still up for debate.  Many of the distractions came from buzzing phones and beeping tablets.  But I would like to think that technology brings everyone together on the holidays.  As a challenge, I give you this: on New Year’s Eve, as you party with the people around you, when the ball drops and the New Year starts, see how many people you end up messaging, or how many phone calls you get.  Even check a social media site or two and just read how many people are excited for the New Year.  Technology has quickly brought us together as a people, especially on the holidays.

Elysium

   A couple of weeks ago I watched the movie Elysium.  It came out this year and starred Jodie Foster and Matt Damon.  The plot takes place on a futuristic Earth, around 2150, and heavily focuses on technology and how it shapes society.  Even in the first fifteen minutes I thought this movie applied so much to our class and how technology affects human civilization.
   To quickly summarize:  Elysium is a satellite-type habitat just outside of Earth.  Those who are wealthy enough to live on Elysium do so, and the rest of the seemingly corrupt society resides back on Earth.  The movie often shows the poor gazing up at Elysium, just like we look at the moon, wishing they could go there.  Back on Earth, people are struggling to find work since robots and technology have taken over almost every aspect of life.  Money is scarce (the movie has a scene where little boys are fighting over a $5.00 coin).  The rulers of Elysium, however, are pretentious-looking people who have no regard for any of the poor who try to hijack spaceships to get to Elysium -- they just shoot down the spaceships because they don't want their home to be full of people who have less than they do.
  I thought this brought up an interesting question that we had not discussed too much in class: how would technology affect the classes in our society?  Would it widen the gap or somehow close it?  This movie portrays the former.  In Elysium, the lower and upper classes are as polarized as it gets.
  I think that when/if technology begins to take over more human jobs, the gap between classes will also increase.  There are certain people in the world who are born into insane amounts of wealth and keep it within the family.  They are prepared their entire lives to take over the same career paths their parents have -- ones that bring in a lot of money.  As a senior looking for a job, I'm already seeing this in my town.  All of the people my age who have parents working on Wall Street are getting jobs on Wall Street, one of the most lucrative and competitive places to work.  If technology ever took over the human role in even more workplaces, the middle/working class, which has always had to work hard for its money, would struggle and possibly even merge with the lower class, while the upper class would prevail.
  While this thought is certainly depressing, I'm hoping it's a worst-case scenario.  I think before new technology becomes a regular part of human life, there are a lot of ethical discussions that need to happen.  Even as engineers, we theoretically have a code of ethics to follow -- I know as a Biomedical Engineer I take that very seriously, considering I hope we do not all become robots. 
  Elysium wasn't one of the greatest movies I have ever seen, but in terms of science fiction I think it was a really interesting concept to entertain with regard to the hardcore societal effects technology could have.

Patenting SSL with a Certain Cipher?



Earlier this week Newegg lost a patent lawsuit. Newegg was being sued by TQP for infringing on a patent that covers using SSL with RC4 as the cipher. This is a big deal because the patent holder does not hold a patent on SSL or on RC4, only on using the two together.

When a server operator is setting up a web server and wants to use SSL to secure the connection to the client, they have to choose which ciphers are allowed. Other available choices include AES, 3DES, and Camellia. Using RC4 is less computationally intensive than the other options, which means it is often used on servers running on older machines. The combination of SSL and RC4 is used by a lot of companies and private individuals.
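
To make that choice concrete, here is a minimal sketch, using Python's standard ssl module, of a server operator allowing only RC4 for incoming SSL connections. The certificate and key file names are placeholders, and newer OpenSSL builds may refuse to negotiate RC4 at all, so treat this as an illustration of the configuration step rather than a recommended setup.

import socket
import ssl

# Build an SSL context and restrict the allowed ciphers to RC4-SHA,
# i.e. the "SSL + RC4" combination the patent claims to cover.
context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder files
context.set_ciphers("RC4-SHA")

# Plain TCP listener; the SSL handshake happens in wrap_socket().
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 4433))
server.listen(5)

conn, addr = server.accept()
tls_conn = context.wrap_socket(conn, server_side=True)  # negotiates RC4 with the client
tls_conn.sendall(b"hello over RC4\n")
tls_conn.close()

The point is simply that pairing RC4 with SSL is a one-line configuration choice, which is part of why a patent on the combination reaches so many server operators.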

SSL was first developed by Netscape, and RC4 was developed by RSA Security. Since SSL was meant to be a public protocol, it's easy to see how anyone would end up using it, and RC4 was reverse engineered, or leaked, by a member of the cypherpunks mailing list. Separately, these two technologies are completely free and unencumbered; using the two together is even an option in some graphical interfaces for web servers.

How could a company own a patent on using the two together? They simply should not be able to. If TQP is given this power, it will have the ability to sue millions of companies for 'using' its patent.

Google Books



Google Books allows users to search the full text of books and magazines that Google has scanned and converted to text using optical character recognition. It was estimated that in 2012, Google's database encompassed more than 30 million scanned books. Not all books are available in their complete form, however. Books in the public domain are available in "full view" and free for download, while for in-print books where permission has been granted, the number of viewable pages is limited to a "preview" set by a variety of access restrictions and security measures chosen by the author. For books where permission has been denied, Google only permits a viewable "snippet" of the book, which may consist of only two to three lines of text. 

On November 15, a U.S. federal judge ruled in favor of Google after a long 8-year legal battle. The ruling allows Google’s book-scanning project to make complete copies of books without an author’s permission. Judge Denny Chin of New York ruled that Google’s move to digitize millions of university and commercially available books is on its face a violation of the owners’ copyrights, but that Google’s limited use of the works makes the scanning “fair use” under copyright law. He also noted that Google’s scanned-book project benefits society by making books easier to discover via internet or university library searches. Google countered the copyright-infringement argument by pointing out that it provides links to where the books can be legitimately purchased. In his opinion, Judge Chin wrote, “In my view, Google Books provides significant public benefits. It advances the progress of the arts and sciences, while maintaining respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders. It has become an invaluable research tool that permits students, teachers, librarians, and others to more efficiently identify and locate books”. The Authors Guild was disappointed in the judge’s ruling and stated that it would immediately appeal the decision. Last year, Google compromised with the Authors Guild, agreeing to obtain permission from rights holders before scanned university library books were added to Google’s database. 

From my perspective it seems Google’s book-scanning project is a benefit to society, but Google’s main reason for making digital copies of books may be strictly profit. Even though it doesn’t show ads alongside book views, it still ranks book links and forces authors and publishers to pay for top results.

Vim is Love, Vim is Life

I recently switched over to Ubuntu as my primary operating system on my laptop. It's been a long time coming, considering that I'm a Computer Science student with a heavily C-oriented course load. However, it's not just the operating system that matters when you're writing code. You also have to decide what you're going to write your code in. Some opt for an IDE like Eclipse, while others keep it simple with nano or gedit. As with almost all other things in life, it's important to find a healthy medium. When you're looking for something without the bloat of an IDE but with more functionality than a simple text editor, most roads will lead you to either emacs or vim. I've long considered myself an opponent of fanboyism, so, after taking 392 and listening to Professor Gabarro's rants about how great emacs is and how much vim sucks, I decided that I'd start using vim. However, I've also always considered myself a hypocrite, so let me tell you about how awesome vim is!
Vim, short for Vi IMproved, is an improved version of the classic text editor vi (go figure). Vim is a modal editor: when it starts up, you begin in normal mode, which means that you can't just start typing. Now I'm sure you're thinking "what the heck is this boomshocky? why the flip can't I just type the shtuff I want to?" (edited for G rating). The answer is simple: vim lets you navigate and manipulate text files right off the bat. Once you learn the dialect associated with vim, hopping around your text document in normal mode becomes less of a combination of shortcuts and more of a language. The best part is, all of this is available from the home row. With the proper acclimation period, a good touch typist can become more efficient with vim than they would be with a regular IDE. Now that we've gotten the awkward introduction out of the way, let's tackle some of the deeper questions associated with using vim before I allow you to touch my baby. You're almost definitely thinking, "why would I learn this antiquated method of editing if I'm already as addicted to IDEs as Ewan McGregor was to heroin in that one movie?" The answer is multifold. For one, you won't always have access to a GUI when editing code. You can't always expect to be able to edit from the comfort of your own rig. Sometimes you have to SSH into a machine to do some work. Another point is that you can really be more efficient with vim. Think about all the scrolling and clicking you'd have to do to navigate to a particular line in Eclipse or a similar IDE. In vim you'd just type the ":" character and the line number. BOOM! You're at the line you wanted to be at. Another benefit of learning vim is that vi, the editor it extends, is specified by POSIX, so the concepts you learn to become a better vimmer basically carry over to general Linux work. For instance, searching in vim uses regular expressions much like the ones you'd use with grep or sed. Knowing vim makes you a better computer scientist.
Overall, what I'm trying to say is that every computer scientist should at least know how to use vim. It's not perfect, considering that it's really only a text editor and lacks IDE-style features like full autocompletion without plugins, but I find it better for my needs than emacs. So, I'll generalize: vim is better than emacs. Prove me wrong in the comments.

Friday, November 29, 2013

Business and AI

A quick read of an article entitled “Beyond Watson: 3 Techs That Depend On Artificial Intelligence,” written by Lars Hard, brought some nice points to mind about artificial intelligence. I have blogged a bit about artificial intelligence and talked about how it won’t get very, very far, but I do think it has gotten far enough to play a significant part in our lives. This article shows that it can help with business ventures. It can be argued that some AI systems have become more skilled than humans. I still think that AI can only do as much as a human allows it, but most of mankind does not know how artificial intelligence works, so in that sense the systems would be more skilled. This means that humans should take advantage if they can. The following business areas should be paid attention to with regard to the growth of artificial intelligence.

Hard talks about e-commerce and product recommendations first. He describes Amazon’s first-generation collaborative filtering for online recommendations, which worked in the fashion of “if you bought product A, you will probably like product B” pitches. The limitation is that this works best when you are buying items in large quantities. What if you were buying a high-priced item? You would probably only buy one, and that is what made this kind of filtering tough. Enter artificial intelligence, and you have the second generation of recommendation filtering: it expands the search and makes it more advanced, so that recommendations can be focused on the specific features a shopper looked for.
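
As a rough illustration of that first-generation idea, here is a toy "customers who bought A also bought B" recommender in Python. The purchase data is made up for the example, and real systems obviously operate at a vastly larger scale, but the co-occurrence counting below is the core of the approach.

from collections import defaultdict
from itertools import combinations

# Each inner list is one hypothetical customer's order history.
purchases = [
    ["camera", "sd_card", "tripod"],
    ["camera", "sd_card"],
    ["laptop", "sd_card"],
]

# Count how often each pair of products shows up in the same order.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in purchases:
    for a, b in combinations(set(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item, k=2):
    """Return the k products most often bought alongside `item`."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: kv[1], reverse=True)
    return [other for other, _ in ranked[:k]]

print(recommend("camera"))  # prints: ['sd_card', 'tripod']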

Taking advantage of this can help retailers make better use of their product databases. Artificial intelligence can gather information from multiple data streams across e-commerce sites so that relevant items can be recommended to customers. It now looks past simple statistical correlations and digs deeper into the data. I think this would be an interesting area to investigate.

Hard talks about vertical search next. Vertical search is mainly used in areas ranging from travel and electronics to books and films; users go directly to a site like IMDB rather than through a general engine like Google. It uses semantics, text classification, feature extraction, and advanced big-data analytics mixed with other AI algorithms. It has gotten to the point where Google even feels the need to respond and create some vertical searches of its own. Google hasn’t fully responded yet, so it would be best to get a leg up on them now. To take advantage of this, consider making vertical search pools plentiful, narrowly focused, and well stocked with content. These AI technologies can make vertical searches more intelligent than a general Google search.
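
To show what makes a search "vertical," here is a tiny Python sketch that queries a structured, domain-specific catalog by features instead of matching keywords across the whole web. The catalog entries and fields are invented for the example.

# A made-up, movie-specific catalog; a real vertical search site would have
# millions of entries and many more domain-specific fields.
MOVIES = [
    {"title": "Elysium", "genre": "sci-fi", "year": 2013},
    {"title": "Gravity", "genre": "sci-fi", "year": 2013},
    {"title": "Up", "genre": "animation", "year": 2009},
]

def vertical_search(genre=None, year=None):
    """Filter the catalog by the structured criteria the user cares about."""
    results = MOVIES
    if genre is not None:
        results = [m for m in results if m["genre"] == genre]
    if year is not None:
        results = [m for m in results if m["year"] == year]
    return [m["title"] for m in results]

print(vertical_search(genre="sci-fi", year=2013))  # prints: ['Elysium', 'Gravity']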

Hard ends with a discussion of virtual assistants. Smartphones are growing in number, and they are not stopping. We know some of the more popular virtual assistants, like Siri and Google Now. They are not there yet, but they are supposed to reach the point where the user really feels like they are interacting with an assistant. Apple is working on improving Siri now and wants a leg up on the competition for the best virtual assistant. The best way to take advantage of this is to watch how these assistants grow. Anyone creating APIs in this space should keep in mind the goal of developing long-term, intimate relationships with customers.

The smarter an app or platform is, the more predictive it can be, and that is ultimately what the consumer wants: products that can do things for them, if they are smart enough to do so. Watching how AI grows and moving along with it will help in business ventures.


Let's stop defending video games by talking about hand-eye coordination.

               Every now and then a study pops up which says that video games are linked to some sort of small psychological benefit. This is inevitably followed by the gaming community extolling the virtues of their hobby. This is foolish, because it is not a good reason to play video games; it is simply a pleasant side effect of a specific type of entertainment. It is a very childlike defense of an art, and whenever I hear it my mind immediately goes to a child attempting to justify some bad behavior to his mother.  "But Mom why can't I have pizza?! Pizza has tomato sauce on it, and tomatoes are a vegetable!" Instead of this, why not discuss the range of emotional experiences this medium can provide, or the time and energy that goes into creating a game's assets, or the ability of games to offer a societal critique.

                I am going to start off with the idea that video games increase your hand-eye coordination. The most recent study to explore this idea compared the skill of laparoscopic surgeons who played video games to those who did not. For those of you who don't know, laparoscopic surgery involves making small incisions and using cameras to see what is happening inside a patient, so the surgeon is looking at a screen during this type of surgery. It turns out that gamers are better at this type of surgery than non-gamers. I don't mean to minimize the work of these researchers, but this is pretty obvious, isn't it? Doing complex tasks on a screen correlates directly with doing complex tasks on a screen. This is not a good reason to play video games; it is just a fairly intuitive circumstance. It might have some interesting implications for the development of surgery training simulators, but that is about it. What makes video games a worthwhile experience is the feeling of complete serenity one gets from playing Flower, or the emotional catharsis one experiences when retracing the steps of a missing family member in Gone Home. Who cares about a gamer's reaction time when creators are exploring such interesting new methods of storytelling?


                Another recent study of gamers determined that they have "an enhanced allocation of spatial attention over the visual field." (Nat Geo) They tested this by running a number of different tests, such as flashing objects on a screen and asking people to point out where they were, and asking people to state how many objects were on screen at a given time. Not surprisingly, people who played FPS games did well on this test. Once again, why does this matter?  Yes, these people might be a bit better at identifying objects in their field of view, but how does this justify someone spending their time on this medium? The answer is: it doesn't. No one is playing these games to prepare themselves for situations requiring acute awareness, so let's stop acting like it's such a big deal. If we are going to defend video games, let's talk about their merits for telling stories, not for shaving milliseconds off one's reaction time.



http://news.nationalgeographic.com/news/2003/05/0528_030528_videogames_2.html

Code, all of you


                In a somewhat recent trend, learning to program has become a popular subject. I think it’s great, but there are a few problematic concepts associated with it. The reason it’s great is because of how technical society has become. As a technical individual who does programming as both a career and a hobby, it’s annoying to earn the association of “you work with computers? Well, my computer isn’t working…” Often, it’s the user, not the machine. But here lies the first issue: the general public shouldn’t be going from “Computers are complex, I don’t know how to work them” straight to “I’m going to program”. They should first learn general IT, a boring but simple and fundamental step toward learning programming. If people know why you can’t plug a USB mouse into an Ethernet port (people have plugged them in there, and if it doesn’t fit, trust me, they make it fit), then they might also know that if one port doesn’t work, they can try another one. Then, if/when they go to get the computer repaired, instead of “my mouse doesn’t work” (which was a user issue, as the repair person will plug the mouse into the correct port and say everything works), they will say “The USB ports on my computer don’t work”.

                Now, let’s take the next step… scripting. When people are taught math, they aren’t given a function and then told to solve it. They are first given numbers, the primitive types of math. When people are taught formal language, they are first given letters, the basic primitives of the language. Next, in math they are given operations such as addition and subtraction; in language they are given combinations of words arranged in a certain structure. Rules are applied to those structures of words, termed grammar. As time goes on, the simple math becomes simple functions, y=a+b, and language becomes sentences, paragraphs, and pages. If we skip further ahead, we have full math equations such as integrals and Taylor series, and language becomes specific formatting, grammar, written layout, and structuring.

                Now let’s look at programming, I mean scripting. First we learn the basic primitives: character, number, string. Next, some simple operations such as +, -, if, and goto. Now some might be thinking “hold on there, goto?” Yes, goto. Because, as stated, they should start with a scripting language, and most scripting languages have comparison operations (a basic if) and jump statements to respond to those ifs (goto). Throw in learning how to store variables, retrieve variables, print output, and read information from the user, and you have a nice little Turing-complete language understanding. Create a batch or shell script that lets someone pick a set of files they want to copy to a flash drive, name it something well known such as “Backup Computer”, and suddenly the idea might click: “if I run that, I will back up my computer.” If an error comes up saying “can’t find drive D:/”, they should obviously know they didn’t have their flash drive or external drive plugged in. But, as anyone who has dealt with IT knows, and I will be blunt about it, people can be amazingly stupid sometimes. “Flip the switch on the back of the device”, and they press the reset button with the switch sitting directly above it. This is where general IT comes in, the “Wait, this literal statement is literal. It means exactly what it says!”
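
For concreteness, here is a sketch of that “Backup Computer” idea written in Python rather than batch; the drive letter and file list are made-up examples, and the point is only the shape of the script: check the drive, copy the files, and explain any failure in plain language.

import os
import shutil

# Hypothetical files to back up and a hypothetical flash-drive location.
FILES_TO_BACK_UP = [r"C:\Users\me\Documents\notes.txt",
                    r"C:\Users\me\Documents\resume.docx"]
DRIVE = "D:\\"
DESTINATION = os.path.join(DRIVE, "backup")

def main():
    if not os.path.exists(DRIVE):
        # The "can't find drive D:/" moment, stated in plain language.
        print("Can't find drive D:\\ -- is the flash drive plugged in?")
        return
    if not os.path.isdir(DESTINATION):
        os.makedirs(DESTINATION)
    for path in FILES_TO_BACK_UP:
        if os.path.exists(path):
            shutil.copy(path, DESTINATION)
            print("Copied", path)
        else:
            print("Skipped missing file:", path)

if __name__ == "__main__":
    main()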

                Now, let’s try this again: let’s look at programming. First we learn the basic primitives: char, int, string. Next, some simple operations such as +, -, if, else, while, and for. Then we combine these into functions, like in math. We make a whole set of these and call some of them from other functions. We make one special function called “main” and call all the others from it. Congrats, good sir or madam, you have written a program. It was just like learning math and formal language, which both become primitive instruments for learning programming. “Darn, my computer’s running slow”: your general IT knowledge will guide you to discovering the cause of the slowdown. “Hmm, I fixed the slowdown. Now I need to combine these two files”: well, go write a basic program or script to do so.
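
A minimal Python version of that progression (primitives, a couple of simple operations, two small functions, and a “main” that calls the others) might look like the sketch below; the names and numbers are purely illustrative.

def average(a, b):
    """Combine two number primitives with the basic + and / operations."""
    return (a + b) / 2.0

def describe(name, score):
    """Build a string from primitives using an if/else decision."""
    if score >= 50:
        return name + " passed with " + str(score)
    return name + " failed with " + str(score)

def main():
    # The one special function that calls all the others.
    score = average(40, 70)
    print(describe("Alice", score))  # prints: Alice passed with 55.0

if __name__ == "__main__":
    main()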

                Learning to program just moves the complexity of the computer into the software, but it also makes software an understandable, concrete concept instead of a complex, abstract one. These basic learning steps also teach one additional skill that seems lost among many today: problem solving. It’s abnormally common to find someone who can identify a problem but can’t produce a solution. “Hmm, the computer isn’t working”, “Hmm, the car isn’t starting”, “Hmm, the oven isn’t getting hot, and I have company coming over in an hour”, etc. For some of these there is a basic response: “I’ll take the car to the mechanic” or “I’ll order take-out”. But a computer is integrated into everything from your watch and (smart)phone to your furnace, TV, game system (which is a computer in and of itself), car, and nearly all workplace activity, even if it’s just punching in and out; when it isn’t working, that person is stuck. If they say “I’ll call IT” or “I’ll bring it to Best Buy”, it simply becomes another task that may or may not resolve the problem. Knowing programming can solve little tasks and issues in the modern individual’s life while reducing the overall feeling of complexity and difficulty of using a computer or other electronic device. It also provides a buffer between the computer issue and IT, who can then focus on updating devices (which brings bug fixes and newer software) and fixing actual issues instead of “my mouse isn’t working”.

                Go learn to code, I’m not fixing your computer until then.

TQP Development vs. Newegg. Patent Trolls are at it again.

Patent trolls have struck again, this time against newegg.com. The patent in question this time covers RC4, a cryptographic algorithm, when used with SSL. Just like all patent trolls, they did not have the original idea, and they worded the patent in broad terms so that they could claim ownership of the technology. Most of the time, companies just pay off the troll to avoid getting stuck in the trial process. That is not what Newegg did; instead, they pursued the matter and took it to court because of the absurdity of the claim.

Unhackable Systems Are a Myth

Many people believe that there exists a way to make a computer or electronic device unhackable, but in reality what cyber security does is slow down a hacker by putting a large, complicated math problem in the way. Since the algorithms used to encrypt systems are really just hard math problems, a person who has a computer do the calculations will eventually find the solution and pass through the security. Even though stronger constructions exist, such as the one-time pad (and hashes like MD5, though hashing is not the same thing as encryption), the systems used by the public are usually not this secure and would not take as much computing power to break.  
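
A toy way to see the "big math problem" point is to brute-force a deliberately weak cipher. The sketch below uses a single-byte XOR "cipher" in Python, with a made-up key and message; its key space is only 256, so the search ends instantly, while real ciphers rely on key spaces around 2^128, which is what turns the same loop into an impossibly long computation.

SECRET_KEY = 42  # pretend the attacker does not know this

def xor_cipher(data, key):
    """Encrypt or decrypt by XORing every byte with a one-byte key."""
    return bytes(b ^ key for b in data)

ciphertext = xor_cipher(b"attack at dawn", SECRET_KEY)

# The attacker's "large math problem": try every possible key.
for guess in range(256):
    plaintext = xor_cipher(ciphertext, guess)
    if plaintext == b"attack at dawn":  # in practice: test for readable text
        print("key found:", guess)      # prints: key found: 42
        break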

There are two major things currently being researched that could completely cripple current cyber security. The first is the famous unsolved mathematics problem known as P versus NP, one of the six unsolved Millennium Prize Problems. The question behind P versus NP is whether every problem whose solution can be verified quickly can also be solved quickly. Since checking a candidate key against an encrypted message is easy, a constructive proof that P equals NP would mean a computer could also find that key efficiently, and the encryption could be cracked with little trouble. Of course, finding the key is only part of the job; a hacker would still have to write code that applies the solution and gets into the system. Whoever solves P versus NP can earn a million-dollar prize, but in reality, who would sell an undetectable hacking capability for that?  

Something else that could easily overcome current security is the quantum computer. Most security systems are really meant to slow down a hacker or make it difficult to get into a system, but quantum computers could get around many of them because they work on a level far beyond silicon-based machines; Shor's algorithm, for example, would let a large quantum computer quickly factor the huge numbers that widely used encryption such as RSA depends on. One of the main goals of quantum computing is to make a system that is extremely efficient and extremely fast compared to the technology we have today. A hacker with a quantum computer could run the calculations and software needed to break into systems that have state-of-the-art security.  

Many people believe that certain operating systems are unhackable, but in reality that is a myth; hackers simply target the systems that are most widely used by the general public, such as Windows and Mac. There does exist one way to make a system almost unhackable, though. A person would have to get a new computer and keep it disconnected from the internet at all times; it might even be better to take out the network chip (this is what security people call an air-gapped machine). By not connecting to the internet, the user would not be able to download viruses and malicious software onto the system. All updates would have to come from a USB drive whose history the user knows, loaded with updates from official websites. The only way a computer like this could be hacked is if a hacker physically accessed it, but without the internet the files on the computer would be safe. Other than that, any electronic system can be hacked no matter what type of security has been put in place to prevent it.

Bionics' Life-Changing Potential

This is a subject very near and dear to my heart, as it was a brief video of Dean Kamen’s bionic “Luke Arm” that started me down the path of robotics and automation engineering. Bionics is still a budding field, and it may be a while before anything truly substantial makes its way into public use, but should it come to fruition, it has the potential to improve the quality of life of a great many people.
While once the simple dream of science fiction, robotic replacement limbs are slowly but surely becoming a reality, thanks to some rapid advancements in the field of robotics. It’s no easy task trying to fit the components necessary to replicate human movement into the size of a human arm or leg without being too overbearing on the user. I remember the Luke Arm had to use flexible circuits in order to fit within its self-contained restrictions, something that, if not entirely new, was still very early in its use.
For a person who has lost one or more limbs, the goal here is to recreate the functionality they once had. At present, many of the more common bionic prostheses can at least open and close the hand and provide some basic functionality. There is currently much interest in trying to use the user’s own nerve endings from the point of amputation to control the prostheses. In theory, the nerve impulses could be read by the tech inside the prosthesis and used to control it, in a way not dissimilar to how we control our limbs normally. At present, “Scientists at the RIC have developed a procedure, targeted muscle reinnervation (TMR), which reassigns nerves that controlled arms and hands to pectoral muscles.
After surgery, patients are fitted with a prosthetic arm and are given therapy to strengthen their core muscles and training on how to use the arms. Once these nerves are reassigned, people with upper arm amputations are able to control their prosthetic devices by merely thinking about the action they want to perform.”
Not too long ago, I believe, there was a story about a woman who was fully paralyzed and learned to control a robotic arm with only her mind. A probe, surgically inserted into her brain, could read her nerve impulses and send the information to the arm, effectively allowing her to control it.
A number of problems do exist at present, of course; there are a couple of hurdles to overcome yet. Nerve-impulse reading is not easy, nor is it always accurate. The surgically implanted brain probe provides a better read than nerve endings, but it involves invasive cranial surgery, and the brain’s immune defenses could negate or block the probe in only a few months. Additionally, the cost of creating these devices is very high. Bionic prosthetics very likely will not be cost effective for many, many years, until the components involved drop significantly in price. The technology could drastically improve many people’s lives, both now and for countless others in the future. But even so, funding can be very hard to come by.

It’s still a technology that needs development before it gets anywhere close to wider use, but advances and research now can lead to something substantial and possibly life-changing for millions of people. The cyborgs of science fiction may not be too far off. 

Cell Phones


                Smartphones provide us with a constant connection to our social circle, but is this really a good thing? I would argue that cell phones often work both to take our attention away from the things that matter and to turn us into poor conversationalists.  

                With a cell phone one is free to communicate through a bunch of different services like Snapchat, Facebook, and SMS. It is nice to be able to talk to our friends, but what is the value of this interaction? Do people have meaningful conversations through these venues?  I would say the answer is generally no. On apps like Snapchat it is almost impossible to have a conversation; one cannot do much more than send a picture of oneself. There is nothing inherently wrong with this, but dependence on it as a means of expression can be dangerous. One does not want to get into the habit of expressing oneself with a picture and a string of 80 characters. This helps to reinforce the short attention span that is endemic among people today.

                 When communicating via text one can certainly express oneself to a greater degree, but do people take advantage of this? I would once again say that the answer is no. On a cell phone it is incredibly easy to get distracted by the breadth of options one has, and this can lead to inattentive conversations. Personally, my attention becomes seriously fragmented on my cell phone. It is far too easy to switch between concurrent conversations and multiple apps, and this leads to me sending my friends half-baked thoughts which are either off topic or just completely meaningless. I do not give anyone the time they deserve, and thus conversation quickly devolves and stops.

                This can set a dangerous precedent of poor communication. Once people get used to having such frivolous and careless conversations on their phones, what is to stop them from doing the same thing in real life? I sometimes find myself falling into this very trap, and it worries me. Conversation is an important aspect of daily life, and we need to work to keep it from falling prey to short attention spans. I don't mean for this to sound alarmist; obviously cell phones are here to stay, but we need to be sure we are not losing our attention spans to them. I am sure this does not apply to all people: if you are in the habit of giving conversations the attention they deserve, then the medium is unimportant. I just feel that cell phones lend themselves to inattentive communication, and this is something we should all reflect on.

                Cell phones also serve as a distraction when people are interacting in the real world. It is quite common to see a group of friends sitting together silently because everyone is staring at their phone. This is silly, and it is something we should work to stop. People need to pay attention to the world around them. Phones also serve as a crutch: whenever there is an awkward moment, say in an elevator, everyone immediately reaches for their phone. Why not talk to the people around you? Who knows what interesting things they might have to say? But no, everyone needs to stay in their comfort zone and stare at their screens.

                In closing, I have to say that I do not always practice what I am preaching here. I often send people dumb Snapchats and use my phone to escape awkwardness. It is just interesting to think about how phones have affected the way we look at the world around us.


 Did you guys know you can change the color of the words in your blog posts? 




                

The Numbers Game

For years now, critics have been paid to consume some sort of media and report back to us on whether they think it is worth our time and/or money to experience it. Critics are trusted for consistent opinions on their chosen field of media. Sometimes critics even rise to celebrity status, like Roger Ebert and his collaborators Gene Siskel and Richard Roeper. Whether the review is put into print, audio, or video, many people trust the opinions of critics and form their media habits around them. I myself am partial to Bob Chipman's (aka Movie Bob) movie reviews on theescapistmagazine.com. After watching a few of his videos, I found his taste in movies to be similar enough to mine that I could use his reviews as a good baseline for whether or not I would enjoy a movie. I won't agree with his opinion on every movie he reviews, but our opinions are close enough that I can usually judge for myself with his information. Video games are the latest form of media to enter the mainstream, and with them have come the video game reviewers. There are, however, a few quite serious issues with video game reviews and reviewers lately.
First off, I'd like to highlight the issues with the numeric scoring system so often employed in reviews of any medium. Many reviewers use a scale of some sort, giving a score along a range of (for example) 0-5 stars, 1-10, or even 1-100. A main problem with this system is its collision with the grading system of many schools. In school, a 75/100 is considered 'average', 80/100 is 'good', and 90-100/100 is 'great'; any grade below 65/100 is considered a failure. Having this rubric applied to review scores is confusing, and it condenses most game scores into a very narrow range. Ideally, an average score would be 50/100: something that is not great, but not bad, just average. This issue recently put games journalist Jim Sterling into focus. In his review of the latest Batman game (Batman: Arkham Origins), he gave the game a 3.5/10. While the education-based system would mark that as a total and utter failure, Jim merely rated it as somewhat below average. Many of the comments in Reddit's Games section echoed similar feelings about the review score: initially it seemed low, but then readers saw that he backed the number up with the written review (the top three first-level comments are all basically this).
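To make the two readings of the same number concrete, here is a quick, made-up sketch in Python. The verdict labels and cutoffs are purely my own rough interpretation of the two conventions, nothing any outlet actually publishes, but they show how a 3.5/10 reads as a "failure" under the school rubric while sitting at merely "below average" on a scale centered at 50.

def school_verdict(score):
    # Rough school-grade reading of a 0-100 score
    if score >= 90: return "great"
    if score >= 80: return "good"
    if score >= 65: return "average-ish"
    return "failure"

def centered_verdict(score):
    # Hypothetical reading where 50 is a true midpoint
    if score >= 75: return "well above average"
    if score >= 55: return "above average"
    if score >= 45: return "average"
    if score >= 25: return "below average"
    return "well below average"

print(school_verdict(35), "/", centered_verdict(35))
# -> failure / below average   (a 3.5/10, read both ways)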
Secondly, I'd like to take on the issue of Metacritic. Metacritic is a review score aggregator: it takes review scores from many sites on the internet, puts them into a proprietary algorithm, and hands out a Metascore. The Metascore is, once again, on a scale of 1-100. Metacritic further perpetuates the problem from the last paragraph by color coding its scores: green is 75-100, yellow is 50-74, and anything below 50 is red, which means half of the possible ratings are red. The vast majority of games released in 2012 for the PS3, Xbox 360, and Wii U/Wii fell into the yellow category. One of the most mysterious parts of the Metascore is that different sites are given different weights in the calculation, and nobody outside the company knows what those weights are. When a given site scores a game, some sites move the Metascore more than others, and some likely barely affect it at all. Metacritic is also watched closely by developers and publishers. Obsidian, developers of Fallout: New Vegas, were to receive a bonus if the game scored at least 85 on Metacritic; it just missed that target, receiving an 84. Creative Assembly, developers of the Total War franchise, have actually looked at Metacritic scores and cut content they thought would not improve their Metascore.
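Since the aggregation itself came up, here is a minimal sketch of how a weighted Metascore-style average could work. To be clear, Metacritic's actual algorithm and per-site weights are secret; the site names, weights, and scores below are entirely hypothetical, and the color cutoffs are simply the ones described above.

def metascore(scores, weights):
    # Weighted average of 0-100 review scores, rounded to an integer
    total_weight = sum(weights[site] for site in scores)
    weighted_sum = sum(score * weights[site] for site, score in scores.items())
    return round(weighted_sum / total_weight)

def color(score):
    # Color buckets described above: green 75-100, yellow 50-74, red below 50
    if score >= 75: return "green"
    if score >= 50: return "yellow"
    return "red"

# Hypothetical sites, scores, and weights
scores  = {"Site A": 84, "Site B": 70, "Site C": 90}
weights = {"Site A": 1.5, "Site B": 1.0, "Site C": 0.5}

s = metascore(scores, weights)
print(s, color(s))   # -> 80 green

Notice how the heavily weighted Site A pulls the result toward its own score; that opacity is exactly the objection, since the same set of reviews could land in a different color band under weights nobody outside the company can see.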
Personally, I think that reviewers should use and advocate for rating systems in which an average game lands in the middle of the scale rather than in the top quarter. I know that some games writers/journalists are staunchly in favor of getting rid of review scores completely, but I think scores can provide information quickly and effectively if used correctly. Metacritic should also be far less important than it is in many developers' eyes, but with marketing departments being so strong in some of these companies, it would take a lot of work to free development from its influence.

Thursday, November 28, 2013

Technology Gets Old



                Every time a big new electronic or tech product comes out, it is hailed as making life easier and happier. Sure, we are on to the marketing hype and hesitant to take advertisers’ words at face value, but we buy the product anyway. Maybe it doesn’t meet their lofty promises, but surely it must make us somewhat better off, right? Well, no.
                From a psychological standpoint alone, these new purchases are doing nothing positive for us. We buy them to be better off, and sometimes we even are. However, the improvements quickly become routine. A new phone might be amazing for a month or so, each new feature noticed with every use, but before long it is old news, and is used without a second thought. We could be answering emails at double the rate we could before, but now that is the new normal, and no longer impressive. Our memory is very short when it comes to technology, and there is always ennui and a desire for a better version of what we already have.
                Thus, no matter how much more efficient all these gadgets make us, they will never make us happy. Technology is not a field that will ever leave us satisfied. In fact, due to the rapidly changing nature of the field, it is an ever-present cause for desire and discontent. Who cares that the amount of information at our fingertips would dazzle the previous generations? We take it for granted, and our short memories do not allow us the proper perspective to appreciate what we have in relation to what our predecessors did.
                Technology has surely improved our lives greatly, and it is worth pursuing for the leaps in quality of life it can provide. The caveat is that it should not be relied upon to make us happy. In that respect, we are no better off now than ever before. Until we start being able to alter our brain states directly, ushering in a whole new super-fun dystopian world, look to other channels for a sense of fulfillment.