So as pretty much everyone on Earth knows, the new Star Wars movie is coming out this week. As a Star Wars fan, I am thrilled. I love the franchise and I love movies, so this is the best of both worlds for me. But as I talked to my other geeky friends about what the movie might be about, I came across a weird thought.
Aren't all movies the same? They have good guys, bad guys, some sort of confrontation, and someone wins. Now I know this is a problem Hollywood has had for a long time. Even best-selling books follow some sort of structure. However, this thought has led me to another.
What if we can't create anything new? One could argue that even the most basic technologies we created weren't new; they were just used to solve a problem in a different way. For example, fire existed before humans conquered the flame and were able to use it themselves. Same goes for the wheel. The physics of what goes into a simple machine existed before it was put into practice. Humans are able to use things in new ways, but I don't think we can actually create something new.
Let's say that I am wrong and we create some new technology that was never used before and was created from scratch. If you look at the tiniest parts of this technology, it's atoms. Those atoms have existed since the Big Bang. We can't create new atoms unless Newton was just some asshole who made up some laws that mean nothing. Therefore, even the ingredients for those technologies existed; the only thing missing was putting the ingredients together.
This leads me to this conclusion: eventually, if humans never go extinct, we will run out of new inventions and innovations. Everything will have been tried before and nothing will be new. The only way to create something new would be to create a new element or go to a different dimension, both of which are probably impossible.
So can we always create something new? When will we run out of ideas?
Let me know what you think in the comments.
Tuesday, December 15, 2015
Monday, December 14, 2015
The potential consequences of being good or bad at video games
At the beginning of this year, I decided to go to a video game tournament. It was called APEX 2015, and was considered to be one of the largest Super Smash Brothers tournament series of all time. It was in Secaucus, so very close to Hoboken, making it hard to justify not going. Well, the price tag made it hard to actually justify going, and things were further complicated when the venue was switched to Somerset as the original was deemed unusable mid-tournament. I decided that I was committed at that point, and went through a ridiculous series of public transportation adventures to get there and back. In the end, I dropped more than $150 on just this one thing. I figured that I was paying for the experience of being there more than the actual competitive gaming I might be doing.
That aspect, however, did not end well for me. I registered for two games, and was eliminated from one fairly quickly. The second wasn't going to occur until the next morning, and I had already spent too much to make showing up then worth it. I didn't perform nearly well enough to end up on a live stream. In the end, you'd never have a clue that I was actually at the venue until I dug up my part of the tournament bracket to show the matches I was in.
I took a break from tournaments until towards the end of September. There was a Smash Brothers local in the city, and I decided to check it out. I did so 3 times, and they really ended up being experiences that I'd rather forget. I definitely felt pressure being put on my wallet, being nowhere near good enough to land in the prize money part of these tournaments. When the person signing me up misheard my name and wrote down the wrong one despite my appearance at prior events, I decided to call it quits. I was basically a ghost that made weekly donations (of money and victories) to the venue, and after so many of these donations I disappeared. I get the feeling that nobody there remembered me.
The reverse of this began when Rivals of Aether, a Smash-inspired game that almost looks too old-school for its own good, appeared on early access. I found this game to be more comfortable than the actual Smash series. Coming back to the present, I've actually taken first at some online tournaments for this game (and them being free to enter helped as well). If I were to disappear from this scene, people would actually notice. It's interesting to show up in the live streams of people and have them recognize you, or to have a stream of your own that can draw a significant number of viewers. Suddenly, I feel like a core community figure. Being good at the game seems like taking a shortcut towards being recognized, while the struggle of being a donation ghost will still continue for those that are not. Looking at things from the other side, I don't think there's a solution, and am currently not sure if it's even a problem. Individuals not at the top of the ladder can still make it big in their community by helping out in other ways (like hosting their own tournaments), which is a different kind of work. That's taking the long way around, and I think the shortcut is still more satisfying.
Ethics of being a youtuber
So I'm going to preface this blog by saying that I am not a YouTuber, so don't take this article as fact. However, I do have some opinions on YouTubers from a non-YouTuber perspective.
I saw this video from H3H3productions on YouTube, which is basically a channel where this guy Ethan Klein (no, I'm not related) does reaction videos about other YouTubers and videos.
An example of one of my favorites, and what this article is based on, is:
Feed The Homeless Challenge
In the video, Mr. Klein talks about a YouTube challenge that famous YouTubers have been doing. The idea is to give back to the community by giving money or food to the homeless. It sounds like they are being nice and communal. However, many do not do this out of the goodness of their hearts. They're doing small acts of kindness that don't cost them a lot of money and then soaking in all the views from their channel to receive even more money. To me this is very backwards and selfish, which is the exact opposite of the message people see when they watch these videos.
These videos have a lot in common. Most of them are black and white and have sad music playing throughout, like a Sarah McLachlan commercial. The YouTubers are trying to get viewers to feel bad about homeless people while simultaneously showing how awesome they are by helping feed them. It's heartless self-promotion, and I hope that people realize this.
Now I'm not saying the act of feeding the homeless or giving to charity is heartless. However, if you are giving a dollar away only to make thousands off of a video, I don't think you are a saint. In fact, you are probably a piece of shit asshole.
I think in this day and age that YouTubers, especially ones with millions of views per video, have a duty to be transparent with their viewers about their videos. If they want to inspire others to do good in their community, they need to make more of an effort. For example, they could donate all the proceeds they receive from the video to a charity and show proof of it. Maybe this is a little too much, but if the whole point is giving back, then this is how they should do it.
It's crazy to think about how little kids and pre-teens watch all these videos all day and really take a lot to heart. YouTubers need to be transparent and open with what they are doing so people can understand their true motives. It is too easy to lie on the internet and spread misinformation.
It's not even just these challenges that can be an issue of integrity. There are a lot of YouTubers who steal others' videos and give no credit to them. Many do this on their Facebook pages as well to get more views. To me this is insulting and a giant loss of integrity for those that do it.
My biggest problem is that there is nothing I, as a regular person, can do about it. Of course I could go and make YouTube videos about it like Ethan Klein does, but I'm not even sure how much that helps. I agree with pretty much everything he says, but I'm not going out of my way to stop these YouTubers from doing what they are doing.
If you have any suggestions please comment below.
Do TED Talks Matter?
It seems like an incredible idea: professionals from every industry convening at a single conference, freely discussing concepts that cross the boundaries of profession. An opportunity not just for networking, but for learning and evolving ideas.
That’s the perception of the TED conferences, the umbrella term for more than a dozen subtypes of conference run by the Sapling Foundation. Initially a conference aimed at tech professionals, it has since evolved to include people from every field of study. From this has emerged the TED Talk: short presentations on anything from art to science by notable people like Bill Gates, Bill Clinton, and Bill Graham. Since 2006, videos of these have been recorded and posted online, free to access, and this has led to their booming popularity.
However, this popularity doesn’t necessarily equate to importance. TED Talks have the benefit of fame, but do they have value?
At first glance, it would certainly seem so. An open forum to thoughtfully discuss ideas is an amazing tool for communication. There has to be some power in allowing the rich and famous, intellectuals and artists, to speak about their revolutionary ideas to the poor, starving masses.
Oh, wait, sorry. I think I just tipped my hand and gave away my point. I don’t actually think TED Talks are that great.
There are actually a lot of problems with TED Talks. A common criticism is that the conference itself is incredibly elitist. At over $6000 a ticket, the event is only open to people who can afford to pay out of pocket. Not only is it expensive, but horror stories about the experience behind the event are intimidating as well. One speaker, Eddie Huang, has spoken online about how difficult it is to work with the organizers.
TED Talks are also victims of their own success. As they exhaust their lineup of talented speakers – Bill Gates can only speak so many times – the overall quality of the speeches has been decreasing. More often, the talks trend towards the soundbiteable – things that can be chopped up into short, interesting clips. Unfortunately, I can’t give an objective measure of quality. However, I can link two TED Talks that I feel really demonstrate the quality of the conferences. First: The coolest animal you know nothing about... and how we can save it. A painful talk with a worse title. And on top of that: A Beatboxing Lesson from a Father Daughter Duo. If you don’t have time to watch, the talk is eight minutes of beatboxing, and one note that beatboxing started in New York.
That’s not to say that none of the talks have value. In the past, TED Talks have generated some truly insightful discussions. However, I believe that without a major reworking of the conference, and an examination of the principles behind it, things will only get worse.
Computers, Carbon, and Climate
The farther away from our reality a situation or event occurs, the less likely we are to notice it. For this reason, it can be difficult to understand the consequences of using the many technologies present in our everyday lives, including laptops, desktop computers, smartphones, and even less tangible systems such as the internet. In honor of the pledges made by 195 participating countries at COP21 to reduce global emissions, let us examine how our computers indirectly participate in the production of climate-altering greenhouse gases.
The possibility that our planet may soon become hot enough to liquefy the polar caps, raising ocean levels by a sizable amount, is frightening. Warming our climate by just 2 degrees Celsius will guarantee this, although our current course has us set to raise the global temperature by more than 2 degrees. The culprits responsible for the escalating climate are greenhouse gases, a set of gases that lie in our atmosphere, trapping heat emitted from Earth toward space. These gases allow Earth’s surface to remain at a steady, life-supporting temperature; without them, Earth would likely be a desolate, barren rock. The greater the abundance of these gases floating about in our atmosphere, however, the higher the temperature will rise.
25% of greenhouse gas emissions are produced by burning natural gas, coal, and oil to generate the electricity that powers our lights, appliances, computers, and more. Most individuals trying to reduce their carbon footprint (or their electric bill) will begin by limiting excessive use of lighting and air conditioning/heating. I suppose that the computer differs from excessive lighting in that many people find it useful for a variety of things that they would not sacrifice so easily. Those many things, all part of what is known as the information, communication, and technology (ICT) sector, are responsible for 2% of global carbon emissions.
So if we treat computers like excessive lighting and turn them off when they are not being used, this number should fall, right? Yes, but there is a much larger entity at play that we ordinary computer users cannot control: the internet, or more specifically the many data centers spread around the globe which host all of the cloud-based applications we use and all of the web sites we visit. To get an idea of how much power the internet needs in order to function, just imagine the amount of energy consumed by a warehouse filled with stacks upon stacks of high-powered computers running virtually nonstop, and you will have imagined one of thousands of facilities which collaboratively form the internet.
Many companies, such as Apple and Facebook, have taken the initiative to use renewable energy sources to power their data centers, but many remain dependent on electricity obtained from burning natural gas and coal. With our nation’s promise to reduce emissions by 25% relative to 2005 levels, remember to stay mindful not only of excessive lighting, but also of how you use your digital devices.
Final Class and Final Post
I was, to be blunt, frustrated and bored with today’s final session of Computers and Society. It felt like the class was discussing an endless chain of nothings over and over again. Many of the ideas presented about AI and the problem of The Second Machine Age felt, well, half-hearted. One, however, really made an impression on me: the idea of a wage for automatons.
I think the idea of a wage for automatons is absolutely dastardly. The idea is simple: for every job replaced by an artificial intelligence, companies must pay some sort of tax, which would go into a pool and then be redistributed as base income. Future citizens of America could get paid by corporations to be replaced by robots. This would (hopefully) solve the problem of paying for base income and encourage corporations to employ real workers.
It’s clever, but leaves a bad taste in my mouth, and I’m not sure why. When the idea was first presented in class, I had visions of 1950s-era caricatures of communists dancing through my head, wringing their hands and smiling evilly against red and yellow backdrops as they plot the downfall of America. This could just be a knee-jerk reaction against socialism, or it could be my mind telling me that the math doesn’t make sense.
The math doesn’t make sense because all resources are finite. All of them. Eventually we will run out of everything – space to grow food, space for people to live in, clean water, fresh air, fossil fuels. It may take billions of years, but even the sun will eventually wither and die, and the Earth with it. If some other resource depends on a finite resource to work, then it is also finite. (This is why the Internet is inherently not an infinite resource, even though it appears to be at first.) This means, of course, that at some point humanity will reach its maximum population. While I am no economist, it seems as if a guaranteed income would eventually exhaust the finite resources of a country like the United States, sooner rather than later. If people supported entirely by base income had children, who also lived by base income, and their children had children, and their children… The profitability of US corporations would have to rise in parallel with the growing population, presumably infinitely, or the system would crash and burn. It seems more likely and reasonable that the second option would occur. While humanity stays on Earth, there cannot be infinite growth.
Or maybe it doesn’t make sense. While resources are definitely, absolutely finite, human ingenuity doesn’t seem to be. There is no reason we could not find ways to squeeze more and more out of our finite set of resources, ad infinitum. Perhaps someone, or more likely a series of someones, will create a set of technologies that will allow us to farm or even colonize the ocean. Perhaps some sort of vertical farming technology will allow for, for all practical purposes, infinite amounts of food. At that point, the old capitalist notions of jobs and money will become obsolete, and the debate over AI robbing people of jobs will seem very silly, because there will be no more ‘jobs’.
This is the problem with fantasizing about AI: once we are willing to accept one bit of fantasy as a potential reality, there are infinitely many more potential fantasies that could be potential realities, which all stack on top of each other into a twisted, modern Tower of Babel, spiraling wildly into the realm of unsubstantiated nonsense. I felt like our class tried to climb that tower today.
Touring in Support of...
While doing homework this week, I was listening to a KEXP live in-studio music playlist. During one of the set breaks, the radio host asked Tamaryn, the performer, a fairly forgettable question prefaced with "it's been almost two years since you've been here." Tamaryn responded with an equally forgettable answer, prefaced with "yeah, that's because we haven't released an album in two years." This forgettable exchange forced me to ask why artists constantly tour 'in support' of a recently released album, when it seems that we as consumers are consistently told that labels steal all the album revenue, while the artists struggle to make it by touring and selling merchandise.
When I think about successful mega-artists from decades past, I think about platinum singles and signed LPs lining the wall. Today, I see exclusive streaming deals and verified status on Spotify. Are these artists really touring to drum up enough album sales to be Apple Music's next exclusive offering? Is making it onto one of Spotify's hundreds of curated playlists really that much of a monumental stepping stone in an artist's career?
I think it's clear that today's music industry is suffering from an unsolvable dilemma: recorded music is no longer a novelty. Since its inception, recorded music has undergone a plethora of revolutions. We've seen EPs, LPs, cassette tapes, compact discs, mp3 files, and finally, today, we are in the golden age of music streaming services. While all these revolutions have stark differences from their previous iterations, streaming has one that stands out from those before it: consumers no longer need to make a choice. I never need to think about what label put out the most recent Blink-182 record. An exclusive bonus track will never entice the masses to pre-order the next big hip-hop phenom's mix-tape. A latecomer will never again be able to find a "Greatest Hits of Miley Cyrus" collection outside of some poorly constructed playlist posted by an unknown lurker on her subreddit.
The only choices we really have left are live music and merchandise. While I can stream upwards of 20 albums in a day, I'd be hard-pressed to purchase tickets to see 140 artists in a week at every bar in Williamsburg. Artists aren't touring in support of their increasingly devalued albums, but in support of themselves and their live music as art in and of itself. I'm not entirely sure if this is how it should be, and I'm not sure if it's how I want it to be, but as long as artists can continue to support themselves, and I can help them out by enjoying an awesome show, then I know, at least, that I'm not worried.
Distractions and Dice
When I was making my way through The Distraction Addiction earlier in the semester, I kept getting distracted by my own thoughts. I'd stop randomly to consider how things would be relevant to my life, and there was one thing that I kept going back to: how this applies to my gaming sessions with my friends.
Whenever I'm home, I'll try to get some tabletop gaming in with some of my friends, and for a long time distractions have been a problem. Some of the games we play have their rules online, so occasionally “I'm looking up my spells” is a valid excuse, if still equally annoying.
For me, tabletop gaming is about escaping my technology for a few hours and just focusing on the people at the table with me, be they elves, dwarves, twi'leks or early 20th-century businessmen. Even hiking can't provide that same level of limited technology use for me, as I always end up tracking my hike with GPS and taking pictures along the way.
In role-playing games especially, phones are just the outside world's way of leaking into a shared face-to-face experience. Homework, drunk siblings looking for a ride home, girlfriends, and a multitude of other things could all become an issue at any moment or throughout the entire gaming session.
In a better world, my friends and I could all strive to achieve what the author of “The Distraction Addiction” accomplished when they started turning off their internet connection and using simpler, more focused tools. Phones have modes to make sure only important notifications appear, and some even come with an off button for extreme situations. Just like training people not to expect you to respond to email the second you get it, you also have to train people to understand that you need time to yourself, so that you can let yourself turn your phone off.
If I were a cruel person, I would start turning off my router whenever I host and let the Faraday cage that is my house do the rest of the work for me. But just like technological distractions as a whole, each person has to learn how to deal with this problem for themselves.
Phone Envy
At the tail end of my freshman year of college, I finally made the switch from an old slider phone with a tiny qwerty keyboard to the iPhone 4. For a while, I was ecstatic. My phone could now hold all of my music, send and receive pictures that were larger than thumbnails, and give me access to the internet in all of its glory. I enjoyed the luxuries of my phone for about 8 months until one day I dropped it and shattered the screen, rendering it almost useless. The phone still worked, but the broken glass was sharp and I was desperate for a replacement. So I found myself in the valley of the shadow of phone-death, looking for a replacement smartphone for less than 200 dollars. Eventually I found a fellow student who was selling a used iPhone 4, so I jumped on the opportunity to buy it.
This phone suited me well until I was ready for my next upgrade, at which point I bought the HTC One M7. I loved this phone to death. It had front-facing stereo speakers and it was fast as hell compared to the old pre-Siri iPhone I was using. But one day, more than a year before my next upgrade, I somehow managed to drop my phone in the toilet, so that was the end of that phone. I managed to borrow a phone from a friend for a while, but that one was pretty slow and aggravating, and eventually it just died on its own. At that point I was forced to return to the, by then, four-year-old iPhone 4. The only problem is that apps and mobile computing have advanced so far that the iPhone 4 is incapable of running most apps at their full potential, and most things take so long to load that there's almost no point in running them on my phone.
Now I'm not complaining about having a smartphone. It's still a better phone than I started out with as a kid, and most of the features I'm missing out on are luxuries, not necessities. But it seems obvious to me now that the Fear of Missing Out, rather than just being a side effect of our fast-paced consumerist culture, is a marketing strategy used by phone companies and others to scare consumers into buying the newest versions of products. I don't need a better phone, but when I see people using their phones to play interesting games and browse Reddit, I can't help but get jealous that they've got a nicer phone than I do.
At this point I'm used to walking to class in silence, headphones in, because my music app loads so slowly that the music doesn't start playing before I make it to my destination. I can't say that my life has been ruined by a slow phone; that would be far too dramatic. But I can say that being priced out of a nice phone does cause a bit of social anxiety. I'm kind of embarrassed to admit that I'm jealous of people's smartphones; it seems a bit childish. The worst part, though, is that I can tell that's exactly what Verizon wants me to feel. They don't want anyone using the iPhone 4 anymore because it's not making them much money. They want me to think that an iPhone 6S Plus is what will make me happy. But I have a feeling there might be a way to be happy with less than an iPhone 4; it's just hard to think like that in the jungle of contemporary advertising, especially around the holidays.
In the end, I still have a bit of Phone Envy, but I'm more envious of a much smaller subset of phone users. I wish I could be more like the people who still use flip phones and aren't concerned about replacing them with something fancy. It's important to aspire to be rational and smart, and the smartest phone users aren't the ones with smartphones; they're the ones who aren't vulnerable to the advertisements and who still just use their phones to make calls and send the occasional text. In the end, who cares if you can send snaps or check Twitter every five minutes, as long as you have a phone and you can use it to meet up with people in real life?
More smartphones are released every year, and they are becoming more and more intrusive in our daily routines. Maybe we need fewer smartphones and more smart people.
The Knowledge Gap
The modern vision of a dystopian society is dumb. People aren’t afraid anymore of being watched by ‘the Man’ a la 1984. People are afraid of becoming the content, uneducated masses that we’ve had inside ourselves all along. It’s the internalized fear of a generation raised on television at the same time they were told it would rot their brains. Who hasn’t seen a little bit too much of themselves, or their friends, in the movie Idiocracy? Who hasn’t seen a news report decrying the failure of modern education? We’re paranoid, more worried than ever, that maybe society really could become an idiotic dystopia, its culture controlled not by a hostile government or revolution but by a casual slide toward ineptitude.
It’s a ridiculous vision of the future. In reality, the education system is working better than ever. Education standards are getting higher and people are getting smarter, all over the world.
Right?
Well, not quite. We’re actually a bit worse off than we’ve ever been before, and we’re getting worse. Over the last five years, education has become more concentrated in the upper class; people who aren’t wealthy just have to do without. This is creating, right alongside the income gap, a knowledge gap.
The theory is that, like money, education is more prevalent in the upper class than in the middle or lower class. We don’t have a socialist education system.
There are many factors contributing to the knowledge gap. One of them is failing public school systems. While the wealthy can afford to send their children to expensive private schools, the cost of which averages out at around thirty-nine thousand dollars, most people have to be content with public schooling. Private schools are provably more effective than public schools, and the consequence is that people who attend public schools get a worse education. Only those who can afford the expensive tuition, or who have earned a rare scholarship, can attend private schools.
Another factor contributing to the knowledge gap is the lack of infrastructure in underdeveloped countries. Electricity and internet access both play important roles in education. Because of the increasing importance of I.T. skills in skilled labor, people without access to appropriate facilities are at a distinct disadvantage.
Finally, it’s becoming more and more difficult for people to afford college. Even as tuition increases, the total amount of financial aid has decreased. The difference has to be paid out of pocket by students, or covered by loans. Only people with full-ride scholarships and the extremely wealthy can afford to pay for college outright.
There is no real, global solution to the knowledge gap. Like income inequality, it’s a complex problem that can’t be solved with any single measure.
Patent Trolls
The concept of a patent is quite simple: it is a 20-year monopoly on a novel, useful, and nonobvious invention, granting ownership rights to the inventor and criminalizing theft of the invention, in forms such as unauthorized production or use, as if it were any other sort of privately owned property. Internet trolls are provocative antagonists who wreak havoc on internet communities by starting arguments, posting unrelated messages, or generating spam. Patent trolls are not very dissimilar to the internet trolls with whom we have become familiar, except that they, in a sense, “troll” the patent office instead of online forums.
Software has been notoriously difficult to pin down as intellectual property. The differences and similarities between object code, source code, and functionality create a good deal of confusion as to what exactly a software patent covers. Some existing software patents claim ownership of common or broad techniques, such as scanning documents to email, offering an umbrella wide enough for patent trolls to threaten, if not sue, businesses that use these everyday technologies. Accusing businesses of intellectual property infringement and threatening a lawsuit in pursuit of a private settlement or licensing fees is called extortion. Extortion is illegal.
If what these patent trolling companies are doing is illegal, how are they getting away with it? For many small businesses, the cost of a lawsuit would be extremely difficult to bankroll, leaving them no option but to pay the accuser whatever licensing fee or settlement they demand. As explained by John Oliver on Last Week Tonight, 25% of infringement lawsuits are filed in Marshall, TX, because juries selected from the mid-sized Texas town are more likely to side with patent holders.
The patent system’s failure to prohibit this abuse has allowed parasitic patent trolls to feed off of small businesses, a pivotal component of our economy, and has cost investors hundreds of billions of dollars. The more frightening, lasting effect is the formation of an environment hostile to new ideas, creativity, and small businesses, especially those involving software. This theme of hostility and danger to startups over intellectual property claims is quite palpable in the Emmy-nominated series Silicon Valley, which portrays a grave truth about the hardships many startups face, including threats of intellectual property infringement.
Patent trolling is a pure manifestation of greed, achieved through exploitation of the system that supports it. The ramifications of this greed include the suppression of creative ideas and the failure of small businesses out of fear. This is not capitalism; it is evil. To repair the system, legislation reforming the patent office to be more rigid and structured in its granting decisions would need to pass through Congress. However, as John Oliver also notes, trial attorneys lobbying the Senate have successfully prevented, and are likely to continue preventing, such legislation from passing. All of this points to a larger issue in our legal system, but I will leave that for another discussion in a separate venue. For now, the most important thing to understand is that patent trolls are extorting money from many companies using incredibly vague patents, claiming ownership of others’ inventions or operations while producing no novel ideas, inventions, or anything else themselves.
How Our Predictions Inspire Us
For a few weeks now I have been pondering the question: why do we bother predicting the future of technology? History holds a pattern of humans making predictions about future technologies or events, many of which never come to fruition. The lesson is that we are terrible at prediction, yet we love to predict. Perhaps our fear of the unknown fosters a desire to know the future so that we may rest peacefully knowing what lies ahead. Whatever the cause may be, I am more interested in examining the effects of our incessant prognostication.
Our predictions for future technologies have not been wholly inaccurate; in fact, many foreseen inventions have been realized. Certain prescient technologies that appear in Back to the Future, once unobtainable and seemingly ludicrous, now exist and are commercially available to ordinary consumers. Perhaps one of the most astonishing prophecies of the late 20th century, Moore’s Law, has quite accurately predicted that the number of transistors that fit on a dense integrated circuit doubles approximately every two years. Since its formulation in 1965, Moore’s Law has held surprisingly true, leaving experts wondering when, if ever, the rule will falter.
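Moore’s Law lends itself to a quick back-of-envelope sketch. The starting point below (roughly 2,300 transistors on the Intel 4004 in 1971) is an illustrative assumption of mine, not a figure from the discussion above:

```python
# Toy projection of Moore's Law: transistor counts doubling roughly
# every two years, starting from the Intel 4004 (~2,300 transistors, 1971).

def projected_transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Project a transistor count assuming one doubling every `doubling_period` years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# 22 doublings between 1971 and 2015:
print(f"{projected_transistors(2015):,.0f}")  # 9,646,899,200 -- about 9.6 billion
```

That lands within an order of magnitude of real 2015-era chips, which is roughly how well the "law" has tracked reality.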
In the last century we have observed many similar prophecies fulfilled, but also some that were not. Are some predictions simply lucky, or do certain individuals possess a sort of precognitive power? A more likely alternative is that these predictions are the conceptual models that inspire technological advancement. The creative ideas drawn up by filmmakers, science fiction writers, and other technology enthusiasts offer pictures that scientists and engineers can work to realize. It may be naive to credit far-fetched ideas of the past with some of the inventions we use today, but the idea that our speculations subtly influence the future of technology is not entirely dismissible.
Our excitement for exploration and discovery is a powerful force for creativity that should never be dampened. Discussing the future is a worthwhile and entertaining part of our culture with the potential to direct research efforts. So, why do we bother predicting the future of technology? As the saying often attributed to Abraham Lincoln goes, “the best way to predict the future is to create it.” Whether done as a wager or for entertainment, our fantastic speculations serve to foster the creativity and inspiration that drive invention.
Bitcoin's Viability
Bitcoin’s popularity has been growing steadily since its launch in 2009, and it has been adopted by various markets, not limited to those that are internet-based. Its decentralized and community-oriented nature provides a dynamic that is arguably more fair than current nationalized fiat currencies. With Bitcoin standing as a contender for recognition by national bodies, how can we be sure that the system is stable enough to support our vast economic landscape?
Bitcoin’s inner workings are relatively complex and at first seem to be cloaked in gramarye. This may be intimidating to some, but the complexity of the Bitcoin system is trivial compared to that of the dollar. We are predisposed toward the dollar, a currency regulated and controlled by large entities, because it is standard, it is normal, it is what most people use to transfer value and settle debt. One of the beauties of Bitcoin is that one need not know how it works in order to use it, a characteristic it shares with the dollar. The technical details are available for those who are curious, but most users are content with understanding the basic functions of Bitcoin. At a minimum, Bitcoin requires its users to understand how to use a computer or mobile application, which may pose an issue for less tech-savvy individuals but is otherwise a very reasonable requirement.
Another issue with the system is the potential for starvation, or the pooling of the total available bitcoins in the hands of wealthy individuals. When, for instance, one individual owns roughly one percent of Bitcoin’s total value, the rest of the system must react to that concentration of wealth. The potential for individuals to corrupt Bitcoin by causing deflation within the system does exist. And if one entity manages to control more than 50% of the network’s mining power, it gains the ability to “double spend”, or make multiple transactions with the same bitcoins. There are a number of other weaknesses within the system, most of which derive from the rules governing it.
Perhaps the greatest issue, one which many overlook because it is not immediately apparent, is the enormous amount of computation required to ensure the system’s security. Bitcoin mining, in a nutshell, can be described as a race among the network’s fastest and most powerful computers to solve a computationally expensive puzzle, in pursuit of winning the next “block” along with a handsome reward of bitcoins. Many individuals and groups have spent large sums of real-world value to construct hardware designed specifically for solving these puzzles. In an age in which supercomputers allow us to analyze protein folding, simulate the big bang, and model the swine flu for the benefit of humanity, it is slightly unsettling to realize how much of our computing power is channeled into solving otherwise meaningless puzzles for a virtual reward.
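The “race” can be sketched as a toy proof-of-work loop. This is a simplification, not the real Bitcoin protocol (actual mining double-hashes a structured block header, and the real difficulty is astronomically higher); the difficulty here is chosen so the example runs in a fraction of a second:

```python
# Toy proof-of-work sketch: search for a nonce whose SHA-256 hash of the
# block data falls below a target, i.e. starts with enough zero bits.
import hashlib

def mine(block_data: str, difficulty_bits: int = 16):
    """Return (nonce, hex digest) where sha256(block_data + nonce) has
    at least `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine("block #1: alice pays bob 5 BTC")
print(nonce, digest)
```

Raising `difficulty_bits` by one doubles the expected work, which is why the arms race in specialized hardware keeps escalating: the network periodically retunes the target so blocks keep arriving at the same rate no matter how much hashing power joins.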
Despite these issues, Bitcoin has made its way into various markets, inching toward widespread acceptance, with Germany at the forefront of this revolution. Clearly, there are many things to consider before declaring Bitcoin a currency, including whether or not it should be one at all. It would not surprise me, though, to find the virtual network stronger and more intertwined with our current economic system in the near future.
Why Windows 10?
If you have a Windows installation on your computer, you’ve seen the advertisements. Windows 10, the new, free installment of the Windows operating system. It seems like a good deal: a free software upgrade to a newer version of Windows.
But we don’t live in a world where companies provide free upgrades out of the goodness of their hearts. It’s not good business sense. So the question arises: why does Microsoft want to give us Windows 10?
Well, there’s a saying: if you’re not paying for it, you’re the product. That’s the axiom driving the ‘free’ upgrade to Windows 10. The new operating system features new data collection capabilities that are enabled by default. Microsoft is making a play toward Facebook’s business model of collecting and selling data as a commodity.
It’s not a bad way to do business. The collection side is a little underhanded, but the economics are solid. According to a Financial News report, an active user’s data is worth around $4.50 to Facebook. That’s not even counting the wealth of personal user data that an always-on operating system can gather. Apply that number to the 110 million people who have upgraded to Windows 10, and Microsoft has made roughly $500 million on the new upgrade. And that’s not accounting for the fact that the new users are a captive market; there’s no way to downgrade without buying a new copy of Windows.
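The back-of-envelope math above checks out (taking the $4.50-per-user figure and the 110 million installs as given, not independently verified):

```python
# Sanity check of the revenue estimate: assumed per-user data value
# multiplied by the reported number of Windows 10 upgrades.
value_per_user = 4.50            # dollars per active user, the Facebook figure
upgraded_users = 110_000_000     # Windows 10 installs cited above
estimated_value = value_per_user * upgraded_users
print(f"${estimated_value:,.0f}")  # $495,000,000 -> "roughly $500 million"
```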
However, there’s a discrepancy between the projected value of a consumer and the retail value of Windows 10. The software sells for $120 on Microsoft’s online store, while a single consumer is worth less than $5. So where does the extra money come from?
Another way Microsoft makes money off of Windows 10 is through reduced maintenance costs. Maintaining software is expensive, and it’s a stage that can last years. By offering the free upgrade, Microsoft can ensure that the majority of its users switch over to the new operating system immediately. They can then reduce legacy support for old operating systems and, in doing so, save money.