Monday, April 27, 2009
Sunday, April 26, 2009
Due to a creative interpretation of German copyright law, record labels are now able to retrieve personal information from internet service providers and take legal action. One of the most prominent demonstrations of this occurred last week: the popular website Rapidshare was forced to hand over logs of its uploaders' IP addresses. Ranked the 14th most popular site by Alexa, Rapidshare is a dedicated one-click file hosting service. Already, a man has been apprehended for uploading Metallica's new album Death Magnetic to Rapidshare a day before its scheduled release date.
However, this does not apply only to music labels. The movie industry and various other rights holders have also started taking advantage of this new interpretation of the law in Germany. This can extend not only to services like Rapidshare but also to the BitTorrent community in Germany. BitTorrent has been a hot topic lately, especially after the verdict in The Pirate Bay trial. Numerous BitTorrent communities have already closed, most of them voluntarily, out of fear of being prosecuted.
Ironically, Lars Ulrich of Metallica has admitted to downloading one of his own albums over BitTorrent. A former anti-piracy advocate, Ulrich now acknowledges the changing times of the internet, though having only just tried piracy for the first time, he admits he finds the experience strangely bizarre.
In our current age of internet technology, privacy online is just as important as privacy in real life. Not having full control over where our personal information goes is a serious issue, one that is bound to grow larger as the internet progresses. Hopefully, the German law will be revised to better protect online users.
Sand is a difficult terrain for a robot to maneuver, and it demands a different kind of design from the traditional ones. Consider the three most common forms of transportation today: wheels, tracks, and legs. Each of these fails in a sandy environment. Tracks and wheels dig into the sand and freewheel; legs sink deep into it, and movement becomes prohibitively expensive. The authors present an alternative method of movement across sand: a crescent-shaped 'leg' that spins. Six per vehicle, the legs propel the robot in an alternating tripod gait, with two legs on one side and one on the other pushing at a time. The authors tested their vehicle in a clever setup that simulates sand without the mess: they filled a tank with poppy seeds and placed air nozzles on the bottom so they could agitate the 'sand' to just the density required.
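The alternating-tripod stepping described above can be sketched in a few lines of Python. This is only an illustration of the gait schedule, not the authors' actual controller, and the leg numbering is my own assumption:

```python
# Illustrative sketch of an alternating tripod gait for six legs.
# Legs 0-5; tripod A = {0, 2, 4}, tripod B = {1, 3, 5}, so each
# tripod has two legs on one side and one on the other.

TRIPOD_A = (0, 2, 4)
TRIPOD_B = (1, 3, 5)

def gait_schedule(steps):
    """Yield, for each step, the set of legs currently pushing."""
    for step in range(steps):
        yield TRIPOD_A if step % 2 == 0 else TRIPOD_B

for step, pushing in enumerate(gait_schedule(4)):
    print(f"step {step}: legs {pushing} push, the others recover")
```

While one tripod pushes, the other three legs swing forward to recover, which is why the robot always keeps a stable three-point stance.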
The interesting thing about this project is that the authors designed their robot on the bionic principle: having identified the problem of robots failing in sand, they looked to nature for an example. The crab was a particular inspiration, as was the zebra-tailed lizard.
One thing that I would have liked to see addressed in the article is what a robot that is either a sled design or a mono-wheel design would do in such an environment. Humans have used sleds to travel overland for thousands of years, so that might be a good starting point for another robotic design. Perhaps a pair of sleds where one pushes the other forward, and then they switch roles?
As with conventional computers, quantum computers are vulnerable to random noise as they process data. The popular design uses and alters a fundamental unit of quantum mechanics, the atom, to represent data in a meaningful and ultimately useful way. Unfortunately, random noise for a quantum computer can be anything from the heat of the sun to the movement of electrons in the surrounding air.
An immediately practical application of such sensitivity is as the core of an extremely tiny, atom-sized sensor. Such "quantum sensors" would be able to detect natural occurrences several orders of magnitude smaller than what is currently considered detectable. The article mentions, as an example, tiny magnetic waves emanating from the ocean floor that may indicate untapped oil reserves.
The Oxford researchers named their system the "quantum cat", after Schrödinger's thought experiment involving a box, a cat and a vial full of lethal poison. Perhaps the most interesting (and ironic) part of the story is the paradigm shift required in the manufacture of quantum computers.
"Many researchers try to make quantum states that are robust against their environment," said team member Dr Simon Benjamin of Oxford University’s Department of Materials, "but we went the other way and deliberately created the most fragile states possible."
Quantum Cat’s 'Whiskers' Offer Advanced Sensors [Science Daily]
Sunday, April 19, 2009
Most people, however, agree that digital information security is one of the most important issues we face today. Greg Nojeim of the Center for Democracy and Technology stated that the bill is extremely vague and would greatly broaden powers in favor of the government, while others argued that the American public must have their private information protected. In our day and age, our digital selves are just as important as our physical ones. It was pointed out that networks holding people's electric, banking, health, and traffic records are prone to attack. Such attacks, if carried out, would affect the American public as a whole: not only could private information be destroyed or leaked, but trust in the American government would be greatly damaged. Despite his reservations, Nojeim acknowledges the bill's advantages but strongly urges Congress to modify it.
Friday, April 17, 2009
Monday, April 13, 2009
Unfortunately, signal noise has a tendency to introduce errors during computation. Noise can alter and corrupt the state that a bit (and by extension, a qubit) is in, thereby corrupting whatever data that bit represents. A weakness of quantum computing is its reliance on a basic principle of quantum physics: an unknown quantum state cannot be copied, which makes redundancy and repetition (a simple matter in conventional computing) impossible as a method of error correction. Researchers have also found that their current method of error avoidance (as opposed to reactive error correction) cannot be used on its own, dashing any hopes of relying on that method exclusively in future implementations of the quantum computer.
Then again, whatever is determined not to work only narrows down the possibilities for what will, so research continues to advance.
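For contrast, the redundancy trick that conventional computers lean on, and that the no-cloning principle rules out for qubits, is easy to sketch. This is a minimal classical illustration, not quantum code:

```python
# A minimal classical repetition code: store several copies of a bit
# and recover it by majority vote. Quantum computers cannot do this
# directly, because an unknown quantum state cannot be copied.
from collections import Counter

def encode(bit, copies=3):
    """Protect a bit by duplicating it."""
    return [bit] * copies

def decode(copies):
    """Majority vote: correct if fewer than half the copies flipped."""
    return Counter(copies).most_common(1)[0][0]

noisy = encode(1)
noisy[0] ^= 1            # flip one copy to simulate noise
print(decode(noisy))     # prints 1: the original bit survives
```

Because a qubit can't simply be triplicated like this, quantum error correction has to take far more indirect routes, which is part of why the problem is so hard.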
Sunday, April 12, 2009
Recently it was made public that a series of tests is under way to create a laser powerful enough to reproduce the energy of the sun. The National Ignition Facility, located in California, has already tested its full array of 192 lasers. The goal is a laser strong enough to heat a single pea-sized capsule to at least 180 million degrees Fahrenheit. Once that is achieved, they can attempt to recreate a miniature sun usable by humans on Earth, one that could produce limitless energy by safe and environmentally friendly means. The scientists' hope is that the fusion reaction, once started, will produce more energy than is needed to start it.
To do this, however, they are relying on a nuclear reaction occurring once the atoms fuse together, which could potentially blow California off of the United States of America. This seems like a dangerous experiment, but there have been many equally threatening experiments in the past. Although nothing major has been tested yet, the National Ignition Facility has announced that in 2010 it will attempt to focus 500 trillion watts of power on a single pea-sized capsule. Hopefully the experiment will succeed and the capsule will be able to contain the energy produced by the reaction.
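As a rough sanity check on the numbers: 500 trillion watts sounds apocalyptic, but the pulse lasts only billionths of a second, so the total energy delivered is surprisingly modest. The pulse length below is an assumed figure for illustration, not one from the article:

```python
# Back-of-the-envelope: power is energy per unit time, so an enormous
# power over a tiny time gives a modest total energy.
peak_power_w = 500e12      # 500 trillion watts, from the article
pulse_s = 3.6e-9           # assumed nanosecond-scale pulse (not from the article)

energy_j = peak_power_w * pulse_s
print(f"total energy: {energy_j / 1e6:.1f} megajoules")  # prints "total energy: 1.8 megajoules"
```

A couple of megajoules is roughly the kinetic energy of a truck on the highway; the experiment is extreme because that energy arrives in a few nanoseconds, not because the total is huge.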
Sunday, April 5, 2009
This is a review of the book “Programming Principles and Practice Using C++” by Bjarne Stroustrup.
This book is one of the best general programming books I've ever come across. It is fast-paced, and its contents are both fulfilling and informative. Programming Principles and Practice is a complete introduction to the theory behind programming, not just language features.
Written by the inventor of C++, Programming Principles and Practice is not what you might expect. I anticipated a large portion of the text being dedicated to justifying the design decisions made when C++ was created, and to countering some of the more pervasive criticisms. Instead, I found a deft sidestep: Stroustrup simply admits that C++ isn't perfect, and moves on.
The book is designed not to teach C++ but to teach students how to be good programmers (and, by extension, good computer scientists). Stroustrup spends much of the text discussing abstract notions like program design, applications of programming, and testing principles. He uses C++ simply as a way of putting these principles into practice, not as the main focus of the book.
As a seasoned (student) programmer, I found this book delightfully refreshing. Most introductory programming books spend too much time on syntax ('teach yourself in 24 hours' types are a prime example) and too little on why their sample programs are designed the way they are. In contrast, Stroustrup acknowledges what many of his peers do not: the novice programmer can quickly (and quite easily) look up implementation details online. Theory is much harder to look up, so that's what Stroustrup focuses on.
Despite being a theory text, Programming Principles and Practice does not read like a typical scientific or scholarly textbook. The author keeps a friendly tone (note the use of 'we' throughout the text) and offers simple, concise explanations.
All in all, Programming Principles and Practice by Bjarne Stroustrup is an excellent addition to the library of any future computer scientist.
A recent discovery has found that an increase in carbon dioxide (CO2) has led to faster growth in trees across a wide range of species. Over the years, humans will continue to release more CO2 into the atmosphere as our population increases. Although this waste product ultimately contributes to global warming, trees are able to put it to use, to their benefit and ours. Trees convert CO2 into the sugars and proteins they need for growth and development, which are just as important to a tree as water. With more CO2 available, the trees are able to flourish, even as the excess CO2 warms the planet. Not only does this increase the number of trees, it also speeds up their growth. And with more trees in the environment, even more CO2 is "sucked up" and fresh air released in return.
Regardless of this welcome news, however, many scientists say the amount of CO2 being released into the atmosphere is still astronomically high. According to Professor Martin Perry, the trees are removing only a small fraction of the CO2 we release. We release about 50 million tons of gas a year, a rate far too great for the trees to consume. The trees will not solve global warming, or anything near it, but they are helping the environment and slowing global warming down.
Original article: "Trees are growing faster and could buy time to halt global warming." The Telegraph, 5 Apr. 2009. http://www.telegraph.co.uk/earth/environment/climatechange/5109251/Trees-are-growing-faster-and-could-buy-time-to-halt-global-warming.html
Researchers are claiming to have created an actual robot scientist. Much like a human scientist, the robot is capable of reasoning and carrying out experiments on its own. Ross King of Aberystwyth University has christened this robo-scientist Adam. So far, Adam has performed experiments on the metabolism of yeast and has become the first robot to make a scientific discovery: it detected the gene in yeast responsible for producing an amino acid called lysine, which is especially important because it is vital to growth. How did it do this? "The robot, called Adam, was able to work out where an important gene would be located and to develop experiments to prove its theory" (Smith). This discovery, and those expected in the future, is likely to be important in creating new treatments for illnesses. Adam's discovery in particular will be useful in treating fungal diseases such as the common athlete's foot, because the gene responsible for growth can now be identified and disabled. Though robot scientists will without a doubt become important, it is unlikely that they will replace human scientists. Instead, robo-scientists will work alongside humans, used to "carry out large numbers of repetitive tests that in a person would induce boredom and loss of concentration" (Smith) and to record very specific details. A new robo-scientist by the name of Eve is expected to be switched on soon. It is only a matter of time before these robot scientists are no longer a rarity.
For more information:
Smith, Lewis. "Robot scientist." Times Online, 3 Apr. 2009. The Times. Accessed 5 Apr. 2009. http://www.timesonline.co.uk/tol/news/uk/science/article6024880.ece
The Traveling Salesman Problem can best be described as planning a road trip throughout the country, where the driver (no doubt excited to undertake the 1000+ mile journey) needs to decide the route through various landmarks. The passengers wouldn't likely want to visit the same place twice, and they would probably want to find the shortest route through all the landmarks to make the most of their time.
Researchers have noted that the human mind, specifically the portion that deals with spatial reasoning, is good at finding near-optimal solutions to this type of problem in a reasonable amount of time (e.g., the time it takes to plan a trip). Computers, on the other hand, while able to find the optimal solution, take an extraordinary amount of time to find it (factorial time). This is because a straightforward program tests the total distance of every possible route in order to find the shortest one. It's crucial to note that this holds true even for a relatively small number of places to visit.
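The brute-force approach described above can be made concrete in a few lines of Python. This is an illustrative sketch, not code from the experiment:

```python
# Brute-force TSP: try every ordering of the stops, so the work grows
# factorially -- (n-1)! orderings for n stops.
from itertools import permutations
from math import dist

def shortest_tour(points):
    """Return the shortest round trip starting and ending at points[0]."""
    start, rest = points[0], points[1:]
    best_tour, best_len = None, float("inf")
    for order in permutations(rest):              # (n-1)! candidate routes
        tour = (start,) + order + (start,)
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

tour, length = shortest_tour([(0, 0), (0, 1), (1, 1), (1, 0)])
print(length)  # prints 4.0: the perimeter of the unit square
```

With 4 stops there are only 6 orderings to check; with 15 stops there are already over 87 billion, which is why exhaustive search becomes hopeless so quickly.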
Jason Brownlee decided to start an experiment testing just how well a person's spatial reasoning can be used to solve TSP instances. Disguised as a game, the application lets users log in and attempt to find the shortest route through all the points on a given graph. Anyone who finds a shorter route than the one already saved and posted earns points, which rank the user on a scoreboard.